AI & Automation · April 29, 2026 · 12 min read

OpenAI Flagship Models on AWS: GPT-5.5, Codex & Managed Agents on Amazon Bedrock

OpenAI's GPT-5.5, GPT-5.4, Codex, and Managed Agents are now available on Amazon Bedrock in limited preview. Backed by a $50B Amazon investment, this guide covers enterprise security, the Stateful Runtime Environment, pricing, and architecture patterns for AWS-native teams.

Lushbinary Team

AI & Cloud Solutions

On April 28, 2026 — just one day after Microsoft's exclusivity deal with OpenAI officially ended — AWS launched OpenAI's flagship models on Amazon Bedrock. GPT-5.4, GPT-5.5, Codex, and a new category of Managed Agents are now accessible through the same Bedrock APIs that millions of organizations already use for model inference, fine-tuning, and orchestration.

This isn't just another model provider joining Bedrock. It's backed by a $50 billion Amazon investment in OpenAI, a multi-year infrastructure deal spanning 2 gigawatts of Trainium capacity, and the exclusive third-party cloud distribution rights for OpenAI Frontier — the enterprise agent platform. For AWS-native teams, this eliminates the friction of managing a separate OpenAI vendor relationship while keeping data governance, IAM, and compliance controls intact.

This guide covers what's available today, how the integration works, pricing considerations, the Stateful Runtime Environment for agents, and practical patterns for adopting OpenAI models within your existing AWS architecture.

What This Guide Covers

  1. The Partnership: What Was Announced
  2. Models Available on Bedrock
  3. Enterprise Security & Governance
  4. Codex on Amazon Bedrock
  5. Bedrock Managed Agents Powered by OpenAI
  6. The Stateful Runtime Environment
  7. OpenAI Frontier on AWS
  8. Pricing & Cost Optimization
  9. Architecture Patterns for Adoption
  10. Getting Started: Access & Preview Registration
  11. Why Lushbinary for Your AI Integration

1. The Partnership: What Was Announced

The AWS-OpenAI partnership announced on April 28, 2026 is one of the largest AI infrastructure deals in history. Here's what it includes:

  • $50 billion Amazon investment in OpenAI ($15B initial, $35B conditional)
  • OpenAI models on Bedrock — GPT-5.4 and GPT-5.5 available in limited preview
  • Codex on Bedrock — OpenAI's coding agent accessible via AWS credentials
  • Bedrock Managed Agents powered by OpenAI for production-ready agent deployment
  • Stateful Runtime Environment — co-developed persistent execution layer for agents
  • AWS as exclusive third-party cloud distributor for OpenAI Frontier
  • $100B infrastructure expansion over 8 years, including 2GW of Trainium capacity
  • Custom models developed jointly for Amazon's customer-facing applications

⏱️ Timeline Context

Microsoft's exclusivity with OpenAI ended on April 27, 2026. AWS launched OpenAI on Bedrock the very next day, April 28 — indicating this integration was prepared well in advance. The $38B compute agreement was originally announced in November 2025.

2. Models Available on Bedrock

The following OpenAI models are available in limited preview on Amazon Bedrock as of April 29, 2026:

Model | Strengths | Context | Status
GPT-5.5 | Agentic workflows, coding, omnimodal | 1M tokens | Preview
GPT-5.4 | Reasoning, computer use, tool search | 1M tokens | Preview
GPT-5.4 mini | Cost-efficient coding, subagents | 1M tokens | Coming soon

GPT-5.5 is OpenAI's most capable model, launched on April 23, 2026. It scores 93.6% on GPQA Diamond, 82.7% on Terminal-Bench 2.0, and 78.7% on OSWorld-Verified. GPT-5.4 remains the more cost-effective option with native computer use, tool search (reducing token usage by 47%), and strong reasoning capabilities.

Both models are accessed through the standard Bedrock InvokeModel and Converse APIs — no new SDK or endpoint configuration required. You select the model ID just as you would for Claude, Llama, or Nova models.
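As a concrete sketch of what model selection could look like with the Converse API — note that the `openai.gpt-5-5` model ID is a placeholder assumption until AWS publishes the final identifiers for GA:

```typescript
// Sketch only: "openai.gpt-5-5" is a placeholder model ID — check the
// Bedrock console Model Access page for the exact identifier in preview.
type ConverseRequest = {
  modelId: string;
  messages: { role: "user" | "assistant"; content: { text: string }[] }[];
  inferenceConfig?: { maxTokens?: number; temperature?: number };
};

// Build a standard Bedrock Converse request; the shape is the same one used
// for Claude, Llama, and Nova models today.
function buildConverseRequest(modelId: string, prompt: string): ConverseRequest {
  return {
    modelId,
    messages: [{ role: "user", content: [{ text: prompt }] }],
    inferenceConfig: { maxTokens: 4096 },
  };
}

// With the existing AWS SDK this would be sent unchanged, e.g.:
//   import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";
//   await client.send(new ConverseCommand(buildConverseRequest("openai.gpt-5-5", "...")));
const req = buildConverseRequest("openai.gpt-5-5", "Summarize our Q1 metrics.");
```

The point is that switching to an OpenAI model is a one-line change to `modelId`, not a new integration.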

3. Enterprise Security & Governance

The primary value proposition for running OpenAI models through Bedrock rather than the OpenAI API directly is enterprise governance. OpenAI models on Bedrock inherit the full AWS security stack:

IAM Authentication

Use existing AWS IAM roles and policies. No separate API keys to manage or rotate.

AWS PrivateLink

Keep inference traffic off the public internet. Data never leaves your VPC boundary.

Bedrock Guardrails

Apply content filtering, PII redaction, and topic blocking policies uniformly across all models.

CloudTrail Logging

Full audit trail of every inference call. Meet SOC 2, HIPAA, and FedRAMP compliance requirements.

Encryption

Data encrypted at rest and in transit with AWS KMS. Customer-managed keys supported.

VPC Endpoints

Interface VPC endpoints for Bedrock ensure zero exposure to the public internet.

For regulated industries (healthcare, finance, government), this is the key differentiator. You get OpenAI's frontier intelligence without exposing data to a third-party API endpoint or managing a separate vendor security review.
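To make the IAM point concrete, here is a minimal policy sketch that scopes a role to invoking a single model. The resource ARN follows Bedrock's standard foundation-model format, but the `openai.gpt-5-5` model ID is an assumption until the preview identifiers are published:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInvokeSingleOpenAiModel",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5-5"
    }
  ]
}
```

Because access is an IAM policy rather than an API key, you can attach it to a role, scope it per team or per agent, and audit every use through CloudTrail.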

4. Codex on Amazon Bedrock

OpenAI's Codex — the autonomous coding agent that can write, debug, refactor, and ship code — is now available on Bedrock in limited preview. This brings AI-powered software development directly into AWS environments where enterprise teams already build.

Key integration points:

  • AWS credential authentication — no separate OpenAI account needed
  • Inference through Bedrock — all Codex operations run via Bedrock APIs
  • Available via Codex CLI, desktop app, and VS Code extension
  • Usage counts toward AWS cloud commitments — use existing EDPs and savings plans

For teams already using Codex subagents for autonomous coding, the Bedrock integration means you can run the same workflows with enterprise-grade security controls and consolidated billing.

5. Bedrock Managed Agents Powered by OpenAI

Bedrock Managed Agents is a new offering that combines OpenAI's frontier models with AWS's agent infrastructure to deploy production-ready AI agents quickly. Each agent gets:

  • Its own identity — scoped IAM permissions, not shared credentials
  • Action logging — every tool call, decision, and output is auditable
  • Your environment — agents run in your AWS account, not OpenAI's
  • OpenAI agent harness — optimized for faster execution, sharper reasoning, and reliable steering of long-running tasks
  • AgentCore integration — default compute environment provided by Bedrock AgentCore

Managed Agents works with Amazon Bedrock AgentCore, which provides the compute runtime, tool execution, memory management, and policy enforcement layer. This is the same AgentCore platform announced at AWS re:Invent 2025.

🎤 AWS re:Invent 2025 Update

Amazon Bedrock AgentCore was first announced at re:Invent 2025 as the production-ready platform for building and operating AI agents at scale. It provides memory, identity, policy enforcement, and tool execution layers. The OpenAI Managed Agents offering builds directly on this foundation, adding OpenAI's frontier models and agent harness to the AgentCore runtime.

6. The Stateful Runtime Environment

The most architecturally significant piece of this partnership is the Stateful Runtime Environment — a jointly developed execution layer that solves the biggest pain point in production AI agents: state management.

Traditional agent architectures are stateless. Each API call starts fresh, forcing developers to rebuild context through prompt engineering, external databases, and brittle orchestration code. The Stateful Runtime Environment changes this by providing:

  • Persistent memory — agents retain context across sessions and steps
  • Compute access — agents can execute code, access filesystems, and run tools
  • Identity management — each agent has scoped permissions and audit trails
  • Cross-tool continuity — agents move between software tools and data sources without losing context
  • Long-running project support — handle workflows that span hours or days
[Diagram: Stateful Runtime Environment on Bedrock — OpenAI frontier models (GPT-5.5 · GPT-5.4 · Codex) run on the Stateful Runtime Environment (persistent memory, compute & tools, identity & policy, cross-tool continuity, long-running workflow support), which sits on Amazon Bedrock AgentCore (IAM · PrivateLink · Guardrails · CloudTrail · KMS encryption), all inside your AWS account (VPC · data · services).]

The Stateful Runtime Environment is expected to launch in the coming months. It will be optimized to run on AWS infrastructure and integrated with Bedrock AgentCore, so agents operate cohesively alongside the rest of your AWS applications and infrastructure.
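To make the stateless-vs-stateful distinction concrete, here is a purely conceptual sketch — the runtime's actual API has not been published, so nothing below is a real Bedrock interface. It models what developers currently simulate with external databases and prompt-stuffing: a session whose memory carries forward across steps.

```typescript
// Conceptual illustration only — not a real Stateful Runtime Environment API.
// A stateless API call would start with empty context each time; a stateful
// session accumulates it automatically.
class AgentSession {
  private memory: string[] = [];

  // Record an observation and return the full context available to this step.
  step(observation: string): string[] {
    this.memory.push(observation);
    return [...this.memory];
  }
}

const session = new AgentSession();
session.step("User wants a cost report for April.");
const context = session.step("Found 3 Bedrock invocations in CloudTrail.");
// context now holds both observations — the second step sees what the first
// learned without any external store or re-prompting.
```

The promised runtime extends this idea beyond a single process: memory, tool state, and identity persist across sessions, machines, and days-long workflows.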

7. OpenAI Frontier on AWS

OpenAI Frontier is the enterprise platform for building, deploying, and managing teams of AI agents at scale. Launched on February 5, 2026, it treats AI agents as "AI coworkers" that can be onboarded, assigned identities, granted scoped permissions, and continuously evaluated.

AWS is the exclusive third-party cloud distribution provider for Frontier. This means:

  • Enterprises can access Frontier through their existing AWS accounts
  • Frontier agents run on AWS infrastructure with full governance controls
  • No need to establish a separate vendor relationship with OpenAI
  • Usage can be applied toward AWS cloud commitments (EDPs, savings plans)

Frontier enables organizations to deploy teams of agents that operate across real business systems — Salesforce, Workday, internal data warehouses — with shared context, built-in governance, and enterprise-grade security. The AWS distribution means these agents inherit the same compliance posture as the rest of your cloud workloads.

💡 Key Distinction

Bedrock Managed Agents is for building individual agents with OpenAI models. OpenAI Frontier is for orchestrating teams of agents across enterprise systems. Both are available through AWS, but they serve different scales of deployment.

8. Pricing & Cost Optimization

Bedrock-specific pricing for OpenAI models has not been publicly disclosed during the limited preview. However, here's what we know about the cost structure:

Model | Input (per 1M tokens) | Cached Input | Output (per 1M tokens)
GPT-5.5 | $5.00 | $0.50 | $30.00
GPT-5.4 | $2.50 | $0.25 | $15.00
GPT-5.4 mini | $0.75 | $0.075 | $4.50

Pricing shown is OpenAI's direct API pricing as of April 2026. Bedrock pricing may differ. Source: OpenAI API Pricing

The critical cost advantage of running OpenAI through Bedrock:

  • Apply to existing commitments — usage counts toward AWS EDPs and savings plans
  • Consolidated billing — one invoice, one vendor relationship, one procurement process
  • Prompt caching — 90% discount on cached inputs reduces costs for repetitive workflows
  • Model routing — use Bedrock's model selection to route simple tasks to cheaper models (Nova, GPT-5.4 mini) and complex tasks to GPT-5.5
  • No separate OpenAI billing — eliminates the overhead of managing a second AI vendor
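As a back-of-envelope illustration of how prompt caching changes the math, the sketch below uses OpenAI's direct API prices from the table above; Bedrock preview pricing has not been disclosed and may differ:

```typescript
// Prices per 1M tokens, from OpenAI's direct API price list (April 2026).
// Bedrock GA pricing is not yet published and may differ.
const PRICES = {
  "gpt-5.5": { input: 5.0, cachedInput: 0.5, output: 30.0 },
  "gpt-5.4": { input: 2.5, cachedInput: 0.25, output: 15.0 },
  "gpt-5.4-mini": { input: 0.75, cachedInput: 0.075, output: 4.5 },
} as const;

type Model = keyof typeof PRICES;

// cachedTokens is the portion of inputTokens served from the prompt cache.
function estimateCostUsd(
  model: Model,
  inputTokens: number,
  cachedTokens: number,
  outputTokens: number,
): number {
  const p = PRICES[model];
  const freshTokens = inputTokens - cachedTokens;
  return (
    (freshTokens * p.input +
      cachedTokens * p.cachedInput +
      outputTokens * p.output) /
    1_000_000
  );
}

// A repetitive workflow: 100k-token prompt, 80% cache hit rate, 5k output.
const perCall = estimateCostUsd("gpt-5.4", 100_000, 80_000, 5_000);
// 20k fresh input + 80k cached input + 5k output ≈ $0.145 per call,
// versus $0.325 with no caching at all.
```

At the 90% cached-input discount, workflows that reuse a large system prompt or shared context see most of their input cost disappear, which is why caching belongs in any cost model before you compare providers.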

9. Architecture Patterns for Adoption

Here are three practical patterns for integrating OpenAI models into your existing AWS architecture:

Pattern 1: Multi-Model Routing

Use Bedrock's unified API to route requests to the optimal model based on task complexity. Simple classification goes to Nova Lite, coding tasks to GPT-5.4, and complex reasoning to GPT-5.5 or Claude Opus 4.7.

import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Route to OpenAI GPT-5.4 on Bedrock
const response = await client.send(new InvokeModelCommand({
  modelId: "openai.gpt-5-4",  // Model ID format TBD in GA
  contentType: "application/json",
  accept: "application/json",
  body: JSON.stringify({
    messages: [{ role: "user", content: "Analyze this codebase..." }],
    max_tokens: 4096,
  }),
}));

// InvokeModel returns the payload as bytes; decode before parsing
const result = JSON.parse(new TextDecoder().decode(response.body));
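The routing decision itself is worth keeping as a small, testable function separate from the SDK call. The model IDs below are placeholders (Nova Lite's is today's Bedrock identifier; the OpenAI IDs are assumptions until GA):

```typescript
// Placeholder model IDs — confirm the final identifiers in the Bedrock console.
type TaskKind = "classification" | "coding" | "reasoning";

// Pick the cheapest model that can handle the task class; escalate only when
// the task demands it.
function pickModelId(task: TaskKind): string {
  switch (task) {
    case "classification":
      return "amazon.nova-lite-v1:0"; // cheap and fast for simple tasks
    case "coding":
      return "openai.gpt-5-4"; // strong coding at mid-tier pricing
    case "reasoning":
      return "openai.gpt-5-5"; // most capable, most expensive
  }
}
```

Because Bedrock exposes every provider behind the same API, the router returns nothing but a string — no per-provider clients, credentials, or request shapes to branch on.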

Pattern 2: Agent with Enterprise Data Access

Deploy a Managed Agent powered by OpenAI that connects to your existing data sources (RDS, DynamoDB, S3) through Bedrock AgentCore's tool execution layer. The agent authenticates with its own IAM role and logs every action to CloudTrail.

Pattern 3: Gradual Migration from OpenAI API

If you're currently using the OpenAI API directly, migrate incrementally:

  1. Start with non-critical workloads (internal tools, dev environments)
  2. Validate that Bedrock's response format matches your existing parsing logic
  3. Move production workloads once GA pricing is confirmed
  4. Consolidate billing and retire the separate OpenAI account
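Step 2 above is where migrations usually break. A thin normalization layer lets you swap backends without touching downstream parsing — note the response shapes below are illustrative assumptions, not documented formats; verify actual payloads from both APIs before relying on them:

```typescript
// Illustrative response shapes only — confirm against real payloads from the
// OpenAI API and the Bedrock Converse API before production use.
type OpenAiStyle = { choices: { message: { content: string } }[] };
type BedrockConverseStyle = {
  output: { message: { content: { text: string }[] } };
};

// Extract the assistant's text regardless of which backend produced it, so
// downstream parsing code never changes during the migration.
function extractText(resp: OpenAiStyle | BedrockConverseStyle): string {
  if ("choices" in resp) {
    return resp.choices[0].message.content;
  }
  return resp.output.message.content.map((c) => c.text).join("");
}
```

With this seam in place, steps 1–3 become a configuration change per workload rather than a rewrite of every call site.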

10. Getting Started: Access & Preview Registration

OpenAI models on Bedrock are currently in limited preview. Here's how to get access:

  1. Register for the preview — visit the OpenAI on Amazon Bedrock page and sign up for early access
  2. Ensure Bedrock access — your AWS account must have Amazon Bedrock enabled in a supported region
  3. Request model access — once approved, request access to OpenAI models through the Bedrock console (Model Access page)
  4. Use existing SDKs — the AWS SDK for JavaScript (v3), Python (boto3), and other languages already support Bedrock's InvokeModel API

General availability is expected in the coming weeks, according to AWS CEO Matt Garman's announcement at the "What's Next with AWS" event in San Francisco.

📺 Recommended re:Invent Session

Learn how to architect scalable and secure agentic AI with Bedrock AgentCore — the same platform that powers OpenAI Managed Agents on AWS.

Watch: Architecting Scalable Agentic AI with Bedrock AgentCore (AIM431) →

11. Why Lushbinary for Your AI Integration

Integrating OpenAI models into an existing AWS architecture requires more than just calling a new model ID. You need to design model routing strategies, implement guardrails, manage costs across multiple model providers, and ensure compliance controls are properly configured.

Lushbinary specializes in exactly this kind of work:

  • Multi-model architecture design — we build routing layers that select the optimal model (OpenAI, Claude, Nova) per task
  • AWS-native AI integration — Bedrock, SageMaker, Lambda, and Step Functions orchestration
  • Enterprise security implementation — PrivateLink, guardrails, IAM policies, and compliance controls
  • Cost optimization — prompt caching, model routing, and commitment planning to minimize spend
  • Agent development — production-ready agents with proper observability, error handling, and human-in-the-loop patterns

🚀 Free Consultation

Want to integrate OpenAI models into your AWS infrastructure? Lushbinary will assess your current architecture, recommend a multi-model strategy, and give you a realistic implementation timeline — no obligation.

❓ Frequently Asked Questions

Which OpenAI models are available on Amazon Bedrock?

As of April 2026, GPT-5.4 and GPT-5.5 are available in limited preview on Amazon Bedrock. OpenAI Codex and Bedrock Managed Agents powered by OpenAI are also available in preview.

How much does OpenAI on AWS Bedrock cost?

Bedrock pricing for OpenAI models has not been publicly disclosed yet (limited preview). Direct OpenAI API pricing is $2.50/$15.00 per 1M tokens for GPT-5.4 and $5.00/$30.00 for GPT-5.5. Bedrock usage can be applied toward existing AWS cloud commitments.

What is the Stateful Runtime Environment for agents on Bedrock?

The Stateful Runtime Environment is a jointly developed execution layer by AWS and OpenAI that provides persistent memory, context continuity across sessions, and secure orchestration for multi-step agent workflows running on Amazon Bedrock.

Can I use my existing AWS credentials to access OpenAI models?

Yes. OpenAI models on Bedrock inherit AWS enterprise controls including IAM authentication, AWS PrivateLink, guardrails, encryption, and CloudTrail logging. You authenticate with your existing AWS credentials.

What is OpenAI Frontier and how does it relate to AWS?

OpenAI Frontier is an enterprise platform for building, deploying, and managing teams of AI agents. AWS is the exclusive third-party cloud distribution provider for Frontier, meaning enterprises can access it through their existing AWS infrastructure.

Sources

Content was rephrased for compliance with licensing restrictions. Partnership details sourced from official AWS and OpenAI announcements as of April 29, 2026. Pricing and availability may change — always verify on the vendor's website.

Integrate OpenAI on AWS with Lushbinary

We'll design your multi-model architecture, implement enterprise security controls, and get you running OpenAI models on Bedrock — with proper cost optimization from day one.

Ready to Build Something Great?

Get a free 30-minute strategy call. We'll map out your project, timeline, and tech stack — no strings attached.

Let's Talk About Your Project

Contact Us

Tags: OpenAI · Amazon Bedrock · AWS · GPT-5.5 · GPT-5.4 · Codex · AI Agents · Managed Agents · Stateful Runtime · Enterprise AI · Cloud AI · OpenAI Frontier