This April, Swami Sivasubramanian, Vice President of Data and Machine Learning at AWS, announced Amazon Bedrock and Amazon Titan models as part of new tools for building with generative AI on AWS. Amazon Bedrock, currently available in preview, is a fully managed service that makes foundation models (FMs) from Amazon and leading AI startups—such as AI21 Labs, Anthropic, Cohere, and Stability AI—available through an API.
Today, I’m excited to announce the preview of agents for Amazon Bedrock, a new capability for developers to create fully managed agents in a few clicks. Agents for Amazon Bedrock accelerate the delivery of generative AI applications that can manage and perform tasks by making API calls to your company systems. Agents extend FMs to understand user requests, break down complex tasks into multiple steps, carry on a conversation to collect additional information, and take actions to fulfill the request.
Using agents for Amazon Bedrock, you can automate tasks for your internal or external customers, such as managing retail orders or processing insurance claims. For example, an agent-powered generative AI e-commerce application can not only respond to the question, “Do you have this jacket in blue?” with a simple answer but can also help you with the task of updating your order or managing an exchange.
For this to work, you first need to give the agent access to external data sources and connect it to existing APIs of other applications. This allows the FM that powers the agent to interact with the broader world and extend its utility beyond just language processing tasks. Second, the FM needs to figure out what actions to take, what information to use, and in which sequence to perform these actions. This is possible thanks to an exciting emerging behavior of FMs—their ability to reason. You can show FMs how to handle such interactions and how to reason through tasks by building prompts that include definitions and instructions. The process of designing prompts to guide the model towards desired outputs is known as prompt engineering.
Introducing Agents for Amazon Bedrock
Agents for Amazon Bedrock automate the prompt engineering and orchestration of user-requested tasks. Once configured, an agent automatically builds the prompt and securely augments it with your company-specific information to provide responses back to the user in natural language. The agent is able to figure out the actions required to automatically process user-requested tasks. It breaks the task into multiple steps, orchestrates a sequence of API calls and data lookups, and maintains memory to complete the action for the user.
With fully managed agents, you don’t have to worry about provisioning or managing infrastructure. You’ll have seamless support for monitoring, encryption, user permissions, and API invocation management without writing custom code. As a developer, you can use the Bedrock console or SDK to upload the API schema. The agent then orchestrates the tasks with the help of FMs and performs API calls using AWS Lambda functions.
Primer on Advanced Reasoning and ReAct
You can help FMs to reason and figure out how to solve user-requested tasks with a reasoning technique called ReAct (synergizing reasoning and acting). Using ReAct, you can structure prompts to show an FM how to reason through a task and decide on actions that help find a solution. The structured prompts include a sequence of question-thought-action-observation examples.
The question is the user-requested task or problem to solve. The thought is a reasoning step that helps demonstrate to the FM how to tackle the problem and identify an action to take. The action is an API that the model can invoke from an allowed set of APIs. The observation is the result of carrying out the action. The actions that the FM is able to choose from are defined by a set of instructions that are prepended to the example prompt text. Here is an illustration of how you would build up a ReAct prompt:
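As a purely illustrative sketch (the APIs, order ID, and wording here are made up, and this is not the exact prompt text Bedrock generates), a ReAct-style prompt could be built up like this:

```
Instructions: You can call the following APIs: get_order(order_id), check_shipment(tracking_id).

Question: Where is my order 4321?
Thought: I need the order details before I can check the shipment.
Action: get_order(order_id=4321)
Observation: Order 4321 shipped with tracking ID TRK-887.
Thought: Now I can look up the shipment status.
Action: check_shipment(tracking_id=TRK-887)
Observation: The package is out for delivery.
Thought: I have all the information needed to answer the question.
Answer: Your order 4321 is out for delivery today.
```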
The good news is that Bedrock performs the heavy lifting for you! Behind the scenes, agents for Amazon Bedrock build the prompts based on the information and actions you provide.
Now, let me show you how to get started with agents for Amazon Bedrock.
Create an Agent for Amazon Bedrock
Let’s assume you’re a developer at an insurance company and want to provide a generative AI application that helps the insurance agency owners automate repetitive tasks. You create an agent in Bedrock and integrate it into your application.
To get started with the agent, open the Bedrock console, select Agents in the left navigation panel, then choose Create Agent.
This starts the agent creation workflow.
- Provide agent details including agent name, description (optional), whether the agent is allowed to request additional user inputs, and the AWS Identity and Access Management (IAM) service role that grants the agent access to the AWS resources it needs.
- Select a foundation model from Bedrock that fits your use case. Here, you provide an instruction to your agent in natural language. The instruction tells the agent what task it’s supposed to perform and the persona it’s supposed to assume. For example, “You are an agent designed to help with processing insurance claims and managing pending paperwork.”
- Add action groups. An action is a task that the agent can perform automatically by making API calls to your company systems. A set of actions is defined in an action group. Here, you provide an API schema that defines the APIs for all the actions in the group. You also must provide a Lambda function that represents the business logic for each API. For example, let’s define an action group called ClaimManagementActionGroup that manages insurance claims by pulling a list of open claims, identifying outstanding paperwork for each claim, and sending reminders to policy holders. Make sure to capture this information in the action group description. The business logic for my action group is captured in the Lambda function InsuranceClaimsLambda. This AWS Lambda function implements methods for the following API calls:
`open-claims`, `identify-missing-documents`, and `send-reminders`.
Here’s a short extract from my InsuranceClaimsLambda:

```python
import json
import time

def open_claims():
    ...

def identify_missing_documents(parameters):
    ...

def send_reminders():
    ...

def lambda_handler(event, context):
    responses = []

    for prediction in event['actionGroups']:
        response_code = ...
        action = prediction['actionGroup']
        api_path = prediction['apiPath']

        # Route the API path chosen by the agent to the matching business logic.
        if api_path == '/claims':
            body = open_claims()
        elif api_path == '/claims/{claimId}/identify-missing-documents':
            parameters = prediction['parameters']
            body = identify_missing_documents(parameters)
        elif api_path == '/send-reminders':
            body = send_reminders()
        else:
            body = "{}::{} is not a valid api, try another one.".format(action, api_path)

        response_body = {
            'application/json': {
                'body': str(body)
            }
        }

        action_response = {
            'actionGroup': prediction['actionGroup'],
            'apiPath': prediction['apiPath'],
            'httpMethod': prediction['httpMethod'],
            'httpStatusCode': response_code,
            'responseBody': response_body
        }

        responses.append(action_response)

    api_response = {'response': responses}
    return api_response
```
Note that you also must provide an API schema in the OpenAPI schema JSON format. Here’s what my API schema file `insurance_claim_schema.json` looks like:

```json
{
  "openapi": "3.0.0",
  "info": {
    "title": "Insurance Claims Automation API",
    "version": "1.0.0",
    "description": "APIs for managing insurance claims by pulling a list of open claims, identifying outstanding paperwork for each claim, and sending reminders to policy holders."
  },
  "paths": {
    "/claims": {
      "get": {
        "summary": "Get a list of all open claims",
        "description": "Get the list of all open insurance claims. Return all the open claimIds.",
        "operationId": "getAllOpenClaims",
        "responses": {
          "200": {
            "description": "Gets the list of all open insurance claims for policy holders",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "claimId": { "type": "string", "description": "Unique ID of the claim." },
                      "policyHolderId": { "type": "string", "description": "Unique ID of the policy holder who has filed the claim." },
                      "claimStatus": { "type": "string", "description": "The status of the claim. Claim can be in Open or Closed state" }
                    }
                  }
                }
              }
            }
          }
        }
      }
    },
    "/claims/{claimId}/identify-missing-documents": {
      "get": {
        "summary": "Identify missing documents for a specific claim",
        "description": "Get the list of pending documents that need to be uploaded by policy holder before the claim can be processed. The API takes in only one claim id and returns the list of documents that are pending to be uploaded by policy holder for that claim. This API should be called for each claim id",
        "operationId": "identifyMissingDocuments",
        "parameters": [
          {
            "name": "claimId",
            "in": "path",
            "description": "Unique ID of the open insurance claim",
            "required": true,
            "schema": { "type": "string" }
          }
        ],
        "responses": {
          "200": {
            "description": "List of documents that are pending to be uploaded by policy holder for insurance claim",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "pendingDocuments": { "type": "string", "description": "The list of pending documents for the claim." }
                  }
                }
              }
            }
          }
        }
      }
    },
    "/send-reminders": {
      "post": {
        "summary": "API to send reminder to the customer about pending documents for open claim",
        "description": "Send reminder to the customer about pending documents for open claim. The API takes in only one claim id and its pending documents at a time, sends the reminder and returns the tracking details for the reminder. This API should be called for each claim id you want to send reminders for.",
        "operationId": "sendReminders",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "claimId": { "type": "string", "description": "Unique ID of open claims to send reminders for." },
                  "pendingDocuments": { "type": "string", "description": "The list of pending documents for the claim." }
                },
                "required": ["claimId", "pendingDocuments"]
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Reminders sent successfully",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "sendReminderTrackingId": { "type": "string", "description": "Unique Id to track the status of the send reminder Call" },
                    "sendReminderStatus": { "type": "string", "description": "Status of send reminder notifications" }
                  }
                }
              }
            }
          },
          "400": {
            "description": "Bad request. One or more required fields are missing or invalid."
          }
        }
      }
    }
  }
}
```
When a user asks your agent to complete a task, Bedrock will use the FM you configured for the agent to identify the sequence of actions and invoke the corresponding Lambda functions in the right order to solve the user-requested task.
- In the final step, review your agent configuration and choose Create Agent.
- Congratulations, you’ve just created your first agent in Amazon Bedrock!
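If you’d rather script these steps than click through the console, a rough SDK equivalent of the agent-creation step could look like the sketch below. This assumes a boto3 bedrock-agent control-plane client with a create_agent operation and an existing IAM role; the operation names, parameters, and model ID here are assumptions that may differ from what the preview exposes.

```python
import boto3

# Assumption: the control-plane client for agents is exposed as "bedrock-agent".
bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

response = bedrock_agent.create_agent(
    agentName="insurance-claims-agent",        # hypothetical agent name
    foundationModel="anthropic.claude-v2",     # hypothetical model choice
    instruction=(
        "You are an agent designed to help with processing insurance claims "
        "and managing pending paperwork."
    ),
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
)

print(response["agent"]["agentId"])
```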
Deploy an Agent for Amazon Bedrock
To deploy an agent in your application, you must create an alias. Bedrock then automatically creates a version for that alias.
- In the Bedrock console, select your agent, then select Deploy, and choose Create to create an alias.
- Provide an alias name and description and choose whether to create a new version or use an existing version of your agent to associate with this alias.
- This saves a snapshot of the agent code and configuration and associates an alias with this snapshot or version. You can use the alias to integrate the agent into your applications.
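For example, a minimal runtime integration could look like the following sketch. It assumes the boto3 bedrock-agent-runtime client and its invoke_agent operation are available to you; the agent ID, alias ID, and region are placeholders, and the exact names may differ in the preview SDK.

```python
import uuid
import boto3

# Assumption: the runtime client for agents is exposed as "bedrock-agent-runtime".
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.invoke_agent(
    agentId="AGENT_ID",             # placeholder: your agent's ID
    agentAliasId="AGENT_ALIAS_ID",  # placeholder: the alias created above
    sessionId=str(uuid.uuid4()),    # keeps conversation context across turns
    inputText="Send reminders to all policy holders with open claims and pending paperwork.",
)

# The response is streamed; concatenate the returned chunks into the final answer.
completion = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        completion += chunk["bytes"].decode("utf-8")

print(completion)
```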
Now, let’s test the insurance agent! You can do this right in the Bedrock console.
Let’s ask the agent to “Send reminder to all policy holders with open claims and pending paperwork.” You can see how the FM-powered agent is able to understand the user request, break down the task into steps (collect the open insurance claims, look up the claim IDs, send reminders), and perform the corresponding actions.
Agents for Amazon Bedrock can help you increase productivity, improve your customer service experience, or automate DevOps tasks. I’m excited to see what use cases you will implement!
Learn the Fundamentals of Generative AI
If you’re interested in the fundamentals of generative AI and how to work with FMs, including advanced prompting techniques and agents, check out this new hands-on course that I developed with AWS colleagues and industry experts in collaboration with DeepLearning.AI:
Generative AI with large language models (LLMs) is an on-demand, three-week course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. It’s the perfect foundation to start building with Amazon Bedrock. Enroll in Generative AI with LLMs today.
Sign up to Learn More about Amazon Bedrock (Preview)
Amazon Bedrock is currently available in preview. Reach out to us if you’d like access to agents for Amazon Bedrock as part of the preview. We’re regularly providing access to new customers. Visit the Amazon Bedrock Features page and sign up to learn more about Amazon Bedrock.
— Antje