Bedrock Brief 17 Sep 2025

Welcome to this week's Bedrock Brief, where we're diving into the AI-powered future faster than you can say "Hey Alexa, what's for dinner?"

AWS is doubling down on its agentic AI ambitions, hiring two heavyweight executives to bolster its developer tools and infrastructure for intelligent agents. David Richardson is back as VP of AgentCore, while UC Berkeley professor Joe Hellerstein joins as VP and Distinguished Scientist to advance Kiro, AWS's agentic IDE. It's like getting the band back together, but this time they're playing AI symphonies. Read more about the executive hires here.

Meanwhile, AWS and Riot Games are teaming up for the Rift Rewind Hackathon, challenging developers to create personalized AI agents for League of Legends players. It's like having your own digital coach, minus the whistle and clipboard. Participants will use AWS AI services to transform gameplay data into intelligent insights, potentially uncovering that your true calling was support all along (sorry, mid-laners).

In other news, Oracle is making waves in the cloud space, with Larry Ellison eyeing AWS's crown. Who knew the database dinosaur had such spring in its step? As AI infrastructure demands skyrocket, Oracle's late-but-determined cloud push might just pay off. It's a reminder that in the AI race, it's not about who starts first, but who adapts fastest. Now, let's dive into this week's AI developments that are sure to make your neurons dance...

Fresh Cut

  • Amazon Lex expands its AI-powered natural language understanding to eight new languages, enabling chatbots to better handle complex requests and extract key information from user inputs. Read announcement →
  • Amazon's R8i and R8i-flex instances, offering up to 15% better price-performance and 2.5x more memory bandwidth than previous generations, are now available in Asia Pacific and Europe regions, providing developers with powerful options for memory-intensive workloads. Read announcement →
  • SageMaker HyperPod's new health monitoring agent automatically detects and replaces faulty nodes in Slurm clusters, helping ML teams train large models for weeks without manual intervention or disruption. Read announcement →
  • Amazon OpenSearch Service enables AI-powered forecasting on time-series data, helping developers predict trends in infrastructure, application metrics, and website traffic without requiring data science expertise. Read announcement →
  • OpenSearch 3.1 brings faster indexing, improved vector search, and a new Search Relevance Workbench, enabling developers to build more efficient and accurate AI-driven search applications. Read announcement →
  • Deploy custom Meta Llama 3.3 models on-demand in Amazon Bedrock, paying only for actual usage and avoiding always-on infrastructure costs. Read announcement →
  • SageMaker notebooks offer powerful P6-B200 instances with 8 NVIDIA Blackwell GPUs, enabling developers to interactively fine-tune large AI models and experiment with generative AI applications directly in JupyterLab or Code Editor. Read announcement →
  • VS Code users can now connect directly to Amazon SageMaker Unified Studio, enabling developers to use their familiar local setup while tapping into SageMaker's powerful cloud resources for AI/ML and analytics workflows. Read announcement →
  • Amazon ECS now offers AI-powered assistance for creating and editing container task definitions, helping developers write JSON faster with inline chat, code suggestions, and autocomplete features. Read announcement →
  • AWS IoT SiteWise's anomaly detection can now automatically retrain models on a schedule, helping developers keep equipment monitoring up-to-date without manual intervention. Read announcement →
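On the custom-model item above: invoking an on-demand custom model through the Bedrock runtime looks much like invoking a base model, with the cost difference coming from the deployment mode rather than the API call. A minimal sketch of building the request body, assuming Meta's Llama text-generation schema on Bedrock (`prompt`, `max_gen_len`, `temperature`); the boto3 call in the comment is illustrative and `custom_model_arn` is a placeholder:

```python
import json

def build_llama_request(prompt: str, max_gen_len: int = 512,
                        temperature: float = 0.5) -> str:
    """Build the JSON body for a Llama-style text-generation request on Bedrock.

    Field names follow Meta's Llama schema on Bedrock; verify against the
    current model documentation before relying on them.
    """
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

# With boto3, the body would go to the Bedrock runtime, roughly:
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(modelId=custom_model_arn,
#                       body=build_llama_request("Hello"))
# where custom_model_arn points at the on-demand custom model deployment.
body = build_llama_request("Summarize this match history:")
```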

The Quarry

Unified multimodal access layer for Quora’s Poe using Amazon Bedrock

Quora's Poe system just got a major upgrade thanks to a collaboration with AWS that created a unified wrapper API for Amazon Bedrock foundation models. This nifty bit of engineering bridges Poe's event-driven Server-Sent Events (SSE) protocol with Bedrock's REST APIs, allowing for seamless integration and multimodal capabilities. The cherry on top? A template-based configuration system that slashed deployment time from days to mere minutes, proving that sometimes the best AI innovations are about making existing tech play nice together. Read blog →
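The bridge pattern described above boils down to a thin translation layer: each chunk from a Bedrock response stream is re-emitted as a Server-Sent Events frame (`data: <payload>` followed by a blank line). A minimal sketch, assuming a simplified chunk shape with a `text` field; real Bedrock streaming payloads vary by model provider, and the actual Poe integration is considerably more involved:

```python
import json
from typing import Iterable, Iterator

def bedrock_chunks_to_sse(chunks: Iterable[dict]) -> Iterator[str]:
    """Translate Bedrock-style streaming chunks into SSE frames.

    Each frame is a `data: <json>` line followed by a blank line,
    the wire format an event-driven SSE client consumes.
    """
    for chunk in chunks:
        # Hypothetical chunk shape: {"text": "..."} -- for illustration only.
        yield f"data: {json.dumps({'text': chunk.get('text', '')})}\n\n"
    # Signal end-of-stream with a sentinel event.
    yield "data: [DONE]\n\n"

# Example: two model chunks become three SSE frames.
frames = list(bedrock_chunks_to_sse([{"text": "Hel"}, {"text": "lo"}]))
```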

Core Sample

Build AI Agents faster with MCP Servers - Part 1/2

This video demonstrates how to supercharge Strands AI agents using MCP (Model Context Protocol) servers, allowing developers to tap into specialized expertise without reinventing the wheel. By incorporating three MCP servers, the tutorial creates an agent capable of generating a detailed report on fine-tuning the Llama model with Amazon SageMaker, complete with instance recommendations and pricing info. The secret sauce lies in the MCP servers' ability to encapsulate domain knowledge, essentially letting you "borrow brains" to accelerate AI agent development and deployment. Watch video →
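The "borrow brains" idea above is, structurally, just merging tools exposed by several servers into one registry the agent can call. The sketch below is a toy stand-in, not the Strands SDK or a real MCP client: `FakeMCPServer`, `gather_tools`, and the tool names are all hypothetical, and real MCP servers speak the Model Context Protocol over stdio or HTTP rather than plain Python callables:

```python
from typing import Callable, Dict, List

class FakeMCPServer:
    """Toy stand-in for an MCP server: a name plus named tool callables."""
    def __init__(self, name: str, tools: Dict[str, Callable[[str], str]]):
        self.name = name
        self.tools = tools

def gather_tools(servers: List[FakeMCPServer]) -> Dict[str, Callable[[str], str]]:
    """Merge tools from several servers into one registry for an agent.

    Duplicate tool names keep the first registration -- an arbitrary
    but explicit conflict policy.
    """
    registry: Dict[str, Callable[[str], str]] = {}
    for server in servers:
        for tool_name, fn in server.tools.items():
            registry.setdefault(tool_name, fn)
    return registry

# Hypothetical servers standing in for, e.g., docs and pricing expertise.
docs = FakeMCPServer("aws-docs", {"lookup": lambda q: f"docs about {q}"})
pricing = FakeMCPServer("pricing", {"estimate": lambda q: f"price of {q}"})
tools = gather_tools([docs, pricing])
```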
