Bedrock Brief 10 Sep 2025
Welcome back, AI adventurers! This week in the wild world of AWS, we've got hackathons, hardware expansions, and enough acronyms to make your head spin faster than an overclocked GPU.
First up, dust off your coding gloves because AWS is throwing an AI Agent Global Hackathon with a whopping $45,000 prize pool. It's time to flex those AI muscles and build agents that can reason, connect, and maybe even do your laundry (okay, maybe not that last one... yet). Whether you're a Bedrock buff or a SageMaker savant, there's a category for you to shine.
But wait, there's more! While you're busy hacking away, AWS is quietly building enough data center capacity to power a small country. Word on the street is they're cozying up to Anthropic, the GenAI wunderkind reportedly outpacing even the mighty OpenAI. With "well over a gigawatt" of new capacity in the works, it looks like AWS is betting big on Anthropic's hunger for compute. And speaking of bets, they're doubling down on their homegrown Trainium chips – because who needs GPUs when you can train on your own silicon, right?
Fresh Cut
- Amazon Bedrock enables faster text and image embeddings with TwelveLabs' Marengo 2.7, allowing developers to create more responsive search experiences using advanced video understanding AI. Read announcement →
- Contact center admins can now easily choose different AI models for their customer service chatbots directly in Amazon Connect's web interface, allowing customization of AI responses for various business needs without coding. Read announcement →
- AWS WAF includes 500 MB of free CloudWatch logs for every million requests, enabling developers to analyze web traffic patterns and security events without additional costs. Read announcement →
- SageMaker HyperPod's new managed tiered checkpointing saves AI training progress in CPU memory for quick recovery, potentially saving developers hours of work if their large-scale model training is interrupted. Read announcement →
- Amazon SageMaker's AI assistant now understands your project's resources, offering personalized help for data engineering and ML tasks directly in Jupyter notebooks and the command line. Read announcement →
- AWS introduces three condition keys for Amazon Bedrock API keys, allowing developers to control key generation, expiration, and type, enhancing security and management of AI model access. Read announcement →
- Amazon Bedrock enables global cross-region inference for Claude Sonnet 4, allowing developers to route requests to any supported AWS region for higher throughput and better resource utilization (see the sketch after this list). Read announcement →
- AWS Clean Rooms ML introduces redacted error log summaries, allowing developers to troubleshoot custom ML models collaboratively while protecting sensitive data and intellectual property. Read announcement →
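Curious what that global routing looks like in practice? Here's a minimal sketch using boto3's bedrock-runtime Converse API; the global inference profile ID is an assumption for illustration, so copy the exact identifier from the Bedrock console before trying it.

```python
# Minimal sketch: calling Claude Sonnet 4 through a global cross-region
# inference profile. The profile ID below is illustrative (an assumption);
# check the Bedrock console for the exact identifier in your account.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="global.anthropic.claude-sonnet-4-20250514-v1:0",  # assumed global profile ID
    messages=[{"role": "user", "content": [{"text": "Summarize this week's AWS AI news."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```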
The Quarry
Accelerate your model training with managed tiered checkpointing on Amazon SageMaker HyperPod
SageMaker HyperPod now offers managed tiered checkpointing, a nifty trick to speed up your AI model training while keeping your data safe. This feature cleverly uses CPU memory as high-speed checkpoint storage, automatically replicating data across nearby compute nodes for extra peace of mind. For the tech-savvy, it's like having a turbocharged RAID system for your AI workloads, potentially slashing checkpoint times and getting you to that "model complete" finish line faster. Read blog →
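The feature itself is fully managed, but the idea is easy to picture: write each checkpoint to a fast in-memory tier first, then push it to durable storage in the background. The sketch below is purely conceptual (it is not the HyperPod API), with illustrative names throughout.

```python
# Conceptual sketch of tiered checkpointing; NOT the HyperPod API.
# Tier 1: CPU memory (fast, recovered via replication to neighboring nodes).
# Tier 2: durable object storage (slower, survives losing the whole node).
import io
import threading
import torch

class TieredCheckpointer:
    def __init__(self, s3_client, bucket: str):
        self.memory_tier = {}   # step -> serialized checkpoint bytes
        self.s3 = s3_client
        self.bucket = bucket

    def save(self, step: int, state_dict: dict) -> None:
        buf = io.BytesIO()
        torch.save(state_dict, buf)              # serialize once
        self.memory_tier[step] = buf.getvalue()  # fast tier: CPU memory
        # Slow tier: flush to S3 in the background so training isn't blocked.
        threading.Thread(
            target=self.s3.put_object,
            kwargs={"Bucket": self.bucket, "Key": f"checkpoints/{step}.pt",
                    "Body": self.memory_tier[step]},
            daemon=True,
        ).start()

    def load_latest(self) -> dict:
        # Prefer the in-memory copy; a real implementation falls back to S3.
        step = max(self.memory_tier)
        return torch.load(io.BytesIO(self.memory_tier[step]))
```

The managed version handles the replication, eviction, and storage fallback for you; that's the part you no longer have to build yourself.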
More posts:
- Powering innovation at scale: How AWS is tackling AI infrastructure challenges
- Maximize HyperPod Cluster utilization with HyperPod task governance fine-grained quota allocation
- Build and scale adoption of AI agents for education with Strands Agents, Amazon Bedrock AgentCore, and LibreChat
- Skai uses Amazon Bedrock Agents to significantly improve customer insights by revolutionizing data access and analysis
- The power of AI in driving personalized product discovery at Snoonu
- Accelerating HPC and AI research in universities with Amazon SageMaker HyperPod
- Exploring the Real-Time Race Track with Amazon Nova
- Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 2
- Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 1
- Authenticate Amazon Q Business data accessors using a trusted token issuer
- Unlocking the future of professional services: How Proofpoint uses Amazon Q Business
- Enhancing LLM accuracy with Coveo Passage Retrieval on Amazon Bedrock
Core Sample
AI Agent: Researches Blog and Knowledge Base using Strands SDK
This video demonstrates how to create a Research AI Agent using the Strands SDK that can extract entities from a blog, query a vector store for product details, and generate summaries - all with minimal code. It showcases the power of combining built-in Strands tools with custom tools leveraging Anthropic's Claude Sonnet model on Amazon Bedrock. For engineers, a key technical highlight is the agent's ability to estimate its own execution cost, providing valuable insight into resource utilization and optimization opportunities. Watch video →
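If you'd rather skim code than watch a video, here's a minimal sketch of the same pattern with the Strands Agents SDK. The package names (strands, strands_tools), the built-in tools, and the Claude Sonnet model ID are assumptions based on the public SDK, and the custom tool is a hypothetical stand-in for the video's product lookup; treat it as a sketch of the pattern, not the video's exact code.

```python
# Minimal sketch of a research agent with the Strands Agents SDK.
# Assumptions: the strands / strands_tools packages, the http_request and
# retrieve built-in tools, and the model ID below; verify against your install.
from strands import Agent, tool
from strands_tools import http_request, retrieve  # HTTP fetch + knowledge base retrieval

@tool
def product_details(product_name: str) -> str:
    """Hypothetical custom tool: look up product details in a local catalog."""
    catalog = {"Amazon Bedrock": "Managed service for building with foundation models."}
    return catalog.get(product_name, "No details found.")

agent = Agent(
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",  # assumed inference profile ID
    tools=[http_request, retrieve, product_details],
    system_prompt=(
        "You are a research assistant. Extract entities from the blog post, "
        "look up product details with your tools, and return a short summary."
    ),
)

result = agent("Research https://aws.amazon.com/blogs/machine-learning/ and summarize the latest post.")
print(result)  # the agent's final summary
```

Swap in your own tools and prompt and the structure stays the same; that's most of the "minimal code" appeal the video is getting at.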
More videos:
- AI Pulse: Vhi improves data analysis through generative AI
- Building Custom Agents with Amazon Q Developer CLI | AWS Developer Tools
- Clearwater Analytics: Redefining investment management with generative AI
- Bundesliga Uses Amazon Nova to Personalize Fan Content with GenAI
- How Startup Culture Transforms Legacy Thinking