Bedrock Brief 27 Aug 2025

Well, folks, it's been quite a week in the world of AWS AI. Grab your coffee and buckle up, because we're diving headfirst into some spicy takes from none other than AWS CEO Matt Garman. In a move that's sure to ruffle some feathers, Garman declared that replacing junior developers with AI is "the dumbest thing I've ever heard." Shots fired, Silicon Valley!

But wait, there's more! While Garman's busy defending the junior devs, he's also throwing shade at companies bragging about AI-generated code metrics. Apparently, boasting about lines of code written by AI is about as impressive as a participation trophy in adult kickball. The real kicker? Over 80% of AWS developers are already using AI in their workflows. So much for replacing the workforce, eh?

Meanwhile, in a plot twist worthy of a Netflix drama, Amazon's AGI research lab is betting big on AI agents. David Luan, the lab's head honcho, is singing the praises of task-completing AI that goes beyond mere chatbots. It's like watching the birth of Skynet, but with more AWS branding and (hopefully) less world domination. Stay tuned, folks – this AI rollercoaster is just getting started!

Fresh Cut

  • Amazon Polly introduces seven new synthetic voices in multiple languages, including a male Canadian French voice that can switch languages while keeping the same vocal identity, letting developers build more natural-sounding multilingual speech applications (see the Polly sketch after this list). Read announcement →
  • Amazon Connect Contact Lens expands its external voice analytics to five new AWS regions, enabling businesses to improve customer experiences across multiple voice systems without migrating their entire contact center. Read announcement →
  • EC2 G6 instances with NVIDIA L4 GPUs are available in the UAE, offering up to 8 GPUs, 192 vCPUs, and 7.52 TB of storage for graphics-intensive and machine learning tasks. Read announcement →
  • Amazon RDS for MariaDB 11.8 introduces vector storage capabilities, enabling developers to build AI-powered features like similarity searches in e-commerce applications without complex database setup (see the sketch after this list). Read announcement →
  • Amazon Neptune's new GraphRAG Toolkit lets developers easily connect knowledge graphs to language models, improving AI responses with structured data for more accurate and explainable results. Read announcement →
  • Amazon Bedrock Data Automation expands document processing to Portuguese, French, Italian, Spanish, and German, enabling developers to create multilingual AI applications with less effort. Read announcement →
  • Amazon Bedrock Data Automation, which helps developers extract insights from unstructured data like documents and images for AI applications, is now available in AWS GovCloud (US-West), expanding its reach to government and highly regulated sectors. Read announcement →
  • AWS releases open-source MCP server for Billing and Cost Management, enabling developers to analyze spending and optimize costs using their preferred AI assistant with SQL-based calculations and secure AWS service connectivity. Read announcement →
  • Amazon Bedrock's new Count Tokens API helps developers estimate costs and optimize prompts for Claude models by providing token counts before running inference (see the sketch after this list). Read announcement →
  • Data scientists can now easily share files using S3 buckets in Amazon SageMaker Unified Studio projects, simplifying collaboration without the need for Git knowledge. Read announcement →
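
A few quick sketches to go with the items above, starting with the new Polly voices. This is a minimal boto3 example, not code from the announcement: the "Liam" VoiceId and the engine setting are assumptions, so check the current Polly voice list before relying on them.

```python
import boto3

# Minimal sketch: synthesize a short French sentence with Amazon Polly.
# Assumptions: "Liam" is the male Canadian French voice referenced in the
# announcement and it supports the engine below -- verify VoiceId and Engine
# against the current Polly voice list.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Bonjour! Voici un exemple de synthèse vocale multilingue.",
    VoiceId="Liam",        # assumed voice ID
    OutputFormat="mp3",
    Engine="neural",       # assumption: the newest voices may require "generative"
)

# Polly returns the audio as a streaming body; write it to disk.
with open("sample.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```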
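
Next, the MariaDB vector storage item. The sketch below runs illustrative SQL through PyMySQL against an RDS for MariaDB 11.8 instance; the endpoint, table, embedding dimension, and distance function are placeholders, and the exact vector syntax is worth confirming in the MariaDB docs.

```python
import pymysql

# Sketch only: connect to an RDS for MariaDB 11.8 instance and run a
# similarity search over a VECTOR column. Host, credentials, table name,
# and the tiny 4-dimensional embeddings are placeholders.
conn = pymysql.connect(
    host="my-instance.abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="...",
    database="shop",
)

with conn.cursor() as cur:
    # VECTOR(4) keeps the example readable; real embeddings are far larger.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS products (
            id INT PRIMARY KEY,
            name VARCHAR(255),
            embedding VECTOR(4) NOT NULL
        )
    """)
    cur.execute(
        "INSERT INTO products VALUES (1, 'red sneaker', Vec_FromText('[0.1, 0.9, 0.2, 0.4]'))"
    )
    # Nearest-neighbour query: smaller distance means more similar.
    cur.execute("""
        SELECT name
        FROM products
        ORDER BY VEC_DISTANCE_EUCLIDEAN(embedding, Vec_FromText('[0.1, 0.8, 0.3, 0.4]'))
        LIMIT 5
    """)
    print(cur.fetchall())

conn.commit()
conn.close()
```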
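
Finally, the Count Tokens API. This sketch assumes the boto3 bedrock-runtime client exposes a count_tokens operation that accepts a Converse-style payload; the request shape, response field, and model ID shown here are assumptions to verify against the Bedrock API reference.

```python
import boto3

# Sketch: estimate token usage before paying for inference.
# Assumptions: the bedrock-runtime client exposes count_tokens, the request
# takes a Converse-style "input", and the response includes "inputTokens" --
# confirm all three in the Bedrock API reference.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "Summarize the Bedrock Brief for 27 Aug 2025 in three bullet points."

response = bedrock.count_tokens(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    input={
        "converse": {
            "messages": [
                {"role": "user", "content": [{"text": prompt}]}
            ]
        }
    },
)

print("Input tokens:", response["inputTokens"])  # field name assumed
```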

The Quarry

Beyond the basics: A comprehensive foundation model selection framework for generative AI

Choosing the perfect foundation model for your AI project is like picking the right tool from a Swiss Army knife—overwhelming but crucial. Amazon Bedrock users now have a systematic evaluation methodology that blends theoretical frameworks with hands-on strategies, helping data scientists and ML engineers navigate the ever-expanding model landscape. This approach goes beyond basic metrics, diving into nuanced factors like inference latency and token limits, ensuring you don't just pick a model that works, but one that truly shines for your specific use case. Read blog →
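
In that spirit, here's a minimal sketch of one practical slice of such an evaluation: sending the same prompt to a shortlist of Bedrock models through the Converse API and recording latency and token usage. The model IDs are placeholders, and a real evaluation would also score output quality against your own criteria.

```python
import time
import boto3

# Minimal sketch: compare a shortlist of Bedrock models on the same prompt,
# recording wall-clock latency and reported token usage. Model IDs are
# placeholders -- substitute ones enabled in your account and region.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

candidates = [
    "anthropic.claude-3-haiku-20240307-v1:0",
    "amazon.nova-lite-v1:0",
]
prompt = "Explain retrieval-augmented generation in two sentences."

for model_id in candidates:
    start = time.perf_counter()
    resp = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 200, "temperature": 0.2},
    )
    latency = time.perf_counter() - start
    usage = resp["usage"]  # inputTokens / outputTokens reported by the service
    print(f"{model_id}: {latency:.2f}s, "
          f"{usage['inputTokens']} in / {usage['outputTokens']} out tokens")
```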

More posts:


Core Sample

Edge Intelligence: AWS IoT Greengrass and Machine Learning at the Edge

AWS IoT Greengrass brings the power of machine learning to edge devices, slashing latency and bandwidth costs while ensuring operations continue even when disconnected from the cloud. The video showcases how the cross-platform ONNX runtime enables ML inference across diverse edge hardware, from Raspberry Pis to industrial gateways. For the technically curious, it delves into the nitty-gritty of implementing edge ML architectures, including a live demo that'll make any self-respecting engineer's fingers itch to start tinkering. Watch video →
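
For anyone who wants to scratch that itch right away, here's a minimal sketch of the ONNX Runtime inference pattern the video builds on. The model file and input shape are placeholders, and a real Greengrass deployment would wrap this in a component rather than a standalone script.

```python
import numpy as np
import onnxruntime as ort

# Sketch of cross-platform inference with ONNX Runtime, the same runtime an
# edge device (Raspberry Pi, industrial gateway, ...) would run under
# Greengrass. "model.onnx" and the 1x3x224x224 input are placeholders for
# whatever model you actually deploy.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
dummy_frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_frame})
print("Top class index:", int(np.argmax(outputs[0])))
```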

More videos: