Twenty years ago, accessing digital information meant sitting at a desktop computer, watching images load line by line. While technology has evolved to make humanity’s knowledge instantly accessible in our pockets, we’re still tethered to screens, continually pulling out phones and tapping away. But the next evolution in computing, powered by AI agents, promises something fundamentally different: intelligent objects throughout your environment that work together seamlessly and automatically—no screen interface required. “This kind of ambient responsiveness gives us technology that anticipates our needs rather than waiting for commands,” says Clare Liguori, senior principal engineer, AWS Agentic AI.
These automated agents can process sensor data, make decisions, and execute actions without explicit commands from developers. Unlike apps that are trapped in phones, they can be embedded in homes and everyday objects, maintaining a single, coherent understanding of a household or individual.
Consider an AI agent tasked with monitoring your health in the home. By aggregating data from wearables, environmental sensors, and behavioral patterns, it builds a dynamic understanding of your health and surroundings. For example, when pollen counts rise, the agent does not just follow a preprogrammed script. It evaluates multiple factors such as weather forecasts, your medication supply, past allergy patterns, and in-home air quality. It might use this data to automatically adjust your home’s air handling system, change your mattress firmness or position to help with nasal congestion, and display a reminder on your medicine cabinet about your allergy medicine.
Traditional smart home approaches can’t support this type of proactive, autonomous behavior. They rely on explicit programming of decision trees and state machines, requiring developers to anticipate and code for every possible scenario. That is impractical to scale, and the complexity grows quickly once the system must span distributed devices and large volumes of sensor data.
The fundamental problem lies in the architectural approach. Traditional AI agent development requires developers to hardcode complex workflows—essentially trying to predict every possible scenario and program specific responses. When pollen counts rise, for instance, a traditional system might have rigid rules: “If pollen > X, turn on air purifier.” But what happens when it’s also raining, or the resident is traveling, or their medication has expired? Each new variable requires additional programming, creating exponential complexity.
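To make that concrete, here is a sketch of what such hardcoded rules tend to look like; the thresholds, conditions, and actions are invented for illustration.

```python
# A hypothetical hardcoded smart-home rule set (thresholds and actions invented
# for illustration). Every new condition forces another explicit branch, and the
# combinations multiply with each sensor or scenario added.

def handle_pollen(pollen_count: int, is_raining: bool,
                  resident_home: bool, medication_expired: bool) -> list[str]:
    actions = []
    if pollen_count > 50:                      # the rigid "if pollen > X" rule
        if resident_home:
            actions.append("turn on air purifier")
            if medication_expired:
                actions.append("remind resident to refill allergy medication")
        elif not is_raining:
            actions.append("close window vents")
    # Resident traveling? Filter overdue for replacement? High humidity as well?
    # Each unanticipated combination means yet another hand-written branch.
    return actions

print(handle_pollen(pollen_count=72, is_raining=False,
                    resident_home=True, medication_expired=True))
```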
This workflow-driven approach has proven brittle in practice. “Teams can spend months building agents that break when they encounter an unexpected scenario. The development cycle becomes: Code workflows, test edge cases, discover failures, add more rules, repeat,” Liguori says. “But what used to take months for teams to go from prototype to production with traditional approaches can now be accomplished in days and weeks with model-driven frameworks.”
This breakthrough allows AI agents to reason about their environment using large language models as their cognitive engine, equipped with tools such as sensors and data sources. Developers can create these systems with software development kits (SDKs) like Strands Agents, an open source SDK launched by AWS in 2025 that lets developers define these agents in just a few lines of code. “We call this the model-driven approach to building agents,” Liguori says.
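As a rough sketch of what “a few lines of code” means in practice—assuming the SDK is installed and credentials for its default model provider are configured—a bare agent can be defined and invoked directly:

```python
# Minimal Strands agent: the framework runs the reasoning loop, not the developer.
from strands import Agent

# The system prompt states the agent's role; the model provider (a cloud-hosted
# model by default, or a local one) supplies the cognitive engine.
agent = Agent(
    system_prompt="You are a home assistant. Explain your reasoning briefly."
)

# One call: the model decides how to respond, with no hand-coded workflow.
agent("What should I check around the house before a high-pollen day?")
```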
Agents defined in Strands can run on both local and cloud-based LLMs, enabling ambient intelligence with simple abstractions for integrating and coordinating networks of agents running anywhere you need them. Instead of writing procedural code that defines every possible interaction, developers define high-level goals and provide tools through APIs. The framework handles the complex orchestration of maintaining state, managing context windows, and coordinating the work of multiple agents running in parallel.
For the health-monitoring agent, developers simply define the objective (“optimize air quality for residents”) and provide API access to relevant systems such as air quality sensors as tools. The Strands SDK orchestrates the loop that enables the agent to reason and make autonomous decisions, such as discovering elevated pollen counts in the weather forecast and adjusting the home accordingly.
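A sketch of that setup with Strands might look like the following, where the sensor and HVAC functions are hypothetical stand-ins for a home’s real device APIs: the goal lives in the system prompt, the integrations are exposed as tools, and the SDK’s agent loop decides when to call them.

```python
from strands import Agent, tool

# Hypothetical integrations standing in for real device and data APIs.
@tool
def read_air_quality() -> str:
    """Return the latest indoor air-quality readings."""
    return "PM2.5: 14 ug/m3, pollen index: 82 (high), humidity: 41%"

@tool
def get_weather_forecast() -> str:
    """Return today's outdoor forecast, including pollen counts."""
    return "Sunny, light wind, pollen count very high, 10% chance of rain"

@tool
def set_hvac_mode(mode: str) -> str:
    """Switch the home's air-handling system, e.g. 'recirculate' or 'fresh-air'."""
    return f"HVAC switched to {mode} mode"

# The objective is a goal, not a decision tree; the agent loop chooses which
# tools to call, in what order, and what to do with the results.
agent = Agent(
    system_prompt="Optimize air quality for the residents of this home.",
    tools=[read_air_quality, get_weather_forecast, set_hvac_mode],
)

agent("Pollen season is starting. Check conditions and adjust the house as needed.")
```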
Agentic platforms like Amazon Bedrock AgentCore enable this distributed intelligence at enterprise scale, allowing agents to maintain a coherent understanding across different devices and environments. These can include private networks of agents, running on LLMs local to devices in the home, that delegate complex tasks to agents running in the cloud.
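One way to sketch that delegation pattern is to expose one agent as a tool of another; in the snippet below the prompts and task are illustrative, and a production deployment would run the cloud-side agent behind a managed runtime such as Amazon Bedrock AgentCore rather than in the same process.

```python
from strands import Agent, tool

# Cloud-side specialist: in practice this would use a cloud-hosted model and run
# behind a managed agent runtime; the default provider is used here as a sketch.
cloud_specialist = Agent(
    system_prompt="You analyze long-term health and air-quality trends in depth."
)

@tool
def delegate_trend_analysis(question: str) -> str:
    """Hand a complex analysis task to the cloud-side agent and return its answer."""
    return str(cloud_specialist(question))

# Home-side agent: could be configured with an LLM local to a device in the home,
# keeping routine decisions on-premises and delegating only the heavy analysis.
home_agent = Agent(
    system_prompt="Handle day-to-day home adjustments; delegate deep analysis.",
    tools=[delegate_trend_analysis],
)

home_agent("Pollen has been high all week. Is there a longer-term pattern to plan for?")
```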
“As we continue to lower the barriers between intelligence and everyday life, we’re moving toward a future where technology truly adapts to us, not the other way around,” Liguori says. “The infrastructure for ambient intelligence is here today, and frameworks like Strands are making it possible for developers to create the seamless integration of intelligence and environment we’ve always imagined.”

