Over the past year, veteran software engineer Jay Prakash Thakur has spent his nights and weekends prototyping AI agents that could, in the near future, order meals and engineer mobile apps almost entirely on their own. His agents, while surprisingly capable, have also exposed new legal questions that await companies trying to capitalize on Silicon Valley’s hottest new technology.
Agents are AI programs that can act mostly independently, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills upon request, Microsoft and other tech giants expect that agents will tackle more complex functions—and most importantly, do it with little human oversight.
The tech industry’s most ambitious plans involve multi-agent systems, with dozens of agents someday teaming up to replace entire workforces. For companies, the benefit is clear: saving on time and labor costs. Already, demand for the technology is rising. Tech market researcher Gartner estimates that agentic AI will resolve 80 percent of common customer service queries by 2029. Fiverr, a service where businesses can book freelance coders, reports that searches for “ai agent” have surged 18,347 percent in recent months.
Thakur, a mostly self-taught coder living in California, wanted to be at the forefront of the emerging field. His day job at Microsoft isn’t related to agents, but he has been tinkering with AutoGen, Microsoft’s open source software for building agents, since he worked at Amazon back in 2024. Thakur says he has developed multi-agent prototypes using AutoGen with just a dash of programming. Last week, Amazon rolled out a similar agent development tool called Strands; Google offers what it calls an Agent Development Kit.
Because agents are meant to act autonomously, the question of who bears responsibility when their errors cause financial damage has been Thakur’s biggest concern. Assigning blame when agents from different companies miscommunicate within a single, large system could become contentious, he believes. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation based on different people’s notes. “It’s often impossible to pinpoint responsibility,” Thakur says.
Joseph Fireman, senior legal counsel at OpenAI, said on stage at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to go after those with the deepest pockets. That means companies like his will need to be prepared to take some responsibility when agents cause harm—even when a kid messing around with an agent might be to blame. (If that person were at fault, they likely wouldn’t be a worthwhile target moneywise, the thinking goes.) “I don’t think anybody is hoping to get through to the consumer sitting in their mom’s basement on the computer,” Fireman said. The insurance industry has begun rolling out coverage for AI chatbot issues to help companies cover the costs of mishaps.
Onion Rings
Thakur’s experiments have involved stringing together agents into systems that require as little human intervention as possible. In one project, he set out to replace fellow software developers with two agents: one trained to search for specialized tools needed for making apps, the other to summarize their usage policies. In the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app, Thakur says.
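Something like that division of labor can be wired up in relatively few lines using AutoGen’s group-chat pattern. The Python sketch below is illustrative rather than Thakur’s actual code: it assumes the 0.2-series API (newer AutoGen releases expose a different interface), and the model name, agent prompts, and task are placeholders.

    # Illustrative AutoGen (0.2-series) sketch: a tool-researcher agent and a
    # policy-summarizer agent coordinated through a group chat.
    # Not Thakur's actual code; model name, prompts, and task are placeholders.
    import autogen

    llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

    # Agent that hunts for specialized tools needed to build the app.
    researcher = autogen.AssistantAgent(
        name="tool_researcher",
        system_message="Find developer tools and APIs suitable for the requested app.",
        llm_config=llm_config,
    )

    # Agent that condenses each tool's usage policy for the rest of the team.
    summarizer = autogen.AssistantAgent(
        name="policy_summarizer",
        system_message=(
            "Summarize the usage policy of each proposed tool. "
            "Preserve every limit, tier, and qualification exactly."
        ),
        llm_config=llm_config,
    )

    # Proxy that kicks off the conversation; no human input, no code execution.
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        code_execution_config=False,
    )

    groupchat = autogen.GroupChat(
        agents=[user_proxy, researcher, summarizer], messages=[], max_round=6
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    user_proxy.initiate_chat(
        manager,
        message="Propose tools for building a mobile app and summarize their usage policies.",
    )

In a fuller version of the prototype, a third coding agent would join the group chat and act on the summaries.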
When Thakur put his prototype to the test, a search agent found a tool that, according to the website, “supports unlimited requests per minute for enterprise users” (meaning high-paying clients can rely on it as much as they want). But in trying to distill the key information, the summarization agent dropped the crucial qualification of “per minute for enterprise users.” It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program that made unlimited requests to the outside service. Because this was a test, there was no harm done. If it had happened in real life, the truncated guidance could have led to the entire system unexpectedly breaking down.
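A toy example makes the failure mode concrete. Everything in it is invented for illustration: the policy strings, the 60-requests-per-minute free-tier cap, and the helper function are hypothetical, and no AI model is involved. The point is only that once the qualifier disappears from the summary, nothing downstream reintroduces it.

    # Toy illustration (all values invented): how a dropped qualifier changes
    # what a downstream coding agent believes it is allowed to do.
    FULL_POLICY = "Supports unlimited requests per minute for enterprise users."
    SUMMARY = "Supports unlimited requests."  # "per minute for enterprise users" was lost

    def plan_request_budget(policy: str, is_enterprise: bool) -> float:
        """Return how many calls per minute the coding agent thinks it may make."""
        if "unlimited" in policy:
            if "enterprise" in policy and not is_enterprise:
                return 60.0  # hypothetical free-tier cap
            return float("inf")
        return 60.0

    print(plan_request_budget(FULL_POLICY, is_enterprise=False))  # 60.0 -> client throttles itself
    print(plan_request_budget(SUMMARY, is_enterprise=False))      # inf  -> client never throttles

The coding agent never sees the full policy, only the summary, so from its point of view an unthrottled request loop looks perfectly compliant.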