Federal enforcement authorities have expressed confidence that existing legal authorities are adequate in the face of emergent artificial-intelligence ("AI") technology. But particularly as agentic AI enters the mainstream, that proposition is open to question, especially in the criminal-law context, where scienter provides the dividing line between innocent and wrongful conduct in most cases. For this reason, prosecutors and courts may struggle to find individuals and companies criminally culpable when an AI agent commits misconduct. The law and society will eventually catch up to these developments, but it will take time. Legislatures may step up to fill this gap, but we may also see prosecutors turn more often to civil statutes to address misconduct.
In April 2023, responding to growing popular adoption of generative artificial intelligence ("AI"), the heads of several federal agencies issued a joint statement emphasizing that "[e]xisting legal authorities apply" to the use of AI technologies "just as they apply to other practices."2 That same day, Federal Trade Commission Chair Lina M. Khan issued a separate, accompanying statement, asserting that "AI technologies are covered by existing laws" and "[t]here is no AI exemption to the laws on the books."3
While it is true that there is no AI exemption to the law generally, the rapid development of AI technologies calls into question whether existing laws are adequate. This is particularly true in criminal law, where the ancient principle of scienter (mens rea, or, in other words, a culpable state of mind) provides the dividing line separating wrongful acts from innocent acts in most cases.4 The coming wave of agentic AI highlights the potential inadequacy of existing laws. Prosecutors and courts will face difficult questions about whether the person behind a misbehaving AI agent can or should be held criminally liable. To get in front of these difficult cases, legislatures and enforcement agencies should consider proactively addressing how to handle questions of criminal culpability in the agentic AI context.
01 AGENTIC AI: THE NEXT FRONTIER
Although the term AI is often used as if the technology were a monolith, a variety of materially different technologies fall under that label. At its simplest, AI is technology that can perform complex tasks typically associated with human intelligence.5 Anyone who used a search platform in the early aughts or electronic translation tools in the 2010s interfaced with AI, probably without even thinking about it.
Perhaps the most popular form of AI in current parlance is generative AI ("GenAI"). GenAI is AI that can create original content, drawing on large datasets at its disposal, in response to a user's prompt or request.6 GenAI is closer to human functionality than traditional AI, in that GenAI essentially replicates the learning and decision-making processes of the human brain.7 But GenAI is fundamentally reactive: it must be prompted to act, and its results are only as good as the prompts it is given.
Agentic AI, on the other hand, takes us into the uncanny valley. The hallmarks of agentic AI are autonomy and judgment: once provided preprogrammed goals, AI agents are capable of learning and operating on their own. That is, AI agents can "assess situations and determine the path forward without" ongoing "human input."8
02 AGENTIC AI AND CRIMINAL LAW
Given agentic AI's capacity for autonomy and judgment, it is only a matter of time before an AI agent commits a crime. The question of what to do in those circumstances is likely to pose serious challenges for prosecutors and courts, given the scienter requirements attached to most crimes.9 Consider two hypothetical (for now) scenarios.
First: To reduce overhead costs, a financially struggling hospital deploys an AI agent to review and organize files documenting medical services provided, assign correct billing codes to those services, and submit the associated invoices to the federal government.10 The AI agent is programmed generally to maximize receipts and to avoid violating the law. After reviewing the hospital's files for several months, the AI agent determines that receipts need to increase for the hospital...