
Will AI soon run businesses as ‘artificial persons’?

How can we regulate AI within existing legal frameworks to reduce undesirable behaviours and assign legal responsibility for autonomous actions of AIs?
October 30, 2023
An AI-generated image of artificial intelligence in the workplace. Source: Private Media

Only “persons” can engage with the legal system – for example, by signing contracts or filing lawsuits. There are two main categories of persons: humans, termed “natural persons,” and creations of the law, termed “artificial persons”. These include corporations, nonprofit organisations and limited liability companies (LLCs).

Up to now, artificial persons have served the purpose of helping humans achieve certain goals. For example, people can pool assets in a corporation and limit their liability vis-à-vis customers or other persons who interact with the corporation. But a new type of artificial person is poised to enter the scene – artificial intelligence systems – and they won’t necessarily serve human interests.

As scholars who study AI and law, we believe that this moment presents a significant challenge to the legal system: how to regulate AI within existing legal frameworks to reduce undesirable behaviours, and how to assign legal responsibility for autonomous actions of AIs.

One solution is teaching AIs to be law-abiding entities.

This is far from a philosophical question. The laws governing LLCs in several US states do not require that humans oversee the operations of an LLC. In fact, in some states, it is possible to have an LLC with no human owner, or “member” – for example, in cases where all of the partners have died. Though legislators probably weren’t thinking of AI when they crafted the LLC laws, the possibility of zero-member LLCs opens the door to creating LLCs operated by AIs.

Many functions inside small and large companies have already been delegated to AI in part, including financial operations, human resources and network management, to name just three. AIs can now perform many tasks as well as humans do. For example, AIs can read medical X-rays, do other medical tasks, and carry out tasks that require legal reasoning. This process is likely to accelerate due to innovation and economic interests.

A different kind of person

Humans have occasionally included nonhuman entities like animals, lakes and rivers, as well as corporations, as legal subjects. Though in some cases these entities can be held liable for their actions, the law only allows humans to fully participate in the legal system.

One major barrier to full access to the legal system by nonhuman entities has been the role of language as a uniquely human invention and a vital element in the legal system. Language enables humans to understand norms and institutions that constitute the legal framework. But humans are no longer the only entities using human language.

The recent development of AI’s ability to understand human language unlocks its potential to interact with the legal system. AI has demonstrated proficiency in various legal tasks, such as tax law advice, lobbying, contract drafting and legal reasoning.

Would you do business with an AI that didn’t know the law? Source: SM/AIUEO/The Image Bank via Getty Images

An LLC established in a jurisdiction that allows it to operate without human members could trade in digital currencies settled on blockchains, allowing the AI running the LLC to operate autonomously and in a decentralised manner that makes it challenging to regulate. Under a legal principle known as the internal affairs doctrine, even if only one US state allowed AI-operated LLCs, that entity could operate nationwide – and possibly worldwide. This is because courts look to the law of the state of incorporation for rules governing the internal affairs of a corporate entity.

We believe the best path forward, therefore, is aligning AI with existing laws, instead of creating a separate set of rules for AI. Additional laws can be layered on top for artificial agents, but AI should be subject to at least all the laws a human is subject to.

Building the law into AI

We suggest a research direction of integrating law into AI agents to help ensure adherence to legal standards. Researchers could train AI systems to learn methods for internalising the spirit of the law. The training would use data generated by legal processes and tools of law, including methods of lawmaking, statutory interpretation, contract drafting, applications of legal standards, and legal reasoning.

In addition to embedding law into AI agents, researchers can develop AI compliance agents – AIs designed to help an organisation automatically follow the law. These specialised AI systems would provide third-party legal guardrails.
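The guardrail idea above can be pictured as a separate checker that vets an AI-operated entity's proposed actions before they execute. The sketch below is a hypothetical illustration, not an existing system: the `Action` format, rule set, and thresholds are invented for the example, standing in for real legal constraints a compliance agent would encode.

```python
# A minimal sketch of a third-party "compliance agent" guardrail:
# an AI-operated entity proposes actions, and a separate checker
# vetoes those that match known legal prohibitions.
# The rule set and action format here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "contract", "payment", "filing"
    amount: float = 0.0
    counterparty: str = ""

# Toy rules standing in for real legal constraints.
PROHIBITED_COUNTERPARTIES = {"sanctioned_entity"}
PAYMENT_REPORTING_THRESHOLD = 10_000.0

def compliance_check(action: Action) -> tuple[bool, str]:
    """Return (allowed, reason). A real agent would consult statutes and counsel."""
    if action.counterparty in PROHIBITED_COUNTERPARTIES:
        return False, "counterparty is prohibited"
    if action.kind == "payment" and action.amount >= PAYMENT_REPORTING_THRESHOLD:
        return True, "allowed, but must be reported"
    return True, "allowed"

print(compliance_check(Action("payment", 25_000.0, "vendor_a")))
# → (True, 'allowed, but must be reported')
```

The key design point is separation of duties: the compliance agent sits outside the operating AI, so a single misaligned system cannot both propose and approve its own actions.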

Researchers can improve AI legal compliance by fine-tuning large language models with supervised learning on labeled legal task completions. Another approach is reinforcement learning, which uses feedback – in this case, from attorneys interacting with language models – to tell an AI whether it is doing a good or bad job. And legal experts could design prompting schemes – structured ways of interacting with a language model – to elicit responses that are more consistent with legal standards.
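The simplest of these three approaches to make concrete is the prompting scheme. The sketch below is a hypothetical illustration of the idea, assuming nothing beyond string templating: the template text and the `build_compliance_prompt` function are invented for the example, not a published method.

```python
# A minimal sketch of a legal-compliance prompting scheme:
# wrap a raw question in a structure that pushes the model toward
# citing authority and flagging uncertainty for human review.
# The template and function names are hypothetical illustrations.

LEGAL_PROMPT_TEMPLATE = """You are assisting with a legal compliance question.
Jurisdiction: {jurisdiction}
Question: {question}

Answer with:
1. The applicable legal standard, citing the statute or doctrine by name.
2. Whether the proposed action complies with it.
3. Any points of uncertainty a licensed attorney should review."""

def build_compliance_prompt(question: str, jurisdiction: str = "Delaware") -> str:
    """Produce a structured prompt from a raw compliance question."""
    return LEGAL_PROMPT_TEMPLATE.format(jurisdiction=jurisdiction, question=question)

prompt = build_compliance_prompt("May an LLC operate with no human members?")
print(prompt.splitlines()[1])  # → Jurisdiction: Delaware
```

The same prompt structure could then be used as the input side of the supervised fine-tuning data mentioned above, with attorney-written answers as the labeled completions.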

Law-abiding (artificial) business owners

If an LLC were operated by an AI, it would have to obey the law like any other LLC, and courts could order it to pay damages or stop doing something by issuing an injunction. An AI tasked with operating the LLC and, among other things, maintaining proper business insurance would have an incentive to understand applicable laws and comply. Carrying minimum business liability insurance is a standard requirement that businesses commonly impose on one another before entering commercial relationships.

The incentives to establish AI-operated LLCs are there. Fortunately, we believe it is possible and desirable to do the work to embed the law – what has until now been human law – into AI, and into AI-powered automated compliance guardrails.

Daniel Gervais is a professor of law at Vanderbilt University and John Nay is a fellow at CodeX – Stanford Center for Legal Informatics at Stanford University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.