Despite its incredible potential for innovation and its status as a front-running symbol of the future, artificial intelligence and the popular discourse surrounding it have been marred by fear. The fear of technological replacement of human labor, in particular, has led to increasingly loud calls for regulation from corporations, labor unions and the general public.
The vast unknowns of AI, ranging from the almost magically innovative to the downright dystopian, make AI regulation controversial. The challenges are multifaceted and complex, leaving regulators caught between a rock and a hard place: AI's potential for growth on one side and its potential for destruction on the other.
Many argue that AI regulation must be more nuanced than existing industrial regulation. A sensible starting point for navigating that complexity would be to lean on legal precedent.
Addressing existing, old-fashioned abuses is a vital first step in any potential AI regulation. The Federal Trade Commission, for example, has already issued an advisory on AI-enabled voice scams aimed at shielding "consumers, creative professionals, and small businesses against the harms of voice cloning."
Most lawmakers would agree that mitigating criminal or malicious uses of AI should be at the forefront of future regulation; such rules would also be the easiest to implement, since they carry little public controversy.
Licensing is another potential regulatory tool. The Federal Communications Commission already requires licenses for broadcasting, satellite communications and mobile devices. Licensing carries risks of its own, however, as it can enable monopolistic or oligopolistic abuses.
According to a Brookings Institution analysis, licensing AI could reinforce “dominance by creating a barrier to entry and adding costs to anyone seeking to assault that position.”
Many technologists agree that licensing should not be treated as a cure-all. Numerous AI industry leaders, including OpenAI's Sam Altman, Microsoft's Brad Smith and Google's Sundar Pichai, are advocating broader federal regulation in an effort to preserve fairness in the industry.
According to a May 2023 article in the Wall Street Journal, Altman has campaigned for a "new [federal] agency that licenses any [AI] effort above a certain scale of capabilities" and that "could take that license away and ensure compliance with safety standards."
Until any such federal agency comes to fruition, lawmakers will likely look to existing regulation. Copyright and patents are two current legal systems in the United States that could potentially apply to AI.
While the U.S. Copyright Office has already stated that “work can be copyrighted in cases where AI assisted with the creation,” it remains vague when it comes to generative AI, stating “works wholly created by AI would not be protectable.” The present patent system is structured mainly around the protection of physical inventions and is thus “ill-equipped to deal with software [leading to AI], raising critical questions about who ‘invented’ something and whether it can be patented.”
Because the frontier of software patenting remains vague and largely undefined, legal protections and regulations are limited. For now, the strongest protection in AI patenting and invention, as in many other areas of AI creation, lies in individual private contracts that define specific intellectual property rights.
In any case, pursuing patent claims against AI-driven infringement of intellectual property would require clearly defined frameworks, which should be settled and clarified through the ongoing governmental debates surrounding AI.
In the final analysis, AI regulation is complex enough that it will require careful collaboration among all affected parties, including government, labor unions and the private sector. Until regulatory priorities become more defined, lawmakers can only lean on legal precedent and light-touch limits to establish a regulatory foundation.