
Regulating Technological Complexity

By Dr. Steven Cohen, Senior Vice Dean, School of Professional Studies; Professor in the Practice of Public Affairs, School of International and Public Affairs

The Supreme Court and some on the political right are trying to limit the ability of administrative agencies to regulate private actions unless those actions are specifically identified and detailed by statute. Many laws explicitly grant regulatory agencies the authority to set standards, but in other cases the law asks for a problem to be addressed yet defines the issue so generally that administrative agencies must fill in the blanks. This is administrative discretion, but to some conservatives, it has led to the development of the “deep state.” In my view, the source of the problem is not some bureaucrat’s desire to wield power but the growing complexity of the technology and the productive systems that enable our economy.

Elected officials who are experts at gaining attention, raising money, maneuvering to achieve prominence, and occasionally providing leadership cannot possibly have the scientific, subject-matter, and technical expertise to understand complex problems well enough to specify rules. Recognizing this reality, elected leaders pass laws that set policy goals and provide resources to achieve them, but they do not know enough operational or technical detail to establish specific regulations. It makes sense for elected officials to establish policy and broad programmatic direction and then leave the specifics to experts. And if the specific rules and actions established by administrative agencies do not conform with the policies set by elected officials, those officials should set new policies that direct the agencies to end their current practices.

The Supreme Court is toying with the dangerous idea that if legislation does not literally empower an agency to set a specific rule, it is better to have no rule and allow private parties to do whatever they want. As Kate Shaw wrote in a recent New York Times opinion piece:

“Perhaps the most important case this term is Loper Bright Enterprises v. Raimondo, scheduled for oral arguments in 2024, in which the plaintiffs are asking the court to overrule the best-known case in administrative law, Chevron v. Natural Resources Defense Council. In Chevron, the court announced a rule that directed federal courts to defer to reasonable agency interpretations of statutes they administer. That is, if a statute is silent or ambiguous on a particular question, courts aren’t supposed to write on a blank slate about what the statute means — if an expert agency has already provided an answer to the question, and it’s a reasonable one, the court is supposed to defer to that interpretation.”

In the absence of government regulation, we recently saw an attempt to use a nonprofit organization to provide internal controls to ensure the safe development of a new and complex technology: artificial intelligence. We observed the limits of that approach in the corporate battle over the leadership of OpenAI and the effort to fire and then rehire Sam Altman to run the company. As Kevin Roose of The New York Times eloquently summarized, the battle was:

“ … a fight between two dueling visions of artificial intelligence. In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential. In another vision, A.I. is something closer to an alien life form — a leviathan being summoned from the mathematical depths of neural networks — that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all. With the return of Sam Altman on Tuesday to OpenAI, the company whose board fired him as chief executive last Friday, the battle between these two views appears to be over. Team Capitalism won. Team Leviathan lost.”

As many of us learned over the past few weeks, OpenAI was a private firm governed by a nonprofit board. This unusual structure was designed to provide an internal check to ensure the safe development of artificial intelligence. The problem is that, however well-intentioned, this internal system collapsed under the logic of capitalism as the financial promise of AI drew closer. From my perspective, that was destined to happen. It’s a little like the concept of corporate social responsibility, which I view as an idealistic oxymoron. Corporations, like all of us, should be socially responsible. Many companies and individuals are good citizens. But the purpose of corporations is not to be responsible but to make money. Just as individuals act to promote their self-interest, so too do companies. The way we limit the pursuit of individual and corporate self-interest and protect the public interest is to enact and enforce laws. Only government can accomplish that task. Regulation cannot be privatized. We can teach ethics, and most people do not need the threat of enforcement to behave ethically, but laws are designed for individuals and organizations that have not internalized the values and norms of society.

Some laws are obvious; some even have roots in the Bible, such as rules against killing and stealing. Others arose in response to the development of technology, such as traffic rules like speed limits. But what do we do when technologies are complicated and few understand them? We understand that a motor vehicle can move fast enough to be unsafe; there, rule setting is simple and obvious. But what do we know about the algorithms that social media platforms use to manipulate what we and our children are shown in their apps? Or the actions that can be taken with that newfangled technology called artificial intelligence? We just learned that once real money appeared on the horizon, the internal safety controls placed on OpenAI by a well-meaning nonprofit collapsed. This should not be surprising. It is simply evidence of the need for government to develop the technical capacity to regulate these new technologies in the public interest. This is not a private but a public responsibility.

The problem, of course, is that elected leaders and their staffs have too many areas of responsibility to develop the technical expertise needed to regulate all of these new, cutting-edge technologies. Elected leaders are not even good at understanding the old technologies that underpin our current economy. They are not engineers or experts in medical research, computer science, environmental science, or any number of other technical fields that all of us rely on every day. But to ensure that the private parties developing and using these technologies are not harming people or the planet we rely on, the public needs to hire its own experts who are not trying to make money off these technologies but instead are working to understand them in order to set up guardrails that reduce the probability of harm. This is what conservatives call the deep state, and frankly, without deep expertise, we are at the mercy of private companies and have no way to promote the public interest.

We don’t want intrusive over-regulation because we want to reap the benefits of these new technologies. But we also don’t want businesses to operate without rules and boundaries, because sometimes these new technologies can cause harm. Since the private sector pays better than the public sector, we often see government experts leaving through the revolving door for private industry. We also sometimes see government relying on private experts to explain new products, operations, or technologies because its in-house experts lack the resources to keep up. Sometimes regulatory agencies are “captured” by the very industries they are supposed to regulate. This is a far greater threat to the public interest than a power-grabbing and frankly imaginary deep state.

The cause of these problems is also the cause of the wonderful technologies that have made our lives easier and more rewarding. Transportation, communication, food supplies, entertainment, health care, and education have all been transformed by a wide range of new technologies. We want this to continue. But climate change, drug addiction, toxic waste, air and water pollution, contagious diseases, depression, suicide, cancers, and a wide variety of other negative impacts have also been caused by these same technologies. It is in the public interest to regulate these technologies to reduce the harm they create. It is unrealistic to assume that legislators will ever understand the complexity of these technologies well enough to specify the detailed rules needed to bring them under control. Broad policies can be set by law, but the detailed rules must be left to experts in administrative agencies.

As we see with artificial intelligence, new technologies will be far more complicated than current ones. Government must build scientific and technical expertise. The anti-regulatory zeal of the right wing will make this task difficult. Ever since Ronald Reagan defined government as “the problem,” Americans have been on an anti-regulatory binge. Nor does it help when progressive governments over-regulate and over-specify rules for technical issues they likewise do not understand. New technologies bring both benefits and costs, as do new rules governing those technologies. We need an adult conversation about the regulation of complex new technologies rather than the ideological discussion we seem doomed to continue.

Views and opinions expressed here are those of the authors, and do not necessarily reflect the official position of Columbia School of Professional Studies or Columbia University.
