Open-source purists, tech insiders, and many others have argued: don’t regulate artificial intelligence (AI) now. They don’t want to hurt innovation. Some have even argued there isn’t an AI industry to regulate yet.
But this ignores the history of just about every nascent industry, let alone one with the potential of AI. In this article, I will highlight three industries, which we all know and love, that went largely unregulated in their early years.
It led to disaster.
All of them arguably pose less risk than generative AI, and definitely less than future AI applications like artificial general intelligence. Current laws fail to protect us from AI. History tells us what will happen if we don’t regulate AI now.
First, some background on AI risks
Before we dive headfirst into that history, we first need to address the risks posed by generative and other forms of AI. Some people think warnings about those risks are hyperbolic. They must be overblown, right?
Not according to 400 of the top people in the AI industry. Here is a short statement they recently signed:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
People like Sam Altman of OpenAI signed it. He was joined by top executives at Microsoft and Google, in addition to numerous academics.
Ignoring or minimizing the reality of these risks is akin to the willful blindness a select few advocate with regard to climate change. Words like “risk of extinction from AI” should not be taken lightly when they come from leading experts in the field. And if you think they should, please tell me in the comments why, and what motivation those experts might have for hyperbole.
Admittedly, I think the biggest extinction risk comes from artificial general intelligence, not garden-variety AI uses, but it’s important to take all AI risks seriously. From national security to deepfakes that could proliferate and misinform, AI risks run the gamut if they go unregulated, or if we rely on existing laws that were not designed with AI in mind to do the job.
We’ve tried this before.
Do you like food?
Like technology, food touches all of our lives. Anyone who has read The Jungle by Upton Sinclair has an idea of what the industrialized farming system was like in America prior to regulation by administrative agencies like the Food and Drug Administration (FDA).
Here’s an excerpt from a book review I wrote:
“The vivid descriptions Sinclair provides of the killing process are horrific and heinous, often including descriptions of diseased, rotted meat going into sausage or canned goods (all for human consumption). The working conditions were dreadful, with streams of cattle blood often spilling onto the floor. And God forbid one of the workers injured himself in the production process. Worker’s compensation were two words that had never before been seen together in the English language.”
When industrialized farming introduced the mass slaughter of animals, companies were incentivized to do it as efficiently and as cheaply as possible. There was no central regulator checking their behavior. Nobody even knew the extent of the unsanitary conditions and what went into our food. Except for the companies themselves and their workers, of course.
When Upton Sinclair published The Jungle in the early 20th century, he blew the lid off what was happening. The average American had no idea. They just kept consuming. They appreciated cheaper products and the wider availability of meat.
Outrage ensued. The Pure Food and Drug Act and the Meat Inspection Act were passed shortly after the book’s publication. And I didn’t even touch on the adulterated and unsafe drugs that proliferated before those laws were passed.
The point is that when industrialized farming gathered steam and drastically changed the industry, the government largely did nothing. It didn’t ask what the new risks might be. It didn’t question the industry to see whether anyone had thought about them.
Consumers never questioned it either, until it was too late. Unsafe products and unsanitary conditions persisted for years before anyone did anything to stop them.
Do you like safe working conditions?
Upton Sinclair also highlighted in The Jungle the unsafe working conditions in industrialized America. With industrialization came a movement of labor from the farm to the factory. But nobody thought to question occupational safety or hazards.
They waited for problems to arise. And did they ever. Another excerpt from my review:
“I haven’t even had a chance to describe the fertilizer plant Jurgis worked at after the meat production facility, where fertilizer would cover his body, seep into his pores, and kill his senses of smell, taste, and sight by the end of a typical work day. He could not venture out into public without causing a scene of people trying to escape his pungent and foul bodily odor. You might think to tell Jurgis to find another occupation, or maybe stay in his home country, but as a nation built on immigration and given the low supply and high demand for jobs in Chicago, there were no other options for him. His life was threatened every day in Packingtown from the dangerous working conditions.”
With the New Deal came the National Labor Relations Board and, later in the 1970s, the Occupational Safety and Health Administration. These government agencies may not be perfect, but they are far better than the unregulated horrors that occurred in American workplaces prior to their existence.
Do you like regulated financial markets?
You might have heard of the Great Depression. When the U.S. stock market crashed on that fateful day in October 1929, financial markets were largely unregulated.
Publicly-traded companies were not required to tell the truth about their businesses. They could misrepresent the riskiness of their securities.
The brokers, dealers, and exchanges that bought and sold those securities weren’t required to treat investors fairly or honestly.
The Securities Act of 1933 and the Securities Exchange Act of 1934 changed all of that. The latter created the Securities and Exchange Commission, which still regulates U.S. securities markets today.
You might have heard of the Roaring Twenties. The 1920s in America, the years leading up to the Great Depression, were a boom era. A gilded age. People were making fortunes in financial markets like never before.
Nobody may have had a full understanding of the risks then, but most failed even to ask the question. The government and the public let the industry evolve until one day the wild, unregulated speculation crashed the entire U.S. economy, sending ripple effects worldwide (even in a non-digital age).
If only we had the sense to ask the right questions before October 1929.
We can ask the right questions of AI now
If we don’t ask the right (and tough) questions of AI now, we risk something far worse than what happened with food, workplace safety, and financial markets. The potential of AI is massive. It could touch every industry and every person in some way in the near future.
That vast reach makes AI far more impactful than any of the industries I’ve named, and it compounds the risk, especially once we consider something with the power of full human capabilities, like artificial general intelligence.
So for people to argue that we’re fine, that we don’t need to do anything to regulate AI now, is simply inviting unmitigated disaster. It’s failing to learn the lessons of regulatory history. It’s ignoring the same story that has played out in practically every industry (don’t forget social media!) where we wait for a cataclysmic crash before putting guardrails at the edge of the cliff.
You might say, “But we don’t know where the cliff is yet!” We may not know precisely, that’s true. But we have a general idea of the risks from national security implications, subliminal manipulation, deepfakes, identity theft and fraud, privacy, and more.
Many of these risks are not adequately covered by existing laws or regulations, and they certainly aren’t addressed uniformly at the international level.
For now, I’m hoping we can have open debates that start with the premise: we need to innovate and develop the AI industry, but we need basic rules of the road to do that safely.
If we don’t, the disasters from food, workplace safety, and financial markets will play out again. This time, however, they will be specific to AI. Which means they could be far, far worse.