“Power tends to corrupt and absolute power corrupts absolutely.” This famous quote from Lord Acton has taken on new relevance in the development of artificial intelligence. Whoever holds the keys to power will have unprecedented control over humanity. One does not have to stroll too far back in history to understand the effects of massive power imbalances. The Gnostics provide an important case study.
I recently read the book, “The Gnostic Gospels” by Elaine Pagels. You can read my review here. The Gnostics have always interested me, especially as someone who was raised Catholic and went through religious education most of my life.
I was never taught about the Gnostics.
I’m not sure if this was an oversight or a deliberate attempt to downplay the diversity of early Christianity. Regardless, the orthodox church, meaning the organized and hierarchical churches of early Christianity, labeled Gnostics as heretics and expelled them from society around the second and third centuries.
Their books were burned, but a few hidden copies were discovered in 1945 near Nag Hammadi, Egypt. They were translated and researched by an international team of scholars, one of whom, Elaine Pagels, wrote The Gnostic Gospels following this work.
It’s a short book that packs a punch, especially for someone who thought he knew about Christianity. The fact that entire gospels, theologies, and philosophies were expunged by the orthodox church, in an effort to erase them from history, is outrageous, but not surprising.
This raises the question of what other historical narratives have been shaped or distorted by those in power.
The Gnostics are a key element of how Christianity was formed and how it evolved in its early years. Yet we never hear about them because by most accounts, the orthodox church did not (and still does not) want us to.
Gnosticism diverged from orthodox Christianity in significant ways. It emphasized transcending the material world through individual spiritual enlightenment, in contrast to the orthodox emphasis on literal interpretations of events like the crucifixion and resurrection, in which the Gnostics saw only symbolic and allegorical significance.
And that’s just the tip of the iceberg.
So naturally, when a competing group of Christians came along with antithetical ideas and doctrine, the orthodox church viewed them as threats. They tried to silence them. Expunge them from the history books.
Imagine if humans had lost all evidence of the Gnostics. We would have an incomplete understanding of how Christianity was formed and came to be the most dominant religion in the world.
Now imagine a company developing artificial intelligence doing the same thing. Expunging from their models certain data or information. Tipping the weights in favor of their own biases, whether intentionally or not. The question is not whether it will happen, but how we can prevent it from occurring.
Artificial intelligence must develop in a decentralized way
Too much of the early internet was allowed to consolidate. The big winners bought their competition instead of competing with them.
Facebook bought Instagram.
Google bought YouTube.
And those are just two examples.
So much of the modern internet is dominated by gatekeepers and giant platforms. Reddit’s recent offering was the first big tech IPO in nearly a decade, which should tell you something about the health of internet tech innovation.
We must learn from history and prevent the centralization of power in AI development.
The government must scrutinize every merger and acquisition, permitting only those with obvious procompetitive benefits that far outweigh anticompetitive risks. Otherwise, promising upstarts like Anthropic will be bought by OpenAI (Microsoft) and Google.
Some may argue that decentralization will lead to excessive fragmentation, coordination problems, and security risks.
Admittedly, fragmentation may risk insufficient resourcing (e.g., getting enough compute to operate), but that’s far outweighed by the risks posed by power too consolidated in the hands of a few “AI elites.”
Coordination can be addressed by the free market as any coordination need will be met by a new service provider.
Security risks likely require government regulation. We need rules of the road on what’s required to go live with AI tools or use cases. Perhaps that means companies need a license or to at least demonstrate basic security standards.
The alternative to AI decentralization is the consolidation we have today with internet tech companies. It has created fewer options for consumers and an imbalance of power, significantly favoring one or two companies.
In the AI industry, this power will corrupt, especially given the potential impact AI use cases could have on society. Not just the generative AI tools (like ChatGPT) that we are more familiar with today, but future developments like artificial general intelligence.
No one person or company should own artificial general intelligence
It’s like the Catholic Church owning all of Christianity. They would get to dictate everything: doctrine, rituals, customs, and church policy (including whether women can be clergy).
Groups they disagree with or perceive as threats like the Gnostics? Goodbye.
The same will be true if one or two big tech players are allowed to dominate technology with the potential of artificial general intelligence. Basically, artificial humans.
They will control how AI acts, what it communicates, and how it handles ethical dilemmas. We cannot have even a handful of companies dictating the technical makeup of all artificial humans.
With the most consequential technology created since the atomic bomb, we must have diversity in the innovation, development, and production of artificial general intelligence. Otherwise, certain parts of history and modern society will be excluded or misrepresented, intentionally or not.
The Gnostics weren’t the first expunged group, and they won’t be the last.
People should be able to customize their AI experience, but truth should be presented as much as possible
My current ideal for a future AI world is one where people can customize their own AI experience. Maybe everyone runs their own model. If someone wants to live in blissful ignorance, that’s on them. Complete freedom to choose.
With a major caveat.
There should always be an option for further reading or discovery. There should also be a requirement for a “Community Notes” feature (one of the few things X does well) to correct the record based on community voting and quality sources.
This will get us as close to the truth as possible without letting technology companies act as the arbiters of that truth.
The recent case of Google’s “woke” Gemini illustrates just how difficult this challenge is.
Some people may want pictures of black founding fathers, but they should also be pointed to the historical fact that none of them were black. Some people may want information to support their “flat Earth” theory, but they should also be pointed to basic physics and the overwhelming evidence that the Earth is round.
Even with artificial intelligence, humanity should be pushed as close to the truth as possible. No group, person, or system of thought or belief should be at risk of extinction like the Gnostics were centuries ago. Their record, at a minimum, should live on for people to consume and study should they choose.
But if we allow too much AI centralization, just as orthodox Christianity centralized in the second and third centuries, groups like the Gnostics and unpopular beliefs will be at risk of extinction.
If we don’t get this right with AI, we could have many more Gnostic case studies in the future.
We should all be pushing towards a more decentralized AI future powered by the Gnostic pursuit of self-knowledge.
My writing & AI news
Why You Should Care How Andrew Huberman Treats Women (he’s more than a rockstar neuroscientist to many)
My Religion Lied To Me: Review The Gnostic Gospels By Elaine Pagels (this book will inspire future Gnostic self-knowledge)
Impressive filmmaking with OpenAI’s Sora, but this went unsaid: who credits the original source material that trains this powerful model and tool, and how? And who compensates the creators and rights holders who own the copyrights to that training material?
This guy got engaged using AI to propose to his girlfriend who speaks a different language
Claude 3 Opus (the model developed by Anthropic) is currently the best model on the market, beating GPT-4 (the latest version from OpenAI). Pretty impressive from this young upstart company.
Microsoft and OpenAI are reportedly working on a $100 billion supercomputer called Stargate (expected around 2028 - and the big question: will Microsoft start competing with NVIDIA on chips?).
Apple announced its WWDC conference and said it will be (A)bsolutely (I)ncredible - it’s no secret what they plan to discuss! I’m hopeful because Apple needs some good news.