Colonizing the Mind: How Future AI Devices Threaten Human Autonomy
Examining the partnership between Jony Ive and Sam Altman that few are talking about

Sam Altman of OpenAI and Jony Ive of Apple fame have announced a partnership worth $6.5 billion. They released a YouTube video that was basically an announcement for an announcement, giving little detail on what they’re planning or building. But one thing is clear — their mission is to make a future artificial intelligence (AI) device to replace our phones.
Both men expressed misgivings about smartphones, which is interesting coming from someone like Ive, who was Steve Jobs’ protégé and was instrumental in the design of iPhones, MacBooks, iPads, and more until he left Apple in 2019. Ive had been wandering in the wilderness since then, trying to find his next thing, when one of his sons showed him ChatGPT.
While I have enormous respect for Ive’s talents, the announcement that OpenAI purchased Ive’s company IO for $6.5 billion made me pause to consider the implications future AI devices will have for humanity. What will they look like? How will they function? Will they be scanning and listening to me and you continuously?
As I have written in the past, current laws fail to protect us from AI, and if anything, we are heading in the wrong direction with the current U.S. administration seeking to limit or completely outlaw reasonable oversight and regulation of AI development.
Let’s consider the possible implications that future AI devices could have if allowed to develop in a “move fast and break things” manner without any meaningful oversight.
Misgivings about smartphones, but hope for future AI devices?
When reflecting on the impact smartphones have had on society, Jony Ive said the following: “I shoulder a lot of the responsibility for what these things have brought us.” Sam Altman agreed with that sentiment, saying, “I don’t feel good about my relationship with technology right now.”
But in almost the same breath, both men have seemingly unlimited hope and optimism for future AI devices.
“We are on the brink of a new generation of technology that can make us our better selves.”
How Ive and Altman actually plan to realize this vision of “our better selves” is unclear. It’s safe to say, however, that nobody knows how AI devices will impact society, just as nobody fully anticipated the addiction, depression, and physiological problems smartphones have caused.
With the benefit of hindsight, we can assign blame to smartphones for many specific societal ills. But it’s especially difficult to get ahead of the next innovation. Both Ive and Altman believe we are on the precipice of an AI technological revolution, one they’re unsurprisingly championing, as they have placed billion-dollar bets on it succeeding, not only for society but for themselves personally.
But are they genuinely thinking about the risks that ambient AI devices pose, not only legally, but cognitively, biologically, and socially? Let’s explore what some of those risks might be, and why it’s imperative that we develop baseline rules of the road now in order to mitigate them.
Will you ever have personal autonomy again in an ambient AI device world?
The latest generation of smartphones may be terribly addictive, but they all share one common denominator: deliberate interaction. Your phone won’t turn on or open apps without you directing it to some degree. Ambient AI devices, on the other hand, fundamentally change our relationship with technology by operating continuously in our perceptual field.
Sam Altman has already tried to pursue this vision, backing the creation of an AI pin through his investment in Humane. The start-up folded not long after its pendant-like product flopped.
Although that attempt failed, the vision behind these types of products is likely to persist, given AI’s continuous need for information and data. Whether it’s a pendant, glasses, or some other ambient product, future AI devices will likely require a constant presence on your person, operating at all times (similar to how your smart speaker is always listening today).
Unlike a smart speaker, however, an ambient AI device could colonize your cognitive functioning, supplanting your observations and reasoning with its own suggestions, interpretations, and perspectives. Of course, this means that the architects of these AI products will have significant influence over how their users think, replete with their own biases.
Consider the influence products like Google and Apple Maps have had on users and their spatial reasoning abilities. How reliant are you on maps applications instead of truly absorbing the world and streets around you?
Now imagine that on steroids. Imagine AI glasses giving you signals, cues, suggestions, and interpretations on everything from nearby business reviews to conversational topics and emotional interpretations.
The risk here is not simply skill atrophy, but the replacement of authentic human judgment with corporate-designed and operated systems. How that makes us our “better selves”, as Jony Ive envisions, is anyone’s guess.
And while a philosopher like John Stuart Mill would likely argue that individuals should be free to act how they please provided they don’t harm others, the collective societal effects of AI augmentation risk fundamentally altering human reasoning and social dynamics. Individual choices to use AI devices, therefore, may begin to detrimentally affect others’ autonomy too.
Unfortunately, our personal autonomy is not the only thing at stake.
A privacy and surveillance AI nightmare
As I said in a previous essay, “Most AI tools will be fine, but a small minority could cause complete chaos.” One of those areas ripe for chaos is the infringement by both the government and private sector on our privacy. AI could supercharge the government’s ability to spy on its citizens while empowering private companies to harvest and use more data on people.
Smartphones already represented an unprecedented expansion of data collection. If Sam Altman doesn’t like his relationship with technology now, just wait until wearable AI devices monitor his biometric data, visual attention, emotional states, and micro-expressions. Wait for AI devices to transform people into “behavioral futures markets”, anticipating human actions just as a commodities trader anticipates price fluctuations in cattle futures.
How will disclosure and consent work in that brave new world without basic rules of the road about the use of AI in targeted advertising?
The privacy and surveillance state implications go far beyond advertising too. Insurance companies may adjust premiums based on a user’s detected stress patterns or risk-taking behaviors. Employers may make hiring or firing decisions based on productivity metrics derived from constant monitoring. Authoritarian governments could track and score their citizens’ actions and emotional responses (something we already see happening in China).
For a philosopher like Michel Foucault, ambient AI devices could create the ideal panopticon: a state in which we’re constantly observed by systems that remain invisible and unknowable to us. Foucault viewed “panoptic power” as automatizing and disindividualizing power so that no single individual wields or commands it.
In many ways, ambient AI devices are the end state of panoptic power: power exercised through algorithms and systems that may appear neutral and objective, making resistance difficult because there is no clear target to oppose. In effect, future AI devices could normalize surveillance, fundamentally shifting how power operates in society, with people governing their lives according to algorithmic logic and norms they don’t even perceive.
These are only the primary risks from future AI devices, but unintended consequences abound
Nobody knew all of the consequences of social media and smartphones back in 2008, but had we developed basic, principles-based rules of the road, we could have avoided, or at least mitigated, scandals like Cambridge Analytica. Perhaps Facebook would never have been permitted to buy Instagram. Maybe parental controls on social media sites would have been required much sooner. Schools might even have restricted phone usage during the school day, with phone lockers becoming standard practice.
We understand and appreciate some of the risks from social media and smartphones now, but only because we’re reacting to more than a decade of societal, cultural, and personal consequences, some of which were foreseeable. As with any industry, if you don’t have a basic control framework in place, corporate power will evolve without guardrails, as it did with adulterated foods, unsafe and unfair labor conditions, and fraudulent financial practices.
We can get ahead of this for future AI devices by creating basic minimum standards: disclosing when someone is interacting with AI; requiring informed consent before people engage with AI; safeguarding user data so people don’t turn into cattle futures; and restricting government uses of AI so we don’t end up in 1984.
We can also anticipate the economic displacement AI will cause and promote education in the new skills an AI future will require. We should emphasize what remains uniquely valuable to the human experience: storytelling, creativity, and social connection. And we cannot lose our shared reality, for that risks the complete unraveling of human society as we know it, with people retreating to their personally curated AI devices.
So while I commend the efforts of Sam Altman and Jony Ive, and I hope future AI devices “make us our better selves,” I don’t trust that they will without basic guardrails. Human history is filled with well-intentioned people driving immature industries off cliffs and wreaking havoc on society, from meatpacking to social media. Corporate power must be checked because its interests aren’t always aligned with the public interest, as we learned from the robber barons of America’s Gilded Age.
While we should not over-regulate or overreact to AI doomerism, we cannot assume that the altruistic commentary from Altman and Ive will safeguard society from the risks inherent in future AI devices. Notice that they didn’t mention any of these threats in their optimistic video announcing the partnership. But if their relationships with technology already trouble them, controlling the AI genie before it bursts from its bottle should be top of mind.
Elsewhere…
If you enjoyed the short video on Grant’s Tomb, check out the full movie that I posted on YouTube:
The Republican Budget Bill has been making headlines, but one part of the bill has not received enough press — the proposed elimination of the Public Company Accounting Oversight Board, which was created in the wake of the Enron and WorldCom financial frauds. I talk about the consequences this could have in this video:
Thank you for reading, watching, and subscribing. I hope this newsletter makes you smarter, or at the very least, makes you think.