A little over a year ago I wrote that “current laws fail to protect us from AI.” I argued that it’s not the major, science fiction threats of artificial intelligence (AI) that should trouble us immediately, but the threats posed by small AI tools and AI-enabled schemes.
A recent U.S. case involving the illegal streaming of AI music has illustrated these points better than I could have imagined. It reveals how AI can be used at scale to manipulate platforms in the creator economy.
Before we get to the case, consider this arbitrage for context:
Users can pay Spotify $12 per month for a premium account (unlimited music)
Spotify pays artists roughly $0.005 per stream (half a cent!) after pooling user payments and keeping about a quarter of that revenue for itself
Spotify then divides payments revenue among musicians based on streaming data
If you’re a musician on Spotify with a premium account, you could stream your own music nonstop, 24 hours per day, so that all of your $12 flows back to you, plus a share of the larger pool (not everyone who pays for premium streams music nonstop, so their leftover percentage goes into a larger, shared pool)
If your music averages 2 minutes per song, you could listen to 30 songs per hour, which equals 720 streams per day and 21,600 streams per month
Assuming these streams generate $108 at ~$0.005 per stream, you conceivably could make $96 of pure profit per month just by listening to your own music (nonstop!)
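The arithmetic in the bullets above can be sketched as a quick back-of-the-envelope calculation. The constants here (a 30-day month, 2-minute songs, a ~$0.005 payout rate) are the article’s assumptions, not official Spotify figures:

```python
# Back-of-the-envelope arbitrage math, using the assumptions above:
# a 30-day month, 2-minute songs, and ~$0.005 in royalties per stream.
PREMIUM_COST = 12.00       # monthly premium subscription price
PAYOUT_PER_STREAM = 0.005  # approximate royalty per stream (half a cent)
SONG_MINUTES = 2           # average song length

streams_per_hour = 60 // SONG_MINUTES      # 30 songs per hour
streams_per_day = streams_per_hour * 24    # 720 streams per day
streams_per_month = streams_per_day * 30   # 21,600 streams per month

royalties = streams_per_month * PAYOUT_PER_STREAM  # ~$108 in royalties
profit = royalties - PREMIUM_COST                  # ~$96 after the subscription fee

print(f"{streams_per_month} streams/month -> "
      f"${royalties:.2f} royalties, ${profit:.2f} profit")
```

Actual per-stream payouts vary by platform, account type, and country, so the real margin would differ, but the incentive structure is the same.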
Spotify and other streaming platforms obviously want to prevent this type of behavior.
Streaming platforms generally include clauses in their Terms of Service that prohibit artificially increasing play counts, among other attempts to manipulate any revenue sharing algorithm. They also have anti-fraud tools to try to detect and prevent nefarious behavior.
And while platforms like Spotify want to deter the individual musician trying to game the system, they’re most concerned with someone who tries to do this at scale. Enter defendant Michael Smith.
When it’s illegal to stream AI music
Smith allegedly took the following actions to carry out his scheme:
Purchased hundreds of thousands of AI-generated songs and uploaded them to streaming platforms like Spotify
Created thousands of Spotify premium accounts, and other similar account types on other platforms
Used "bots" (automated programs) to stream his AI-generated songs billions of times from the premium accounts he created
In addition to these steps, Smith made considerable efforts to mask his activities so Spotify and others wouldn’t detect them.
Smith contracted with a financial services company to provide him with thousands of debit cards so there were different payment sources on each account. He used VPNs to hide his IP address and to operate numerous virtual computers simultaneously.
But arguably the most important part of the scheme was relying on AI-generated music. By using AI music, Smith was able to scale his scheme to a degree that made it immensely profitable. He was able to upload hundreds of thousands of songs and spread out the streams so as not to attract too much attention (although his prolific uploading eventually aroused suspicion).
There is no way someone would have been able to manually create hundreds of thousands of songs in just a few years. The use of artificial intelligence was therefore crucial to scale this scheme quickly.
In a February 2024 email, Smith bragged that his "existing music has generated at this point over 4 billion streams and $12 million in royalties since 2019."
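As a rough sanity check, the figures Smith cited imply a blended payout rate below the ~$0.005 headline number used earlier, which is plausible given that rates vary across platforms and account types. This is my own arithmetic on the quoted numbers, not a figure from the indictment:

```python
# Implied per-stream payout from the figures quoted in Smith's email:
# "over 4 billion streams and $12 million in royalties since 2019."
streams = 4_000_000_000   # over 4 billion streams (lower bound)
royalties = 12_000_000    # $12 million in royalties

implied_rate = royalties / streams  # blended rate across all platforms
print(f"Implied payout: ${implied_rate:.4f} per stream")
```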
The distribution company that Smith used to connect with platforms like Spotify eventually caught on and accused him of “streaming fraud.” Smith responded with this and other similar statements denying any wrongdoing:
“I have done NOTHING to artificially inflate the streams on my two albums... . I have not a done a thing to illegally stream my music . . . . I have not violated the terms of my agreement with you at all and you have provided no proof either. I have not illegally streamed my music."
Not only is streaming manipulation prohibited by streaming platforms like Spotify under their Terms of Service, but it’s also generally prohibited in distributor agreements, which musicians typically use to help get their music on platforms.
Provided that the Department of Justice can substantiate the evidence against Smith cited in the indictment, it’s pretty clear that Smith violated the platforms’ Terms of Service and his distribution agreements.
But did Smith really commit a crime?
Illegal activity or creative arbitrage?
It’s important to note that Spotify does not completely ban AI-generated music. While deepfakes and impersonations are not acceptable, there’s an entire category of “inspired by” AI music that’s permitted in the murkiest way possible. Nobody - not even the streaming platforms - can define what “inspired by” means, and there are no laws or regulations to provide guidance.
So while it was arguably not a problem for Smith to have his hundreds of thousands of AI songs on the platform, the way he streamed them violated the platforms’ Terms of Service. But did the streaming actually break any laws?
There’s no streaming fraud statute, regulation, or other type of law that addresses this specific activity. Instead, the Justice Department cited the statute it almost always uses in these situations – wire fraud. The problem is that the wire fraud statute has not been updated since the era of rotary telephones.
And while you may think it should be obvious to people like Michael Smith that streaming fraud can and should constitute wire fraud, it clearly isn’t. Similar disputes keep arising between people and the machines, algorithms, and AI systems they interact with. Social media tends to add fuel to the fire.
A few weeks ago “Chase Bank glitch” was trending on TikTok because some enterprising creators had discovered a new form of check fraud. Back in the day, check fraud involved (i) writing a fake check, and (ii) deceiving someone (an actual human) into cashing it.
None of these TikTok creators, however, were tricking actual people into cashing their fake checks. They simply wrote a fake check to themselves, put it in a Chase ATM, and the machine gave them money. Surely that’s the machine’s fault, right?
While I doubt the Justice Department will have much trouble prosecuting Michael Smith and any of the “Chase Bank glitch” TikTok creators, a creative defense lawyer could give them some trouble.
Wire fraud requires “intentionally deceiving or defrauding someone.” In both the case of music streaming and the Chase Bank glitch, it’s at least debatable that no human person was defrauded.
Spotify and other streaming platforms had major control gaps that allowed Michael Smith’s scheme to go undetected for years. From Smith’s perspective, he was creatively arbitraging inefficiencies that were present on the platform, just as any trader would do in financial markets.
Similar arguments could be made in the Chase Bank glitch. The Chase ATMs gave money to people who had nothing close to the account balances they received in cash. These “broken” ATMs then went viral on social media.
This is not to excuse or dismiss the behavior of Smith or these TikTok creators, but to illustrate the point that perhaps we need laws that address 21st century problems instead of relying on statutes that were last updated in the 1950s.
Clearer notice of permitted and prohibited activity when engaging with technology may not stop all criminals. Bad actors will always find a way. But it will give bad actors one less excuse or defense when attempting to hold them accountable.
Manipulating algorithms (or ATMs) should be illegal, but the laws need to be clear
In the Spotify example, artists and streaming platforms were harmed. Artists who otherwise could have earned money from a revenue sharing pool were denied that opportunity because of the thousands of bots Michael Smith was allegedly running from his basement.
Smith, however, is likely to point the finger back at Spotify. Blame their control environment. Criticize how they structure revenue sharing with artists. Argue that they created the conditions that allowed this to happen.
While I don’t think those are winning arguments, they put Spotify and other streaming platforms in a difficult spot. There are no rules of the road that they all must follow. No guiding principles from any form of government or regulator to help them promote innovation but also maintain trust and safety.
The Department of Justice just shows up occasionally after chaos has ensued and points to wire fraud.
It should not be hard to develop simple, principle-driven rules that make it a crime to attempt to manipulate any sort of algorithm, whether on a streaming platform, a writing platform (like Medium!), or any other platform in the creator economy.
The lack of regulation in the creator space today feels like Wall Street in the 1920s, before everything came crashing down in 1929 and we finally realized we could do better.
While the FTC has made some commendable efforts with regard to appropriate disclosures and prohibiting the purchase of fake followers, until we have a better anti-fraud regime that applies to creators and sets minimum standards for platforms, the Wild West creator economy will become more chaotic. Not less.
We need simple laws that make it abundantly clear – attempting to defraud technology, especially with the use of AI at scale, can be criminal.
Unfortunately a scheme like the one Michael Smith allegedly executed is probably not enough to inspire action. It will likely need to be a scandal that’s far worse.