A Retrospective on the Voiceverse Plagiarism Scandal


January 2022 was a bizarre time. The COVID-19 lockdowns were ending, and life was returning to its new normal. We were, almost universally, in a period of transition. 

The internet was in a very similar place. We were still 10 months away from the public launch of ChatGPT, but the heady days of NFTs were (largely) behind us. Though AI was definitely a buzzword, NFTs were still the internet pariah of the moment, raising concerns about environmental impact, fraud and a myriad of other issues.

So it’s appropriate that, during that month, a plagiarism scandal would arise that combined the two elements. 

That’s precisely what the Voiceverse plagiarism scandal did.

To make matters even more impressive, Voiceverse was backed by one of the best-known NFT groups, which persuaded one of the most famous video game voice actors to participate.

This made it not just a significant story for NFT and AI fans, but also for the general public. It “broke containment” in a way few niche plagiarism scandals do. 

As such, it’s worth looking back at this story, including what happened and what it can tell us about the future.

Understanding the Voiceverse Plagiarism Scandal

Voiceverse was (and as of this writing still is) a startup founded by the Bored Ape Yacht Club, one of the most famous NFT collections. The idea is simple: using a short audio sample, you can create a custom text-to-speech voice that you can own as an NFT and license others to use.

The idea was always going to be controversial, especially since there was no way to verify that the user owned the voice they were using. 

However, that controversy escalated further when, on January 14, 2022, video game voice actor Troy Baker announced that he was partnering with Voiceverse to “bring new tools to new creators to make new things, and allow everyone a chance to own & invest in the IPs they create.”

Just nine hours later, Baker was walking some of that back. He described his earlier tone as “a bit antagonistic” and said that he is “just a storyteller out here trying to tell my story to whomever will hear…”

While that would have been bad enough, things quickly went from bad to worse for Voiceverse. 

Fifteen, the pseudonymous creator of 15.ai, alleged that the Voiceverse promotion video was made using their system. They alleged that Voiceverse further altered the speech in a bid to make it unrecognizable. 

Fifteen launched 15.ai in 2020, offering a simple, non-commercial tool for converting text to speech. Currently, you can use 15.ai to generate short voice samples in the voices of various My Little Pony characters.

Initially, Voiceverse denied any copying, but later apologized, saying that their “marketing team” had used it without proper credit. 

Fifteen’s response was simply, “Go fuck yourself.”

Fifteen’s work has been widely credited with paving the way for modern voice AI systems. However, Fifteen has been adamant that the service is strictly non-commercial, a restriction that includes NFTs.

This made Voiceverse’s use not just an act of plagiarism, but an affront to Fifteen’s stated beliefs. 

Voiceverse has largely fizzled out following the scandal. However, in May 2024, Voiceverse’s parent company, Lovo, was sued by a group of voice actors. That case is ongoing, with the judge allowing the claims to move forward in July of this year.

However, that still leaves many interesting questions about Voiceverse’s misuse of 15.ai.

When AI Plagiarizes AI

Three years after the Voiceverse plagiarism scandal, we would have a strangely similar war of words, this time with much larger players.

The Chinese company DeepSeek released its R1 and V3 models. These models briefly shook the AI landscape, delivering results comparable to those of US-based companies at a fraction of the cost.

However, OpenAI accused DeepSeek of “distilling” its models. As others pointed out, “distilling” is precisely what AIs do to human works to create their models in the first place. As such, much of the response to OpenAI’s allegations was more schadenfreude than sympathy.

But Fifteen’s story was different for several reasons. First, AI was still much newer and had not yet become the focus of so much vitriol. Second, Fifteen is a solo developer who has insisted that their work be used only for non-commercial purposes. Finally, Voiceverse didn’t simply distill Fifteen’s model; it used the 15.ai service directly, doing no original work in the process.

Still, it’s strange to think that, in January 2022, we had a plagiarism controversy not over whether a work was AI or human-created, but about which AI system created it.

While this may be one of the first such stories, it will definitely not be the last. The DeepSeek story showed us that AI companies, at least theoretically, can take shortcuts using each other’s work. In a race as cutthroat as the AI one, any shortcut that can be exploited likely will be.

The growing question isn’t whether humans are using AI systems to plagiarize or even if AIs are plagiarizing humans. It’s whether they are plagiarizing each other.

That is a growing problem that we have only seen the very beginnings of. 

Bottom Line

If this scandal had happened in 2025, things would have gone very differently. The focus would have been on the commercial AI company using the work of a free, solo-developed project to do what it should have been able to do easily. 

If you’re going to purport to allow users to generate thousands of voices, you should at least be able to develop the one voice in your promotion video. That seems obvious. Voiceverse didn’t just plagiarize; it failed at the most basic task it was supposed to do. 

But imagine if this hadn’t been a solo developer, and Voiceverse had instead used OpenAI or a similar service. There would almost certainly have been a lawsuit, along with technical measures put in place to prevent future misuse.

But that is a story that I’ve seen over and over again these past 20 years. Many people dismiss copyright and/or plagiarism issues until it is their work that is taken. We’re seeing that with AI companies, too. These companies defend their right to train on whatever human-created content they want, but become upset when their work is misused.

While I don’t put Fifteen in the same category as OpenAI for a myriad of reasons, this case was still an interesting preview of what was and still is to come.

We already know that AI is being trained on AI-generated content, much to its detriment. What happens when AI companies use competitors’ creations to fake their capabilities? Has it already happened? The black box nature of AI makes it difficult to know.

While the Voiceverse case may get lumped in with the other NFT scandals of that time, it’s the AI component that makes it more relevant. In that regard, it may be predicting a future in which AI companies are suddenly very interested in who has permission to do what. 

However, that doesn’t mean they’ll care about what human creators think, just other AI companies. 
