Google’s Lyria 3: Why Some Musicians Embrace It While Others Fight It

Last week, Google made its AI music generator, Lyria 3, available to everyone through the Gemini app. Within days, the music world split into two camps: artists experimenting with it as a creative tool, and artists warning that it threatens human creativity itself.

The divide isn’t theoretical. Real musicians are using Lyria 3 in production, while hundreds of other artists are actively fighting against AI music generation. Understanding both perspectives reveals what’s actually at stake.

The Artists Using It

Three-time Grammy-winning rapper Wyclef Jean used Lyria 3 and Google’s Music AI Sandbox on his recent song “Back From Abu Dhabi.” His take on the technology is surprisingly nuanced.

“This is not just a machine where you’re clicking a button a hundred times, and then you’re done,” Wyclef explains. “What I want everybody to understand […] is you’re in the era where the human has to be the most creative. There’s one thing that you have over the AI: a soul. And there’s one thing that AI has over you: the infinite information.”

For Wyclef, the appeal is speed and exploration. He describes the tools as “capable of speeding up the process of what’s in my head, getting it out. You’re able to move at light speed with your creativity.” He compares it to “digging in the crates” – going through record stores to find sounds to sample. “So right now, we’re digging in the infinite crate. It’s endless.”

He’s not alone. Several musicians have tested these tools. Isabella Kensington found the “Extend” feature helpful for songwriting and trying new ideas. The Range described it as helping overcome writer’s block. Adrie expressed caution about AI generally but sees these tools opening up new experimental avenues. Sidecar Tommy credited them with “speeding up production and sparking complex orchestral ideas from simple beginnings.”

The pattern among these artists: they see Lyria as a collaborator that accelerates workflows and enables experimentation, not a replacement for human creativity.

The Artists Fighting It

The counterpoint is equally vocal and significantly larger. Hundreds of musicians, including stars like Billie Eilish, Katy Perry, and Jon Bon Jovi, signed an open letter in 2024 calling on tech companies not to undermine human creativity with AI music generation tools.

Their concerns center on training data and economic impact. AI music models learn from existing music, raising questions about whether artists consented to their work being used for training. More fundamentally, they worry that flooding the market with AI-generated music devalues human artistry and makes it harder for working musicians to earn a living.

Critics are calling the output “musical slop” and questioning the copyright implications of the training data. The criticism isn’t just about quality—it’s about the entire model of training AI on copyrighted work without explicit permission or compensation.

What Lyria 3 Actually Does

The technology behind the debate is now accessible to anyone. Lyria 3 is available to all Gemini users 18 and older in eight languages: English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese. You can create 30-second tracks from text descriptions or uploaded images.

It generates lyrics automatically based on your prompt and allows control over style, vocals, and tempo. All tracks include SynthID watermarking to identify AI-generated content.

On the professional side, ProducerAI, a music-creation tool backed by The Chainsmokers, recently joined Google Labs, using Lyria 3 to offer granular control over tempo and time-aligned lyrics. This positions Lyria not just as a consumer toy but as a tool for serious music production.

The Real Question

The divide between artists like Wyclef who embrace these tools and artists like Billie Eilish who oppose them isn’t about technical capability—it’s about philosophy and economics.

Pro-AI musicians see infinite creative possibilities and accelerated workflows. Anti-AI musicians see their work being used without permission to train systems that could eventually replace them. Both perspectives are valid, and they’re not mutually exclusive.

Wyclef’s emphasis on humans bringing “soul” while AI brings “infinite information” suggests a potential middle ground: AI as a tool that amplifies human creativity rather than replaces it. But that only works if the economic model doesn’t destroy musicians’ livelihoods in the process.

What This Means Going Forward

Google’s decision to make Lyria 3 widely available through Gemini—not locked behind professional tools or expensive subscriptions—forces this debate into the mainstream. It’s no longer theoretical. Anyone can generate music with AI right now.

The watermarking through SynthID is Google’s attempt to maintain transparency, ensuring AI-generated music can be identified. But watermarking doesn’t address the fundamental concerns about training data or economic impact.

The music industry is facing a question that won’t be resolved through technology alone: can AI music generation coexist with human artistry in a way that’s economically viable for working musicians? Or will the “infinite crate” that Wyclef celebrates eventually replace the human diggers entirely?

Right now, both futures seem possible. Which one we get depends on decisions about copyright, compensation, and how we value human creativity—decisions that go far beyond what AI can do.
