Reflections on the WGA Strike, AI and Where We Go from Here

By John Lopez

Just over a year ago, I rolled up to the Writers Guild of America’s picket lines in front of Disney studios in Burbank, grabbed a blank sign, and wracked my brain for a good joke. Whenever we screenwriters strike, we know our picket signs must be good. We’re professional entertainers; expectations are high. But for me there was an added pressure, and I was absolutely feeling it.

You see, I’d helped with the WGA’s AI working group, and as I researched the heady world of AI, I worried it was a complex, thorny topic that risked being left on the negotiating table as bigger issues took center stage. In 2023 both screenwriters’ and actors’ labor contracts with the Hollywood studios were up for renegotiation, and massive shifts in the entertainment industry due to the rise of Netflix-style streaming had led to a crisis. Many of us saw our wages and working conditions take a nosedive. What seemed like a dream job to outsiders was becoming at best gig work, at worst unsustainable. And this was all before OpenAI decided to spring ChatGPT on the world, kicking off an AI craze that grows to this day.

A person holding a demonstration sign reading “Writers Guild on STRIKE! ChatGPT doesn’t have Childhood Trauma.”

Fortunately, even though ChatGPT was not even a year old, the WGA Board decided to include AI in its negotiations. Yet if the membership didn’t get the issue, it wouldn’t be fought for. So I figured I’d write an AI joke as a conversation starter; then I could share what I knew with my fellow striking writers and get everyone’s thoughts about AI in our industry. In other words, this had better be one witty sign. But as I sweated under Burbank’s relentless heat, drafting and re-drafting my one-liner, a young comedy writer strolled past with her sign: “ChatGPT doesn’t have Childhood Trauma.” And I smiled.

I didn’t need to worry. Our members got it.

It makes sense: a writer’s job is to think about the future. Future episodes, future seasons, future everything. I know the underlying tech behind AI is impressive, that data science is a powerful tool, and that so much raw computational power holds the potential to do so much good.

But the key word is “potential.” Because I could also see AI was being unleashed in an exploitative, extractive fashion without compensation, consultation, or consent. Without anyone standing up for creatives, I feared the forces of capital powering the nascent AI industry would steamroll any basic considerations of fairness or justice, heedless of the consequences.

Sadly, the year that’s followed has proved my fear more than justified.

If anything, OpenAI trying to steal Scarlett Johansson’s voice like Ursula the Sea Witch is just the tip of the iceberg. Amazon’s online bookstore is awash in derivative, chatbot-generated gunk that’s taking real money away from real writers, often with knock-offs of their work. Facebook, and the internet in general, is drowning in surreal AI-generated misinformation, while actual illustrators, journalists, and artists see their incomes dry up as scammers weaponize their own styles against them. Finally, Sony, the world’s largest music publisher, has sent cease-and-desist orders to the creators of music apps that “generate” near-identical knock-offs of Mariah Carey’s “All I Want for Christmas Is You” when users prompt them for a generic Christmas song.

And this is to say nothing of the issues of bias, non-consensual deepfake pornography, or privacy and digital surveillance. I could go on ad nauseam: the infinite playlist of AI-generated harms that should be (or probably is) illegal has overwhelmed even my hyper-obsessive attention span. At this point, I think it’s less useful to call out every single wrong than to think about what’s driving the deluge.

In my opinion, it’s the ethos of the financial gatekeepers guiding Silicon Valley’s vast hoard of resources. It’s an ethos with an almost fundamentalist faith that “Technology” is unreservedly synonymous with progress; therefore, all must be permitted in Its Name. It’s an ethos that sweeps negative externalities under the rug or, more likely, pushes them onto the backs of the broader public. Ultimately, it’s an ethos that views humans as a means to an end, instead of an end in themselves: one that reduces individual life and dignity to the data it produces and seeks to extract that data with as much care as a North Dakota wildcatter breaks shale to extract oil.

But the shale, here, is us. The AI industrialists have become the frackers of our modern world.

Of course, it can be hard to feel the gravity and urgency of this because shale is tangible. Rocks are firm, physical objects, while “data” is an abstraction, one that often obscures the moral dimensions behind it. A Hollywood screenwriter may seem a world away from the laborer in Kenya whose reinforcement learning from human feedback (RLHF) work makes ChatGPT coherent, but we are both data workers. After all, “data” is merely entropy unless it is information about something. But it is people who, through their sweat, time, and passion, give data its context – making it about something – so that it becomes useful for others. At the end of the day, we give the data its meaning. At the end of the day, Data is Labor.

The tech giants – Google, Meta, Microsoft, Amazon – know this. They’d already found an opportunity to exploit humanity’s evolutionary-adaptive tendency to share useful information freely by monetizing the work of the philanthropic individuals who tend the internet for the public good. (And, of course, they’ve perfected attention-locking social media algorithms, turning us into screen-swiping addicts with a ruthlessness that would make Big Tobacco blush.)

They realized the advertising industry (and its money) would shift to these new semi-public forums the internet enabled. I say “semi-public” because websites like Reddit are, of course, not actually owned by their users. The forum may have been public, but the digital land it was built on was owned by those with the unilateral power to change any initially benevolent rules whenever it suited them. (Just look at the ownership overlap between Reddit and OpenAI, for example.)

But with AI image generators and Large Language Models, these virtual monopolies saw an opportunity to profit again off that free labor: to distill the work of others by adding a dose of computing power and re-sell that value at the expense of the communities (artists, workers, creators) who generated it. It’s fundamentally short-sighted because it amounts to a slash-and-burn approach, one that threatens the information ecosystem with collapse.

However, that’s a long-term problem that companies focused on quarterly earnings reports have little interest in tackling. That this may be illegal, and deeply unpopular, doesn’t seem like much of a burden because these industries have so much excess cash they feel confident they can effectively lobby away any actual regulation that would impact them. 

And while vague promises of Universal Basic Income (UBI) are waved about by the likes of Sam Altman to justify this approach, those mostly serve to distract from the underlying dynamics. There is nothing more appealing to those who wish to grow fortunes of passive income than free labor. That’s why the well-intentioned doctrine of “open source” has been weaponized at the expense of the collective labor market. 

That this obfuscation is seen as not only permissible but laudable results from an attitude so fully embedded in the AI industry that its leading company even weaponizes it against its own employees. Despite the cries of artists that their work has been taken, what truly upset Silicon Valley about OpenAI was the revelation that Sam Altman used exploitative language in exit agreements between departing staff and the company: say anything bad about us after you leave, and we can claw back your vested equity, i.e., the majority of your compensation.

Fortunately, after a bruising strike, the WGA gained some concessions and helped shift the narrative to the importance of protecting human creativity. While Hollywood is still suffering many other problems, we know for the future that “writing film and television” will require a human being – multiple human beings, in fact. While those gains are far from complete, the experience of writers, powered by collective action, shows alternative paths are possible.

But the cohesion and purpose that defined the WGA don’t come easily. And, even as the public’s anxieties about AI show up in poll after poll, C-suiters are rushing headlong into an AI boom they believe will gut labor costs and turbocharge shareholder profits. Just look at the recent deals OpenAI has inked with the publishers of the Wall Street Journal, the Atlantic, and Vox Media, despite their own journalists’ extensive reporting on the questionable ethics and practices of those building AI.

It’s hard to imagine profits flowing freely down to the journalists whose work may well be used to replace them. Without consistent push-back (from, cough, newsroom unions), AI will divert the lion’s share of value to a rentier class of shareholders and executives. Even the Godfather of AI, Geoffrey Hinton, thinks the ultimate outcome of AI will be to increase income inequality and make the rich far richer, while the rest of us wait for this supposed UBI that executives love to describe in theory while doing nothing to make it a reality.

Even then, UBI may be at best a band-aid. As MIT economist Daron Acemoglu has pointed out, UBI is a nice slogan, but its details can make it poor policy: a blunt instrument that might do little to fix the harms of the current AI approach or compensate those whose livelihoods are lost to intellectual property theft. (As every screenwriter knows, the devil is always in the details.) I’m hardly against free money, but if Hollywood’s taught me anything, it’s that there’s no such thing as a free lunch. And I wonder if the flipside of Silicon Valley’s UBI vision is a social stratification that takes us well past the excesses of the Gilded Age into a weird, digital neo-feudalism.

Technology absolutely can and should lead to progress: but progress is not a given. If anything, the blind faith of boosters can lead to expensive mistakes that society comes to regret. While AI’s acolytes love to compare its revolutionary potential to nuclear power, electricity, or fire, what I see currently happening reminds me most of the introduction of the automobile.

Of course, everyone loves the fact that the trip between L.A. and San Francisco now takes five hours instead of five days. But the car’s transformative effects were hardly an unalloyed good. Traffic fatalities surged before seat belts were instituted; American cities were reconfigured around traffic-choked highways, often at the expense of working-class neighborhoods; and the spike in pollution turbocharged climate change, an existential risk we are far from solving today.  

But at the time, the iron-willed boosterism of men like Robert Moses and corporations like General Motors could not be countered. In my home city of Los Angeles, our world-famous public transportation system was scrapped (the “Red Cars” of Who Framed Roger Rabbit fame). When voters were asked if they wanted the county to purchase the system and use it to serve a growing metropolis, they rejected the opportunity. Now, L.A.’s Metro has spent tens of billions undoing that mistake.

I think about that every time I idle on west L.A.’s infamously clogged 405 freeway, and every time I try to clock 10,000 steps a day to preserve my health. These are daily reminders that advances in technology are not unreservedly synonymous with progress. Real progress comes from how we employ new technology, what values drive our adoption of breakthroughs, and in what direction we strive. 

But making those choices wisely requires a critical mindset and resistance to the hype and herd mentality of those with power. The venture capitalists, the founders, the AI scientists need to engage with the people affected by their work, to listen with humility, to contemplate the consequences of the products they unleash. To me, the most pernicious flaw of current LLMs – their indifference to truth – is what makes them great “bullshit” machines. But who will take responsibility for their bullshit? The dark side of the AI dream is its deferral of accountability: the childish fantasy that placing all our faith in an inscrutable black box will lift from us the burden of making choices, individually or collectively.

Unfortunately, with the ethos advanced by the likes of OpenAI, Google, or Meta, that humility, that critical questioning, is wholly absent. Instead of medical cures or climate change solutions, we’re getting widespread deepfake pornography, continued monopolization, and a decimation of the creative class – those who think most about the future.

Investors and founders may not appreciate it, but artists are basically the R&D labs of humanity. 

We explore the unexplored space of all possible worlds. We take novel ideas and give them life through imagination. We also take the more everyday moments of mundane existence, noticing what others take for granted and presenting it with new eyes to our audience. The great literary critic Viktor Shklovsky saw that as the mark of Tolstoy’s genius: this process of “estrangement” is the fundamental virtue of art. We push back against automatic thinking, which inspires others to reconceive the world, find new ways of being, and open new vistas.

In short, we provide a ballast against the dangers of automatic thinking. But automatic thinking seems to be the modus operandi of this great cycle of AI hype. Instead of thoughtfully tailored applications of machine learning and data science, the herd mentality is pushing founders to “ship” new products that are barely tested, ill-planned, and half-baked. And the recent bubbling up of dissent from within OpenAI itself should be a wake-up call to everyone: those who know best what’s on the frontier are most worried about where our current approach is taking us.

Progress does not come automatically. There is no magic black box to give us all the answers. No matter how good AI gets, we must always understand why we make the choices we do. Those answers come from us: from humans. For those outside the power nexus of Silicon Valley, that means we must educate ourselves about AI: what it is and isn’t, what it can do, can’t do, and, most importantly, should not do. Against an army of lobbyists, we must push our political leaders to support responsible AI policy. My single biggest wish is that, as we legislate AI, we hold the builders of ever more complex, potent AI systems responsible for the harms their creations engender. If history teaches us anything, it’s that power without accountability is always humanity’s greatest danger to itself.

So, for those with power and money, I beg you to engage with your critics. Get outside your bubble. Because, rich or poor, powerful or powerless, we will all endure the consequences of an indiscriminate, uncritical, and fanatical worship of technology. As everyone in Hollywood knows, from the biggest A-lister to the lowliest assistant, sooner or later, we all get stuck on the 405.


*These personal reflections are my own and do not represent the official stance or policy of the Writers Guild of America.

About the Author

John Lopez

John Lopez started his career covering entertainment and the arts for Grantland, Vanity Fair, and Business Week, among others. He was an associate producer on Hossein Amini’s feature film adaptation of The Two Faces of January and has written for Paramount+’s Strange Angel, Showtime’s The Man Who Fell to Earth, and The Terminal List on Amazon. He served as part of the WGA’s AI Working Group in the run-up to its 2023 MBA contract negotiations.
