Author Gina Chung was surprised when a friend recently alerted her that her debut novel, “Sea Change,” had been included in the database of a public linguistic analysis tool, along with tens of thousands of works by other authors. Chung said she felt “outraged” and “disheartened” by the discovery.
“I was definitely not asked for permission to be part of this, nor were any of those authors asked in advance. That right away is just sort of an immediate violation of creative copyright,” Chung said of the web tool, which its creator took down in early August after online pushback. Chung’s experience is emblematic of an emerging issue: how artificial intelligence technology could disrupt industries and affect the people in them. The issue has stirred a labor movement against the unregulated use of AI technology across sectors, from publishing to tech-based gig work.
In July, the Authors Guild released an open letter—supported by more than 10,000 signatories, including prominent authors like Roxane Gay, Celeste Ng, and Margaret Atwood—addressed to the leadership of multiple tech companies. It called out the exploitative nature of popular generative AI models like ChatGPT, which the Guild said “mimic and regurgitate” the copyrighted works of published authors used to train them, and argued that authors should be compensated for that use of their work. The Authors Guild has also advocated for legislation to safeguard authors’ works against AI-generated infringement, which is becoming more common.
Another industry seeing substantial movement to establish rights protections against AI technology is entertainment. The most high-profile examples are the ongoing strikes by the Writers Guild of America (WGA) and its sister union, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). The WGA strike has lasted more than 100 days, and the SAG-AFTRA strike is nearing 50 days.
According to voice actor Zeke Alton, a member of multiple contract negotiating committees under SAG-AFTRA, AI technology already affects actors working in video games, where a performer’s voice and likeness—things easily manipulated using AI—are valuable work assets.
“A lot of people have said the technology’s not good enough to replace us right now. While that may be true wholesale … our contracts go on a term of three years,” Alton said. “If you look at the maturation of this technology within the last six months, there is no way that in three years, this won’t be able to be the cheap, quick, and dirty version that replaces all of us.”
Video game employers are covered separately under interactive media contracts, which differ from the contracts negotiated for hiring talent in film, television, and other entertainment genres.
Alton cited one unnamed actor who was told by a media company that they were no longer needed since the company possessed the actor’s “clone” through their likeness materials, and the actor was replaced by their synthetic voiceover. Alton noted that many actors experiencing these kinds of rights violations are afraid to come forward.
Research suggests it is difficult to definitively determine how AI will affect jobs down the line, and each sector may experience the technology’s impact differently. According to a Pew Research Center report, working in a job with higher exposure to AI—one in which essential tasks could be performed entirely by the technology or substantially assisted by it, as with analysts, technical writers, and web developers—does not necessarily mean AI will replace those human workers. The center also found that many workers in highly exposed jobs still felt optimistic that the technology will help them in their work rather than hurt their jobs.
But in practice, workers in jobs that require less analytical work are also feeling the potential threat of AI technology to their livelihoods. Among the most vulnerable are people working in the so-called gig economy.
“A lot of the gig economy seems to be sort of a bandaid for the larger labor and employment issue,” said Dominique Smith, who has worked as a rideshare driver for nearly six years and serves as a board member of the California-based workers’ group Rideshare Drivers United. “What AI technology looks like it’s doing is just kind of the finishing blow to the fact that [employers are] going to displace a number of jobs and a number of people and then take no fiscal responsibility on what that does to the economy.”
People working gig jobs sourced through digital apps and online platforms are disproportionately people of color, who face outsized bias and safety risks on the job. A separate Pew Research Center analysis found that Latinx adults are the most likely to have earned income through gig work (30%), compared with 20% of Black adults, 19% of Asian adults, and 12% of white adults.
Smith’s group is among a number of organizations representing local drivers that are calling for stronger regulation of driverless rideshare services, or “robotaxis.” On Aug. 10, the California Public Utilities Commission approved Cruise and Waymo—owned by General Motors and Google, respectively—to operate robotaxis commercially in San Francisco. But city officials have since asked the state to put the decision on hold after numerous public disruptions involving the driverless cars took place within a week of the approval, including severe traffic congestion after multiple robotaxis malfunctioned during a city music festival and at least two collisions with other vehicles.
Smith, a self-described “positive futurist,” isn’t against the deployment of robotaxis so long as it is done collaboratively with human drivers and with a 100% safety guarantee from the operating companies. So far, the driverless vehicles continue to cause problems and even pose public safety risks: over the last six months, the San Francisco Fire Department has logged at least 55 incidents in which unmanned robotaxis interfered with emergency responses.
“A lot of the time, so much regulation is willingly overstepped so that they can just push something through without worrying about the ramifications—on one end labor like drivers’ jobs, and on the other, the safety of just anybody on the street,” Smith said.
Beyond proper regulation, another key to ensuring that AI technology does not harm people’s work is transparency and consent: clarity not only about how the technology itself works, but also about how a person’s work will be used once they agree to that use.
“That’s where unions like SAG-AFTRA come in because our contracts with the employers can dictate the transparency, consent, and compensation that is needed [now],” Alton said. Because government bureaucracy is slow to implement safeguard policies, organizations advocating for workers’ rights, such as unions and trade associations, are instrumental in swiftly mitigating harm as AI technology becomes more ubiquitous with little oversight.
Perhaps more than anything, AI technology’s growing visibility across sectors has fed the notion that the people powering those industries are expendable. But Chung, who signed the Authors Guild’s open letter, believes there is no way that AI technology can substitute for the human experience, particularly in creative fields like literature.
“No matter how skilled or maybe technologically interesting the machine’s capabilities are, it can’t replace the perspective that human beings bring to arts and creative endeavors,” Chung said.