Artificial intelligence has become an inescapable part of our daily lives, revolutionizing industries, enhancing efficiency, and pushing creative boundaries. But with this technological evolution comes a host of ethical and legal dilemmas—chief among them, the question of consent.
The latest controversy involving Scarlett Johansson serves as a wake-up call for individuals, corporations, and lawmakers alike. A viral AI-generated video featuring the actress, among other Jewish celebrities, has once again ignited concerns over the use of AI in manipulating human likenesses without permission. This is not the first time Johansson has been at the center of such an issue, but the gravity of the situation underscores a broader, more pressing concern: If AI can replicate a person’s image, voice, and mannerisms at will, what safeguards exist to protect individuals from digital exploitation?
This blog explores the controversy in depth, along with the growing dangers of AI impersonation and the urgent need for comprehensive AI regulation.
The Incident: AI-Generated Scarlett Johansson Speaks Without Her Consent
The latest AI-generated video featuring Scarlett Johansson is deeply unsettling, both for the actress and for anyone concerned about the misuse of technology. In the video, Johansson’s likeness appears alongside other prominent Jewish figures, including Drake, Jerry Seinfeld, Steven Spielberg, and Adam Sandler. The deepfake shows these figures seemingly speaking out against Kanye West’s antisemitic remarks—without their consent.
Johansson’s AI-generated version is depicted wearing a white T-shirt featuring a Star of David above a hand making an obscene gesture and the name “Kanye.” The video ends with an AI-generated Adam Sandler raising his middle fingers as the Jewish folk song Hava Nagila plays.
Johansson, 40, was quick to condemn the use of her likeness in the unauthorized AI-generated video, stating:
“I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one individual’s words. We must call out the misuse of AI, no matter its intent, before it spirals out of control.”
Her statement reflects a growing concern among public figures and private citizens alike: AI’s ability to manipulate reality is outpacing regulatory efforts to control it.
A Repeat Offender: AI’s History of Exploiting Johansson’s Image
This is not the first time Johansson has had to defend herself against AI misuse. In November 2023, the actress took legal action against an AI company that used her voice and likeness without her permission for an advertisement. The company ultimately removed the ad, but the incident illustrated how AI is already being weaponized for deceptive commercial purposes.
In May 2024, she also expressed frustration when OpenAI released a ChatGPT voice assistant named “Sky,” which closely resembled her voice. The backlash forced OpenAI to remove the voice option, but the incident again raised concerns about AI replicating individuals without their consent.
Johansson’s experiences are emblematic of a larger issue. She is one of many public figures grappling with AI-generated deepfakes that blur the line between reality and fabrication. From manipulated political videos to explicit deepfake content, the technology is increasingly being used to deceive, harass, and exploit.
The Legal Grey Area: Who Owns Your Digital Identity?
The Scarlett Johansson deepfake incident highlights a glaring gap in legal protections. AI technology is advancing so rapidly that existing laws struggle to keep pace. As of 2025, most countries lack comprehensive legislation addressing AI-generated likenesses, making it difficult to hold bad actors accountable.
1. Intellectual Property and Right of Publicity Laws
Currently, laws governing image rights and likenesses vary by country and even by state. Some jurisdictions, like California, have “right of publicity” laws that protect individuals from having their image used for commercial gain without permission. However, these laws were not designed with AI in mind and often fail to address non-commercial or malicious uses of deepfake technology.
2. The Lack of AI-Specific Federal Laws
The United States has yet to pass a federal law explicitly regulating AI-generated impersonations. While individual states, such as California and Texas, have passed deepfake laws targeting political misinformation and explicit content, the legal framework remains fragmented and inadequate.
Johansson herself has called for federal action, urging lawmakers to create strict guidelines for AI-generated content:
“The U.S. government must take immediate action to limit AI’s misuse before it becomes impossible to control. This is not a political issue—it’s a human rights issue.”
3. AI’s Threat to Privacy and Consent
The rise of AI-generated content raises significant privacy concerns. If an actress like Johansson, who has the resources to fight back, can have her image replicated without permission, what does that mean for everyday people?
The answer is troubling. AI deepfake technology is already being used to create non-consensual explicit content, blackmail materials, and false political narratives. Without clear legal protections, AI-generated impersonations can be used to destroy reputations, spread misinformation, and manipulate public opinion.
The Broader Implications: Society at the Crossroads of Reality and Fabrication
Johansson’s deepfake incident is just one of many examples of AI’s potential for harm. Other high-profile cases include:
- Political Deepfakes: AI-generated videos of world leaders spreading false information have been used to manipulate elections and incite social unrest.
- Financial Scams: AI-generated voice deepfakes have been used to impersonate CEOs, leading to fraudulent wire transfers worth millions of dollars.
- Explicit Deepfakes: AI-generated explicit content featuring celebrities and private individuals has surged, creating massive ethical and legal challenges.
The potential for harm extends beyond individuals. As AI-generated content becomes more sophisticated, society risks losing its grip on objective reality. If deepfakes become indistinguishable from real footage, how can we trust anything we see or hear?
This is the chilling reality Johansson is warning against.
A Path Forward: How We Can Fight AI Misuse
The Scarlett Johansson AI controversy should serve as a catalyst for action. Governments, tech companies, and individuals must work together to address the growing dangers of AI impersonation.
1. Legislative Action
- The U.S. must enact federal AI regulations that establish clear guidelines on deepfake technology, including penalties for unauthorized use of likenesses.
- AI-generated impersonations should be classified as a form of identity theft or fraud, making them easier to prosecute.
2. Tech Company Accountability
- AI developers must implement stronger safeguards, such as digital watermarking to identify AI-generated content.
- Companies like OpenAI, Google, and Meta should require explicit consent before generating AI likenesses of individuals.
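To make the watermarking idea concrete: one simple form of safeguard is a cryptographic provenance tag, where the generator signs each output and platforms verify the tag before trusting (or flagging) the content. Real-world systems such as C2PA content credentials are far richer than this; the sketch below is a minimal illustration using only Python's standard library, and the key name and metadata fields are hypothetical.

```python
import hmac
import hashlib

# Assumption for illustration: the AI provider holds a secret signing key.
SECRET_KEY = b"generator-signing-key"

def sign_content(content: bytes) -> dict:
    """Attach a provenance tag declaring the content AI-generated."""
    tag = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"ai_generated": True, "provenance_tag": tag}

def verify_content(content: bytes, metadata: dict) -> bool:
    """Recompute the tag; a mismatch means the metadata was forged
    or the content was altered after signing."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata.get("provenance_tag", ""))

video = b"...generated video bytes..."
meta = sign_content(video)
print(verify_content(video, meta))         # True: provenance intact
print(verify_content(video + b"x", meta))  # False: content was altered
```

A scheme like this only labels content the generator chooses to sign; it cannot detect deepfakes made by uncooperative tools, which is why the legislative and platform-level measures above matter as much as the technical ones.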
3. Public Awareness and Digital Literacy
- Individuals must learn to recognize and report deepfake content.
- Media organizations must implement fact-checking measures to prevent the spread of AI-generated misinformation.
Conclusion: The Time to Act Is Now
Scarlett Johansson’s case is not just about a celebrity fighting for her rights—it’s about the future of digital identity, privacy, and trust. AI technology has the power to create, but it also has the power to deceive and manipulate. Without proper regulations, the line between reality and fiction will continue to blur, with dangerous consequences.
As Johansson herself put it:
“There is a 1,000-foot wave coming, and if we don’t put up barriers now, we will all be swept away.”
The question now is: Will we take action before it’s too late?
What are your thoughts on the AI deepfake controversy? Should governments regulate AI more strictly? Share your thoughts in the comments.
— Afonso Infante (afonsoinfante.link)