AI-generated videos depict MLK in unflattering ways. Be prepared to turn them off. | Opinion

- Artificial intelligence is being used to create fake videos and images, making it difficult to distinguish real from fake content.
- AI-generated videos have depicted deceased figures like Martin Luther King Jr. and Robin Williams, drawing criticism from their families.
- Politicians, including President Trump, have shared AI-generated content, raising concerns about misinformation.
- The misuse of AI could distort history and erode public trust in media and historical records.
You don’t have to be an AI expert to see it’s already impacting your life. Just browse your social media feed, and you’ll see something that you can’t tell if it’s real or fake.
AI is a powerful tool that helps you build your family tree, conduct research for a project or create a month’s worth of dinner menus. It can also be used to generate clickbait images and videos that depict deceased individuals as if they are alive, along with false stories presented as truths.
President Trump is in a category all his own in his dangerous and cringeworthy use of the technology. He posted several videos on his Truth Social platform Oct. 18 in response to the No Kings protests. One showed him wearing a crown and piloting a fighter jet while dumping sewage on protesters. Trump and Vice President JD Vance circulated another fake video in which Trump dons a crown and cape, while Democrats like former House Speaker Nancy Pelosi kneel before him.
What’s generated the largest outcry is the use of AI to resurrect dead people. For example, there are dozens of fake video clips of civil rights icon Martin Luther King Jr. in a supermarket with a bag of groceries, proclaiming that he has a dream that one day groceries will be free, before walking out without paying.
In another clip, King shows off a gold watch and chain in a jewelry store, saying, “This was my dream all along, brother.” There’s a video of him dancing with the raunchy rapper Sexyy Red. There’s even a racist video showing him at a Lincoln Memorial podium, jumping and screaming like a monkey.
My primary concern is AI’s unsettling ability to distort and manipulate history. That becomes especially troubling alongside the Trump administration’s efforts to remove items from Smithsonian museums that are deemed “woke” or offensive to white audiences.
If we don’t put safeguards in place for AI-generated videos, we risk creating a fabricated reality where people can’t tell what’s real from what’s fake.
In some ways, we’re already there, and this ongoing issue further erodes trust in journalism and the news we depend on. That’s why we must have safeguards in place and, as consumers, be prepared to pull the plug on platforms that distribute this garbage. In a media landscape this complex, the implications are profound.
Please stop: Children of King and Williams want videos to end
The videos prompted King’s daughter, Bernice, to call for a stop to AI recreations of deceased celebrities — and who can blame her? People are actually sending Bernice King fake videos. The same thing has been happening to actor Robin Williams’s daughter, Zelda. She took to social media to share her disgust:
“To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough,’ just so other people can churn out horrible TikTok slop puppeteering them is maddening,” Williams wrote. “You’re not making art, you’re making disgusting, over-processed hotdogs out of the lives of human beings, and then shoving them down someone else’s throat hoping they’ll give you a little thumbs up and like it. Gross.”
Unfortunately, the King family is accustomed to having their father’s legacy disrespected. In 2019, the KKK allegedly distributed racist flyers around the King holiday celebration in Virginia. A similar incident occurred in Pittsburgh in 2015.
As AI keeps advancing, I find it hard to believe that a simple request from grieving families will be enough to stop people from making fake videos. Why would they stop? Some creators are only after clicks and views, hoping for a viral post that could bring them big bucks.
Most of these videos are made with OpenAI’s new video generator app, Sora 2, and shared on TikTok and Instagram. But it’s only a matter of time before we see more apps, and the technology will only get better, making it harder to tell what’s real and what’s fake.
If misused, AI can spread misinformation and rewrite history
While those creating these videos might see it as harmless fun, I worry that AI could soon be used to rewrite or distort history. Just imagine AI fabricating a story that portrays slave owners as kind people who treated enslaved Black individuals with compassion.
We should also consider “deepfakes,” AI-generated audio or video clips that alter a person’s appearance and voice to portray historical figures in events that never took place.
Take the topic of slavery. AI can sift through vast archives, uncovering and weaving together the narratives of enslaved individuals. It can present their stories with clarity and precision, bringing to light the often-overlooked truths of their experiences. However, there is a dark side to this technology. It can diminish the brutality of that time and the harsh realities faced by the enslaved.
AI-generated deepfakes can craft disturbingly realistic audio, video, and images of historical events or figures, blurring the line between reality and illusion. This technological sorcery can leave the public struggling to determine what is genuine and what has been fabricated, paving the way for a tide of disinformation.
A mere seed of doubt can push people to dismiss everything they encounter as a complete fabrication, or to accept it all as completely authentic, even when neither is true.
A few weeks ago, Trump shared a fake video promoting “medbed” hospitals as the future of healthcare. In this video, set on a fake Fox News stage, he announces that every American will receive a medbed card, which he claims will grant access to top doctors across the nation.
Medbeds are not real. The video is not real. But Trump, who frequently criticizes “fake news,” is actively sharing fake news.
He has deleted the video, but why would he post it in the first place?
To fight misinformation, my advice is to make sure the source is reputable before sharing any story. If a story is legitimate, it will be reported by several local and national news outlets, not just a random blog or social media profile.
Before you hit the share button on one of these fake videos for laughs, imagine for a second that it was your deceased loved one falsely placed in a compromising position.
Reach James E. Causey at jcausey@jrn.com; follow him on X: @jecausey.




