
The Barbie Box AI trend is taking over the Internet, but an expert warns that it could be putting innocent ChatGPT users in danger of their images being used as deepfakes

AI-generated Bethenny Frankel: this AI trend could put you at risk (Image: @Bethenny)

If you’ve been online over the past week, you’ll have been bombarded with AI images of Barbies in boxes. These include everything from the Barbie Beethoven, to the Barbie Mirror chicken and even (terrifyingly) Barbie Trump. It all sounds like fun and games – except now an expert is warning that it could be putting users at risk of becoming victims of deepfakes.

The trend comes off the back of the release of GPT-4o, which saw the AI bot gain over one million users within just one day. Unsurprisingly, people have been using the technology to do what the Internet does best: make memes. Some have sparked controversy. The recent surge in Ghibli-style images prompted a resurfacing of Spirited Away director Hayao Miyazaki’s condemnation of AI art. Now, the latest iteration involves prompting the bot to turn users into their very own Barbie doll.

Prince Harry as an AI Barbie (Image: SWNS)


It follows on from the innocent real-life trend of cinema-goers posing in life-sized Barbie boxes, which appeared around the release of 2023’s Barbie movie. In many ways, the AI trend has allowed fans to relive the hype and excitement of the box office hit.

It’s also allowed users to create amusing images of celebrities and public figures. If you’ve ever wondered what Prince Harry or Elon Musk would look like as a plastic doll, you no longer have to.

However, according to research by the AI prompt management company AIPRM, uploading your photos to ChatGPT does more than allow you to relive funny moments – it could be enabling your image to live online in ways you don’t want it to. This is because ChatGPT’s privacy policy permits uploaded images to be collected and stored to fine-tune its results.

According to Christoph C. Cemper, founder of AIPRM: “Images shared on AI platforms could be scraped, leaked, or used to create deepfakes, identity theft scams, or impersonations in fake content. You could unknowingly be handing over a digital version of yourself that can be manipulated in ways you never expected.”


What are deepfakes and are you at risk?

Deepfakes are images, videos or audio clips that use AI to mimic a person’s voice or facial features. The concept first entered the public consciousness in 2017, after a Redditor created a subreddit called r/deepfakes, where users employed face-swapping technology to post fake pornographic videos and images of celebrities.

Since then, deepfakes have been described as a “global crisis” by the European Commission in 2024. The body highlighted how these fake images can be used to convincingly impersonate or misrepresent individuals.

One of the most notable victims of this abuse of AI is pop megastar Taylor Swift. In January 2024, sexually explicit AI-generated images of the singer began to circulate on social media, where they were viewed millions of times. One post was viewed 47 million times before being pulled from X, as reported by the BBC.

While deepfakes can be used for any kind of image manipulation, the most common use is pornography. According to a 2023 report by Home Security Heroes, pornographic deepfakes make up 98% of all deepfake content online. Even more concerning, 99% of those targeted are women.

Earlier this year, a British man was arrested for using AI to create pornographic images of women he knew in real life, built from photos he pulled from their social media accounts. The images were then posted in a forum glorifying “rape culture”, as reported by the BBC.

More troubling still is that deepfakes are a popular search online. According to a recent study by Kapwing, there were 2,479 searches for deepfakes per million people in the UK in December 2024 – the eleventh highest search volume in Europe.

One way to protect yourself against your images being used in this way, Christoph recommends, is to change your privacy settings: ChatGPT users can opt out of having their uploads used as training data.

