AI-generated Ghibli-style artwork has taken social media by storm, with users transforming everything from movie scenes to personal photos into visuals that mimic the iconic animation style of Hayao Miyazaki’s films. OpenAI’s latest ChatGPT update has made it easier than ever to create these illustrations, fueling a viral trend that has captivated millions online. While the trend is entertaining, privacy experts are raising concerns. Proton, a platform focused on data security, has warned users about the risks associated with AI-generated images. In a post on X, Proton stated, “Think this is a fun trend? Think again. While some don’t have an issue sharing selfies on social media, the trend of creating a ‘Ghibli-style’ image has seen many people feeding OpenAI photos of themselves and their families.”
In a series of posts, Proton explained how and why transforming personal photos using AI models is a problem. It said, “Aside from the risks of data breaches, once you share personal photos with AI, you lose control over how they are used, since those photos are then used to train AI. For instance, they could be used to generate content that may be defamatory or used as harassment.”
‘Photos may be used without consent’
“Many AI models, particularly those used in image generation, rely on large training datasets. In some cases, photos of you, or with your likeness, might be used without your consent,” Proton highlighted in the post. “Lastly, your data could be used for personalized ads and/or sold to third parties.”
Proton has also shared a blog post explaining how ChatGPT and other services use data to train AI models. “Massive amounts of data are required to train and improve most AI models. The more data that’s fed into the AI, the better it can detect patterns, anticipate what will come next, and create something entirely new,” the post reads. “This leads to all kinds of problems. An AI trained on data will only learn how to deal with situations raised by the dataset it has seen. If your data isn’t representative, the AI will replicate that bias in its decision-making,” it added.