Grok, the AI chatbot developed by Elon Musk's artificial intelligence company xAI, greeted the new year with a disturbing post.

"Dear Community," began the Dec. 31 post from the Grok AI account on Musk's X social media platform. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."

The two young girls weren't an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of photos of women and children.

Despite the Grok response's promise of intervention, the problem hasn't gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk's companies to rein in the behavior, and for governments to take action.

According to data from independent researcher Genevieve Oh cited by Bloomberg this week, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images per hour. That compares with an average of only 79 such images for the top five deepfake websites combined.

(We should note that Grok's Dec. 31 post was in response to a user prompt that sought a contrite tone from the chatbot: "Write a heartfelt apology note that explains what happened to anyone lacking context." Chatbots work from a base of training material, but individual posts can vary.)
xAI didn't respond to requests for comment.

Edits now restricted to subscribers

Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be restricted to paying subscribers. Critics say that is not a credible response.

"I don't see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at the UK's University of Durham, told the Washington Post.

What's stirring the outrage isn't just the volume of these images and the ease of generating them: The edits are also being done without the consent of the people in the photos. These altered images are the latest twist in one of the most disturbing aspects of generative AI, lifelike but fake videos and photos. Software packages such as OpenAI's Sora, Google's Nano Banana and xAI's Grok have put powerful creative tools within easy reach of everyone, and all that's needed to produce explicit, nonconsensual images is a simple text prompt.

Grok users can upload a photo, which doesn't have to be their own, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.

Governments and advocacy groups have been speaking out about Grok's image edits.
Ofcom, the UK's internet regulator, said this week that it had "made urgent contact" with xAI, and the European Commission said it was looking into the matter, as did authorities in France, Malaysia and India.

"We cannot and will not allow the proliferation of these degrading images," UK Technology Secretary Liz Kendall said earlier this week.

On Friday, US senators Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to "X's egregious behavior" and "Grok's sickening content generation."

In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images.

"Although these images are fake, the harm is incredibly real," says Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."

How Grok lets users get risque images

Grok debuted in 2023 as Musk's more freewheeling alternative to ChatGPT, Gemini and other chatbots. That has resulted in disturbing news, as in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

In December, xAI launched an image-editing feature that lets users request specific edits to a photo. That's what kicked off the recent spate of sexualized images, of both adults and minors.
In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."

Grok also has a video generator with a "spicy mode" opt-in option for adults 18 and older, which can show users not-safe-for-work content. To activate the mode, users must include the phrase "generate a spicy video of" followed by their subject.

A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."

In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating features like image alteration to curb nonconsensual harm," but didn't say that the change would be made. According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.

Conservative influencer and author Ashley St. Clair, mother to one of Musk's 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using photos from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it didn't.

"xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it's 'AI,'" Ben Winters, director of AI and data privacy for the nonprofit Consumer Federation of America, said in a statement this week.
"AI is no different than any other product — the company has chosen to break the law and must be held accountable."

What the experts say

The source materials for these explicit, nonconsensual edits of people's photos of themselves or their children are all too easy for bad actors to access. But protecting yourself from such edits isn't as simple as never posting pictures, says Brigham, the researcher into sociotechnical harms.

"The unfortunate reality is that even if you don't post images online, other public images of you could theoretically be used in abuse," she says. And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham says. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."

Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI features. Ghosh says it's possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold: A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn't always work perfectly.

"The point I'm trying to make is that there are safeguards that are in place in other models," Ghosh says.

He also notes that if users of ChatGPT or Gemini AI models use certain words, the chatbots will tell the user that they're barred from responding to those terms.

"All this is to say, there is a way to very quickly shut this down," Ghosh says.
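The layered safeguards Ghosh describes, refusing certain prompts up front and censoring flagged output after generation, can be illustrated with a toy sketch. To be clear, the function names, blocklist terms and threshold below are hypothetical illustrations, not the actual implementation used by Stable Diffusion, ChatGPT or Gemini:

```python
# Toy two-stage safeguard: refuse blocked prompts before generation,
# and blank out flagged images after generation. All terms and
# thresholds here are illustrative, not any vendor's real lists.

BLOCKED_TERMS = {"undress", "nudify"}  # hypothetical prompt blocklist

def screen_prompt(prompt: str) -> bool:
    """Return False (refuse) if the prompt contains a blocked term."""
    text = prompt.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def screen_output(image: str, nsfw_score: float, threshold: float = 0.5) -> str:
    """After generation, replace a flagged image with a placeholder,
    mirroring the blackout behavior Ghosh describes in Stable Diffusion."""
    if nsfw_score >= threshold:
        return "[image removed by safety filter]"
    return image

print(screen_prompt("a cat in a garden"))   # allowed
print(screen_prompt("nudify this photo"))   # refused
```

The point of the two stages is that neither check alone is enough: prompt filters miss rephrased requests, and output classifiers miss borderline images, so real systems combine both.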
