
Has Elon Musk’s AI Chatbot Grok Become A Weapon Against Women?

"It was uncomfortable to have that power asserted over you"

An alarming number of complaints have surfaced after Grok, Elon Musk's AI chatbot, was used to generate sexualised images of women and children.

First created in July and launched for public use in October, the AI image generator is hosted on X (formerly Twitter), allowing Grok-created content to be shared quickly, publicly and widely. As recently as January 7th, the platform was still allowing users to depict others in sexually explicit ways.

One woman, Mollie, told The Cut she found comments under a photo she had posted to X in December in which users were prompting the AI chatbot to strip her down to a bikini.

“Put her in a micro bikini from this angle,” one user wrote, tagging the X AI chatbot. Grok complied, producing an AI-generated image of Mollie in the same position wearing a thong bikini instead of her original outfit.

“It was scary and it was uncomfortable to have that power asserted over you,” she says. “I’ve been sexually assaulted in the past, and it almost felt like a digital version of that.”

According to the publication, Mollie asked Grok to remove the image, and the chatbot acknowledged that “it violated your consent”. However, the image remains public on the platform.

Not-for-profit organisation AI Forensics analysed 20,000 images created by Grok over a one-week period in December, finding two percent “depicted a person who appeared to be 18 or younger, including 30 of young or very young women or girls, in bikinis or transparent clothes”, Nine reports.


The social media platform addressed concerns via its safety account, but did not deny that non-consensual sexualised content had been created by Grok.

“We take action against illegal content on X, including Child Sexual Abuse Material, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the post reads.

Technology-facilitated violence against women and girls has been rising at an alarming rate, and with the intervention of AI, UN Women expects rates to increase dramatically.

Recent studies show “38 percent of women have personal experiences of online violence, and 85 percent of women online have witnessed digital violence against others”.

Further studies show 90 to 95 percent of all online deepfakes are non-consensual pornographic images, with around 90 percent depicting women. In fact, “many deepfake tools are not even designed to work on images of a man’s body”, UN Women reveals.

Platforms hosting these tools must be held to a higher standard, particularly when they enable the creation and circulation of non-consensual sexual content in real time.

“The technology is moving far faster than the protections,” UN Women has warned, adding that “less than half of countries have laws that prosecute online abuse”.

For victims like Mollie, the damage is already done. Even when images are flagged or acknowledged as violating consent, they can be impossible to fully erase or retract, compounding the harm long after the initial act.
