- Non-consensual AI “undress” applications are not a technological oddity or an incident, but a new form of digital sexual violence — with measurable consequences for women's lives, their mental health and the right to public space.
- The Grok case shows that the problem is not the “misuse of tools” but the design and responsibility of platforms: large AI systems can and do produce sexualized images without consent, with foreseeable and disproportionate harm to women.
- Attempts to solve this problem through copyright and individual “image protection” miss the point — consent, human dignity, and protection from violence cannot be reduced to a market dispute.
This website was among the first to investigate the phenomenon of so-called undress/nudify apps, AI apps that can take your clothes off. And yes, of course: WHY WOULD ANYONE USE these apps to undress men? It is mostly girls and women who will be the victims. When I think about it, if this world were at all empathetic and normal, artificial intelligence would actually be used to diagnose cancer better and to develop drugs, and limited resources would never be wasted on such nonsense, humiliation and psychopathological experiments.
You know, when a woman strips naked of her own free will, she's a whore, but if someone strips her naked against her will, that's okay. Because to the patriarchy, only what is forbidden is okay and attractive: when it is done not of her own free will but against it, forced, even digitally. That is where all the incitements to rape come from.
Undressing Women Without Consent in the Digital World Is a Reflection of Misogyny
The “incident” in which Grok, the AI of Elon Musk's X, undressed a woman at a user's request is not an incident. It is a reflection of misogyny, and of the gamer culture of humiliation (a deeply misogynistic, incel culture) in which this is okay. That is the rule. Any kindness and respect toward women is the incident.
Let's be clear – not only is it not okay, it's also a criminal offense
In India, the fear of humiliation and the shame that someone's photo will be used for a digital striptease lead women to withdraw from the online space and, in an age-old feminine custom, to shrink and silence themselves. Shame is not being naked. Shame is humiliating another being.
Grok didn't solve the undressing problem. It just made it so that only those who pay for a subscription and have the little blue checkmark can do it.
And yes – Grok created a nude photo of Renee Good, the woman killed by ICE in Minneapolis. Then Grok-generated content featuring minors surfaced. Now imagine if someone used AI to create a photo of Charlie Kirk performing fellatio on Trump or having a micro-penis, or of Peter Thiel having sex. And is there really anything that could shame this group of people? When I say group of people, I don't just mean MAGA. I mean the entire spectrum of people who find it funny to undress a woman using AI.
It seems that this made Grok eligible to become the Pentagon's AI, integrated into its systems.
No Consent? Bye-Bye!
This case clearly shows how technology can turn into a tool of violence when used without ethical and legal constraints. Undress applications and similar tools do not produce “harmless” deepfakes and brain rot; they produce non-consensual sexualized images, a form of digital sexual violence.
The key is that there is no consent – and without consent there is no “freedom of expression”, no “joke”, and no technological neutrality. This kind of content is normalized alongside ubiquitous pornography, so it is no longer clear to young people that there is something deeply unethical about it, or that it is a criminal act. Humiliating another being is an absolute moral failure.
Technology is not neutral. It is not true that it is up to us whether it will be used for good or bad purposes.
The case with Grok is additionally problematic because it shows that large, resource-rich AI systems are not immune to abuses but actually encourage and normalize them. If the model can generate or facilitate the creation of such content, the responsibility lies not only with the user but also with the company that developed the model and put it into circulation. Technology is not morally neutral when the consequences are predictable. This is the first time we have a technology that is itself the problem. Technology is NOT NEUTRAL. That is the crux of it: the shirking of responsibility.
You know, I hear far too often: “but it's just technology, it depends on people whether it will be used for good or bad purposes”.
Undress apps are not general tools that “someone misused”. They are specifically designed to remove clothing from women's bodies, produce sexualized content (including offering BDSM and shibari practices to teenagers, which creates additional sexual confusion), and operate without any consent mechanism. This is not a neutral function – it is a built-in assumption that someone else's body is available for manipulation. With a caveat: predominantly a female body.
If the abuse is obvious, massive and repeated, then we are no longer talking about misuse but about structural risk. This is not an accidental outcome but a deliberate one. Neutrality ends where there is a systematic disproportion of harm.
Imbalance of Vulnerability and Withdrawal of Women From the Digital Space
Of particular concern is the fact that the targets are almost always women. This merely transfers existing patterns of misogyny and control over women's bodies into the digital space, where the harm, shame and trauma are often permanent and the removal of content is almost impossible. Because this type of technology disproportionately targets women and girls, including minors, as well as other minority groups (LGBTIQ people, national minorities), we cannot speak of any neutrality.
The claim of technology neutrality is often used to depoliticize violence, shift blame to the individual, avoid regulation, and shield companies from liability. In this sense, “technology is neutral” is not a description of reality, but a rhetorical strategy. Or better said, a rhetorical deception.
The example from India will not remain isolated – this will be a systemic process of pushing women out of the online sphere, silencing their voices for fear of being shamed in front of their families. This is how women are silenced in politics: if a fake image of a politician is created, her voice is no longer valid, because she is a whore. It does not matter that the image is fake; she is shamed, and her voice and thoughts are devalued.
Men can get away with taking away and cutting off aid to those in need and destroying healthcare systems (don't think I mean only the US – I also mean Bosnia and Herzegovina and the region, where it was mostly men who destroyed the welfare state and public healthcare); they can rape, exploit minors and remain Teflon people to whom nothing sticks. Women cannot be Teflon: for them, the slightest mistake can ruin their lives and careers.
Copyrighting Your Image Is Not an Elegant Solution – This Move Would Only Deepen the Problem and Inequality
Here we should address the opinions of those who have stepped deep into neoliberal immorality: the idea that this problem of images being misused to create soft or hard pornographic content can be solved by placing a copyright on one's own image. Oh no, no and no!
At first glance, it sounds pragmatic: if you have rights over your own image, you can sue those who abuse it. However, copyright is the wrong tool for this problem. Copyright was historically designed to protect creative works, not human dignity, bodily integrity, or consent. It does not protect a person as a person, but as an “intellectual property holder”.
The key problem with this approach is that it shifts the burden to the victim. A woman (or anyone who is a victim) would have to actively “protect” her image, prove ownership, initiate proceedings, and bear the costs and stress. This is a classic example of making the victim responsible rather than sanctioning the abuser. The problem with undress apps is not identity theft but sexual violence without consent. Copyright does not recognize the category of bodily consent, only permission to use a work.
Most women and other vulnerable victims do not have the resources to litigate. Copyright is territorial, slow, and expensive, while deepfakes and AI content spread globally, instantly, and anonymously; almost no victim can afford multiple international disputes.
This very bad idea is a reflection of the neoliberal mindset in which the individual is a market subject-object. It amounts to prostituting one's own likeness, when protecting human dignity should be a moral imperative!
This is typical neoliberal logic: the privatization of protection against violence. Instead of society and the state setting clear boundaries, the burden is shifted to the individual – usually the one who is already in an unequal position. This is not a market dispute. This is an attack on human rights and mental health, creating permanent trauma.
It should be noted that Elon Musk hates women more and more. Although he wasn't like that before, all that insistence on IVF and male offspring has a misogynistic side. And now he claims that Grok's incident (we repeat: it is not an incident, but systemic support for misogyny and for the removal of women from the Internet) is just a pretext, a casus belli, for the regulatory authorities to censor his child.
And we know that in the world of laissez-faire capitalism, the claim that someone is censoring something is actually a dog whistle for “they are commies”. For this group of the insanely powerful, regulation of anything and everything that smacks of the welfare state, of statism, is communism. And the world, especially the USA, is currently in the third wave of the Red Scare and a wave of neo-McCarthyism.
Creating Sexualized Images Is a Problem of Power and Freedom Without Morality
The European Commission has declared Grok's nude pictures illegal, thereby drawing additional anger toward the EU from the spoiled heir to the blood emeralds, I mean Elon Musk, who, intoxicated by the trillionaire's wealth, or whatever he has in his account, and by his connections with the Trump administration, began to threaten the EU. Grok will be a test of the EU's resistance, in addition to the crisis over Greenland.
A month before the Grok scandal, Brussels fined Elon's X network 120 million euros for violating the bloc's flagship platform law, the Digital Services Act (DSA). The fine caused a fierce reaction from Washington. The US administration imposed a travel and visa ban on former European Union digital policy commissioner and DSA architect Thierry Breton, and on four disinformation activists.
In mid-January 2026, Malaysia and Indonesia became the first countries to ban X's Grok because of the explicit content this AI can produce on user request.
Creating sexualized AI images without consent (and let’s be real, who gives consent?) is not a technical problem, but a problem of power, consent, and responsibility. It arises not because “we haven’t found the right legal trick yet,” but because technologies are allowed to develop and market themselves without clear boundaries about what can’t be done to people.
Regulation, Regulation, Regulation
The essential solution must start from consent as a basic principle: if a person has not given clear, informed consent, such content must not be generated, shared or monetized. This implies legal regulation, but also the active responsibility of those who develop and distribute AI systems, including mandatory technical restrictions and sanctions.
And yes – this includes images and recordings that were created not with generative artificial intelligence but in reality. If the person has not given consent to publish such material, those who share it are simply louts: rude, soulless, uncultured brutes. And we know that there are groups of men in which such trophies are passed around. This forced display of the alpha male is only an external symptom of insecurity and complexes. A gentleman never does that.
Finally, it's important to be clear: this is not a “controversial use of AI” but a digital form of violence. As long as we treat it as a side effect of innovation rather than as violence, the problem won't be solved; it will only become more sophisticated.
Note: This is not a news report but an argumentative essay, an opinion piece, and it was published as such, as a column. All factual claims are supported by sources. Value judgments are explicit and clearly separated from the facts.
The cover illustration was created using a GAI tool.
Author: