Dr. Daisy Dixon sat in front of a tapestry

Daisy Dixon: On Grok as a Vehicle for Sexist Abuse

Hi, Daisy, would you like to tell us a little about yourself? How you got into philosophy, and a bit about your background.

I don’t know how far back you want me to go, but I started more in like a fine art background. I did my undergrad in fine art with a minor in philosophy. I was most of the time in the studio making artworks and then doing a little bit of philosophy. Then I just thought about doing post-grad, and I didn’t really know what that meant. I didn’t have any sense of what academia looked like as a career.

But I knew I wanted to keep doing more of what I'd been doing, and ended up doing a master's in philosophy; I went to Cambridge for that. It's been what I describe as falling up the stairs. As I said, I never had any intention to be an academic; I didn't really know what that even was. But the master's was under a year, so I just found myself applying for a PhD.

I managed to get funding for that, which is so difficult anyway, but especially now, so I feel like I was lucky in that respect. Then I just stayed on doing philosophy. I stayed in Cambridge again for a post-doc. By this point I was doing philosophy of language, but I was moving into aesthetics, which made sense because of my art background, so it was nice to bring the two together.

And then by some miracle, after being on a horrible job market for a while—I don’t know, possibly up to a hundred rejections—Cardiff gave me this job and I started here in 2022. So, yeah, I’m very happy to be here as long as they want to keep me.

I have heard that you have written a book.

Yes, I’ve got the proof. Right there. The red book there is a sort of pretend proof, a pretend book for now. I’ve just written this book called Depraved: The Story of Dangerous Art, and it’s kind of a culmination of fifteen years’ worth of my career, I guess.

It’s an art history, but written by a philosopher. It starts with prehistoric artworks, looking at how artworks can be immoral and how they can be dangerous to society, things like that. I don’t think I can talk a huge amount about it. It’s coming out in June, so you’ll have to have me back to talk about the book then.

Could you talk about your previous experiences with social media and if you have encountered large volumes of hate prior to more recently?

Yeah, I got what was then known as Twitter during my art degree. Our art professors told us we should get it as a way to get our art out there, but I didn’t actually start using it until the pandemic. So, around 2021 is when I started tweeting.

And even in the early days, before Elon Musk came along (I can’t remember what year he bought it), there was some bizarre sexism. That was my first foray into how misogyny exists in online spaces. The first barrage of hate I remember came when I commented on my dating experiences. This was offline, using dating apps, and I was surprised how many men thought my PhD was fake, didn’t believe it was real, or got very defensive.

They’d make assumptions about my class and background and stuff like that because I had a PhD. So, I made a comment about that on X and got the most insane wave of hate from men telling me I should fit the classic stereotype: I should be in the kitchen, women can’t be philosophers. I think that was around 2021. And then, before this whole recent Grok stuff, I’d had different swathes of misogynistic abuse.

It seems to be that philosophy, specifically, is something a lot of people still think of as something old white men with beards do. I think the way I dealt with it is that, if anything, in a cynical way, it built my following.

It was the way I responded: I would often troll back. I would engage, and sometimes it was just jokey trolling back, which would only make them angrier. That was sort of energising, and I enjoyed seeing them humiliate themselves and things like that.

But quickly, things turned more sinister. About a year ago, over a year ago now, I’d just posted a meme related to the fact that philosophers can look like me now. That’s when I started getting rape threats and death threats. Then I started to think, okay, I can’t laugh this off any more; this is getting extreme. But I carried on tweeting, and this was after Elon Musk took over as well.

Mainly because X was still a good network for me. If you don’t come from academic or literary worlds, it did open them up. The only reason I’m publishing that book is because I met popular historians on there, whom I would never have met inside university spaces.

I’ve made some lovely friends as well on X, so that’s why I’ve stayed up until recently. Well, that’s why I’m still on there at the moment.

For anyone who’s unsure on the chronology of what’s been going on—as much as you’re willing to share or talk about—could you give us a bit of a synopsis of what you’ve been experiencing recently with Grok?

It may have started at the very end of December 2025. I started noticing it in very early January. I was aware that Grok was something on X and I was aware that people could use it to generate images and videos.

But it was only in early January that I started noticing lots of men were in my replies to other posts, other random things I was saying—I tend to tweet like several times a day. And completely unrelated users were asking Grok to put me in a bikini, but Grok was taking my profile picture, so it wasn’t even a picture I posted.

Sometimes I have a lot of replies and I’m not always able to engage all the time with all of them. These were just ones I started noticing. I then posted a gym progress picture where I’m fully clothed and I’m showing off my biceps because it was in response to another tweet about women showing off biceps, so I did.

And that’s when I really saw an escalation of Grok putting me in a bikini, in lingerie, and then the prompts got more and more specific. It had actually started off with people just having Grok put me in clown makeup and change the colour of my hair, which still felt weird.

One person was asking to decrease my biceps, but then others were increasing other parts of me in this very sexualised way. But this is the thing: there are so many, because it happens in replies to replies to replies. It was sort of a Russian doll situation; there were so many that I haven’t actually gone through them all.

I did see one recently where a user asked to make me pregnant with their child, put a wedding ring on my finger, delete my body count (you know, there’s so much you could say about that) and revoke my licence as well. So the prompts, when even just the bikini ones were bad enough, got very elaborate, and I know some women went through even worse, extremely violent ones as well.

Are the people who send them usually followers or are they just random strangers?

It seems to be a combination. A lot of men are doing it. There was one that used some violent language; it’s not nice. One said “Grok put her in a rape factory”. He, I believe, follows me. A lot of the others were following me too.

So, it’s this weird sadomasochistic situation going on. But a lot of them were just random people. I think it’s because I’ve got a relatively big following—not huge, but enough to get other people who don’t follow me to see my tweets. Some of them might have been bots as well, but a lot of them seem to be real people.

How has this changed your relationship with AI? Obviously AI has been around for a while, but image-generating AI specifically becoming so accessible has changed how people interact with it. Has it changed your opinion in any way?

I think I’ve been aware for several years of the tireless campaigning that many people, especially women, have been doing against image-based abuse more generally. If I’m totally honest, for all the colleagues I have here and at Exeter and other places who work on AI, I have tried to avoid it.

In my day-to-day life, I don’t use it. I might use it if it just comes up on Google and gives you something. But I’ve never used it to write anything or talk to a chatbot or anything like that. I’ve tried to avoid it. Even in my research. AI art and stuff like that—I just wasn’t interested. Not that it wasn’t important. I just feel like the universe dropped me into this.

I had to learn about AI very quickly. And I do think the difference with Grok is that whilst women and children have been subjected to this kind of abusive AI generation on other platforms, I think Grok was slightly different because of its highly accessible nature and the fact that it’s public.

The strange aspect of this was that, not that it’s okay to generate these images for private use, because that is also illegal now, there was an added power move: you’d see the prompt and then the image would appear within minutes underneath it. That’s central to what I’ve described as the effect it has on victims.

It alienates you from your own body, because whilst you know it’s not you, you’re not just seeing the prompt and thinking, “Oh well, an image has been generated of me. That’s not nice”; you’re then also seeing the image itself.

And that has this horrid effect where you experience a kind of destabilised sense of self, because the inputs for these images are your photographs. Photographs have this transparency: a privileged, direct visual contact with the thing in the photograph.

Daisy Dixon sat on a sofa.

Do you feel like you have to change how you post on things you might normally interact with a lot?

Yeah, sadly. I started getting a lot of harassment from real people, so not just bots or random accounts. There was a user on X with a huge following, known for being misogynistic although he says he’s not a misogynist, who got really obsessed and started taking images from my Instagram. There were one or two where I had willingly posted a photo of myself in a bikini on holiday, and he started harassing me.

He kept sending that image back to me about five times a day for over a week, saying that I was a massive hypocrite because he’d seen me on TV talking about this, and claiming that my existence in public, if I show skin, as well as posting images of myself in a bikini, amounts to a sexual assault on men.

This goes back to the original question about how I used to engage with stuff like that. When they say women can’t be philosophers, I feel like I can respond playfully back to that because it’s just an easier thing to get your head around. And it’s also something we grew up with.

I never grew up thinking a woman could be a philosopher. But now this discourse has got so absurd that, philosophically, philosophers are almost the worst people to engage with, because we assume other people are thinking logically. We’re presented with completely illogical, fallacious statements; it’s a completely bad-faith debate and you can’t respond.

So, not only did I feel I couldn’t engage with him on any level, but because he’s got such a big following, loads of other people started mining my online presence for other photographs I’d posted. My initial response was, sadly, one of self-policing: I deleted a photo on my Instagram from a really cute solo Greece holiday of me in a bikini.

I deleted it, and I was thinking, that’s really sad, because I had started to get frightened. But the government eventually moved quite quickly, and with all the other amazing campaigners, I felt like we weren’t alone.

I’m definitely not going to stop posting myself online, but I think a lot of victims did feel like that, and understandably.

How do you feel about the way that the government and police have responded to this issue?

That’s the controversy around all of this. A lot of campaigners, like one such project by Glamour that I’m aware of, have been working on image-based abuse against women for years, because it does disproportionately affect women and girls. I forget the stages in legislation, but a bill was passed back in July making the creation of these images unlawful; it just wasn’t actually enforced as legislation.

A lot of campaigners—quite rightly—said that the victory felt bittersweet because the government had been dragging its heels. One response I had to that was that it was already illegal to share these images. And yet that’s what was happening. A lot of people asked why we’re focusing on Grok, because this has been happening for years.

It might be because of the public sharing aspects. And because while it was already illegal, there was a lot of vagueness around how that legislation would be enforced. I was really pleased to see the UK government get Ofcom involved and give them the right to use their full powers.

I think the threat to ban X was an extreme thing to do, but I think it was right that it was an option on the table. It would have been problematic if they had done that, but money talks. Fines first would make more sense, as 18 million—or whatever it was—is an amount that even someone like Elon Musk would feel.

I was generally pleased with how quickly they moved, but in the grand scheme of things, this is part of Labour’s ongoing crackdown—or at least strategy—against violence against women and girls. I think they were looking at other chatbots as well. The media storm was around Grok, but I don’t think that was entirely Grok that was being focused on.

Do you think this experience has changed how you think about aesthetic injustice and language, and do you think AI is something we should be considering more in these fields?

As I said, I’ve avoided AI in my research, and now I feel like there has been a weird silver lining in the fact I learnt about image-based abuse by being a victim of it rather than doing the philosophy first and then looking. Aesthetic domination is a concept I’ve been using to describe what this has felt like.

It stems from the fact that, for thousands of years, many men have felt a natural entitlement to women’s and girls’ bodies, along with the idea that women and girls are inherently sexualised or maternalised; think of the Madonna-whore complex.

All those patriarchal concepts are seen in artworks, especially in Greek sculptures of women, where women’s bodies were available for male viewers but they had to be shown as if they were ashamed of their nudity. We see sexual objectification from classical sculpture all the way to Renaissance nudes, where women are always shown as white, young, soft-skinned, and in passive and docile situations.

This led to an aesthetic discourse where seeing women represented in that objectified, docile way entrenches women’s inferior status in society. So this AI image abuse is the logical endpoint of that aesthetic domination, where women’s bodies are not ours. Many male users feel this natural entitlement to women’s and girls’ bodies; they know that we’re not consenting, but they’re enjoying the fact that they’re violating that.

I do think that is the case, but I also think there are a lot of men who do not understand consent. A woman refusing them is an act of agency and autonomy; but if they don’t think women generally have that, then the refusal doesn’t make sense to them. This comes into philosophy of language as well.

Feminist speech act theory has been exploring this since the ’90s, with Rae Langton’s work on women’s refusal of sexual contact. It’s not just that a no gets taken as a yes; it can also be the case that the refusal doesn’t even make sense to the hearer, that it’s not even registered as a no.

With Grok and everything, we’ve reached this dystopian, futurist point of more than just historic aesthetic domination, where it’s not just our physical bodies being represented without agency, but our digital selves too.

Our very digital existence is under attack. There’s so much more to say about it. I need to think about it.

Thank you very much.

Interview by Olivia Nilson, Bethan Jones, Arielle Melamed, and Ruaidhrí Gillen Lynch.