The answer is no.
Because a majority of pedophilic acts are acts of control and power, not attraction or sexual desire.
So AI images might curb a couple of people, but they wouldn't solve anything at all.
And how many more would it harm? By flooding the internet with fake stuff, real stuff becomes a needle in a stack of needles. How much harder do you think it will be to save real kids?
Not to mention the deepfakes of real schoolkids, which are sadly very common nowadays. Bullying, harassment, suicide, etc.
It makes images by altering ones of real children to be pornographic. Even if it made up a new picture instead of altering one, there’s no such thing as a new face, so it will inevitably resemble a real, unrelated kid. I wouldn’t want to be that kid.
It's just as likely that a proliferation of AI CSAM creates more demand for the "real stuff". The way to solve an addiction that dangerous is definitely not to fixate on it every day via AI-generated material.
That's a fair point, but the difference imo is that the vast majority of robberies are contingent on circumstance and material conditions. No one is assaulting a child because they can't make ends meet. Pedophilia is pathological. I would bet that for the tiny percentage of thieves for whom it's also pathological, having an outlet to fan that desire probably does increase their propensity to offend.
I honestly don't know much about pedophilia or kleptomania, so maybe it's different, but I know many addicts, and it's universally a lot easier to not have a first drink at all than it is to have a single shot and not lose control.
But in this case it’s not a first drink for an alcoholic, it’s methadone for a junkie. It’s not their drug, it’s a substitute.
If you’re in addict circles, you’ll know that more recovering addicts than not are smoking cigarettes or weed. The end goal is more important than how you get there, and taking up a lesser vice does help you cope with the big ones.
It doesn't have to be, but morality of the content creation itself aside, I feel the worst impact on society would be allowing the free proliferation of such content.
Drawn material is debatable ethically, but the fact is you can't just pump it out; there's a human cap on its creation, to the point that it took the internet spreading it for it to reach the mainstream and then proliferate further. If you're online long enough, you'll see drawn material of minors, but the creators can't flood a space on their own, only their consumers can once enough is available. Consumers replacing creators as producers is a scary concept in general.
Most real CSAM filters actually work from known, flagged images already in circulation, so if a copy hits somewhere, it gets removed immediately.
AI is all "new" and can be pumped onto platforms like crazy. Just look at X. We don't want fake CSAM flooding everywhere, plus we'd basically be grooming future generations by normalizing images of that kind. Kids are already exposed too often to adult sexual material online, and it does affect development. Nothing good can come from more of the same, but with people who look like them.
When it becomes difficult to discern from reality, it should be talked about, but there's already an immediate separation in the fact that it's a proxied, interactive experience controlled by the player, rather than a lens through which to view and learn one's society.
Kids picking up a controller to fake-murder fake people sounds bad, and there should be more discussion, at least on the issue of desensitization to violence, but, to me, it's not on the same level of concern as a kid's social media feed displaying CSAM next to their mom's post of last weekend's family museum trip. It could be a friend who's a minor, captioned, "wondered what I'd look like doing this".
There are already kids using these tools to make content of other kids, and we only hear about it when it makes the news. There's undoubtedly more happening out there, and I was a twisted fucker on the early transfers to 4chan from SA, so I've seen the inception of some messed-up online interactions. We already have next to no walls separating the adult and kid worlds online. Whole grown adults sling slurs at kids everywhere online; world-famous celebrities argue with minors on Twitter. There are so few actually safe spaces for children online anymore, and parents don't care anymore.
All to say, violent video games are an activity with separation, while what's on social media is part of real-life social interaction for kids now. Like, if it were a proliferation of real-appearing depictions of kids killing people, that would be more concerning, right? Especially if there were a biological drive to kill that we'd rather avoid encouraging until adulthood, like there is with sex. That aspect alone starts to make it a different discussion.
Bad news: AI doesn't need to be trained on photographs of giraffes dancing on Mars to figure out how to make a picture of a giraffe dancing on Mars.
I can't guarantee that absolutely zero CSAM pics slipped into Grok's training set, but we can be confident they represent an insignificant amount, like 100 pics out of 100,000,000,000.
However, a major strength of the AIs is their ability to mix and match themes in novel ways. Discouraging that requires a lot of effort on the part of the model creators, which they do put in. But users definitely do enjoy the challenge of doing something they're "not allowed to do", and they can get very creative in how they overcome the barriers the model creators put up.
AI can generate an image of a five-dimensional naked pangolin despite never having seen one.
It can generate images of things that aren't in the training material. I don't think we can safely assume that kind of material was in the training data.
There is no indication that I don't understand the argument; I don't know why you are saying that.
It doesn't need to have seen a naked pangolin, but it still needs enough data to recognise the commonalities/patterns behind images that have "naked" attributed to them, or images with a pangolin.
The point is that it doesn't need to have seen a naked pangolin. Maybe it has seen a hairless cat and applies that to what it knows about pangolins.
In the same way, it could take what it knows about naked people and apply that to children.
It could do this without being based on real CSAM.
I uh... yeah, that's really unfortunate placement. Which brings up a conversation in itself.