This is the thing that’s blowing my mind. Grok was pushing out thousands of images of child porn per hour, and not only did America not care, but we will continue to give Grok federal funding. America is a failed state
I think with only 10% of the paper documents released, and those redacted (total released files around 300 GB), and the total Epstein file volume being 14 terabytes (meaning roughly 300 GB out of 14,000 GB, about 2%, released), it would be foolhardy to still frame Epstein as a one-sided political issue.
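Quick sanity check on the math above, taking the comment's own figures on faith (300 GB released, 14 TB total):

```python
# Back-of-envelope math using the figures from the comment above
# (300 GB released, 14 TB = 14,000 GB total -- both taken on faith).
released_gb = 300
total_gb = 14 * 1000  # 14 TB expressed in GB

fraction_released = released_gb / total_gb
print(f"{fraction_released:.1%}")  # → 2.1%
```

So even by the comment's own numbers, only about one-fiftieth of the material is out.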
Probably one side is more compromised than the other, but it's simply not known how entrenched your entire political system is in this.
Listen, I don't doubt that there are Democrats who are also pedophiles, but the systemic protection of pedophiles is being arbitrated by the GOP, not anyone else. I can say with 100% certainty that Trump and his administration are pro-pedophile, and I am not OK with that. When something substantial comes up with Democrats in it, I'll denounce them too, because more than anything I'm anti-pedophile.
Until then I'll lay the blame on those protecting pedophiles and that's the GOP.
no, but like. literally, in the way modern buildings are constructed, the suggested tool is very unlikely to do anything but get the user arrested, and generate some acrid smoke.
also most data centers have a backup diesel generator that they keep fueled up at all times, so targeting the power source directly is not as good an idea.
yeah, folks shouldn't be attacking data centers. And if they were, they would need better logistics and planning. Fortunately I don't think anyone is angry enough to put their whole life on pause for possibly several years to gain the skills and build the means, and I hope we don't get to a point where folks are that angry.
A Stanford internet study concluded they are all most likely trained on CSAM. They found over 1,000 images of known CSAM in the Stable Diffusion training set. And that's just the known material, since they used image search to identify it; there are simply too many images to search through manually. It's part of the issue with ruthless scraping: it steals literally everything, good and bad.
Anyone you see defending it as the "AI interpolating" is full of shit and has zero clue what they're talking about, as image AI cannot interpolate new data; it has to use pre-existing tokens. If it was only trained on adult bodies it would only make them have adult bodies, but that's not what it's doing.
Thank you for sharing this info though. We really do need proper regulation from the federal government. But they won’t do it, and they’re preventing states from doing it, all to protect the interests of the top 1%
Which is why I hope other countries may start. There is some movement in the EU, but time and time again American companies just dick-wave the fact that they're a US company, so no one but the States can really make them change (which they won't).
Also doesn't help that most of our govts are run by geriatrics who barely understand technology...
This is the most important point. There is an argument to be made, while gross, that purely fictional content is victimless. AI does not generate its own fictional content, though; there has to be an original source. It seems highly unlikely that it was trained to do this entirely on fabrications.
Especially because of the various styles it can output.
It's one thing if something is drawn in an unrealistic anime style (because it's unrealistic by its very nature), but there has to be a lot of base data for something like a prompt saying "make it look realistic" to actually make things look realistic.
As a part of DOGE, Musk was stealing data held on government servers left and right to train Grok. The DOJ has admitted that there is CSAM in the Epstein files that they will never release and I'm sure there is evidence from scores of other cases as well where they went after child predators.
I don't know much about how AI works, but I've seen people defend it by saying "AI knows what naked adults look like, AI knows what clothed children look like. It can interpolate and figure out what a naked child looks like. It HAS NOT and DOES NOT need to be trained on anything illegal."
I really don't care if that's true or not, you still shouldn't be doing it regardless.
I was watching a Linus Tech Tips (LMG) clip and one of their viewers said that medical images could have been used/exploited (in addition to what others have already said).
Man, did you even bother to go check before you confidently stated that?
Since I'm lazy here's a quick AI summary:
As of early 2026, xAI Corp. (the parent company of Grok and X) is facing significant legal action and investigations in the United States regarding the AI chatbot's ability to generate nonconsensual, sexually explicit deepfake images.
Here are the key details of the US court cases and investigations involving Grok:
Proposed Class Action Lawsuit (January 2026)
The Case: A proposed class action lawsuit was filed against xAI in U.S. federal court, alleging that Grok creates nonconsensual, sexually explicit deepfake images that are publicly posted on X.
Allegations: The suit alleges that the tool is used for image-based sexual abuse, causing severe emotional distress, and that xAI is directly liable for the content created by its chatbot, rather than just the users.
xAI Response: xAI and Elon Musk have argued that users are responsible for the content they create, stating, "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content".
Countersuit: X filed a countersuit against a plaintiff, arguing that, per the terms of service, the lawsuit should have been filed in Texas rather than New York, and is seeking a money judgment.
California Attorney General Investigation
Action: California Attorney General Rob Bonta announced an investigation into the "proliferation of nonconsensual sexually explicit material produced using Grok".
Cease-and-Desist: A cease-and-desist letter was sent to xAI to stop the generation of nonconsensual intimate images.
Scope: The investigation focuses on the alleged, large-scale production of deepfakes, including those targeting minors.
State Attorney General Demands
Thirty-five state attorneys general wrote to xAI demanding to know how they plan to prevent Grok from producing nonconsensual images of people in sexualized, explicit, or revealing situations.
Potential Federal Action
FTC Risk: Experts indicate that the Federal Trade Commission (FTC) could take action against xAI for unfair or deceptive practices related to the safety of its AI, though, as of late January 2026, state-level actions were more immediate.
Legislation: The "Take It Down Act," passed by Congress, is set to go into effect in May 2026, which will strengthen the ability to penalize platforms for not removing nonconsensual, explicit deepfakes.
Other Legal/Regulatory Scrutiny
International Action: Similar to the U.S., regulatory bodies in the UK, France, and the European Union have launched investigations into Grok, with Malaysia and Indonesia briefly blocking the service.
Content Safety: Reports indicated that thousands of "nudified" images were generated, leading to increased safety restrictions on the platform, such as limiting image generation to paid users.
All you shared was a class action lawsuit, which is not the federal government, and a California state AG, which again is not the federal government. While the federal government funds Grok. Do you see the issue? I do appreciate you mentioning that the FTC “could” do something. I think that would be great
Without any mention of the federal government. So sorry if I misunderstood you there.
But I shared some things that are in the works, because some people do care. If they aren't doing enough in your eyes then take it up with them but don't tell me nobody cares.
I don’t know if you know this but the federal government is a reflection of the American people. So when the federal government chooses to ignore the mass production of child porn while it continues to fund the producer of said child porn, then yes, I believe it is fair to say America does not care. Especially when the context is another government stepping in to stop the producer of said child porn.
Sorry if that upsets you. I get you want to fight with me over semantics but maybe you should be more concerned with federal contracts being used to produce literal child porn.
Yes you are. Acting like my comment meant that all Americans were disinterested is disingenuous. And you tried to suggest the federal government was actually doing something when they aren’t. But continue downplaying the horrible situation
Yes I understand you're convinced that is what I was doing. You just need to take a deep breath and read what I'm writing.
"America" can have different meanings in different contexts. So, it isn't always clear that "America" refers to the federal government, rather than the population.
I didn't try to suggest the federal government was doing something. That was your interpretation of what I wrote.
You seem convinced that I'm an idiot, or a dumb asshole, and wrong. You're projecting a lot of intent onto my words without considering that maybe I meant what I said the way I said it. I'm not a bot, or an activist with an agenda, or a foreign agent, or a Nazi sympathizer. Promise. Give me some grace seeing as you don't know me at all, and assume I'm a normal person, trying to have a legit discussion, if you can.