It's actually less scary that AI videos will exist to trick people, and more scary that people will watch a genuine video you send them of war crimes, their favorite politician saying psycho things, etc., and then react by claiming that what you're showing them is AI generated. It's the forever excuse for nationalists to cheer for "their team" while "their team" participates in the indiscriminate destruction of innocent human lives.
Well said!
The scariest part of AI has never been that fake videos/photos can be made, but that it causes us to doubt everything we see.
To be fair, those people were already the ones who would claim verified statistics are fake, or part of a conspiracy. What is more scary is that many will now decide only to believe what 'respected' media companies tell them, oblivious to the fact that those companies have always manipulated us, and most have no issue sharing fake videos because they don't bother to fact-check when something agrees with their overlords.
That’s part of what makes AI scary
Very well said! I've thought about this a lot regarding bad faith actors claiming that a lot of the footage coming out of Gaza is faked.
Wow, so much here. Thank you for putting all this together.
Thank you J.P.
Much appreciated.
Good points all well made!
I would only add that AI is arguably associated with vampires, since the tech bros see it as part of their bonkers quest for transhumanism.
https://open.substack.com/pub/noelkeith/p/tranquil-piece-of-mind-vol-2-no-3?r=4c7psw&utm_medium=ios
Another aspect of late decaying capitalism. In Marx’s words: “All that is solid melts into air”.
Actually, trust around AI has been on my mind a lot as an instructor. I marked one student's paper as a temporary "0" because of 60% or more AI usage, and after emailing and working with the student one-on-one, I realized that the AI-detection tool had more flaws than I previously thought. A lot of students use Grammarly because their high-school teachers tell them to, so I encourage them not to use it at all. But then I've had people say "oh, it was just Grammarly" and realized later it was in fact AI generated. I know plagiarism has always been around, but it's making me have to analyze my students and distrust them first before we can build trust, and it's honestly horrible. Luckily, I've chosen to have at least some in-person time in my courses, and we do a lot of handwritten exercises in class to help me determine where they're at. But it's like a weird quiz they have to do for me so I can know their online work is theirs. It's exhausting.
All the more reason we need to get off line, get outside and keep building IRL networks to resist this fascist mess we're in.
J.P., I read your column religiously. You give me hope! (Usually!)
Here's something I wrote on this topic:
From: The Revolution Will Be LIVE: Top 10 Reasons to Organize in the Real World, Antonia Scatton, May 04, 2025
"AI is also accelerating the inevitable collapse of trust in Internet content. Disinformation was limited by the capacity of live people working in troll farms. AI will blow that up exponentially. You already cannot believe what you see with your own eyes. AI will continue to train itself on own crappy output until the Internet is hopelessly polluted. People are already starting to unplug."
https://reframingamerica.substack.com/p/the-revolution-will-be-live
I've had a similar thought, and noticed a shift in my behavior too. Recently, while scrolling through fake stuff that looks real and real stuff that looks fake, I noticed that not knowing what's real anymore made me hop off. Maybe there's a silver lining after all.
I do think things will get a lot worse before they get any better; perhaps that's natural. Pretty soon the internet is going to look like a monstrous digital fever dream (if it doesn't already) and a lot of people will simply opt out. What's needed in order to shift the culture, however, is a mass movement of people giving up their smartphones (though that's easy for me to say given I don't own one and, as such, am able to see what they've done to people). Anyway, I'm logging off and going for a walk lol
Disconnection from reality is the definition of psychosis, the lack of "reality testing", and so our thinking is bound to be filled in by delusions, as is so obvious in Trumpworld. We have willingly lost our damn minds. Last one out, turn off the lights.
If this isn't the Rubicon Moment where we fully enter a post-truth society, then I don't know what is. What even is "the truth" in a world where it's easier than ever to fabricate and spread fake information? There isn't an easy answer to this, but it's not going to be pretty watching people embrace countless lies, not out of malice, but because filtering out fake information and finding what's true will be like trying to find a needle in a haystack for most people.
JP, I have to disagree with you on this one. You complain that AI is bringing us to a world without trust, but it seems to me that even before AI we already had a world without trust, if you've been paying attention.
Think about it: There are several statements that are believed by half of our society and disbelieved by the other half. (I won't name specific examples because your beliefs might not match mine.) This means that someone is lying to half of us.
We have to make our decisions based on evidence at a higher level than just what we can see with our eyes. For instance:
Which wars are based on lies, and which wars are being fought for legitimate reasons? I can't be certain, but it appears to me that MOST of the wars are based on lies, and therefore we need to change the system that decides which wars to have.
And which economic system has good results, and which has bad results? Again, your answer to that has to be based on higher levels of thinking.
I don't have simple answers for these questions. But these questions arose before we ever had anything like AI.
There is a separate, second problem with AI: What about all the electricity and water it requires? I would rephrase that question: Who is paying for the electricity and water, and who is profiting from it? If capitalism made any sense, it would pay for such expenses. I won't pursue that question further here, except to say that this is a separate question from the question of pursuing truth.
Point of information: yes, gen AI uses a bit more power than, e.g., reading web text or viewing existing video. But those are in any case very small parts of our daily energy expenditure. For example, making a coffee takes energy equivalent to fifty hours or so of gen video (using the value from your chart)!
The *actually* concerning energy expenditure will come if industrial processes start to get increasingly automated (this is on the horizon by 2030), and naive production expansion could then rapidly dwarf existing energy consumption. Things could get very dodgy if this is allowed to impinge on ecosystems or atmospheric conditions (it could become humanly inhospitable very quickly).
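For what it's worth, here's a minimal sketch of the back-of-envelope behind a ratio like "one coffee equals roughly fifty hours of gen video". Both energy figures below are stand-in assumptions chosen for illustration, not the actual values from the column's chart:

```python
# Back-of-envelope check of the "one coffee ~ fifty hours of gen video" ratio.
# Both figures are stand-in assumptions, NOT the chart's actual values.

COFFEE_WH = 30.0         # assumed: energy to heat water for one coffee, in watt-hours
VIDEO_WH_PER_HOUR = 0.6  # assumed: energy per hour of generated video, in watt-hours

hours_per_coffee = COFFEE_WH / VIDEO_WH_PER_HOUR
print(f"One coffee is worth about {hours_per_coffee:.0f} hours of generated video")
# -> One coffee is worth about 50 hours of generated video
```

Under these assumed inputs the arithmetic reproduces the stated fifty-hour ratio; plug in the chart's real numbers to get the actual figure.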
I too was recently upset to be unable (I knew it was coming!) to tell cheap AI video from real footage. It's ruinous to our epistemic commons, as you've discussed.
For what it's worth, there are many technical solutions, but so far suppliers and governments haven't made the moves needed to get set up. Our communication infrastructure will need improvements, and it won't get fixed overnight. My organisation is running a fellowship (www.flf.org/fellowship) among other initiatives targeting this growing epistemic challenge. It's surmountable, we just have to demand (and contribute to) fixes!
All of this was entirely predictable. And the most likely end stage is dystopia as the world burns. We don’t have it in us to do the necessary cleaning / revolt / nigh-religious jihad against the oligarchy. Colour me incredibly sceptical, at least.