Printed from https://www.writing.com/main/books/action/view/entry_id/1108255
Rated: 18+ · Book · Opinion · #2336646

Items to fit into your overhead compartment

#1108255 added February 13, 2026 at 10:23am
Restrictions: None
Slop Shop
So, here's one from a source I don't usually follow, but it came to my attention thanks to Elisa, Stik of Clubs:
     How to Avoid Falling for Fake Videos in the Age of AI Slop
Why fake videos spread so easily, and what actually helps people spot them

We’re entering an era of what’s often called AI slop: an endless stream of synthetic images, videos, and stories produced quickly, cheaply, and at scale.

I have to admit, I'm getting more than a little tired of hearing/seeing the words "AI slop." From what I've seen, AI output has become more polished and professional than about 75% of human-generated content. I think some people might be jealous.

I ain't saying it's right, mind you. Only that it's prettier.

Sometimes these fake videos are harmless or silly, like 1001 cats waking up a human from sleep.

Harmless? You dare to call something that triggering harmless? I don't even allow the significantly fewer than 1001 cats in my household to wake me up.

Other times, they are deliberately designed to provoke outrage, manipulate identity, or push propaganda.

Because human-generated content would never do that. Just like no one ever got lost using paper maps in the time before GPS.

To navigate this new information environment, we need to combine psychological literacy, media literacy, and policy-level change.

And here's where it gets difficult for most of us. Why should we change? It's the world that needs to change, dammit!

The article provides a road map (or, if you prefer, a GPS route) to us changing:

1) Understand Our Own Psychological Biases (Psychological Literacy)

The psychology behind falling for AI-generated misinformation isn’t fundamentally new. The process is largely the same as with other forms of misinformation, but AI takes it to a whole new level: it dramatically lowers the cost and effort required to produce and spread it at scale.

My own simple solution: Right now, most of us have a bias that says "I saw it, so it must be real." I suggest turning that around. Assume everything you see on the internet, or on TV, is fake. Like you're watching a fictional movie or show. The burden of proof thus shifts.

The downside to this (every simple solution has a downside) is that you get so you don't believe anything. And for some of these content generators, that's the goal: make you question reality itself so they can swoop in and substitute their own version. Hell, religion has been doing this for as long as there's been religion.

As Matthew has written before about fake AI accounts, people are motivated to believe what fits their values, grievances, and group identities, not necessarily what’s true. When a video confirms what you already believe about politics, culture, or power, authenticity becomes secondary.

I have noted this before: it is important to be just as, or preferably more, skeptical about the things that tickle our confirmation bias.

The goal isn’t to suppress emotion. It’s to recognize when emotion is being used as a shortcut around verification, and being used to manipulate you.

It sure would be nice to be able to suppress emotion, though. I've felt that way since watching Star Trek as a kid. Spock was my role model.

2) Lateral Reading Is Still the Best Tool We Have (Media Literacy)

When people try to fact-check AI videos, their instinct is often to stare harder at the content itself: examining faces, counting fingers, looking for visual glitches.


Guilty.

I've been seriously considering wearing a prosthetic extra pinkie finger so that anyone who looks at a surveillance photo of me will immediately assume it's an AI fake.

The most effective fact-checking strategy we have isn’t vertical reading (scrutinizing the video itself). It’s lateral reading—leaving the content entirely to verify it elsewhere.

I do that here, especially with notoriously unreliable sources, which, since I try to use free and easily accessible content, is almost everyone these days.

3) Policy Changes and Platform Accountability

Individual skills matter. Community norms matter. But at this point, policy intervention is likely required.


Well, I was trying to be funny with the "It's the world that needs to change" bit above, but I guess they're serious.

Social media platforms are not optimized for truth; they’re optimized for engagement.

I should fact-check this, but it aligns with what I already believe, so I won't.

Conclusion

The most dangerous thing about fake AI videos isn’t that people believe them once. It’s that repeated exposure erodes trust altogether: in media, in institutions, and eventually in one another.


As I alluded to above, it makes us question the very meaning of "truth."

I'd also add this: Be humble enough to know that you can be wrong. Be brave enough to admit when you're wrong. And allow space for the idea that sometimes, your ideological opponents are right.

Not often, mind you. But sometimes.

© Copyright 2026 Robert Waltz (UN: cathartes02 at Writing.Com). All rights reserved.
Robert Waltz has granted Writing.Com, its affiliates and its syndicates non-exclusive rights to display this work.