
Hey Grok, create an image of a damaged Amazon parcel…
No, I’m not a criminal. I am simply highlighting how easy this has become.
That’s a prompt fraudsters can use, and exactly what people and companies are up against in the post-truth era. They can type it in, generate a convincing photo of a ripped-open, empty box with the familiar smile logo, attach it to a refund claim, and suddenly the retailer is left questioning whether the damage was real, staged, or entirely fabricated. Welcome to January 2026, where generative AI has turned everyday gripes like dodgy deliveries into potential scams, blurring the line between genuine complaints and calculated fraud.
“Post-truth” started as feelings beating facts. Now AI has cranked the volume. Deepfakes jumped from roughly half a million online in 2023 to about 8 million by the end of 2025, with growth hitting close to 900% in some pockets. We’re hopeless at spotting cloned voices; studies show people often prefer the fake because it sounds smoother. The liar’s dividend rules: call anything inconvenient “probably AI-generated” and watch evidence vanish in a puff of plausible deniability.
The sting really lands when the fakery hits your bank account. AI-powered fraud isn’t science fiction; it’s costing people and businesses billions. Consumer scam losses in the US recently topped $12.5 billion, with nearly 60% of companies reporting higher hits year on year. Here in the UK, Action Fraud and banks are seeing the same surge. Deepfake scams have gone through the roof: voice clones pretending to be your son in a “mum, I’ve been arrested” panic, or fake video calls of the boss demanding an urgent transfer. One high-profile case saw a finance worker tricked into sending £20 million after a convincing deepfake video meeting with “colleagues.” Globally, experts reckon deepfake-enabled fraud could soon be worth tens of billions a year. Scammers love it: dirt cheap to produce, endlessly scalable, and terrifyingly believable. That urgent call from your gran? Might be a bot with her voice nicked from three seconds of an old voicemail.
But here’s the silver lining that stops this being pure misery: we’re hitting back with the very same tech that’s causing the headache. Deepfake detection is getting seriously clever. Tools like Reality Defender, Sensity AI and Hive scan for dodgy lip-sync, lighting glitches, pixel artefacts and metadata oddities, often clocking 95–98% accuracy on known fakes. Forensic reports now spit out clear confidence scores and visual tell-tales so you can see exactly why something smells fishy. Platforms, governments and even banks are rolling out real-time checks, mandatory watermarking and “detectors of detectors.” It’s an arms race, all right, but the defenders are gaining ground faster than the scammers can invent new tricks.
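If you fancy a peek under the bonnet, the simplest of those signals, metadata oddities, is easy to sketch at home. This toy Python snippet is my own illustration, nothing like what Reality Defender and friends actually run; it just asks whether a JPEG carries the EXIF segment that real cameras almost always embed (the filename is hypothetical):

```python
def has_exif(path):
    """Crude 'metadata oddities' check for a JPEG file.

    Genuine camera photos almost always embed an EXIF (APP1)
    segment near the start of the file; many AI-generated or
    heavily processed images don't. A toy heuristic, not proof
    either way.
    """
    with open(path, "rb") as f:
        head = f.read(64 * 1024)  # EXIF lives in the first few segments
    # \xff\xd8 is the JPEG start-of-image marker
    return head[:2] == b"\xff\xd8" and b"Exif" in head

# has_exif("suspicious_parcel_photo.jpg") -> True or False
```

A missing EXIF block proves nothing on its own (screenshots and social-media re-uploads strip it too), which is exactly why the commercial tools combine dozens of signals before scoring a fake.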
So where does that leave us in this post-truth puddle? Wary, but not defeated. The era isn’t about giving up on truth. It’s about getting better at guarding it. Pause before you share that explosive clip. Check it against a few reliable sources. Pop a suspicious video or audio clip into an AI tool and ask it to spot the signs (try: “Analyse this link for deepfake clues: lip-sync issues, lighting problems, or weird artefacts”). Above all, remember: seeing isn’t believing any more, but a healthy dose of “prove it” costs nothing and packs a punch.
Stay curious and have a great week,
Jamie
Why AI Isn’t Replacing Affiliate Marketing After All
“AI will make affiliate marketing irrelevant.”
Our new research shows the opposite.
Levanta surveyed 1,000 US consumers to understand how AI is influencing the buying journey. The findings reveal a clear pattern: shoppers use AI tools to explore options, but they continue to rely on human-driven content before making a purchase.
Here is what the data shows:
Less than 10% of shoppers click AI-recommended links
Nearly 87% discover products on social platforms or blogs before purchasing on marketplaces
Review sites rank higher in trust than AI assistants

