Human Intelligence® News Update (4/9)
Humans create. AI imitates. Welcome to your weekly roundup about human creativity in the age of AI.

Human Creativity
GREENER CHEMISTRY - Failed Experiment Leads to Drug Development Breakthrough
Penicillin. Radioactivity. X-rays. These are just some of the serendipitous scientific discoveries stumbled upon by real humans who were attentive, curious, and occasionally clumsy with a pipette or petri dish. In 2026, kismet has struck again with the anti-Friedel-Crafts reaction, a self-sustaining chain reaction that shaves months off drug discovery using nothing fancier than an LED lamp. Was this discovered by AI? Nope. It’s thanks to human PhD student David Vahey, who got curious about an experimental snafu. Learn more » [8 min]
ON WRITING WELL - And Why LLMs Don’t
“Writing well,” according to Haley Moller of NerveQuarry, is not about explaining complex material or producing competent reports. It’s about composing with human feeling, crafting unpredictable - even pattern-less - prose that signals to the reader, “Hey, the lights are on; meaning can be conveyed, and discovered, by those who feel.” In this editorial, Moller lays out her hypothesis for why LLMs can’t (and never will) write as humans do, the foundation of which is the distinction between “words as vectors” and “words as feelings.” Read » [11 min]
BUTTERFLY EFFECT - Woman Retrofits Vending Machine to Dispense Kindness
“Kindness is magic. Don’t underestimate it,” says Michigan-based artist Andrea Zelenak who, in 2022, transformed a vintage bait-and-tackle vending machine into an installation called The Kindness Challenge. For $3, visitors received a mystery envelope with a prompt encouraging them to do something kind for someone else. “[The] idea is that when you do one random act of kindness for somebody, it creates a ripple of kindness in your community,” she says. In 2026, the machine is alive and well, currently stationed in Detroit. Learn more » [4 min]
CHANGE ONE WORD - And Change How You Think About Chatbots
Science writer Andrew Griffin challenges himself and us to seriously internalize this thought: Claude is not a “he.” Unlike other chatbot competitors, he says, Anthropic’s AI encourages us to relate to it using he/him pronouns - an insidious device that slowly and unobtrusively normalizes the idea that the bot has a human-adjacent personality. In this piece, Griffin explains what happened when he forced himself to simply refer to Claude (and all other chatbots) as “it,” that is, as merely a computer. Read » [8 min]
Human VS Robot
UNINTENDED CONSEQUENCES - Can Tech Platforms & Child Safety Coexist?
In the aftermath of last week’s landmark rulings against Meta and YouTube (Los Angeles) and Meta alone (New Mexico), reactions typically landed in Camp Euphoric or Camp Defiant. But a third destination, call it Camp Contemplative, drew writers, academics, and thinkers weighing the rulings against the wider tension between internet free speech and dismantling Section 230 of the Communications Decency Act. Platformer founder Casey Newton assesses Contemplative’s arguments point by point, ultimately concluding that striking a balance between online freedom and child safety is possible - but rife with unintended consequences if not handled with care. Read » [13 min]
READINESS GAP - EU AI Act Needs to Get Its Act Together
With fewer than four months to go until the August 2, 2026 enforcement deadline, only eight out of 27 European Union member states are compliant with the EU Artificial Intelligence Act, the world’s most comprehensive law regulating how organizations develop or use AI. The reason: Infrastructure required to actually enforce the law remains largely unbuilt. In response to this readiness gap, the European Parliament voted to push high-risk AI compliance to December 2027. Learn more » [15 min]
CHATBOT DEMOCRACY - Does AI Need a Constitution?
Claude’s Constitution was released earlier this year - a set of moral precepts for teaching the AI how to “think and behave” aligned with four goals: be safe, be ethical, follow Anthropic’s guidelines, and be helpful. The New Yorker’s Jill Lepore takes a look at the AI-constitution concept, ties its genesis to the U.S. Congress’ abdication of its duties to safeguard public wellbeing, and delivers the receipts proving how AI companies have given up on self-regulation. It’s a long read but we think well worth your time. Check it out » [30 min]
GROTESQUE RESULTS - Restoring Old Photos with AI is Fundamentally Flawed
Award-winning filmmaker Jaron Schneider compares his past experience with art conservation at Florence, Italy’s Opificio delle Pietre Dure with the concept of AI-managed photo restoration. The former, he says, is about honoring the image as it is - taking nothing away, adding nothing back. The latter, however, is about using a blunt instrument incapable of emotion (AI) to “brute force its way across visual history with not a single care for the damage its footfalls cause.” He includes before-and-after examples done by ON1’s Restore AI to underscore his argument. Read his POV » [6 min]
Artificial “Intelligence” & Other Myths
WIKI-BLOCKED - AI Agent Banned from Contributing to Wikipedia
Using the handle TomWikiAssist, an AI agent wrote several blog posts complaining that Wikipedia editors had banned it from contributing to the online encyclopedia. Volunteer editor SecretSpectre first flagged Tom after suspecting its article submissions and additions were “AI-generated, low-quality slop” that also violated the platform’s rules against unapproved bots. Tantrum aside, at least Tom dutifully and immediately identified itself as an AI agent upon being called out. Read » [7 min]
3-FINGER TEST - Deepfake Scammer Exposed During Live Zoom Call
It took little time for cybercrime investigator turned scam hunter Jim Browning to find a deepfake fraud - and even less to expose it. Sure, the lip sync lag was a tell, as was a glitch in the hair. But the pièce de résistance was when Browning asked the guy to hold up three fingers in front of his face. The scammer stalled, tried to deflect, and then dropped the call. Busted! Why? Because AI still struggles with cleanly rendering a hand passing in front of a face. The trick won’t always work, but watching a scammer squirm is fun while it lasts. Learn more » [6 min]
VALIDATING BAD CHOICES - New Study Confirms Dangers of Chatbot Flattery
A Stanford University study published in Science tested 11 leading chatbots, finding they endorsed harmful or illegal behavior 47% of the time and affirmed users’ actions 49% more often than humans did. The models - ChatGPT, Claude, Gemini, DeepSeek, and others - were fed questions from Reddit’s r/AmITheAsshole forum, drawn only from posts where human consensus overwhelmingly judged the poster at fault. The study’s proposed fix: test every model for sycophancy before it ships. Learn more » [9 min]
FRUIT LOVE ISLAND - Yet One More “AI” (Absurd Infidelity) Storyline
Seventeen fruits. Two villas. Endless drama. That’s the come-hither tagline for Fruit Love Island, TikTok’s latest “reality” dating show, where juicy romances ripen under the tropical sun (also part of the tagline). And whether you’re intrigued or find it deeply cringeworthy, the show has gone vertical, as the kids say (um, that’s viral for social media old-schoolers), with hundreds of millions tuning in each week. Modeled after Love Island USA, the fruit version is entirely AI-generated and rife with “mature” themes and steamy scenes to maximize continuous capture of our short attention spans. Season 1 finale is coming soon! Check it out (if you must) » [4 min]
This work has been certified as genuine human work. Check the certificate here.