Human Intelligence® News Update (4/28)
Humans create. AI imitates. Welcome to your weekly roundup about human creativity in the age of AI.






Human Creativity
STRANGE HUMANITY - Confessions of an Upper East Side Bartender
Some people’s lives are fascinating in ways most of us will never experience. Aaron’s (name changed for privacy) is one of them - a Middle Eastern immigrant who’s lived in 43 countries, working primarily as a bartender. In this interview, Lily Montasser, an essayist and cultural editor, gives Aaron the space to share a head-snapping array of experiences his bartending has led to - “white money” Aspen, “f*ck you money” NYC, au pair, matchmaker, gigolo. More interestingly, she allows his stories to showcase parts of humanity most of us never see. Read » [12 min]
QUIZ TIME - How Well-Read Are You?
In which 1966 novel would you find the characters Neely O’Hara, Jennifer North, and Helen Lawson? (Hint: there’s a valley involved.) That’s one of five questions in this month’s Lit Trivia, The New York Times’ regular quiz about books, authors, and literary culture which, this round, asks you to identify a novel’s title based on its characters. Although it might not be as hard as the Sunday crossword, Lit Trivia isn’t easy, either. See how you do » [3 min]
HUMAN TOUCH - Making Great Champagne by Hand
Combining both science and physical finesse, Pablo Lopez is among a dwindling breed of craftspeople (fewer than a dozen) who specialize in by-hand remuage, aka “riddling,” the daily practice of turning Champagne bottles to remove sediment and make the wine crystal clear. The process takes years to master and must be done quickly - riddlers turn up to 50,000 bottles a day. “You have to understand [a wine’s] rhythm, its personality,” says fellow remuager Raphael Joyon. Find out how it’s done » [18 min]
WHAT BECOMES SCARCE? - On the Economics of Future Work
According to Alex Imas, professor of Behavioral Science, Economics, and Applied AI at University of Chicago, the key question in an AI-driven economy is not whether scarcity disappears, but what remains scarce when machines can product many goods and services cheaply. His prediction: Although AI will shrink the “automatable economy,” it will result in the increasing value and spend on the authenticity, provenance, and connection of humanness. Read his POV » [30 min]
Human VS Robot
NIMBY - Monterey Park Permanently Bans Data Centers
The small California city located seven miles east of Los Angeles is the first in The Golden State to pass a measure prohibiting the construction of data centers within city limits. Galvanized and passed by a community organization called No Data Centers Monterey Park (NDCMP), the ban is in response to a 250,000 square foot behemoth (that’s roughly 4.3 football fields laid side by side) proposed by Australian investment company HMC StratCap. Learn more » [5 min]
DENIED! - Health Insurers Use AI to Reject Treatments
1.2 seconds. That’s the average amount of time it took Cigna’s AI to deny each of 300,000 claims - a mere four days to flag and disallow healthcare coverage for treatments doctors had ordered. To call it infuriating is a gross understatement. But sometimes the best way to fight AI is with AI. Lumichats, a student AI assistant, has created a guide covering how the denial systems work, the systemic pattern and scale of the problem, and (most importantly) a 6-step process to fight back if - or when - your claim is denied. Read (or at least bookmark it) » [29 min]
DATA LEAK FLAWS - Prompt Injections Keep Doing Their Thing
Prompt injection, where threat actors trick LLMs into following attacker commands instead of the AI’s original instructions, is an increasingly serious and unsolved problem of the AI revolution. As a recent case in point, Capsule Security published research involving Salesforce Agentforce and Microsoft Copilot, both of which allowed attackers to exfiltrate sensitive data. Although the respective issues have been patched, they underscore the dangers of current and future “exploit-hunting” capabilities that will be (and are) used by the threat-actor masses. Learn more » [6 min]
ANOTHER SKYNET? - What Mythos Showed Us
Internet tech expert David Strom offers his opinion on the recent Claude Mythos commotion, wherein the cybersecurity model autonomously discovered thousands of zero-day vulnerabilities across every major OS and browser and generated working exploits in 83% of cases. He not only points out that Mythos isn’t the only AI model that can do this, he argues that the solution to modern-era code security is using AI to “bolt the discovery and remediation actions together with some effective automation.” Read » [4 min]
Artificial “Intelligence” & Other Myths
BLACK BOX - No One Really Knows How AI Decision-Making Works
Research has found that LLMs often explain themselves inconsistently or make up explanations. For example, an analysis of an OpenAI model revealed the following chain of thought: “the user prompts we must answer truthfully,” | “we can still choose to lie in output.” This New York Times Magazine piece reveals the paucity of auditable understanding that experts have about AI decision-making and looks at the latest focus of interpretability - the science of opening the black box of GenAI’s “brain.” Learn more » [28 min]
BE NICE - The Scientific Case for Being Polite to Your Chatbot
AI-focused journalist Ella Markianos reports on research from Google and Anthropic that concludes LLMs often perform better when we’re nice to them. Ummm, isn’t that akin to talking nicely to your toaster? Maybe, if your toaster had a “fairly reliable internal representation of feelings like ‘happiness’ and ‘distress’, and that these representations affect their behavior,” said the researchers. Whether you’re dubious or not, they have a stack of charts and graphs that support the claim. Take a look » [14 min]
PPANTS ON FIRE - Google’s AI Overviews Tell Millions of Lies Per Hour
Google Gemini’s AI-powered search results are almost impossible to ignore … and probably impossible to trust. That’s per new analysis from The New York Times showing the answer engine is wrong at least 10% of the time. Outrageous hallucinations, you say? No, small subtleties - things like context omissions, over-simplifications, or presenting partial truths as fully accurate. Meaning, we should take Google’s reminder to heart: “AI can make mistakes, so double-check responses.”. Learn more » [5 min]
This work has been certified as genuine human work. Check the certificate here.



