John Herrman

@jwherrman.bsky.social

8.0K followers
568 following
630 posts

posting about posts at new york magazine. have me on your podcast!

Top posts

John Herrman·Aug 19

just checking in with one of the most influential philosophers alive today

John Herrman·Jun 26

The real AI arms race is between one another 🥰 nymag.com/intelligence...

Stories about AI deployment tend to fall into a few categories. You've got productivity stories, where workers — most visibly at tech companies — talk about how AI tools are making parts of their jobs easier or harder, increasing their workload or simply making them redundant and taking their jobs. You've got top-down management stories, where AI use is suggested or mandated by leaders demanding more efficiency, who are either betting that a great deal of automation is possible within their firms or who are just worried about getting left behind.

Then you've got the stories in which people are more clearly using new AI tools against one another in an escalatory way. Job hunters, now able to generate custom applications instantly, flood employers, so employers turn to AI to manage the glut. Spammers and other bad-faith actors flood social media with near-infinite material, pushing the platforms to double down on automated moderation. Rapidly generated presentations lead to rapidly scheduled meetings recorded and automatically transcribed by AI assistants for machine summarization and analysis. Dating-app users generate chats with AI only to be filtered and then responded to by someone else using AI. The starkest and most consequential such story is what's happening in education: Teachers dealing with students who generate entire essays and assignments are turning to AI-powered plagiarism detectors, or getting pitched on ed-tech software that solves cheating with surveillance (with, of course, the help of AI).

These are stories about AI, but they're also stories about broken systems. Students flocking to ChatGPT in the classroom suggests that they see school in terms of arbitrary tasks and attainment rather than education. The widespread use of AI in job hunting drives home the extent to which platforms like LinkedIn, which promises to connect job seekers with employers, have instead installed themselves between them, pushing both sides to either pay up or dishonestly game their systems. A dating app where users see opportunity in automated flirting must already be a pretty grim space. If Facebook can be so quickly and thoroughly overwhelmed by AI-generated imagery and bots, it probably wasn't much of a social network anymore — a low-trust platform better at monetizing users than connecting them. Smaller-scale AI arms races like these don't take hold unless users (or workers, or students) have already been pitted against one another by systems they don't respect. In an uncomfortably large portion of modern life, especially online, that's exactly what's happened.

Latest posts

John Herrman·6d

funny memory about this story: Balaji reached out privately to say how much the boys at a16z loved it, after which they all spent the next few years letting Twitter drive them completely insane www.nytimes.com/2018/08/15/m...

John Herrman·6d

a sharper way to make this argument would have been to simply point out: Anthropic is building in a world where Pete Hegseth — a genuine and obviously incompetent maniac — has meaningful power over it

John Herrman·Feb 26

it was interesting to see a bunch of AI people suddenly start making BlueSky jokes about model scraping at Anthropic's expense — it's almost as if the LLM theft critique is grounded in something real and significant! nymag.com/intelligence...

Anthropic is a ripe target here as far as jokes about hypocrisy are concerned: It's pitched as the conscientious AI lab, but it also settled last year to pay out $1.5 billion to authors whose pirated books it used for training. These posts represent a fair critique of all of the big players, which have ingested enormous quantities of material created by others, often without permission, to build proprietary models over which they now claim something like authorship. The scrapers have become the scraped, their own powerful distillations of the world's information sampled, reconstituted, and distilled once more.

The backlash here isn't just about that irony. Anthropic is, at the moment, the AI lab to beat and the company whose products are most responsible for recent speculation about how AI might blow up the economy. As a result, mockery wasn't coming just from people whose content had been scraped by Anthropic or who generally object to the way LLMs are trained. It was coming from AI insiders who see big firms as pulling the ladder up, or trying to fortify their early dominance with the help of regulators, copyright law, and government funding. Within the story of an international arms race, model distillation can be cast as a threat to national security and American economic competitiveness. Within some of the other stories about AI, it might look more like fear of competition in general: of cheaper models; of free, open-source models; and of the rapid commoditization of capabilities that, just a few months prior, were unique and prohibitively expensive to develop. The AI firms called out by Anthropic — DeepSeek, Moonshot, and MiniMax — make models that are open to use not just in China but in the U.S. and elsewhere and that are already competing for some of the same customers.
Moonshot's latest Kimi models seem to perform, for many functions, about as well as the best American models did in the middle of last year. DeepSeek, the arrival of which briefly sent the AI industry and the stock market into chaos, is expected to release a major model update imminently, which may help explain why the big labs are all speaking up at the same time.
John Herrman·Feb 19

hawking this link again. elite social media radicalization is vastly underemphasized compared to mass "misinformation," etc nymag.com/intelligence...

John Herrman·Feb 18

three answers suggesting you might want to think about the question a little bit and one releasing you from ever thinking about anything again

an elon musk tweet comparing answers to the question "is the US on stolen land"
John Herrman·Feb 17

the trump administration is doing everything it can to clear the regulatory and legal path for prediction markets, which it says are definitely NOT sports betting, no way, couldn't be nymag.com/intelligence...

By specifically referencing hedging, Selig draws a parallel between what Kalshi and Polymarket allow people to do and, say, how a farmer minimizes the risk of an unpredictable harvest by selling grain-futures contracts. (Worried that a given candidate winning an election might hurt your business? Place a hefty bet on him or her on prediction markets to balance your risk profile — so goes this argument.) In doing so, the CFTC chair sounds an awful lot like prediction-market executives, who prefer to emphasize how their field is more useful to the world than, say, DraftKings. "I just don't really know what this has to do with gambling," Kalshi CEO Tarek Mansour told Axios last year. "Every contract has a hedging use case, even the less obvious ones," argued Kalshi's Samantha Schwab — of the Schwabs — around the same time. (Schwab has since been appointed deputy chief of staff for the U.S. Treasury Department.)

As a narrow regulatory matter, these assertions now seem temporarily settled in that it's the position of the government that Kalshi and Polymarket have nothing to do with gambling. (For lack of a better place to mention it, I'll state here that Donald Trump Jr. is an adviser to both companies and an investor in at least one.) But the resolution of these questions arrived just as it was becoming abundantly clear — in the numbers but also to anyone who has engaged with these platforms at all or knows anyone who has — that most of the action on the big prediction platforms revolves around sports. From The Wall Street Journal:
Kalshi and Polymarket, the biggest prediction-market platforms, have attracted attention for offering outlandish bets such as whether the Trump administration will buy Greenland ... But sports remain the overwhelming majority of their business, giving a sports-betting-crazed nation a new way to participate in America's favorite new pastime.
John Herrman·Feb 13

There's a real "let's do everything they told us we couldn't" thing going on at Meta now. The facial recognition glasses bring to mind an old story I haven't seen mentioned today: Facebook built a feature like this a decade ago and didn't release it www.businessinsider.com/facebook-bui...

John Herrman·Feb 13

wrote about That AI Essay, the "scare trade," and safety researchers deciding to quit in public nymag.com/intelligence...

Imagine you work in AI alignment or safety; are receptive to the possibility that AGI, or some sort of broadly powerful and disruptive version of artificial-intelligence technology, is imminent; and believe that a mandatory condition of its creation is control, care, and right-minded coordination at corporate, national, and international levels. In 2026, whether your alignment goal is keeping chatbots from turning into social-media-like manipulation engines for profit or maintaining control of a technology you worry might get away from us in more fundamental ways, the situation looks pretty bleak. From a position within OpenAI, surrounded by ex-Meta employees working on monetization strategies and engineers charged with winning the AI race at all costs but also with churning out deepfake TikTok clones and chatbots for sex, you might worry that, actually, none of this is being taken seriously and that you now work at just another big tech company, but worse. If you work at Anthropic, which at least still talks about alignment and safety a lot, you might feel slightly conflicted about your CEO's lengthy, worried manifestos that nonetheless conclude that rapid AI development is governed by the logic of an international arms race and therefore must proceed as quickly as possible. You both might feel as though you — and the rest of us — are accelerating uncontrollably up a curve that's about to exceed its vertical axis.
This is genuinely fun stuff to think about and experiment with, but the people sharing Shumer's post mostly weren't seeing it that way. Instead, it was written and passed along as a necessary, urgent, and awaited work of translation from one world — where, to put it mildly, people are pretty keyed up — to another. To that end, it effectively distilled the multiple crazy-making vibes of the AI community into something potent, portable, and ready for external consumption: the collective episodes of manic acceleration and excitement, which dissipate but also gradually accumulate; the open despair and constant invocations of inevitability by nearby workers; the mutual surveillance for signals and clues about big breakthroughs; and, of course, the legions of trailing hustlers and productivity gurus.
This last category is represented at the end of 26-year-old Shumer's post by an unsatisfying litany of advice: "Lean into what's hardest to replace"; "Build the habit of adapting"; and the assurance that, while this all might sound very disruptive, your "dreams just got a lot closer."
The essay took the increasingly common experience of starting to feel sort of insane from using, thinking, or just consuming content about AI and bottled it for mass sharing and consumption. It was explicitly positioned as a way to let people in on these fears, to shake them out of complacency, and to help them figure out what to do. In practice, and because we're talking about social media, it seemed most potent and popular among people who were, mostly, already on the same page. This might explain why it has gotten a bit of a pass — as well as a somewhat more muted response from the kinds of core AI insiders whose positions he's summarizing — on a few things:
Shumer's last encounter with AI virality, which involved tuning a model of his own and being accused of misrepresenting its abilities, followed by an admission that he "got ahead of himself"; the post's LinkedIn-via-GPT structure, format, and illustration…
John Herrman·Feb 12

I realize I'm taking two 49% chunks out of my audience here, minimum, but every time I see a clip of Clavicular interacting with fans in public I feel like I'm watching a version of this video where everyone has visible traps youtu.be/mloc0jh6kV4

John Herrman·Feb 12

ten years ago, @joeljohnson.com wrote the, uhh, foundational short story about this www.theawl.com/2015/06/hell...
