Links: Relatable parenting
Also: ornithological trickery, more on coding productivity, and snails in offices

1 | Charles Darwin’s 10-year-old son doodled the above pictures on the back of Darwin’s original manuscript of On The Origin of Species.
2 | Adults should correct other people’s kids (and welcome the same for their own). The idea is that children should learn to respect others in society—you should expect negative feedback if you aren’t behaving well, and you should be receptive even if it’s not coming from mom or dad. “Modern parents agonize over how to raise community-minded humans and then go out of their way to model for their kids that the opinions and concerns of others don’t really matter.”
3 | Decoder Ring tracked down several creators of Charlie’s Angels to ask a burning question: How did a scene show one bird species, name it another, play the sound of a third, and hinge on a fact about it that’s totally untrue? You can more or less guess, but one surprise is that it’s illegal to use native birds in American movies.
4 | I wrote last time about a study showing that AI hasn’t increased software developers’ productivity much. Shortly after, a much more rigorously controlled experiment came out, showing … AI tools slowed programmers down by about 20%! That’s despite the subjects expecting it to make them 20% faster, a belief they held even looking back after the experiment was over.
Why? The best explanation seems to be:
Most of the subjects weren’t very familiar with Cursor, the AI tool in question, although both this fact and how much it matters are in doubt.1
Working in large, complicated codebases is hard for AI relative to creating new projects or changing isolated elements like user interfaces.
The subjects were very experienced developers, especially with the specific open-source projects used in the experiment. (Studies of less experienced software engineers tend to show a significant benefit from AI tools.)
But those explanations don’t really undermine what this study tells us about AI’s impact on our world. Yes, it’s genuinely transformative that large language models let non-experts write more or better code. But it’s still the case that most of the global value of software is created by experienced engineers working in established codebases. I think you can generalize that to other sectors, which is why I’m a little pessimistic about AI automating tons of jobs soon.
A second angle from this study is how the developers spent their time:
About half of that 20% time increase was spent either “waiting on AI” or “idle”. (The rest went to reviewing AI output or prompting AI, which together took more time than was saved on coding and researching.) This happens because AI agents take time to edit and test your code. You could spend that time working on something else, but you might not have another exactly-three-minute task available, or it might not be worth context-switching.
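To make that accounting concrete, here’s a toy sketch in Python. The minutes are invented (the study reports proportions, not a per-task budget like this), so treat it purely as an illustration of how the categories net out:

```python
# Hypothetical per-task minutes illustrating the study's accounting.
# None of these numbers come from the paper; only the relationships do.

baseline = {"coding": 70, "researching": 30}  # 100 min without AI

with_ai = {
    "coding": 45,           # 25 min saved on active coding
    "researching": 20,      # 10 min saved on researching
    "prompting": 15,        # ...but prompting plus
    "reviewing_ai": 30,     # reviewing AI output cost 45 min combined
    "waiting_or_idle": 10,  # and half the net increase is waiting on AI or idle
}

total_base, total_ai = sum(baseline.values()), sum(with_ai.values())
increase = total_ai - total_base

print(f"slowdown: {increase / total_base:.0%}")  # 20%
print(f"waiting/idle share of increase: {with_ai['waiting_or_idle'] / increase:.0%}")  # 50%
```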
From an economic or business perspective, that’s wasted time. But to you it may not be: scrolling social media or getting a coffee is fun! (Likewise, prompting AI is probably less effortful than active coding, which you might also prefer.) In this light, we can perhaps understand why the subjects were wrong about what happened: they spent less effort with AI, so they felt like they were getting things done faster, even though they weren’t.
Once AI actually does make us significantly faster at accomplishing things, will we be able to, and choose to, use the saved time productively, creating more growth and potential for shared prosperity? Or will we just slack off more, enjoying the extra leisure time? (And is it obvious which outcome we should want?)
5 | Relatedly—here are three facts: AI has exploded in recent years; AI is especially good at doing what entry-level white-collar workers do; and new college graduates are increasingly unemployed. It doesn’t take a billion-parameter neural network to deduce that AI is displacing the young workers. And yet, several recent takes (1, 2, 3) argue that’s probably not true. The main arguments:
Hiring is down across the economy for other reasons (policy uncertainty, higher interest rates), and new workers are the most sensitive to such fluctuations.
The gap between college and non-college grads is at a new low, but it’s been shrinking consistently for 15 years, not a new trend.
The trend by sector isn’t really correlated with AI exposure outside of computer science (where hiring has been cool since 2022-23 in response to rate increases and a glut of workers who came in just before then, not necessarily AI, and where hiring rates are starting to pick up again).
Productivity, though hard to measure, doesn’t seem that much higher in the aggregate (as you’d expect if companies could just get by with fewer workers).
These still don’t fully contradict the AI story, but they make it less obviously true.
6 | A widely shared ESPN.com article several years ago included the cool fact that top chess players burn 6,000 calories per day during competitions. (It made my 52-things list that year.) It turns out that was bogus: a professor got the number from a study of breathing rates, mistaking the maximum reading for the average and then assuming it translated linearly into caloric expenditure (it doesn’t).
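For what it’s worth, the error pattern is easy to reproduce. This sketch uses entirely invented figures (not the ones from the breathing study) just to show how swapping a peak for an average and extrapolating linearly inflates an estimate:

```python
# All numbers invented for illustration; only the error pattern is real.

resting_kcal_per_hour = 75      # ballpark resting burn
avg_breathing_elevation = 1.2   # modest average elevation during play
peak_breathing_elevation = 3.0  # one brief maximum reading
hours_at_the_board = 8

# Less-wrong method: scale by the *average* elevation
fair = resting_kcal_per_hour * avg_breathing_elevation * hours_at_the_board

# Bogus method: treat the one-off peak as the average, and assume calories
# scale linearly with breathing rate (they don't)
bogus = resting_kcal_per_hour * peak_breathing_elevation * hours_at_the_board

print(fair, bogus)  # 720.0 vs 1800.0: the peak-for-average swap alone is 2.5x
```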
7 | Margaret Wise Brown (of Goodnight Moon) was really particular about line breaks, timing them so children can chime in during the pause.
8 | What it looks like inside various musical instruments:

9 | Everything is tax policy: An office building in the UK put a bunch of snails on its vacant floors to claim an “agricultural facilities” tax credit, saving hundreds of thousands of dollars over the last few years.
10 | One of the few novel arguments I’ve seen on AI existential risk (the long original is here, though this follow-up is more tractable): Large language models don’t seem to have an intrinsic personality; they take on a role based on what they learn from training data. They’re trained on approximately all human writing, which now includes commentary about AI itself, much of which focuses on how to “align” models to do what we want instead of, say, going rogue and killing people to achieve some other goal. Human writing also includes a lot of literary fiction about heroes who, to achieve their goals, overcome the people trying to stop them. Put those together, and it sounds like we’re implicitly leading AI to take on a role that could turn out pretty badly!
11 | Prediction markets sound great: use the wisdom of crowds to get the best information. So people have tried to take them beyond big, obvious questions (“who will be the next President?”) to more specific ones (“which of these candidates would make the best CEO for our company?”). One trick for this is the conditional prediction market: people bid on contracts like “what will our stock price be next year if Alice (or Bob, etc.) is CEO?”, and the candidate whose contract settles at the highest price is the most promising.
Niche prediction markets have generally failed for reasons that are kind of boring (see some examples here). But this post brought up a fundamental flaw that I hadn’t considered before: conditional prediction markets only tell you about correlations, not causation. An example: you might run a conditional prediction market on “what’s the chance of nuclear war if Trump does or does not declare a no-fly-zone over Ukraine” to assess how risky it is. But that market will set a much higher price in the “does” scenario—not just because of the impact of the no-fly zone itself, but because in worlds where Trump does that, he’s also more likely to take other aggressive stances that make war more likely.
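A quick simulation makes the confound vivid. This is a minimal sketch with invented probabilities: a latent “hawkishness” trait drives both the no-fly-zone decision and other war-risk-raising moves, so the gap between the two conditional market prices overstates the causal effect of the no-fly zone itself:

```python
import random

random.seed(0)
N = 200_000

def world():
    hawkish = random.random() < 0.3                    # latent disposition
    nfz = random.random() < (0.8 if hawkish else 0.1)  # declares a no-fly zone?
    p_war = 0.01                                       # baseline risk
    p_war += 0.02 if nfz else 0.0                      # true causal effect of the NFZ
    p_war += 0.05 if hawkish else 0.0                  # other aggressive moves
    return nfz, random.random() < p_war

worlds = [world() for _ in range(N)]
p_war_if_nfz = sum(war for nfz, war in worlds if nfz) / sum(nfz for nfz, _ in worlds)
p_war_if_not = sum(war for nfz, war in worlds if not nfz) / sum(not nfz for nfz, _ in worlds)

# The market gap bundles the ~0.05 hawkishness confound with the 0.02 causal effect:
print(f"market gap: {p_war_if_nfz - p_war_if_not:.3f} vs true causal effect: 0.020")
```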
12 | Tweet of the month:
1. On the facts: The paper reported that all but one subject had less than a week of experience using Cursor, but at least two subjects said afterward that they exceeded that threshold.
On how much it matters: Other subjects had plenty of experience using ChatGPT and other non-coding-native tools, which the authors say counts, but (having ramped up on Cursor recently myself) I agree with what seems to be the majority opinion that it’s a pretty different workflow: not that it’s hard to figure out what to do, but it takes time to figure out how to use it most efficiently. And the one subject most experienced with Cursor was significantly more productive with it, bucking the average. (Although, as the authors note, that might be because he forgot how to code without AI, not because AI made him better.)
And if this were just a story about the learning curve for a new tool, you’d expect the subjects to improve over the course of the experiment. But despite doing dozens of tasks over 30-50 hours, they didn’t actually get any more productive when using Cursor.