Get in the weeds
Most Google Search improvements are manually reviewed by engineers through ‘side-by-side’ comparisons between old and new results...on spreadsheets!
One may expect Google engineering to largely consist of implementing PhD-level algorithms, and while that's sometimes true, much of a search or AI engineer's job involves looking at examples, spotting patterns, hand-labelling data, and other non-scalable, in-the-weeds analysis. This seems to be generally true among the best AI teams.
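To make that concrete, here's a minimal sketch of what a side-by-side review loop might look like; the queries, result lists, and rating labels are all hypothetical, not Google's actual process.

```python
# Hypothetical side-by-side review loop: a human rater compares the old
# and new system's results for each query and labels a winner by hand.
queries = {
    "best hiking boots": {
        "old": ["Boot Review Hub", "Forum thread", "Retailer page"],
        "new": ["GearLab review", "Boot Review Hub", "Fit guide"],
    },
    "remove coffee stain": {
        "old": ["Ad-heavy tips page", "Video"],
        "new": ["Step-by-step guide", "Video"],
    },
}

labels = {}
for query, results in queries.items():
    print(f"\nQuery: {query}")
    print("  OLD:", " | ".join(results["old"]))
    print("  NEW:", " | ".join(results["new"]))
    # The non-scalable part: a person reads both and makes a judgment call.
    labels[query] = input("Better side? [old/new/tie] ").strip().lower()

wins = sum(1 for label in labels.values() if label == "new")
print(f"\nNew system preferred on {wins} of {len(labels)} queries")
```

The spreadsheet is just where these labels end up; the value is in a person actually reading both sides.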
Over time, most people work less in the details and more on the bigger picture. This is clear if you’re managing people or projects: you delegate to others and spend your time on how everything comes together. But it’s also true if you’re doing the same job as before, because you can offload more to machines as technology advances: programming used to be about punching cards to flip specific switches, then writing assembly-like languages to tell the computer which switches to flip, then using higher-level languages to do complex calculations, and now asking ChatGPT to write code for you.
You might think this means you don’t have to think about the small stuff at all. But that’s not true—even if you’re not working on the details directly, you need to understand them in order to guide organizations or technologies effectively. Paradoxically, working at higher levels can increase the value of digging into the weeds, because more complex systems are harder to master from the big picture alone.
You need the weeds more than ever
In the past, organizational or technological systems were designed to be understood with only a high-level view:
In an assembly line, each worker does one specific task in a linear sequence. If you’re managing the factory, it’s easy to diagnose issues: if the product is coming out okay at the end, everyone is doing their job correctly. If not, you can trace the defective part back to a specific place to find the problem.
Traditional computer programs either run or crash; if they run, the same input always returns the same output. If you’re using new code, it’s easy to diagnose issues: run a test and see if it passes. If it fails, it’ll tell you where, and you only have to fix that part.
But increasingly, you can’t count on systems being so simple. Knowledge and service workers don’t do the same task over and over; they use context and judgement to interact with people and information from inside and outside your organization. And AI models use non-linear patterns to make predictions that can’t be easily explained or even repeated.
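The contrast is easy to see in code. Here's a minimal sketch, using a random toy function as a stand-in for an AI model: the deterministic function can be verified once with a single assertion, while the sampling-based one can't.

```python
import random

# Deterministic: the same input always produces the same output, so a
# single passing test verifies this code path once and for all.
def total_cents(quantity: int, unit_cents: int) -> int:
    return quantity * unit_cents

assert total_cents(3, 999) == 2997  # passes on every run

# Stochastic (a toy stand-in for an AI model): repeated calls on the
# same input can return different answers.
def toy_model(prompt: str) -> str:
    return random.choice(["refund approved", "refund denied"])

answers = {toy_model("Can I return this?") for _ in range(20)}
print(answers)  # likely contains both answers -- no single test certifies it
```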
Because these systems don’t have a fixed output, you can’t check from the outside if everything is working correctly. And even if you know something’s wrong, you can’t easily trace it back to a single point of failure, because there are complex interactions. As Paul Graham writes, these are problems where delegation fails; the best way to solve them is to understand the whole system in your own head.
To do that, you need to get in the weeds: work through specific examples; talk to individual customers or workers; make small changes and see what happens.
The value of details
You’d go crazy if you tried to understand everything in such detail, and you don’t need to; cars are very complex, but you can drive one without knowing how every part works. When do you need to deeply understand a system?
When you’re responsible for designing it. A mechanic has to know how the car works to fix it; OpenAI executives have to know how ChatGPT works to release it.
When you’re using it in a customized way. An F1 driver has to understand how the car works to drive it at maximum performance; a marketing leader using ChatGPT to write high-profile ad copy will risk embarrassment if they don’t understand how it works.
You can get an okay understanding of a system from the surface, which may make it tempting to delegate all the details. But by getting in the weeds, you can:
Design better solutions. As a now-famous essay goes, reality has a surprising amount of detail, and the best solutions account for those details. For example, when Amazon was starting to take off in the mid-90s, its customer support teams were swamped by a rapidly growing volume of requests. The company hired PhD students to build automated response systems, but it also made them spend half their time working as customer support agents themselves. This may have seemed like a waste of high-powered engineers’ talents, but it ensured that they’d know what it really takes to answer customers’ questions.
Fix things when they break. You might be able to use a complex system as a black box, but if something goes wrong, you need to understand what’s inside to fix it. For example, medical equipment donated to the poorest areas of the planet often went unused because a part broke and the community didn’t have the expertise to fix it. (Donors started giving simpler equipment instead, which is easier to repair but less powerful.)
Spot anomalies before they become a big deal. You can monitor a system’s overall performance from the outside, but you’ll miss little things: maybe it’s doing buggy things in a small fraction of cases, or maybe a few customers are trying something new that you can pick up on ahead of time. For example, in the early years of Google, employees noticed unusual search volume for the dress worn by Jennifer Lopez at the 2000 Grammy Awards; the company added image search as a result.
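As an illustration of why aggregates hide those little things, here's a toy example with made-up numbers: the overall success rate looks healthy, but slicing by segment exposes a failure you'd only catch by digging in.

```python
from collections import Counter

# Made-up traffic: desktop works perfectly, mobile fails half the time.
requests = (
    [("desktop", True)] * 980
    + [("mobile", True)] * 10
    + [("mobile", False)] * 10
)

# From the outside, the aggregate metric looks fine.
overall = sum(ok for _, ok in requests) / len(requests)
print(f"Overall success: {overall:.1%}")  # 99.0%

# In the weeds, slicing by segment reveals the problem.
successes, totals = Counter(), Counter()
for segment, ok in requests:
    totals[segment] += 1
    successes[segment] += ok
for segment in totals:
    print(f"{segment}: {successes[segment] / totals[segment]:.1%}")
# desktop: 100.0%, mobile: 50.0%
```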
So don’t be satisfied with delegating the details—get in the weeds and understand them yourself.