In Clive Thompson's Coders, Stack Overflow co-founder Jeff Atwood says programming causes people to be nitpicky:
“If you go to work and everyone around you is an asshole, you’re going to become like that. And the computer is the ultimate asshole. It will fail completely and spectacularly if you make the tiniest error. ‘I forgot a semicolon.’ Well guess what, your spaceship is a complete ball of fire because you forgot a fucking semicolon.”1
"The reason a programmer is pedantic is because they work with the ultimate pedant. [...] It’s why you get the stereotype of the computer programmer who’s being as pedantic as a computer. Not everyone is like this. But on average it’s correct.”
It's a neat argument: A human listener can help interpret what you really mean, but a machine has no tolerance for ambiguity. The more you work with machines, the more you get used to zero-ambiguity communication—and the more you use that mode with other humans.
Is it true? (Shrug.) I haven't found any good studies, or even bad studies, so we only have circumstantial evidence. Coder-dominated internet forums have a nitpicky reputation (see: Reddit comments). Another interesting data point: in Because Internet, Gretchen McCulloch shows that online language was ruder in the late 1990s than in the 21st century. She says this is because most people hadn't yet mastered typing in the '90s, so they only had time to write exactly what they needed to say and nothing more. However, it could also be a changing user mix—the early internet was mostly techies, who might speak more “coding language”; normies didn't spend much time online until the 2000s.
And there's no denying that new technologies can change how humans communicate. One example from McCulloch: when all communication was face-to-face, it was customary to address others with a formal greeting, but this wasn't possible over the telephone: how can you address someone when you don't know who's calling? Thus "hello" entered the lexicon, first only for phone calls, but eventually as an acceptable in-person greeting too.
Yale research on robotic systems suggests that talking to machines can have a similar effect: in one study, people who interacted with a robot that was sociable and admitted its mistakes were more empathetic and conversational in a group exercise.
Large language models such as ChatGPT are a revolution in how we communicate with machines. Because they infer meaning from natural language instead of executing instructions literally, they can handle ambiguity—different prompts can get the same answer, and no prompt will result in "does not compute." (Although they have new failure modes, like "I'm not allowed to answer that" and "I don't have an opinion on that.") ChatGPT and its peers also have conversational memory, using earlier questions and answers to inform later responses. In these ways, large language models are more "human-like" than earlier machines—but they have other nuances that may shape future communication.
The decline of pedantry?
Any change caused by large language models will once again put programmers at the forefront, because coding has been the most successful use case so far. (This includes specialized tools like GitHub Copilot, but also plain ChatGPT; you can get very far by asking "write me a Python program that does X" and then asking a few follow-ups for the parts that break.) Why coding? It's probably a problem space well-suited to large language models: software has to spell out every step of its logic in text, so the models' training data isn't missing any context. More to the point, developers are a receptive audience: not only do they like trying cool new software, but a chat program fits seamlessly into their existing workflow. It's already a joke-with-a-kernel-of-truth that programming is mostly copy-pasting from Stack Overflow; ChatGPT is just a new API for sending in questions and copying out code fragments.
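To make the "new API" framing concrete, here's a minimal sketch of that loop using OpenAI's Python client. The prompts and model name are illustrative assumptions; the ChatGPT web interface does the equivalent without any code, and the point is just that follow-ups work by re-sending the earlier turns.

```python
# A sketch of "ChatGPT as a new API for code fragments", using the openai
# Python package (pip install openai). Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "user",
     "content": "Write me a Python program that renames every .jpeg file "
                "in a folder to .jpg."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A follow-up for "the parts that break": the earlier turns are re-sent,
# which is all the "conversational memory" the model gets in this interface.
history.append({"role": "user",
                "content": "That crashes when the .jpg name already exists. "
                           "Handle that case."})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)  # copy out the revised fragment
```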
That doesn't mean coding will become irrelevant. "Ask ChatGPT to write you a fully functioning program" will be a viable path for hobbyists making a toy or automating something for themselves. But to create enterprise-scale software that has to work across extremely complex programs and infrastructure (which is likely the majority of software in the world, and certainly the vast majority of software-that-you-can-get-paid-to-write), you'll still need to be able to read and write code.
(Ironically, this is especially true for creating programs that use AI. Traditional software is fairly easy to evaluate: if the program compiles and returns the correct value, ChatGPT probably got it right; some unlikely edge cases might not be covered, but those can be addressed with unit tests, perhaps themselves generated by ChatGPT. But AI systems are notoriously hard to evaluate because there is no single "correct" result; this means they need to be checked in other ways, which may require understanding the code.)
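As a concrete (and hypothetical) example of that checking step, here's roughly what unit tests for a ChatGPT-generated helper might look like; the function and its edge case are invented for illustration.

```python
# A sketch of "check the generated code with unit tests". The function stands
# in for something ChatGPT might produce; the edge-case test is the part a
# human (or a second prompt) has to think to ask for.
import unittest

def parse_price(text: str) -> float:
    """Hypothetical ChatGPT-generated helper: turn '$1,234.50' into 1234.5."""
    return float(text.replace("$", "").replace(",", ""))

class TestParsePrice(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.5)

    def test_no_symbol(self):
        self.assertEqual(parse_price("99"), 99.0)

    def test_garbage_input(self):
        # The unlikely edge case: the generated code raises ValueError here,
        # which may or may not be what the larger system expects.
        with self.assertRaises(ValueError):
            parse_price("call for pricing")

if __name__ == "__main__":
    unittest.main()
```

The happy-path test is the kind of thing the model will pass on the first try; the edge cases are where a human still has to decide what "correct" means.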
The day-to-day work will be different, though. Instead of typing out every line of code from scratch, it may look more like asking ChatGPT to write parts and iterating with it to debug them, jumping in only to fill gaps in the model's knowledge or to bridge different components. Programmers will need to master how code works but only occasionally write it, the same way an experienced consultant needs to know what makes a good presentation but delegates most of the actual slide-writing to junior associates.
In other words, coders will spend less time communicating with "asshole" compilers; they'll spend more time working with language models that have the capacity to handle ambiguity and collaborate through iteration. So the programming-fosters-pedantry stereotype may not be with us for much longer.
The rise of specification?
Working with large language models is still different from working with humans, however. As people across industries and countries use these tools more regularly, we'll get used to communicating in ways that are tailored to them.
For one thing, ChatGPT doesn't have feelings—it responds in the same way whether you're nice or rude. So "soft skills" of communication like warmth and empathy aren't reinforced, because there's no value in getting the recipient to trust or like you.2
For another, there are some very specific tactics for getting better responses from ChatGPT (sources abound, but OpenAI's documentation and Ethan Mollick's newsletter are two I've found useful). These may change as models and interfaces evolve, but for now they include the following (a sketch after the list puts the first three into a single prompt):
Be very explicit about what you want. For factual questions, give as many details as possible; for creative output, be specific about what form of response you want, whether that's a literal instruction ("avoid repetition") or a stylistic one ("you are an econometrics professor writing lecture notes"). By the time you outline everything you want in a response—at least for complex requests that will involve a lot of output or interaction—your prompt might be several hundred words long.
Ask frequent follow-ups if you don't get exactly what you want the first time. If you ask for a short story and you like the introduction but not the conclusion, just say so; the model's "memory" enables it to rewrite just the problematic section.
Give seemingly obvious advice. If you're asking a logical question, prompt ChatGPT to "show its work"—this nudges the model to break the problem down into a logical chain of steps instead of coming up with a single answer that may be incorrect. If you're asking it to summarize a long document, follow up with "did you miss anything important?"
Don't trust the responses. Large language models are known to make frequent factual mistakes—such as inventing references that don't exist—so users need to verify anything important.
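Put together, the first three tactics look something like this in practice. The prompts below are invented for illustration; they could be pasted into the ChatGPT window or sent through the same chat-completions call sketched earlier.

```python
# Tactics 1-3 expressed as a single chat exchange. The wording is made up;
# the structure is the point.
messages = [
    # Be very explicit: role, audience, format, and constraints up front.
    {"role": "system",
     "content": "You are an econometrics professor writing lecture notes for "
                "second-year undergraduates. Use plain language, define every "
                "symbol, avoid repetition, and keep each section under 200 words."},
    # Seemingly obvious advice: asking for the steps nudges the model to work
    # through the problem instead of guessing a single answer.
    {"role": "user",
     "content": "Explain omitted variable bias with a worked numeric example. "
                "Show your work step by step."},
]

# Frequent follow-ups: after reading the reply, ask for a targeted rewrite
# rather than starting over (the reply gets appended to the history first).
follow_up = {"role": "user",
             "content": "The example is good, but the last paragraph repeats "
                        "the introduction. Rewrite just that paragraph."}
```

The fourth tactic doesn't fit in a prompt at all: checking that the references exist and the arithmetic holds up is still on you.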
If these practices become second nature, what could be the future of human communication? We may be in for a world where you're less afraid of hurting your conversation partner's feelings and more likely to do things that could seem rude today: being more specific about what you want ("in 30 seconds or less..."), asking more follow-ups ("that's not what I was looking for, I meant..."), giving more reminders ("are you sure you aren't forgetting anything?"), and not taking responses at face value. This might make our conversations more informative in the long run, but there will probably be some emotional turbulence on the way there.
1. This seems to be a bit of poetic license. The 1962 Mariner 1 failure is frequently attributed to a “missing hyphen”, but per Wikipedia the mathematicians creating the equations were at fault, not the programmers encoding them. The 1996 Ariane 5 failure (and corresponding ball of fire) was caused by a true software bug, but it was an integer overflow error, which is less evocative than a missing semicolon.
2. This isn't exactly true—if you give ChatGPT prompts that are intentionally designed to induce an emotion like anxiety, its responses will be more anxious—but I think it's mostly true.