How Tailscale's Co-Founder Programs With LLMs

David's personal experience using generative models for programming

David Crawshaw:

I followed this curiosity, to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be yes, generative models are useful for me when I program. It has not been easy to get to this point.

Good to see David laying this out in the open. As technologists, we need to be curious and press on beyond the "easy" part to extract the benefits of a technology.

He uses LLMs in three ways:

  1. Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. ... This is the place to experiment first.
  2. Search. If I have a question about a complex environment, say “how do I make a button transparent in CSS” I will get a far better answer asking any consumer-based LLM, o1, sonnet 3.5, etc, than I do using an old fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people....)
  3. Chat-driven programming. This is the hardest of the three. This is where I get the most value of LLMs, but also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle I don’t like that....
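The CSS question in point 2 is a good illustration of why search works so well: the answer is short and easy to verify by eye. A minimal sketch of the kind of answer a model typically returns (the class names here are hypothetical, not from David's post):

```css
/* Hypothetical example: the kind of answer an LLM gives
   for "how do I make a button transparent in CSS". */
.transparent-button {
  background: transparent;  /* removes the fill */
  border: 1px solid #333;   /* optional: keep a visible outline */
}

/* Alternatively, fade the whole element, label and all: */
.faded-button {
  opacity: 0.5;
}
```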

Tools like Windsurf and Cursor can operate in an agentic mode, generating as much code as possible and even testing it. I have found them to be fine when starting a fresh project, but in an existing codebase they make a mess. At least so far.

There are tons of insights in the Hacker News discussion of the post. Some interesting ones:

dewitt

One interesting bit of context is that the author of this post is a legit world-class software engineer already (though probably too modest to admit it). Former staff engineer at Google and co-founder / CTO of Tailscale. He doesn't need LLMs. That he says LLMs make him more productive at all as a hands-on developer, especially around first drafts on a new idea, means a lot to me personally.

gopalv

If you are good at doing something, you might find the new tool's output to be sub-par compared to what you can achieve yourself, but often the lower-quality output comes much faster than you can generate it.

namaria

As we keep burrowing deeper and deeper into an overly complex system that allows people to get into parts of it without understanding the whole, we are edging closer to a situation where no one is left who can actually reason about the system and it starts to deteriorate beyond repair until it suddenly collapses.

Though this comment was made with respect to the economy, it is true of a large codebase too. You need to learn to reason about the whole system to debug its parts. It also holds as you grow in your career to the top levels: as a CXO, you need to understand and reason about the different parts of your business to see how your business unit can be effective.

mhalie

Communication skills are critical in extracting useful work or insight from LLMs. The analogy to communicating with people is not far-fetched. Communicating successfully with a specific person requires an understanding of their strengths and weaknesses, their tendencies and blind spots. The same is true for communicating with LLMs.
...querying LLMs has made me better at explaining things to people

highfrequency

They dramatically reduce the activation energy of doing something you are unfamiliar with. Much in the way that you're a lot more likely to try kitesurfing if you are at the beach standing next to a kitesurfing instructor.
While LLMs may not yet have human-level depth, it's clear that they already have vastly superhuman breadth.

WhiteNoiz3

Don't use it for something that you aren't able to verify or validate.