Commonplace Notes

A commonplace note is a personal compilation of knowledge - quotes, stories, and my observations on articles I read on topics that interest me. The topics range across wealth, learning, networking, life, spirituality, homeschooling, and more.

I try to explore ideas out in public (in line with my learning framework), sometimes even without agreeing with them. Think of these as scrap notes.

When is AI useful in the real world?

Milan Cvitkovic:

AI is useful for a real-world task only if the cost for the AI to do the thing and for you to check its work is less than the existing solution.

Milan provides a few good and bad examples.

Though the post was written in Oct 2020, it seems to hold true even now.
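
To make the rule concrete for myself, here is a rough sketch of it as a comparison of costs. The parameter names and the numbers in the example are my own illustration, not Milan's:

```python
def ai_is_useful(cost_ai_does_task, cost_you_verify, cost_existing_solution):
    """Milan's rule of thumb, written as a simple comparison.

    All costs are in the same unit (hours, dollars, attention).
    The parameter names are illustrative; the post states the rule in prose.
    """
    return cost_ai_does_task + cost_you_verify < cost_existing_solution


# Hypothetical example: drafting a policy document.
# AI draft: 0.5 hours, my review: 1.5 hours, writing it myself: 4 hours.
print(ai_is_useful(0.5, 1.5, 4.0))  # True -> the AI route is cheaper overall
```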

As a Hackernews comment said,

don't use it for something that you aren't able to verify or validate

and

the lower quality output comes much faster than you can generate

I've been using GenAI tools for designing this blog, coding an app, editing blog posts, and writing policy documents for an IT organization. In each of these cases, the GenAI tools (Windsurf or Open WebUI) generate 60-70 percent of the output quickly, and I can validate and verify that output just as quickly. Then I start to fill in the rest.

In this process, I've discovered that even though GenAI models are probabilistic, their output can be treated as deterministic [1] because it can be validated.


  1. Deterministic: factors cause things to happen in a way that cannot be changed ↩︎

aieconomy

Confidence is the first step to having a life you need

Richard Branson:

To be honest with you, I had never heard of the ‘Virgin Islands’....I had been madly trying to come up with a way to impress a girl I had fallen for, so I rang up the realtor, and expressed my interest. We were still in the early days of Virgin Records, and I by no means had the cash to buy an island.

Richard Branson bought Necker Island for a mere $180,000, despite its $6 million asking price. You would think it's all due to his negotiation skills, and I agree he must be quite the negotiator to reach such heights in business. But what struck me most was his confidence in making that first call.

Imagine seeing a $6 million price tag when you can only afford $100,000. Instead of walking away, he picked up the phone, arranged a visit to the island, and boldly offered his limited budget.

I wish I had that kind of confidence.

For example, I run a podcast and sometimes spot the perfect guest. Yet, I lack the courage to reach out to them on Twitter, LinkedIn, or email. But Branson's story teaches me the value of taking that first step.

Coming back to Branson's story, a year later, the island's owner hadn't received any better offers and called Branson again. This time he could offer more - $180,000 - and sealed the deal. The lesson here is clear: have the confidence to act even when there's a gaping chasm between what you can give and what's asked for.

I want to embrace this boldness in my own life. Maybe I won't always succeed in negotiating, but at least I'll have given it my best shot.

Paranoid is ok; Paralysis is not ok.

insights, coach, action

How Tailscale's Co-Founder Programs With LLMs

David Crawshaw:

I followed this curiosity, to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be yes, generative models are useful for me when I program. It has not been easy to get to this point.

Good to see David laying this out in the open. As technologists, we need to be curious and push beyond the "easy" part to extract the benefits of a technology.

He uses LLMs in three ways:

  1. Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. ... This is the place to experiment first.
  2. Search. If I have a question about a complex environment, say “how do I make a button transparent in CSS” I will get a far better answer asking any consumer-based LLM, o1, sonnet 3.5, etc, than I do using an old fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people....)
  3. Chat-driven programming. This is the hardest of the three. This is where I get the most value of LLMs, but also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle I don’t like that....

Tools like Windsurf and Cursor can operate in an agentic mode, generating as much code as possible and even testing it. I have found them to be OK when you start a fresh project, but in an existing codebase they are a mess. At least so far.

There are tons of insights in the Hackernews discussion of the post. Some interesting ones:

dewitt

One interesting bit of context is that the author of this post is a legit world-class software engineer already (though probably too modest to admit it). Former staff engineer at Google and co-founder / CTO of Tailscale. He doesn't need LLMs. That he says LLMs make him more productive at all as a hands-on developer, especially around first drafts on a new idea, means a lot to me personally.

gopalv

If you are good at doing something, you might find the new tool's output to be sub-par over what you can achieve yourself, but often the lower quality output comes much faster than you can generate.

namaria

As we keep burrowing deeper and deeper into an overly complex system that allows people to get into parts of it without understanding the whole, we are edging closer to a situation where no one is left who can actually reason about the system and it starts to deteriorate beyond repair until it suddenly collapses.

Though this comment was made about the economy, it is equally true of a large codebase: you need to be able to reason about the whole system to debug its parts. It also applies as you grow in your career toward the top levels. As a CXO, you need to understand and reason about the different parts of your business to see how your business unit can be effective.

mhalie

communication skills are critical in extracting useful work or insight from LLMs. The analogy for communicating with people is not far-fetched. Communicating successfully with a specific person requires an understanding of their strengths and weaknesses, their tendencies and blind spots. The same is true for communicating with LLMs.
...querying LLMs has made me better at explaining things to people

highfrequency

They dramatically reduce the activation energy of doing something you are unfamiliar with. Much in the way that you're a lot more likely to try kitesurfing if you are at the beach standing next to a kitesurfing instructor.
While LLMs may not yet have human-level depth, it's clear that they already have vastly superhuman breadth.

WhiteNoiz3

don't use it for something that you aren't able to verify or validate.

aieconomy

Delta Dollar Decision Rule

Rajesh Jain:

Set a threshold below which one will not waste thinking time – the answer should be a Yes. For me, that threshold is $100 (Rs 7,500). This simplifies decisions like buying a book, booking a better seat on a flight, going to a better restaurant for a business meeting – the answer is always Yes. The same applies in business also – the decision threshold can be higher. Always look at the benefits and the delta, rather than the absolute.

He gives another example of this "Delta Dollar Decision".

As he says, don't spend mindlessly. Have a few categories (self-improvement, education ...) where this becomes the default.

every small spend adds up – but there are some categories where the delta needs to be seen on the large spend base, rather than as an integer by itself.
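
A minimal sketch of how I read the rule; the threshold value, the default-yes categories, and the function name below are my own illustration, not Rajesh's:

```python
THRESHOLD = 100  # dollars; Rajesh uses $100 (Rs 7,500) - pick your own number
DEFAULT_YES_CATEGORIES = {"self-improvement", "education"}

def delta_dollar_decision(delta_cost, category=None):
    """Say yes without deliberating when the *delta* (not the absolute spend)
    is below the threshold, or when the spend is in a default-yes category."""
    if category in DEFAULT_YES_CATEGORIES:
        return "yes"
    if delta_cost < THRESHOLD:
        return "yes"
    return "think it through"

# A better seat on a flight: the delta over the base fare is $80 -> automatic yes.
print(delta_dollar_decision(80))
# A $5,000 course is above the threshold but sits in a default-yes category.
print(delta_dollar_decision(5000, "education"))
```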

frameworks

You can’t optimize your way to being a good person

Sigal Samuel on Vox so eloquently talks about the challenges of codifying morality and optimizing it. You should read the whole article — it is so well written.

Here are the parts that resonated with me. I clipped these to think a little more deeply:

Optimization requires you to have a very clear and confident answer to the question “What is the thing you should be optimizing for?”

Growing up, we almost always optimize for what our parents and the culture we grew up in value. It could be money, fame, or education. If we’re lucky, we get to figure out what really matters to us. And if we’re truly lucky, we get to build our lives on those values and cherish such a life.

What the “right” thing to do is will depend on which moral theory you believe in. And that’s conditioned by your personal intuitions and your cultural context.

Since there is one holy book, one would assume all Christians would agree on what is "right." Far from it. Even on larger theological questions, there’s no consensus. Can women teach in churches? Should we celebrate Christmas, even though it’s not in the Bible? I think the answers are "yes" and "no" to these questions, respectively, but there are many other questions for which I don’t have clear answers — even though I’ve thought deeply and tried to understand different points of view. One lifetime won’t be enough to come to an understanding of what is "right."

The moral view endorsed by a majority of people? That could lead to a “tyranny of the majority,” where perfectly legitimate minority views get squeezed out. Some averaged-out version of all the different moral views? That would satisfy exactly nobody. A view selected by expert philosopher-kings? That would be undemocratic.

Sigal so eloquently put forth the conundrum of morality. It’s "yes" and "no" to all three questions at the same time. That’s the paradox we navigate in our daily lives. We should have the freedom to choose to go with the crowd or to stand alone in our own choice. Not everyone has the ability to do so — not always, not every time, but that is part of being human.

“When Rosa Parks refused to give up her seat on the bus to a white passenger in Alabama in 1955, she did something illegal,” they write. Yet we admire her decision because it “led to major breakthroughs for the American civil rights movement, fueled by anger and feelings of injustice.”

Christ broke many "laws" that had been established up to that point in time. Breaking "unjust" laws is one of the unwritten rules of the New Testament. That unwritten rule has inspired many activists to bring about social reforms across cultures and through time.

Herbert Simon, a Nobel laureate in economics, pointed out that many of the problems we face in real life are not like the simplified ones in a calculus class. There are way more variables and way too much uncertainty for optimization to be feasible. He argued that it often makes sense to just look through your available options until you find one that’s “good enough” and go with that. He coined the term “satisficing” — a portmanteau of “satisfying” and “sufficing” — to describe opting for this good enough choice.
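
A toy sketch of the difference between the two approaches, with a made-up scoring of options (nothing here comes from Simon; it just illustrates the idea):

```python
import random

options = [random.uniform(0, 10) for _ in range(1000)]  # e.g. candidate cars, scored 0-10

def optimize(scores):
    """Examine every option and return the single best score."""
    return max(scores)

def satisfice(scores, good_enough=8.0):
    """Return the first score that clears the 'good enough' bar;
    if nothing clears it, fall back to the best seen."""
    best_so_far = float("-inf")
    for score in scores:
        if score >= good_enough:
            return score
        best_so_far = max(best_so_far, score)
    return best_so_far

print(optimize(options))   # the best score, but only after looking at all 1000
print(satisfice(options))  # a good-enough score, usually after a handful of looks
```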

I don’t remember where I first learned about “satisficing,” but ever since, the concept has stuck with me. It seems like a wise approach to life in general. I approach almost all decisions with a “satisficing” filter — asking myself, "Is this good enough for the current situation?" Be it a car, a city to live in, or a job to work at, this filter has made life easier and more fulfilling. It’s allowed me to build a life I genuinely enjoy. Ironically, “good enough” is often the optimal way to reach a state of continual happiness in life.

We would, in a sense, be held hostage by the moral architecture of the world. But nobody can prove that. And so we’re free and our world is rich with a thousand colors. And that in itself is very good.

From a Christian perspective, God has revealed his mind through the Bible, but he has also given us the agency to choose one way or the other. If we were toys or robots with a codified moral code, there’d be no need for final judgment.

self, decisionmaking