Maxims of good software

November 2025

I just read Obvious Adams, a short story about advertising, on my flight to Maine for Thanksgiving. The gist is that there's value in stating the obvious. I built my first app over ten years ago, so I thought it would be interesting to compile my most obvious observations about software.
Here are five constraints of creating user-facing software, maxims of software if you will, that are timeless.

1. Opportunity cost is real

People have better things to do than solve their own problems. Sure, we aren't all engineers capable of building solutions to every problem. But even engineers don't want to engineer solutions for every problem in their lives, and they regret it when they try. No individual has the time, resources, or effort to build a solution for every single problem they ever face. More often than not, people have mutually exclusive opportunities that are more worth pursuing.

2. Every user is ego-centric

People want personalised solutions. Let's not forget that the ideal solution is one tailor-made for the individual. We don't like cookie-cutter solutions that miss the breadth and depth of our problems.

3. Users think in derivative terms

People have an instinct to solve problems by building on existing solutions. To create the best software, people need to think about their problems from first principles, but people are fundamentally lazy. The path of least resistance is to request changes to the software we already use. Iterating on existing software can only produce an optimal solution if the software is built on a foundation of correct assumptions. Every time the world changes, previously correct assumptions break. The world changes frequently, so software can be an ideal solution one day and fall from grace the next.

4. Most problems go unnoticed

People aren't observant of every problem in their lives. It would be very taxing for an individual to notice every problem they encounter, let alone in its full depth, since they don't have the means to solve them all. When you notice problems deeply and can't solve them, you're a pessimist. When you notice problems deeply and try to solve them, implicitly because you think you can, you're an optimist. We only have the time, resources, and effort to be optimistic about relatively few things, making us ignorant of 99.99% of problems by default. Focusing on those unaddressable problems would make us pessimists, and no one wants to be a pessimist.

5. Environments steer results

People think in terms of environments. In the physical world, we have separate spaces for eating, working, working out, showering, sleeping, commuting, and so on. By default our environments are siloed, yet there is often a small passage of symbiotic interaction, such as between eating and working out. The context in which we operate undoubtedly affects how we think and act—it's fundamentally human. It should be no surprise that the same applies in the digital world: we think and act differently across different digital environments. A narrow environment enables targeted affordances, which makes it significantly easier for the user to solve their problems well.

Summary

To reiterate, my five maxims of software are:
  1. Opportunity cost is real
    → let others build you most of your software
  2. Every user is ego-centric
    → let users augment software to their specific needs
  3. Users think in derivative terms
    → only sample problems from your users
  4. Most problems go unnoticed
    → optimists need to observe the problems that most people ignore
  5. Environments steer results
    → place users in a narrow digital environment
Writing the constraints of software so plainly makes it clear what ideal user-facing software looks like. Perfect software is mentally compartmentalised by environments, but lets us share chosen data with other environments and receive data from them (Maxim 5). It is highly personalised (Maxim 2), but mainly built by others (Maxim 1) who optimistically observe your problems (Maxim 4) and solve them from first principles (Maxim 3). "Others" means humans and AI systems.

Implications

This raises an interesting question: how is the scope of observing problems and building software best divided between humans and AI systems? If ideal user-facing software is built by both, I have some thoughts on whether we want more generalists or specialists, but those are worth a separate essay. For this essay, though, I do think it is built by both: we'll always need humans to build software, because even first principles thinking can't solve alignment problems.
Just as science is a truth-seeking method rather than a truth, first principles thinking is a truth-seeking method, not a truth. No matter how hard we try to solve problems from first principles, there's no guarantee that we're making valid assumptions or evaluating them correctly. Wherever different assumptions can be made for solving the same problem, there will be an alignment issue, whether human-human, human-AI, or AI-AI. Since software ultimately serves a human user, we'll always be concerned with human-human and human-AI alignment.
The solution is as obvious as my maxims and extends Maxim 1: ideal software is built on human-made assumptions about the user's problem. We the humans must define our assumptions as requirements that a solution can grow out of. Once the software is built, we the humans should check it against our requirements. This approach creates the best chance of building ideal user-facing software.

Notes

Timeless also means independent of platforms and tooling for developers.

Even the best design engineers are usually better off using off-the-shelf solutions. "I could build it better" is not a sufficient reason to spend thousands of dollars' worth of potential work hours building a slightly better todo app. Opportunity cost is real.

"Others" means humans and AI systems.

Naturally, only do so in a safe manner, giving access to select functions and endpoints that are already accessible client-side. If a kid can invent their own toys and make larger sandcastles than ever, your sandbox needs higher walls than ever.
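
As a rough sketch of what that could look like (my own illustration, not from this essay; every name below is hypothetical), extensions get handed an allowlisted wrapper around operations the client can already perform, and nothing more:

```ts
// A minimal sketch of an allowlisted API surface for user-written extensions.
// All types, paths, and function names here are hypothetical.

interface Task { id: string; title: string; done: boolean }

// Only operations the client can already perform are exposed to extensions.
interface ExtensionApi {
  listTasks(): Promise<Task[]>;
  completeTask(id: string): Promise<void>;
}

function buildExtensionApi(
  fetchJson: (path: string, init?: RequestInit) => Promise<unknown>
): ExtensionApi {
  return {
    listTasks: () => fetchJson("/api/tasks") as Promise<Task[]>,
    completeTask: (id) =>
      fetchJson(`/api/tasks/${id}/complete`, { method: "POST" }).then(() => undefined),
  };
}

// Auth tokens, admin-only endpoints, and raw network access stay outside the walls.
```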

"Laziness" can also be substituted with "energy efficiency" in the biological sense, depending on how you frame it.

Scope ambiguity works well here; you can read it both ways. My main intention was this: only listen to your users' problems, unless you want to hear myopic solutions that are purely changes to existing software. After all, they're spending their time, resources, and effort elsewhere, so they have little to no capacity for first principles thinking in your chosen problem space.

Update (Nov 25, 2025): I've caught myself trying to be a little too smart here. You obviously want to listen to non-users too, especially if they fit your idea of an ideal user. My point was simply that when your users talk, you should listen to their problems rather than their proposed solutions. I never intended to say that you should ignore non-users. Unless you're competing with something I've built ;)

As for first principles thinking, I discovered impact mapping a few years ago. I think it's a brilliant planning technique in how it lays out the connections between assumptions and deliverables. If an assumption turns out to be false, you can visually trace the assumption down the tree to every now-invalidated deliverable.
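
As a toy illustration of that tracing step (my own sketch, not how impact mapping tools actually represent things), you can think of the map as an assumption-to-deliverable tree and collect every deliverable that sits under a broken assumption:

```ts
// A minimal sketch of an assumption-to-deliverable tree, loosely inspired by
// impact mapping. The structure and all names are hypothetical.

interface MapNode {
  label: string;
  assumption?: { description: string; holds: boolean };
  children: MapNode[];
}

// Walk the tree and collect every leaf deliverable under a broken assumption.
function invalidatedDeliverables(node: MapNode, underBroken = false): string[] {
  const broken = underBroken || (node.assumption !== undefined && !node.assumption.holds);
  if (node.children.length === 0) {
    return broken ? [node.label] : [];
  }
  return node.children.flatMap((child) => invalidatedDeliverables(child, broken));
}

// Example: one assumption has turned out to be false.
const map: MapNode = {
  label: "Goal: faster onboarding",
  children: [
    {
      label: "Impact: users import their own data",
      assumption: { description: "Users have exportable data", holds: false },
      children: [{ label: "Deliverable: CSV importer", children: [] }],
    },
    {
      label: "Impact: users invite teammates",
      assumption: { description: "Teams onboard together", holds: true },
      children: [{ label: "Deliverable: invite links", children: [] }],
    },
  ],
};

console.log(invalidatedDeliverables(map)); // ["Deliverable: CSV importer"]
```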

Think about how often, and how forgettably, you've felt problems in your everyday life. Here are some examples that come to mind:

  • Thinking it's a pull door based on the handle but it's actually a push door.
  • Waving your hand under an automatic tap, only for the initial pressure to be so high that water sprays over you, enough to make it look like you arrived late to the bathroom.
  • Pressing stop on an empty microwave with "00:05" flashing because the last person who used it took their food out five seconds early.

They're all problems that are frequent enough to be design issues rather than user issues. But I currently lack the expertise in door handles, plumbing, and microwave design to do anything about such problems. So the best thing I can do is to let them go, at least for now. I'll stay an optimist for problems that can be solved with software.

If the operating system is the world, then applications are the environments. But even worlds are different meta-environments that set the tone for the applications they run: we operate differently when running the same apps on a watch versus a phone versus a tablet versus a laptop versus a desktop versus a headset, and so on. This is analogous to living in different places in the physical world.

Far more than physical environments, well-designed software environments are an exercise in psychology, specifically through user experience design.

Here are some clarifying texts I sent to friends:
Q1) Maxim 4 talks about problems going unnoticed and makes me think about how great "optimists", as mentioned in Maxim 5, can carve out these environments. What are your thoughts on entrepreneurs trying to appeal to customers who live in a large mental environment?
Q2) Is it always better to have others develop most of the software for you?
