The question
If accountability is necessary for running a profitable business, it follows that humans should always have oversight over every product decision and feature implementation; at most, AI would be a coworker, but it could never fully displace humans. Alternatively, suppose accountability is not an important part of building software products. In that scenario, AI can successfully decide what to build and then presumably build it too; software engineers and product designers could very well be replaced by fully autonomous AI.
I view accountability as the idea that someone owns a product decision and accepts whatever consequences may follow, good or bad. If we let AI autonomously decide what to do and how to implement those changes, then we can't say that any person truly owns those decisions. At best, we can say that someone instructed the AI on how to operate at a high level, which led to some product outcome. But that's not a generalizable argument: sometimes the decision is a direct result of the human's system instructions, while other times the AI has gone rogue. In any case, autonomous AI prevents any one person from being clearly accountable.
So maybe we can't pinpoint exactly who or what is responsible for certain features, bugs, or errors. Does that negatively affect the bottom line of the software company? To rephrase my brother's question:
How important is accountability in product teams for building software that sells?
Let's start by looking at our assumptions about why we might need accountability in software teams, meaning each product decision can be traced back to an individual employee.
Benefits of accountability
I see two main arguments for why firms needed accountability to sell software in the pre-AI age.
One is competitive pressure from inside the firm. If a knowledge worker produces exceptionally good work, they can expect a promotion; if they produce exceptionally poor work, they can expect to be fired. Accountability motivates employees to do good work and avoid bad work, because everyone owns their decisions and any consequences that follow. This motivation produces better software products, which helps create a competitive edge in the software market.
The other is customer trust. Naturally, the customer wants to know that the software creator won't betray them. If product teams can make decisions without consequences, what stops them from turning their back on the user? With accountability in place, individual employees are directly responsible for lost customers and revenue if they sunset important features or expose personally identifiable information (PII).
Autonomy scenarios
Imagine AI automation moves up the chain: first engineers are replaced by designers who direct autonomous AI to make engineering decisions, then designers are replaced by product managers who direct autonomous AI to make design decisions. Even the most technical product managers are swapped out for the C-suite, who direct increasingly intelligent and holistic AI systems. Before you know it, even the C-suite is offloading all of its decisions. At this point, AI is hypothetically making decisions across every facet of the company. There is not a single employee who can be held accountable for any specific feature, bug, or system failure. Who is accountable when something goes wrong?
In each scenario, no one can be held fully accountable for how the software turns out, neither the user nor the creator. Since humans can only give AI a finite set of instructions, its decisions will mostly be extrapolations of human rules and intent.
I'd like to test whether accountability is the only way to achieve competitive pressure and customer trust, assuming both are necessary for selling software products. Then we can infer whether the absence of accountability, when using autonomous AI systems, would necessarily harm profits.
Alignment
Alignment is a notable issue with autonomous systems. Even if AI follows all the provided instructions to a T, there will always be edge cases that humans couldn't anticipate or encode preferences for. The system is fully autonomous, so it can't ever ask the human to state or clarify their preferences along the way. AI will need to fill in the gaps with its own preferences, which may or may not be what the software creator would have wanted. There is either human-AI alignment or misalignment: it could go either way.
Let's assume AI makes a decision that is misaligned. The product worsens, users churn, and revenue drops. There is no clear accountability: no individual human made the problematic decision, yet the software suffers. We should ask whether the lack of clear accountability actually hurts the business. Importantly, the market doesn't care who made a bad call—it will respond the same to a misaligned AI as it would to a human making the same misstep. While there is no competitive pressure at an individual level, there is competitive pressure at a company level. If things go sideways, the autonomous system can pick up on that signal and act accordingly.
Autonomous AI might also make an aligned product decision. The product improves and revenue grows. Again, there is no clear accountability: we can't trace the beneficial decision back to an individual. As before, the market doesn't care exactly who created that feature or fixed that bug. So customer trust improves even though accountability is absent. Competitive pressure also acts on the company as a whole—the AI system—rather than on individuals that need incentives to work well.
So there is still competitive pressure even when there's no accountability; the feedback mechanism just acts on a company level rather than an individual level. When aligned, autonomous AI can still maintain or improve customer trust. It's just uncertain whether software changes are aligned (beneficial) or misaligned (harmful), because the human's instructions will have gaps that AI needs to fill on its own.
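To make the company-level feedback loop concrete, here is a minimal sketch in Python. It is illustrative only: the metric names, the churn tolerance, and the three responses are my own assumptions, not a description of any real system. The point is simply that the signals the market sends (churn, revenue) can feed back into the autonomous system without any individual being on the hook.

```python
# Minimal sketch of company-level competitive pressure as a feedback loop.
# All names and thresholds here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Metrics:
    weekly_churn: float    # fraction of users lost this week
    weekly_revenue: float  # revenue booked this week

def evaluate_change(before: Metrics, after: Metrics,
                    churn_tolerance: float = 0.02) -> str:
    """Decide how the autonomous system responds to market signals."""
    if after.weekly_churn - before.weekly_churn > churn_tolerance:
        return "rollback"         # misaligned change: the market punished it
    if after.weekly_revenue >= before.weekly_revenue:
        return "keep"             # aligned change: the market rewarded it
    return "flag_for_review"      # ambiguous signal: gather more data

# Example: a change that spikes churn gets rolled back automatically.
before = Metrics(weekly_churn=0.01, weekly_revenue=100_000)
after = Metrics(weekly_churn=0.05, weekly_revenue=98_000)
print(evaluate_change(before, after))  # -> "rollback"
```

The design choice worth noting is that nothing in this loop identifies a responsible person; the pressure acts on the system's outputs as a whole.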
All in all, autonomous AI can achieve the same main benefits as accountability—competitive pressure and customer trust—just by different, and arguably less certain, means. So autonomous AI can be financially viable, too.
Blaming users
But what about our intuition that someone, anyone, should be responsible when software goes wrong? We can look back at each scenario for creating software with autonomous AI and point fingers at whoever wrote the AI instructions. For commercial software, we might blame the C-suite. For prompt-based personal software, we might blame the user. And for personal software that anticipates user needs, we might blame the app builder platform. Even then, our earlier conclusion stands: no one can be held fully accountable for how the software turns out. We're just running in circles.
Rather than chasing accountability with the software creators, what if we instead shift the responsibility for good software to the user? This seems backwards and appears to violate every UX principle. The reason it can work is that software markets in the AI age will be highly saturated, thanks to the rapidly falling barriers, both in skills and in cost, to creating software.
It's not unlikely that a user will be able to choose from thousands of close software substitutes in the near future.
Accountability will disappear as human teams disappear, and the main responsibility for a good user experience will fall on the users. Because users have many options, they bear greater responsibility for which software product they decide to use.
Blind trust
You may reasonably say, "but that doesn't solve the issue of information asymmetry!" I completely agree. Let's tackle this separate issue.
I think blind trust exists for pragmatic reasons: every customer has to take a product at face value to a certain degree. Consider these reasonable user questions:
- Could my phone battery blow up in my hand?
- Could the tires on my car fall off on the motorway?
- Could the roof in my lecture hall fall on my head mid-class?
You could compile a list of valid concerns about using any product, if you put your mind to it. Even something as harmless as a poster on your wall carries the risk of leaving a bad impression on someone important you invite over. We don't obsess over such low risks because we simply want the product that offers maximum upside while staying under a risk threshold, both decided jointly by logical analysis and gut feeling. The same applies to software products:
We can only know if software works as expected by using it.
In other words, I don't think the majority of users blindly accept EULAs because they are all free riding on some legal nerd who will read them. No, we just all accept that we can't know for sure if software will work as advertised or breach our trust. We're simply practical creatures: we know the fastest way to find out if software works is to use it, and the fastest way to use software is to skip reading the legal paperwork.
Conclusion
It turns out accountability is not vital for selling software. The main benefits of accountability, competitive pressure and customer trust, can also be achieved by autonomous AI systems. Competitive pressure simply shifts from the individual, who needs status-based motivation, to the company as a whole, which continuously collects market signals; in this case, the company is an autonomous AI system. Customer trust is still feasible, but harder to guarantee when there is no human to validate each decision.
Counter to our intuition, users will be responsible for using bad software in the AI age. The falling cost and skill barriers to building software will flood the commercial software market with thousands of close substitutes. Alternatively, users will build their own hyperpersonalized software with AI app builders, meaning they effectively input their own needs and also bear responsibility for any poor UX.
Across thousands of commercial and personal software products doing generally the same thing, there will be at least one high-quality option. The main barriers to picking high-quality software will be switching costs, which are already relevant today, and evaluating all the substitutes well, which is a future problem. To solve the evaluation problem, I expect someone will build a QA testing tool for users that tests software on their behalf before onboarding them to the best option, an idea that deserves its own essay.
Human-AI alignment remains the biggest limiting factor in letting AI design and code software products autonomously. "Letting" is a much more suitable word than "steering", because the alignment of product decisions is highly variable: any language-based instruction set will be far from exhaustive and full of interpretive gaps that AI needs to fill on its own.