I’m a utopian, in that I don’t believe that computers are a mistake. I have serious criticisms of particular technical decisions, but I don’t think those decisions were inevitable. An alternate computer universe, as projected from trends thirty years ago and earlier, was possible; with care and effort, it still is.
The biggest structural problem I see is a failure to distinguish between two different kinds of computing that have fundamentally different needs.
Big computing is computing at scale. It’s the kind of thing anybody in the software industry is used to, and anyone not in the software industry is accustomed to complaining about. Big computing is client-server. Big computing processes big data. Big computing has millions of users. Big computing hides ‘advanced settings’ behind a checkbox or a button so ‘regular people’ don’t get intimidated. Big computing has maintainers, bug trackers, and devops on call. Big computing is worried about accidentally committing experimental code to prod. Big computing writes tests, cares about strong typing, and writes things in Java because it’s easier for HR to find qualified candidates that way. Big computing is worried about job security. Big computing has a project manager and stock options. Big computing ships.
Small computing never died, but you wouldn’t know it from reading Hacker News. Small computing has an average user count of one. Small computing is peer-to-peer, and human-scale. Small computing does exactly what the end user wants, because the end user is the developer. Small computing doesn’t distinguish between programmer and non-programmer. Small computing doesn’t care about marketing. Small computing is open source because there’s no point in using a restrictive license, not because anyone will ever submit a pull request. Small computing is as unique as a GeoCities page. Small computing plays.
If you are being paid, you should be doing big computing. Big computing means scale, and scale means that your decisions have technical, social, and ethical ramifications that you have a responsibility to seriously consider. This means asking for permission. It means facing reality, caring about security, avoiding intellectual laziness with regard to tool choice, and maintaining familiarity with the lore. Major technical problems can often be traced back to the application of small-computing mantras (“move fast and break things”, “yagni”, “it’s better to ask for forgiveness than permission”) to big-computing situations. Big computing should be extremely conservative, and because of its centralized and hierarchical nature, we should be making decisions based on the categorical imperative: make a technical decision only if you think it would easily and unproblematically scale to every machine on the planet forever.
On the other hand, I consider small computing much more important than big computing. Big computing, because it is big money, gets all the attention; however, big computing is one-size-fits-all and therefore doesn’t quite fit anyone. Every programmer began in the context of small computing, and every programmer, in his or her off-time, operates in that context. Systems geared toward small computing (like REPLs, notebook interfaces, Smalltalk VMs, and the UNIX command line) are incredibly powerful. Unfortunately, small-computing systems are not made accessible to non-programmers, even though they absolutely could be.
Almost all user-facing interfaces should be small-computing. Big computing should only exist as a fallback when we, as developers, have failed to make small-computing-oriented systems sufficiently unintimidating. Users should be able to gradually learn to program, without reading manuals, simply by interacting naturally with their computer’s UI and performing the kinds of casual customizations we all make to optimize for our own use cases. Within a few months, the system of even a non-technical user should be composed of 75–80% code written by that user.
On the other hand, big computing, because it is professional, should be subject to licensing. Licenses are not a guarantee of competence, but they are a mechanism that filters out those unwilling to make a minimal effort, and they provide a means by which ethical lapses can be effectively punished. (“Why don’t I have a license? Oh, Uber asked me to implement a fake surge pricing mechanism and I said yes. Oh, I lost my license because I complied with an NSA wiretapping request. I lost my license because I exposed a credit card database to an unvalidated input field. I lost my license because I didn’t implement buffer overflow checks. I lost my license for using unsalted SHA1 for password hashes.”) Big computing can ruin people’s lives, so professional developers and their employers should be legally liable for their decisions.
Here are other essays I’ve written on related topics:
Trajectories for the future of software (hackernoon.com)
Considerations for programming language design: a rebuttal (hackernoon.com)
Guidelines for future hypertext systems (hackernoon.com)
Orality, Literacy, Hyper-Literacy (medium.com)
My hypermedia history (medium.com)
The end-game of the voice UI (like that of the chat UI) is the command line interface (medium.com)