I’m serious. Name one thing in computing that showed up after 1978 that wasn’t either an incremental improvement on a pre-1978 technology or a crappier but cheaper version of a pre-1978 technology. I’m not trying to produce sophistry here. There’s a huge difference in the level of novelty of original research in computing tech during the span 1940–1980 and the level of novelty in the same after 1980, & it relates directly to economics.
From 1940 to 1980, computer science was being done by folks with doctorates & experience in other disciplines, funded by government money to do pure research & moonshot shit — especially ARPA funding starting in the wake of Sputnik for ed-tech. When that funding dried up, so did productivity in original research, because the ability to continue to be employed depended on profitability in a consumer market (which means racing to market… which means avoiding risky detours).
The exact same people have drastically different productivity levels under the two models. Kay at PARC in the 70s went from having seen a Sketchpad demo to having a complete, functioning, live-editable GUI with network transparency in ten years, because of government ed-tech money. In the early 80s, Kay moved to Atari & tried to continue the kind of work he had been doing; he was laid off, went to Apple, and was eventually laid off again. The work he started in the 70s has been treading water ever since, because short-term profits can’t support deep research.
This isn’t to say that what has happened since isn’t valuable. The computing technologies developed prior to 1980 have mostly become cheap enough that they have become accessible to a mass audience, in part because of iteration on manufacturing techniques, & mostly because of cheap labor (in the form of fresh-out-of-college CS students who will write bad code for half of what you’d pay the PhDs to refuse to write bad code, and will work unpaid overtime if you give them a ball pit and a superiority complex). But, using 70s tech to make 60s tech bigger (ex., deep neural networks) isn’t innovation — it’s doing the absolute most obvious thing under the circumstances, which is all that can be defended in a context of short-term profitability.
What armies of clerks were doing in the 30s was ‘computation’, sure, but it’s very different. The pointing devices we use, the look and feel of our UIs, and our UI metaphors haven’t changed since the 70s except in terms of resolution. Our network protocols haven’t much either.
This is to say:
Somebody who had used an Alto in 1979 could travel through time, sit down at a modern PC, and know basically how to do most tasks — they would think of a modern PC as a faster but less featureful, stripped-down Alto clone, like the Star was. They could probably even code on it — they would have been familiar with UNIX shells & C, & with SGML-style markup. They would find it disappointingly awkward compared to the Interlisp-D and Smalltalk environments they were used to, but they could make it work.
Meanwhile, to somebody from 1940, the home computer tech of 1980 would be mind-blowing. Such a person, even if they were in computing, would not be familiar with the concept of a programming language (since stored-program computers didn’t exist yet).
The VC model ties into this difference. The best possible outcome, under the VC model, is that actual costs are low & the VCs get wild profits in the short term, after which they sell their stakes & don’t need to care anymore. The easiest & most reliable way to do that is a Ponzi scheme (or, to be more technically correct, the pyramid-selling of hype — in this case, to other, marginally less savvy investors). With enough cash floating around, you can keep a company that provides no service & has no income afloat indefinitely, and everyone involved becomes a paper millionaire.
The goal of ARPA from ’57 to ’78 was quite different: to encourage children to become engineers in order to have an edge in a high-tech hot war that never ended up coming, and to build tech that would let them bootstrap new tech more easily. That meant accepting massive short-term losses. (It obviously didn’t exactly work: we did end up getting a lot of cheap engineers, but few of them had the background to be able to more-than-iteratively improve upon the tech they grew up on, even had they been allowed to by management.)
The difference in development between the first 40 years of CS and the second 40 is absolutely not the result of the low-hanging fruit all being picked. The most interesting technical work is still being done by individuals and small groups. Important features & useful tools that were well known in the late 70s are actually missing from modern tech, both because of gaps in education & because having them forecloses some avenues toward monetization. (Ex., you can’t live-edit software if the software is closed source.)
I really hate the “silicon valley is a center of innovation” memeplex & feel the need to inject some historical context whenever I see it. It’s weird, masturbatory, Wired Magazine bullshit & it leads to the lionization of no-talent con artists like Steve Jobs. Making money & making tech are very different skills, often fundamentally at odds: good tech is very often not profitable, and the most profitable tech is just varying reframings of rent-seeking. Very few people can do both well, and SV has a bias toward profit. Because of this, in most cases, the product isn’t actually the tech but the techwashing. Apple’s an excellent case in point. After they dropped the Apple ][ line, their main product was hype & terminology stolen from PARC, with a free underpowered home computer thrown in (for the low low price of 8 grand).
In an environment where cash is king, when you can make bank on PR, it’s foolish to try to innovate — and the valley has learned this lesson better than it has learned anything else.