Moore’s law stopped in 2005. Since then, a new computer has not been faster than an old computer at the same cost.
Those new capabilities that *can* be used to speed up software (such as multiple cores) require modifications to that software: in other words, new computers can only run software faster when application developers put effort into optimization.
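To make that concrete, here is a minimal sketch (with a deliberately naive, hypothetical prime-counting workload standing in for real code) of the kind of restructuring multiple cores demand. The serial version runs on one core no matter how many the machine has; only after the developer rewrites it around a process pool do the extra cores get used at all.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    # Naive CPU-bound work; a stand-in for a real workload.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def serial(chunks):
    # Pre-2005 style: one core does everything, and a faster
    # clock made this faster with zero code changes.
    return sum(count_primes(c) for c in chunks)

def parallel(chunks):
    # Post-2005 style: the developer must split the work and
    # farm it out (processes, to sidestep CPython's GIL) before
    # additional cores help at all.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    chunks = [20_000] * 4
    assert serial(chunks) == parallel(chunks)
```

The rewrite produces identical answers; the only thing it buys is the *ability* to benefit from more cores — exactly the optimization effort the old regime never required.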
For about 55 years, we’ve written software that’s currently too slow by a factor of 2 with the expectation that in 18 months it will automatically be fast enough.
For the first 40 of those years, that technique has made sense.
For the past 15, we have merely written software that is twice as slow as the minimum acceptable performance.
What Gordon Moore specifically meant in 1965 matters less here than the myth, widely held by non-technical people and by folks without quite enough experience to recognize on-the-ground realities, that computers are a special domain where performance automatically accelerates.
Moore’s law is one of ten or twenty eponymous laws that claim exponential performance increases in computing. None of them remain in effect.
Sure, Moore’s law on paper is about transistor density. The other eponymous laws (about clock speed, power consumption, cost per transistor), when they held, depended upon it, came as side effects of it, really, and these laws are the proximate explanation given by tech journalists for why computers get…