Can tech keep up the pace of innovation?

Before the invention of the microchip, research and development of new tech was an evolutionary affair: a slow, successive chain of trial and error.

Since 1971, when Intel put its revolutionary 4004 chip on the market, modern technological innovation seems to have advanced at an ever-increasing rate. Our lives are somehow lived faster.

Integrated circuits, what we commonly call microchips, have come a long way over the last 50 years. Essentially, they are fabricated devices containing a multitude of transistors, the electrical switches that interpret the binary language computers operate with.

Intel’s 4004 chip contained 2,300 transistors within its 12 square millimetres. If you looked at them through a microscope, you could count them. The chips Intel make now are larger, but the transistors are packed much closer together and are far too small to see with the naked eye. They may have as many as 25 million transistors per square millimetre, making them far more powerful, and making the computers and systems they control far, far faster. Even before 1971, as the knowledge and materials required for the computer revolution were coming together, scientists, who are always projecting ahead, were musing about where we would be now.
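To put those two figures side by side, here is a quick back-of-the-envelope comparison in Python, using only the approximate numbers quoted above:

```python
# Rough transistor-density comparison, using the approximate
# figures quoted in the text (not exact die specifications).

transistors_4004 = 2_300   # Intel 4004, 1971
area_4004_mm2 = 12         # roughly 12 square millimetres

density_1971 = transistors_4004 / area_4004_mm2   # ~192 per mm^2
density_now = 25_000_000                          # ~25 million per mm^2

print(f"1971 density: {density_1971:,.0f} transistors per mm^2")
print(f"Modern density: {density_now:,} transistors per mm^2")
print(f"Increase: roughly {density_now / density_1971:,.0f}x")
```

On those numbers, density has grown by a factor of well over 100,000 in 50 years.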

In 1965 Gordon Moore, then R&D director at Fairchild Semiconductor and later president of Intel, made his now-famous observation that the number of components in an integrated circuit had doubled approximately every year, and that it would continue to do so for at least the next ten years. In 1975 he revised this to every two years.
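As a rough sketch of what doubling every two years implies, the snippet below projects the count forward from the 4004’s 2,300 transistors. This is illustrative only; real chips did not track the curve exactly:

```python
# Illustrative Moore's Law projection: count doubles every two years.
# Starting point is the 4004's 2,300 transistors in 1971; this is a
# sketch of the curve, not a history of actual products.

start_year, start_count = 1971, 2_300

for year in range(1971, 2026, 10):
    doublings = (year - start_year) / 2
    projected = start_count * 2 ** doublings
    print(f"{year}: ~{projected:,.0f} transistors")
```

Run forward, the curve reaches the tens of billions by the 2020s, which is broadly the scale of today’s largest chips.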

It would be fair to say that the secret sauce of miniaturisation has given these chips the power to change the world. The billions of switches contained within a single circuit, making ever more rapid decisions, have changed how cars drive, planes fly, ministers govern and markets trade. The last 50 years have been truly revolutionary in that sense. The now-clichéd comparison between the Apollo Guidance Computer and the modern iPhone (the latter is many times more powerful) does hold true. That thing in your pocket could guide rockets. But the question is, how long can this trajectory continue?

Moore’s Law is not actually a law in the strictest sense. It is not a universal constant, nor inescapable like gravity. Since Moore wrote his paper, it has been in the commercial interest of companies such as Intel to get more bang for the bucks they invest in chip design. They made it a self-fulfilling prophecy for sound economic reasons.

There is, though, a growing number of voices within industry and academia who believe its time as a method of predicting the future is up. “There’s a law about Moore’s Law”, jokes Peter Lee, a vice-president at Microsoft Research: “The number of people predicting the death of Moore’s Law doubles every two years.”

Moore’s Law has proved less reliable in recent times, for a number of reasons.

For some time now the economic benefits of going smaller have been shrinking: it is costing Intel and its competitors far more to move in that direction than it used to. The main obstacle is that transistors are approaching fundamental physical limits. Intel’s Skylake transistors are only around 100 atoms across, made on a process with features just 14 nanometres apart. Beyond this level of smallness, chips become much harder to make cost-effectively.

As Dr Richard Cooper, former senior lecturer in the Department of Computing Science at Glasgow University, puts it: “It seems ‘obvious’ that there are physical limits to how close we can place transistors, and proximity is the key at present, but someone may invent another way. I’d still think we are approaching a limit that will be hard to exceed.”

It’s also the case that, no matter how fast the hardware gets, the software becomes more complex in turn. Dr Cooper continues on this point: “The sad thing about working in computing is that whereas the hardware can do amazing things, it cannot overcome the ability of software engineers to make things overly complicated and thus slow things down.”

This idea of ‘software bloat’ is also known as ‘The Great Moore’s Law Compensator’ (TGMLC), a term coined by Randall C. Kennedy, formerly of Intel. He compared successive versions of Microsoft Office between 2000 and 2007: despite the gains in computational performance over that period, in line with Moore’s Law, Office 2007 performed the same task at half the speed on a typical 2007 computer as Office 2000 did on a typical 2000 computer.
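As a rough illustration of what that benchmark implies (a sketch that assumes, simplistically, that hardware throughput doubled every two years over the period; these are not Kennedy’s actual measurements):

```python
# Back-of-the-envelope reading of the TGMLC benchmark.
# Assumes hardware throughput doubled every two years, using
# Moore's Law as a crude proxy for performance.

years = 2007 - 2000
hardware_gain = 2 ** (years / 2)      # ~11x faster hardware

observed_speed = 0.5                  # Office 2007 ran the task at half speed
software_overhead = hardware_gain / observed_speed

print(f"Hardware gain 2000-2007: roughly {hardware_gain:.1f}x")
print(f"Implied software slowdown: roughly {software_overhead:.0f}x more work per task")
```

On those assumptions, the software was doing something like 20 times more work to deliver the same result.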

It would seem that Moore’s Law, durable though it has been, only really describes the first revolutionary period of computing. What does remain constant, however, is the commercial imperative to steal a march on the competition. Dr Cooper uses the financial markets to illustrate this: “The compelling application right now is high-frequency trading: algorithms which analyse the market and issue buy/sell orders. In 2011 issuing an order took 11 microseconds; now it’s 84 nanoseconds (one nanosecond being the physical limit). So the guys with the money are driving this, because it’s still true that if you’re late with the order you will lose out.”
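Taking the figures in that quote at face value, the improvement works out as follows (a quick sketch; the numbers are simply those quoted above):

```python
# Speed-up implied by the quoted trading latencies.

latency_2011 = 11e-6   # 11 microseconds, as quoted for 2011
latency_now = 84e-9    # 84 nanoseconds, as quoted

speedup = latency_2011 / latency_now
print(f"Order-issue latency improved roughly {speedup:.0f}x")   # ~131x
```

That is a better-than-hundredfold improvement in well under a decade, driven by money rather than by any law.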

To stay ahead, R&D departments are starting to look into areas such as quantum computing; advances in the emulation of biological brains; and the diffusion of computing across a range of everyday objects, which has become known as ‘the internet of things’. Gordon Moore himself sees the temporal limit of his “law” as being about 2025. Beyond that, it seems, a new paradigm will be required, one that has not yet arisen. Whatever it turns out to be, the future will certainly be interesting, and very probably faster.

