~~640K~~ 2^256 Bytes of Memory is More than Anyone Would Ever ~~Need~~ Get

 
Author: 'No Bugs' Hare
Job Title:Sarcastic Architect
Hobbies:Thinking Aloud, Arguing with Managers, Annoying HRs,
Calling a Spade a Spade, Keeping Tongue in Cheek
 
 
[Cartoon: Binary Solar System]

[Cartoon – Threatening hare: “While there is a desire to get as much memory as possible, physics will certainly get in the way and will restrict any such desire.”]

There is a famous misquote commonly and erroneously attributed to Bill Gates: “640K of memory is all that anybody with a computer would ever need.” Apparently, Gates himself has denied ever saying anything of the kind [Wired97]. Reportedly, he went even further, saying “No one involved in computers would ever say that a certain amount of memory is enough for all time.” [Wired97] Well, I, ‘No Bugs’ Hare, am involved in computers, and I am saying that while there can be (and actually is) a desire to get as much memory as possible, physics will certainly get in the way and will restrict any such desire.

Moore’s Law vs Law of Diminishing Returns

What goes up must come down

— proverb —

There is a common perception in the computer world that all the current growth in hardware will continue forever. Moreover, even if such growth is exponential, it is still expected to continue forever. One such example is Moore’s Law; originally Moore (as early as 1965, see [Moore65]) was referring to doubling the complexity of integrated circuits every year for the next 10 years, i.e. until 1975 (!). In 1975, Moore adjusted his prediction to doubling complexity every two years [Moore75], but again didn’t go further than 10 years ahead in his predictions. As it happens, Moore’s Law has held for much longer than Moore himself predicted. It was a great thing for IT and for everybody involved in IT, there is no doubt about it.

[Cartoon – Assertive hare: “There is only one objection to this theory, but unfortunately, this objection is that this theory is completely wrong.”]

With all the positives of these improvements in hardware, there is one problem with such a trend though – it has led to the perception that Moore’s Law will stand forever. Just one recent example – in October 2012, CNet published an article arguing that this trend will continue for the foreseeable future [CNet12]; in particular, they quoted the CTO of Analog Devices, who said: “Automobiles and planes are dealing with the physical world. Computing and information processing doesn’t have that limitation. There’s no fundamental size or weight to bits. You don’t necessarily have the same constraints you have in these other industries. There potentially is a way forward.”
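Coming back to Moore’s own numbers, the difference between the 1965 and 1975 versions of the prediction is easy to quantify. A minimal Python sketch (the doubling periods come from [Moore65] and [Moore75]; the rest is plain arithmetic):

    # What "doubling every N years" implies over a decade (Moore's 1965 vs 1975 predictions).
    def growth_factor(years, doubling_period_years):
        return 2 ** (years / doubling_period_years)

    print(f"1965 version (doubling every year):    x{growth_factor(10, 1):.0f} over a decade")  # x1024
    print(f"1975 version (doubling every 2 years): x{growth_factor(10, 2):.0f} over a decade")  # x32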

There is only one objection to this theory, but unfortunately, this objection is that this theory is completely wrong. In general, it is fairly obvious that no exponential growth can go on forever; still, such general considerations alone cannot tell us how long the growth will continue. In practice, to get any reasonable estimate, we need to resort to physics. In 2005, Moore himself said: “In terms of size [of a transistor] you can see that we’re approaching the size of atoms which is a fundamental barrier, but it’ll be two or three generations before we get that far – but that’s as far out as we’ve ever been able to see.” [Moore05] Indeed, 22nm technology already has transistors which are just 42 atoms across [Geek10]; and without resorting to very different (and as yet unknown) physics, one cannot possibly go below 3 atoms per transistor.
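The ‘42 atoms across’ figure is easy to sanity-check. A minimal Python sketch, assuming silicon’s lattice constant of roughly 0.543 nm (counting individual atoms rather than lattice cells shifts the result slightly, but the ballpark is the same):

    # Rough sanity check: how many silicon lattice constants fit across a feature?
    SI_LATTICE_NM = 0.543  # silicon lattice constant, roughly 0.543 nm

    for feature_nm in (22.0, 3 * SI_LATTICE_NM):
        cells = feature_nm / SI_LATTICE_NM
        print(f"{feature_nm:.1f} nm is about {cells:.0f} lattice constants across")
    # 22 nm -> ~41, in the same ballpark as the '42 atoms across' quoted above;
    # ~1.6 nm is roughly where the '3 atoms per transistor' floor would be.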

Dangers of relying on exponential growth

Anyone who believes exponential growth can go on forever in a finite world is either a madman or an economist.

— Kenneth Boulding, economist —

[Cartoon – Hare thumb down: “In 2000, Intel made a prediction that by 2011 there would be 10GHz CPUs out there; as we can see now, this prediction has failed miserably.”]

In the early 2000s, Moore’s Law was commonly formulated in terms of CPU frequency doubling every 2 years (it should be noted that this is not Moore’s own formulation, and that he shouldn’t be blamed for it). In 2000, Intel made a prediction that by 2011 there would be 10GHz CPUs out there [Lilly10]; as we can see now, this prediction has failed miserably: currently there are no CPUs over 5GHz, and even the only 5GHz one – POWER6 – is not produced by Intel. Moreover, even IBM, which did produce the 5GHz POWER6, capped its next-generation POWER7 CPU at a maximum frequency of 4.25GHz. With modern Intel CPUs, even the ‘Extreme Edition’ i7-3970X runs at a mere 3.5GHz, with temporary Turbo Boost up to 4GHz (see also an extremely enthusiastic article in PC World, titled ‘New Intel Core I7 Extreme Edition chip cracks 3GHz barrier’ [PCWorld12]; the only thing is that it was published in 2012, not in 2002). In fact, Intel CPU frequencies have decreased since 2005 (in 2005, the Pentium 4 HT 672 was able to sustain a frequency of 3.8GHz).
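To see just how far off a naive extrapolation goes, here is a minimal Python sketch (the 1.5GHz year-2000 starting point and the 2-year doubling period are assumptions chosen purely for illustration; the ‘reality’ figures are the ones quoted above):

    # Naive "frequency doubles every 2 years" extrapolation vs. what actually happened.
    start_year, start_ghz = 2000, 1.5   # assumed starting point (early Pentium 4)
    doubling_period_years = 2

    for year in (2005, 2011, 2012):
        predicted = start_ghz * 2 ** ((year - start_year) / doubling_period_years)
        print(f"{year}: naive extrapolation gives ~{predicted:.0f} GHz")

    print("Reality: ~3.8 GHz in 2005, and ~3.5 GHz (4 GHz Turbo Boost) in 2012")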

One may say, “Who cares about frequencies with all the cores around” – and while there is some point in such a statement (though there are many tasks out there where per-core performance is critical, and increasing the number of cores won’t help), it doesn’t change the fact that back in 2000 nobody expected that in just 2 years all CPU frequency growth would hit a wall, and that frequencies would then stall, at least for a long while.

It is also interesting to observe that while there is an obvious physical limit to frequencies (300GHz is commonly regarded as the border of the infrared optical range, where obviously different physics is involved), the real limit came much earlier than the point where optical effects start to kick in.

Physical limit on memory

The difference between stupidity and genius is that genius has its limits.

— attributed to Albert Einstein —

As we’ve seen above, exponential growth is a very powerful thing in the physical world. When speaking about RAM, we’ve got used to doubling the address bus width (and the address space) once in a while, so after the move from 16-bit CPUs to 32-bit ones (which happened for mass-market CPUs in the mid-80s) and the more recent move from 32-bit CPUs to 64-bit ones, many have started to expect that 128-bit CPUs will be around soon, then 256-bit ones, and so on. Well, it might or might not happen (it is more about waste and/or marketing, see also below), but one thing is rather clear – 2^128 bytes is an amount of memory which one cannot reasonably expect in any home device, with physics being the main limiting factor.

[Cartoon – Judging hare: “Even if every memory cell can be represented by a single atom, we would need 1 to 10% of all the stars and planets which we can see, to implement 2^256 bytes of memory.”]

Let’s see – one cubic cm of silicon contains around 5*10^22 atoms. It means that even if every memory cell is only 1 atom large, it will take 2^128/(5*10^22)*8 cm³ of silicon to hold all that memory; after calculating it, we’ll see that 2^128 bytes of memory will take approximately 54 billion cubic metres (or 54 cubic kilometres) of silicon. If taking other (non-silicon-based) technologies (such as HDDs), the numbers will be a bit different, but the amount of space necessary to store such memory will still run to cubic kilometres, and this is under the absolutely generous assumption that one atom is enough to implement a memory cell.
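The arithmetic is easy to reproduce. A minimal Python sketch, using the same 5*10^22 atoms/cm³ and one-atom-per-bit assumptions as in the text:

    # Volume of silicon needed for 2^128 bytes at one atom per bit.
    ATOMS_PER_CM3 = 5e22              # atoms in one cubic cm of silicon (approx.)
    bits = 2**128 * 8                 # 2^128 bytes, 8 bits each
    volume_cm3 = bits / ATOMS_PER_CM3

    print(volume_cm3 / 1e6, "cubic metres")        # ~5.4e10 m^3, i.e. ~54 billion m^3
    print(volume_cm3 / 1e15, "cubic kilometres")   # ~54 km^3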

To make things worse, if we’re speaking about RAM sizes of 2^256 bytes, we’ll see that implementing them even at 1 atom/cell will take about 10^78 atoms. Earth as a planet is estimated to contain only 10^50 atoms, so it would take ten billion billion billions of planets like Earth to implement a memory which takes 256 bits to address. The solar system, with its 10^57 atoms, still won’t be enough: the number we’re looking for is close to the number of atoms in the observable universe (which is estimated at 10^79–10^80). In other words – even if every memory cell can be represented by a single atom, we would need 1 to 10% of all the stars and planets which we can see (with most of them being light years away) to implement 2^256 bytes of memory. Honestly, I have serious doubts that I will live until such a thing happens.
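And the same kind of check for 2^256 bytes (a minimal Python sketch; the 10^50, 10^57 and 10^79–10^80 atom counts are the rough estimates quoted above):

    # Atoms needed for 2^256 bytes at one atom per bit, vs. some very big numbers.
    atoms_needed = 2**256 * 8                   # ~9.3e77 atoms

    ATOMS_EARTH = 1e50                          # rough estimates from the text
    ATOMS_SOLAR_SYSTEM = 1e57
    ATOMS_OBSERVABLE_UNIVERSE = 1e80

    print(f"{atoms_needed:.1e} atoms needed")
    print(f"{atoms_needed / ATOMS_EARTH:.1e} Earths")                # ~9e27 ("ten billion billion billions")
    print(f"{atoms_needed / ATOMS_SOLAR_SYSTEM:.1e} solar systems")  # ~9e20
    print(f"{atoms_needed / ATOMS_OBSERVABLE_UNIVERSE:.0%} of the observable universe")
    # ~1% with the 1e80 estimate (or ~9% with 1e79), hence the "1 to 10%" above.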

On physics and waste of space

Architecture is the art of how to waste space.

— attributed to Philip Johnson —

It should be noted that the analysis above is based on two major assumptions. First, we are assuming that our understanding of physics does not change in a drastic manner. Obviously, if somebody finds a way to store terabits within a single atom, things will change (it doesn’t look likely in the foreseeable future, especially taking the uncertainty principle into account, but strictly speaking, anything can happen). The second assumption is that when speaking about address space, we are assuming that the address space is not wasted. Of course, it is possible to use as much as a 1024-bit address space to address a mere 64K of RAM, especially if such an address space is allocated in a manner similar to the allocation of IPv4 addresses in the early days (“here comes IBM, let’s allocate them a small portion of the pool – just a class A network, or 1/256 of all IP addresses”). If there is a will to waste address space (which can be driven by multiple factors – from the feeling that the space is infinite, as was the case in the early days of IPv4 addresses, to the marketing reason of trying to sell CPUs based on the perception that a 128-bit CPU is better than a 64-bit one just because the number is twice as big) – there will be a way. Still, our claim that ‘2^256 bytes of memory is not practically achievable’ stands even without this second assumption. In terms of the address bus (keeping in mind that an address bus is not exactly the same as an address space, and still relying on the first assumption above), it can be restated as ‘a 256-bit address bus is more than anyone would ever need’.


References


Acknowledgements

This article was originally published in Overload Journal #112 in December 2012 and is also available separately on the ACCU web site. Re-posted here with the kind permission of Overload. The article has been re-formatted to fit your screen.

Cartoons by Sergey Gordeev (IRL) from Gordeev Animation Graphics, Prague.


Comments

  1. Martin Kunev says

    For the necessity of >64-bit CPUs to arise, it is not required to have 2^128 bytes of memory. 2^64 + 1 will suffice. Taking into account that programming languages like C mandate an address that does not contain a valid memory address, you no longer need that + 1. Using the calculation described above (1 Si atom for 1 byte), that amount of memory will take about 3 cubic millimeters. In addition, most operating systems work with virtual memory so memory addressing does not only facilitate memory access (memory addresses can have a number of uses).

    Yes, probably we won’t need 128-bit CPUs any time soon, but it is not that improbable that we will need them at some point in the future.

    • "No Bugs" Hare says

      Technically you’re right, but the point is that one way or another, the end of this exponent-of-exponent road is very close (that is, unless we start wasting addresses the same way IPv4 addresses were wasted in the very beginning, with whole Class A networks going to a single company just because there were “plenty” of them).

  2. Arnaud says

    It seems to me that your “Physical limit on memory” part is not taking into account the expansion of the universe. Our current understanding of this expansion is that the biggest part of the universe is unreachable. Put simply, the maximum speed is the speed of light, but the rate at which space is appearing between distant galaxies can be above the speed of light, making distant galaxies unreachable. An estimation from Lawrence Krauss (https://www.youtube.com/watch?v=8Cnj8MIQ0HY at 36’30) is that the total amount of energy accessible is 3e67 Joules (i.e. 3e67/(3e8)^2 kg). Even assuming Hydrogen atoms and 1 bit per transistor we end up with only 7e76 bits, which is below 2^256. So if what we know of the laws of nature is true, we will never be able to make a system containing more than 2^256 bits.

  3. Michel says

    Another possible twist to this story – and why I can build a 2^256 byte memory today – is that you forgot to mention how fast it needs to be and how long it should last. If we can make it as slow as 10 reads-or-writes/sec and it is guaranteed to work for only 2 years, the device need not be dense; a sparse device storing key/values would suffice. The key would be your 256-bit address and the value would be the byte stored at that address. The device only needs to be big enough to store 2*365*24*3600*10 < 1G keys, and it would still be a (slow) 2^256 byte memory chip.

    • "No Bugs" Hare says

      > it would still be a (slow) 2^256 byte memory chip.

      Funny observation, but… while it may indeed be understood as a “memory chip” depending on the definitions used, as long as it cannot store 2^256 bytes at the same time, it is quite obviously a kind of cheating/angling (your approach effectively makes a 2^256 memory chip indistinguishable from a much smaller chip – one restricted by total_number_of_writes << 2^256 – which goes against the common understanding of what a memory chip is; extending this very logic, we could say that a chip which allows only 1 single write during its lifetime is still an any-finite-size memory chip – which is, well, strange).
