Ant - That is, indeed, a container of more than adequate size... But I have a concern the 128 bits will fall through the steel mesh.
Bring it on, MS. There is nothing wrong with trying to push the proverbial envelope. I remember when I bought a Pentium 233 with 32 MB of RAM and a 6 GB hard drive and my friends said that was overkill... LMAO. I agree that software companies are behind the curve with 64-bit programs, but I am all for improvements. Don't hate change, embrace it. Many people fear the unknown!
Let me ask you all a question: "How many of you are going to turn down a chance to be the first owner of a 128-bit system, and also to be the first on the beta list to test the new OS for a 128-bit system?"
I know my name is going to be at the top (if I live that long).:)
Last edited by Lee; 09 Oct 2009 at 03:27.
I'd like to offer a counterpoint, but only because I'm like that, not because I'm not excited by the thought of bigger and better computers. (I totally agree with Lee, and with Antman's "FTW".)
As Clive Sinclair reportedly retorted when somebody asked him why he settled for the 8-bit Z80 processor in his ZX Spectrum (at a time when 16-bit architectures were becoming available), "because I couldn't find a 4-bit chip I really liked."
The outwardly facetious comment is in fact insightful.
Previously, the transition to greater "bittiness" was always forced upon us by simple mathematical exhaustion of the describable memory address range. The 16-bit [segment:offset] addressing scheme was designed to defeat the overly limiting 64KB flat address space. Even so, 1MB became insufficient within a few years. When the first 80386 PC came out in the mid-80s, the 32-bit machine was capable of a flat 4GB address space. That remained adequate for around two decades. By the mid-90s, large databases were pushing the envelope, so as a stopgap Intel brought out PAE with 36-bit physical addressing, for a grand total of 64GB. It wasn't uncommon for massive server systems to nudge against that limit by 2000 or a few years later.
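For anyone curious how that [segment:offset] scheme stretched 16-bit registers to a 1MB range, here is a quick sketch of the real-mode arithmetic (the function name is just mine for illustration):

```python
# Real-mode x86 address translation: physical = segment * 16 + offset.
def real_mode_address(segment: int, offset: int) -> int:
    """Translate a 16-bit segment:offset pair to a ~20-bit physical address."""
    return (segment << 4) + offset

# The highest reachable address slightly exceeds the 1 MiB line,
# which is where the "High Memory Area" trick later came from:
top = real_mode_address(0xFFFF, 0xFFFF)
print(hex(top))          # 0x10ffef
print(top - 0x100000)    # 65519 bytes past 1 MiB
```

Note the overlap: many different segment:offset pairs name the same physical byte, which is part of why the scheme was considered awkward even before 1MB ran out.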
The 64-bit address space is 16 exabytes in size. However, Windows currently artificially limits itself to a mere fraction of that (though that will change in future versions). This time around, we are not even close to exhausting the address space yet, so the talk of double the bittiness is being driven by other factors.
The number of available general purpose registers, the way they are used, the efficiency of the calling convention... Currently, the potential of a new architecture is more exciting because of the chance to improve in those areas than because of the meaninglessly vast 128-bit address space. Even the 64-bit one we've got now is still almost entirely empty.
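The scale jumps being discussed above are easy to check directly; this is plain arithmetic, not tied to any particular CPU, and the helper name is my own:

```python
# Flat address-space size in bytes for each address width mentioned in the thread.
def address_space_bytes(bits: int) -> int:
    return 1 << bits

# 16-bit: 64 KiB; 20-bit: 1 MiB; 32-bit: 4 GiB; 36-bit (PAE): 64 GiB;
# 64-bit: 16 EiB; 128-bit: far beyond any conceivable physical storage.
for bits in (16, 20, 32, 36, 64, 128):
    print(f"{bits:3d}-bit: {address_space_bytes(bits):,} bytes")

# The 64-bit space really is 16 exabytes (16 * 2**60 bytes):
assert address_space_bytes(64) == 16 * 2**60
```

Run it and the 128-bit line makes the point on its own: the number has 39 digits, which is why exhaustion of the address range can't be the driver this time.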
I expect that none of this would be happening yet if Intel had achieved greater uptake of its IA-64 (Itanium) product.
http://community.winsupersite.com/bl...-only-lol.aspx
Thurrott says this is a bogus rumour.
Robert Morgan - LinkedIn
The seed - as archived at Google.