Monday, February 22, 2010

Singularity Metrics: Manycore Processors

As the billions of smart devices using single microprocessors are replaced by manycore ‘brains,’ we can expect to reach trillions of smart chips conducting much more efficient parallel processing within just seven years. The Era of Moore’s Law will give way to the Era of Amdahl’s Law [1].

During the Era of Moore's Law, miniaturized microprocessors produced smaller, faster computers. Manycore systems are now replacing that performance hierarchy with efficient parallelism, thereby redefining the Information Revolution.

Moore's Law is the empirical observation that the transistor count of chips doubles roughly every 18 months. As transistors shrank, the density and complexity of the circuits increased. In 2002, Intel planned on achieving 30-gigahertz chips by today using 10-nanometer technology. But Intel was wrong.

Chip makers are still stuck near four gigahertz, and the focus has shifted from obtaining greater single-processor speed to exploiting manycore processors.

Manycore processors provide high-density computer processing power with scalability and less heat. Just as the transistor replaced the vacuum tube, manycore systems are now replacing the single-microprocessor system as more efficient, cheaper, and more reliable components.

Conventional wisdom for PCs predicts a doubling of the number of cores on a chip with each new silicon generation. Within a few years there will be 100-core machines. Applications will require new concurrent programs. Windows 7 and Windows Server 2008 can already work with up to 256 logical processors. There is conviction that reaching 1,000 cores on a die is possible with 30 nm technology. Cisco already ships routers with 188 cores built in 130 nm technology (see Figure 1).

The goal is easy-to-write programs executing efficiently on highly parallel systems using 1000s of cores per chip.

This means that growing software complexity will require fundamentally rethinking architecture and shifting the paradigm from Moore’s Law to Amdahl’s Law.

Amdahl's Law is given by:

Speedup ≤ 1 / (F + (1-F) / N)

Figure 1

Amdahl's law describes how much a program can theoretically be sped up by additional computing resources, based on the proportions of its parallelizable and serial components. Here F is the fraction of the calculation that must be executed serially, given as [2]:

F = s / (s + p)

where s = serial execution time and p = parallel execution time.

Amdahl's law then says that on a machine with N processors, as N approaches infinity, the maximum speedup converges to 1/F, or (s + p)/s.
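The bound above is easy to see numerically. The short Python sketch below (an illustration, not from the original post) evaluates Amdahl's formula for a program whose serial fraction F is 5 percent, showing that even 1,000 cores cannot push the speedup past 1/F = 20:

```python
def amdahl_speedup(F, N):
    """Maximum speedup on N processors when a fraction F of the
    work must run serially (Amdahl's Law)."""
    return 1.0 / (F + (1.0 - F) / N)

# With F = 0.05, speedup saturates well below 1/F = 20
# no matter how many cores are added.
for n in (2, 16, 256, 1000):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

Running it shows the diminishing returns: going from 256 to 1,000 cores buys almost nothing when even a twentieth of the work is serial, which is why the manycore era forces software to shrink F, not just raise N.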

What does this mean for the metrics of technology growth?

It means fast just got faster.


1. Alesso, H. P., Connections: Patterns of Discovery, John Wiley & Sons Inc., New York, NY, 2008.

2. Goetz, B., et al., Java Concurrency in Practice, Addison-Wesley, Stoughton, MA, 2008.
