CSC103: DT's Notes 1

 
So computers have had to evolve fast to keep up with our increasing sophistication in what we get and what we expect from them.
  
But there's a problem with the speed at which processors and memory have improved.  While processors have doubled in performance every two years for almost four decades now, memory has not.  At least not as fast, and it appears that memory speed is now barely improving at all.  The figure below, taken from an article by Sundar Iyer for ''EE Times''<ref name="Iyer">Sundar Iyer, Breaking through the embedded memory bottleneck, part 1, ''EE Times'', July 2012, http://www.eetimes.com/document.asp?doc_id=1279790</ref>, shows the gap between the performance of processors and that of memory.  The bad news is that while processors keep getting faster at computation, memory has not been able to keep up the pace, so processors are in effect limited by memory, and there does not seem to be a solution in sight.  At least not one that uses the current semiconductor technology.
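
To get a feel for how fast such a gap opens up, here is a minimal sketch in Python that simply compounds two growth rates year after year.  The 50%-per-year rate for processors and 7%-per-year rate for memory are illustrative assumptions in the spirit of the classic processor-versus-DRAM comparison, not numbers taken from Iyer's article.

<source lang="python">
# Illustrative assumption: processors improve ~50% per year, memory ~7% per year.
# These rates are for demonstration only, not measured values.
PROCESSOR_RATE = 0.50
MEMORY_RATE    = 0.07

processor = 1.0    # relative processor performance (year 0 = 1.0)
memory    = 1.0    # relative memory performance    (year 0 = 1.0)

for year in range(0, 21):
    if year % 5 == 0:
        print("year %2d: processor x%8.1f   memory x%5.1f   gap x%6.1f"
              % (year, processor, memory, processor / memory))
    processor *= 1 + PROCESSOR_RATE
    memory    *= 1 + MEMORY_RATE
</source>

Even with these rough numbers, the ratio between the two curves grows by roughly a factor of five every five years, which is why the gap keeps widening.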
 
That's one aspect of the Von Neumann bottleneck.  Using our previous metaphor of the cookie monster, it is akin to having our cookie monster walk on a treadmill where cookies are dropped in front of him at regular intervals: the cookie monster gets faster and faster at walking the treadmill and eating cookies, but the treadmill, while speeding up as well, cannot keep up with the cookie monster.
 
The other aspect of the Von Neumann bottleneck is the way the processor is the center of activity for the computer.  Everything has to go through it.  Instructions, data, everything that is in memory is '''for''' the processor.  The processor will have to access each item, read it, and modify it at least once during its time in memory, and sometimes multiple times.  This puts a huge demand on the processor.  Remember the Accumulator register (AC) in our processor simulator?  Any data whatsoever that is in memory will at some point have to go into AC, to be either moved somewhere else or modified.  To get an idea of what this represents, imagine that the AC register is the size of a dime.  Since a register holds one memory word, a memory word would be the same size.  In today's computers, the Random Access Memory (RAM) contains from 4 billion to 8 billion memory words; at that scale, 4 billion dimes laid flat would cover far more than a football field.  Von Neumann gave us a design where the computation is done in a tiny area while the data spans a huge area, and there is no other way to process the data than to bring them into the processor.  That's the second aspect of the Von Neumann bottleneck.
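
To make this concrete, here is a small sketch of such a machine, written in Python rather than in the simulator's assembly language: a toy von Neumann computer where every transfer or update of a memory word has to pass through the single accumulator.  The names <code>load</code>, <code>store</code>, and <code>add</code> are chosen for illustration and are not necessarily the simulator's actual mnemonics.

<source lang="python">
# Toy von Neumann machine: one accumulator (AC) and a few words of RAM.
# Every operation on memory goes through AC -- that is the bottleneck.

memory = [7, 3, 0, 0]   # a tiny RAM of 4 words
AC = 0                  # the single accumulator register

def load(addr):         # copy a memory word into AC
    global AC
    AC = memory[addr]

def store(addr):        # copy AC back into a memory word
    memory[addr] = AC

def add(addr):          # AC = AC + memory word
    global AC
    AC = AC + memory[addr]

# Even just copying word 0 into word 2 has to go through AC:
load(0)
store(2)

# Adding words 0 and 1 and storing the result in word 3: through AC again.
load(0)
add(1)
store(3)

print(memory)           # prints [7, 3, 7, 10]
</source>

With billions of words in RAM and a single register at the center, every word the program touches has to make this round trip.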
 
There have been attempts at working around this design flaw, and some have helped performance to some extent, but we are still facing a major challenge with the bottleneck.  Possibly the most successful design change has been the replication of processors on the chip.  Intel and other manufacturers have created ''dual-core'', ''quad-core'', ''octa-core'', and other designs where 2, 4, 8 or more processors, or ''cores'', are grouped together on the same piece of silicon, inside the same integrated circuit.  Such designs are complex because the cores have to share access to the memory, and have to be careful when operating on the same data.  While the improvements in performance have been encouraging, some research has hinted that performance may actually decrease as the number of cores increases, as illustrated in the graph below.
<br />
<center>[[File:MultiCorePerformance.jpg]]</center>
<br />
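
A back-of-the-envelope model hints at why adding cores eventually stops paying off.  The sketch below, an Amdahl-style calculation in Python, assumes that some fixed fraction of the work (20% here, an arbitrary illustrative figure) is memory traffic for which the cores must take turns on the shared memory; only the remaining compute portion gets faster with more cores.  Real contention and cache-coherence overheads can make the curve actually turn downward, as in the graph above.

<source lang="python">
# Assumption for illustration: 20% of the work is serialized on the shared memory.
MEMORY_FRACTION = 0.20

def speedup(cores):
    compute_time = (1 - MEMORY_FRACTION) / cores   # compute part parallelizes perfectly
    memory_time  = MEMORY_FRACTION                 # memory part does not parallelize
    return 1.0 / (compute_time + memory_time)

for cores in [1, 2, 4, 8, 16, 32, 64]:
    print("%3d cores -> speedup x%.2f" % (cores, speedup(cores)))
</source>

No matter how many cores are added, the speedup in this simple model can never exceed 1 / 0.20 = 5: the shared memory, not the number of processors, sets the ceiling.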
 
 
 
<br />

<center>[[Image:MooresLawProcessorMemoryGap.png]]</center>


--© D. Thiebaut 08:10, 30 January 2012 (EST)















