CSC103: DT's Notes 1
the design of the EDVAC computing machine at the Moore School of Electrical Engineering at the University of Pennsylvania
wrote a report with his recommendations for how a computing machine should be organized and built. There are many remarkable things about this report:
* The first is that it was a synthesis of the many good ideas of the time from the few people who were involved in building such machines, and as such it presented a blueprint for the machines to come.
* Even more remarkable is that the draft was never officially published, yet it circulated widely within the small group of experts interested in the subject, and when new research groups started getting interested in building calculating machines, they would use the report as a guide.
* Possibly the most remarkable thing about Von Neumann's design is that chances are very high that the computer you are currently using to read this document (laptop, desktop, tablet, phone) contains a processor built on these original principles!
− | |||
These principles were that good! And they offered such an attractive design that for the past 70 years or so, engineers have kept building computers this way.
So what is this bottleneck and why is it bad?
Before we answer this question, we have to understand that we human beings have had an ever-increasing thirst for more and more complex programs that solve more and more complex problems. This appetite for solving larger and harder problems has forced computers to evolve in two simple, complementary ways: with each new generation processors have had to become faster, and the size of the memory has had to increase as well.
− | |||
Nate Silver provides a very good example of these complementary pressures on computer hardware in his book ''The Signal and the Noise''<ref name="silver">Nate Silver, ''The Signal and the Noise: Why So Many Predictions Fail-but Some Don't'', Penguin, 2012.</ref>. Recent hurricanes have shown an interesting competition between different models of weather prediction, in particular in predicting the path of hurricanes over populated areas. Some models are European, others American, and Superstorm Sandy in October 2012 illustrated that some models were better predictors than others. In that particular case, the European models predicted the path of Sandy more accurately than their American counterparts. Since then, there has been a push for the National Center for Atmospheric Research (NCAR) to upgrade its computing power in order to increase the accuracy of its models. How are the two related?

The reason is that to predict the weather one has to divide the earth into quadrants forming large squares in a grid covering the earth. Each square delimits an area of the earth for which many parameters are recorded using various sensors and technologies (temperature, humidity, daylight, cloud coverage, wind speed, etc.). A series of equations links the influence that each parameter in a cell of the grid exerts on the parameters of neighboring cells, and a computer model simply looks at how the different parameters have evolved in a given cell of the grid over a period of time, and how they are likely to continue evolving in the future. The larger the grid cells, though, the more approximate the prediction. A better way to enhance the prediction is to make the cells smaller, for example by dividing the side of each square in the grid in half. If we do that, though, the number of cells in the grid increases by a factor of four. If you are not sure why, draw a square on a piece of paper and then divide it in half vertically and in half horizontally: you will get 4 smaller squares inside the original one.
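To make the idea of cells influencing their neighbors a little more concrete, here is a minimal sketch of such a grid computation in Python. It is only an illustration, not how real forecast models work: the grid size, the single "temperature" parameter, and the move-toward-the-average update rule are all made-up simplifications.

<source lang="python">
# Toy "weather" model: a single parameter (temperature) on a small 2-D grid.
# At every time step, each cell is nudged toward the average of its neighbors.
# Grid size, initial values, and update rule are illustrative assumptions.

SIZE = 8      # 8 x 8 grid of cells
STEPS = 5     # number of simulated time steps

# Start with one warm cell in the corner of an otherwise uniform grid.
grid = [[20.0 for _ in range(SIZE)] for _ in range(SIZE)]
grid[0][0] = 30.0

def step(g):
    """Return a new grid where each cell moves 25% of the way toward
       the average of its (up to 4) neighbors."""
    new = [row[:] for row in g]
    for r in range(SIZE):
        for c in range(SIZE):
            neighbors = []
            if r > 0:        neighbors.append(g[r - 1][c])
            if r < SIZE - 1: neighbors.append(g[r + 1][c])
            if c > 0:        neighbors.append(g[r][c - 1])
            if c < SIZE - 1: neighbors.append(g[r][c + 1])
            avg = sum(neighbors) / len(neighbors)
            new[r][c] = g[r][c] + 0.25 * (avg - g[r][c])
    return new

for t in range(STEPS):
    grid = step(grid)

print("corner cell after", STEPS, "steps:", round(grid[0][0], 2))
</source>

The important point is that every time step visits every cell once, so the amount of work grows directly with the number of cells in the grid.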
What does that mean for the computation of the weather prediction, though? Well, if we have 4 times more squares, then we need to gather four times more data, one set for each cell of the new grid, and there will be 4 times more computation to perform. But wait! The weather does not happen only at ground level; it also takes place in the atmosphere. So our grid is not a grid of squares, but a three-dimensional grid of cubes. And if we divide the side of each cube in half, we get eight new sub-cubes. So we actually need eight times more data, and we will have eight times more computation to perform. But wait! There is also another element that comes into play: time! Winds travel at a given speed. So the computation that expects wind to enter one side of our original cube at some moment and exit the opposite side some interval of time later needs to be performed more often, since that same wind will now cross a sub-cube of the grid twice as fast as before.
So in short, if NCAR decides to refine the grid it uses to compute its weather prediction and divides the size of the cells by two, it will have 8 x 2 = 16 times more computation to perform. And since weather prediction takes a lot of time and should be done in no more than 24 hours to have a good chance of actually predicting tomorrow's weather, performing 16 times more computation in the same 24 hours will require a new computer with the following (the short sketch after this list works out the arithmetic):
* a processor 16 times faster than the last computer used,
* a memory that can hold 16 times more data than previously.
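Here is the back-of-the-envelope arithmetic written out as a small Python sketch; the factor of 2 per spatial dimension and the extra factor of 2 for the time step are the assumptions described above.

<source lang="python">
# Cost of halving the cell size in a 3-D weather grid.
refinement = 2                              # each cell side is divided by 2
spatial_dims = 3                            # longitude, latitude, altitude
space_factor = refinement ** spatial_dims   # 2^3 = 8 times more cells
time_factor = refinement                    # time steps twice as frequent

total_factor = space_factor * time_factor
print("cells increase by a factor of", space_factor)          # 8
print("computation increases by a factor of", total_factor)   # 16
</source>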
Nate Silver makes the clever observation that since computer performance has been doubling roughly every two years<ref name="mooreslaw">Moore's Law, Intel Corporation, 2005. ftp://download.intel.com/museum/Moores_Law/Printed_Material/Moores_Law_2pg.pdf</ref>, getting a 16-fold increase in performance requires buying a new computer after 8 years, which is roughly the frequency with which NCAR upgrades its main computers!
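This observation is easy to verify: at one doubling every two years, a 16-fold improvement takes log2(16) = 4 doublings, or 8 years. A short check in Python:

<source lang="python">
import math

doubling_period_years = 2     # rough Moore's-law doubling period
needed_speedup = 16           # from the grid-refinement argument above

doublings = math.log2(needed_speedup)        # 4 doublings
years = doublings * doubling_period_years    # 8 years
print("years until computers are", needed_speedup, "times faster:", years)
</source>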
So computers have had to evolve fast to keep up with our increasing sophistication in what we expect from them.
But there is a problem with the rates at which processors and memory have improved. While processors have doubled in performance every two years for almost four decades now, memory has not improved nearly as fast. The figure below, taken from an article by Sundar Iyer for ''EE Times''<ref name="Iyer">Sundar Iyer, Breaking through the embedded memory bottleneck, part 1, ''EE Times'', July 2012, http://www.eetimes.com/document.asp?doc_id=1279790</ref>, shows the gap between the performance of processors and that of memory.
<br />
<center>[[Image:MooresLawProcessorMemoryGap.png]]</center>
<br />
<br />
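To get a feel for how such a gap opens up, here is a small illustrative calculation. The two yearly growth rates below are assumptions chosen only for the example (processor performance roughly doubling every two years, memory improving much more slowly); the exact numbers differ from those in the figure, but the compounding effect is the same.

<source lang="python">
# Illustrative only: compound two different yearly improvement rates
# and watch the processor/memory gap grow. Both rates are assumptions,
# not measured values.

processor_growth_per_year = 1.41   # about doubles every 2 years (1.41 * 1.41 is about 2)
memory_growth_per_year = 1.10      # assumed much slower improvement

processor, memory = 1.0, 1.0
for year in range(21):             # 20 years of compounding
    if year % 5 == 0:
        print("year", year, ": processor x", round(processor, 1),
              " memory x", round(memory, 1), " gap x", round(processor / memory, 1))
    processor *= processor_growth_per_year
    memory *= memory_growth_per_year
</source>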
--© D. Thiebaut 08:10, 30 January 2012 (EST)