CSC352 Resources

Resources: References & Bibliography for CSC352

==General Knowledge Papers==

==Papers, Articles and University Courses on Parallel & Distributed Processing==

==Videos: Big Data and Analytics==

* A video by LinkedIn's Chief Scientist DJ Patil. As a mathematician specializing in dynamical systems and chaos theory, DJ began his career as a weather forecaster working for the Federal government. He shares his observations on how analytics has changed in recent years, especially as Big Data becomes increasingly common.

* Roger Magoulas, from O'Reilly Radar, discusses "big data" (10 minutes).

* Jeff Veen: Designing for "Big Data", April 2009.
==Documentation on Python Threads==
 
<greenbox>
 
 
[[Image:smilingPython.png| right| 100px]]
 
 
* [http://python.org/ The main Python reference]
 
 
* [http://www.slideshare.net/pvergain/multiprocessing-with-python-presentation Multiprocessing with Python], a presentation by Jesse Noller, who wrote PEP 371.
 
 
* [http://blip.tv/file/2232410 Video Presentation] on the Python GIL (found by Diana). A short illustrative sketch of the GIL's effect appears below the box.
 
</greenbox>
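The links above cover Python threads, the <tt>multiprocessing</tt> module, and the Global Interpreter Lock (GIL). As a quick, illustrative sketch of why the GIL matters (the task, the worker count, and the timing code below are invented for this example and are not taken from any of the references), the following times the same CPU-bound loop run with four threads and then with four processes:

<source lang="python">
# Illustrative sketch only: compare a CPU-bound task run with threads
# versus processes.  Because of the CPython GIL, the threaded run
# usually shows little or no speedup, while the process-based run can
# use several cores at once.
from threading import Thread
from multiprocessing import Process
import time

def count_down(n=10000000):
    # purely CPU-bound busy loop
    while n > 0:
        n -= 1

def run_all(workers):
    # start all workers, wait for them, and return the elapsed time
    start = time.time()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.time() - start

if __name__ == '__main__':
    t = run_all([Thread(target=count_down) for _ in range(4)])
    p = run_all([Process(target=count_down) for _ in range(4)])
    print("4 threads:   %.2f seconds" % t)
    print("4 processes: %.2f seconds" % p)
</source>

On a multi-core machine the process-based run is typically several times faster than the threaded one, which is essentially the motivation behind the <tt>multiprocessing</tt> module discussed in the presentation above.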
 
  
 
==Documentation on XGrid==
 
<bluebox>
 
 
[[Image:xgridLogo.png | right|100px]]
 
  
===General References===
 
* [http://www.macos.utah.edu/documentation/administration/xgrid/xgrid_presentation.html Xgrid documentation from the University of Utah]: lots of good stuff.
 
 
===Applications===

* [http://reference.wolfram.com/mathematica/guide/StandaloneMathematicaKernels.html Using the Mathematica Kernel].
 
 
</bluebox>
 
  
 
==Documentation on Cloud Computing, Map-Reduce, & Hadoop==
 
<blockquote>
"Failure is the defining difference between distributed and local programming"
 
Ken Arnold, CORBA Designer
 
 
</blockquote>
 
<tanbox>
 
 
__NOTOC__
 
 
===Literature===
 
* [[Media:ApacheChapterOnStreaming.pdf | Apache's chapter on Hadoop Streaming]], Apache.org.
* [http://answers.oreilly.com/topic/460-how-to-benchmark-a-hadoop-cluster/ How to Benchmark a Hadoop Cluster], by Tom White, [http://answers.oreilly.com O'Reilly Answers], Oct. 2009.
* [[Image:hadoopOReilly.jpg | right |100px]] [http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/0596521979 Hadoop: The Definitive Guide], Tom White, O'Reilly Media, June 2009, ISBN 0596521979. The book's Web site, http://www.hadoopbook.com/, provides the data sets used in the examples.
 
* Dan Sullivan [http://nexus.realtimepublishers.com/dgcc.php The Definitive Guide to Cloud Computing], IBM, 2010, ''in production'' (but can be downloaded in parts).
 
 
* Dean, J., and S. Ghemawat, [http://labs.google.com/papers/mapreduce-osdi04.pdf MapReduce: Simplified Data Processing on Large Clusters], Dec. 2004,  ([[media:MapReduce1204.pdf|cached copy]])
 
 
* Matthews, S., & Williams, T., [http://www.biomedcentral.com/1471-2105/11/S1/S15 MrsRF: an efficient MapReduce algorithm for analyzing large collections of evolutionary trees], ''BMC Bioinformatics'', 11 (Suppl 1), 2010. <font color=magenta>(The authors show that speedups of close to 18 on 32 cores can be reached when processing 20,000 trees of 150 taxa each and 33,306 trees of 567 taxa each.)</font>
 
 
* Chris K Wensel, [http://www.manamplified.org/archives/2008/11/hadoop-is-about-scalability.html Hadoop Is About Scalability, Not Performance], www.manamplified.org, November 12, 2008.
 
* Pavlo, Paulson, Rasin, Abadi, DeWitt, Madden, and Stonebraker, [[Media:ComparisonOfApproachesToLargeScaleDataAnalysis.pdf |A Comparison of Approaches to Large-Scale Data Analysis]], SIGMOD-09, June 2009.
  
 
* [[Image:mapReduceTaskTimeLine.png|right|150px]]<u>TimeLine Graphs and Performance</u>
 
===Collections of Hadoop Papers and/or Algorithms===

===Presentations===

===Tutorials===
 
* [http://developer.yahoo.com/hadoop/tutorial/module4.html Yahoo Developer Network, Module 4: MapReduce Basics], a must-read!
 
 
* Python and streaming: a [http://atbrox.com/2010/02/08/parallel-machine-learning-for-hadoopmapreduce-a-python-example/ tutorial] by [http://atbrox.com atbrox.com]. A minimal word-count sketch in the same style follows below.
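To make the streaming model concrete, here is a minimal word-count sketch written in the Hadoop Streaming style covered by the tutorials above; the file names <tt>mapper.py</tt> and <tt>reducer.py</tt> are arbitrary names chosen for this example.

<source lang="python">
#!/usr/bin/env python
# mapper.py -- minimal word-count mapper for Hadoop Streaming.
# Reads text lines from stdin and emits "word<TAB>1" pairs on stdout.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))
</source>

<source lang="python">
#!/usr/bin/env python
# reducer.py -- minimal word-count reducer for Hadoop Streaming.
# The framework sorts the mapper output by key, so all counts for a
# given word arrive consecutively and can be accumulated.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    count = int(count)
    if word == current_word:
        current_count += count
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, count
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
</source>

The job is then launched along these lines (the exact path of the streaming jar depends on the Hadoop version and installation; the input and output directories are placeholders):

<pre>
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*streaming*.jar \
       -input books/ -output wordcount-out/ \
       -mapper mapper.py -reducer reducer.py \
       -file mapper.py -file reducer.py
</pre>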
 
===Installation Tutorials===
* Jochen Leidner and Gary Berosik, [http://arxiv4.library.cornell.edu/pdf/0911.5438v1 Building and Installing a Hadoop/MapReduce Cluster from Commodity Components], [http://arxiv4.library.cornell.edu/pdf/0911.5438v1 library.cornell.edu], 2009. ([[Media:HadoopInstallationOnUbuntuLeidnerBerosik.pdf|cached copy]])
  
 
===Media Reports===
 
===News Feed===

===Class Material on the Web===
 
===Software/Web Links===
 
 
[[Image:HadoopCartoon.png | 100px | right]]
 
*[http://www.hadoopstudio.org/docs/tutorials/nb-tutorial-jobdev-streaming.html Karmasphere Studio] for Hadoop. An interesting IDE worth looking into...
 
*[http://hadoop.apache.org/common/ Apache's Documentation on Hadoop Common]
 
 
**[http://hadoop.apache.org/common/docs/current/mapred_tutorial.html The Hadoop Tutorial] from Apache.  A "Must-Do!"
 
* The IBM MapReduce Tools for Eclipse plug-in brings Hadoop support to the Eclipse platform: features include server configuration, support for launching MapReduce jobs, and browsing the distributed file system (assumes Eclipse version 3.3 or above).
 
*[http://www.cloudera.com/blog/2009/04/20/configuring-eclipse-for-hadoop-development-a-screencast/ Configuring Eclipse for Hadoop] A video from Cloudera on setting up Hadoop... not easy to follow...
 
 
* [https://trac.declarativity.net/browser/hadoop-0.19.1-bfs/src/examples/org/apache/hadoop/examples The source code for the examples] that come with the Hadoop 0.19.1 distribution.  Includes WordCount, WordCountAggregate, WordCountHistogram, PiEstimator, Join, and Grep, among others.
 
* [http://github.com/datawrangling/spatialanalytics Spatial Analysis of Twitter Data with Hadoop, Pig, & Mechanical Turk], [http://github.com github.com], March 2010.
 
 
* <u>Generating Hadoop TimeLines</u>
 
 
** [http://people.apache.org/~omalley/tera-2009/job_history_summary.py Python script] from apache.org to generate the timeline ([[CSC352 ApacheHadoopJobHistorySummary.py | Apache's script to generate Hadoop Timeline ]]). A generic plotting sketch follows below.
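As a purely generic illustration of such a timeline (this is not Apache's script, and the task intervals below are invented), one horizontal bar per map or reduce task can be drawn with matplotlib:

<source lang="python">
# Illustrative sketch only: plot a Gantt-style task timeline.
# The (label, start, end) tuples are made up; a real script would
# extract them from the Hadoop job history logs instead.
import matplotlib.pyplot as plt

tasks = [                  # (label, start time in s, end time in s)
    ("map 0",     0,  42),
    ("map 1",     1,  47),
    ("map 2",     2,  44),
    ("reduce 0", 40,  95),
    ("reduce 1", 41,  98),
]

fig, ax = plt.subplots()
for y, (label, start, end) in enumerate(tasks):
    color = "steelblue" if label.startswith("map") else "darkorange"
    ax.barh(y, end - start, left=start, height=0.6, color=color)

ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([label for label, _, _ in tasks])
ax.set_xlabel("time (seconds)")
ax.set_title("Map and reduce tasks over time (invented data)")
plt.tight_layout()
plt.savefig("taskTimeline.png")
</source>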
 
===Videos===

====Visualizations====

* <u>Visualizations of Hadoop Data Transfers</u>, from the U. of Nebraska (more videos)
  
 
<br /><br /><center><videoflash>qoBoEzOkeDQ</videoflash></center><br /><br />
 
* Monitoring a Cluster of Computers as a school of fish (U. Nebraska)
::In this video, researchers at the University of Nebraska use fish swimming in a tank to display what is going on in a cluster of many computers working on a large problem. All of the computers are involved in a common computation, and each fish (as far as we can tell, given the lack of better information) represents a computer or a program running on a computer. As the user zooms in on a fish, a blue window pops up with vital information about that system's health. Fish change color and size to indicate a change in status: one could imagine that green fish represent computers not doing much work, while orange fish represent computers loaded with work. It is interesting that the researchers chose a school of fish to show the state of a cluster, relying on our ability to recognize visual cues quickly in order to understand what is going on accurately. This is certainly better than asking the same human beings to read tons of log files listing the dates and times of the many events occurring in the cluster.
 
<br /><br /><center><videoflash>LM1j_8sWSEk</videoflash></center><br /><br />
 
  
* The evolution of Hadoop (Code-Swarm)
 
<br />
 
  
</tanbox>
 
  
 
[[CSC352_Notes | <font color="white">Notes</font>]]
 
