Super Computing 2016
Revision as of 13:49, 14 November 2016

--D. Thiebaut (talk) 10:39, 14 November 2016 (EST)



SC16.jpg Tabernacle.jpg


I attended the SuperComputing 2016 (SC16) conference in Salt Lake City, UT, in Nov. 2016. The conference deals with all issues relating to supercomputing, high-performance computing (HPC), and any domain where massive amounts of computation are required.

Parallel Processing Tutorial


I attended a 1-day tutorial on parallel processing. I knew 95% of the material, but it was actually a great way of reviewing the material I will teach in my parallel and distributed processing class, CSC352, in the spring of 2017, and of seeing what I needed to drop from or add to the course. The tutorial was presented by Quentin F. Stout and Christiane Jablonowski of U. Michigan. I was happy to see that my current syllabus is quite up to speed with what is important in the field, and I just need to revise it by adding a new section on GPUs and accelerators. GPUs will likely become one of the biggest players in parallel computing in the next few years, driven by their use in machine learning, and deep learning in particular, for running neural-network operations. NVidia seems to be at the head of the game, with Intel's Xeon Phi a competitor in HPC and machine learning. C and C++ are still the languages of choice, and I will keep my module on teaching C & C++ (maybe not C++ classes) in the seminar.
In case you'd like to know the contents of the tutorial, here it is in its entirety, as a word cloud...  :-)

SC16TutorialWordCloud.jpg

Word-cloud generated on http://www.wordclouds.com/
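At its core, a word-cloud generator like the one on wordclouds.com just counts word frequencies and scales each word's display size by its count. A minimal sketch of that counting step in Python, using hypothetical sample text (not the tutorial's actual contents):

```python
# Count word frequencies, the first step of any word-cloud generator.
# The sample text below is made up for illustration only.
from collections import Counter

text = """parallel processing memory processor parallel
communication processor parallel speedup memory parallel"""

# Tokenize on whitespace and tally each word.
counts = Counter(text.lower().split())

# A word cloud maps frequency to font size; here we just rank the words.
for word, n in counts.most_common(3):
    print(word, n)
```

A real generator would then render each word at a size proportional to its count and pack the words into the image without overlap.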


EduHPC 16


There was also an afternoon session relating to HPC education in undergraduate schools, and I was on the technical committee for this session (Monday 11/14/16).


Super Moon


An important detail: my week in Salt Lake City coincided with the supermoon (a super moon for a supercomputing conference, that makes sense!), the biggest and brightest supermoon to rise in almost 69 years. So I had to document it! You can see some more of it on my Instagram feed.

SuperMoonNov132016.jpg



...