Super Computing 2016


--D. Thiebaut (talk) 10:39, 14 November 2016 (EST)



[[Image:SC16.jpg]] [[Image:Tabernacle.jpg]]


I attended the [http://sc16.supercomputing.org/ Supercomputing 2016 (SC16)] conference in Salt Lake City, UT, in November 2016. The conference deals with all aspects of supercomputing, high-performance computing (HPC), and any domain where massive amounts of computation are required.

==Parallel Processing Tutorial==


I attended a [http://sc16.supercomputing.org/presentation/?id=tut135&sess=sess189 1-day tutorial on parallel processing], presented by Quentin F. Stout and Christiane Jablonowski of the University of Michigan. I knew 95% of the material, but it was a great way of reviewing what I will teach in my parallel and distributed processing class, CSC352, in the spring of 2017, and of seeing what I need to drop from or add to the course. I was happy to find that my current syllabus is up to date with what matters in the field; I just need to add a new section on GPUs and accelerators. GPUs will likely become one of the biggest players in parallel computing over the next few years, given their use in machine learning, and in deep learning in particular, to run neural-network operations. NVIDIA seems to be at the head of the game, with Intel's [http://www.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html Xeon Phi] a competitor in HPC and machine learning. C and C++ are still the languages of choice, so I will keep my module on C and C++ (though maybe not C++ classes) in the seminar.
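
To give a flavor of that C module, here is a minimal sketch, assuming OpenMP as the threading model (the file name, array size, and program are my own illustration, not material from the tutorial): it sums an array with a parallel for loop, letting a reduction clause combine the per-thread partial sums.

<source lang="c">
/*
 * sum.c -- illustrative OpenMP example (not from the SC16 tutorial):
 * sum an array of doubles in parallel.
 * Compile with: gcc -fopenmp -o sum sum.c
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];   /* large array, kept off the stack */
    double sum = 0.0;

    /* initialize the array */
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* each thread sums a chunk of the array; reduction(+:sum)
       combines the per-thread partial sums without a data race */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (max threads = %d)\n", sum, omp_get_max_threads());
    return 0;
}
</source>

The reduction clause is the key teaching point: a naive <code>sum += a[i]</code> shared across threads would race, while the reduction gives each thread a private accumulator and merges them at the end.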

There was also an afternoon session on the education of HPC in undergraduate schools (Monday 11/14/16), and I served on the technical committee for this session.

An important detail: my week in Salt Lake City coincided with the supermoon (a supermoon for a supercomputing conference, which makes sense!), the biggest and brightest supermoon to rise in almost 69 years. So I had to document it! You can see more of it on my Instagram feed.

[[Image:SuperMoonNov132016.jpg]]



...