--D. Thiebaut (talk) 14:12, 20 May 2016 (EDT)
Google IO 2016, Mountain View, CA, May 18, 19, 20, 2016
Attended Google IO 2016, my first time at this event. It started with the keynote by CEO Sundar Pichai. It had a definitely different feel from the Amazon AWS keynote I attended last year in NYC: while the AWS keynote felt like a rock concert, Sundar Pichai's presentation was more that of a wise guru addressing his disciples. In both cases, technology is still a religious experience, and IO is the place to practice it.
At IO, three concepts struck me.
- Machine learning is it! It's mentioned all the time, everywhere, and any session that mentions it in its title has an impossibly long line of coders waiting to attend it. It's behind many of the tools Google is using or making available to coders.
- Integrating the user experience into a fluid environment is paramount. Users interested in a particular piece of information should not have to quit their app and launch another one; the information should appear where the user already is. Integrating apps together is key, and it makes for a fluid environment.
- Finally, one concept I found interesting because of my background with caching and cache memories is the idea of loading and launching apps on the fly, even if the app hasn't been installed on the device. The example given was of a user being sent a link to a video that requires a particular app to be viewed. Instead of forcing the user to install the app, the components of the app required to play the video are downloaded on the fly and launched. The user remains connected to her app. That's a form of caching: the app is temporarily downloaded, cached on the device, used, and later removed. Seamless. (A rough sketch of this idea follows below.)
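To make the caching analogy concrete, here is a minimal, purely illustrative Python sketch of the idea: a component is fetched only when first needed, kept on the device for a while, and evicted afterwards. The names (ComponentCache, fetch_component, the TTL values) are my own invention for illustration, not Google's actual on-demand app mechanism or API.

<pre>
import time

class ComponentCache:
    """Illustrative cache: download a component on first use, keep it
    temporarily, and drop it once its time-to-live has passed."""

    def __init__(self, ttl_seconds=600):
        self.ttl = ttl_seconds
        self.store = {}  # component name -> (component, time cached)

    def get(self, name, fetch):
        """Return a cached component, downloading it on first use."""
        self.evict_expired()
        if name not in self.store:
            # Not on the device yet: fetch only the piece that is needed.
            self.store[name] = (fetch(name), time.time())
        return self.store[name][0]

    def evict_expired(self):
        """Remove components that have outlived their time-to-live."""
        now = time.time()
        self.store = {k: v for k, v in self.store.items()
                      if now - v[1] < self.ttl}


# Usage: the video-player component is fetched only when the link is opened,
# then lives on the device for five minutes before being evicted.
cache = ComponentCache(ttl_seconds=300)
player = cache.get("video_player", fetch=lambda name: f"<downloaded {name}>")
</pre>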
The session I enjoyed the most was one given by Damien Henry, titled Machine Learning and Art. Two artists showcased their work: Mario Klingemann, creator of the Munich FabLab, and Cyril Diagne, professor of Interaction Design at ECAL (Lausanne, Switzerland), co-founder of the collective LAB212, and artist in residence at the Google Cultural Institute Lab in Paris, France. Both have used Google's deep learning and Deep Dream engines to process images, with stunning results (which I will research later and provide links for).