The Google I/O 2017 Keynote Live Blog

12:51PM EDT – Taking place today is Google’s annual I/O developer conference. Starting things off as always is the keynote, where we should receive updates on several Google technologies, initiatives, and other Google-centric projects
12:51PM EDT – Android O will obviously be a big focus
12:52PM EDT – Android Wear and Android TV are also good bets
12:52PM EDT – I’d also be surprised if we don’t see something hardware related, though that could be just about anything
12:54PM EDT – So far Wi-Fi is behaving, and the weather is clear, if a bit chilly in the shadows. So we should be in for a good show
12:56PM EDT – This is the second year Google has held the event at Mountain View’s Shoreline Amphitheatre; it remains a rather unique venue for a developer keynote
12:58PM EDT – The keynote is scheduled to start at 10am, so we should be getting underway in a few minutes
01:01PM EDT – And here we go
01:03PM EDT – Google is opening things up with an animated trailer
01:06PM EDT – Now on stage is Sundar Pichai
01:06PM EDT – Over 7000 people here
01:06PM EDT – There are a further 400 remote events
01:07PM EDT – Over one billion active Google users per month
01:08PM EDT – Android has crossed 2 billion active devices just this week
01:09PM EDT – Keep in mind there are only about 7.5 billion people on the planet in the first place
01:09PM EDT – Sundar is now talking about Google’s heavy investment in machine learning
01:10PM EDT – Smart Reply is being rolled out to Gmail users today
01:10PM EDT – First subject: voice
01:11PM EDT – Google’s voice dictation word error rate is down to 4.9%, from 6.1% just six months ago
01:11PM EDT – Discussing how machine learning has allowed the Google Home device to use 2 microphones instead of 8, and still get good voice recognition accuracy
01:12PM EDT – Image recognition has similarly improved
01:12PM EDT – Google says their vision error rate is now lower than the human error rate
01:13PM EDT – New Google initiative being announced today: Google Lens
01:13PM EDT – Will ship first in Google Assistant and Photos
01:14PM EDT – Using image recognition to do smart things with the information in photos: identifying flowers, reading a Wi-Fi router’s SSID, etc
01:15PM EDT – Google is rethinking its computational architecture; the company wants to build AI-first data centers
01:15PM EDT – The core of this is Google’s Tensor Processing Units, which they started using last year
01:15PM EDT – Now recapping machine learning training versus inference
01:16PM EDT – Google’s TPU was optimized for inference. Training requires greater precision
01:16PM EDT – Google is announcing their second-generation TPU today, the Cloud TPU
01:16PM EDT – 180 TFLOPS per Cloud TPU board, with 4 chips per board
01:17PM EDT – Cloud TPUs are coming to the Google Compute Engine as of today
01:17PM EDT – These new TPUs are optimized for both training and inference, implying that they operate at greater precision than the first generation
01:18PM EDT – Google’s AI efforts will be coming together under the Google.ai umbrella
01:18PM EDT – Research, tools, and applied AI
01:19PM EDT – Now discussing AutoML: using neural nets to design better neural nets
01:19PM EDT – “Learning to learn”
01:20PM EDT – Now discussing applications for AI such as digital pathology
01:22PM EDT – Google also has projects for DNA sequencing, chemistry, and drawing assistance tools
01:23PM EDT – Now on to Google Assistant
01:23PM EDT – Recapping it, rolling a demo video
01:24PM EDT – Now on stage: Scott Huffman
01:24PM EDT – Google Assistant is now available on 100M devices
01:26PM EDT – Google wants to further improve the conversational abilities of Google Assistant
01:26PM EDT – Google is adding the ability to type to the assistant starting today
01:27PM EDT – Further down the line, Google will be rolling out Lens for Assistant, allowing it to talk about what it’s seeing
01:28PM EDT – Combining Word Lens image translation with more standard Assistant natural language queries
01:29PM EDT – Also new: Google Assistant is now available on the iPhone
01:30PM EDT – This is part of a larger effort to get it into more devices
01:30PM EDT – Google is rolling out an Assistant SDK to allow product developers to add Assistant to their products
01:31PM EDT – Starting this summer, the Assistant will be available in German, French, and several other languages
01:32PM EDT – Now on stage: Valerie Nygaard
01:32PM EDT – Valerie is here to talk about how developers have been adding features to Google Assistant
01:32PM EDT – Actions on Google
01:33PM EDT – Starting today, Actions on Google will be supporting transactions
01:33PM EDT – Account creation, ordering/purchasing products, and more
01:33PM EDT – Demoing ordering food via Google Assistant on a phone
01:34PM EDT – Payment by fingerprint (Google Payments, I’d assume)
01:35PM EDT – Now on stage: Rishi Chandra
01:35PM EDT – Rishi is here with news about Google Home
01:36PM EDT – Launching in Canada, Australia, France, Germany, and Japan this summer
01:36PM EDT – 4 new features to be rolled out over the coming months
01:36PM EDT – 1) Proactive assistance
01:37PM EDT – Google will be starting simple here, and Home will have multi-user support
01:37PM EDT – 2) Hands-free calling for Google Home
01:38PM EDT – You can call any landline or mobile number in the US or Canada for free
01:38PM EDT – (If this isn’t a prime example of how cheap VoIP to POTS has become, I don’t know what is)
01:39PM EDT – Will be rolling out over the next few months
01:39PM EDT – Also, you can dial out using your personal number
01:40PM EDT – 3) Entertainment. Spotify’s subscription and free services will be available on Home. SoundCloud and Deezer as well
01:40PM EDT – Bluetooth support is also coming
01:41PM EDT – 4) Today Google is announcing support for visual responses with Google Home
01:42PM EDT – Google Home can notify your phone (iOS or Android) and TVs via Chromecast
01:43PM EDT – Continuing to demo TV-related functionality
01:43PM EDT – Playing YouTube videos, showing the weather, etc
01:44PM EDT – Also integrates with Google’s recently launched YouTube TV service
01:45PM EDT – Now on stage: Anil Sabharwal
01:45PM EDT – The next subject is Google Photos
01:46PM EDT – Recapping Google Photos advancements in the last couple of years
01:46PM EDT – 1.2 billion photos and videos are being uploaded each day
01:46PM EDT – Google is launching 3 new features for Photos
01:47PM EDT – 1) Suggested sharing
01:48PM EDT – Google will suggest photos to share, and who to share them with
01:48PM EDT – Photos will have a new “sharing” tab
01:49PM EDT – Demoing the feature
01:49PM EDT – Photos will send photos via SMS or email if the recipient doesn’t have a Google Photos account
01:50PM EDT – 2) Shared Libraries
01:51PM EDT – Share part or all of your library with others
01:53PM EDT – Demoing how library sharing works and how it can be set to auto-save certain photos
01:55PM EDT – The new sharing features will be rolling out in the coming weeks
01:55PM EDT – 3) Photo Books
01:56PM EDT – It will now be possible to make photo books from the Photos application, with an emphasis on making the process easy
01:56PM EDT – Demoing the feature now
01:57PM EDT – Google will be making the books themselves; both softcover and hardcover options are offered
01:57PM EDT – In the future, Google will be adding machine learning into the mix to suggest photo book compositions
01:58PM EDT – Photo books are available today on the website, and next week on the mobile apps
01:58PM EDT – Everyone at I/O will be receiving a free hardcover photo book
01:59PM EDT – Finally, Google Lens is being added to Google Photos
02:00PM EDT – IDing items in photos, etc
02:01PM EDT – Up next: YouTube news
02:02PM EDT – Now on stage: Susan Wojcicki
02:04PM EDT – YouTube has passed 1B hours/day viewed
02:05PM EDT – (So far this is more promotional than announcing anything new)
02:06PM EDT – Over 60% of watch time is now on mobile devices
02:07PM EDT – But living room viewing is the fastest growing segment
02:07PM EDT – Now on stage: Sarah Ali
02:08PM EDT – Google is adding support for 360-degree videos to the YouTube app on TVs
02:09PM EDT – Demoing controlling a 360-degree view using a TV remote
02:09PM EDT – (Looks like an NVIDIA Shield TV controller)
02:10PM EDT – Now on stage: Barbara Macdonald
02:11PM EDT – Barbara is here to discuss Google’s Super Chat feature, which actually launched earlier this year
02:11PM EDT – Demoing Super Chat
02:13PM EDT – Google is adding a new API feature to Super Chat, allowing Super Chats to trigger actions on devices
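(For reference, Super Chat data is surfaced through the YouTube Data API v3 superChatEvents endpoint. Below is a minimal Kotlin sketch of the kind of poller a developer might wire up, assuming OAuth credentials for the broadcaster’s channel; the triggerDevice callback and the crude string check stand in for whatever hardware integration a developer actually builds, and are purely illustrative.)

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: poll the YouTube Data API v3 superChatEvents endpoint and
// hand the raw response to a device-trigger callback. The access token and
// triggerDevice() callback are assumptions for illustration only.
fun pollSuperChats(accessToken: String, triggerDevice: (String) -> Unit) {
    val url = URL("https://www.googleapis.com/youtube/v3/superChatEvents?part=snippet&maxResults=5")
    val conn = url.openConnection() as HttpURLConnection
    conn.setRequestProperty("Authorization", "Bearer $accessToken")
    val body = conn.inputStream.bufferedReader().use { it.readText() }
    conn.disconnect()
    // A real implementation would parse the JSON and inspect each event's
    // snippet (amount, supporter, message) before deciding what to trigger.
    if ("superChatEvent" in body) {
        triggerDevice(body) // e.g. fire a webhook at connected hardware
    }
}
```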
02:15PM EDT – (This water balloon stunt is cringey)
02:17PM EDT – Now on stage: Dave Burke to talk about Android
02:18PM EDT – (2 Billion active devices; I have to wonder how many of them are up to date on security patches…)
02:18PM EDT – The 2B number is just phones and tablets, BTW
02:18PM EDT – Dave is currently recapping recent efforts. Android Wear, Android Auto, etc
02:20PM EDT – 82B apps and games installed via the Play Store in the last year
02:20PM EDT – Now focusing on Android O
02:20PM EDT – Release later this summer
02:21PM EDT – Dave is going to walk us through two themes that Google wants to focus on
02:21PM EDT – 1) Fluid Experiences
02:22PM EDT – Multitasking on Android O: Picture-in-picture on a phone
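(For developers: on O, picture-in-picture is an activity-level feature. A minimal sketch, assuming a hypothetical VideoActivity that has declared android:supportsPictureInPicture="true" in its manifest:)

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.util.Rational

// Hypothetical video playback activity that drops into a 16:9 PiP window
// when the user navigates away (Android O / API 26+).
class VideoActivity : Activity() {
    override fun onUserLeaveHint() {
        super.onUserLeaveHint()
        val params = PictureInPictureParams.Builder()
            .setAspectRatio(Rational(16, 9)) // requested window aspect ratio
            .build()
        enterPictureInPictureMode(params)
    }
}
```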
02:23PM EDT – Notification dots: a dot on an app’s icon to indicate that the app has a pending notification
02:24PM EDT – All fully automatic, without developers having to do any extra work
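(The dots are automatic, but they hang off O’s notification channels, so an app can still opt a given channel out. A minimal sketch with a made-up channel id:)

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Hypothetical helper: register a low-importance channel that suppresses
// the launcher-icon dot for its notifications (Android O / API 26+).
fun createQuietChannel(context: Context) {
    val channel = NotificationChannel(
        "background_sync",              // hypothetical channel id
        "Background sync",              // user-visible channel name
        NotificationManager.IMPORTANCE_LOW
    )
    channel.setShowBadge(false)         // no notification dot for this channel
    context.getSystemService(NotificationManager::class.java)
        .createNotificationChannel(channel)
}
```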
02:24PM EDT – Autofill with Google: autofill has been extended to apps
02:24PM EDT – Autofilling usernames, passwords, etc
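(For app developers, opting in is largely a matter of annotating views with autofill hints. A minimal sketch with hypothetical login fields:)

```kotlin
import android.view.View
import android.widget.EditText

// Hypothetical login form: tag the fields so the O autofill framework
// knows what they contain (API 26+).
fun markForAutofill(username: EditText, password: EditText) {
    username.setAutofillHints(View.AUTOFILL_HINT_USERNAME)
    password.setAutofillHints(View.AUTOFILL_HINT_PASSWORD)
    // Sensitive fields can also be excluded entirely:
    // someField.importantForAutofill = View.IMPORTANT_FOR_AUTOFILL_NO
}
```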
02:25PM EDT – Copy & paste: smart text selection
02:25PM EDT – On-device machine learning intelligently and automatically selects text based on context
02:25PM EDT – Select a whole name at once, etc
02:26PM EDT – Announcing TensorFlow Lite
02:27PM EDT – TensorFlow Lite will leverage a new API for neural network hardware acceleration
02:27PM EDT – The neural network API will be made available in O later this year
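(TensorFlow Lite itself isn’t shipping yet, but the on-device inference it targets boils down to loading a converted model and running it. A rough Kotlin sketch of that shape, with a made-up model file and a made-up 10-class output, based on how the eventual mobile interpreter API works:)

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Rough sketch of on-device inference with a TensorFlow Lite interpreter;
// the model file and output shape are illustrative assumptions only.
fun classify(modelFile: File, input: Array<FloatArray>): FloatArray {
    val output = Array(1) { FloatArray(10) }   // one batch, ten class scores
    val interpreter = Interpreter(modelFile)
    interpreter.run(input, output)             // single input, single output
    interpreter.close()
    return output[0]
}
```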
02:27PM EDT – 2) Vitals
02:28PM EDT – Vitals is focused on core functionality like security, performance, and battery life
02:28PM EDT – 3 foundational building blocks: security enhancements, OS optimizations, and developer tools
02:29PM EDT – Google is going to make their Google Play store security features more obvious
02:29PM EDT – Google Play Protect
02:29PM EDT – Letting users know the app has been scanned
02:29PM EDT – OS optimizations: faster boot time
02:30PM EDT – O will be adding limits to background execution
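(In practice this means long-running background services get reined in, and deferred work is expected to go through the scheduler APIs instead. A minimal sketch using JobScheduler, with a hypothetical SyncJobService:)

```kotlin
import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

// Hypothetical job: does its (trivial) work and reports completion.
class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean {
        // Real work should run off the main thread; here we finish immediately.
        jobFinished(params, false)
        return false // no work still pending
    }
    override fun onStopJob(params: JobParameters): Boolean = false
}

// Schedule the job instead of keeping a background service alive.
fun scheduleSync(context: Context) {
    val job = JobInfo.Builder(42, ComponentName(context, SyncJobService::class.java))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // e.g. Wi-Fi only
        .build()
    context.getSystemService(JobScheduler::class.java).schedule(job)
}
```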
02:30PM EDT – Play Console Dashboards, to help developers see what problems users are experiencing
02:31PM EDT – One more thing
02:32PM EDT – Google is adding a new programming language to Android
02:32PM EDT – Kotlin
02:32PM EDT – Fully ART compatible, and interops with existing Android apps
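(To illustrate the interop point: Kotlin classes call the existing Java-based Android framework directly, with no wrappers. A trivial sketch:)

```kotlin
import android.app.Activity
import android.os.Bundle
import android.widget.TextView

// Plain Java framework classes (Activity, TextView) used directly from Kotlin.
class HelloKotlinActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val label = TextView(this).apply { text = "Hello from Kotlin" }
        setContentView(label)
    }
}
```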
02:33PM EDT – A ton more features they aren’t going into right now, such as Project Treble
02:33PM EDT – First Android O beta out today
02:33PM EDT – But wait, there’s more
02:34PM EDT – Now on stage: Sameer Samat
02:34PM EDT – Sameer is here to talk more about Android
02:35PM EDT – How do we get smartphones to more people and more of the world?
02:35PM EDT – Announcing the successor to the Android One program, Android Go
02:36PM EDT – Android Go focuses on 3 things: optimized OS for low-end devices, smaller built-in apps, and a Play Store that highlights suitable apps (though all apps are accessible)
02:36PM EDT – For phones with 1GB or less of RAM
02:36PM EDT – Go devices can run with as little as 512MB
02:37PM EDT – Quick settings will include data usage information
02:37PM EDT – Chrome Data Saver will be on by default
02:38PM EDT – YouTube Go: a version of the app optimized for Android Go
02:38PM EDT – Save videos to watch them later. Peer-to-peer sharing of saved videos
02:40PM EDT – Improved keyboard tools for typing in other scripts
02:40PM EDT – “Building for Billions” best practices for Android Go
02:41PM EDT – Useful offline state, sub-10MB APK size, and better performance through GCM
02:41PM EDT – Starting with O, all devices with 1GB of RAM or less will get the Go configuration
02:41PM EDT – And all devices will eventually have a Go configuration
02:42PM EDT – Now on stage: Clay Bavor to talk about AR and VR
02:43PM EDT – LG’s next flagship phone will support Daydream
02:43PM EDT – Samsung GS8/GS8+ will add Daydream support this summer
02:44PM EDT – Daydream will now support stand-alone headsets as well
02:44PM EDT – (Qualcomm has been pushing this idea particularly hard)
02:45PM EDT – WorldSense tracking technology
02:45PM EDT – Inside-out tracking from the device
02:45PM EDT – Google will be taking a platform approach for standalone Daydream devices
02:46PM EDT – Working with Qualcomm, of course. Also working with HTC and Lenovo on headsets
02:46PM EDT – Standalone devices later this year
02:46PM EDT – Update on Project Tango
02:47PM EDT – The second-generation Tango phone will be the ASUS ZenFone AR, which will go on sale later this summer
02:48PM EDT – Visual Positioning Service
02:48PM EDT – The phone determines its position by looking around and identifying visual features
02:49PM EDT – Computer vision meets positioning
02:49PM EDT – Adding a new AR mode to Google Expeditions for education
02:51PM EDT – And back to Sundar
02:53PM EDT – Sundar is sharing a developer story on TensorFlow
02:53PM EDT – Rolling a video
02:56PM EDT – New initiative: Google for Jobs
02:56PM EDT – Job listing/matching service
02:57PM EDT – New search feature to help people find job postings
02:58PM EDT – Google’s worked with all of the major job listing services
02:58PM EDT – Filtering jobs by type, hours, etc
02:59PM EDT – Addressing jobs of every skill and experience level
02:59PM EDT – Rolling out in the US in the coming weeks
03:00PM EDT – Recap time
03:00PM EDT – Google’s shift from mobile-first to AI-first
03:00PM EDT – And that’s a wrap
Source: http://www.anandtech.com/show/11409/the-google-io-2017-keynote-live-blog