One of the best ways to understand the potential of the Google Assistant is to watch how fast the voice-activated helper can now bring up Beyonce’s Instagram page.
“Hey Google,” says Meggie Hollenger, a Google program manager, using the wake words that trigger the software on her smartphone. Then it’s off to the races as she reels off 12 commands in rapid-fire succession.
“Open the New York Times…Open YouTube…Open Netflix…Open Calendar…Set a timer for five minutes…What’s the weather today?…How about tomorrow?…Show me John Legend on Twitter…Show me Beyonce on Instagram…Turn on the flashlight…Turn it off…Get an Uber to my hotel.”
As she makes each request, the phone pops up the new information. The whole sequence takes 41 seconds. She doesn’t have to repeat the wake words between commands. When she asks to see what Beyonce is up to, the Assistant not only launches the Instagram app but takes us directly to the pop star’s page, where we can see the latest photos she’s shared with her 127 million followers. Likewise, when Hollenger asks for an Uber, the software already knows where she’s staying.
Three years after CEO Sundar Pichai introduced his AI-driven digital helper to the world, Google is previewing the “next generation” of the Assistant at its annual I/O developer conference on Tuesday. The new version can deliver answers up to 10 times faster than before. A big boost in speed could help turn around the perception that voice assistants are too laggy and inaccurate. That’s a big deal if companies like Google and Amazon want to take these digital helpers further into the mainstream.
Making Google Assistant a success is key for the world’s biggest search service, which delivers answers to over a trillion searches a year. Many of us are moving away from looking for information by typing on our computers and are instead talking to our smartphones and smart speakers. Google is now racing against Amazon, with its Alexa voice assistant, and Apple, with Siri, to give us the instant gratification we increasingly expect from our always-connected gadgets.
That’s why Google invited me to its global headquarters in Mountain View, California, a few days before I/O to see the biggest update yet of its make-or-break Assistant.
It’s fascinating — and a little bit scary.
The next-gen software is the headliner in a new slate of features that showcase Google’s world-class artificial intelligence and engineering chops. The Assistant isn’t only faster, but smarter, with Google counting on breakthroughs it’s made in neural network research and speech recognition over the past five years to set itself apart from its rivals.
And it’s getting more personal. You’ll be able to add family members to a list of close contacts. When you ask the Assistant for directions to your mom’s house, for instance, it knows who your mom is and where she lives. Another feature, an update to last year’s eerily human-sounding Duplex voice concierge, lets the Assistant automatically fill out forms on the web after you make a verbal request for actions like booking a rental car or ordering movie tickets.
“We could potentially see a world where actually talking to the system is a lot faster than tapping on the phone,” says Manuel Bronstein, vice president of product for the Google Assistant. “And if that happens — when that happens — you could see more people engaging.”
The Assistant is now on 1 billion devices, mostly because it comes preinstalled on phones running Android, the world’s most popular mobile operating system. Many of Google’s other services — Gmail, YouTube, Maps, the Chrome browser — also serve more than 1 billion people a month. All these services are useful and innovative, but their lifeblood is the data you feed the company every day through your search history, email inbox, video viewing habits and driving directions.
Of course, this is all predicated on the Assistant actually working as billed. Google wouldn’t let me try it for myself, and my colleagues and I weren’t allowed to video record the demo. Instead, Google provided us with a pre-shot marketing video. Hollenger also read from a script, following a cheat sheet of written commands. So it’s unclear how deft the software would be in carrying out the sometimes meandering requests of regular people.
The demo even had a few stumbles. While the jumps from app to app are snappy, Hollenger had to repeat queries once or twice because the software didn’t process her requests on the first try. In other demos, though, Hollenger used the Assistant to dictate texts and emails with hyper-accuracy. The system can also tell the difference between what she wants written in the email and what’s a general command. For example, when she says “Send it,” the software sends the email instead of typing “Send it” in the email body.
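To get a feel for that distinction, here’s a rough sketch of the logic in Python. The command list and handler below are hypothetical illustrations I’ve put together, not Google’s actual implementation, which relies on learned models rather than a fixed lookup table.

```python
# Toy separation of dictated text from terminal spoken commands.
# TERMINAL_COMMANDS and handle_utterance() are hypothetical names.

TERMINAL_COMMANDS = {"send it", "delete it", "discard"}

def handle_utterance(utterance: str, draft: list[str]) -> bool:
    """Append dictation to the draft, or act on it if it's a command.

    Returns True once the message has been sent.
    """
    normalized = utterance.strip().lower()
    if normalized in TERMINAL_COMMANDS:
        if normalized == "send it":
            print("Sending:", " ".join(draft))
            return True
        draft.clear()        # "delete it" / "discard": scrap the draft
        return False
    draft.append(utterance)  # anything else is dictated body text
    return False

draft: list[str] = []
for spoken in ["Running five minutes late", "see you soon", "send it"]:
    if handle_utterance(spoken, draft):
        break
```

The real system has to make this call from audio alone, with no fixed phrase list, which is what makes the demo impressive.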
Still, the Assistant is sure to be the subject of discussion — and perhaps controversy.
“There are positives and negatives and tradeoffs,” says Betsy Cooper, director of the Aspen Tech Policy Hub. “With the Google Assistant, since it’s always listening [for a wake word], there’s always the possibility that they could abuse that privilege.”
‘Your own individual Google’
The new Assistant is the culmination of five years of work, says Francoise Beaufays, a principal scientist at Google. That’s longer than the Assistant itself has been around. Over those five years, Google researchers have made key advances in AI for audio, speech and language recognition.
“What we did was reinvent the whole stack, using one neural network that does the whole thing,” says Beaufays.
It’s a major technological breakthrough, shrinking the storage the speech models need from 100 gigabytes to less than half a gigabyte. Still, the souped-up digital helper requires hefty computing power for a phone, so it will only be available on high-end devices. Google will debut the product on the next premium version of its flagship Pixel phone, expected in the fall.
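Some back-of-the-envelope arithmetic shows why collapsing the stack into one network matters. Only the 100-gigabyte and half-gigabyte figures come from Google; the parameter count below is a hypothetical illustration.

```python
# Rough storage math for speech models. The 120M-parameter figure is
# an assumption for illustration, not Google's published number.

def model_size_gb(num_params: int, bytes_per_param: int) -> float:
    """Storage a model's weights need, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# A classic speech stack keeps separate acoustic, pronunciation and
# language models; the language models dominate its footprint.
classic_stack_gb = 100.0  # the figure quoted above

# One end-to-end network folds those roles into a single set of
# weights. At a hypothetical ~120 million float32 parameters:
end_to_end_gb = model_size_gb(120_000_000, bytes_per_param=4)

print(f"classic stack:    ~{classic_stack_gb:.0f} GB")
print(f"end-to-end model: ~{end_to_end_gb:.2f} GB")  # ~0.48 GB
```

A model that small can run on the phone itself, which would help explain both the speed boost and why the feature needs high-end hardware.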
Days before he unveiled the Assistant in May 2016, I sat down with Pichai in his glass-walled office, secluded within the sprawling Googleplex, to hear his pitch. The search giant, already years late to the digital voice assistant game, was finally getting ready to jump into the ring with Siri and Alexa.
From the very beginning, Pichai was adamant it was much more than that. For Google, the Assistant is about breaking past the company’s iconic white homepage and spilling its engineering smarts into every piece of tech you own — your phone, your car, your washing machine.
“It’s Google asking users, ‘Hi. How can I help?'” he said at the time. “Think of it as building your own individual Google.”
Now as Pichai ushers in a new phase for the Assistant — including the feature that knows specific details about your family — it’s clearer than ever that when he said “your own individual Google,” he meant it.
Google wouldn’t make Pichai available for an interview for this story.
Changing times
Of course, the world is a much different place than it was three years ago.
For starters, the competition with Amazon is now a full-fledged rivalry. When it comes to smart speakers, Amazon’s Echo devices powered by Alexa own almost 67% of the market, according to research firm eMarketer. Google Home devices, driven by the Assistant, account for almost 30%.
Then there’s the public debate over privacy and security. Lawmakers and consumers are taking a harder look at the policies of big tech companies after Facebook’s Cambridge Analytica scandal, which brought data collection issues to the forefront throughout 2018. Google was criticized just last month for its Sensorvault database, which helps measure the effectiveness of the lucrative targeted ads Google serves based on the personal information it knows about you. It turns out that police departments across the country have tapped Sensorvault for location data when trying to crack criminal cases. In response, a US House of Representatives committee sent a letter to Pichai demanding answers about the database. Lawmakers have asked for an in-person briefing by May 10.
When I asked during a product briefing last week what Google would do if law enforcement requested data on family relationships and other info collected by the Assistant, a spokesman said that Google doesn’t have anything to share on that front.
Bronstein, the product head for the Assistant, says Google constantly has “very good debates” about storing data for advertising purposes. The philosophy, he says, is “Don’t store the information for the sake of storing it. Store it because you think it can deliver value.”
He adds, “We want to be very transparent with all those things, so that you know when this is going to be used for advertising or is…never going to be used for advertising.”
But privacy experts say Google should do a better job communicating its policies to consumers.
“I don’t know how well people actually understand,” says Jen King, director of consumer privacy at the Stanford Center for Internet and Society. She adds that the company should give people more options to opt out of data collection, instead of lumping things together.
Google has already been challenged on how it deals with transparency. Last year, the Associated Press reported that Google tracked people’s location even after they’d turned off location-sharing on their smartphones. The data was stored through a Google Maps feature called “Location History,” the same feature at issue in the Sensorvault database. Critics like the ACLU said Google was being disingenuous with its disclosures. The company later revised a help page on its website to clarify how the settings work. Last week, Google announced a feature that lets people auto-delete location, web and app history.
Bronstein also says a “small fraction” of voice queries from the Assistant are shared with a team at Google that works on improving the AI system, if users allow for that in the settings. He didn’t provide any details about just how small that fraction is. But he did say that in those cases, personal information is stripped from the voice audio.
The evolution of Duplex
In addition to giving the Assistant a jolt of speed, Google is also updating the project that stoked the most controversy at last year’s conference: Duplex.
The feature uses unnervingly human-sounding AI software to call businesses to book reservations and appointments on behalf of Google Assistant users. Its AI mimics human speech, using verbal tics like “uh” and “um.” It speaks with the cadence of a real person, pausing before responding and elongating certain words as though it’s buying time to think.
Last year’s demo immediately raised flags for AI ethicists, industry watchers and consumers, who worried about the robot’s ability to deceive people. Google later said it would build in disclosures so people would know they were talking to automated software.
This new iteration is a lot tamer.
Google on Tuesday is updating Duplex to streamline bookings for more types of things, such as car rentals and movie tickets. But this time there are no human-sounding robots. It basically automates the process of filling out forms you’d find on the mobile web — think of it like autofill on steroids.
Here’s how it works: You say something like “Hey Google, get me a rental car from National for my next trip.” The Assistant then pulls up National’s website on your phone and starts filling out the fields in real time.
Throughout the process, you’ll see a progress bar, just like the one you’d see if you were downloading a file. Whenever Duplex needs more information, like a price or seat selection, the process pauses and prompts you to make a selection. When the form is filled, you tap to confirm the booking or payment. Like other Assistant features, the system fills out the form using data culled from your calendar, Gmail inbox and Chrome autofill (that includes your credit card information). The update launches later this year on Android phones.
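Here’s a toy simulation of that flow in Python. The FormField class, the field names and the prefill sources are stand-ins I’ve invented for illustration; they aren’t Google’s actual code.

```python
# Toy Duplex-on-the-web flow: prefill what's known, pause for the rest.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FormField:
    name: str
    value: Optional[str] = None   # None means the flow must ask the user

# Hypothetical values "culled" from the sources the article names.
known_profile = {
    "pickup_date": "2019-06-03",           # from the user's calendar
    "name": "Jane Doe",                    # from Chrome autofill
    "card_number": "4111 1111 1111 1111",  # autofill (a test card number)
}

form = [FormField("pickup_date"), FormField("name"),
        FormField("car_class"), FormField("card_number")]

for i, field in enumerate(form, 1):
    if field.name in known_profile:
        field.value = known_profile[field.name]  # filled automatically
    else:
        # Missing info pauses the flow and prompts the user, as with
        # a price or seat selection in the demo.
        field.value = input(f"Please choose a {field.name}: ")
    print(f"progress: {i}/{len(form)} fields filled")

# Nothing is submitted until the user explicitly confirms.
if input("Confirm booking? (y/n) ") == "y":
    print("Booking submitted.")
```

Even in this toy version, the design choice that matters survives: the software never submits a payment without an explicit confirmation tap.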
While this version will probably cause less blowback, last year’s widespread recoil was a key moment for Google, Scott Huffman, head of engineering for the Google Assistant, told me earlier this year. “The strength of the reaction surprised me,” he said. “It made it clear to us how important those societal questions are going forward.”
There’s other stuff coming for the Assistant, too. Google on Tuesday also unveiled a new “driving mode” for Android phones. When you activate it, the user interface puts a few items front and center that you’re likely to use while driving. Those include navigation directions for Google Maps and Waze, music controls and reminders of missed calls. When you’ve got navigation directions up, your music or phone call controls sit at the bottom of the screen, so you don’t have to fiddle with your phone to find them.
‘Rules of the road’
Taken as a whole, Google’s new Assistant announcements could have a hefty impact on how we use tech.
Making voice commands easier and faster could change the way we interact with devices, just as when smartphones, led by Apple’s iPhone, became mainstream over a decade ago and sparked the age of touchscreen everything.
We may look back at this as the first step toward a world in which people are constantly talking to inanimate objects. (It reminds me of those videos of toddlers holding magazines, trying to swipe at them like they’re iPads. In the future, kids could talk to a candle or chair and be surprised when it doesn’t talk back.)
The next-gen Assistant could also set a foundation for new habits around voice queries. Last year, Google announced “continued conversation” for voice commands, which keeps the mic open for eight seconds after a query so you can ask a follow-up question. The next-gen Assistant builds on that concept and could eventually forge a path for getting rid of wake words. (Huffman told me earlier this year that he thinks wake phrases like “Hey Google” are “really weird” and unnatural.)
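Conceptually, continued conversation is a small state machine: answering a query opens a fixed follow-up window during which no wake word is needed. Here’s a minimal sketch in Python; only the eight-second window comes from Google, and the class and method names are my own illustration.

```python
# Toy open-mic window for follow-up questions after an answered query.

import time

FOLLOW_UP_WINDOW_S = 8.0  # the eight-second window Google described

class MicState:
    """Tracks whether speech is accepted without a wake word."""

    def __init__(self):
        self.open_until = 0.0  # monotonic time when the window closes

    def on_query_answered(self):
        # Keep listening for follow-ups, no wake word required.
        self.open_until = time.monotonic() + FOLLOW_UP_WINDOW_S

    def accepts_speech(self, heard_wake_word: bool) -> bool:
        return heard_wake_word or time.monotonic() < self.open_until

mic = MicState()
mic.on_query_answered()
print(mic.accepts_speech(heard_wake_word=False))  # True inside the window
```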
That open mic would likely spark privacy concerns. Bronstein says it’s helpful to keep the microphone open for a little while — the company is still tuning how long that duration will be — but he wants people to be “intentional” when they’re talking to it. “You don’t necessarily want this thing to be transcribing everything you’re saying,” he says. “Because you wouldn’t feel comfortable.”
There are many other ways Google could advance the Assistant. Huffman told me earlier this year he’s interested in having the software remember an exact discussion you had with it yesterday, so that today you can pick up where you left off. He even wants the Assistant to be able to detect your mood and tone.
Whether that’s frightening or not, it’s how Google is thinking about evolving the Assistant. For now, though, Bronstein says he’s focused on making the experience more seamless, and figuring out what features will be valuable to users before adding that future-looking stuff.
In the meantime, people will have to work through all the issues that come with large-scale data collection and smarter-than-ever tech, and Google knows that. As Huffman told me earlier: “With AI, we’re going to end up with society thinking through some of the rules of the road.”