Long before smartphones, virtual assistants saw their first heyday in the mid-1990s. Search engines like AltaVista and Lycos were on the scene before our favorite digital butler, Ask Jeeves, but Ask Jeeves let users type natural-language questions and still get an accurate response.
However, it wasn’t until Siri started answering our questions via the iPhone 4S that the public really embraced the modern virtual assistant. Google Assistant soon followed, with improved technology on devices like the LG G6.
Today, users can also connect to Google Assistant through their smartwatches, laptops, and TVs for a fully comprehensive virtual assistant experience. But beyond giving us the weather report and playing our favorite tunes, there’s a lot more in store for virtual assistants that will impact every facet of our lives. Here’s a look at their bright future.
In the past, virtual assistants largely existed as stand-alone products. Today, however, we can interact with Siri through third-party iOS apps and tap into Amazon Alexa via multiple products. Consumers will increasingly expect to connect with a virtual assistant across multiple devices to help streamline their activities and everyday life.
Although Siri remains exclusive to Apple products, other companies in the virtual assistant space, including Amazon, have opened their technology to additional uses. Ford, in particular, has already announced plans to add Amazon Alexa to its line of vehicles so that drivers and passengers can more easily play music and get directions.
But all these technological advancements will require the development of more tools and resources in order for the Apples and Googles of the world to deliver on increased consumer demands. Those efforts are already underway. In 2015, Amazon released the Alexa Skills Kit, allowing developers to create their own voice commands. Meanwhile, Google opened its doors to developers to create commands for its Assistant.
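To give a sense of what creating a custom voice command involves, here is a minimal sketch of a skill handler in the style of the Alexa Skills Kit JSON interface. The intent name (`PlayJazzIntent`) and the reply text are invented for illustration; a real skill would also be registered and configured in Amazon's developer console.

```python
def handle_request(event: dict) -> dict:
    """Return an Alexa-style JSON response for an incoming skill request.

    The request/response shapes follow the Alexa Skills Kit JSON format;
    the "PlayJazzIntent" name and reply text are hypothetical.
    """
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "PlayJazzIntent"):
        text = "Okay, playing some jazz."
    else:
        text = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Simulate the kind of JSON the voice service would send for this command.
sample = {"request": {"type": "IntentRequest",
                      "intent": {"name": "PlayJazzIntent"}}}
print(handle_request(sample)["response"]["outputSpeech"]["text"])
```

The key design point is that the developer never touches speech recognition: the platform converts the user's utterance into a structured intent, and the skill only maps intents to responses.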
Interpret and Adapt
Siri, Google Assistant, and Amazon Alexa already have the ability to interpret what we say in a natural context. Case in point: We don’t have to repeat, “Give me directions to” every time we launch Maps. Instead, we can phrase straightforward requests in a variety of natural ways and still be understood by our virtual assistants.
However, the future of virtual assistants lies in their ability to learn, interpret and adapt to our ever-increasing demands. Virtual assistants and artificial intelligence technology work in concert to handle tasks that typically require a lot of back and forth. That’s where new virtual assistants like Amy, an “invisible software” agent that requires no app or website to interact with, can be a godsend.
For example, you might send an email to a colleague, copying Amy, and say: “Hey, let’s get together next week” or “Grab a bite next week?” or “We should connect.” Amy will then take action, introduce herself to the other person and, based on your calendar and preferences, suggest a time to meet.
As Amy looks to attract more companies in today’s hectic business world, virtual assistants like ElliQ can go a step further by picking up on your subtle cues and habits to provide autonomous suggestions. In fact, the tabletop robot can do any number of things, from suggesting you take your medication to recommending a hot new restaurant in town.
Designed with the elderly in mind, ElliQ is intended by its founders at Intuition Robotics to perform tasks completely autonomously so that it can more quickly learn and adapt to human commands. That type of adaptability can help the elderly and disabled stay better organized and on task, and it may even give them a new outlook on life.