(Lau, 2019)
As we live longer and technology continues its rapid arc of development, we can imagine a future where machines augment our human abilities and help us make better life choices, from health to wealth. Instead of conducting a question-and-answer session with a device on the countertop, we will converse naturally with a virtual assistant fully embedded in our physical environment. Through our dialogue and digital breadcrumbs, it will understand our life goals and aspirations, our obligations and limitations. It will seamlessly and automatically help us budget and save for different life events, so we can spend more time enjoying life’s moments.
While we can imagine this future, the technology itself is not without challenges, at least for now. One hurdle is artificial intelligence’s ability to understand the complexities and nuances of human conversation. There are more than 7,000 known living languages in the world today, according to Ethnologue, and the picture is further complicated by how differently words are shared and used across cultures, from grammar to the education level and speaking style of the speakers. Google Duplex, the technology behind Google Assistant that places phone calls in a natural-sounding human voice rather than a robotic one, is an early attempt to address such challenges in human communication. But these are just initial whispers in voice AI’s long journey.
Beyond making reservations and conducting simple dialogues, virtual assistants will need to become far more useful and more deeply integrated into the fabric of our everyday lives. Not only will they need to anticipate what we need before we ask; they will also need to understand the context of our conversations and react accordingly. Imagine a snow day where school is canceled for the kids. Knowing that you must now stay home with your children, your phone would prompt you, asking if you’d like your meetings moved to the following day; your entertainment console would automatically suggest movies to watch and e-books to read. Best of all, your smart speaker would recommend meal options for lunch while you are out shoveling snow. Alternatively, imagine how much more pleasant your journey home from a business trip would be if your phone could automatically arrange for a ride to be waiting at the airport, based on your travel itinerary, location, and habits. The possibilities are endless. And interactive voice can initiate a conversation in ways fingers on glass cannot.
Consider the example of banking. Freed from the traditional bounds of communication, we can now re-imagine life in a world where the concept of banking extends beyond its traditional frontiers, or simply disappears. Where physical boundaries once defined bank branches, an array of modalities, from mobile phones and laptops to smart speakers and connected appliances, will re-characterize the meaning of money for consumers and institutions alike. Consumers now demand consistent and seamless digital experiences, whether they are purchasing merchandise online, downloading music, or transferring money; they dictate what they want and when they want it. If financial institutions want to leverage voice technology to further evolve the mobile experience and enhance day-to-day financial activities, they must take a page from the digital ecosystem’s playbook and be careful not simply to replicate branch and mobile transactions with scripted verbal dialogues. After all, there is much more to virtual assistants than a simplified robotic voice.

What might happen if AI becomes more contextually aware and empathetic? Imagine that one day this ambient technology knows us so well that it can act as our personal CFO, continuously steering us toward the best financial outcomes over time based on its knowledge of our household, our life choices, our health, and our longevity. Will we trust it enough to make decisions for us automatically? Much of that will be driven by society’s perception and acceptance of machines. In Japan, where the culture is more welcoming to humanoids, robots are deployed in hospitals and nursing homes to keep seniors from feeling lonely, and educational robots are also being used to help children improve their English skills. Some have even sought love and companionship with robots, as in the case of Hikari Azuma, a holographic AI character made by the Japanese messaging giant Line.
AI provides us with the opportunity to reimagine not just the user experience, but the value exchange. By collating an abundance of data sources, AI can establish a true 360-degree view of a consumer’s everyday life, based on his or her past habits and behaviors, well beyond the traditional data silos. The ability to learn, process, and augment creates a symbiotic relationship between humans and machines. While movies such as “Her” and “Humans” paint a world that may seem unattainable at the moment, they allow us to flex our creative muscles and envision what’s in store. And according to Amazon, that future might not be so far-fetched: with the help of AI and machine learning, Amazon is working toward a future where humans can conduct a natural back-and-forth conversation with smart speakers and other connected devices.
We have the power to design a world where our collective voices help create better versions of humanity, where purpose becomes central to our innovations and drives our day-to-day actions. Maybe what limits us is not our technology, but our imagination to think beyond the current realm of possibilities, and our willingness to place our trust in machines.
Bibliography
Lau, T. (2019, May 23). When AI Becomes a Part of Our Daily Lives. Harvard Business Review. Retrieved from https://hbr.org/2019/05/when-ai-becomes-a-part-of-our-daily-lives