In its day, the company Voice Computer Technologies generated a lot of attention for its state-of-the-art voice response system. Despite the word “voice” in its name, the system used prerecorded audio and “heard” by asking callers to press 1 for yes and 2 for no, or to enter a series of numbers from a printed catalog. The year was 1984, and the voice response system promised to help college students register for classes.
Thirty years after the system debuted, the movie “Her” was released. It featured a relationship-challenged man who fell in love with an intelligent computer operating system. This human-like voice became an invaluable companion, and since it “lived” inside an operating system, it was able to read and evaluate all emails, text messages and stored files. With permission (and sometimes without), it responded to emails, made recommendations, purchased items and was available around the clock. Given the rapid advancements in artificial intelligence, the storyline is perhaps more convincing today than when the movie was first released.
So, what do the two technologies, 30 years apart, have in common? Voice communications and AI have come together to create a new category in artificial intelligence called “conversational AI.” It is a rapidly evolving field that continually improves its ability to understand and interact with humans naturally and effectively.
Keeping up with AI applications and new startups is already challenging, as a new offering or application appears almost weekly. But Hume.AI is noteworthy because it is a foray into this new category of conversational AI. The tech company measures hundreds of dimensions of emotion during a real-time conversation between you and its unique chatbot. Hume.AI claims its empathic voice interface is the world’s first emotionally intelligent voice, and that it has been trained on millions of human interactions.
With humans no longer tethered to a keyboard, AI has progressed to make chatbots more meaningful, carrying on quick two-way conversations that can be instantly translated into dozens of languages.
It should come as no surprise, then, that local governments, being closest to their citizens, are embracing AI and voice in many significant ways. They are already using generative AI to help public employees work more efficiently, improve how they communicate with residents and design better services.
Here are just a few ways that governments are thinking about deploying conversational AI:
Accessibility and convenience: Voice-activated AI assistants like Siri, Alexa and Google Assistant make technology more accessible to a broader audience, including those with disabilities. They allow users to perform tasks hands-free, enhancing convenience and productivity. AI-powered chatbots and voice assistants can provide round-the-clock service, answering common queries and guiding citizens through various services without the need for human intervention.
Customer service: AI-driven chatbots and voice assistants improve customer service by providing quick, accurate responses to common inquiries, reducing wait times and freeing human agents to handle more complex issues.
Personalization: AI can analyze vast amounts of data to provide personalized recommendations and experiences. Voice technology can enhance this by recognizing individual voices and preferences.
Automation: Voice commands enable the automation of smart homes, allowing users to control lights, thermostats, security systems and more with simple voice instructions.
Health care: AI and voice technology are being used in health care as virtual assistants that help manage patient records, schedule appointments and provide basic medical advice, improving efficiency and patient care. In Washington, D.C., officials recently employed generative AI to create a “Knowledge Assist” chatbot that allows residents and employees to ask questions and quickly get accurate answers about various health programs, including vaccines and nutrition services.
Education: Voice assistants and virtual tutors can aid education by helping students with their studies, providing information and even facilitating language learning through conversational practice. Khan Academy already does this with Khanmigo, an AI assistant that serves both learners and teachers.
Efficiency in workplaces: In professional settings, AI and voice technology can streamline workflows, assist in scheduling, provide reminders and facilitate quick access to information, thereby boosting productivity.
Natural interaction: Voice technology enables more natural human-computer interaction, making it easier for people to communicate with their devices as they would with another person.
Multilanguage translation: AI-powered voice translation services break down language barriers, enabling real-time communication between speakers of different languages.
The future of AI-powered voice applications shows great promise. With the advent of emotionally intelligent interfaces, one can begin to imagine some rather intriguing applications.
For instance, measuring sentiment and emotion can enhance social and mental health-related counseling, 311 call centers and, quite possibly, emergency 911 systems. AI voice systems can measure anger, frustration, hostility, stress and emotional pain. Such systems must be trained to recognize when an automated response is not enough and to direct the caller to a skilled individual who can best respond. These same systems can translate such emotion into useful, structured data for public managers to review, respond to and use to plan future interventions. AI will also play a crucial role in analyzing large datasets to provide policymakers with insights, which can lead to more informed decisions on various aspects of local governance, from urban planning to resource allocation.
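To make the escalation decision concrete, here is a minimal sketch in Python of how a 311 or counseling line might route callers based on emotion scores. The emotion labels, the 0-to-1 score scale, the threshold and the function name are illustrative assumptions, not any vendor’s actual interface.

```python
# Hypothetical sketch: routing a caller based on emotion scores produced
# by a conversational AI voice system. Score names, scale and threshold
# are illustrative assumptions, not a real vendor API.

ESCALATION_THRESHOLD = 0.7  # assumed scale: 0.0 (absent) to 1.0 (intense)
ESCALATION_EMOTIONS = {"anger", "frustration", "hostility",
                       "stress", "emotional pain"}

def route_call(emotion_scores: dict[str, float]) -> str:
    """Decide whether the automated system keeps the call or hands it
    off to a trained human responder."""
    flagged = {name: score for name, score in emotion_scores.items()
               if name in ESCALATION_EMOTIONS
               and score >= ESCALATION_THRESHOLD}
    if flagged:
        # Record the triggering emotions as structured data that public
        # managers can later review and plan interventions around.
        print(f"Escalating to human agent; triggers: {flagged}")
        return "human_agent"
    return "automated_response"

# Example: a 311 caller whose frustration has crossed the threshold.
print(route_call({"frustration": 0.82, "stress": 0.41, "calmness": 0.33}))
```

In practice, an agency would tune the threshold and the set of triggering emotions for each service line, and every escalation would be logged as exactly the kind of structured record described above.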
Local governments face many implementation challenges. Government employees will need training to use and integrate these new technologies into their workflows effectively. Decisions need to be made about when a chatbot’s responsibility is transferred to a human, and under what conditions and circumstances. One can also anticipate infrastructure challenges: local governments will need to invest in the technical infrastructure necessary to support AI and voice technologies. And as voice systems become more natural and human-sounding, local governments will want to provide disclaimers and statements of AI use policies.
However, as with any technology, there is a dark side that is often overlooked. In the case of convincingly real AI-powered voice interfaces, we must worry about overreliance on machines. We must be concerned about how conversational data and information are stored, for how long, under what conditions and who might have access. We cannot assume conversational AI can keep a secret.
Machines that purport to think, feel and communicate, however alluring, present some interesting ethical and moral challenges. These machines spend virtually all their time studying and learning about us, and this alone should be worrying, to say the least. Those old enough to remember the film “2001: A Space Odyssey,” released more than 50 years ago, are haunted by the implications of the famous line, “Open the pod bay doors, HAL.” HAL, the ship’s master control computer, talks, thinks and ultimately concludes that the human crew poses a threat to the mission; it refuses the command and actively tries to kill off the crew. For now, we happily remain in charge of our computer systems, but as we converse with machines that think and speak with us, one has to wonder: where is this leading us?
Dr. Alan R. Shark is the executive director of the Public Technology Institute (PTI) and an associate professor at the Schar School of Policy and Government, George Mason University, where he is also an affiliate faculty member at the Center for Advancing Human-Machine Partnership (CAHMP). Shark is a National Academy of Public Administration fellow and co-chair of the Standing Panel on Technology Leadership. Shark also hosts the bi-monthly podcast Sharkbytes.net. He acknowledges collaboration with generative AI in developing certain materials.