The Journal: The Authority on Global Business in Japan

Real-time translation of the spoken word using artificial intelligence (AI) still has a long way to go before it can outperform human linguists, but it is already passable for simple conversations. At least that was my impression after trying Microsoft’s video conversation system with a Spanish woman.

“Have you ever been to Japan?” I asked in Japanese.

“Many times,” she—or rather the system—replied in perfect Japanese, after translating her response from Spanish.

Not exactly a riveting conversation, but this is bound to change. Microsoft has recently improved the accuracy of its system with “deep learning” technology, and the future looks bright. With the addition of Japanese, the system can now speak 10 major languages.

AI-based simultaneous translation systems have improved rapidly, so much so that Japan is planning to deploy one for the Tokyo 2020 Olympic and Paralympic Games. And while not a translation system, Amazon.com’s Echo speaker, which interfaces with its Alexa app, allows a user to give verbal commands to play music, read the news, give the weather forecast, and do other tasks.

Ten years after the iPhone made its debut, touchscreens are everywhere. Sound is the next frontier for user interfaces, and there are only a few technological leaps to go before it, too, goes mainstream.

EAVESDROPPER
Some view this development with alarm, and their fears are not new. In his 1970 science fiction classic Koe no Ami (The Voice Net), Japanese writer Shinichi Hoshi describes a dark future of networked computers listening to phone conversations, gathering secrets, then blackmailing people. Through constant surveillance, the network gradually intimidates the populace into keeping quiet.

Creepy, but still fiction.

However, this vision raises legitimate concerns: AI may not be able to will itself into sentience, but its prevalence and sophistication could make it a handy tool for abuse, especially where privacy is concerned.

Governments and companies are already vacuuming up mountains of personal information, and voice-interactive AI will make this easier and more thorough: interpreting our feelings, estimating our age, and gauging our level of education, said an executive with a US venture business that is working on interactive AI.

The company provides corporate clients with AI solutions for customer service. Its goal is to provide customers the best possible experience, but the system could amass sensitive personal data unintentionally.

YOU KNOW TOO MUCH
AI systems learn and become more capable as they accumulate data; but what is being learned and how is that knowledge being used? There are good—and not so good—answers to this.

Microsoft got an earful when it unleashed Tay, an AI chatbot, on Twitter a year ago. In less than a day, the impressionable bot began rattling off a string of obscenities and racist posts, forcing Microsoft to pull the plug. The lesson is that AI systems will “turn racist” if they are fed racist data, according to Microsoft Vice President David Heiner.
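The Tay episode comes down to a simple mechanism: a system that learns from data reproduces whatever patterns, good or bad, that data contains. A toy sketch (this is an illustration only, not Microsoft's or anyone's actual system; the word-counting classifier and the `group_a` examples are invented for the demonstration):

```python
# Toy illustration of data-driven bias: a naive word-count
# "sentiment" model trained on skewed examples simply reproduces
# the skew in its training data.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training data mentioned these words more."""
    scores = {
        label: sum(c[w] for w in text.lower().split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

# Biased training set: every mention of "group_a" is labeled negative.
biased_data = [
    ("group_a is terrible", "negative"),
    ("group_a caused problems", "negative"),
    ("the weather is lovely", "positive"),
    ("what a lovely day", "positive"),
]
model = train(biased_data)
# A neutral sentence is judged negative purely by association.
print(classify(model, "group_a joined the meeting"))  # "negative"
```

Nothing in the code is hostile; the hostility lives entirely in the training set, which is exactly Heiner's point.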

AI may drive cars and assist nurses competently, but it may still struggle to handle people’s open-ended questions.

To prepare for a future in which voice-based AI is commonplace, there should be a consensus on how it will be used, and some areas should be off-limits.

The chat app Line has more than 200 million users worldwide, creating a massive trove of private conversations among people from all walks of life. Messages are encrypted so that even Line employees cannot see what is being said.
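The guarantee Line describes is end-to-end encryption: messages are encrypted on the sender's device and decrypted only on the recipient's, so the server in between relays data it cannot read. A conceptual sketch (this is not Line's actual protocol; the XOR cipher here is for illustration only, and real systems use vetted ciphers and key-exchange schemes):

```python
# Conceptual end-to-end encryption: only the two endpoints hold
# the key, so the relay server sees unreadable ciphertext.
import os

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad-style XOR, for illustration only.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"Have you ever been to Japan?"
shared_key = os.urandom(len(message))  # known only to the two users

ciphertext = encrypt(shared_key, message)          # what the server sees
assert ciphertext != message                       # unreadable in transit
assert decrypt(shared_key, ciphertext) == message  # readable on arrival
```

The point of the design is structural: even a curious or compromised server, or a Line employee, never possesses the key needed to turn the ciphertext back into the conversation.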

“People will stop using our app if they think something funny is going on,” said Takeshi Nakayama, Line’s chief privacy officer. He said the company examines its service weekly to ensure there are no privacy problems. It has also set up a research foundation to establish privacy safeguards.

Line will release an AI speaker this summer. “We’ll put out feelers to see how users react to it,” Nakayama said.

In the meantime, companies developing voice-based AI should engage with their customers and others to discuss issues of concern. This kind of transparency should ensure that the technology is used ethically as it becomes a part of daily life.