Features, Tech, Archives Neil W. Davis

AI 1988

Look back at how members viewed the potential of AI in 1988 in this reprint from the ACCJ Journal archives.


Artificial intelligence (AI) is becoming more and more widely used in Japan. Companies in this country are coming to realize that if they want to stay internationally competitive, they have to incorporate this technology. Davis is a popular contributor to these pages who specializes in high-technology subjects.


Japan’s large electronics companies are constantly looking for ways to boost the efficiency of their administrative work while searching for new markets so as to diversify their business. AI, a type of sophisticated computer programming that promises to revolutionize many job-related tasks, is catching on among electronics companies and software businesses here, and is also a target of interest among trading houses. All of these enterprises want to establish a foothold in this up-and-coming technical sector so as to enhance their long-term prospects. AI systems in the years to come may make or break certain companies in highly competitive areas of business. It would not be an exaggeration to say that a “mini boom” is being seen today in Japan’s AI sector.

AI systems under development here are intended to address bottlenecks within corporate product-development departments, and they are being marketed to outside customers, sometimes together with special types of data processing equipment. One of the best ways to sell more computer hardware is to market such equipment with an emphasis on higher value-added features, such as the ability to effectively handle AI writing tasks. However, Japanese companies in this field are all well aware that they have to contend with the likes of Symbolics Inc., a Cambridge, Massachusetts, global leader in AI workstations.

Two years ago (in 1986) the Artificial Intelligence Association of Japan was established in Tokyo by electronics companies, telecommunications businesses, software houses, and others interested in new developments in computer programming. The association cultivates exchanges between researchers in various AI-related fields and disseminates technical information to its members. Moreover, the association promotes specialized training of so-called knowledge engineers and other experts needed for the advancement of the new discipline. Establishment of the special association signifies the maturation of the initial commercial phase of AI here.

In contrast to Japan’s AI infrastructure, state-of-the-art American AI work is typically dominated by clusters of small businesses mainly located around major universities. In fact, many Japanese AI specialists have studied at leading US universities. As a result of the difference in the two paradigms, the large electronics enterprises of Japan have tremendous potential resources to devote to AI studies, whereas in the US, venture capital must typically be raised to fund much of the innovative work in AI.


The global market for AI systems is likely to grow to as much as $10 billion per year sometime between 1995 and 2000, according to Japanese electronics industry estimates.

The Japanese approach to the AI business often relies as much on proximity to leading US universities as it does on relationships with the top Japanese universities. In other words, Japanese universities are not major actors within the immediate sphere of AI business here. The paradigms are not without exception, however, because some smaller businesses in Japan, such as CSK Corp., are doing work in the field as well.

As AI is widely considered a promising growth market within the information processing sector, electronics companies are offering products that will allow users to develop their own AI systems, such as so-called expert systems. This customized programming is developed on the basis of experts’ knowledge; hence expert systems comprise handy tools for novices—so that they may easily draw upon the comprehensive knowledge of specialists to assist them in complicated tasks, such as writing specific types of software programs.
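The mechanics of a classic expert system can be sketched in a few lines: specialist knowledge is encoded as if-then rules, and an inference engine chains them together to answer a novice's query. The rules below are invented for illustration; real systems of the era held thousands of rules.

```python
# Minimal rule-based expert system sketch: each rule pairs a set of
# required facts with a conclusion; forward chaining applies the rules
# repeatedly until no new facts can be derived.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
]

def infer(facts):
    """Derive all conclusions reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "high_risk_patient"})))
```

The "knowledge engineers" mentioned below were the specialists who elicited such rules from domain experts and encoded them.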

The global market for AI systems is likely to grow to as much as $10 billion per year sometime between 1995 and 2000, according to Japanese electronics industry estimates. The leading AI language today is LISP (LISt processor), and it is widely expected to retain its front-running position. Four of the largest AI applications expected in the mid-to-late 1990s are integrated circuit design assistance, manufacturing planning, financial planning, and computer systems diagnosis and maintenance.

An example of a medical application of AI systems is the so-called RINGS program—rheumatology information counseling system—developed recently by Nippon Telegraph and Telephone Corp. and a medical college in Tokyo. The system is used by those suffering from rheumatism to help them in diagnosing minor problems over the telephone. When more serious problems arise, doctors are to be consulted. A variety of other medical-related AI systems are now under development, in part because the medical sector is likely to see rapid growth due to the aging of Japan’s population.

In the area of nuclear power plant operations, a group of Japanese enterprises is developing an expert system to enhance the safety of pressurized water reactors (PWRs). The LISP-based expert system is intended for use in new types of PWRs to be operated by Kansai Electric Power Co., Inc. and three other electric utilities.

Greater safety in operating nuclear plants can lead to enhanced profits for the utility companies, as they will not need to shut down reactors for prolonged periods in order to do repairs, precautionary tests or other types of maintenance.

The most prominent of Japan’s AI-related development programs is the so-called fifth-generation computer project, which is administered by the Institute for New Generation Computer Technology (ICOT). The institute was established in 1981 under funding from the Ministry of International Trade and Industry’s (MITI’s) Machinery and Information Industries Bureau.


Although today’s AI systems can only cope with surface-level knowledge, those of the year 2000 are likely to be capable of dealing with more abstract forms of knowledge.

Altogether, there are nine private companies participating in the project. Researchers based at MITI’s Electrotechnical Laboratory (ETL) in Tsukuba, Ibaraki Prefecture, are also involved. Moreover, the ETL, which is administered by MITI’s Agency of Industrial Science and Technology, is doing its own independent work in the field. Six to eight researchers from each of the electronics companies work at the ICOT center in Tokyo, generally for periods of two to four years.

When the project began in 1982, it was the subject of considerable attention throughout the world, due to its bold proposals and the perceived threat that it posed to the American and European computer software industries. However, recently it has not attracted much interest because Americans and Europeans have been less than impressed by the meager results of the project. US interest in the ICOT project led to the establishment of Microelectronics and Computer Technology Corp., a research consortium headquartered in Austin, Texas.

AI systems have a long way to go before they reach a phase of maturity. Although today’s AI systems can only cope with surface-level knowledge, those of the year 2000 are likely to be capable of dealing with more abstract forms of knowledge. Advances in the memory capacity of computer microchips, parallel processing capabilities of computers, data processing speeds, and knowledge bases will accelerate the progress of the AI business sector.

Let us hope that people will always keep the upper hand over such advanced tools as AI systems, and that these sophisticated tools won’t ever “discard” the humans they are supposed to be helping.

 
Features, Tech Tim Hornyak

Your Own Private AI

As AI evolves, businesses are turning to custom LLMs to unlock corporate resources.

As artificial intelligence evolves, custom systems unlock business resources

Konosuke Matsushita was one of Japan’s greatest entrepreneurs. As the founder of a light socket company that evolved into Panasonic, he inspired legions of salarymen with his business wisdom. Twenty-five years after his death, the “god of management” was effectively resurrected as an artificial intelligence (AI) model. A chatbot trained on his writings and speeches can produce eerily lifelike Matsushita answers, according to one relative, and will eventually be used to make business decisions. It’s a dramatic example of how businesses are using AI to leverage intellectual property built up over decades.

The past few years have seen an explosion of AI applications based on large language models (LLMs) and tools such as OpenAI’s ChatGPT and Google’s Gemini. They have been used for everyday tasks such as writing text for slide decks, lessons, and articles, as well as synthesizing search results as in Google’s AI Overview that now appears with most searches.

LLMs are built on neural networks that use the transformer architecture, and their scale is measured by the number of parameters they contain. They learn by analyzing vast amounts of text from books, websites, and other sources. During training, the model identifies patterns, relationships among words, and sentence structures. This process involves adjusting millions, or even billions, of parameters: values that help the model predict what comes next in a sequence of words.
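A toy illustration of what "predicting what comes next" means, using simple bigram counts in place of a transformer; the corpus and code are invented for illustration, and real LLMs learn far richer patterns than word-pair frequencies:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    successors = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def predict_next(successors, word):
    """Return the most frequent successor of `word`, if any."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

An LLM does conceptually the same thing, but its "counts" are encoded in learned parameters and conditioned on the entire preceding context, not just the previous word.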

A major problem with LLMs and generative AI, however, is that they usually draw entirely from online content and thus are prone to inaccuracies. AI hallucinations, as they are called, occur when LLMs perceive patterns in the data that are nonexistent, or at least imperceptible to humans.

One solution is private AI. It brings the power of LLMs inside a company, where queries are secure and limited to the company’s own data, reducing the risk of security leaks and incorrect or misleading responses. Private AI has traditionally been limited to government, defense, finance, and healthcare users, but it’s spreading to a broader spectrum of industries due to fears about intellectual property theft.


[Private AI] brings the power of LLMs inside a company, where queries are secure and limited to the company’s own data, reducing the risk of security leaks and incorrect or misleading responses.

Kenja KK, a member of the American Chamber of Commerce in Japan (ACCJ), is opening up the market in Japan to private AI. The Tokyo-based company offers AI solutions for enterprises that include purpose-built expert systems, incorporating a relatively new AI technology called retrieval-augmented generation (RAG).

Bearing a name coined as recently as 2020, RAG relies on a predetermined collection of content to improve the accuracy and reliability of generative AI output. Kenja offers a self-service plan for small and medium-sized businesses and a more comprehensive enterprise plan.

“Private AI is the next frontier,” said Kenja founder and Chief Executive Officer Ted Katagi, who is also chair of the ACCJ’s Marketing and Public Relations Committee. “All companies face the same issues: you have very sensitive data that you don’t want to make accessible to everybody at the same time. Private not just in terms of someone outside the company, but within the company, too. You may not want HR data to be shared with people in finance, for example. That’s an issue you want to solve, and we solve that.”

Kenja users create so-called rooms where they can upload thousands of documents or other content, organizing it into topic-specific folders. The process can be automated, and Kenja can train and fine-tune the system. For instance, it can be taught to forget certain words or trained to understand a balance sheet in order to do financial tasks such as due diligence.

“You are kind of building a wall around a set of information and telling it to only use what’s in this area,” explained Katagi. “Having 85–90 percent accuracy—which is what current generative AI, such as ChatGPT, Gemini, or Claude, will give you—is not good enough. Private AI models that are fine-tuned and query a closed set of materials can close that gap.”
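The "wall around a set of information" that Katagi describes can be sketched as a minimal RAG loop: retrieve the most relevant passages from a closed document set, then hand only those to the model as context. This is a hypothetical illustration using crude keyword overlap, not Kenja's implementation; production systems typically use vector embeddings for retrieval.

```python
# Retrieval-augmented generation (RAG) in miniature: score documents
# against the query, keep the top matches, and build a prompt that
# restricts the model to that retrieved context.
def score(query, doc):
    """Crude relevance: count of query words appearing in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble a prompt that limits the model to retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

docs = [
    "HR policy: vacation requests need manager approval.",
    "Finance: quarterly reports are due in April.",
    "IT: passwords must be rotated every 90 days.",
]
print(build_prompt("When are quarterly reports due?", docs))
```

Because the model answers only from the retrieved passages, its output stays grounded in the company's own material, which is what narrows the accuracy gap Katagi describes.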

Private AI is being used in surprising applications. Just as Panasonic has cloned its founder in digital form, Dr. Greg Story is using Kenja to share the teachings of another business luminary, Dale Carnegie. The self-improvement guru from Missouri wrote a book in 1936, How to Win Friends and Influence People, that still counts among the world’s all-time bestsellers. As president of Dale Carnegie Tokyo Japan, Story has been teaching Japanese businesspeople about leadership, communications, and other skills in Dale Carnegie seminars for the past 14 years. Dale Carnegie’s operations in Japan began in 1963.

Since learning about the impact of content marketing, he has built up an enormous corpus consisting of white papers, e-books, printed books, course manuals, 270 two-hour teaching modules, as well as video and audio recordings that include hundreds of podcast episodes. He has penned a series of books himself in English and Japanese that includes Japan Sales Mastery, Japan Business Mastery, Japan Presentations Mastery, and Japan Leadership Mastery.


If you like the cut of our jib and you want a Dale Carnegie point of view and a curated, trustworthy response, we provide that through this AI.

The material was scattered in different places, and when clients began asking for on-demand training, Story decided to get ahead of the curve by including all his company’s content in AI-curated form, something public chatbots cannot do.

“ChatGPT will give you everything it can scrape together, but it’s everything and therefore nothing,” said Story. “You get generic answers, and you don’t know if they’re trustworthy. But if you like the cut of our jib and you want a Dale Carnegie point of view and a curated, trustworthy response, we provide that through this AI.”

Story thinks the technology can benefit businesses that have substantial bodies of work to draw on, but those that don’t will get thin answers. He adds that using tools such as those from Kenja will not only help his company learn about the benefits of AI, but it will also give it an edge over competitors. He plans to roll out his AI offerings in 2025, delivering customized responses to students’ questions in English or Japanese on topics ranging from sales to diversity, equity, and inclusion.

Could there be a Dale Carnegie version of the Matsushita chatbot one day?

Kenja has begun working with Dale Carnegie’s global team to do just that, and has developed a prototype revival of Dale Carnegie’s voice, avatar, and writing style. The writing style and word generation are done with Kenja RAG AI technology.

“Carnegie became a global superstar in a non-digital world,” noted Story. “There’s no question we can get an AI to read a script generated in his style, in his voice. It’s amazing.”

 
Partner Content Tran Anh Thu

AI Audits

AI's ability to analyze considerable amounts of information quickly offers great potential for auditors. How is this rapidly evolving tool impacting the audit process?

What impact does artificial intelligence have on auditing?


Presented in partnership with Grant Thornton

With technology developing rapidly in the Fourth Industrial Revolution, the application of artificial intelligence (AI) has become widespread in both everyday life and work. Thanks to AI’s outstanding features, multiple tasks can be performed in less time and with less effort. For this reason, AI is playing an important role in areas that require the processing of large amounts of information, such as auditing. So how does AI affect the work of an auditor?

AI is defined in the Oxford English Dictionary as “the capacity of computers or other machines to exhibit or simulate intelligent behavior; the field of study concerned with this.” In later use, it is also defined as “software used to perform tasks or produce output previously thought to require human intelligence, especially by using machine learning to extrapolate from large collections of data.” In other words, AI is used to perform tasks that require human intelligence through programmed algorithms.

AI analyzes the information it is supplied and produces results, allowing it to work through huge amounts of data in a short time and increase work efficiency.

With those exceptional aspects, AI can be applied at many stages of the audit process. How is it used and how does it affect the efficiency of the audit?

This can be broken down into three stages:

Audit Planning
Based on the customer data input, AI will propose appropriate audit procedures to optimize the audit plan.

Risk Assessment
Using the provided information, AI will analyze past trend fluctuations and financial indicators. Since AI can process and synthesize considerable amounts of information, the analysis will be more specific and more effective, giving auditors a deeper view of the business’s situation. Accordingly, auditors can identify potential risks more accurately and set more appropriate materiality levels. AI can also assist auditors in predicting the potential financial situation and determining the reasonableness of financial forecasts, as well as potential future risks of the business.

Substantive Procedures
At this stage, auditors must perform many repetitive tasks, such as checking details of documents (e.g., invoices and contracts), matching data among documents, and verifying the accuracy of the financial statements. AI can perform these tasks automatically through programmed algorithms, allowing auditors to review more data promptly with a higher level of accuracy in less time than a traditional audit.

By using AI to analyze and process the large volume of transactions, auditors can easily detect anomalies, errors, and risks in financial data. This allows auditors to focus on prioritized high-risk areas, thereby improving audit quality. Additionally, machine learning allows an AI system to learn from past data and improve its performance, thus enhancing accuracy and effectiveness.
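A minimal sketch of the kind of anomaly screening described above: flag transactions that deviate sharply from the norm. The data and threshold are invented for illustration; real audit tools use far more sophisticated models.

```python
# Flag transactions whose amounts lie far from the mean, measured in
# standard deviations, as a toy version of automated anomaly detection.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

transactions = [120, 95, 110, 105, 98, 102, 9500]  # one obvious outlier
print(flag_anomalies(transactions, threshold=2.0))  # → [9500]
```

The flagged items are then escalated to a human auditor, which is exactly the division of labor the article describes: the machine screens, the auditor judges.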

It can be seen that applying AI in auditing brings many benefits. Labor savings and productivity gains are the most prominent. And with the ability to review and analyze information on a wide scale, AI can help identify fraud or potential risks that may be overlooked, thereby improving risk assessment and strengthening audit quality.


With the ability to review and analyze information on a wide scale, AI can help identify fraud or potential risks that may be overlooked, thereby improving risk assessment and strengthening audit quality.

However, using AI still has certain limitations:

  • AI works based on the provided data, so ensuring that the data is accurate, complete, and taken from reliable sources is crucial. Additionally, with a colossal volume of data, errors are likely to occur during analysis and processing. This can result in inaccurate conclusions and affect audit results.

  • Another limitation relates to cybersecurity, as using AI requires an internet connection. If the system is compromised, AI algorithms could be altered, leading to discrepancies in AI operations.

  • AI is, after all, a machine set up by humans that performs tasks based on pre-established patterns. Hence, AI cannot respond to or handle unforeseen situations. Moreover, maintaining an attitude of professional skepticism is extremely important during the audit process to minimize potential risks; because AI is mechanical by nature, it cannot exercise this skepticism when analyzing information and handling situations as auditors do.

  • Another limitation is that AI may not reliably detect fraud or window dressing in accounting, because it lacks the ability to think and evaluate as humans do. Fraud detection requires auditors to apply professional skepticism in assessing the evidence collected during the audit and evaluating it in light of the business’s operations and internal controls.

With workloads increasing, the benefits that AI provides are indispensable and will have a positive impact on the audit process. However, auditors should use AI appropriately rather than relying on it entirely, because AI is ultimately a tool and cannot resolve complex issues that require human judgment. Consequently, striking a balance between AI and manual work in auditing is essential. Furthermore, auditors should be equipped with the knowledge and skills needed to understand how AI operates, make proper use of its results, and guard against cybersecurity attacks.


 
 

For more information, please contact Grant Thornton Japan at info@jp.gt.com or visit www.grantthornton.jp/en


Disclaimer: Opinions or advice expressed in The ACCJ Journal are not necessarily those of the ACCJ.

Features Tim Hornyak

Synthetic Savants

Since the introduction of consumer-facing artificial intelligence applications such as ChatGPT and Google’s Bard, generative AI has transformed how people work around the world. How might it impact specific industries in the years to come?

As generative AI sweeps the world, how will it transform the way we work and innovate?

We live in an age of intelligent machines. Since the introduction of consumer-facing artificial intelligence (AI) applications such as OpenAI’s ChatGPT and Google’s Bard over the past year, generative AI has transformed how people work around the world.

From $40 billion in 2022, the market size for generative AI will balloon to $1.3 trillion over the next 10 years, according to Bloomberg Intelligence. First popularized through image generators, the technology has been applied in fields ranging from neuroscience to advertising, sometimes in surprising ways.

Generative AI programs like the large language models powering ChatGPT are trained on enormous volumes of data to sense patterns and predict how they will play out in a piece of content. These models can be trained on linguistic, financial, scientific, sensor, or other data—especially data that is uniform and structured—and can then create new content in response to user input. They have had remarkable success, particularly in image and text generation, and have seen rapid uptake in sectors ranging from education to computer programming. “This technology is set to fundamentally transform everything from science, to business, to healthcare … to society itself,” Accenture analysts enthused in a report. “The positive impact on human creativity and productivity will be massive.”

Powerful New Assistants

Generative AI first gained public attention thanks to its ability to change how we communicate through words, images, and video. It’s no wonder, then, that the world’s largest public relations company has embraced it. Edelman worked with OpenAI to launch the original GPT-2 and delivered the first application in an ad campaign. In the spots for Hellmann’s Mayonnaise, the tool is tasked with finding new ways to use leftovers.

Edelman believes the technology will reconfigure the communications industry, but it won’t replace human ingenuity, strategic advice, and ethical decision-making that builds trust, said Meghan Barstow, president and representative director of Edelman Japan.

“We predict that AI will become an essential assistant in our work, helping to brainstorm, research, summarize, trend spot, monitor media, and generate content, among other tasks,” explained the ACCJ governor and chair of the chamber’s Communications Advisory Council. “The emphasis here is on ‘assistant,’ as we believe there will always be a human in the loop, that AI and people working together will provide the most effective and valuable work output.

“As with any technology, there are risks that require appropriate caution, education, processes, and policies to ensure the safe and trustworthy use of generative AI to protect our work, our clients, and end users from issues related to disinformation, bias, copyright infringement, and privacy.”

Similarly, lawyers such as Catherine O’Connell are also using generative AI as smart assistants. O’Connell is principal and founder of Catherine O’Connell Law and co-chair of the American Chamber of Commerce in Japan (ACCJ) Legal Services and IP Committee.

After taking a course on how to get the most out of ChatGPT, she has been using it for writing keynote speeches, article outlines, posts on social media, and skeletons of presentations. She compares the tool to a human intern, and praises its time-saving efficiencies, but warns that it should not be used for legal work, such as contracts or legal advice. Attorneys in the United States, she noted, have found themselves in trouble after producing legal filings referencing non-existent cases that generative AI simply made up.

“Generative AI is like a teenager that has a lot of promise but has not learned how to be a whole professional yet; it needs guidance,” said O’Connell. “However, in terms of an idea generator or idea expander, a time-saving device, and an assistive tool, generative AI is an asset. The rest falls to me to add my human touch to check and verify, to add my own personality and insights only I have, and to make the output my very own. I think generative AI is so good that its cousin, Google search, may be out of a job sometime soon.”

Smart Tools for Talent

Recruiting is another industry in which workers deal with mountains of structured data, in the form of resumes and online posts, that can be utilized by AI. Robert Half Japan, an ACCJ Corporate Sustaining Member company, uses a system called AI Recommended Talent (ART) to match resumes to client needs. The system speeds up matching for job hunters and employers, allowing staff to spend more time with clients.

“The real power of generative AI is how much it can integrate with our existing systems,” explained Steven Li, senior division director for cybersecurity. “We are piloting ChatGPT-4 integration in our Salesforce CRM. Studies have shown benefits from integrating generative AI into workflows. Other industry examples that highlight the benefit of integration include the GitHub CoPilot generative AI feature.”

The effectiveness of AI in recruiting has led some people to speculate that it could render many human recruiters obsolete. Deep learning algorithms are figuring out what a good resume looks like, and generative AI can craft approach messages and InMails, a form of direct message on the popular LinkedIn platform, noted Daniel Bamford, Robert Half’s associate director for technology.

“However, the real value of agency recruitment is not, and never was, a simple job-description-to-resume matching service,” added Bamford. “Agency recruitment done well is a wonderful journey of problem-solving, involving the goals of organizations and teams and the values and desires of individuals. Excellent recruiters will thrive. They will use AI’s capacity to handle simple tasks like scheduling and shortlisting. This will free up time for high-value interactions, delivering even greater value for their partners and industries through the human touch. The future of excellent recruiters will be brighter with AI’s support.”

Tracking Ships and Patients

Even a traditionally hardware-oriented industry like logistics is being transformed by generative AI. Shipping giant Maersk is using a predictive cargo arrival model to help customers reduce costs with more reliable supply chains. It also wants to harness the power of AI to recommend solutions when shipping routes are congested, advising on whether goods should be flown or stored, and better understand the sales process, Navneet Kapoor, Maersk’s chief technology and information officer, told CNBC.

Maurice Lyn, head of Managed by Maersk for Northeast Asia, also sees great potential in the technology. “The biggest changes that I foresee will be related to the enhanced visibility into, and agility of the management of, the global supply chains of our clients on an execution level,” he told The ACCJ Journal. “The data aggregated will allow logistics service providers [LSPs] to deliver predictive and proactive solutions to our clients. If clearly interpreted by the LSPs, stability and uniformity of costs and deliverables will be provided globally and locally to our clients.”

Generative AI may even help us live longer, healthier lives via long-term patient monitoring. Sydney-based medical AI startup Prospection recently launched its first generative-AI model in Japan to analyze anonymized patient data for pharmaceutical companies so they can better understand patient needs. A Japanese drug company, for instance, could look at cancer patient outcomes across the country and find that they are slightly worse in a particular region, possibly because less-effective drugs are prescribed there.

Founded in 2012 and operating in Australia, Japan, and the United States, Prospection now has data on half a billion patients. For the first 10 years, it was using traditional AI methods, but generative AI has opened new services for the company. Users can query Prospection’s AI services about typical pathways for patients who took a certain drug, or what therapy they underwent after quitting the medication. A Prospection model can predict whether a patient will experience a certain event, such as needing to be hospitalized, over the next year.

“The ChatGPT transformer model is trained on billions of sentences consisting of words. We see each patient’s journey as the sentence and events in the journey as the words. That’s the vocabulary,” said Eric Chung, co-founder and co-CEO of Prospection. “The data is very powerful. There are lots of insights to be gained from data on 500 million patients. It’s beyond the power of humans to analyze, but AI can do it.”
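Chung's "journey as the sentence, events as the words" framing can be sketched with simple transition counts. The journeys below are invented, and this toy Markov model stands in for Prospection's transformer-based approach.

```python
# Treat each patient journey as a sequence of events and estimate the
# probability of the next event from transition counts across patients.
from collections import Counter, defaultdict

journeys = [
    ["diagnosis", "drug_A", "remission"],
    ["diagnosis", "drug_A", "hospitalization"],
    ["diagnosis", "drug_B", "remission"],
    ["diagnosis", "drug_A", "remission"],
]

transitions = defaultdict(Counter)
for journey in journeys:
    for event, nxt in zip(journey, journey[1:]):
        transitions[event][nxt] += 1

def next_event_probs(event):
    """Estimate P(next event | current event) from the counts."""
    counts = transitions[event]
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}

print(next_event_probs("drug_A"))  # remission ≈ 0.67, hospitalization ≈ 0.33
```

A transformer replaces these pairwise counts with predictions conditioned on the whole journey, which is what makes forecasts such as hospitalization risk over the next year feasible at scale.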

Columns C Bryan Jones

Do Androids Dream of Electric Sheep?

AI is beginning to create content that is sparking questions about ownership. For some time, companies have been using AI-powered tools to give computers the task of writing articles, social media posts, and web copy. Now, AI-powered image-generation engines, such as Stable Diffusion, Midjourney, and the Deep Dream Generator, have hit the mainstream. One day, might DEI be extended to machines?

Rethinking DEI in an age of rapidly expanding artificial intelligence

I have always been fascinated by the idea of artificial intelligence (AI). I remember chatting back in the 1980s with a version of Eliza for Commodore 64. Eliza is a program created in 1964 by German American computer scientist Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Rudimentary by today’s AI standards, Eliza is a natural language processor that converses with the user based on their input. It tries to mimic a real person and was one of the earliest applications to attempt what has come to be called the Turing test, a way of gauging a machine’s ability to exhibit intelligence. Passing this test, developed by English scientist Alan Turing, means a machine can conceal its identity, making a human believe it is another human.
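Eliza's approach can be sketched in a few lines: match keyword patterns in the user's input and reflect them back as questions. The rules here are invented stand-ins, not Weizenbaum's original DOCTOR script.

```python
import re

# Eliza-style pattern matching in miniature: each rule pairs a regex
# with a response template that reflects the matched words back.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    """Return a reflective reply, or a neutral prompt if nothing matches."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(respond("I feel anxious about machines"))
# → Why do you feel anxious about machines?
```

The trick is that no understanding is involved at all, which is precisely why Eliza felt rudimentary yet could still briefly pass for a conversation partner.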

I’ve been thinking back to that experience because we are now at a point where we must start considering how we will coexist with and treat truly intelligent machines. We’re not quite there yet, but the rapid advance of AI, and its integration into so many aspects of life, means this is a question that is no longer the province of science fiction. It will be a real part of our future. Machine identity and rights will one day be an extension of the diversity, equity, and inclusion (DEI) that we talk about in this issue of The ACCJ Journal.

AI is beginning to create content that is sparking questions about ownership. For some time, companies have been using AI-powered tools to give computers the task of writing articles, social media posts, and web copy. Now, AI-powered image-generation engines, such as Stable Diffusion, Midjourney, and the Deep Dream Generator, have hit the mainstream. You may have seen some of their creations in the news. As these engines are trained on existing art, often scraped from the internet, there are questions about copyright and plagiarism. Stock media giant Getty Images announced on September 21 that it is banning AI-created art over these concerns.

Eventually, I believe, the visuals that machines create will become less obviously imitative and will express a view of the world unique to the creator, in the same way that the work of a human artist is an expression of the inner workings of their mind. And when that happens, we really will have to ask ourselves what distinguishes us from machines.

Back to the Present

We still have some time before that question must be answered. For now, our focus can remain on the people who make our companies successful and our societies prosperous.

We explore DEI initiatives in this issue, along with sustainability efforts that can help ensure that our world has a healthy future.

I take to the road and the air on page 26 to explore the future of transportation and sustainability initiatives by member companies. I also talk to Bank of America’s Japan country executive and president of BofA Securities Japan, Tamao Sasada, on page 18 about the importance of diversity and the company’s efforts in the areas of DEI; environmental, social, and corporate governance; and sustainable finance.

I hope you enjoy this special issue and find useful ideas to help you achieve your own DEI and sustainability goals.

Sincerely yours, Eliza.

 
Features Julian Ryall

State of Mind

For millions of people around the world who were already struggling with mental health issues, the past two-and-a-half years of the coronavirus pandemic have been a further trial. Isolation, a sudden shortage of opportunities to interact with friends or family in person, additional stresses in the workplace or the home, new financial worries, and difficulty in accessing appropriate mental healthcare have taken their toll, experts in the field told The ACCJ Journal.

How artificial intelligence is helping identify mental health concerns for better treatment




However, in the battle against mental health complaints, this time of adversity has also served to fast-track development and adoption of a new tool: artificial intelligence (AI). While the technology may be relatively new to the sector, the potential is huge, according to companies that are applying it to assist physicians with diagnosis and treatment.

A Tool for Our Time

AI has come a very long way since the first chatbots and early mental health monitoring apps appeared back in the 1990s, explained Vickie Skorji, Lifeline services director at the Tokyo-based TELL Lifeline and counseling service. And it is urgently needed, she added.

“When we have something such as Covid-19 come along on a global scale, there is inevitably a sharp increase in anxiety, stress, and depression. The mental healthcare systems that were in place were simply flooded,” she said.

“A lot of companies were already playing around in the area of AI and mental healthcare, but the pandemic has really pushed these opportunities to the forefront,” she explained. “If, for example, a physician is not able to meet a client in person, there are now ways to get around that, and there has been an explosion in those options.”

Not every purported tool is effective, she cautions, and there are going to be questions around client confidentiality and keeping data current. The clinician must also be sufficiently adept at interpreting a client’s genuine state of mind, which may differ from the feelings communicated through the technology. On the whole, however, Skorji sees AI as an extremely useful weapon in the clinician’s armory.

Voice Matters

One of the most innovative solutions has recently been launched by Kintsugi, a collaboration between Grace Chang and Rima Seiilova-Olson, engineers who met at the 2019 OpenAI Hackathon in San Francisco. In just a couple of years, the company has gone from a startup to being named in the Forbes list of North America’s top 50 AI companies.

Kintsugi has developed an application programming interface called Kintsugi Voice, which can be integrated into clinical call centers, telehealth platforms, and remote patient monitoring applications. It enables a provider who is not a mental health expert to support someone whose speech indicates they may require assistance.

Instead of using natural language processing (NLP), Kintsugi’s unique machine learning models focus on signals from voice biomarkers that are indicative of symptoms of clinical depression and anxiety. Producing speech involves the coordination of various cognitive and motor processes, which can be used to provide insight into the state of a person’s physical and mental health.
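
To make the idea of voice biomarkers concrete, here is a minimal sketch of extracting a few simple acoustic measures from a speech waveform. These particular features (loudness, zero-crossing rate, pause ratio) are common in speech analysis generally; Kintsugi’s actual models are proprietary and certainly far richer:

```python
# Illustrative voice-biomarker feature extraction. These are generic
# acoustic measures, NOT Kintsugi's model; a real system would compute
# many features over sliding windows and feed them to a trained classifier.
import math

def voice_features(samples, silence_threshold=0.01):
    """Compute simple acoustic features from a mono waveform in [-1, 1]."""
    n = len(samples)
    # Root-mean-square energy: overall loudness of the clip.
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Zero-crossing rate: a rough proxy for pitch and noisiness.
    zcr = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    ) / (n - 1)
    # Pause ratio: fraction of near-silent samples. Slowed, pause-heavy
    # speech is one of the signals associated with low mood.
    pause_ratio = sum(1 for s in samples if abs(s) < silence_threshold) / n
    return {"rms": rms, "zcr": zcr, "pause_ratio": pause_ratio}
```

The appeal of such passive signals is that they can be computed from a short clip of ordinary speech, without asking the speaker any diagnostic questions.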

In the view of Prentice Tom, chief medical officer of the Berkeley, California-based company, passive signals derived from voice biomarkers in clinical calls can greatly improve speed to triage, enhance behavioral health metadata capture, and benefit the patient.

“Real-time data that augments the clinician’s ability to improve care—and that can be easily embedded in current clinical workflows, such as Kintsugi’s voice biomarker tool—is a critical component necessary for us to move to a more efficient, quality-driven, value-based healthcare system,” he explained. The technology is already in use in the United States, and Japan is on the waiting list for expansion in the near future.

Chang, the company’s chief executive officer, is confident that they are just scratching the surface of what is possible with AI, with one estimate suggesting that AI could help reduce the time between the appearance of initial symptoms and intervention by as much as 10 years.

“Our work in voice biomarkers to detect signs of clinical depression and anxiety from short clips of speech is just the beginning,” she said. “Our team is looking forward to a future where we can look back and say, ‘Wow, I can’t believe there was a time when we couldn’t get people access to mental healthcare and deliver help to people at their time of need.’

“My dream and goal as the CEO of Kintsugi is that we can create opportunities for everyone to access mental health in an equitable way that is both timely and transformational,” she added.

The Power of Data

Maria Liakata, a professor of NLP at Queen Mary University of London, is also the joint lead on NLP and data science for mental health groups at the UK’s Alan Turing Institute. She has studied the use and effectiveness of AI in communicating with the public during a pandemic.

Liakata’s own work has focused on developing NLP methods to automatically capture changes in individuals’ mood and cognition over time, as manifested through their language and other digital content. This information can be used to construct new monitoring tools for clinicians and individuals.
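
The kind of longitudinal monitoring described above can be caricatured with a deliberately simple lexicon-based sketch: score each dated post, then order the scores into a trajectory a clinician could inspect. The tiny word lists and posts are invented; real systems like Liakata’s use learned models over far richer signals, not word lists:

```python
# Illustrative sketch of tracking mood over time from text.
# Invented toy lexicon; real longitudinal NLP uses trained models.

POSITIVE = {"good", "great", "happy", "hopeful"}
NEGATIVE = {"tired", "sad", "hopeless", "alone"}

def mood_score(post):
    """Score one post in [-1, 1]: +1 if all mood words are positive,
    -1 if all are negative, 0 if none are present."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def mood_trajectory(dated_posts):
    """Return (date, score) pairs in date order, so change over time
    can be inspected rather than a single snapshot."""
    return [(date, mood_score(text)) for date, text in sorted(dated_posts)]
```

The point of the trajectory, rather than any single score, is exactly the one Liakata makes: it is the change in an individual’s language over time that carries the clinical signal.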

But, she said, a couple of other projects have caught her eye.

One is Ieso Digital Health, a UK-based company that offers online cognitive behavioral therapy for the National Health Service, utilizing NLP technology to analyze sessions and provide data to physicians. And last October, US-based mental and behavioral health company SonderMind Inc. acquired Qntfy, which builds tools powered by AI and machine learning that analyze online behavioral data to help people find the most appropriate mental health treatment.

“There has definitely been a boom over the past few years in terms of the development of AI solutions for mental health,” Liakata said. “The availability of large fora in the past 10 years where individuals share experiences about mental health-related issues has certainly helped in this respect. The first work that came to my attention and sparked my interest in this domain was a paper in 2011 by the Cincinnati Children’s Hospital. It was about constructing a corpus of suicide notes for use in training machine learning models.”

Yet, as is the case during the early stages of any technology being implemented, there are issues that need to be ironed out.

“One big hurdle is the availability of good quality data, especially data over time,” she continued. “Such datasets are hard to collect and annotate. Another hurdle is the personalization of AI models and transferring across domains. What works well, let’s say, for identifying a low mood for one person may not work as well for other people. And there is also the challenge of moving across different domains and platforms, such as Reddit versus Twitter.

“I think there is also some reluctance on the part of clinicians to adopt solutions, and this is why it is very important that AI solutions are created in consultation with clinical experts.”

Over the longer term, however, the outlook is positive, and Liakata anticipates the deployment of AI-based tools to help with the early diagnosis of a range of mental health and neurological conditions, including depression, schizophrenia, and dementia. These tools would also be able to justify and provide evidence for their diagnosis, she suggested.

To Assist, Not Replace

Elsewhere, AI tools will be deployed to monitor the progression of mental health conditions, summarize these with appropriate evidence, and suggest interventions likely to be of benefit. These would be used by both individuals, to self-manage their conditions, and clinicians.

Despite all the potential positives, Skorji emphasizes that AI needs to be applied in conjunction with in-person treatment for mental health complaints, rather than as a replacement.

“The biggest problem we are seeing around the world at the moment is loneliness,” she said. “Technology is useful, but it does not give people access to people. How we deal with problems, what the causes of our stress are, how we can have healthy relationships with other people—we are not going to get that from AI. We need to be there as well.”

 