Features | Tim Hornyak

Synthetic Savants

As generative AI sweeps the world, how will it transform the way we work and innovate?

We live in an age of intelligent machines. Since the introduction of consumer-facing artificial intelligence (AI) applications such as OpenAI’s ChatGPT and Google’s Bard over the past year, generative AI has transformed how people work around the world.

The market for generative AI is projected to balloon from $40 billion in 2022 to $1.3 trillion over the next 10 years, according to Bloomberg Intelligence. First popularized through image generators, the technology has been applied in fields ranging from neuroscience to advertising, sometimes in surprising ways.

Generative AI programs like the large language models powering ChatGPT are trained on enormous volumes of data to detect patterns and predict how they will play out in a piece of content. These models can be trained on linguistic, financial, scientific, sensor, or other data—especially data that is uniform and structured—and can then create new content in response to user input. They have had remarkable success, particularly in image and text generation, and have seen rapid uptake in sectors ranging from education to computer programming. “This technology is set to fundamentally transform everything from science, to business, to healthcare … to society itself,” Accenture analysts enthused in a report. “The positive impact on human creativity and productivity will be massive.”
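For readers who want to see the prompt-in, content-out loop in concrete terms, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers library and the small GPT-2 model; it simply shows how a generative model continues a prompt by predicting likely next tokens and does not represent any of the commercial systems discussed in this article.

```python
# Minimal sketch: prompt in, generated text out.
# Assumes the open-source "transformers" library and the small GPT-2 model;
# purely illustrative, not any system described in this article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is transforming the workplace because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with the tokens it judges most likely,
# based on the patterns it learned during training.
print(result[0]["generated_text"])
```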

Powerful New Assistants

Generative AI first gained public attention thanks to its ability to change how we communicate through words, images, and video. It’s no wonder, then, that the world’s largest public relations company has embraced it. Edelman worked with OpenAI to launch the original GPT-2 and delivered the first application in an ad campaign. In the spots for Hellmann’s Mayonnaise, the tool is tasked with finding new ways to use leftovers.

Edelman believes the technology will reconfigure the communications industry, but it won’t replace human ingenuity, strategic advice, and ethical decision-making that builds trust, said Meghan Barstow, president and representative director of Edelman Japan.

“We predict that AI will become an essential assistant in our work, helping to brainstorm, research, summarize, trend spot, monitor media, and generate content, among other tasks,” explained the ACCJ governor and chair of the chamber’s Communications Advisory Council. “The emphasis here is on ‘assistant,’ as we believe there will always be a human in the loop, that AI and people working together will provide the most effective and valuable work output.

“As with any technology, there are risks that require appropriate caution, education, processes, and policies to ensure the safe and trustworthy use of generative AI to protect our work, our clients, and end users from issues related to disinformation, bias, copyright infringement, and privacy.”

Similarly, lawyers such as Catherine O’Connell are using generative AI as a smart assistant. O’Connell is principal and founder of Catherine O’Connell Law and co-chair of the American Chamber of Commerce in Japan (ACCJ) Legal Services and IP Committee.

After taking a course on how to get the most out of ChatGPT, she has been using it for writing keynote speeches, article outlines, posts on social media, and skeletons of presentations. She compares the tool to a human intern, and praises its time-saving efficiencies, but warns that it should not be used for legal work, such as contracts or legal advice. Attorneys in the United States, she noted, have found themselves in trouble after producing legal filings referencing non-existent cases that generative AI simply made up.

“Generative AI is like a teenager that has a lot of promise but has not learned how to be a whole professional yet; it needs guidance,” said O’Connell. “However, in terms of an idea generator or idea expander, a time-saving device, and an assistive tool, generative AI is an asset. The rest falls to me to add my human touch to check and verify, to add my own personality and insights only I have, and to make the output my very own. I think generative AI is so good that its cousin, Google search, may be out of a job sometime soon.”

Smart Tools for Talent

Recruiting is another industry in which workers deal with mountains of structured data, in the form of resumes and online posts, that can be utilized by AI. Robert Half Japan, an ACCJ Corporate Sustaining Member company, uses a system called AI Recommended Talent (ART) to match resumes to client needs. The system speeds up matching for job hunters and employers, allowing staff to spend more time with clients.

“The real power of generative AI is how much it can integrate with our existing systems,” explained Steven Li, senior division director for cybersecurity. “We are piloting ChatGPT-4 integration in our Salesforce CRM. Studies have shown benefits from integrating generative AI into workflows. Other industry examples that highlight the benefit of integration include the GitHub Copilot generative AI feature.”

The effectiveness of AI in recruiting has led some people to speculate that it could render many human recruiters obsolete. Deep learning algorithms are figuring out what a good resume looks like, and generative AI can craft approach messages and InMails, a form of direct message on the popular LinkedIn platform, noted Daniel Bamford, Robert Half’s associate director for technology.

“However, the real value of agency recruitment is not, and never was, a simple job-description-to-resume matching service,” added Bamford. “Agency recruitment done well is a wonderful journey of problem-solving, involving the goals of organizations and teams and the values and desires of individuals. Excellent recruiters will thrive. They will use AI’s capacity to handle simple tasks like scheduling and shortlisting. This will free up time for high-value interactions, delivering even greater value for their partners and industries through the human touch. The future of excellent recruiters will be brighter with AI’s support.”

Tracking Ships and Patients

Even a traditionally hardware-oriented industry like logistics is being transformed by generative AI. Shipping giant Maersk is using a predictive cargo arrival model to help customers reduce costs with more reliable supply chains. It also wants to harness the power of AI to recommend solutions when shipping routes are congested, advise on whether goods should be flown or stored, and better understand the sales process, Navneet Kapoor, Maersk’s chief technology and information officer, told CNBC.

Maurice Lyn, head of Managed by Maersk for Northeast Asia, also sees great potential in the technology. “The biggest changes that I foresee will be related to the enhanced visibility into, and agility of the management of, the global supply chains of our clients on an execution level,” he told The ACCJ Journal. “The data aggregated will allow logistics service providers [LSPs] to deliver predictive and proactive solutions to our clients. If clearly interpreted by the LSPs, stability and uniformity of costs and deliverables will be provided globally and locally to our clients.”

Generative AI may even help us live longer, healthier lives via long-term patient monitoring. Sydney-based medical AI startup Prospection recently launched its first generative AI model in Japan to analyze anonymized patient data for pharmaceutical companies so they can better understand patient needs. A Japanese drug company, for instance, could look at cancer patient outcomes across the country and find that they are slightly worse in a particular region, possibly because less-effective drugs are prescribed there.

Founded in 2012 and operating in Australia, Japan, and the United States, Prospection now has data on half a billion patients. For its first 10 years, the company relied on traditional AI methods, but generative AI has opened up new services. Users can query Prospection’s AI services about typical pathways for patients who took a certain drug, or what therapy they underwent after quitting the medication. A Prospection model can predict whether a patient will experience a certain event, such as needing to be hospitalized, over the next year.

“The ChatGPT transformer model is trained on billions of sentences consisting of words. We see each patient’s journey as the sentence and events in the journey as the words. That’s the vocabulary,” said Eric Chung, co-founder and co-CEO of Prospection. “The data is very powerful. There are lots of insights to be gained from data on 500 million patients. It’s beyond the power of humans to analyze, but AI can do it.”
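As a rough illustration of the journey-as-sentence analogy Chung describes, the hypothetical Python sketch below encodes a patient’s event history as a token sequence of the kind a transformer-style model could be trained on; the event names, vocabulary, and encoding are invented for illustration and are not Prospection’s actual data schema.

```python
# Hypothetical sketch: treat a patient journey as a "sentence" and each
# clinical event as a "word," then encode it as a token sequence.
# Event names and IDs are invented; this is not Prospection's schema.

journey = [
    "DIAGNOSIS_RECORDED",
    "DRUG_A_STARTED",
    "HOSPITALIZATION",
    "DRUG_A_STOPPED",
    "DRUG_B_STARTED",
]

# Build a vocabulary that maps each distinct event "word" to an integer ID.
vocab = {event: idx for idx, event in enumerate(sorted(set(journey)))}

# The encoded journey is the sequence a model would consume, for example
# to predict the next event (such as a future hospitalization).
encoded = [vocab[event] for event in journey]
print(vocab)
print(encoded)
```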

Columns | C Bryan Jones

Do Androids Dream of Electric Sheep?

Rethinking DEI in an age of rapidly expanding artificial intelligence

I have always been fascinated by the idea of artificial intelligence (AI). I remember chatting back in the 1980s with a version of Eliza for the Commodore 64. Eliza is a program created in 1964 by German American computer scientist Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Rudimentary by today’s AI standards, Eliza is a natural language processor that converses with the user based on their input. It tries to mimic a real person and was one of the earliest programs to attempt what has come to be called the Turing test, a way of gauging a machine’s ability to exhibit intelligence. Passing this test, developed by English scientist Alan Turing, means a machine can conceal its identity, making a human believe it is another human.

I’ve been thinking back to that experience because we are now at a point where we must start considering how we will coexist with and treat truly intelligent machines. We’re not quite there yet, but the rapid advance of AI, and its integration into so many aspects of life, means this is a question that is no longer the province of science fiction. It will be a real part of our future. Machine identity and rights will one day be an extension of the diversity, equity, and inclusion (DEI) that we talk about in this issue of The ACCJ Journal.

AI is beginning to create content that is sparking questions about ownership. For some time, companies have been using AI-powered tools to give computers the task of writing articles, social media posts, and web copy. Now, AI-powered image-generation engines, such as Stable Diffusion, Midjourney, and the Deep Dream Generator, have hit the mainstream. You may have seen some of their creations in the news. As these engines are trained on existing art, often scraped from the internet, there are questions about copyright and plagiarism. Stock media giant Getty Images announced on September 21 that it is banning AI-created art over these concerns.

Eventually, I believe, the visuals that machines create will become less obviously imitative and will express a view of the world unique to the creator, in the same way that the work of a human artist is an expression of the inner workings of their mind. And when that happens, we really will have to ask ourselves what distinguishes us from machines.

Back to the Present

We still have some time before that question must be answered. For now, our focus can remain on the people who make our companies successful and our societies prosperous.

We explore DEI initiatives in this issue, along with sustainability efforts that can help ensure that our world has a healthy future.

I take to the road and the air on page 26 to explore the future of transportation and sustainability initiatives by member companies. I also talk to Bank of America’s Japan country executive and president of BofA Securities Japan, Tamao Sasada, on page 18 about the importance of diversity and the company’s efforts in the areas of DEI; environmental, social, and corporate governance; and sustainable finance.

I hope you enjoy this special issue and find useful ideas to help you achieve your own DEI and sustainability goals.

Sincerely yours, Eliza.

 
Features | Julian Ryall

State of Mind

How artificial intelligence is helping identify mental health concerns for better treatment

For millions of people around the world who were already struggling with mental health issues, the past two-and-a-half years of the coronavirus pandemic have been a further trial. Isolation, a sudden shortage of opportunities to interact with friends or family in person, additional stresses in the workplace or the home, new financial worries, and difficulty in accessing appropriate mental healthcare have taken their toll, experts in the field told The ACCJ Journal.

However, in the battle against mental health complaints, this time of adversity has also served to fast-track development and adoption of a new tool: artificial intelligence (AI). While the technology may be relatively new to the sector, the potential is huge, according to companies that are applying it to assist physicians with diagnosis and treatment.

A Tool for Our Time

AI has come a very long way since the first chatbots appeared back in the 1990s and early mental health monitoring apps became available, explained Vickie Skorji, Lifeline services director at the Tokyo-based TELL Lifeline and counseling service. And it is urgently needed, she added.

“When we have something such as Covid-19 come along on a global scale, there is inevitably a sharp increase in anxiety, stress, and depression. The mental healthcare systems that were in place were simply flooded,” she said.

“A lot of companies were already playing around in the area of AI and mental healthcare, but the pandemic has really pushed these opportunities to the forefront,” she explained. “If, for example, a physician is not able to meet a client in person, there are now ways to get around that, and there has been an explosion in those options.”

Not every purported tool is effective, she cautioned, and there will be questions around client confidentiality and keeping data current. The clinician must also become sufficiently adept at interpreting a client’s genuine state of mind, which might differ from the feelings communicated through the technology. On the whole, however, Skorji sees AI as an extremely useful weapon in the clinician’s armory.

Voice Matters

One of the most innovative solutions has recently been launched by Kintsugi, a collaboration between Grace Chang and Rima Seiilova-Olson, engineers who met at the 2019 OpenAI Hackathon in San Francisco. In just a couple of years, the company has gone from a startup to being named in the Forbes list of North America’s top 50 AI companies.

Kintsugi has developed an application programming interface called Kintsugi Voice, which can be integrated into clinical call centers, telehealth platforms, and remote patient monitoring applications. It enables a provider who is not a mental health expert to support someone whose speech indicates they may require assistance.

Instead of using natural language processing (NLP), Kintsugi’s unique machine learning models focus on signals from voice biomarkers that are indicative of symptoms of clinical depression and anxiety. Producing speech involves the coordination of various cognitive and motor processes, which can be used to provide insight into the state of a person’s physical and mental health.
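To give a sense of what acoustic, rather than linguistic, signals look like, here is a hypothetical Python sketch that pulls a few generic features (pitch and loudness) from a speech clip using the open-source librosa library; the file path is a placeholder, and these simple statistics only illustrate the voice-biomarker idea rather than Kintsugi’s proprietary models.

```python
# Hypothetical sketch: extract simple acoustic features from a speech clip.
# Uses the open-source librosa library; "speech_sample.wav" is a placeholder.
# These generic statistics only illustrate the voice-biomarker idea and are
# not Kintsugi's proprietary models.
import numpy as np
import librosa

audio, sr = librosa.load("speech_sample.wav", sr=16000)

# Fundamental frequency (pitch) track; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Root-mean-square energy per frame as a rough loudness measure.
rms = librosa.feature.rms(y=audio)[0]

print("Mean pitch (Hz):", np.nanmean(f0))
print("Pitch variability (Hz):", np.nanstd(f0))
print("Mean energy:", float(rms.mean()))
```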

In the view of Prentice Tom, chief medical officer of the Berkeley, California-based company, passive signals derived from voice biomarkers in clinical calls can greatly improve speed to triage, enhance behavioral health metadata capture, and benefit the patient.

“Real-time data that augments the clinician’s ability to improve care—and that can be easily embedded in current clinical workflows, such as Kintsugi’s voice biomarker tool—is a critical component necessary for us to move to a more efficient, quality-driven, value-based healthcare system,” he explained. The technology is already in use in the United States, and Japan is on the waiting list for expansion in the near future.

Chang, the company’s chief executive officer, is confident that they are just scratching the surface of what is possible with AI, with one estimate suggesting that AI could help reduce the time between the appearance of initial symptoms and intervention by as much as 10 years.

“Our work in voice biomarkers to detect signs of clinical depression and anxiety from short clips of speech is just the beginning,” she said. “Our team is looking forward to a future where we can look back and say, ‘Wow, I can’t believe there was a time when we couldn’t get people access to mental healthcare and deliver help to people at their time of need.’

“My dream and goal as the CEO of Kintsugi is that we can create opportunities for everyone to access mental health in an equitable way that is both timely and transformational,” she added.

The Power of Data

Maria Liakata, a professor of NLP at Queen Mary University of London, is also the joint lead on NLP and data science for mental health groups at the UK’s Alan Turing Institute. She has studied the use and effectiveness of AI in communicating with the public during a pandemic.

Liakata’s own work has focused on developing NLP methods to automatically capture changes in individuals’ mood and cognition over time, as manifested through their language and other digital content. This information can be used to construct new monitoring tools for clinicians and individuals.
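As a loose illustration of what tracking mood in language over time can look like (and not a reconstruction of Liakata’s methods), the hypothetical sketch below scores a handful of dated posts with an off-the-shelf sentiment model from the transformers library; a monitoring tool might chart exactly this kind of longitudinal signal.

```python
# Hypothetical sketch: score dated posts with an off-the-shelf sentiment model
# to produce a mood-like signal over time. Illustrative only; it does not
# reproduce the research methods described in this article.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

posts = [
    ("2023-01-05", "Had a great week, feeling energetic."),
    ("2023-02-10", "Work has been exhausting and I can't sleep well."),
    ("2023-03-02", "Everything feels like too much lately."),
]

for date, text in posts:
    result = sentiment(text)[0]
    # Each post gets a label and confidence score; plotted over time, these
    # could feed a longitudinal monitoring view for a clinician.
    print(date, result["label"], round(result["score"], 3))
```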

But, she said, a couple of other projects have caught her eye.

One is Ieso Digital Health, a UK-based company that offers online cognitive behavioral therapy for the National Health Service and uses NLP technology to analyze sessions and provide data to physicians. And last October, US-based mental and behavioral health company SonderMind Inc. acquired Qntfy, which builds tools powered by AI and machine learning that analyze online behavioral data to help people find the most appropriate mental health treatment.

“There has definitely been a boom over the past few years in terms of the development of AI solutions for mental health,” Liakata said. “The availability of large fora in the past 10 years where individuals share experiences about mental health-related issues has certainly helped in this respect. The first work that came to my attention and sparked my interest in this domain was a paper in 2011 by the Cincinnati Children’s Hospital. It was about constructing a corpus of suicide notes for use in training machine learning models.”

Yet, as is the case during the early stages of any technology being implemented, there are issues that need to be ironed out.

“One big hurdle is the availability of good quality data, especially data over time,” she continued. “Such datasets are hard to collect and annotate. Another hurdle is the personalization of AI models and transferring across domains. What works well, let’s say, for identifying a low mood for one person may not work as well for other people. And there is also the challenge of moving across different domains and platforms, such as Reddit versus Twitter.

“I think there is also some reluctance on the part of clinicians to adopt solutions, and this is why it is very important that AI solutions are created in consultation with clinical experts.”

Over the longer term, however, the outlook is positive, and Liakata anticipates the deployment of AI-based tools to help with the early diagnosis of a range of mental health and neurological conditions, including depression, schizophrenia, and dementia. These tools would also be able to justify and provide evidence for their diagnosis, she suggested.

To Assist, Not Replace

Elsewhere, AI tools will be deployed to monitor the progression of mental health conditions, summarize these with appropriate evidence, and suggest interventions likely to be of benefit. These would be used by both individuals, to self-manage their conditions, and clinicians.

Despite all the potential positives, Skorji emphasizes that AI needs to be applied in conjunction with in-person treatment for mental health complaints, rather than as a replacement.

“The biggest problem we are seeing around the world at the moment is loneliness,” she said. “Technology is useful, but it does not give people access to people. How we deal with problems, what the causes of our stress are, how we can have healthy relationships with other people—we are not going to get that from AI. We need to be there as well.”

 