
Can AI step up to offer help where humans cannot?

It has not always had a good reputation, but artificial intelligence can step up and offer a helping hand when humans cannot find solace in their own kind.
Written by Eileen Yu, Senior Contributing Editor

If applied inappropriately, artificial intelligence (AI) can do more harm than good. But it can also offer a much-needed helping hand when humans are unable to find comfort in their own kind.

AI hasn't always had a good rep. It has been accused of replacing human roles, taking away livelihoods, and threatening human rights.

With the right checks and balances in place, though, few can deny AI's potential to enhance business operations and improve lives.

Some have even tapped AI to help save lives.

In September 2020, The Chopra Foundation introduced a chatbot, dubbed Piwi, as a "community-driven solution" that aims to prevent suicide. The AI-powered platform is trained by "experts" and, based on its online interactions, connects users to a network of 5,000 counsellors on standby.
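To make that concrete, here is a minimal, purely illustrative sketch of how such an escalation rule might work. This is not The Chopra Foundation's actual design: the keyword list, threshold, and hand-off below are crude stand-ins for the trained models and safeguards the foundation describes.

    RISK_THRESHOLD = 0.7  # hypothetical cut-off for escalating to a human

    # Crude stand-in for a trained emotional-AI model; a real system
    # would use a learned classifier, not a keyword list.
    HIGH_RISK_PHRASES = ("end it all", "no way out", "can't go on")

    def estimate_risk(message: str) -> float:
        return 1.0 if any(p in message.lower() for p in HIGH_RISK_PHRASES) else 0.2

    def handle_message(message: str) -> str:
        if estimate_risk(message) >= RISK_THRESHOLD:
            # Route to one of the counsellors on standby; the bot steps back.
            return "Connecting you with a counsellor now. You won't wait alone."
        return "I'm here with you. Tell me more about how today has been."

    print(handle_message("I feel like there's no way out"))

The point of this human-in-the-loop design is that the machine's job ends at recognition and routing; the judgment calls stay with people.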

The foundation's CEO Poonacha Machaiah said: "With Piwi, we are giving people access to emotional AI to learn, interpret, and respond to human emotions. By recognising signs for anxiety and mood changes, we can improve self-awareness and increase coping skills, including steps to reduce stress and prevent suicide by timely real-time assistance and intervention."

Piwi has deescalated more than 6,000 suicide attempts and handled 11 million conversations through text, according to The Chopra Foundation's founder, Deepak Chopra, an Indian-American author famed for his advocacy of alternative medicine. He described Piwi as an "ethical AI" platform trained with safeguards built into the system, adding that there were always humans in the backend to provide support where necessary. 

Young individuals, in particular, were drawn to the chatbot, Chopra said. Noting that suicide was the second-most common cause of death amongst teenagers, he said youths loved talking to Piwi because they didn't feel judged. "They are more comfortable talking to a machine than humans," he said in a March 2022 interview on The Daily Show.

In Singapore, suicide is the leading cause of death for those aged between 10 and 29. It also claimed five times more lives than road accidents in 2020, when the city-state recorded its highest number of suicide cases since 2012. Suicides accounted for 8.88 deaths per 100,000 residents that year, up from 8 in 2019.

Increases were seen across all age groups, in particular those aged 60 and above, where the number who died by suicide hit a new high of 154, up 26% from 2019. Industry observers attributed the spike to the COVID-19 pandemic, during which more people likely faced social isolation and financial woes.

It is estimated that every suicide in Singapore affects at least six loved ones.

I, too, have lost loved ones to mental illness. In the years since, I've often wondered what else could have been done to prevent their loss. They all had access to healthcare professionals, but clearly that proved insufficient or ineffective. 

Did they fail to reach help in their final hour because, unlike chatbots, human healthcare professionals weren't always available 24/7? Or were they unable to fully express how they felt to another human because they felt judged?

Would an AI-powered platform like Piwi have convinced them to reconsider their options during that fateful moment before they made their final decision?

I've had strong reservations about the use of AI in some areas, particularly law enforcement and autonomous vehicles, but I think its application in solutions such as Piwi is promising. 

While it certainly cannot replace human healthcare specialists, it can prove vital where human support isn't a viable option. Just look at the 6,000 suicide attempts Piwi is said to have deescalated. How many lives amongst these might otherwise have been lost?

And there is so much more room to leverage AI innovation to improve the provision of healthcare. 

Almost a decade ago, I raised the possibility of a web-connected pill dispenser that could automatically dispense a patient's prescribed medication. This would be especially useful for older folks who have difficulty keeping track of the numerous pills and supplements they require on a daily or weekly basis. It also could mitigate the risk of accidental overdose or wrongful consumption.

There have been significant technological advancements since I wrote that post that can further improve the accuracy and safety of such a pill dispenser. AI-powered visual recognition tools can be integrated to identify and ensure the correct medication is dispensed. The machine also can hold an updated profile of each medication, such as how much each pill weighs and its unique features, to further verify the right drugs have been dispensed.
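As a rough illustration, that double-check might look something like the sketch below, where the classifier call, medication name, and weight tolerance are all assumptions on my part rather than any vendor's actual specification.

    from dataclasses import dataclass

    @dataclass
    class PillProfile:
        name: str
        weight_mg: float      # expected weight of a single pill
        tolerance_mg: float   # acceptable deviation from the profile

    def classify_pill(image: bytes) -> str:
        """Stand-in for an AI visual-recognition call."""
        return "metformin_500"  # hypothetical model output

    def verify_dispense(image: bytes, measured_mg: float, expected: PillProfile) -> bool:
        # Both checks must pass before the pill reaches the patient:
        # the camera's identification and the weight against the profile.
        looks_right = classify_pill(image) == expected.name
        weighs_right = abs(measured_mg - expected.weight_mg) <= expected.tolerance_mg
        return looks_right and weighs_right

    profile = PillProfile("metformin_500", weight_mg=500.0, tolerance_mg=25.0)
    print(verify_dispense(b"<camera frame>", measured_mg=503.2, expected=profile))

Requiring two independent signals to agree is what mitigates the overdose and wrong-drug risks: a mislabelled cartridge would fail the visual check, while a double-dispense would fail the weight check.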

Clinics and pharmacies can issue each patient's prescribed medication in a cartridge, refillable every few months, and protected with the necessary security features. Relevant medical data is stored in the cartridge, including dispensing instructions that can be accessed when it is inserted into the machine at home. The cartridge also can trigger an alert when a refill is needed and automatically send an order to the clinic for a new cartridge to be delivered to the home, if the patient is unable to make the trip.  
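The refill trigger itself could be as simple as a threshold on remaining doses. Again, a sketch under assumed fields and endpoints, not any clinic's actual protocol:

    from dataclasses import dataclass

    REFILL_THRESHOLD = 7  # e.g. reorder when a week of daily doses remains

    @dataclass
    class Cartridge:
        patient_id: str
        medication: str
        doses_remaining: int

    def order_refill(cartridge: Cartridge) -> None:
        # Stand-in for a secure call to the issuing clinic or pharmacy.
        print(f"Refill ordered: {cartridge.medication} for {cartridge.patient_id}")

    def after_dispense(cartridge: Cartridge) -> None:
        cartridge.doses_remaining -= 1
        if cartridge.doses_remaining <= REFILL_THRESHOLD:
            order_refill(cartridge)  # new cartridge delivered to the home

    c = Cartridge("patient-001", "metformin_500", doses_remaining=8)
    after_dispense(c)  # drops to 7, triggering the order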

The pill dispenser can be further integrated with other healthcare functions, such as the ability to analyse blood for diabetic patients, as well as telemedicine capabilities so doctors can dial in to check on patients should the data sent across indicate an anomaly. 
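That telemedicine hook might amount to flagging readings that fall outside a patient's expected range. The bounds and alert channel below are illustrative only:

    NORMAL_GLUCOSE_MMOL = (4.0, 10.0)  # illustrative per-patient bounds

    def notify_doctor(patient_id: str, reading: float) -> None:
        # Stand-in for paging the clinic's telemedicine system.
        print(f"Doctor alerted: {patient_id} glucose at {reading} mmol/L")

    def process_reading(patient_id: str, reading_mmol: float) -> None:
        low, high = NORMAL_GLUCOSE_MMOL
        if not (low <= reading_mmol <= high):
            notify_doctor(patient_id, reading_mmol)  # doctor can dial in

    process_reading("patient-001", 13.4)  # anomaly, so the doctor is alerted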

AI-powered solutions such as the pill dispenser will be essential in countries with an ageing population, such as Singapore and Japan. They can support a more distributed healthcare system, in which the central core network of hospitals and clinics isn't overly taxed.  

With the right innovation and safeguards, AI surely can help where humans cannot. 

Many seem to believe so: 66% of respondents in Asia-Pacific think bots will achieve success where humans have failed with regard to sustainability and social progress, according to a study released by Oracle, which polled 4,000 respondents across the region, including Singapore, China, India, Japan, and Australia.

In addition, 89% think AI will help businesses make more progress towards sustainability and social goals. Some 75% express frustration over the lack of progress by businesses to date, and 91% want concrete action from organisations on how they're prioritising ESG (environmental, social, and governance) issues, rather than mere words of support.

Like The Chopra Foundation, CallCabinet believes AI can help customer service agents cope with the mental stress of their work. The UK-based speech analytics software vendor argues that AI-powered tools with advanced acoustic algorithms can process key phrases and assess voice pace, volume, and tonality. These enable organisations to ascertain the emotions behind words and evaluate the sentiment of every interaction.

CallCabinet suggests that these can allow managers to monitor service calls and identify patterns that signal potential mental health issues, such as negative customer interactions, raised voices, and profanity directed at agents. 
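In spirit, that kind of monitoring boils down to scoring each call on a few acoustic signals. Here is a rough sketch with invented weights and thresholds; CallCabinet's actual algorithms are not public, so treat this purely as an illustration of the idea.

    from dataclasses import dataclass

    @dataclass
    class CallFeatures:
        words_per_minute: float   # voice pace
        peak_volume_db: float     # raised voices
        profanity_count: int      # abuse directed at the agent

    def stress_score(f: CallFeatures) -> float:
        # Simple weighted heuristic; a real acoustic model would be learned.
        score = 0.0
        if f.words_per_minute > 180:
            score += 0.3
        if f.peak_volume_db > 75:
            score += 0.3
        score += min(f.profanity_count * 0.2, 0.4)
        return score

    call = CallFeatures(words_per_minute=190, peak_volume_db=80, profanity_count=2)
    if stress_score(call) >= 0.6:
        print("Flag this agent's recent calls for a manager check-in")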

Because when humans cannot provide solace to those who need it, maybe AI can?
