
We need bold minds to challenge AI, not lazy prompt writers, bank CIO says

Wanted: Critical thinkers with perspectives and skills to interpret AI-generated results. (Those with an over-reliance on ChatGPT need not apply.)
Written by Joe McKendrick, Contributing Writer

After consulting giant Boston Consulting Group's 2023 report found its consultants were more productive using OpenAI's GPT-4, the firm and other industry players drew backlash from commentators who argued that clients should simply use ChatGPT for free rather than pay millions of dollars for consulting services.

Here's their reasoning: High-paid consultants will simply get their answers or advice from ChatGPT anyway, so clients should cut out the middleman and go straight to ChatGPT.

Also: Master AI with no tech skills? Why complex systems demand diverse learning

There's a valuable lesson here for anyone hiring, or seeking to be hired, for AI-intensive jobs, be they developers, consultants, or business users. The message of this critique is that anyone, even with limited skills, can now use AI to get ahead or at least appear to be on top of things. The playing field has been leveled as a result. What's needed are people who can bring perspective and critical thinking to the information and results that AI provides.

Even skilled scientists, technologists, and subject matter experts may fall into the trap of relying too much on AI for their output -- versus their own expertise. 

"AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do," according to research on the topic published in Nature.

Even scientists trained to critically review information are falling for the allure of machine-generated insights, warn the researchers Lisa Messeri of Yale University and M. J. Crockett of Princeton University. 

"Such illusions obscure the scientific community's ability to see the formation of scientific monocultures, in which some types of methods, questions, and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors," their research said. 

Messeri and Crockett state that beyond the familiar concerns about AI ethics, bias, and job displacement, the risks of over-reliance on AI as a source of expertise are only starting to be understood.

In mainstream business settings, over-reliance on AI carries consequences ranging from lost productivity to misplaced trust. For example, users "may alter, change, and switch their actions to align with AI recommendations," observe Microsoft's Samir Passi and Mihaela Vorvoreanu in an overview of studies on the topic. In addition, users "find it difficult to evaluate AI's performance and to understand how AI impacts their decisions."

That's the thinking of Kyall Mai, chief innovation officer at Esquire Bank, who views AI as a critical tool for customer engagement while cautioning against its overuse as a replacement for human experience and critical thinking. Esquire Bank provides specialized financing to law firms, and wants people who understand both the business and what AI can do to advance it. I recently caught up with Mai at Salesforce's New York conference, where he shared his experiences and perspectives on AI.

Mai, who rose through the ranks from coder to multi-faceted CIO, doesn't dispute that AI is one of the most valuable productivity-enhancing tools to come along. But he is also concerned that relying too heavily on generative AI -- whether for content or code -- will diminish the quality and sharpness of people's thinking. 

Also: Beyond programming: AI spawns a new generation of job roles

"We realize having fantastic brains and results isn't necessarily as good as someone that is willing to have critical thinking and give their own perspectives on what AI and generative AI gives you back in terms of recommendations," he says. "We want people that have the emotional and self-awareness to go, 'hmm, this doesn't feel quite right, I'm brave enough to have a conversation with someone, to make sure there's a human in the loop.'"  

Esquire Bank is employing Salesforce tools to embrace both sides of AI -- generative and predictive. The predictive AI provides the bank's decision-makers with insights on "which lawyers are visiting their site, and helping to personalize services based on these visits," says Mai, whose CIO role embraces both customer engagement and IT systems.

As an all-virtual bank, Esquire employs many of its AI systems across marketing teams, fusing generative AI-delivered content with back-end predictive AI algorithms. 

"The experience is different for everyone," says Mai. "So we're using AI to predict what the next set of content delivered to them should be. They are based on all the analytics behind and in the system as to what we can be doing with that particular prospect."

Also: Generative AI is the technology that IT feels most pressure to exploit

In working closely with AI, Mai discovered an interesting twist in human nature: People tend to disregard their own judgment and diligence as they grow dependent on these systems. "As an example, we found that some humans become lazy -- they prompt something, and then decide, 'ah, that sounds like a really good response,' and send it on." 

When Mai senses that level of over-reliance on AI, "I'll march them into my office, saying, 'I'm paying you for your perspective, not a prompt and a response in AI that you're going to get me to read. Just taking the results and giving it back to me is not what I'm looking for; I'm expecting your critical thought.'"

Still, he encourages his technology team members to offload mundane development tasks to generative AI tools and platforms, freeing up their time to work more closely with the business. "Coders are finding that 60 percent of the time they used to spend writing was for administrative code that isn't necessarily groundbreaking. AI can do that for them, through voice prompts."

Also: Will AI hurt or help workers? It's complicated

As a result, he's seeing "the line between a classic coder and a business analyst merging a lot more, because the coder isn't spending an enormous amount of time doing stuff that really isn't value added. It also means that business analysts can become software developers."

"It's going to be interesting when I can sit in front of a platform and say, 'I want a system that does this, this, this, and this,' and it does it."
