After Boston Consulting Group published a 2023 report finding that its consultants were more productive when using OpenAI's GPT-4, the company received backlash suggesting that clients should simply use ChatGPT for free instead of retaining BCG's services for millions of dollars.
The reasoning: consultants will get their answers or advice from ChatGPT anyway, so clients should cut out the middleman and go directly to ChatGPT.
There is a valuable lesson here for anyone hiring, or looking to be hired, for AI-intensive jobs, whether as developers, consultants, or business users. The message of this critique is that anyone, even with limited or insufficient skills, can now use AI to get ahead or appear to be on top of things. The playing field has been leveled. What's needed now are people who can bring perspective and critical thinking to the information and results that AI provides.
Even scientists, technologists, and subject-matter experts can fall into the trap of relying too much on AI for results, rather than on their own expertise.
“AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we really do,” according to research on the topic published in Nature.
Even scientists trained to review information critically are falling for the temptation of machine-generated information, warn researchers Lisa Messeri of Yale University and M.J. Crockett of Princeton University.
“Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which some types of methods, questions, and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to error,” their research states.
Messeri and Crockett say that beyond concerns about AI ethics, bias, and job displacement, the risks of over-reliance on AI as a source of expertise are only just beginning to emerge.
In conventional business environments, users’ over-reliance on AI has consequences such as lost productivity and misplaced trust. For example, users “can alter, change, and shift their actions to align with AI recommendations,” Microsoft’s Samir Passi and Mihaela Vorvoreanu observe in an overview of studies on the subject. Additionally, users “will find it difficult to evaluate AI performance and understand how AI affects their decisions.”
That is the view of Kyall Mai, chief innovation officer at Esquire Bank, who sees AI as a critical tool for client engagement while cautioning against its overuse as a substitute for human expertise and critical thinking. Esquire Bank provides specialist financing to law firms and wants people who understand the business and what AI can do to advance it. I recently caught up with Mai at the Salesforce conference in New York, where he shared his experiences and perspectives on AI.
Mai, who rose from programmer to multi-hyphenate CIO, doesn’t deny that AI is perhaps one of the most valuable productivity-enhancing tools to emerge, but he’s also concerned that relying too heavily on generative AI (whether for content or code) diminishes the quality and acuity of people’s thinking.
“We realize that having fantastic prompts and results is not necessarily as good as someone who is willing to think critically and give their own perspectives on what AI and generative AI bring to you in terms of recommendations,” he says. “We want people who have the emotional and self-awareness to say, ‘Hmm, this doesn’t feel quite right, I’m brave enough to have a conversation with someone, to make sure there’s a human being in the loop.’”
Esquire Bank is using Salesforce tools to leverage both sides of AI: generative and predictive. Predictive AI provides bank decision-makers with information about “which lawyers visit their site and helps personalize services based on those visits,” says Mai, whose CIO role encompasses both client interaction and IT systems.
As a fully virtual bank, Esquire employs many of its AI systems in its marketing teams, merging content generated by generative AI with back-end predictive AI algorithms.
“The experience is different for everyone,” says Mai. “So we’re using AI to predict what the next set of content delivered to them should be. The predictions are based on all the analytics behind the scenes and in the system, on what we can do with that particular lead.”
Working closely with AI, Mai discovered an interesting twist in human nature: people tend to disregard their own judgment and diligence as they become dependent on these systems. “For example, we found that some humans get lazy: they enter a prompt and then decide, ‘Oh, that looks like a really good response,’ and send it.”
When Mai senses that level of over-reliance on AI, “I’ll bring them into my office and say, ‘I’m paying you for your insight, not for a prompt and an AI response you’re going to hand me. Just taking the results and giving them back to me is not what I’m looking for; I expect your critical thinking.’”
Still, he encourages members of his technology team to delegate routine development tasks to generative AI tools and platforms, freeing up their own time to work more closely with the business. “Programmers are finding that 60 percent of the time they used to spend writing code was for administrative code that isn’t necessarily cutting edge. AI can do that for them, through prompts.”
As a result, he’s seeing “the line between a classic coder and a business analyst blur a lot more, because the coder doesn’t spend a huge amount of time doing things that don’t really add value. It also means that business analysts can become software developers.”
“It will be interesting when I can sit in front of a platform and say, ‘I want a system that does this, this, this, and this,’ and have it do it.”