Retaining our problem-solving abilities while using AI
Is AI making you smarter or are you just nodding along? How to stay sharp in the age of artificial intelligence.
I came across a study recently in which Microsoft looked at the effects of AI on long-term problem-solving capabilities. In short, the claim was that those using AI “may also reduce critical engagement, particularly in routine or lower-stakes tasks”.
There are certainly reasons to be critical of such findings (more on that later), but the claim points to a pervasive truth about AI: used well, it forces us to think better and more holistically. At its worst, it lulls us into complacency.
The underlying danger (which is neither clear nor present) is that we won’t be able to think for ourselves when it counts. In that vein, I wanted to explore what we as users can do to guard against the diminishment of our own abilities, or even the perception of it.
Key Takeaways
An over-reliance on AI can make us appear to lack independent thought and agility
With or without AI, the basics of good problem solving (and informing our decisions) remain unchanged
AI can play a strong role in helping us to challenge assumptions and uncover gaps in our thought processes
The subjectivity of ‘problem-solving’
First and foremost, we need to take studies such as these with a grain of salt. For a few reasons:
The ways to measure problem solving are inherently subjective
Face-to-face problem solving (i.e. through dialogue) caters to those who are naturally quick on their feet, and not to those who process best through individual research (even without AI)
An inherent bias toward speed (the assumption that a faster solution is a better one)
If you stop to think about it, most of these are not AI problems. Not to mention, this particular study calls out a lack of engagement with routine or lower-stakes tasks, i.e. tasks that inherently warrant less critical thinking. Which means there’s some talking out of both sides of the mouth happening, given many orgs would just as soon hand these tasks over to AI entirely.
That said, this subjectivity is exactly why we need to 1) understand what over-reliance looks like and 2) be critical about our own self-reliance.
The symptoms of over-reliance
What does an over-reliance on AI look like?
The ONLY step in your decision-making process is “Ask ChatGPT”
Blindly accepting (or not taking the time to cross-check) AI’s outputs
An inability to pivot (or paralysis) when new information arises
Using AI to generate innovative ideas but not pushing beyond its defaults
Not changing your approach with AI between high-leverage (or strategic) and low-leverage (tedious) tasks
Externally, these can make us appear to lack independent thought and agility (or to be locked into familiar patterns) when faced with actual problems.
And while the causal relationship in Microsoft’s study can be questioned, there’s one thing that can’t: cognitive skills atrophy—just like muscles do when they’re not used.
Going back to basics
What’s the best way to keep our problem-solving capabilities from being called into question?
Do what good problem-solvers do
Firstly, we can mimic the habits of great problem-solvers.1 The tenets of good problem-solving include:
Taking swift action rather than postponing issues, ensuring problems don’t linger unresolved.
Communicating clearly, since being able to explain problems and solutions helps align teams and ensures that everyone understands both the issue and their role in addressing it.
Recognizing the importance of prioritization; trying to fix everything at once is counterproductive.
A key lesson is that consistency and visibility in problem-solving help shape how others perceive this skill. And it’s worth mentioning that the more independent our approach, the more room there is for questioning.
Do your homework
Whether you’re using AI or not, the following will always be important parts of effective solutioning:
Knowing your audience: AI is only as good as the data you feed it. If you want results that resonate, make sure you provide context—who you’re speaking to, what they care about, and what problem you’re actually trying to solve.
Consulting actual experts: AI is an excellent assistant, but it’s no substitute for lived experience. If you’re making critical decisions, get input from human experts who know what they’re talking about.
Looking at problems from multiple sides: The key is avoiding tunnel vision. We can do this by checking and challenging our own thinking for gaps and lack of clarity, both of which enhance our ability to explain the problem and solution.
Practical techniques for retaining problem-solving with AI
Know what’s critical to validate
Regardless of the weight of the task at hand, knowing what to double-check is important. Any of the following should be validated against a second source, as not doing so can come back to bite us.
Strategic assumptions
Before asking for or accepting AI-generated insights, ask: What strategic assumptions are we making? Validating these underlying assumptions (and providing them to AI when you enlist it) ensures strategy is based on real-world knowledge and situations. The alternative is letting AI make these assumptions for us, based on patterns in existing data that likely won’t align with your unique situation.
Deadlines and estimates
AI might be able to generate a thorough roadmap, but it can’t account for the reality of execution—team bandwidth, unexpected obstacles, or shifting priorities. Stress-test any AI-suggested timelines: Are the deadlines realistic? Have potential roadblocks been accounted for? Treat estimates as starting points, not final schedules.
Processes and directions
If AI suggests a step-by-step approach, validate whether it aligns with how your team actually works. Does it skip steps or considerations? Is it based on relevant best practices? Similarly, when asking it for how-to style directions, try them first. Nothing will expose blind AI usage faster than passing along directions you haven’t tried.
Data accuracy
If you’re dealing with anything factual, cross-check it. That includes everything from statistics to dates to quotes. Similar to unchecked directions, accepting what AI gives us is a fast-track to dinging our credibility.
Use AI to challenge your thinking
Can we work with AI in a way that makes us smarter? Absolutely. Here are some techniques:
Use AI to generate counterarguments or poke holes in your reasoning.
Interrogate AI’s responses. Ask: What’s missing? What biases are present? What would the opposite perspective look like?
Use AI to test assumptions. Before making a big decision, have it generate pros, cons, and alternative viewpoints.
Iterate. Instead of accepting the first output, ask follow-up questions to dig deeper or ensure the correct assumptions are being made.
All of this comes in addition to one of the core rules of using AI: make sure you layer your own nuance, insight, and originality onto the final product.
Five prompts to help you challenge your thinking
Challenge assumptions:
Analyze this content and identify its underlying assumptions.
* What ideas, perspectives, or factors does it take for granted?
* Where might it be oversimplifying complexity or overlooking key nuances?
Challenge these assumptions by presenting alternative viewpoints, counterarguments, or real-world complexities that could shift the perspective.
Spot biases:
Examine this content for potential biases.
* Which perspectives, voices, or factors might be overrepresented, underrepresented, or missing entirely?
* How do these biases shape the framing of the content?
Suggest ways to reframe or expand the answer to make it more balanced, inclusive, and nuanced.
Take the opposite stance:
Critically challenge this content or plan by taking the opposite stance.
* What are the strongest counterarguments or alternative perspectives?
* Where might this approach fall short, and what overlooked factors could change the outcome?
Highlight any new insights that emerge from reframing the issue from a different angle.
Identify gaps:
Analyze this content for gaps or missing perspectives. Consider the following:
* What key factors, nuances, or real-world complexities are overlooked?
* Are there any implicit assumptions that should be questioned?
* How could this be expanded or refined to provide a more complete and well-rounded view?
Stress-test your reasoning:
Assess the logical strength of this content. Identify any gaps in logic and suggest ways to make the argument more robust. Consider the following:
* Are there weak points, inconsistencies, or assumptions that need stronger justification?
* Where could additional evidence, context, or clarification improve the reasoning?
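If you find yourself reusing these prompts often, it can help to template them. Below is a minimal Python sketch of that idea: it wraps whatever content you’re reviewing in a condensed version of one of the five critique prompts above, producing text you can paste into the AI tool of your choice. The function and dictionary names here are my own, not part of any library.

```python
# Condensed versions of the five critique prompts from this article.
# Hypothetical names; adapt the wording to your own needs.
CRITIQUE_PROMPTS = {
    "challenge_assumptions": (
        "Analyze this content and identify its underlying assumptions. "
        "What does it take for granted? Challenge these assumptions with "
        "alternative viewpoints, counterarguments, or real-world complexities."
    ),
    "spot_biases": (
        "Examine this content for potential biases. Which perspectives or "
        "voices might be overrepresented, underrepresented, or missing? "
        "Suggest ways to reframe the answer to make it more balanced."
    ),
    "opposite_stance": (
        "Critically challenge this content by taking the opposite stance. "
        "What are the strongest counterarguments or alternative perspectives?"
    ),
    "identify_gaps": (
        "Analyze this content for gaps or missing perspectives. What key "
        "factors, nuances, or real-world complexities are overlooked?"
    ),
    "stress_test": (
        "Assess the logical strength of this content. Identify any gaps in "
        "logic and suggest ways to make the argument more robust."
    ),
}

def build_critique_prompt(kind: str, content: str) -> str:
    """Combine one of the critique prompts with the content under review."""
    if kind not in CRITIQUE_PROMPTS:
        raise ValueError(f"Unknown prompt kind: {kind}")
    # Separate the instruction from the content so the model can tell them apart.
    return f"{CRITIQUE_PROMPTS[kind]}\n\n---\n{content}"

# Example: stress-test a plan before sharing it with the team.
prompt = build_critique_prompt(
    "stress_test", "We'll migrate all services in one weekend."
)
```

The payoff of a template like this is consistency: every draft gets run through the same set of challenges, rather than whichever critique happens to come to mind.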
As it relates to the characteristics of good problem-solving, I referred to the findings of behavioral statistician Joseph Folkman. https://www.forbes.com/sites/joefolkman/2021/06/07/8-consistent-behaviors-of-practically-perfect-problem-solvers/