The myth of out-of-the-box AI success
The myth was always inevitable. How do we pick up the pieces and turn disillusionment into success?
An old friend reached out recently, and I had the pleasure of joining the ManageEngine podcast. I like what ManageEngine is doing as they tackle the podcast medium, and since they (like AI Artistry) blend the tactical and the psychological with technology, it was a no-brainer.
Lauren, my host, had a ton of thought-provoking questions in her set, and one of my favorite parts of this discussion was when she asked about the out-of-the-box myth of AI.
In short: the idea that everything should be given to us. Which is exactly the idea we’ve been sold about AI, for better or (typically much) worse. My response to Lauren:
“I think the out-of-the-box implies, you know, I don't have to work towards it. I don't have to have a, a sense of my own unique situation. I don't have to approach it as a skill as opposed to a tool. Which truly, I think that's, that's one of the things that makes somebody successful, is that mindset. Um, but then it also, I, I think it, it lends itself to not understanding what context and what's important to give AI so that it can help understand what you're trying to accomplish.”
But the more I thought about it, the more there was to latch onto. And with that evolution of thought, I wanted to chew on this some more here. So let’s look at what’s played into it and how we succeed once we do a thorough jig on its grave.
The reasons for the myth
I think there are three reasons for this myth. But it truly is one big reason and a couple of sidekicks.
Reason number one, the biggest reason, is that this is what we’re being sold. It’s not intentional, mind you, but the undertone of most everything we heard about AI (at least for the first two years) was that it was capable of replacing us and/or could do things without us. Which is a big distinction from “for us”.
How were we sold this, but more importantly, why was it so believable?
We’ve been surrounded by software for decades, none of which truly works out-of-the-box. Upload something, set up a template, add a connector. There’s always a large amount to do before clicking the button. So it’s not that SaaS has trained us to expect frictionless starts. It’s that we’ve been told “Artificial Intelligence” is just that much more advanced.
Adding on to this is the paradox that the more complex an AI tool is, the less its creators can tell us how to use it. That’s always been a bit of an unspoken truth for software in general. There’s googling things, and then there’s being good at crafting queries, and that’s the simplest example. But the talk about “plug-and-play automation” or “immediate value with zero setup” is more easily swallowed in the nebulous word clouds of AI tooling. When in reality, we’re giving in to the belief that the best versions of our workflows are waiting in the trunk of someone else’s car.
Worse still, AI is being promoted as if it’s the same for everyone. On the one hand, there’s truth to the claim that its strength is universal. But on the other, the real competency lives in specificity. It’s built from our data, our understanding, and our goals. None of which comes out-of-the-box.
The hype doesn’t help, nor does the flooding of the market. But it goes deeper than that. The question of what’s possible versus what’s useful is a bigger (and more personal) question than most people realize.
How we respond accordingly
So with the myth shattered, what can we do to pick up the pieces and operate at our best?
Work with the ultimate balance of cynicism and optimism
When it doesn’t work immediately, we assume we’re the problem. It doesn’t help that product demos are always cherry-picked to impress. Everything just works, and we don’t see the broken attempts or the weird thread detours. The cynicism comes in as a healthy reminder that when things don’t work, it’s not you, it’s the abstraction.
Optimism, on the other hand, encourages a foundation of “this will work”. You don’t try things hoping they’ll go poorly. That simple truth applies just as easily to the basic units of writing a prompt or building a workflow.
Marry skill and mindset
One of the things I come back to often is that skill-building doesn’t look like mastery; it looks like willingness. Willingness to show a tool how you think. Willingness to work through the bad and build an intuition.
And quite frankly, a willingness to look at the idea of skill in a new light. To think of AI as a skill is to look at the melding of three things:
Self-awareness: To identify where we can augment ourselves, we need not only an idea of the goal but at least a small intuition of how we’d attack it.
Communication: Central to prompting are the fine arts of being concise, asking for what we want and knowing when to leave things unsaid.
Discipline: The age of AI means the age of content overload. The discipline comes in knowing what’s important and cutting anything that’s not worth taking away.
Unlearn what needs to be unlearned
If we can accept the myth is just that, we can do some helpful rewiring of AI as we know it.
The first is abandoning the “one perfect prompt” mindset. Most of the time, the best use of our effort isn’t hunting a magic string; it’s typing five dumb ones and iterating toward success. Messiness is a part of most effective processes at one time or another.
Another is how we engage with new tools. We have to weigh what a tool thinks it can do against what it lets us build it into. Any suggestions of how a tool augments us are just that: suggestions. Rather than trying to mimic example workflows, a better question becomes: how quickly can I shape this tool around my weirdness?
And perhaps most broadly, there’s embracing the messiness. Which isn’t to say the mess is entirely unavoidable, so much as that AI has (deservedly) added a lot more tools to our toolbelts. We’re going to find ourselves jumping between them. And we’re going to find ourselves with responses we don’t like. Want to delete a thread that didn’t deliver much? Do it. Want to keep it? It’s a choice of comfort and intuition, and one we can all make.
Be vocal users
Instead of hoping a tool matches our workflow, we can pivot from passive to active. Instead of expecting the tool (or the team creating it) to read our minds, we can provide feedback. The value that you build might help you the most, but that doesn’t mean there aren’t translatable pieces. Software is a living, breathing thing, and I’ve found most of the AI tools I’ve interacted with to be open and agile when it comes to change.
Because once you let go of the out-of-the-box myth, the door opens to an actual system that fits. And that’s where the real returns live.
See the full podcast at ManageEngine: How everyday users can actually make gen AI work for them