Blog
22 January 2026
Superpowers in the Age of AI
To survive in the technological jungle, we don’t have to become extraordinary — it’s enough that we don’t give up thinking in our own way.
A squad of highly trained, seasoned commandos is tasked with finding missing pilots deep in the heart of the South American jungle. A demanding yet seemingly routine mission quickly becomes complicated when the team, led by Alan “Dutch” Schaefer, arrives at the site of a helicopter crash and discovers the mangled bodies of the crew. Soon, the members of the rescue unit begin to die one by one. It turns out that “Dutch” and his men have stepped into the hunting grounds of a killer from outer space. Even these battle-hardened soldiers stand no chance against a creature that is far stronger and faster, and whose vastly more advanced gear and technology make it not only elusive but effectively invisible to humans.
It seems that the lone “Dutch” on the battlefield - like his comrades before him - is doomed to certain death in a confrontation with the extraterrestrial predator. And yet, at that moment, he decides to change the rules of the game. A straightforward exchange of blows is pointless, and trying to match the enemy head-on is doomed from the start. Stripped of weapons, “Dutch” reaches for primal skills instead: he observes and learns his opponent’s tactics, uses his knowledge of the terrain, sets crude traps, and ultimately improvises simple camouflage before delivering the decisive blow. Through cleverness, improvisation, and cunning, “Dutch” neutralizes his adversary’s technological and physical advantage. The winner is the one who can break out of the pattern and change strategy mid-game.
In the plot of Predator - one of the biggest action hits of the VHS era - there is, of course, no need to look for a metaphor of AI hunting humans. The point is rather to emphasize that human advantage doesn’t always come from values that can be measured, calculated, or directly compared to an opponent’s parameters. In a clash with someone faster, stronger, and better equipped, the victor is not the one who tries to catch up on the other’s terms, but the one who can shift their thinking onto different tracks and adapt their tactics to their own strengths.
In the age of AI, it’s easy to fall into a trap: we try to be faster, flawless, and more efficient, attempting to compete on ground where machines will always be better. And yet we should be like “Dutch” - look inward and bring out what cannot be replaced, not even by technology from outer space. Does the human being still possess competencies that cannot be copied or automated? Can we speak of such abilities as our own superpowers? How can we recognize them within ourselves and learn to use them in a world that increasingly rewards what is fast, measurable, and repeatable?
Deskilling — when our skills “go soft” like unused muscles
The first step in searching for “superpowers” isn’t about inventing new skills or redefining the ones we already have. To start with, it’s worth realizing what we’re losing here and now, as technology begins to replace us in almost every activity. Because before we talk about human advantages, we should look at how easily we give them up in exchange for convenience. Convenience, of course, isn’t a sin, and technology often exists to help us. The problem is that help rarely stops at “making things easier” and more often turns into “doing things for us.” And that kind of assistance comes at a cost - only it’s spread out over time, which makes it invisible for a long while. If you don’t have to remember, you remember less. If you don’t have to plan, your brain unlearns planning. If you don’t have to make decisions, you start to feel safer when someone else makes them - even if that “someone” is an automated system. That is deskilling: the gradual loss of abilities that used to be a natural part of everyday life.
When we rely on GPS navigation for almost every car journey, it becomes harder to pay attention to road signs or to read a traditional paper map correctly. If various tools proofread our text for us - or even generate it on our behalf - we become far less able to catch our own mistakes or write something truly original. If a virtual assistant suggests which restaurant to choose for lunch, which hotel to book for a vacation, or what gift to buy for a loved one, we start to believe that our choices are, by definition, less accurate and less logical. As life gets easier, our tolerance for effort drops - our thinking starts to feel too slow and too heavy. It’s a bit like New Year’s resolutions about being more physically active. If we haven’t used our muscles much up to this point, lifting a 5-kilogram dumbbell or running a single kilometer will come with pain and soreness. In the same way, our brain feels it when we outsource its work to technology.
The most dangerous thing about deskilling is that it severely limits our sense of agency. When we get used to instant prompts and suggestions, we begin to panic when they’re not there. The silence in our head becomes unbearable, because we must come up with something ourselves, decide something ourselves, and take the risk ourselves. In the age of AI, the biggest paradox is this: the better the technology, the more passive we become. And the more we delegate our own competencies, the more shocked we’ll be when someone decides we’re unnecessary. “With great power comes great responsibility” - that famous line from a comic-book superhero story should become a compass in the search for our superpowers. We won’t discover them within ourselves if we shift the responsibility for everyday choices entirely - or even largely - onto machines.
Our superpowers aren’t about copying AI
Why is it that, in situations where we could act independently, we increasingly prefer to click, copy, and accept a ready-made answer? What makes technology stop being just a tool and start playing the role of a guide, an editor, or sometimes even a better version of ourselves? There are several reasons, but they all come down to an obsessive pursuit of perfection: we’re expected to act faster, better, and flawlessly - regardless of the cost. We live at a pace that leaves no room for mistakes, hesitation, or the slow process of working our way toward an answer. We want instant solutions because problems themselves appear instantly, like notifications on a screen. And if work can be sped up, risk reduced, and quality improved - why wouldn’t we do it? In this setup, AI seems like the perfect partner: patient, fast, and always ready to suggest the optimal solution. But the package comes with something else, too: an illusion in which perfection becomes the norm, and humans start to look pale - not only in comparison to the machine, but also in comparison to their own expectations.
Trying to match artificial intelligence is doomed to fail - not because humans are “worse,” but because we’re competing in an arena that isn’t our discipline or domain. AI works like an assembly line: it doesn’t get tired, it doesn’t get distracted, it doesn’t waste time on doubt, it doesn’t have an off day, and it doesn’t need a break to collect its thoughts. Humans weren’t built to function in that mode, yet we increasingly try to. On the one hand, we want technology to help us; on the other, we constantly compare ourselves to it. If AI can write a text in a minute, we suddenly feel that an hour is excessive. If it can find a solution in a few seconds, our searching starts to look like a waste of time. If it can generate ten variations, our single idea seems not ambitious enough. This mechanism is brutal, because we simply can’t sustain those comparisons. And that’s when we lose sight of what could be our advantage. Instead of bringing our own superpowers to the surface, we waste energy pretending we can operate like a machine.
What superpowers give us an advantage over technology?
If trying to copy the speed and scale of AI’s work ends in frustration, and the race for “productivity” is almost always won by machines, then logic suggests one thing: it’s worth looking for an advantage in the way we operate. These “human strengths” are what set us apart from technology - because they don’t come from raw computing power, but from experience, context, and responsibility for our own choices. So, it’s worth naming them directly and seeing which of them can become our most important superpowers in the age of AI.
- Understanding context, not just information: AI can gather data, summarize it, and arrange it into a logical whole - but context is more than extra information or background. Context is a feel for the situation. It’s the ability to notice that two identical sentences can mean something completely different depending on the person, the place, and the moment. Humans can read between the lines. We can catch a nuance that remains invisible in the data: tension in a team, a shift in someone’s mood, the hidden risk in a decision that must be made. A machine can predict a trend, but it’s the human who understands what that trend will mean for a specific community, relationship, or organization. In the age of AI, the real advantage is not “knowing more,” but “knowing what to do with it.” This is a competency that can’t be fully automated, because it requires intuition, lived experience, sensitivity to consequences, and the ability to see the world as a whole - not as a set of data points.
- Intuition as condensed experience, not a magical instinct: Intuition is often dismissed or treated as something unreliable and unprofessional. And yet, in practice, intuition is frequently the result of thousands of micro-observations and experiences that the brain processes outside conscious narration. It’s what tells us “Something’s off” before we can prove it with the right spreadsheet. AI can analyze hundreds of scenarios, but humans are often faster in situations where a decision must be made without the comfort of complete information. Intuition is especially powerful when the stakes involve relationships, social risk, trust, or safety - things that cannot be measured as easily as time and money. A machine can suggest the optimal move, but a human can judge whether that move will destroy everything they’ve been building for years. This isn’t a “feeling” that undermines rationality. It’s a part of rationality in a world that isn’t fully rational.
- Empathy and relationships - competencies that stay invisible until a crisis: One of AI’s biggest tricks is that it can sound supportive, gentle, calm, and sometimes even wise. But empathy isn’t a speaking style. Empathy is a relationship in which another person feels truly seen - not merely correctly “handled.” It’s the capacity to be with someone in emotions that cannot be “fixed” with a single message. Humans can create a kind of contact where presence matters more than solutions. We can notice that someone says, “I’m fine,” but their face tells a completely different story. This is especially important in team leadership, negotiations, building trust, easing conflict, or supporting someone in crisis. AI can help with communication, but it cannot replace emotional responsibility.
- Moral responsibility and the ability to choose amid ambiguity: A machine can advise, but it doesn’t bear consequences. It can point to the best option based on parameters, but it has no conscience, no shame, no guilt, and no weight of responsibility to carry. It doesn’t have to live with the decision it made. A human does. And that’s exactly why humans are irreplaceable wherever values, ethics, and choices that cannot be solved “correctly” are involved. There are situations where no perfect solution exists - only one that is more or less honest, more or less responsible, more or less human. AI has no way of living through such a decision; it can only simulate the language of morality. Humans, on the other hand, can take on the risk of choosing the harder path because it’s the better one. This ability will be crucial in the future, because the more decisions AI makes, the more often humans will have to remind the world of the consequences.
- Creative thinking, creativity, and stepping outside the pattern: AI is great at generating variations. It can produce ideas, slogans, texts, images, and solutions that look impressive because they’re fast and correct. But human creativity often means something else: the ability to question assumptions. To take a step sideways when everyone else is moving straight ahead. To connect things that seem incompatible. In the age of AI, creativity won’t be about “producing more,” but about “producing better” - boldly, originally, unconventionally. It’s a survival skill in a world where mediocrity will be generated at scale.
- Psychological resilience and the ability to act in a world with no guarantees: Technology loves predictability. Machines perform best where rules are stable and data is high-quality. The problem is that human life is, by definition, unstable. People change, emotions shift, contexts evolve, priorities move - and sometimes even the very definition of success changes. A superpower, then, becomes the ability to function in uncertainty: making decisions despite incomplete knowledge, coping with failure, learning from mistakes, and not collapsing when things go wrong. Psychological resilience may turn out to be one of the most underrated advantages of the future, because without it, it’s hard to act effectively when stress, chaos, and uncertainty appear.
Superpowers — how to find them within yourself and train them
What exactly is so “super” about these “powers,” and do such seemingly obvious, basic things really deserve to be elevated to something extraordinary? That simplicity is precisely the point, because these skills work under any conditions - especially when things get tense and chaotic and there’s no time for perfect solutions. They’re the foundation without which even the best tools won’t help much. And the good news is that you don’t need exceptional predispositions to strengthen them. All it takes is a few small changes in everyday habits that restore independence and awareness.
- “Do it yourself”: Before you ask a tool for an answer, try to come up with your own - even an imperfect one. This trains independence, builds a sense of agency, and makes AI a support rather than a replacement.
- Look for sharp questions first, and only then for the right answers: A future superpower will be the ability to define the problem, not just react to it. Practice questions like: “What really matters here?”, “What’s the hidden assumption?”, “What could the consequences be?” This shifts your thinking from autopilot mode to interpretation mode.
- “Map” your intuition: For a week, write down brief moments when “something felt off” or when you sensed a situation correctly. Then look for what those moments have in common. Intuition stops being vague when you learn how to recognize it and name it.
- Build your tolerance for effort: Superpowers don’t grow in comfort. Choose small tasks without shortcuts: take a route without navigation, memorize your shopping list, write an email without autocorrect.
- Keep a decision journal: Once a day, note one decision you made and the reason you made it. This helps you see your motives, shortcuts, and automatic patterns more clearly.
- Talk just for the sake of talking: Empathy and relationships are superpowers you can’t train in isolation. Set up conversations where the goal isn’t an outcome or a result, but connection itself. No pre-scripted “perfect messages”—just presence, attention, and listening.
- Give your brain some “time off”: Choose a part of your day you won’t optimize, plan, or speed up. It might be a walk, cooking, exercise, reading. This is where creativity comes back, because your brain stops operating under the pressure of productivity.
- Don’t wait for perfect: Perfectionism kills agency because it delays action. Set yourself a rule: first “good enough,” then “better.” The superpower is finishing, testing, and improving - not waiting for the perfect moment.
- Practice uncertainty on purpose: Every now and then, leave a question unanswered “for now.” Don’t check immediately, don’t google, don’t generate ready-made solutions. This teaches your brain that uncertainty is normal, not dangerous - and strengthens resilience.
- Collect different kinds of input: Read things that disagree with each other, confront different opinions, analyze contradictions. AI likes order; humans can think in chaos. It’s a great exercise for critical thinking and building context.
- Learn by doing, without instructions: Pick one skill that can’t be mastered through theory alone (e.g., negotiation, public speaking, working with people) and deliberately practice it in real situations. The superpower is adaptation in practice, not knowledge on paper.
What awaits us in the future?
In the coming years, the advantage will belong to people who not only know how to use AI but can choose the right tool for the right problem and evaluate the quality of the outcome - not just its correctness and how closely it matches a pre-written prompt. Critical thinking, filtering information, verifying sources, and spotting oversimplifications will become highly valued. Social competencies will also matter more and more: leading teams, negotiating, building trust, easing conflicts, and taking responsibility for communication. On top of that comes creativity in a “hard” form - not generating ideas for the sake of having ideas but creating solutions that make sense in specific conditions. In other words: the market will reward those who can connect humans with technology, instead of trying to turn themselves into human machines.
A similar shift is coming to education, which for years has been built around reproducing knowledge and testing memory. In a world where AI can summarize a book in seconds, solve a problem, and write an essay, simply “knowing” stops being an advantage - because knowledge will be everywhere, instantly available. That’s why much more emphasis should be placed on understanding, interpretation, and independent thinking—on what cannot be copied from ready-made templates. In schools and universities, key skills will include asking questions, building arguments, discussing, and defending one’s position, rather than merely reciting definitions. Learning how to work with mistakes will also become important, because the future will demand experimentation and resilience - not perfect box-ticking. That’s why it’s worth valuing things that may seem “unfashionable” today: reading with comprehension, writing in your own words, expressing ideas precisely, teamwork, and patiently working your way toward meaning.
Surviving in the technological jungle
Stories about superheroes and their superpowers have always inspired us, fascinated us, and fueled our imagination. We admire heroes who shoot lasers from their eyes, fly, turn invisible, gain superhuman strength, teleport, travel through time, or become indestructible. Yet it seems that every one of those comic-book or on-screen characters had to face a moment when they temporarily lost their extraordinary abilities. Even then, superheroes got back up, kept moving, and won the next battles. And they did it thanks to something that wasn’t “super” at all, but seemed ordinary: courage, determination, cleverness, fighting spirit, and stubborn persistence. The real superpower became the ability to act despite fear, despite uncertainty, and despite having no guarantee that anything would work out.
It has always been the same beyond the screen: every great discovery, breakthrough, and transformation began with a person who didn’t have “powers,” but had a question, curiosity, and the willingness to take a risk and make mistakes. No one has ever arrived at a new idea through perfection - only through attempts, missteps, and patience with things that didn’t deliver immediate results. And in the age of AI, we don’t need to become extraordinary to avoid being pushed to the margins. It’s enough that we don’t give up our own voice, our own thinking, and our own choices just because machines seem to do all of that better than we can. Artificial intelligence can be powerful support, but it shouldn’t become an excuse to stop trying on our own. Maybe this is exactly the moment when - like “Dutch” and his team - we’re stepping into the dense undergrowth of a (technological) jungle, and instead of looking for better weapons, we should reach for what we’ve always had in our kit.