As kids, we all learned the same lesson: don’t run with scissors. It was drilled into us by parents and teachers as an unquestionable rule, right alongside looking both ways before crossing the street. The logic was simple and absolute. Move too fast, ignore the risk, and someone gets hurt.
That lesson stuck. And for many executives and board members, it quietly became a governing principle well into adulthood.
For big companies, “don’t run with scissors” translates into layers of approvals, exhaustive risk assessments, governance frameworks, and a strong bias toward caution. For decades, that mindset made sense as a way to protect stakeholders.
But standing still has never been more dangerous.
This year in Davos, something shifted. For the past several years, conversations around artificial intelligence were dominated by ethics, responsibility, regulation, and risk. While those topics remain essential, they no longer define the conversation. We hosted events with Automation Anywhere, Jasper, and Uniphore where board members, CEOs, technologists, and investors expressed a growing impatience with paralysis. The focus moved decisively toward outcomes.
The question was no longer whether AI should be adopted responsibly; that debate is largely settled. The more pressing questions were about speed, competitiveness, and value creation. How fast can organizations move without being reckless? How do they avoid harm without avoiding progress altogether? And what is the real cost of waiting?
Many large companies are paying that cost already. In boardrooms around the world, caution is often framed as prudence. In practice, it has become inertia: pilots never scale, and initiatives die in committees designed to eliminate risk rather than enable learning. Meanwhile, smaller and more agile competitors take calculated bets, accept uncertainty, and quietly take market share. That is why one of our attendees, the Chairman of a large public technology company, suggested that companies need to rethink who is on their boards.
Not making a decision is still a decision. It is a choice to preserve the status quo. And in moments of rapid technological change, the status quo is not neutral; it is a slow retreat.
The old mantra of “move fast and break things” deserved to be retired. It ignored consequences and treated disruption as an end in itself. The pendulum may have swung too far. Many organizations now behave as if the safest option is to do nothing until every risk is mapped and every outcome is measurable. That approach may feel responsible, but it is increasingly disconnected from reality.
What Davos made clear is that leaders are beginning to recognize the flaw in that thinking. The challenge is not choosing between speed and safety. It is learning how to pursue both at the same time. In other words, learning how to run with scissors.
Artificial intelligence has often been talked about as an efficiency tool: automating workflows to reduce costs and improve margins. While those benefits are real, they are also limited. Increased efficiency preserves businesses, but it doesn’t transform them. Cost savings do not create competitive moats; they buy time until someone else rewrites the rules.
The more compelling conversations in Davos focused on AI as a growth engine. Executives talked about using it to unlock new revenue streams, enable entirely new business models, expand addressable markets, and create differentiated customer experiences. These are harder conversations to have in boardrooms. They involve uncertainty and they often challenge long-held assumptions about where value actually comes from.
That discomfort is precisely why so many companies default to efficiency over growth. Efficiency is easy to justify and measure; innovation is not. New entrants continue to outmaneuver incumbents because they are not burdened by legacy thinking or legacy economics. They are willing to test ideas that feel uncomfortable because they have less to protect. Boards want KPIs and CFOs want ROI. That discipline is necessary, but it has also become a convenient excuse for delay.
The most valuable opportunities rarely arrive fully formed or easily quantified. New business models do not fit neatly into existing dashboards. Early signals are qualitative long before they are quantitative. By the time outcomes are perfectly measurable, first movers have already established an advantage.
Technology itself is rarely the true bottleneck; organizational dynamics are.
While the attendees of our events represented a wide variety of industries and executive roles, the same challenges surfaced again and again. Talent remains uneven, with too few people who understand how to operationalize AI at scale. Skills are often concentrated in isolated teams rather than embedded across the business. Behavioral change is harder than any technical deployment, because AI challenges roles, authority, and long-standing ways of working. Resistance is not irrational; it is human.
Readiness, it turns out, is about far more than technology and data. It is cultural. It reflects whether organizations are willing to experiment, tolerate small failures, and reward learning instead of punishing deviation from plan. Many executives admitted that the hardest part is not building capability, but giving people permission to use it.
Many of the most promising AI-enabled opportunities involve doing things that are not comfortable, and we should treat that discomfort as a yield sign, not a stop sign. Sharing data across silos, letting machines augment judgment, rethinking pricing and distribution models that disrupt existing revenue streams: all of these need to be explored.
The companies that succeed will not be reckless. They will put guardrails in place, monitor outcomes closely, and intervene when necessary. But they will move fast, because there is greater risk in moving too slowly while the world accelerates.
As kids, the “don’t run with scissors” lesson was black and white. As leaders, the lesson is more nuanced: sometimes you have to move faster, even when there is risk, because standing still carries its own danger.
Davos this year was not about abandoning responsibility; it was about reclaiming agency. Ethical, responsible, and regulated approaches to AI are necessary, but they are not sufficient.
Learning to run with scissors means accepting that progress involves risk, that safety and speed are not mutually exclusive, and that leadership in this moment requires judgment, not just process.