
The Case for Critical Thinking: Training the Managerial Mind in the Age of AI

Imagine the hurried life of a typical business student today: grabbing the last table at the coffee shop, busting out the laptop to tackle a complex case about an underperforming multinational company. The question is simple and open-ended: “What’s wrong with the company, and what should leadership do next?” Our student promptly copies and pastes the case and question into ChatGPT and gets back a structured answer. It looks clear and compelling—and right. She inserts a few “human” touches (e.g., grammatical errors) before sending it off to her teammates. By then her latte is ready, and she has just enough time to make it to her job interview.

What’s missing from this scenario? Everything that matters, as far as I’m concerned. Analyzing a case means exploring assumptions, considering the context, doing additional research, and applying multiple frameworks and perspectives. The stuff that takes lots of time and intellectual energy.

Learning is like food and finance. You can’t stay healthy without managing your diet now. You can’t build wealth without saving now. And you can’t acquire the knowledge and skills you need without learning now. Yet many people have difficulty giving up short-term benefits for bigger long-term gains. Psychologists call this tendency present bias (or hyperbolic discounting).

We are only just beginning to understand the implications of AI use when it comes to learning. I’m especially excited about the possibilities for personalizing education and increasing access. But I’m also concerned about the potential impact on students and their development in important areas such as critical thinking.

Researchers at MIT have been comparing the brain scans of people engaged in essay writing over time. In one experiment, they divided participants into three groups: those using their “brain only,” those using search engines, and those using large language models (LLMs). Among their initial findings: “cognitive activity scaled down in relation to external tool use.” That is, users of generative AI showed the least cognitive engagement of the three groups. This phenomenon is sometimes referred to as AI-induced cognitive offloading.

Notice that the experiments are not about the quality of the essays but about the capacity to write better ones in the future. That’s the thing about education.

In management education, specifically, I worry that AI will become a substitute for learning the very skills that define good management. Management is difficult. Management problems are messy and require critical thinking. They are political and contextual, and they rarely have clear right answers. In business schools, students need to practice diagnosing problems, weighing tradeoffs, understanding human motivations, and making difficult choices under uncertainty. Managers develop these skills by doing the work and exercising the brain—thinking, rethinking, failing, adjusting, and developing a feel for complexity.

Unbeknownst to them, students consistently using AI to generate answers without engaging in the real work are robbing their future selves of the skills they will need most. Just as a person who skips exercise for years may find themselves unfit when they most need strength and endurance, a student who skips over the challenging thinking work may find themselves unprepared when they must navigate a boardroom, lead a team, or face a crisis where there is no pre-written answer. Sure, AI may eventually do all these things for us. But for now, we need critical thinkers.

What can business professors do to help students learn the things that matter most? Cognitive offloading is not inevitable in management education. I’m sure there are other approaches that work, but here are a few suggestions for encouraging AI use in ways that support critical thinking.

First, emphasize peer-to-peer interactive discussions and debates in class. It turns out that in the MIT experiments, the LLM users “also struggled to accurately quote their own work.” Live discussion reduces the incentive to outsource thinking, since students must defend their reasoning in real time. That’s one reason why I believe cases should remain a fixture in management education. As case conversations enter unexpected territory, students must think on their feet to analyze data, uncover underlying assumptions, debate trade-offs, and connect theory to the specifics of the case.

Second, instructors can design assignments that require structured reflection before accessing AI or other digital aids. Students must articulate assumptions and frameworks in their own words. As in the case of real-time discussions, students are free to use AI, but not as a cognitive crutch. I’m interested in learning more about the approach described by Weinstein, Brotspies, and Gironda. They suggest more back-and-forth interactions between students and AI, first requiring students to diagnose issues without external help, then to compare their analysis with AI-generated insights to refine their understanding, and finally to critique AI’s responses.

Finally, educators can do more to inform students about the tradeoffs, emphasizing the long-term payoffs of exercising the mind. To me, it is simply about reinforcing how important continued development is to being an effective manager—having and exercising good judgment, thinking creatively, and sorting through complexities at the systems level. The idea is to instill in students a growth mindset and help them balance efficiency objectives with the time it takes for genuine skill development. A good manager is constantly learning and developing.

In closing, I want to emphasize this last point more generally. Ultimately, I believe we must address the underlying mindsets about management education (and higher education more broadly) to reduce the risks of AI. We need to reinforce that education is more about the process of learning than about degrees and entry-level jobs. Frankly, I have always believed that learning is not only a means to an end but also an end in itself. Now, I think such a mindset will be especially important in an AI world.
