Twitter has a lot of uses. It helps people connect with others who have similar interests by sharing research and resources. Users can follow and contribute to areas of interest, increase the visibility of their companies, and build their personal brands. All these things are important, of course, but I was looking for more when I signed up in 2009.
I saw Twitter as an opportunity to gather insights from a more diverse set of people both inside and outside higher education. Through my work, I was lucky to learn from the leaders of business schools around the world. But I was feeling insulated and sought new and different perspectives. I envisioned a broad community sharing a wide mix of perspectives that intersect to generate new ideas.
The platform, I thought, could also help me become a better writer and communicator. The character limit at the time was 140 characters and that was a real and binding constraint. It would force me to isolate and articulate the main point of an article, speech, or conversation more concisely—and do it in a way that draws attention. Twitter provided a practice field for thinking and writing for clarity and engagement, and the feedback was almost instantaneous.
The way I (and I’m sure many others) thought about it, Twitter was about improving our work and not just about amplifying it. Although both are important from my point of view, the platform evolved more to serve the latter than the former. In 2016, the company introduced an algorithmic timeline, which displayed tweets based on popularity and not just chronologically, as was initially the case. It was a business decision to increase interaction and engagement with the platform, but it also created information bubbles, making it more difficult to explore different points of view.
A year later, the character limit of a tweet increased from 140 to 280 characters. Relaxing the constraint provided additional room for content but meant we didn't have to work as hard to craft shorter tweets. The change reminded me of a quip often attributed to Mark Twain, long before Twitter: "I didn't have time to write you a short letter, so I wrote you a long one."
Overall, the platform now seems better suited to self-promotion than to self-development. This point is captured in the sarcastic (and Tweetable) first line of a David Brooks piece appearing in The Atlantic, "Whenever I feel particularly humble, I tweet about myself." Worse, social media platforms have carelessly enabled the spread of misinformation, contributed to diminishing trust, and exacerbated political polarization.
Now what, if anything, does the evolution of Twitter tell us about the future of ChatGPT, which was released in November 2022 to a flurry of commentary?
In our own field of management education, initial concerns about ChatGPT have focused on students and scholars using the AI as a substitute for their own work, or otherwise "gaming the system" to improve their position rather than themselves. These concerns are not surprising. I've seen articles about using ChatGPT to make your job easier, write a book, and, of course, simply to make money. Many universities and business schools are hurriedly rethinking and rewriting their academic integrity policies, while professors are revising their syllabi.
Others have been more optimistic about the potential for ChatGPT as a tool to enhance and improve learning. As Christopher Grobe writes in the Chronicle of Higher Education, most of the initial concerns about ChatGPT are “not so much about writing, understood as a process and adjunct to thought, as they are about writing assessment, understood as a tool for sorting students and awarding distinctions.” According to Grobe, “if we treat learning (not distinction) as the goal of education, then generative AI looks more like an opportunity than a threat.”
More generally, professors are writing about using ChatGPT to develop critical thinking skills, serve as a learning companion or teaching assistant, and generate alternative perspectives. I was especially excited to see Alain Goudey of Neoma Business School write about ChatGPT and "The art of prompt engineering," emphasizing the increasing importance to companies of being able to get the most out of AI. Is there a better way to develop skills for working with generative AI than to use it to learn business and management? As many of my professor friends have told me, students (and professors) should worry less about being displaced by AI and more about being displaced by people who know how to use it.
As I’ve written before, advances in technology hold the greatest potential to solve the grand challenge in education, which I believe is “to do things that work AND are accessible.” Imagine the power of AI to help us contextualize content and create immersive experiences. These opportunities are especially important for the mission of GBSN, which is “to improve access to quality, locally relevant management education for the developing world.”
Of course, the opportunities associated with AI go well beyond education. It can make us healthier and safer, as well as more productive and innovative. But there also are major risks for society. If we are not careful, AI could increase inequality, facilitate human rights violations, further divide us, and more. The immediate impact on jobs and employment is a major concern. If you are skeptical about the risks, read the New York Times article "Bing's Chatbot Drew Me In and Creeped Me Out," in which Kevin Roose writes that he's still fascinated and impressed by the AI, but "also deeply unsettled, even frightened, by this AI's emergent abilities."
I started this blog with the Twitter story to illustrate the tensions across various uses of technology. Business schools are, of course, compelled to pay attention to the implications for teaching and learning. But they are also responsible for helping the world anticipate and address the myriad challenges that AI will bring. These challenges have less to do with technology and more to do with the role of business in society. OpenAI, which brought us ChatGPT, has a mission "to ensure that artificial general intelligence benefits all of humanity."
It won’t be easy for business schools to shape the future of AI. To get ahead of the issues, we must continue to climb out of our “ivory towers” and get more engaged with policy and practice. To address the complex problems, we must break down silos and work with other disciplines, such as medicine, psychology, and law. To influence direction, we must take stands and be more confident in speaking out on controversial subjects, even if it alienates potential donors. To make a bigger impact, we must build stronger connections with governments and NGOs, as well as business, to test our ideas, put them into action, and adapt our approaches based on what we learn. To empower our graduates to drive change, we must help them be better systems thinkers and lead beyond their organizations.
Many business schools are already changing in these ways. For an interesting example of what business schools are doing, take a look at NYU Stern's Center for Business and Human Rights. One of its four work streams is dedicated to technology, building on the benefits of Internet companies while minimizing their potential for harm, especially harm from the spread of mistrust and misinformation. They are working "to define a way forward that combines the right mix of government oversight, company self-regulation, and public education and action" – and this work has required a different model for research and engagement.
I’m optimistic about the future of AI because of the changes we are seeing in business schools. With a little help from organizations like GBSN, business schools can and will do more to positively impact economic and social development—to build more inclusive and sustainable communities.
Dan LeClair, CEO
Dan LeClair was named CEO of the Global Business School Network (GBSN) in February of 2019. Prior to GBSN, Dan was an Executive Vice President at AACSB International, an association and accrediting organization that serves some 1,600 business schools in more than 100 countries. His experience at AACSB includes two and a half years as Chief Strategy and Innovation Officer, seven years as Chief Operating Officer, and five years as Chief Knowledge Officer. A founding member of the Responsible Research in Business and Management (RRBM) initiative, Dan currently participates on its working board. He also serves in an advisory capacity to several organizations and startups in business and higher education. Before AACSB, Dan was a tenured associate professor and associate dean at The University of Tampa.
Dan played a lead role in creating a think-tank joint venture between the European Foundation for Management Development (EFMD) and AACSB and has been recognized for pioneering efforts in the formation of the UN’s Principles for Responsible Management Education (PRME), where he served on the Steering Committee for many years. Dan has also participated in industry-level task forces for a wide range of organizations, including the Chartered Association of Business Schools, Graduate Management Admission Council, Executive MBA Council, and Aspen Institute’s Business and Society Program.
Widely recognized as a thought leader in management education, Dan is the author of over 80 research reports, articles, and blogs, and has delivered more than 170 presentations in 30 countries. As a lead spokesperson for reform and innovation in management education, Dan has been frequently cited in a wide range of US and international newspapers, magazines, and professional publications, including the Wall Street Journal, Financial Times, New York Times, China Daily, Forbes, Fast Company, and The Economist. Dan earned a PhD from the University of Florida writing on game theory.