
The Economic and Business Impacts of Artificial Intelligence: Reality, not Hype

 


The debate on Artificial Intelligence (AI) is characterized by hyperbole and hysteria. The hyperbole is due to two effects: first, the promotion of AI by self-interested investors. It can be termed the “Google-effect,” after Google’s CEO Sundar Pichai, who declared AI to be “probably the most important thing humanity has ever worked on.” He would say that. Second, the promotion of AI by tech-evangelists as a solution to humanity’s fundamental problems, even death. It can be termed the “Singularity-effect,” after Ray Kurzweil, who believes AI will cause a “Singularity” by 2045.

The hysteria similarly arises from two effects: first, from warnings that AI poses an existential threat. It can be termed the “Elon-Musk-effect” after the billionaire entrepreneur who tweeted that “Competition for AI superiority at national level most likely cause of WW3 imo.” Second, from warnings that AI could cause mass unemployment through job automation. This can be termed the “Robot-effect” after the bestselling book by Martin Ford entitled “The Rise of the Robots: Technology and the Threat of Mass Unemployment.”


In a recent Discussion Paper, I provide a critical survey of these Google-, Singularity-, Elon-Musk-, and Robot-effects, and argue that hard evidence to support the hyperbole and hysteria is lacking.

Back in 2013, it was estimated that 47 percent of jobs in the USA could be automated within 10–20 years, and even more in the EU and in developing countries. Six years later, instead of mass unemployment, unemployment in advanced economies is in fact at historic lows. It has been shown that the methods used to calculate potential job losses due to AI are sensitive to assumptions. Moreover, the evidence indicates that automation created 1.5 million net new jobs in Europe between 1999 and 2010.

At the same time, we have seen a continued decline in labor productivity growth. The UK’s ten-year average labor productivity growth since 2007 was the lowest since 1761. Even global superstar firms, which stand to benefit most from AI, have not become more productive. This contradicts claims that AI will enhance productivity.

Why are the Robot- and Google-effects not materializing?

There are at least three reasons:

First, the diffusion of AI through the economy is slower than most people think. It is especially difficult for small firms to implement AI economically. One result is the growth of “pseudo-AI.” As The Guardian points out: “It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.”

Second, AI innovation is getting harder, and it is mostly applied to fine-tune and disrupt existing products rather than to introduce radically new ones. It may be entertaining to play with Google’s Bach doodle, but it hardly raises productivity. The low-hanging fruit of applying Machine Learning (ML) may have been reaped, decreasing returns seem to have set in, and on top of this ML is facing a reproducibility crisis. The end of Moore’s Law may be in sight.

Third, it is not profitable for businesses to invest in AI given slow-growing consumer demand in most Western countries. Most AI innovation is in visual systems for autonomous vehicles. Despite this, autonomous vehicles are most notable for their absence from our roads. This is likely to remain the case for a long time, for technical reasons and because of sunk investments.

One may argue that just because AI’s impact has been small in the past, this does not rule out massive impacts in the future. Maybe the robocalypse is inevitable because of progress in AI. Such an argument is based on a misunderstanding of AI. The term “Artificial Intelligence” itself is misleading. Current AI, using ML, is not intelligent. A joke about AI goes: “When you’re fundraising, it’s AI. When you’re hiring, it’s ML. When you’re implementing, it’s logistic regression.” There are various reasons to be skeptical that non-ML AI research will soon result in a super-intelligence, rather than the one-trick ponies that current ML applications are.
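To make the joke concrete, here is a minimal, purely illustrative sketch (in Python with scikit-learn, on synthetic data invented for this example): much of what is deployed under the label “AI” is, in practice, a simple statistical classifier such as the logistic regression below.

```python
# Illustrative only: a "churn prediction AI" that is just logistic regression
# fitted to two synthetic customer features. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features (e.g. monthly visits, spend) and a binary churn label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
```

The point is not that such models are useless, only that they are statistical tools for prediction, not “intelligence.”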

As a result of hype and hysteria, many governments are scrambling to produce national “AI strategies.”

Global governance organizations are rushing to be seen to take action. It has become fashionable to hold conferences and publish flagship reports on the “Future of Work.”

The United Nations’ Secretary-General has, for the first time in history, published a “Strategy on New Technologies,” singling out certain technologies, including AI, for special attention, based on the belief that “automation, artificial intelligence and robotics promise enhanced economic growth, but they can also exacerbate inequality within and between nations and can contribute to unemployment.” Taking its cue from this strategy, the United Nations University’s Centre for Policy Research (CPR) goes even further in justifying the UN’s planned intervention in the field of AI by claiming that AI is “transforming the geopolitical order” and, even more incredibly, that “a shift in the balance of power between intelligent machines and humans is already visible.” Its blog has called for an Intergovernmental Panel for Artificial Intelligence and for a “UN-led multi-stakeholder global governance regime.” Yes, it sees AI as being of the same complexity and magnitude as climate change. There are many other examples of AI hyperbole and hysteria leading to crazy proposals.

Singling out AI for control and regulation by the UN, governments, or even an intergovernmental panel on AI unproductively shifts the focus towards the technology rather than the real problems.

Technology is a “moving target”: imagine if, during the second industrial revolution, an intergovernmental panel had been put together to “globally govern” electricity. The case of electricity in the late 19th century actually offers a pertinent historical caution. As Carolyn Thomas de la Peña recounts (p. 113), hysteria broke out in some quarters: “Doomsday predictions were made by those fearful of electricity’s deviation from their perceived natural order. Clergymen were particularly prone to this view… According to Bishop Turner… there was much to be feared from ‘the invention of the white man in controlling electricity.’”

Government regulation of technology and innovation is, at the best of times, fraught with difficulties and unintended consequences. When it is based on hysteria and hyperbole, the task becomes particularly problematic. And when such a problematic task is taken up by global political bodies, whose decisions are often made in an “evidence-free zone,” caution is advised.

The upshot is that the hype and hysteria about AI have led to an “unhinged” debate, and are now encouraging stifling regulations as well as AI “arms races.” These consequences could hasten a premature AI-winter through inappropriate controls and a loss of public trust, unfortunately at a time when the world needs more, not less, technological innovation, and needs this technology to diffuse much faster.

Business schools, by researching and teaching good (scientifically informed!) decision-making so as to allocate organizations’ scarce resources well, have a responsibility to bring balance to the debate.

AI, as a statistical and computer science tool to help in decision-making, as a component of “smart” goods and services, and as an instrument to enhance scientific discovery and innovation, should be taught in business schools as part of the “toolkit.” More generally, the digitalization of economies and businesses, of which AI is one outcome, requires new business models, effective digital transformation strategies, and updated innovation practices for the diffusion of digital technologies, all areas to which business school education should add value. Ultimately it is managers who decide which technologies to adopt, how to re-configure the organization to make use of them, and how much to invest in technological innovation. Therefore, managers need an accurate understanding of what digitalization, data, and AI mean. Hype and hysteria are thus not helpful.

Finally, the cost and extent of digital adoption and transformation within organizations depend to some extent on their external environment: the country, region, or city in which an organization is located will have either sufficient or insufficient complementary infrastructure, services, and regulation for AI adoption. It is very expensive for companies to undertake a digital transformation if basic electricity and ICT connectivity are very expensive, or if (perhaps due to hysteria) regulations hamper innovation in AI. Therefore, business schools collectively have a responsibility to play an advocacy role for better complementary investments and for proper, not stifling, regulation of AI.

Further Reading:

Naudé, W. (2019). The Race against the Robots and the Fallacy of the Giant Cheesecake: Immediate and Imagined Impacts of Artificial Intelligence, IZA Discussion Paper No. 12218. Bonn: IZA Institute of Labor Economics.

Watch the GBSN Cross-Border Coffee Break webinar by Professor Naudé.

 

 


Professor Wim Naudé is an expert in entrepreneurship, innovation, and development. He is particularly interested in how new technology impacts development, and in the role of entrepreneurs and the broader institutions of society in this regard. His current work is on artificial intelligence, the future of manufacturing, technology and inequality, and climate technology entrepreneurship.

Professor Naudé is currently a professor at Maastricht University and MSM, and visiting professor at RWTH Aachen University. Between 2012 and 2018 he was the Dean of Maastricht School of Management. Under his innovative leadership the School became a member of the GBSN; climbed substantially in the rankings (from 27th to 10th position in Western Europe in the Eduniversal ranking of full-time MBA programs); obtained re-accreditation for its MBAs from AMBA, IACBE, and NVAO; and was twice a finalist for the AMBA Innovation Award. Before this he worked at the United Nations University (UNU-WIDER) in Finland and at Oxford University.

Connect with Professor Naudé on LinkedIn or via email.