AI: the good, the bad, and the need for humans to take the reins

Artificial Intelligence can be awesome. It can be awful too. Beyond the hype surrounding AI, Professor Wim Naudé sees how it can also stifle innovation, deepen already woeful global inequalities and further concentrate power in the hands of a few haves. Reversing such developments requires drastic measures. Above all, a governance system for AI is needed that ensures it does no harm and that facilitates the diffusion of AI. "The interaction between people and AI is what makes it good or bad. Getting people to be involved as much as possible with AI is, I think, the best way forward."

In your publications and popular writings you describe a decline of entrepreneurship and innovation in the West. What have you observed that leads you to this conclusion?

"Almost every measure you use for entrepreneurship, innovation or business dynamics in general tells the same story. The ratio of start-ups to the total number of firms, research productivity and the share of entrepreneurs with higher education are all declining significantly in almost all high-income countries. People generally believe that we live in a really fast-moving technological age, with all these great innovations that are disrupting everything all the time. But you know what? We don't. For the last 40 years we have been at a relative standstill. The only real improvement we have had has been in ICT. I think it was entrepreneur Peter Thiel who said that if you could transport someone from the 1970s to your city today, the world would be very recognisable to them, except for the mobile phones and the computers we use. Apart from that, the main technologies and innovations we use - trains, electricity, the combustion engine, shipping technology, aspirin and antibiotics as our most potent drugs - are all a century old, or approaching it."

‘We need technology and innovation now more than ever, but instead we are smothering innovation.’

What causes this decline?

"There is no agreement yet. It can be seen as a matter of death by a thousand cuts. For example, the extent of business regulation has continued to increase. While these regulations are really cumbersome, in my opinion they only play a small role. The major reasons lie in a number of long-term trends that started in the 1970s and that represent a structural change in world economic history. For the first time in history, fertility rates in Western countries fell below the replacement rate. So populations get smaller and older. As a result, you have less risk-taking, less entrepreneurship, less innovation. Another major development that also started in the 1970s was a change in capitalist institutions in the West. We have seen the rise of monopolistic domination of the market by the rich that has led to the high levels of income and wealth inequality we have today."

How does that cause a decline in innovation?

"First, you have to realise how these inequalities were given a boost by the development of the digital economy. We thought that this was going to make it very easy for lots of small businesses to be created and to benefit from online opportunities. What actually happened is the rise of platform capitalism: a few digital platforms dominate online business, because of their scalability and the winner-take-all effects from data network economies. Of the ten companies in the world with the highest market capitalisation, around eight are digital platform companies. Now to answer your question: these digital platforms are increasingly engaged in what we call defensive innovation. Many of their innovations aren't intended to bring significantly new and improved goods and services to the market, but are incremental, and designed to keep competitors out. And if smaller businesses become a threat, they can be bought out. That is, they are bought out before these potential competitors reach the size at which they have enough data, can lure expensive PhDs in data science, can run AI algorithms, and so on. So the area of competition policy needs to be revised; we haven't got that right yet. We don't know how to properly regulate competition in the age of digital platforms. All of this really matters because we're confronted with a number of very serious self-inflicted problems and vulnerabilities in our societies. We have a big global population with older generations that are becoming more vulnerable, and younger ones that are increasingly on the move. We're facing climate change problems, we have put pressure on natural resources, we're very vulnerable to outbreaks of pandemics. We need technology and innovation now more than ever, but instead we are smothering innovation."

The perception that we're living in a fast-moving technological age is in large part due to advances in data science, and AI and machine learning in particular. How can such technologies help to create a new wave of innovations?

"Let's look at the current crisis we face. A lot of scientists have started to work on COVID-19. Hundreds of papers on the subject are published every day. Elsevier, the scientific publisher, has created a list of articles relevant to COVID-19. It currently contains tens of thousands of papers, which makes it really difficult for scientists to keep up. In general, we have too much information – which has been described as a "burden". This leads to decreasing returns to scale in scientific research; research teams have to get larger and larger because it gets more difficult to 'do' science. This is where AI can help to make research easier again for us. Algorithms are now reading and scanning these tens of thousands of papers, data-mining, identifying patterns and cross-references and looking for needles in haystacks. AI cannot do the research itself, but it can point researchers in promising new directions. I think this application of AI will have a very good impact on research and innovation."
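The literature mining Naudé describes can be illustrated with a toy sketch. The snippet below ranks a handful of invented paper abstracts against a query using TF-IDF weighting and cosine similarity – a deliberately minimal stand-in for the systems that scan tens of thousands of COVID-19 papers with far more sophisticated natural-language processing. All paper identifiers and texts here are made up for the example.

```python
# Minimal TF-IDF ranking over a toy corpus of paper abstracts.
import math
from collections import Counter

abstracts = {  # invented corpus; real systems index tens of thousands of papers
    "paper-1": "viral protein structure and candidate drug binding sites",
    "paper-2": "hospital logistics and supply chain disruption during outbreaks",
    "paper-3": "machine learning screening of drug compounds for viral targets",
}

def tokenize(text):
    return text.lower().split()

docs = {pid: Counter(tokenize(txt)) for pid, txt in abstracts.items()}
n_docs = len(docs)

def idf(term):
    # Inverse document frequency: terms appearing in few papers weigh more.
    df = sum(1 for counts in docs.values() if term in counts)
    return math.log((1 + n_docs) / (1 + df)) + 1

def tfidf_vector(counts):
    return {t: freq * idf(t) for t, freq in counts.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query):
    # Return paper ids sorted by similarity to the query, best match first.
    qvec = tfidf_vector(Counter(tokenize(query)))
    scores = {pid: cosine(qvec, tfidf_vector(c)) for pid, c in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("drug screening for viral protein"))
# → ['paper-3', 'paper-1', 'paper-2']
```

The point of the sketch is the one Naudé makes: the algorithm only surfaces promising directions; a researcher still has to read and judge what it returns.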

‘As long as we don't understand the human brain, I don’t think we will be making very good progress in developing real artificial general intelligence.’

This is an example of the use of AI as a research tool. A very powerful one perhaps, but still only a tool. There are also cases of AI being innovative in itself, for example by playing a truly creative move in Go, or by discovering a new antibiotic after screening more than 100 million chemical compounds. Given the overwhelming number of possibilities in both these cases, no human can reasonably be expected to have found that particular one. Do examples like these mean we're entering a new era where AI becomes more creative and innovative than humans?

"I fully agree that the volumes of data in these cases are really too much for humans to handle. But I don't see them as proof that we've entered a new world; I'm less optimistic. First of all, these discoveries were not made independently. AI is not an independent entity that comes up with something on its own. AI very much involves researchers who set it to do errands for them, who teach it what to do, who feed it the right data. Nowadays people are trying to let AI replicate innovations from the past. They throw certain data at it and then want to see if an algorithm will come up with the same kind of innovations or patents that people came up with. If you can do that, then you can feed it new data and it might come up with new innovations that humans might come up with if they could only see the bigger picture much more clearly. A number of patents that actually exist were predicted by this AI, and that is quite a fascinating development. The second issue I have is that we still don't understand intelligence, let alone concepts like consciousness, common sense or creativity, well enough to be making these kinds of predictions. As long as we don't understand the human brain, I don't think we will be making very good progress in developing real artificial general intelligence. Google has programmed AI to compose music like Bach, others have programmed AI to paint like Rembrandt, but nobody is really convinced by it."

One of your research interests is the governance of AI: how should AI be governed to facilitate its wider diffusion for productive entrepreneurship, and how can we ensure that progress in the development of AI will benefit humanity and prevent bad outcomes? What steps do governments, companies, or individuals need to take for that?

"First, I don't think that AI poses an existential threat at the moment, as some people think it does. But sure, I do think there are some very negative aspects to developments in AI. Increasing surveillance of private lives and the development of lethal autonomous weapons are examples of what we need to avoid. But we also need to look very hard at a less nasty but more difficult issue: the question of who has access to AI and who gets to benefit from it. If you look at WIPO patenting data, roughly 30 companies, all located in three regions of the world, are responsible for 90% of patents on AI applications. This means that a handful of countries and companies utterly dominate the field of AI and hold all the patents, knowledge and information and the tools and techniques in their hands, while whole continents with billions of people have absolutely no role in the development of AI."

What would you personally advocate to create a better balance of power?

"In the past the unbundling and breaking-up of companies that became too big and powerful has happened quite a few times. The US provides some very good examples. The robber barons of the 19th century, with their monopolies in railroads, shipping, finance and oil, were forced to break up their companies, and that proved to be very good for subsequent growth. Another measure might be to reinstate laws that prohibit companies from buying back their own shares. For a long time, in the US, companies were not allowed to do that, but under pressure from the business lobby that changed in the early 1980s. These share buybacks are a very effective tool to pump up share prices, and thereby executives' bonuses. The result is that in the last ten years, big US firms have diverted an estimated 500 billion dollars away from investments in innovation and instead spent it on share buybacks. That is another big reason for stifling innovation, leading to slow economic growth. Also, we need to be careful not to over-hype AI – there is an incentive to do this so as to attract finance. But this can create asset-price bubbles in share markets and distort investment decisions, and if the bubbles burst, we may end up with another AI winter."

Wim Naudé (1968, South Africa) is a Full Professor in Development Economics and Entrepreneurship at the Maastricht School of Management, the Netherlands. He is also affiliated with the RWTH Aachen University and the IZA Institute for Labor Economics in Bonn, Germany, and is a Fellow at the Africa Study Centre, University of Leiden, the Netherlands.

Wim Naudé studied economics, development economics, econometrics, mathematics and statistics at the University of Warwick (UK) and North-West University (South Africa), and artificial intelligence at the University of Helsinki. The use of big data sets in addressing economic problems has been a constant in his career. For example, he worked on modelling the economic impact of trade liberalization as part of South Africa's negotiations for entry into the GATT, and was part of the initial workshops that established UN Global Pulse, the UN Secretary-General's innovation initiative on big data and AI for sustainable development, humanitarian action and global peace.

‘AI is not an independent entity that comes up with something on its own. AI very much involves researchers who set it to do errands for them, who teach it what to do, who feed it the right data.’

These are quite drastic measures, and the trend is in the opposite direction. Short of such drastic measures, what can be done to facilitate the wider diffusion of AI as a tool for more productive entrepreneurship and innovation?

"In addition to top-down measures it's good to also have a bottom-up approach. One thing to do is to put significantly more tools and resources into our university system, to make sure there are enough grassroots initiatives in AI. Currently the amount that European countries are investing in AI infrastructure and technology is relatively minuscule. Few universities have the number of servers or the computing power you need to develop, train and run serious cutting-edge AI models. More funding for fundamental science and physics, and better promotion of STEM skills, will also lead to further advances in AI. The number of calculations needed to be a leader in AI tasks like language comprehension, game playing and common-sense reasoning has soared an estimated 300,000-fold in the last six years. So, our computational abilities are reaching the end of the line, and quantum computing may be required to get past these computational constraints – but this is still quite far away in my view. Then there is the issue that governments tend to view AI as primarily a technical issue, not so much as an economic or business issue. It is a real shortcoming that involvement from these domains in governments' AI decisions is limited. Governments seem to think that you govern good AI by making sure computer scientists are writing good code and that the algorithms and datasets aren't biased. If that is the only way you are going to deal with AI governance, you are not going to be successful in getting the most out of AI. You may only end up with a few firms and organisations benefiting from "good AI" and most other firms and countries excluded – obviously in such a case even good AI can exacerbate inequality. We therefore need to think seriously about access to AI and its diffusion as well – and this is largely an economic and political challenge."


Wim Naudé published a widely quoted early review article on AI against COVID-19, with the following key takeaways: "I find that AI has not yet been impactful against COVID-19. Its use is hampered by a lack of data, and by too much noisy data and outliers. Overcoming these constraints will require a careful balance between data privacy and public health concerns, and more rigorous human-AI interaction. It is unlikely that these will be addressed in time to be of much help during the present pandemic."

As an illustrative example of both the power and the shortcomings of current AI models, Naudé describes the case of BlueDot, a relatively low-cost, Canadian-based AI model. BlueDot flagged the outbreak as early as 31 December 2019, more than a week before the WHO did. It also generated a list of the top 20 destination cities where passengers from Wuhan would arrive, and it warned that these cities could be at the forefront of the global spread of the disease. Less well known is the fact that another AI-based model sounded an alarm even earlier than BlueDot, but this model attached a very low level of significance to the outbreak. Naudé concludes: "In essence, it required human interpretation and providing context to recognise the threat. Moreover, even in the case of BlueDot, humans remain central in evaluating its output, as Kamran Khan, founder of BlueDot, explained in a podcast. It is therefore rightly stressed that human input, from various disciplines, is needed for the optimal application of AI."

Please also see “AI versus COVID-19, part II"