Based in London, United Kingdom. Working as a software engineer for Palantir Technologies. Thoughts and ideas expressed are my own.

Artificial Intelligence — Economics of Fear and Cultural Anxiety

Technological innovation has caused cultural anxiety throughout history, but which fears are justified? Here are seven arguments about what is likely to happen in the labor market.


It is extremely difficult to keep track of both the pace and the scope of technological development. Our current age is characterized by brilliant and ground-breaking innovations. However, the lines between the various ‘buzzwords’ are blurred, most importantly:

Robotics; Artificial Intelligence (AI), including its sub-fields Machine Learning (ML) and Deep Learning (DL); Big Data, closely followed by Distributed Ledger Technologies, Blockchain, and Smart Contracts; FinTech and InsureTech; cloud computing and the Internet of Things (IoT), which enable Smart Home, Smart Manufacturing, and potentially a Smart Society; 3D Printing; Renewable Energy as well as CleanTech, BioTech, and Nanoscience. Since it is understandable that many people lack technical knowledge about each of these innovations, the public debate is shaped by speculation about potential positive and negative impacts, which are frequently over- or underestimated.

As stated by the economist Joel Mokyr, whom many consider worthy of a Nobel prize, technology in general has ‘generated cultural anxiety throughout history’. Imagine asking a 19th-century farmer what he or she would do if machines cut the crops. Imagine asking cotton spinners during the first industrial revolution what they would focus on once their work became redundant. In the context of business development, Henry Ford is credited with the quote:

‘If I had asked people what they wanted, they would have said faster horses.’

That is exactly the issue with technological development: it is purely about the future. Hence, rational and helpful recommendations are seldom derived, which makes it difficult for policymakers and business people to choose the right measures and instruments. Besides, communicating understandable decisions to voters and employees becomes tricky.

A short historical assessment and Keynes’s notion of ‘technological unemployment’

The most common public fear, that of widespread job losses due to automation, is not necessarily new. Historically, it can be traced back to William Lee’s invention of the ‘stocking frame knitting machine’ in 1589. Lee tried to obtain patent protection, but Queen Elizabeth I was not pleased because she worried about negative impacts on employment. As a consequence, Lee was forced to leave England. Later, in the 19th-century British textile industry, the Luddite protests emerged as workers resisted new machinery such as the ‘self-acting mule’, which threatened the jobs of cotton spinners.

Left: Stocking frame knitting machine (1589) by profimedia.si on Pinterest. / Right: Self-acting mule (around 1900) by New York Public Libraries.

In 1930, the term ‘technological unemployment’ was coined by John Maynard Keynes, the godfather of modern economics. In his essay ‘Economic Possibilities for our Grandchildren’, Keynes tried to forecast the next 100 years of economic development, up to 2030. He argued that ‘due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour’, people will become unemployed in the short term.

Therefore, neo-Keynesian economic policy, characterized by state investments and subsidies, is often used in times of recession. The logic: when the state invests more than usual or provides (fiscal) incentives to the public, demand is more likely to remain stable, even if the economy is going through a crisis. Thereby, the devastating effects of widespread job losses are diminished, and policy tries to maintain ‘full’ or at least ‘former’ levels of employment. A nice example is the so-called ‘Abwrackprämie’ (car scrapping bonus) introduced by the German government during the economic crisis of 2009. Here, a subsidy of up to 2,500 euros was offered to consumers willing to scrap their old car when purchasing a new one. Of course, the rationale can be questioned from an environmental perspective, but it makes sense economically.

Are fears of job losses justified?

Switching back to the core topic of this essay, public fears of technology-induced unemployment are still relevant today. Research from nine countries (Bulgaria, China, Germany, India, Italy, Spain, Sweden, the UK, and the US) suggests that, when it comes to technology, people are mostly worried about cyber-attacks (48%) and job losses (43%).

Nowadays, ‘automation angst’ refers to the fear that labor will be substituted by capital, where capital means investment in emerging technologies. As a consequence, workers may eventually be replaced both by tangible machines and robots and by intangible software systems, some of them using AI mechanisms.

These concerns are mainly driven by labor market assessments such as the one introduced by the Oxford scholars Frey and Osborne in 2013. Their study was the first attempt to provide a systematic approach to forecasting the ‘susceptibility’ to automation of a total of 702 occupations. They conjectured that within the next two decades 47 percent of all US jobs would be at high risk of automation. From their point of view, employees working in logistics, administration, and manufacturing would be the ones most affected by the shift towards capital-intensive workflows. When the same methodology was transferred to Germany, a similar result of 42 percent of all jobs being at risk was found.

If you are worried now, keep reading. As is so often the case, the answer isn’t that simple.

First of all, measurement matters. Osborne and Frey built their analysis on the ‘task model’ pioneered by the economists David Autor, Frank Levy, and Richard Murnane, which differentiates between routine and non-routine as well as manual and cognitive tasks. Over the period from 1959 to 1998, non-routine cognitive tasks showed large comparative advantages over any kind of routine task. The intuition is that routine tasks can be automated more easily than those which require flexibility. In other words, it is easier to automate a whole assembly line than the job of a teacher, who needs to do administrative work and research, prepare classes and exams, supervise, mentor, and grade students, be a listener and trouble-shooter, explain things and show empathy, manage the classroom, and so on.
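
To make this intuition concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, not the method of any study cited here: an occupation is scored simply by the share of its tasks that are routine.

```python
# Toy sketch of a task-based automatability score. The task split and
# the scoring rule are illustrative assumptions of mine, not data or
# methodology from Autor/Levy/Murnane or Frey/Osborne.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    routine: bool     # repetitive, rule-based work
    cognitive: bool   # mental rather than manual work

def automatability_score(tasks: list[Task]) -> float:
    """Share of an occupation's tasks that are routine.

    The (simplistic) intuition: routine tasks, whether manual or
    cognitive, are easier to codify and therefore easier to automate.
    """
    if not tasks:
        return 0.0
    return sum(t.routine for t in tasks) / len(tasks)

teacher = [
    Task("prepare classes and exams", routine=False, cognitive=True),
    Task("grade standardized tests", routine=True, cognitive=True),
    Task("mentor and supervise students", routine=False, cognitive=True),
    Task("administrative paperwork", routine=True, cognitive=True),
]

assembly_line = [
    Task("mount part A onto part B", routine=True, cognitive=False),
    Task("visual quality check", routine=True, cognitive=False),
    Task("report unusual defects", routine=False, cognitive=True),
]

print(f"teacher:       {automatability_score(teacher):.2f}")        # 0.50
print(f"assembly line: {automatability_score(assembly_line):.2f}")  # 0.67
```

In this toy view, the assembly line scores higher than the teacher because a larger share of its tasks is routine; the actual studies of course rely on much richer task data and statistical models.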

Contrary to the evidence behind the task model, Frey and Osborne assumed that even non-routine cognitive tasks would become prone to automation in the near future. They argued that AI mechanisms were getting better and better in domains previously dominated by humans alone. Examples range from machine learning algorithms which match or surpass human-level performance in image, text, and voice recognition to rapidly improving sensor technologies. Thinking about all these technological advancements in conjunction, driverless cars did not seem so far away.

However, even though the ‘task model’ was used as a baseline, the Oxford study has been criticized because the authors looked at occupation categories as a whole. This led to conclusions such as ‘recreational therapists’ or ‘choreographers’ being unlikely to be automated whilst ‘models’ and ‘cooks’ are. Why is this? Probably because a flawed measurement approach leads to questionable results. In this sense, other studies which allow for heterogeneity of tasks within jobs find that ‘only’ 9 percent of all jobs across 21 OECD countries are automatable.

Remember, each job consists of many different tasks, and some of them are very hard to automate. Also, automation does not always translate into job losses, for several reasons. Here are six additional arguments about what is going to happen in the labor market besides job losses.

1. Complementary inputs to decision-making and redesigning jobs

First of all, and as I have tried to make clear in my recent review of the book ‘Prediction Machines: The Simple Economics of Artificial Intelligence’, the advent of computers made arithmetic cheap. Thereby, the former occupation of a ‘computer’ (yes, this job title indeed existed!) lost its charm. Today, it is argued that AI systems will do the same to prediction. Hence, when prediction becomes cheaper, humans will be used less for predictive tasks.

However, when prediction becomes cheaper, complementary inputs to decision-making, such as data and human talent and experience in interpretation, judgment, and action, will become more valuable. In this sense, many jobs, as well as whole company structures, are very likely to be redesigned.
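
A tiny numeric sketch of this complementarity, again my own illustration rather than a model from the book: if the value of a decision is the product of prediction accuracy and judgment quality, then better judgment pays off more once predictions become accurate and cheap.

```python
# Toy numbers illustrating the complementarity between prediction and
# judgment. This is my own illustration of the argument, not a model
# taken from 'Prediction Machines'.

def decision_value(prediction_accuracy: float, judgment_quality: float) -> float:
    """A correct prediction only creates value if it is acted on with
    good judgment, so the two inputs enter multiplicatively."""
    return prediction_accuracy * judgment_quality

expensive_prediction_era = 0.60  # scarce, human-made predictions
cheap_prediction_era = 0.95      # abundant, machine-made predictions

for accuracy in (expensive_prediction_era, cheap_prediction_era):
    gain_from_better_judgment = (
        decision_value(accuracy, judgment_quality=0.9)
        - decision_value(accuracy, judgment_quality=0.5)
    )
    print(f"prediction accuracy {accuracy:.2f}: "
          f"extra value of good judgment = {gain_from_better_judgment:.2f}")

# When predictions get better and cheaper, the payoff to good human
# judgment rises (here from 0.24 to 0.38), so judgment, data, and the
# ability to act become more valuable complements.
```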

In a previous article, I explained how the invention of the automated teller machine (ATM) did not result in bank tellers losing their jobs, but in more of them being employed. I also explained why school bus drivers probably won’t lose their jobs even if autonomous cars enter our roads. Both are nice examples of job redesign.

2. The ‘plateau of productivity’ and time-lags of adjustment

When a new technology is presented to the public, it is hyped, but a second dynamic takes place after the initial media attention ebbs away. We enter an important phase of criticism: all kinds of challenges to future mass adoption are raised, economic, political, social, legal, moral, and environmental. This is painful for the inventors and researchers in the field, but necessary in order to find out what is possible, valuable, and sustainable, and what is not. Hence, entrepreneurs have to adjust their business models and engage in hardcore problem-solving. All of this criticism needs to be addressed before a technology becomes mainstream in business and everyday life.

As time goes on, specific technologies push towards the ‘plateau of productivity’. Meanwhile, the redesign of both jobs and company structures explained above (see 1.) takes place. In this sense, Keynes also wrote: ‘We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another.’

3. Job creation, skill adaptation, and lifelong learning

Thirdly, technological change not only destroys jobs but also generates new types of occupation, through higher levels of productivity as well as through new or additional types of demand. After the car was invented, coachmen lost their jobs, but occupations like motorcar mechanic, gas station attendant, or cab driver were created. Today, the same holds true. Who would have predicted three decades ago that many people would make their living as programmers, social media managers, search engine optimizers, or web designers? To be fair, Osborne and Frey focused on the destruction potential for jobs, not on job creation; hence, this factor wasn’t included in their study.

Besides the creation of new types of work, many people react to new demands in the labor market. They adapt to an ever-changing environment. They prevent technological unemployment by switching tasks. They understand that their skills won’t be needed in a few years and adjust through lifelong learning. By the way, I guess this is the best way not to become susceptible to automation.

4. Some jobs will be augmented and become qualitatively better

Fourthly, some jobs will be augmented rather than lost. For instance, when AI systems are used in hospitals, they not only potentially save lives but also meet requirements which are challenging even for human experts. Here, a system compares vast amounts of historical data, including different patients, treatments, and probabilities of success. The aim is to generate information which helps the doctor provide the single best diagnosis and identify promising options for therapy.

However, the doctor still needs to double-check, make the decision, and communicate the reasoning to the patient. In this sense, technology doesn’t take over but fulfills its ultimate purpose: to support and help humans. Hence, some jobs will become qualitatively better.

5. Social inequality and political willingness to automate

Fifthly, technological mass adoption depends on our social and political willingness to automate. Put simply, some businesses and entrepreneurs have no interest in automating everything. For instance, many patients or clients won’t be happy to be served by a robot, which is why full automation would put such businesses at a disadvantage.

Doing business is a demanding process due to existing rights, financial requirements, and legal standards. But perhaps more importantly, it is challenging because of moral standards and norms defended by certain groups in society. As mentioned before, the Queen refused to patent William Lee’s ‘stocking frame knitting machine’ in 1589 because of her fears of widespread unemployment. Thereby, she protected the status quo. Today, labor unions, NGOs, and the media, as well as public opinion in general, have a role to play. When union workers call a general strike, technological development is slowed. When the public turns against something, it is less likely to occur. On the other hand, workers need to have a voice in a democracy, which is particularly true in times of sharply rising social inequality. In this sense, the MIT professors Erik Brynjolfsson and Andrew McAfee predict a skill-biased technological polarization of the labor market.

Hence, our social willingness, our skills, the given power structures within society, and important questions of wealth distribution will affect the pace and scope of technological evolution. Intelligent industrial, educational, and social policy will be required.

6. Many jobs won’t be automated, because further technological development is required

Last but not least, the question of which tasks will be automated and which will not will, of course, depend crucially on future technological improvements. For now, AI is running into several technical walls.

One example is the so-called ‘symbol grounding problem’, which describes the difficulty algorithms have in acquiring meaning. A machine may be able to recognize patterns better than any human being, especially when trained on vast amounts of data. However, it lacks the ability to provide explanations. A robot cannot explain its results.

Another example is the ‘frame problem’, which means that a robot is only able to act within an environment it knows. Whenever a situation occurs that it has never experienced, the machine becomes unable to perform. A self-driving car struggles to find a solution when a traffic signal doesn’t work. In the same sense, a vacuum cleaner won’t be able to mow your lawn or go to the supermarket.
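
As a minimal sketch of what the frame problem looks like in code (my own toy example, with invented states and actions), consider a rule-based controller that only knows how to act in situations it was explicitly given:

```python
# Toy illustration of the 'frame problem': a controller that only
# covers situations its designers anticipated. States and actions
# are invented for illustration.

KNOWN_ACTIONS = {
    "green_light": "drive",
    "yellow_light": "slow_down",
    "red_light": "stop",
}

def decide(signal_state: str) -> str:
    """Return an action for a known traffic-signal state.

    Outside the known frame (e.g. an unlit or broken signal) the
    controller simply has no rule to fall back on.
    """
    try:
        return KNOWN_ACTIONS[signal_state]
    except KeyError:
        raise NotImplementedError(
            f"no rule for unanticipated state: {signal_state!r}"
        )

print(decide("red_light"))     # -> stop
print(decide("signal_unlit"))  # raises NotImplementedError
```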

A nice summary of the current challenges of AI is provided here.


To sum up, short-term unemployment is only one possible outcome of technological change. Many tasks will instead be complemented and augmented. Technological development involves time lags, which are best used for industrial adjustment by means of intelligent policy. Within this process, some jobs may be replaced, but others are created, and whole company structures and workflows may be redesigned. Labor market polarization and social inequality remain major concerns and directly reflect the given power structures within society and the political willingness to adapt. Finally, don’t worry! Many technical problems remain unsolved, further improvements in both hardware and software are needed, and the majority of us won’t be jobless in the near future. The best strategy for individuals is constant skill adaptation and lifelong learning.
