Update 2: A free book on Privacy and the Internet of Things
Companies making and selling AI software will need to be held responsible for potential harm caused by “unreasonable practices” – if a self-driving car program is set up in an unsafe manner that causes injury or death, for example, Microsoft said. And as AI and automation boost the number of laborers in the gig-economy or on-demand jobs, Microsoft said technology companies need to take responsibility and advocate for protections and benefits for workers, rather than passing the buck by claiming to be “just the technology platform’’ enabling all this change.
At the end of last week I attended a presentation about Economy 4.0 and Artificial Intelligence (AI). The idea is that all the expected improvements in services and products, including the Internet of Things (IoT), depend on the intelligence of the communication between our machines.
Most of the simple repetitive tasks are already computerized, replacing workers with either smart software or robots. The next step, which is being taken rapidly, is learning algorithms. Learning by algorithms can be described as repeating processes, games or tasks, evaluating the results and adapting the reaction to the initial input. This repetition used to require human input to distinguish right from wrong outcomes. Yet the latest generation of algorithms can even learn from other algorithms. By computerizing simple tasks, so the expectation goes, labour productivity will increase by 40%.
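The repeat–evaluate–adapt loop described above can be written down as a minimal learning algorithm. The sketch below is a hypothetical epsilon-greedy learner; the task, the payoff rates and all parameters are illustrative assumptions, not taken from the presentation. It repeats a choice, evaluates the right/wrong feedback, and adapts its estimates of each action's value.

```python
import random

def learn_by_repetition(true_best=2, n_arms=3, rounds=2000, epsilon=0.1, seed=0):
    """Repeat a task, evaluate the outcome, adapt: a toy epsilon-greedy learner.

    Each round the learner picks one of `n_arms` actions; the (hypothetical)
    environment rewards the `true_best` action more often than the others.
    The feedback updates the learner's running value estimate per action.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for _ in range(rounds):
        # Mostly exploit current knowledge, occasionally explore at random.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        # Evaluate: the best action pays off 80% of the time, the rest 20%.
        reward = 1.0 if rng.random() < (0.8 if arm == true_best else 0.2) else 0.0
        # Adapt: incremental average of the rewards observed for this action.
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

values = learn_by_repetition()
best = max(range(3), key=lambda a: values[a])
print(best)  # after enough repetition, the learner settles on the best action
```

The human input mentioned above corresponds to the reward signal: someone (or some other algorithm) still has to define what counts as a right outcome.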
The learning process requires lots of data on inputs, procedures and desired outcomes. Therefore Big Data is important, and one of the functions of the IoT is to generate and analyze data. The idea is to generate so much data that every task becomes simple and repetitive. In the (near) future, all tasks will be divided into simple repetitive tasks. The presenters sketched a future in which people can enjoy a 24/7 economy where interactions are real-time, but with a robot that can help you based on your emotional pattern, earlier requests, requests of similar identities, and communication with other AIs in both your network and its own.
In my view, if something is possible, in time it will be realized. However, this is true for positive and negative developments alike. So I have four remarks with respect to the Economy 4.0 future:
- The underlying models of the algorithms determine the validity of the outcomes of the analyses and actions. For example, criminal profiling depends on the correctness of the relationship between the chosen profiles and the probability of criminal behaviour. Another example is the fact that advertisements on Facebook and other websites are determined by past behaviour. boyd and Crawford (2012) cite Bollier: “As a large mass of raw information, Big Data is not self-explanatory”.
Often the remark is heard that after the purchase of a pair of sneakers, or in my case a casserole, the algorithms will for some time keep offering us the same sneakers and casseroles. Learning, then, takes the form of changing the advertisements after I make a new purchase on the internet. A less sympathetic feature is that once you have looked for airline tickets, the price increases with each visit to the website. Some people even reserve one computer for looking and another one for buying! The Financial Times recently drew attention to “The algorithms that seduce our children“: “The tech industry is under scrutiny for how its algorithms manipulate adults but little attention has been paid to how algorithms seduce children, who are far more susceptible than their parents. Children often lack the self-control or even the means to change the channel“.
- An implicit assumption in the Big Data approach is that every task can be divided into simple tasks which can be described by an algorithm, reducing complex activities to sets of computerized tasks. Activities which are too complex today can then be solved by gathering more data. Yet, as someone remarked: finding a needle in a haystack is not made simpler by adding more and more hay to the stack.
boyd and Crawford (2012) compare the influence of Big Data with the assembly line of Ford, stating: “[..] the specialized tools of Big Data also have their own inbuilt limitations and restrictions. For example, Twitter and Facebook are examples of Big Data sources that offer very poor archiving and search functions. Consequently, researchers are much more likely to focus on something in the present or immediate past – tracking reactions to an election, TV finale, or natural disaster – because of the sheer difficulty or impossibility of accessing older data“. Kate Metzler argues that academics in the social sciences either lack access to Big Data or the capabilities for Big Data analyses. According to her, the digital age will result in a division between in-company researchers and academic researchers, resulting in a majority of research aimed at selling more, and a minority of research trying to understand social processes and outcomes.
- The gathering of data also raises issues about privacy and ownership of the data. If my behaviour is recorded by some home device, which learns to form expectations about my personal life (raising the temperature after 5, ordering pizza if I am not home at 7), it is uncomfortable to know that this data is shared with some anonymous IT workers in Silicon Valley, especially when firms, but also government agencies, use this data to forecast my behaviour and use this knowledge to influence my decisions.
To quote boyd and Crawford (2012): “Just because it is accessible does not make it ethical”. Arguments for such actions are often found in “it is convenient for you….”, “it is only to help….”, or “it is for your/the national safety……”. Next to privacy and unwanted influence, data on your behaviour, on and off the web, is worth money: firms pay the data-collecting companies for the data you have given them. An old internet proverb states that if you don’t pay for the product, you become the product; often followed by the remark that you agreed with the user agreements. Yet, without agreeing to a long list of conditions, you cannot use Facebook, web browsers or the other ICT applications used for modern communication. So if I want to keep communicating with my family in our WhatsApp group, I cannot state that I agree with conditions 1–10, but not with 11–121. Still, I think there should be some discussion on the ownership of data about myself, or that I should at least have a say in the way it is used and by whom.
- The emergence of Economy 4.0 will cause serious disturbances, even if the algorithms are right, the needle is found and agreement is reached. As Austrian economists such as Mises and Hayek already showed around the start of the last century, changes in the real economy take time. When workers become obsolete, this results in unemployment and losses in income and human capital, with real effects. In that sense, Economy 4.0 can increase inequality and social instability.
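The first remark, that the validity of profiling stands or falls with the data behind it, can be sketched with toy, entirely hypothetical numbers. In the sketch below, two groups have the same true offence rate, but one group is checked far more often; a naive model trained on the recorded data then learns the scrutiny, not the behaviour.

```python
import random

def biased_profiling_sketch(people=10000, seed=1):
    """Toy illustration with made-up numbers: groups A and B have the SAME
    true offence rate (5%), but group A is checked four times as often, so
    offences in A are far more likely to end up in the recorded data."""
    rng = random.Random(seed)
    recorded = {"A": 0, "B": 0}   # offences that make it into the dataset
    size = {"A": 0, "B": 0}
    for _ in range(people):
        group = rng.choice("AB")
        size[group] += 1
        offender = rng.random() < 0.05                            # same true rate
        checked = rng.random() < (0.8 if group == "A" else 0.2)   # unequal scrutiny
        if offender and checked:
            recorded[group] += 1
    # Naive "profile": recorded offences per head, ignoring check rates.
    # It reproduces the policing bias, not the underlying behaviour.
    return {g: recorded[g] / size[g] for g in "AB"}

rates = biased_profiling_sketch()
print(rates)  # group A looks several times 'riskier' despite identical true rates
```

The point is not the specific numbers but the mechanism: an algorithm trained on such records will confidently rank group A as higher risk, and acting on that ranking (checking A even more) would make the recorded gap wider still.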
Economy 4.0 can influence our lives in positive and negative ways. It is expected to increase the ease of use of many appliances, to improve the quality of life through self-learning machines and applications that communicate with each other, and to mechanize dirty and tedious work. On the other side, it gives firms, governments and others the opportunity to influence our behaviour in undesirable ways, and it can give rise to wrong or socially unacceptable decisions, for example by reinforcing biases.
These (potential) developments call for a Public Management 4.0: not only in the sense that government agencies apply the new technologies, but in a way that supports the positive sides of AI and the IoT and suppresses the negative sides.
To do so, governments have to lead in the knowledge of the underlying technologies, but also make the usage of these technologies transparent. Why is a loan denied? What kinds of commercials are financed by whom? And in extreme cases, governments should be able to pass laws forbidding certain uses of the Economy 4.0 technologies.
Next to directly intervening in the negative sides of these technological possibilities, there is the obligation to educate the new generations in the do’s and don’ts, so that they (we?) know when we are being manipulated, know that ‘fake news’ exists, but also see that it requires some effort to distinguish good from bad.
Much is possible, much will be positive, but to enjoy the advantages of the AI/IoT developments, society has to defend itself against the negative aspects.