
We Humans and the Intelligent Machines - Jörg Dräger


– there is nothing to complain about. Kyle Behm even has experience in retail and is a good student. His father, a lawyer, starts investigating and discovers the reason. All seven supermarkets use similar online personality tests. Kyle suffers from bipolar disorder, a mental illness, which the computer programs recognized when they evaluated the tests. All the supermarkets rejected his application as a result.

      Behm’s father encourages him to take legal action against one of the companies. He wants to know whether it is permissible to categorically block a young man from entering the labor market simply because an algorithm is being used – especially since Behm is being treated for his illness and is on medication. Moreover, his doctors have no doubt that he could easily do the job he applied for. Before the case goes to trial, the company offers an out-of-court settlement. Behm obviously had a good chance of winning his case.

      Larger companies in particular are increasingly relying on algorithms to presort candidates before inviting some of them to an interview. The method is effective and inexpensive: an algorithmic system handles the task with ease, even when several thousand applications have to be screened. However, it can become a problem for certain groups of people if all companies in an industry use a similar algorithm. Where in the past a single door might have closed, now they all close at once. The probability of such “monopolies” forming is increasing because digital markets in particular follow a winner-takes-all principle: one company or product wins out and displaces all competitors. Eventually only one software application remains – to presort jobseekers or to grant loans.

      Many companies are untroubled by this: Such software allows them to save time and increase the effectiveness of their recruiting procedures. And for some applicants the algorithmic preselection works in their favor, since their professional competence and personal qualities count more than the reputation of the university they attended, or their name, background or whatever else might have previously prevented them from getting the job (see Chapter 12). Yet while some people’s chances on the labor market increase and become fairer, other groups are threatened with total exclusion, such as those who, like Behm, suffer from a health condition. Such collateral damage cannot be accepted by a society that believes in solidarity. In areas impacting social participation, an oversight authority is therefore required, one that recognizes algorithmic monopolization at an early stage and ensures diverse systems remain present (see Chapter 16).

       No blind trust

      As these six examples have shown: Algorithms can be deficient and produce unwanted results, data can reflect and even reinforce socially undesirable discrimination, people can program software to achieve the wrong objectives or they can allow dangerous monopolies to take shape. Thus, blind faith is inappropriate. Algorithms are merely tools for completing specific tasks, not truly intelligent decision makers. They can even draw wrong conclusions while fulfilling their mission perfectly. After all, they do not understand when their goals are inappropriate, when they are not subject to the necessary corrections or when they deprive entire groups of the opportunity to participate in society. They can do considerable harm with machine-like precision. When algorithms are mistaken, we cannot let them remain in their error – to return to the adage by Saint Jerome quoted in the last chapter. People are responsible for any wrongdoing of this sort. They determine which objectives algorithms pursue. They determine which criteria are used to reach those objectives. They determine whether and how corrections are carried out.

      And just like Carol from the Little Britain sketch, they hide behind algorithms when they do not want to or cannot talk about the software’s appropriateness. For example, the head of Human Resources at Xerox Services reported that algorithms are helping her department reduce the high turnover at the company’s call center. The software used to parse applications predicts a potential employee’s length of stay at the company (see Chapter 12). When asked what criteria the program used, the HR director replied, “I don’t know why this works. I just know it works.”11 Such answers forestall any debate about which candidates are rejected and why, and whether there might be a systematic bias.

      A second example is provided by Germany’s Federal Ministry of the Interior. It used facial recognition software in a pilot project at the Südkreuz train station in Berlin to search for criminals and terrorists. Its official statement on the project reads: “We achieved a 70-percent and above recognition rate of the test subjects – a very good figure.”12 This means that the software correctly recognized seven out of ten wanted persons. But that is not the entire story. The ministry initially did not disclose the number of innocent passers-by falsely identified by the system. Its complete interim report has been kept under lock and key.13
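
      A rough back-of-the-envelope calculation shows why that withheld number is the decisive one. The sketch below is purely illustrative: the daily passenger count, the number of wanted persons and the 0.1-percent false positive rate are assumptions made for this example, not figures from the pilot project; only the 70-percent detection rate comes from the ministry’s statement.

```python
# Illustrative base-rate calculation for facial recognition at a busy station.
# All inputs except the detection rate are assumed for this example.
passengers_per_day = 90_000   # assumed number of daily passers-by
wanted_persons = 10           # assumed wanted persons among them
true_positive_rate = 0.70     # detection rate cited by the ministry
false_positive_rate = 0.001   # assumed: 0.1% of innocent people are flagged

correct_alarms = wanted_persons * true_positive_rate
false_alarms = (passengers_per_day - wanted_persons) * false_positive_rate

# Precision: of all alarms raised, how many point to an actual wanted person?
precision = correct_alarms / (correct_alarms + false_alarms)

print(f"Correct alarms per day: {correct_alarms:.0f}")
print(f"False alarms per day:   {false_alarms:.0f}")
print(f"Share of alarms that are justified: {precision:.1%}")
```

      Under these assumptions, roughly 90 innocent people would be flagged every day, and only about seven percent of all alarms would concern an actual wanted person. Even a seemingly tiny false positive rate can thus dominate the system’s real-world behavior – which is precisely why the complete figures belong on the table.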

      Both users, Xerox Services and the Ministry of the Interior, are thus making it more difficult to have a public discussion on the use of algorithms, one that is sorely needed. Both the question of possible discrimination in selecting employees and the right balance between surveillance and security needs are sensitive issues in a free society. Citizens can legitimately demand that users of algorithms assume responsibility and not hide behind a machine. More facts and figures need to be on the table for a real debate to take place. After all, only those who understand how their systems work can detect and eliminate errors and biases.

      Not only do we need effective algorithms, algorithms need us, too. We must therefore act in a way that is both competent and ethically responsible. In addition to the technical challenge, there are moral and legal aspects which must be addressed. Where seemingly intelligent machines judge people and errors quickly have a resounding impact, people must be able to discuss and define the goals machines are used for and comprehend their basic functioning at all times. We have a social responsibility to ensure that the software that governs our lives functions properly, that it is corrected when necessary, and that it receives the feedback it needs to improve. In cases where this is not possible and society’s key principles, including social solidarity, become endangered, we must not shrink from prohibiting the use of algorithms. In a democracy, artificially dumbing down artificial intelligence is a legitimate response (see Chapters 14 and 15).

       What algorithms can do for us

      A world without algorithms is hardly imaginable today. They have crept almost imperceptibly into our lives. Intelligent machines are now used almost everywhere that information is available electronically. The following nine chapters show the extent to which they are deployed and the impact they have. This second part uses practical examples to show how algorithms can make life better and more just, for each of us and for society as a whole. Yet people and machines do not always complement each other in a meaningful way. Their interaction can also have negative consequences for individuals and society – be it unintentionally or with malice aforethought.

       An algorithm for algorithms

      It is precisely this tension that interests us. We want to examine those algorithmic systems that influence whether people can participate in society. For better or for worse. With consequences that concern us all, because they bring either social progress or serious disadvantages. Not all algorithms are truly relevant to society. Neither the spell checker in word processing software nor a car rental company’s computer-driven fleet management system will shake the foundations of communal life. They do not need public discourse – which, on the other hand, is indispensable if algorithms are to have a say in asylum procedures or prison sentences.

      To select the examples in the following chapters we used an “algorithm for algorithms” developed by researchers Kilian Vieth and Ben Wagner (see Chapter 14), which measures an algorithm’s relevance to society as a whole. Its main criteria: Are people being evaluated by the algorithmic system? How dependent are they on the result? How much political and economic power does

