
Artificial Intelligence in the World of Work: The Use of Artificial Intelligence Requires a Great Deal of Human Expertise


Ms Zweig, do the political decision-makers you meet in Germany and Europe understand the changes that artificial intelligence (AI) will bring?

Zweig: I think it differs a great deal. The members of the Bundestag’s Study Commission on Artificial Intelligence are, of course, exactly the parliamentarians who have already engaged with this issue. Otherwise, I see the full spectrum: a district government has shown interest in these issues, and the governments of some of the German federal states (Länder) are sitting up and taking notice. Together with Gerald Swarat from Fraunhofer, we have just set up an AI initiative for municipalities which essentially aims to begin by educating them – we’re concerned that otherwise AI systems could be purchased too quickly and without an understanding of how the technology functions.

Are there typical misconceptions about AI that you encounter again and again?

Zweig: There is this vague idea that AI is intelligent. Very often, people don’t understand the basic mechanism – namely, that the methods which are so often discussed today are simply statistical processes which look for patterns in data. And many people don’t realise how many control options exist in this context. They have the impression that the machine calculates an optimal and objective solution. But some issues are still too complex for that, especially when it comes to people.

In what sense?

Zweig: In the case of complex issues, we have no algorithms which could actually find an optimal solution – only what are known as “heuristics”. These heuristics try to find patterns in the data which are as meaningful as possible – but they can’t guarantee that they have found the best patterns. This means that, depending on the dataset and the question asked, errors are possible. Most people don’t realise that, and so in some areas too much is expected of AI, while in other areas confidence in it is justified.
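A toy illustration of this gap between a heuristic and an optimum, outside any AI context: the greedy strategy below splits numbers into two groups of similar sums and returns a reasonable answer – but, as with any heuristic, with no guarantee that it is the best one. The example is purely illustrative.

```python
# A heuristic can return a good answer with no guarantee that it is optimal.
# Greedy partitioning: assign each number to the currently lighter group.
def greedy_partition(numbers):
    a, b = [], []
    for n in sorted(numbers, reverse=True):
        (a if sum(a) <= sum(b) else b).append(n)
    return a, b

a, b = greedy_partition([8, 7, 6, 5, 4])
print(a, b, abs(sum(a) - sum(b)))
# -> [8, 5, 4] [7, 6] 4, yet the optimal split [8, 7] vs [6, 5, 4]
#    has difference 0: the heuristic found a pattern, not the best one.
```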

Does this excessive confidence in the capabilities of AI result in policy-makers and businesses being overly concerned about AI systems – and keen to regulate them too strictly?

Zweig: It can go either way. If the capabilities of artificial intelligence are blown out of proportion, this can also give rise to a feeling that AI offers opportunities for businesses which absolutely have to be unleashed. And on the other hand, there is the desire for strict regulation – and we really need to be careful in that respect.

Why?

Zweig: AI is not a singular technology. It is a set of methods which are used to try to extract patterns from data. My field of research concerns those AI systems which then make decisions using the identified patterns. And these decisions are as diverse as the decision-makers they are intended to replace or support. It has to make a difference, when it comes to regulation, whether someone is recommending a book to me in a public library, whether a doctor is offering me a diagnosis, or whether a judge is deciding on the length of my prison sentence. It’s the same for AI systems.

“AI is not a singular technology. It is a set of methods which are used to try to extract patterns from data.”

Is there a rule of thumb regarding which AI systems should actually be regulated?

Zweig: Really only the systems which are used in areas regulated by law. Broadly speaking, these are AI systems which take decisions about people or people’s belongings, or which grant access to social or natural resources.

What resources, specifically?

Zweig: Access to the labour market, access to the housing market, access to oil, energy, education – any resource which is not infinitely available. But when it comes to the question of how I can produce the best paper clips: if I develop an AI system which removes improperly folded paper clips from the production line, surely that’s a case where no regulation is needed.

What criteria should be used to assess whether or not regulation should be introduced and how strict it should be?

Zweig: We are researching this issue at the moment – it’s not easy to develop simple rules for this because, as I already mentioned, there is a wide variety of application contexts. When it comes to AI systems which take decisions or support decision-making processes, the main issues are how high the potential for harm is when an AI system is used and how dependent people are on the decision. Take an evaluation system for job applicants and staff, for example: it makes a difference whether I, as an applicant, write to 200 companies which all have their own system – because even if 10 per cent of the systems reject me, I can still hope that the other 90 per cent use different criteria.

However, if the same system is used to evaluate internal applicants, the degree of dependence is much higher because I can’t simply change companies in order to be evaluated differently. So as you can see, the same system integrated into different social processes requires different levels of oversight.

What is the situation like in the field of labour and social affairs? Is there interest in the changes that AI will bring?

Zweig: Works councils have been involved in examining AI systems for a very long time already, more than five years, I’m sure. We do a lot with works councils, and we are now almost on the point of creating a workshop system through which we can provide regular training. I have also already given talks at employment agencies, and we’ve also done some work in the field of consumer protection. I think in every field there are people who realise: we should actually take a look at this issue.

“The same system integrated into different social processes requires different levels of oversight.”

Are there positive examples, in your view, of how AI is already being used today by public administrations which should be looked at more closely?

Zweig: On the whole, it is a difficult area, but it can’t be ruled out that such systems could be of assistance to agencies. It is important for the use of these systems to be observed and assessed by experts to determine how accurate and effective they are. In addition, the process in which the system is to be used must also be carefully prepared.

With this in mind, there is a positive example of how a pilot project can be set up when a public authority introduces AI: the algorithm used by Austria’s public employment service (AMS). Actually, it’s only a heuristic, but let’s stick to the more common term “algorithm”. The system assigns unemployed people to one of three groups.

The first group consists of people who will definitely get back into work. The third group consists of people who have been out of work for a very long time already and might not return to work. And there is a second group between the two for everyone who doesn’t fall into either of these groups. The intention is that this second group will receive greater support in the form of continuing education and training. Obviously, this is a very sensitive task and so it must be properly evaluated.

How does it function in concrete technical terms? How does the AI sort the claimants?

Zweig: The heuristic used is what is known as a logistic regression, which is a very, very simple form of machine learning. Unlike other machine learning methods, the result is still very easy to follow: a human being can understand how the various factors affect the outcome.
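To make that interpretability concrete, here is a minimal sketch of such a scorer in Python. The feature names, weights, and group thresholds are illustrative assumptions, not the actual AMS model:

```python
import math

# Hand-written stand-ins for coefficients a real model would learn from
# historical labour-market data; a negative weight lowers the predicted
# probability of re-employment. All values are illustrative.
COEFFICIENTS = {
    "years_unemployed": -0.8,
    "prior_job_changes": 0.3,
    "completed_training": 0.5,
}
INTERCEPT = 1.0

def reemployment_probability(features):
    """Logistic regression: a sigmoid applied to a weighted sum of inputs."""
    z = INTERCEPT + sum(COEFFICIENTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def assign_group(p):
    """Map the probability to one of three AMS-style groups."""
    if p >= 0.66:       # very likely to get back into work
        return 1
    if p <= 0.25:       # unlikely to return to work
        return 3
    return 2            # everyone in between: targeted for extra support

applicant = {"years_unemployed": 2, "prior_job_changes": 1, "completed_training": 0}
p = reemployment_probability(applicant)
print(f"p = {p:.2f}, group = {assign_group(p)}")   # p = 0.43, group = 2
```

Because the score is just a weighted sum passed through a sigmoid, anyone can read off how each factor pushes the probability up or down – which is what made the learned penalties visible in the Austrian case.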

And what was learned from this process?

Zweig: It turned out that there is a kind of penalty if you are a woman, over 50, or a caregiver. This resulted in a huge outcry in the media and claims that the software is discriminatory. But that’s not quite right – because the heuristic which was used learned these patterns from the labour market.

The software exposes discrimination, so to speak, but does not discriminate itself?

Zweig: Yes, it exposes discrimination. Does it discriminate itself? Well, first of all, the software has absolutely no agency of its own, it doesn’t “do” anything in the sense of taking autonomous action. But in certain circumstances, depending on how the software is used, the identified discrimination can be perpetuated. According to the head of Austria’s public employment service, the expected effect is that people who face discrimination in the labour market are more likely to be sorted into the second category by the algorithm – and as a result they will receive greater support. That would almost be a form of positive discrimination, a kind of compensatory approach.

What can be learned from the Austrian example?

Zweig: I believe there is only one way to find out whether such systems are helpful or not: they must receive expert support from the outset and the system as a whole must be evaluated to determine whether performance really improves or not. But that is often impossible simply because we don’t know how good the human decision-making was beforehand. There is often a feeling – in the field of HR, for example – that the decisions taken by people are not good enough. And so the decision is made to do something about it.

What’s the problem with that?

Zweig: It results in action for action’s sake. A system is purchased which is supposedly good and it is often pre-trained on external data. I have a good example of this from the medical field: many German hospitals ran pilot projects trialling an AI system to support diagnosis in cancer cases – and it did quite a good job at that, in and of itself. However, the system also made strange suggestions. One reason for that might be the fact that the system was trained in the United States, where doctors have a financial stake in the medication they prescribe. Of course, that fed into what the system learned, and so the system chose specific drugs. What I’m saying is that these systems can’t be trained just anywhere and then simply purchased.

Is it good that you can’t simply buy whatever pre-trained AI system is available?

Zweig: Yes, I personally find that reassuring. I’m often asked whether we in Europe have already been completely left behind, because the United States and China are supposedly so far ahead of us. But all of these examples show, again and again, that if anyone wants an AI system for Europe, it has to be trained on European data. Europe is an important market – but only when taken together! And that means we are in a powerful position in this context: if we decide that our data has to be handled differently, then there are few alternatives for anyone who wants to develop systems for Europe concerning Europeans’ behaviour. The most important change I’m calling for is that our digital behavioural data should no longer be allowed to be collected centrally in order to learn from it. Decentralised machine learning processes should have to be used instead. So far, there is still a lack of infrastructure in this field and additional research is needed.
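One family of such decentralised processes is federated learning, in which the raw behavioural data never leaves its source and only model parameters are exchanged. A minimal sketch with toy data – not a production protocol – looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=10):
    """A few gradient steps on one site's private data (linear model)."""
    w = weights.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Three sites, each holding behavioural data that never leaves its premises.
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(5):
    # Each site trains locally; only the resulting weights are sent back.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # The coordinator averages parameters - it never sees the raw data.
    global_w = np.mean(local_ws, axis=0)

print("aggregated model weights:", global_w)
```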

If the power of the market is to work, however, AI systems must be subject to oversight – but there are no central agencies for this yet. In your view, who should be responsible for the oversight of AI systems?

Zweig: I believe that we already have arbitration organisations for most social processes: the works councils where workers are concerned, the consumer protection authorities where consumers are concerned, and the supervisory authorities for private broadcasters where the private media are concerned. But of course the AI literacy of these bodies and agencies would need to be developed.

“If anyone wants an AI system for Europe, it has to be trained on European data. Europe is an important market – but only when taken together!”

From the perspective of employees, many of the issues in the field of labour and social affairs – from the hiring process to bonuses, assessments, and even automated processes to discard applications, like those already tested by a large American e-commerce company – fall into what you have described as a sensitive area. Is it realistic to develop AI literacy on a decentralised basis?

Zweig: It’s completely realistic. Works councils have been thinking about these issues for years already. And in fact we run quite a lot of workshops and, in all honesty, even in 45 minutes you can achieve a certain level of literacy on the subject of AI systems. That is why I’m relatively optimistic that we can fairly easily and quickly ensure a widespread basic understanding of how machine learning functions, in particular, and what it can and can’t do.

And then the works council will look at whether an AI system is discriminatory? And whether the data has been properly prepared?

Zweig: No, the works council itself can’t do that. Experts are of course needed for that. But they will be available when the market exists and there are services available.

What would these bodies and institutions need to be able to ensure actual oversight of artificial intelligence systems?

Zweig: On the one hand, they need the specialists I mentioned. Above all, though, they need – depending on how great the system’s potential for harm is – access to data and interfaces which enable them to understand exactly what is happening and whether, for example, discrimination is taking place. That is why we have proposed regulation which establishes various transparency and explainability obligations based on the potential for harm and the degree of dependence on a decision. (Figure 1)
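As a purely hypothetical sketch of how such graduated obligations could be expressed, oversight could be derived from the two dimensions as below; the rating scale and the tiers are invented for illustration and are not taken from the actual proposal in Figure 1:

```python
def oversight_tier(harm, dependence):
    """harm and dependence each rated 0 (negligible) to 3 (severe);
    obligations grow with the combined risk. Illustrative tiers only."""
    score = harm + dependence
    if score == 0:
        return "no specific obligations (the paper-clip sorter)"
    if score <= 2:
        return "transparency: disclose that and how an AI system is used"
    if score <= 4:
        return "explainability: affected people can access the reasons"
    return "full audit: data and interface access for external experts"

print(oversight_tier(harm=0, dependence=0))   # paper-clip production line
print(oversight_tier(harm=3, dependence=3))   # e.g. sentencing decisions
```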


What could that look like in practice?

Zweig: My colleague Michael Wagner-Pinter in Austria, who developed the algorithm used by the Austrian public employment service, offers his software together with a set of social responsibility rules, for example – rules on how the system is to be used in practice.

What do these rules require?

Zweig: For example: a decision on what category a person ends up in must always be discussed with the jobseeker. The individual is allowed to contest the decision. An individual can see and change his or her basic data at any time. This means that if the machine has taken a decision on the basis of inaccurate data, the decision can be overridden. If it is overridden, it is necessary to document why it was overridden. And the system is updated each year, using only data from the previous four years. Jobseekers thus have a right to be forgotten; poor performance need not follow them into old age. We now call for additional technical access options which enable lawyers to identify systematic differences in treatment – in cooperation with specialists, of course.
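Two of these rules translate directly into data-handling logic. The sketch below uses assumed record fields and is not the AMS implementation:

```python
from datetime import date

def records_for_retraining(records, today=None):
    """Annual update: keep only records from roughly the last four years,
    so that old data - and old poor performance - is forgotten."""
    today = today or date.today()
    return [r for r in records if (today - r["created"]).days <= 4 * 365]

def override_decision(decision, new_group, reason):
    """A caseworker may override the machine's grouping, but the override
    must be documented with a reason."""
    if not reason:
        raise ValueError("an override must be documented with a reason")
    decision.update(group=new_group, override_reason=reason)
    return decision
```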

“Jobseekers thus have a right to be forgotten; poor performance need not follow them into old age.”

What are the arguments against a dedicated technical inspection body to oversee AI systems?

Zweig: The point is that it is not the system in and of itself which needs to be reviewed. The system is just one part of the review. And after all, we don’t have a single authority which simultaneously determines whether doctors have made mistakes or whether lawyers are performing their work correctly – instead, there are separate institutions people can contact if they have the feeling that wrong decisions are being made systematically in either of these professions. In addition, we need an approach which always looks at the overall process, as was shown by the example of the algorithm used by the Austrian public employment service. In my opinion, we need an approach which, rather than signing off on the software, monitors the quality of the overall process and certifies that it meets certain quality standards. Another advantage is that it would remove the need to certify different versions of the software. Instead, the quality assurance procedure of the overall process would have to be certified – and so long as the evaluation always takes place on the basis of certain criteria and in view of the potential for harm, the company will then be able to operate. In addition, however, we need an independent institution when it comes to AI in the public sector.

“But whether we ultimately let machines take decisions about people in these sensitive areas – that is a question which requires broad discussion!”

In your view, could the field of labour and social affairs in the public administration be a testing ground for seizing the opportunities of AI?

Zweig: Labour is a tricky field. It’s one field where I’m not sure whether today’s AI systems are complex enough to take into account the context dependence that we’d ideally like to have. It is often claimed today that there is no alternative to using AI systems. But of course alternatives exist: employing more and better job counsellors, for example. So yes: on the one hand, it would be an interesting field, because we could learn a great deal about how to provide better support for human decision-making. That is because the computer forces us to define things more clearly: what actually constitutes success? Getting more people into work? How do we subsequently want to measure success? I believe that this process, in itself, could have a very positive impact. But whether we ultimately let machines take decisions about people in these sensitive areas – that is a question which requires broad discussion!
