Why can artificial intelligence be racist and sexist?




8 November 2018 06:35 | Updated 8 November 2018 18:36

Peruvian researcher Omar Flórez is preparing for a future he believes is "very, very close", in which streets are full of surveillance cameras capable of recognizing our faces and gathering information about us as we move through the city.

He explains that they can do this without our consent, because these are public spaces and most of us do not cover our faces when we leave home.

Our face becomes our password: when we walk into a store, its system recognizes us and checks information such as whether we are a new or a returning customer, or where we were before crossing its door. The kind of treatment the company gives us then depends on all that collected information.

Flórez wants to prevent aspects such as gender or skin color from being part of the criteria these companies weigh when deciding whether we deserve a discount or some other special attention, something that can happen without the companies themselves even noticing.

Artificial intelligence is not perfect: even if it is not programmed to do so, software can learn to discriminate on its own.

Flórez is developing an algorithm that enables facial recognition while hiding sensitive information such as race and gender. | OMAR FLOREZ

This engineer, born in Arequipa 34 years ago, holds a doctorate in computer science from Utah State University (USA) and is currently a researcher at Capital One.

He is one of the few Latin Americans studying the ethical aspects of machine learning, a process he describes as "the ability to predict the future with computers".

It is a technology based on algorithms that is used, for example, to develop driverless cars or to detect diseases such as skin cancer.

Flórez is working on an algorithm that lets computers recognize a face without being able to infer the person's gender or ethnic origin. His dream is that, when that future arrives, companies will incorporate his algorithm into their systems so that racist or sexist decisions cannot be made.

We always say that we cannot be objective precisely because we are human. We have tried to delegate to machines so that they can be, but it seems they cannot either...

Because they are programmed by human beings. In fact, we have recently realized that the algorithm itself is an opinion. I can solve a problem with algorithms in different ways, and each of them, in some way, embeds my own view of the world. Even choosing the right way to evaluate an algorithm is already a perception, an opinion about the algorithm itself.

Let's say I want to predict the likelihood that someone will commit a crime. To do that, I collect photos of people who have committed crimes, along with where they live, their race, their age and so on. Then I use that information to maximize the accuracy of the algorithm and predict who might commit a crime later, or even where the next crime might happen. That prediction can lead the police to focus more on areas where more people of African descent live, simply because more crimes are recorded there, or to start stopping Latinos on the assumption that they might not have documents.

So for someone who has legal residence, or is of African descent and lives in that area but commits no crime, it becomes doubly hard to escape the algorithm. Since you belong to that family or distribution, statistically it is much harder to get out of it. In a way, the reality that surrounds you works against you. In essence, what we have been doing until now is encoding the stereotypes we hold as human beings.
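The feedback loop he describes can be illustrated with a toy simulation, entirely invented for this article: two neighborhoods with identical true crime rates, an initial difference in recorded arrests, and patrols allocated each week according to that record. The historical record alone ends up driving where future arrests are made.

```python
# Toy simulation of a predictive-policing feedback loop.
# All names and numbers are invented for illustration.
import random

random.seed(0)
true_crime_rate = {"A": 0.10, "B": 0.10}   # both neighborhoods are identical
arrests = {"A": 5, "B": 1}                 # but the historical records differ

for week in range(20):
    # Send most patrols wherever the record shows more arrests so far.
    total = arrests["A"] + arrests["B"]
    patrols = {n: round(10 * arrests[n] / total) for n in arrests}
    # Patrols only observe crime where they are actually present.
    for n in arrests:
        for _ in range(patrols[n]):
            if random.random() < true_crime_rate[n]:
                arrests[n] += 1

# Neighborhood A will typically end up with far more recorded arrests,
# even though both neighborhoods have the same underlying crime rate.
print(arrests)
```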

The Peruvian researcher's dream is that, in the future, companies using facial recognition software will adopt his algorithm. | OMAR FLOREZ

That subjective element lies in the criteria you choose when programming an algorithm.

Exactly. There is a chain of processes involved in building a machine learning algorithm: collecting the data, choosing which features matter, selecting the algorithm itself, then testing it to see how it performs and reduce errors, and finally releasing it for the public to use. We have realized that biases creep into every one of those stages.
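To make those stages concrete, here is a minimal sketch of that chain in Python, assuming a hypothetical tabular dataset and scikit-learn; the file name, the feature list and the label column are all placeholders, and each numbered step is a point where a human choice enters.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect data (hypothetical file; what gets collected, and about whom,
#    is already a human decision).
df = pd.read_csv("applicants.csv")

# 2. Choose which features are considered important (chosen by the developer).
features = ["age", "income", "zip_code"]
X, y = df[features], df["label"]            # "label" is a placeholder column

# 3. Select the algorithm itself.
model = LogisticRegression(max_iter=1000)

# 4. Test it, measure errors, iterate.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 5. Release it to the public (deployment is outside this sketch).
```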

An investigation by ProPublica revealed in 2016 that the judicial systems of several US states used software to determine which defendants were more likely to reoffend. ProPublica found that the algorithms favored white defendants and penalized Black ones, even though the forms used to collect the data contained no questions about skin color... So the machine deduces it and uses it as a decision criterion anyway, even though it was not designed to do so, right?

What happens is that there are data points that already encode race, and you do not even realize it. In the United States, for example, we have the zip code. There are areas where only, or mostly, African Americans live. In Southern California, for example, mostly Latino people live. So if you use the zip code as a feature in a machine learning algorithm, you are also encoding the ethnic group without realizing it.
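One rough way to see the proxy effect he describes is to check whether a classifier can recover ethnicity from the zip code alone. The sketch below assumes a hypothetical census-style file; if the accuracy is well above chance, any model that uses the zip code is implicitly using ethnicity too.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("census_sample.csv")              # hypothetical file
X = pd.get_dummies(df[["zip_code"]].astype(str))   # zip code is the only feature
y = df["ethnicity"]                                # the attribute we never "gave" the model

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier().fit(X_tr, y_tr)

# Accuracy well above chance means zip code acts as a proxy for ethnicity,
# so any model trained on zip code inherits that signal.
print("ethnicity recoverable from zip code:", accuracy_score(y_te, clf.predict(X_te)))
```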

Is it possible to avoid this?

Clearly, at the end of the day, responsibility lies with the person who programs the algorithm, and with how ethical that person can be. So if I know that my algorithm will perform with 10% more error when I stop using something that could be sensitive in characterizing an individual, I simply remove it and accept responsibility for the possible economic consequences that may have for my company. So there is definitely an ethical barrier in deciding what goes into the algorithm and what does not, and very often that decision falls to the programmer.

Algorithms are supposed to simply process large amounts of data and save us time. Is there no way to make them infallible?

Infallible? No. They are always an approximation of reality, so a certain amount of error is even desirable. However, there is currently very interesting research in which you explicitly penalize sensitive information. In principle, a person chooses which pieces of information may be sensitive, and the algorithm stops using them, or uses them in a way that shows no correlation. But honestly, for the computer everything is just numbers: either it is a 0 or a 1 or something in between; it carries no meaning. Although there is a lot of interesting work with which we can try to avoid biases, there is an ethical part that always belongs to the human being.
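The kind of research he mentions can be sketched, under simplifying assumptions, as adding a penalty that discourages correlation between a model's scores and a chosen sensitive attribute. The example below trains a plain logistic regression on synthetic data with such a penalty; it illustrates the general idea, not any particular published method or Flórez's own algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
s = (rng.random(n) < 0.5).astype(float)      # sensitive attribute (e.g. gender)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(d)
lam = 2.0                                    # hypothetical fairness-penalty weight
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    p = sigmoid(X @ w)
    # Standard logistic-loss gradient...
    grad = X.T @ (p - y) / n
    # ...plus the gradient of a penalty lam * Cov(s, X @ w)^2, which pushes
    # the model toward scores that are uncorrelated with the sensitive attribute.
    cov = np.mean((s - s.mean()) * (X @ w))
    grad += lam * 2.0 * cov * (X.T @ (s - s.mean())) / n
    w -= lr * grad

p = sigmoid(X @ w)
print("mean score per group:", p[s == 1].mean(), p[s == 0].mean())
```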

Is there any area that, as an expert, you think should not be left in the hands of artificial intelligence?

I think that at this point we should be comfortable using the computer to assist rather than to automate. The computer should tell you: these are the cases you should look at first in the judicial system. And it should also tell you why. That is called interpretability, or transparency, and machines should be able to show the reasoning that led them to a given decision.
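A toy illustration of that kind of transparency: with a linear model, each case's score can be broken down into per-feature contributions and printed next to the ranking. The feature names, weights and cases below are invented.

```python
import numpy as np

feature_names = ["prior_offenses", "age", "months_since_last_case"]
coef = np.array([0.9, -0.02, -0.05])         # hypothetical trained weights
cases = np.array([
    [3, 22, 4],
    [0, 45, 60],
    [1, 30, 12],
])

scores = cases @ coef
order = np.argsort(-scores)                  # highest-priority cases first

for i in order:
    # With a linear model, each feature's contribution is weight * value,
    # which gives a simple "why" for every ranked case.
    contributions = coef * cases[i]
    explanation = ", ".join(
        f"{name}: {c:+.2f}" for name, c in zip(feature_names, contributions)
    )
    print(f"case {i}: score={scores[i]:.2f}  ({explanation})")
```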

Computers make decisions based on patterns, but aren't patterns essentially stereotypes? And aren't they useful for a system to make predictions?

If you only want to minimize error, then yes, it pays to use those numerical biases, because they give you a more accurate algorithm. However, the developer has to realize that there is an ethical component to that choice. Right now there are regulations that prevent you from using certain features, for example when analyzing credit information or even when using security camera footage, but they are still in their infancy. Perhaps what we need is to accept that reality is unfair and full of biases.

Interestingly, though, there are algorithms that allow us to try to minimize that bias. That is, I can use skin tone, but without giving it more weight, or giving it the same weight for every ethnic group. So, to answer your question: yes, you might think that in practice using those patterns gives more accurate results, and often it does. But the ethical question comes up again: am I willing to sacrifice a certain degree of accuracy so that no user gets a bad experience and no prejudice is applied?

The technology behind driverless cars uses machine learning. | GETTY IMAGES

Experts at Amazon realized that a software tool they had designed for recruiting discriminated against résumés containing the word "woman" and favored terms more commonly used by men. That is quite surprising, because to avoid that bias you would have to guess which terms men use more often than women in a résumé.

Which is hard even for a human being.

But at the same time, as a society we try not to draw gender differences; we say that words or clothes are not masculine or feminine and that anyone can use them. Machine learning goes in the opposite direction, because it has to recognize the differences between men and women and scrutinize them.

Algorithms only pick up what is actually happening, and the reality is that yes, men use certain words that women might not. And the reality is that people sometimes respond better to those words, because the people doing the evaluating are also men. So saying otherwise would contradict the data. That problem can be avoided by collecting the same number of résumés from men and from women, so that the algorithm gives the same weight to both, or to the words that both sexes use. If you simply take the 100 résumés you happen to have on your desk, maybe only two of them are from women and 98 from men. Then you introduce biases, because you are modeling only what is happening in the human universe of that job.
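One simple way to implement the balancing he describes, assuming a hypothetical résumé dataset, is to give each résumé a weight inversely proportional to how common its group is, so that the under-represented group carries the same total weight during training.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical file with columns: text, gender, hired
df = pd.read_csv("resumes.csv")
counts = df["gender"].value_counts()

# Weight each résumé inversely to its group's frequency, so that (say)
# 2 résumés from women count as much in total as 98 from men.
sample_weight = df["gender"].map(lambda g: len(df) / (len(counts) * counts[g]))

X = TfidfVectorizer().fit_transform(df["text"])
model = LogisticRegression(max_iter=1000)
model.fit(X, df["hired"], sample_weight=sample_weight.to_numpy())
```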

So it is not a science for people who worry about being politically correct, because you need to dig into the differences...

You have touched on a big point, which is empathy. One of the stereotypes of the engineer is someone very analytical and maybe not very social. It turns out that now we need engineers to have qualities we used to consider irrelevant, or that we were fine with them not having, like empathy, ethics... We need to develop these things, because we make so many decisions during the process of implementing an algorithm, and very often there is an ethical component. If you are not even aware of it, you will not notice it.

Flórez says that in the near future our face will be our password. | GETTY IMAGES

Can you tell the difference between an algorithm designed by one person and one designed by 20?

In theory, the more people work on an algorithm, the less biased it should be. The problem is that very often that group is made up of very similar people: maybe they are all men, or all Asian. It might take a woman to notice things that such a group does not usually perceive. That is why diversity matters so much right now.

Can we say that an algorithm reflects the prejudices of whoever writes it?

Yes.

And that there are biased algorithms precisely because there is little diversity among the people who create them?

It is not only that, but it is an important part. I would say it is also partly because, without our realizing it, algorithms reflect reality. Over the last 50 years we have been trying to create algorithms that resemble reality. Now we have realized that, very often, reality also reflects people's stereotypes.

Do you think there is enough awareness in the industry that algorithms can be biased, or is it something not considered that important?

At the practical level, it is not given much importance. At the research level, many companies are beginning to investigate the issue seriously, creating FAT groups: fairness, accountability and transparency.
