
Blog do Pyr

A skeptical (and ethical) view of Artificial Intelligence. Whoa!

Nobody doubts that AI is cool, or that it will help us with plenty of things in our lives. But it comes bundled with some little problems. Here, following last week's AI Now event, a Fast Company article sounds an emphatic alarm about the ethical and legal issues surrounding this technology platform.


October 22, 2018 - 8:00 a.m.

Our dear, beloved marketing will be one of the many activities heavily impacted by the invasion of Artificial Intelligence into our lives. Truth be told, it is already being impacted.

As you have surely read in any number of places out there (and here, without a doubt, dozens of times), the warning that comes with this technological novelty is that its highly advanced perfection, surpassing human limits, calls into question how far we should go in advancing research and expanding the use of AI.

This is not a trivial topic, nor one of secondary importance. It is, in fact, "THE" big topic when it comes to Artificial Intelligence.

At AI Now, a recent event held in the U.S., AI ethics and regulation were debated, among other topics.

Below, in an article by Katharine Schwab for Fast Company, a summary of what matters. Basically: AI is not neutral; this is not just about ethics, it is about human rights; scientists are indeed responsible; and, finally, which side of this game are you (yes, you, dear reader) on?

A Skeptic’s Guide to Thinking About AI

From marketing hype to fuzzy ethics


By Katharine Schwab

This week, at the research institute AI Now's annual symposium, experts debated some of the most critical issues in society through the lens of AI. The event brought together law and politics professors, lawyers, advocates, and writers to discuss how we as a community can ensure that the technology doesn't destabilize justice and equity.

For the audience, the discussions offered a compelling introduction to the false claims and ethical dilemmas that surround AI right now. It was a valuable primer for anyone, from designers who are starting to work with machine learning to users who simply have questions about the way their data is being used in society.

Here are four insights about how all of us–from developers to designers to users alike–can see more clearly through the hype. They cast a skeptical eye on AI's true abilities, its efficiency-oriented value systems, and the way technology companies are approaching ethics.

AI Is Not Neutral

Virginia Eubanks, associate professor of political science at SUNY Albany, studies how algorithms impact the way welfare is distributed in the U.S. for those living below the poverty line. In her book Automating Inequality, Eubanks explores how algorithms are brought in to decide who is worthy of receiving benefits and who is not.

While they provide a veneer of neutrality and efficiency, these tools are built on what she calls the “deep social programming of the United States”–discrimination against poor people, racial discrimination, gendered discrimination.

“Particularly in my work in public services, these tools get integrated under the wire. We see them as being administrative changes, not as consequential political decisions,” she says. “We have to reject that they’re just creating efficiencies . . . It buys into this idea that there’s not enough for everyone. But we live in a world of abundance, and there is enough for everyone.”

In other words, AI might sound “efficient” on the surface–after all, machines are supposed to be impartial–but in practice, it’s anything but. Chances are, it’s simply faster at making decisions that are rife with the same systemic injustices. Beware of any claim that AI is neutral.

‘AI’ Usually Relies on a Lot of Low-Paid Human Labor

If someone claims their product, service, or app is using AI, don’t necessarily believe it. AI is often used as a marketing tool today–and obscures the humans who are really doing the work.

“So much of what passes for automation isn’t really automation,” says writer and documentarian Astra Taylor. She describes a moment when she was waiting to pick up her lunch at a cafe, and another customer walked in, awestruck, wondering aloud how the app knew that his order was ready 20 minutes early. The woman behind the counter just looked at him and said, “I just sent you a message.”

“He was so convinced that it was a robot,” Taylor says. “He couldn’t see the human labor right in front of his eyes.”

She calls this process fauxtomation: “Fauxtomation renders invisible human labor to make computers seem smarter than they are.”

Don’t Just Talk About Ethics–Think About Human Rights

“Ethics” has become the catchall term for thinking about how algorithms and AI are going to impact society. But for Philip Alston, a law professor at NYU who is currently serving as the UN Human Rights Council’s Special Rapporteur on extreme poverty and human rights, the term is too fuzzy.

“In the AI area we’re well accustomed to talking about inequality, the impact of automation, the gig economy,” Alston says. “But I don’t think the human rights dimension comes in very often. One of the problems is that there’s neglect on both sides. The AI people are not focused on human rights. There’s a tendency to talk about ethics which is undefined and unaccountable, conveniently. And on human rights side, it’s out of our range.”

In a 2017 report, Alston documented his travels across the U.S. studying extreme poverty. He questions whether policy makers are giving enough thought to how the use of machine learning technology is impacting the most vulnerable people in the country–and if it is violating human rights. “It is extremely important for an audience interested in AI to recognize that when we take a social welfare system and . . . put on top of it ways to make it more efficient, what we’re doing is doubling down on injustices,” he says.

Despite rhetoric that claims AI will solve a host of human ills, we have to be careful it doesn’t enable or exacerbate them first.

If You’re Designing with AI, Ask Yourself These Questions

Eubanks says she often is asked for a “five-point plan to build better tech.”

While she doesn’t think that tech is the answer to the problem of policy (instead, “we need to move toward political systems that act like universal floors not moral thermometers”), there are a few questions she always asks designers to think about as they’re building technology.

1. Does this tool increase the self-determination and dignity of the people it targets?

2. If it was aimed at someone other than poor people, would it be acceptable?

While these questions are tailored toward Eubanks’s focus on welfare distribution algorithms, the first, in particular, is a question that every designer should be asking of their work. When you’re trying to help people using technology, you need to ensure, first and foremost, that your tool is going to affirm the self-determination and dignity of your users.


Katharine Schwab is an associate editor based in New York who covers technology, design, and culture. Email her at kschwab@fastcompany.com and sign up for her newsletter here.
