And their vision of our future

The soft brutality of crappy algorithms, and how the model they build of us delivers blunt reflections of our prejudices.

Corporations don’t want our opinions with the nuances and compassion still attached. They capture a casually harsh picture of us, and that picture is reflected in the development of the only tools we have to communicate with each other, which in turn reinforces its impact on society.

  • Virtually all internet hubs traffic in this lowest-common-denominator model, embracing and broadcasting our worst and weakest traits as norms. How do we fight this?
  • Whatever systems we have for societal healing and repair (do we have any?) are drowned in a flood of unanticipated impacts.
  • How will these results affect the assumptions built into business and social planning?
  • How will those plans reinforce our tacit assumptions about each other?

Emil Protalinski, writing for VentureBeat: At the Movethedial Global Summit in Toronto yesterday, I listened intently to a talk titled “No polite fictions: What AI reveals about humanity.” Kathryn Hume, Borealis AI’s director of product, listed a bunch of AI and algorithmic failures — we’ve seen plenty of that. But it was how Hume described algorithms that really stood out to me. “Algorithms are like convex mirrors that refract human biases, but do it in a pretty blunt way,” Hume said. “They don’t permit polite fictions like those that we often sustain our society with.” I really like this analogy. It’s probably the best one I’ve heard so far, because it doesn’t end there. Later in her talk, Hume took it further, after discussing an algorithm biased against black people used to predict future criminals in the U.S.

“These systems don’t permit polite fictions,” Hume said. “They’re actually a mirror that can enable us to directly observe what might be wrong in society so that we can fix it. But we need to be careful, because if we don’t design these systems well, all that they’re going to do is encode what’s in the data and potentially amplify the prejudices that exist in society today.” If an algorithm is designed poorly or — as almost anyone in AI will tell you nowadays — if your data is inherently biased, the result will be too. Chances are you’ve heard this so often it’s been hammered into your brain.
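The “encode what’s in the data” failure mode is easy to demonstrate. Here’s a minimal sketch (entirely synthetic data and hypothetical names of my own, not anything from Hume’s talk) showing how a model trained on biased labels reproduces that bias even when two groups have identical underlying risk:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # two groups, roughly equal size
risk = rng.normal(0.0, 1.0, size=n)  # true underlying risk, identical distribution for both groups

# Biased historical labels: group 1 was flagged more often at the same level of risk.
labels = (risk + 1.0 * group + rng.normal(0.0, 1.0, size=n) > 1.0).astype(int)

# Train on the biased labels, with group membership visible to the model.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, labels)

# The model flags group 1 far more often despite identical true risk:
# the bias in the labels is now encoded in the model itself.
predicted = model.predict(X)
for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"group {g}: predicted flag rate = {rate:.2f}")
```

By construction, both groups carry the same risk, so the only thing the model can learn from the group feature is the bias baked into the labels. That is the convex mirror at work: the system bluntly reflects, and then amplifies, whatever prejudice produced its training data.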

 
