Wednesday, March 11, 2020

Researchers Have Accidentally Been Making Their Software Sexist

As our workplaces continue to increase their reliance on technology, artificial intelligence ensures that newer technologies keep evolving so they remain the best possible resource. But how do we make sure that technologies develop the right kind of perspective and avoid the unconscious biases humans have?

Computer science professors at the University of Virginia recently tested for unconscious bias within software they were building. They taught machines using basic photo collections and quickly discovered that the materials they were using were inadvertently teaching the machines sexist views of women.

The researchers found that major research-image collections, including one supported by Microsoft and Facebook, displayed a predictable gender bias. For example, these collections associated photos of coaching with men, while women were tied to images of shopping and washing.

Professor Vicente Ordóñez, who spearheaded the study, told Wired how the software magnified its bias in other functions. "It would see a picture of a kitchen and more often than not associate it with women, not men," he said. The software would recognize a photo of a person in a kitchen and assume that the person, simply because they were in a kitchen, was a woman.

Ordóñez realized that the software didn't develop its sexist views on its own; the biases it displayed were unconsciously injected by the researchers who built it and by the data it learned from. Mark Yatskar, a researcher who also worked on the project, stressed that technological unconscious biases must be actively avoided. "This could work to not only reinforce existing social biases," he said, "but actually make them worse."

To his point, the machine-learning software didn't just mirror existing biases; it amplified them. If the software analyzed a photo set that generally associated women with cooking, the software then created an even stronger association between the person and their environment. As major companies rely on this software to accurately train consumer-facing tech on how to view people, the biases within the data are deeply concerning.

"A system that takes action that can be clearly attributed to gender bias cannot effectively function with people," Yatskar said.

Fortunately, these biases can be addressed. Researchers can prevent (and de-program) unconscious biases, but in order to do so they must actively seek out specific, shared prejudices within the software. This neutralizes the bias, but as larger tech companies like Microsoft have shown, it is a Herculean task.

"I and Microsoft as a whole celebrate efforts identifying and addressing bias and gaps in data sets and systems created out of them," Eric Horvitz, director of Microsoft Research, told Wired. In response, Horvitz's team has developed an ethics code for all of its consumer-facing technology. If a technology doesn't meet those standards, it does not move further in development.

If this strategy sounds vaguely similar to your company's diversity training program, that's because it is. Diversity training requires employees to undergo a lot of self-analysis to determine what their biases are. When aiming to eliminate unconscious bias in machine-learning systems, researchers must do the same thing.
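To make that last idea concrete, here is a minimal sketch of what seeking out a specific bias might look like in practice: compare how often an activity co-occurs with women in the training labels against how often it co-occurs with women in the model's predictions, and check whether the gap grows. The label format, the counts, and the gender_skew helper below are invented for illustration; they are not taken from the study.

from collections import Counter

def gender_skew(labels):
    # For each activity, return the fraction of its examples labeled "woman".
    # `labels` is an iterable of (activity, gender) pairs.
    totals, women = Counter(), Counter()
    for activity, gender in labels:
        totals[activity] += 1
        if gender == "woman":
            women[activity] += 1
    return {activity: women[activity] / totals[activity] for activity in totals}

# Hypothetical counts: the training set links "cooking" with women 66% of the time,
# while the trained model's predictions push that association up to 84%.
train_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
predicted_labels = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_skew = gender_skew(train_labels)["cooking"]      # 0.66
pred_skew = gender_skew(predicted_labels)["cooking"]   # 0.84
print(f"Bias amplification for 'cooking': {pred_skew - train_skew:+.2f}")  # +0.18

In the study reported by Wired, that gap widened rather than shrank, which is what the researchers mean when they say the software amplified the bias it was fed.

Sheryl Sandberg, Facebook COO and author of Lean In, acknowledges that technology used as the foundation for consumer products needs to be held to a higher standard.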
"At Facebook, I think about the role marketing plays in all this, because marketing is both reflective of our stereotypes and reinforces stereotypes," she told The New York Times. "Do we partner into sexism or do we partner against sexism?"

Sandberg's decision to partner against sexism is one of the reasons her nonprofit, Lean In, partnered with Getty Images to create the Lean In Collection, a series of stock photos that feature diverse women in a multitude of different careers. "You can't be what you can't see," Sandberg said in reference to the photo collection.

Sandberg's efforts are welcome steps for the immediate future, but ensuring that the data used to train new technology stays free of bias is still a pressing issue. If replicated in larger products, these biases could create erroneous digital ideas about women and undo much of the progress we have made.
