Twitter Launches “Bug Bounty Challenge” to Remove Bias and Discrimination in Their Algorithm

By Caroline Campbell on Aug 13, 2021
Encyclopedia of Organizational Knowledge, Administration, and Technology
Mehdi Khosrow-Pour, D.B.A.
©2021 | 2,734 pgs. | EISBN: 9781799834748
  • Contributed by 190+ Researchers From 55+ Countries
  • Contains 185+ Chapters
  • Covers AI, Automation, Big Data, & More
Recently, the popular social media platform Twitter announced its first “algorithmic bias bounty challenge,” inviting researchers and hackers to find and fix the bias in its image-cropping algorithm. According to an Insider article, the competition was established after a group of researchers found that the platform’s algorithm favored “white people over black people, and women over men.” The individual who solves this issue will be awarded US$3,500 and will be invited to present their work at the AI Village workshop at the annual DEF CON security conference, where Twitter is hosting the challenge.
This bounty challenge highlights the ongoing issue of algorithmic bias (also known as machine learning bias) across social media platforms and technology more broadly. It is the same issue that led to problems with facial recognition in Apple products, led Amazon to remove machine learning and AI from its hiring process after the system favored men over women, and led to contention over political ads on social media during the 2020 presidential election. This bias can occur in AI systems due to flawed data sampling, and it can be unintentionally introduced by the people who build these systems.
This bias can influence nearly every platform that utilizes AI, machine learning, and data. As a recent BBC Radio 4 episode, “Science in the Time of Cancel Culture,” discusses, it is also greatly affecting the academic and research community by creating a perfect storm on social media in which professors, researchers, and academicians are “cancelled,” ostracized by their peers, and fired from their positions. The host, Prof. Michael Muthukrishna of the London School of Economics, UK, interviews experts in this area who argue that algorithmic bias promotes false science and pushes a “collective mindset” to the forefront of social media, leading individuals to band together against scientists whose findings they may not agree with. This includes “cancelling” geneticists whose findings touch on the origins of gender, researchers studying global warming, and more.
Understanding that algorithmic bias can greatly impact our hyper-technological society and threaten the foundations of science and academia, Prof. Julie M. Smith, from the University of North Texas, USA, explains how to overcome algorithmic bias in her chapter, “Algorithms and Bias,” featured in the Encyclopedia of Organizational Knowledge, Administration, and Technology.


View a Preview of the Open Access Article Below
On March 23, 2016, Microsoft released a bot called Tay on Twitter that was capable of interacting with other Twitter users. Just two days later, a Microsoft Vice President, Peter Lee, had to publicly apologize because, as Lee (2016) explained in a blog post, “Tay tweeted wildly inappropriate and reprehensible words and images,” including support for genocide. Lee explained that, while this was not the first chatting bot Microsoft had released, they were unprepared for users who exploited Tay’s tendency to parrot extremely offensive messages. Lee noted that Tay, as an artificial intelligence, learned from “both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.”
Tay is but one example of the fact that by the second decade of the twenty-first century, algorithms were responsible for making an enormous number of decisions, from deciding who would be given a bank loan to determining what advertisements joggers hear on their music streaming service. It is fair to say that most people do not appreciate the extent to which the choices they make are influenced by algorithms. Microsoft’s experience with Tay serves as a cautionary tale for the ability that algorithms, especially those that deploy machine learning and/or artificial intelligence, have to engage in patently discriminatory behavior, even contrary to the intentions of their creators. Megan Garcia, a senior fellow focusing on cybersecurity at New America CA, wrote, “Computer-generated bias is almost everywhere we look” (Garcia, 2016, p. 112). This article will examine algorithmic bias, which is the potential for algorithms to engage in discrimination, and it will also suggest ways to avoid this problem.
While there is widespread agreement that algorithmic bias exists, defining it precisely is tricky, as there are different ways to measure bias. Using the example of an algorithm that assesses risk for criminal behavior, one might measure bias in several different ways (Huq, 2019). One could look at aggregate risk scores for various groups and see if they differ. Or, one could determine whether the same initial risk score resulted in the same final risk score for people with different demographic characteristics. One could also determine whether the rate of false positives and/or false negatives varied for each demographic group. Efforts to improve the fairness of an algorithm on one measure may actually lead to worse performance on another measure. For example, Speicher et al. (2018) borrowed the concept of inequality indices, which are used by economists, and applied them to biased algorithms. This provided a way to quantify bias, but they also found that efforts to minimize between-group bias may actually increase within-group bias.
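To make these competing measures concrete, the following is a minimal, hypothetical sketch (not taken from the chapter) of how two of the group-level measures mentioned above could be computed for a binary risk classifier in Python. The arrays y_true, y_pred, and group, and the toy values in them, are invented for illustration.

    import numpy as np

    # Toy, invented data: 1 = flagged as high risk; group labels are hypothetical.
    y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])             # actual outcomes
    y_pred = np.array([1, 1, 0, 1, 0, 1, 1, 0])             # algorithm's decisions
    group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    def rates_by_group(y_true, y_pred, group):
        """Positive-prediction rate and false-positive rate for each group."""
        results = {}
        for g in np.unique(group):
            mask = group == g
            positive_rate = y_pred[mask].mean()              # aggregate rate of "high risk" calls
            negatives = y_true[mask] == 0
            fpr = y_pred[mask][negatives].mean() if negatives.any() else float("nan")
            results[g] = {"positive_rate": positive_rate, "false_positive_rate": fpr}
        return results

    print(rates_by_group(y_true, y_pred, group))
    # Comparing positive_rate across groups checks whether aggregate risk calls differ;
    # comparing false_positive_rate checks whether error rates differ by group.

As the article notes, narrowing the gap on one of these measures (for example, equalizing positive rates across groups) can widen the gap on another, which is why no single definition of algorithmic fairness has been settled on.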
For purposes of this article, a biased algorithm will be defined as one that unfairly and/or inaccurately discriminates against a certain person or group of people, especially on the basis of protected categories such as race and/or gender. In some cases (as will be discussed below), the bias is not in the algorithm per se but rather in the data it uses.
Unfortunately, examples of algorithmic bias are not difficult to find. A study found that names associated with people of color were far more likely to return search results that were negative (in this case, related to arrest records) than neutral or positive. A black-identified name was 25% more likely to return an ad for an arrest record (Sweeney, 2013). In an experience that went viral in 2015, Jacky Alcine and his friend, both African American, were tagged as “gorillas” by Google Photos (Garcia, 2016). Google responded promptly by removing the auto-tags for terms that might be offensive, but they didn’t fix the underlying problem, which was the algorithm’s inability to properly identify people with darker skin (Monea, 2019).
In addition to racial disparities, some algorithms display gender bias. In a study of smartphone-based personal assistants (including Siri, S Voice, and Google Now), the assistant was able to recognize statements such as “my foot hurts” but was not able to respond appropriately to statements such as “I was raped” or “I was beaten up by my husband” (Garcia, 2016).
And sometimes the bias is intersectional: in a study of gender classification systems, there was an error rate for darker-skinned women of over one-third, while the error rate for lighter-skinned men was less than one percent (Buolamwini & Gebru, 2018). Similarly, a Google search for “unprofessional hairstyles for work” featured almost all women of color, while a search for “professional hairstyles for work” pictured white women (Noble, 2018).
Interested in Reading the Rest of this Article (Full Text)?
Click Here to Freely Access Through IGI Global’s InfoSci Platform

Complimentary Research Articles and Chapters on Algorithm Bias, Racial Discrimination, & Machine Learning
Legal Regulations, Implications, and Issues Surrounding Digital Data
Prof. Margaret Jackson (RMIT University, Australia) and Marita Shelly (RMIT University, Australia)
©2020 | 240 pgs. | EISBN: 9781799831327
  • Edited by Leading Researchers from RMIT University
  • Covers Data Protection, Free Speech, & Online Scams
"A Matter of Perspective: Discrimination, Bias, and Inequality in AI"
Prof. Katie Miller (Deakin University, Australia)
Interdisciplinary Approaches to Digital Transformation and Innovation
Prof. Rocci Luppicini (University of Ottawa, Canada)
©2020 | 368 pgs. | EISBN: 9781799818809
  • Features 20+ Chapters
  • 25+ International Contributors
  • Covers AI, Ethical Dilemmas & Cryptocurrency
Bias and Discrimination in Artificial Intelligence: Emergence and Impact in E-Business
Profs. Jan C. Weyerer (German University of Administrative Sciences, Speyer, Germany) et al.
Enriching Collaboration and Communication in Online Learning Communities
Profs. Carolyn N. Stevenson (Purdue University Global, USA) et al.
©2020 | 319 pgs. | EISBN: 9781522598169
  • 25+ International Contributors
  • Features 12+ Chapters
  • Covers Ethical Standards, Online Communication, & Social Media
Overcoming Implicit Bias in Collaborative Online Learning Communities
Profs. Ludmila T. Battista (Claremont Lincoln University, USA) et al.
Understanding the Role of Artificial Intelligence and Its Future Social Impact
Prof. Salim Sheikh (Saïd Business School, University of Oxford, UK)
©2021 | 284 pgs. | EISBN: 9781799846086
  • Authored by Leading Researcher with 20+ Years of Experience in AI
  • Ideal Resource for Researchers, Developers, & Strategists
  • Covers AI Disruption, Ethics, & Natural Language Processing
Ethics of AI
Prof. Salim Sheikh (Saïd Business School, University of Oxford, UK)
View All Chapters and Articles on This Topic
The “View All Chapters and Articles on This Topic” link navigates to IGI Global’s Demo Account, which provides a sample of the IGI Global content available through IGI Global’s e-Book Collection (6,600+ e-books) and e-Journal Collection (140+ e-journals) databases. If you are interested in having full access to this peer-reviewed research content, recommend these valuable research tools to your library.
Recommend These Valuable Research Tools to Your Library
AI and machine learning bias is just one element of society hindering the wide adoption of diversity, equity, and inclusion (DEI). Understanding that the DEI movement is at the forefront and has created a deep need for research in this area, IGI Global has created an all-encompassing DEI e-Book Collection.
For Journalists Interested in Additional Trending Research

Contact IGI Global’s Marketing Team at marketing@igi-global.com or 717-533-8845 ext. 100 to access additional peer-reviewed resources to integrate into your latest news stories.

About IGI Global

Founded in 1988, IGI Global, an international academic publisher, is committed to producing the highest quality research (as an active full member of the Committee on Publication Ethics “COPE”) and ensuring the timely dissemination of innovative research findings through an expeditious and technologically advanced publishing process. Through its commitment to supporting the research community ahead of profitability, and by taking a chance on virtually untapped topic coverage, IGI Global has been able to collaborate with 100,000+ researchers from some of the most prominent research institutions around the world to publish emerging, peer-reviewed research across 350+ topics in 11 subject areas, including business, computer science, education, engineering, social sciences, and more. To learn more about IGI Global, click here.

Newsroom Contact

Caroline Campbell
Assistant Director of Marketing and Sales
(717) 533-8845, ext. 144
ccampbell@igi-global.com
www.igi-global.com


