Twitter will pay hackers to find biases in its automated picture-cropping algorithm, following accusations that its crops were biased.

Twitter is running a competition in the hopes of finding biases in its picture-cropping algorithm, and the best teams will get cash awards (via Engadget). Twitter hopes that by giving teams access to its code and picture-cropping model, they will be able to identify ways in which the algorithm might be harmful (for example, cropping in a way that stereotypes or erases the image’s subject).

Those who compete must submit a summary of their results along with a dataset that can be run through the algorithm to demonstrate the problem. Twitter will then award points based on the types of harm discovered, the potential impact on people, and other factors.
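To illustrate the idea of a demonstration dataset, here is a minimal, hypothetical sketch: `crop_center` is a toy stand-in for a learned saliency cropper (not Twitter's actual model), and `crop_agreement` is an assumed, simplistic fairness signal comparing where the crop lands for paired image variants.

```python
def crop_center(saliency_map):
    """Return (row, col) of the most salient pixel -- a toy stand-in
    for a learned saliency-based cropping model."""
    best, best_pos = float("-inf"), (0, 0)
    for r, row in enumerate(saliency_map):
        for c, value in enumerate(row):
            if value > best:
                best, best_pos = value, (r, c)
    return best_pos

def crop_agreement(pairs):
    """Fraction of paired variants (a, b) whose crops land in the same
    place. A low score on carefully paired images (e.g. identical scenes
    differing only in the subject depicted) would be the kind of evidence
    a competition entry might package up."""
    same = sum(crop_center(a) == crop_center(b) for a, b in pairs)
    return same / len(pairs)

# Paired toy "saliency maps": the first pair crops identically,
# the second pair crops to different corners.
pairs = [
    ([[0, 1], [0, 0]], [[0, 1], [0, 0]]),
    ([[1, 0], [0, 0]], [[0, 0], [0, 1]]),
]
print(crop_agreement(pairs))  # 0.5
```

A real entry would of course run actual images through Twitter's released model rather than toy grids; the point is only that a dataset plus a measurable disparity is what gets scored.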

The winning team will get $3,500, with $1,000 prizes for the most creative and most generalizable results. On Twitter, that figure has sparked some debate, with some people arguing it should have an extra zero. For comparison, Twitter’s standard bug bounty programme would pay you $2,940 for a cross-site scripting bug that let you perform actions on someone else’s behalf (such as retweeting a tweet or picture), and $7,700 for an OAuth flaw that let you take over someone’s Twitter account.

Twitter had already conducted its own study of its image-cropping algorithm, publishing a paper in May that examined how the system was biased in the wake of claims that its preview crops were racist. Since then, Twitter has largely stopped cropping previews algorithmically, but the feature is still utilised on desktop, and a good cropping algorithm remains a useful tool for a firm like Twitter.

Opening up a competition allows Twitter to receive input from a much wider group of people. For example, during a Twitter team meeting about the competition, one team member noted that they were getting queries about caste-based biases in the algorithm, something software developers in California may not be aware of.
Twitter is also looking for more than just unintentional algorithmic bias; both deliberate and unintended harms carry point values on its scale. According to Twitter, unintentional harms are cropping behaviours that could affect people posting images in good faith, while intentional harms are cropping behaviours that could be exploited by someone posting maliciously crafted images.

The competition, according to Twitter’s announcement blog, is distinct from its bug bounty programme; if you submit a report about algorithmic bias to Twitter outside the competition, it will be closed and tagged as not applicable, the company warns. If you’re interested in participating, visit the competition’s HackerOne page to learn more about the rules, qualifications, and other details. Submissions are open until August 6th at 11:59PM PT, and the challenge winners will be revealed on August 9th at the DEF CON AI Village.