Twitter will pay hackers to uncover biases in its automated picture cropping, after the algorithm itself was accused of exactly that.

Twitter is running a competition in the hopes of finding biases in its picture-cropping algorithm, with cash awards for the best teams (via Engadget). By giving teams access to its code and its picture-cropping model, Twitter hopes they will be able to identify ways in which the algorithm could be harmful (for example, cropping in a way that stereotypes or erases the image’s subject).
Those who compete must submit a summary of their results as well as a dataset that can be run through the algorithm to demonstrate the problem. Twitter will then award points based on the type of harm discovered, its potential impact on people, and other factors.
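
To make that submission format concrete, here is a minimal, hypothetical sketch of the kind of harness an entrant might build: run an annotated dataset through the cropping model and check whether the crop’s focal point drifts further from the subject for some groups than for others. The crop_model callable and the dataset annotations are invented stand-ins; the real interface is whatever Twitter’s released model and HackerOne rules define.

```python
from statistics import mean

def probe_crop_bias(crop_model, dataset):
    """
    crop_model: callable(image) -> (x, y) focal point the algorithm crops around
    dataset: iterable of (image, subject_center, group_label) tuples,
             annotated by the entrant
    Returns the mean focal-point error per group; a consistent gap
    between groups is the kind of evidence the competition asks for.
    """
    errors = {}
    for image, (sx, sy), group in dataset:
        x, y = crop_model(image)
        # How far the crop's focal point lands from the true subject.
        err = ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5
        errors.setdefault(group, []).append(err)
    return {group: mean(errs) for group, errs in errors.items()}
```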
The winning team will get $3,500, with $1,000 awards for the most innovative and most generalisable findings. That figure has sparked some debate on Twitter, with some people arguing it should have an extra zero. For comparison, Twitter’s standard bug bounty programme would pay you $2,940 for a cross-site scripting bug that let you perform actions as someone else (such as retweeting a tweet or picture), and $7,700 for an OAuth flaw that let you take over someone’s Twitter account.
Twitter had already conducted its own research into its image-cropping algorithm, publishing a paper in May that examined how the system was biased, in the wake of claims that its preview crops were racist. Since then, Twitter has largely abandoned algorithmically cropping previews, though the system is still used on desktop, and a good cropping algorithm remains a useful tool for a company like Twitter.
Opening up the competition lets Twitter receive input from a much wider group of people. For example, during a Twitter meeting about the competition, a team member said they had been getting queries about caste-based biases in the algorithm, something software developers in California might not be aware of.
Twitter is also searching for more than simply unintentional algorithmic bias: both unintentional and intentional harms carry point values on its scale. Unintentional harms are cropping behaviours that could hurt people posting ordinary, well-intentioned images, while intentional harms are cropping behaviours that could be exploited by someone posting maliciously crafted images.
The competition, according to Twitter’s announcement blog, is distinct from its bug bounty programme; if you submit a report about algorithmic bias to Twitter outside of the competition, the company warns, it will be closed and tagged as not applicable. If you’re interested in participating, visit the competition’s HackerOne page to learn more about the rules, qualifications, and other details. Submissions are open until August 6th at 11:59PM PT, and the winners will be announced on August 9th at the DEF CON AI Village.

Malware hiding in AI neural networks

A trio of Cornell University researchers has found that malware can be hidden inside AI neural networks. Zhi Wang, Chaoge Liu, and Xiang Cui have published a paper on the arXiv preprint server describing their experiments with embedding code into neural networks.

As computer technology grows more complex, so do criminals’ attempts to break into devices, whether to steal data or to encrypt it and demand payment from victims for its recovery. In their new study, the researchers describe a new technique for infecting certain kinds of computer systems that run artificial-intelligence applications.

AI systems process data in a manner loosely modelled on the human brain. The research team found, however, that such networks are vulnerable to infiltration by foreign code.

Neural networks, by their very nature, can be infiltrated by foreign actors: all an attacker has to do is mimic the network’s structure, much as memories are added to the human brain. The researchers demonstrated this by embedding malware into the neural network powering the AI system AlexNet, even though the payload was sizeable, taking up 36.9 MiB. To inject the code, they picked the layer they judged best suited to hiding it. They also embedded it into a model that had already been trained, though they caution that attackers might instead target an untrained network, since embedding before training has less impact on the network as a whole.
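
The paper’s exact encoding is more involved, but the core trick can be sketched briefly: overwrite the low-order bytes of a layer’s float32 weights with payload bytes, leaving each value close enough to its original that the model still performs well. The sketch below is a simplified illustration of that idea, not the authors’ implementation; it assumes little-endian float storage and a budget of three payload bytes per weight.

```python
import numpy as np

BYTES_PER_WEIGHT = 3  # keep each float32's top byte (sign and most of the exponent)

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the low-order bytes of a layer's weights."""
    flat = np.ascontiguousarray(weights, dtype=np.float32).copy().ravel()
    raw = flat.view(np.uint8).reshape(-1, 4)   # assumes little-endian float32
    capacity = raw.shape[0] * BYTES_PER_WEIGHT
    if len(payload) > capacity:
        raise ValueError(f"payload ({len(payload)} B) exceeds capacity ({capacity} B)")
    data = np.frombuffer(payload, dtype=np.uint8)
    rows, rem = divmod(len(data), BYTES_PER_WEIGHT)
    # Each weight absorbs up to three payload bytes; the perturbation is
    # small enough that a large network's accuracy barely moves.
    raw[:rows, :BYTES_PER_WEIGHT] = data[:rows * BYTES_PER_WEIGHT].reshape(rows, BYTES_PER_WEIGHT)
    if rem:
        raw[rows, :rem] = data[rows * BYTES_PER_WEIGHT:]
    return flat.reshape(weights.shape)
```

At three bytes per four-byte weight, a network with tens of millions of parameters, as AlexNet has, offers far more than enough capacity for a 36.9 MiB payload.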

Not only did ordinary antivirus software fail to detect the malware, but the AI system’s performance remained nearly unchanged after infection, according to the researchers. Carried out surreptitiously, the infection could therefore go unnoticed.

The researchers point out that merely inserting malware into a neural network is not, by itself, harmful: whoever snuck the code into the system would still need a way to extract and execute it. They also note that, now that it is known hackers can embed code in neural networks this way, antivirus software can be updated to detect it.
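
Continuing the sketch above: before the hidden code can run, the attacker’s loader must read it back out, which means knowing the layer, the byte layout, and the payload length in advance. That fixed layout is also what an updated scanner could look for.

```python
def extract(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` payload bytes from weights written by embed()."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8).reshape(-1, 4)
    # Read the low-order bytes back in the same row-by-row order embed() wrote them.
    return raw[:, :BYTES_PER_WEIGHT].reshape(-1)[:length].tobytes()
```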

The Pegasus spyware hack reveals that Apple needs to substantially improve iPhone security

Apple has always prided itself on the security it offers its customers. It often pokes fun at Android, speaks at length about privacy during keynotes, and has released a few features that have irritated the other Big Tech companies. The new Pegasus spyware disclosure, however, has left Apple red-faced, indicating that the Cupertino-based tech company needs to beef up its security. The spyware targeted journalists and human rights campaigners from all around the world, including India.

The Amnesty International Security Lab found evidence of Pegasus infections, or attempted infections, on 37 of the 67 phones it examined. Of those, 34 were iPhones: 23 showed signs of a successful Pegasus infection and the other 11 showed signs of an attempted infection.

Only three of the 15 Android phones, on the other hand, showed signs of a hacking attempt. There are two things to consider, however, before concluding that Android phones are safer than iPhones. One, forensic traces of Pegasus survive better on iPhones than anywhere else: Android’s logs aren’t comprehensive enough to retain all the data required for decisive findings. Two, people hold the iPhone to greater security expectations in the first place.

Apple has said for years that the iPhone is more secure than Android, and that assertion may hold whether Pegasus exists or not. The Pegasus story demonstrates, however, that the iPhone is not as secure, let alone unhackable, as Apple claims. This is reflected in Amnesty International’s statement.

The issue is especially concerning because it affected even the most recent iPhone 12 devices running the most recent version of Apple’s operating system. That’s usually the best and last level of protection a smartphone maker can provide.

“Apple strongly opposes cyberattacks against journalists, human rights advocates, and anyone working to make the world a better place,” Ivan Krstic, head of Apple Security Engineering and Architecture, said in a statement to India Today Tech. “Apple has led the industry in security innovation for over a decade, and as a consequence, security experts believe that the iPhone is the safest and most secure consumer mobile device available. Such attacks are very complex, cost millions of dollars to create, have a short shelf life, and are used to target specific persons. While this means they pose no harm to the vast majority of our users, we continue to work diligently to secure all of our customers, and we’re always implementing additional safeguards for their devices and data.”

How did the iPhone’s security get hacked?

According to the study, the iPhones were hacked using Pegasus ‘zero-click’ attacks. It suggests that thousands of iPhones may have been targeted, though it cannot confirm the exact number of phones that were compromised. Zero-click attacks, as the name implies, require no action from the phone’s user, giving an already powerful piece of spyware even more reach. These attacks target software that accepts and processes incoming data without first verifying that it comes from a trustworthy source.
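
As a toy illustration of that pattern (the message format and parser here are invented, not Pegasus’s actual exploit chain), consider a messaging client that parses every attachment the moment it arrives. The hazard is structural: complex parsing code runs on attacker-controlled bytes with zero user involvement.

```python
def parse_attachment(blob: bytes) -> str:
    # Stand-in for a real media codec or message parser; in real attacks
    # this is where the exploitable bug (e.g. memory corruption) lives.
    return blob.decode("utf-8", errors="replace")

def render_preview(text: str) -> None:
    print("preview:", text[:40])

def on_message_received(message: dict) -> None:
    # The zero-click hazard: every attachment is parsed automatically on
    # arrival, before the user taps anything at all.
    for blob in message.get("attachments", []):
        render_preview(parse_attachment(blob))

# Mitigations move this parsing into a tightly sandboxed helper process
# and reject malformed input early (roughly the approach of Apple's
# BlastDoor service for iMessage).
```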

In November 2019, Google Project Zero security researcher Ian Beer uncovered a similar vulnerability, revealing that attackers within radio range could take total control of an iPhone without requiring any user input. Apple released a software update to remedy the problem, but acknowledged that the flaw was serious enough to fully compromise affected devices.

Because zero-click attacks involve no user interaction, avoiding them is extremely difficult. Even if you are alert to phishing attempts and follow the best online practices, you can still be targeted by this spyware.

What does Pegasus have access to?

While there is some data on who was targeted and how, no investigation has been able to uncover exactly what data was gathered. The possibilities, however, are nearly limitless. Pegasus can gather emails, call logs, social network posts, user passwords, contact lists, pictures, videos, sound recordings, and browsing history, among other things.

It can also turn on the camera or microphone to capture new photos and recordings, listen to voice mails, and gather location records to figure out where a user has been, all without the user touching their phone or clicking on a suspicious link.