Twitter will now take down pictures and videos that depict individuals without their consent, once notified by the depicted individual or their legal representative, the company announced in an update to its private information policy.
“The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities,” the company said in the update. Twitter already takes down personal information such as addresses and financial details upon request.
A worrying trend has emerged in India lately of members of minority communities being abused through the misuse of their images, as seen in the Sulli Deals controversy. Twitter’s policy takes a step forward in giving users control over how media depicting them is consumed online.
How will Twitter implement its policy around private media?
Where individuals have a reasonable expectation of privacy in an individual piece of media, we believe they should be able to determine whether or not it is shared. — Twitter Personal Information Policy
What is not a violation? In the policy document, Twitter clarified that the following instances are not in violation of its policy:
- the media is publicly available or is being covered by mainstream media;
- the media and the accompanying tweet text add value to the public discourse or are shared in public interest;
- the media contains eyewitness accounts or on-the-ground reports from developing events;
- the subject of the media is a public figure.
Who can report violations? The company requires a first-person report from individuals depicted or their legal guardians.
What happens in case of a violation? Twitter will require the user to remove the violating content and will lock their account until the media is removed.
What’s the downside? Concerns remain regarding whether implementing this policy will entail Twitter maintaining facial data of its users, and how it will distinguish between public and private individuals. MediaNama has reached out to Twitter about these concerns and will update the report once a response is received.
Do Facebook and Google have similar policies?
Facebook: Facebook’s image privacy protections are not as broadly applicable as Twitter’s. The company may remove media if the person depicted is:
- A minor under the age of 13, and the content was reported by the minor or a legal guardian.
- A minor between the ages of 13 and 18, and the content was reported by the minor.
- An adult, where the content was reported by the adult from outside the United States and applicable law may provide rights to removal.
- Any person who is incapacitated and unable to report the content on their own.
Google: Google removes private images from search results in specific instances, according to its help center. For Google to take them down, the images will need to contain:
- Sensitive financial, medical, or national ID information
- Non-consensual intimate personal images or “revenge porn”
- Involuntary fake pornography
Google also removes personal images if they’re hosted on a site with exploitative removal practices, and for legal reasons. The company also recently announced that it will remove photos of minors from search results upon request.
Both Facebook and Google have certain protections in place for personal media, but neither goes as far as Twitter in giving users the right to determine how media depicting them is shared.
Notice sent to Twitter over Sulli Deals
An app called ‘Sulli Deals’, hosted on GitHub, was recently found to be sharing pictures of several Muslim women without their consent. A legal notice was sent to Twitter regarding the app, for failing to stop the circulation of such images on its platform.
Among other things, the legal notice asked Twitter to:
- Ensure that content uploaded by a user, such as a photo, cannot be downloaded by another user without their consent, and introduce a photo-shield feature like Facebook’s so that no malicious content posted as a picture or meme goes unnoticed.
- Institute a dedicated committee to look into and take action against hate speech towards women, people of colour, LGBTQIA+ communities, and other historically underrepresented communities on a priority basis.
- Create a mechanism that lets the platform take action against hateful content and sexual harassment under Indian law, beyond suspending the miscreants’ accounts.
Also read:
- Notice Sent To Twitter Over Circulation Of ‘Sulli Deals’ Content; Seeks Rs 10 Lakhs Compensation
- US States Want To Find Out If Facebook Is Exploiting Children In The Interest Of Profit
- Tripura Police Tells Twitter To Remove Tweets And Block Handles Amid Communal Unrest
