[UPDATE] WhatsApp moderators can read your messages, if they’ve been reported

[ UPDATE 10/09/2021 16:00 ]: The ProPublica article that we cited has been updated to be more accurate. In the update, the publication said: “A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.”

We want to emphasise that WhatsApp does not break end-to-end encryption and only scans through messages that have been reported, but the company has still submitted metadata to law enforcement.

====

WhatsApp has been very proud of its privacy features, providing “end-to-end encryption” to keep users’ information safe. As explained on its security page, “End-to-end encryption ensures only you and the person you’re communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp.” According to a report by ProPublica, this is not exactly true.
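To illustrate what “end-to-end” means in practice, here is a minimal sketch using the PyNaCl library. This is an illustration of the general concept only, not WhatsApp’s actual Signal-protocol implementation: the message is encrypted on the sender’s device for the recipient’s keys, so any server relaying the ciphertext cannot read it.

```python
# Conceptual sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Not WhatsApp's actual protocol; key names are illustrative.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; only public keys are shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 8")

# A relay server only ever sees the ciphertext, which it cannot decrypt.
# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at 8"
```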

Facebook, the company that owns WhatsApp, is no stranger to privacy controversies. In July 2019, the Federal Trade Commission (FTC) fined Facebook USD 5 billion over privacy violations. Now, WhatsApp is the one in hot water, as ProPublica reports that moderators can see your messages once they have been reported.

Specifically, once content has been reported, WhatsApp moderators can see the last five messages in the thread, regardless of consent from the other party. You might ask, “What is the difference between that and submitting a screenshot of the chat to report it?” That comparison is essentially Facebook’s defence in this controversy.

Facebook claims that this does not break end-to-end encryption, as tapping the “report” button creates a new chat between the reporter and WhatsApp and essentially copies the reported content over to them.
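Conceptually, the flow Facebook describes can be thought of as the sketch below: a hypothetical helper that reads the last five messages from the reporter’s own (already decrypted) local store and forwards them to WhatsApp as a fresh end-to-end encrypted message. The function and field names here are assumptions for illustration, not WhatsApp’s actual code.

```python
# Hypothetical sketch of the "report" flow described above. The reporter's app
# already holds the thread in plaintext, so it can copy the last five messages
# into a new encrypted chat without breaking end-to-end encryption.
# Names (local_store, send_encrypted, MODERATION_ADDRESS) are illustrative only.
from typing import Callable, Dict, List

MODERATION_ADDRESS = "whatsapp-moderation"  # assumed destination, for illustration

def report_thread(local_store: Dict[str, List[str]], thread_id: str,
                  send_encrypted: Callable[..., None]) -> None:
    """Copy the last five plaintext messages of a thread into a new
    end-to-end encrypted chat addressed to the moderation queue."""
    last_five = local_store[thread_id][-5:]
    report_body = "\n".join(last_five)
    # A new chat is opened between the reporter and WhatsApp; the copied
    # content is encrypted for that chat like any other outgoing message.
    send_encrypted(to=MODERATION_ADDRESS, body=report_body)
```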

WhatsApp also uses AI to flag content, which is necessary at this point, as 400,000 reports were made to child safety authorities last year alone. It would be almost impossible to keep up with the amount of inappropriate content on the app without a computer doing it. The problem comes when moderators get innocuous pictures, such as a child taking a bath, flagged by mistake. WhatsApp told ProPublica that they receive an inordinate amount of these images and that “a lot of the time, the artificial intelligence is not that intelligent.” That said, WhatsApp states that the AI cannot scan through the contents of messages because of the encryption.

WhatsApp’s terms of service say that when content is reported, the company “receives the most recent messages” from the conversation as well as “information on your recent interactions with the reported user.” This wording is very general and does not specify that moderators can also see users’ phone numbers, profile photos, status messages, and IP addresses.

The AI also finds inappropriate content by scanning through unencrypted data, like “the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”

WhatsApp is also not afraid to share metadata with law enforcement. It recently submitted evidence that a government official was talking to a reporter at BuzzFeed, which ultimately resulted in a six-month prison sentence for the official.

All of this contradicts WhatsApp’s public image, as the company often positions itself as a protector of its users’ data. Earlier this year, the Indian government sought to impose rules that would require messaging apps to trace the origin of messages, and WhatsApp pushed back with this statement:

“Requiring messaging apps to ‘trace’ chats is the equivalent of asking us to keep a fingerprint of every single message sent on WhatsApp, which would break end-to-end encryption and fundamentally undermines people’s right to privacy.”

WhatsApp – Source: Reuters

So, what is the solution? I think WhatsApp should be more transparent about what moderators can see in various circumstances.

[ SOURCE, VIA ]
