[UPDATE] WhatsApp moderators can read your messages if they’ve been reported

[ UPDATE 10/09/2021 16:00 ]: The ProPublica article we cited has been updated to be more accurate. In the update, the publication said “A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.”

We want to emphasise that WhatsApp does not break end-to-end encryption and only scans reported messages, but the company has still submitted metadata to law enforcement.

====

WhatsApp has been very proud of its privacy features, providing “end-to-end encryption” to ensure information safety. As explained in their security page, “End-to-end encryption ensures only you and the person you’re communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp.” According to a report by ProPublica, this is not exactly true.

Facebook, the company that owns WhatsApp, is no stranger to privacy controversies. In July 2019, the Federal Trade Commission (FTC) fined Facebook USD 5 billion, citing privacy violations. Now, WhatsApp is the one in hot water as ProPublica reports that moderators can see your messages if they have been reported.

Specifically, once content has been reported, WhatsApp moderators can see the last five messages in the thread, regardless of consent from the other party. You might ask, “What is the difference between that and submitting a screenshot of the chat as part of a report?” That comparison is essentially Facebook’s defense in this controversy.

Facebook claims that this does not break end-to-end encryption: tapping the “report” button creates a new chat between the reporter and WhatsApp, which essentially copies the reported content over to moderators.

WhatsApp also uses AI to flag content, which is arguably necessary at this point, as 400,000 reports were made to child safety authorities just last year. It is almost impossible to keep up with the amount of inappropriate content on the app without automation. The problem arises when moderators receive innocent pictures, such as a child taking a bath, by mistake. WhatsApp told ProPublica that it receives an inordinate amount of these images and that “a lot of the time, the artificial intelligence is not that intelligent.” The company does state, however, that the AI cannot scan through all messages due to the encryption.

In its terms of service, WhatsApp says that when content is reported, it “receives the most recent messages” from the conversation and “information on your recent interactions with the reported user.” This is very general and does not specify that moderators can also see users’ phone numbers, profile photos, status messages, and IP addresses.

The AI also finds inappropriate content by scanning through unencrypted data, like “the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”

WhatsApp has also shown it is not afraid of sharing metadata with law enforcement: it recently submitted evidence that a government official was communicating with a BuzzFeed reporter, ultimately resulting in a six-month prison sentence.

All of this contradicts WhatsApp’s public image, as the company often presents itself as a champion of its users’ privacy. Earlier this year, the Indian government wanted to pass a law that would allow authorities to view suspects’ messages, and WhatsApp pushed back with this statement:

“Requiring messaging apps to ‘trace’ chats is the equivalent of asking us to keep a fingerprint of every single message sent on WhatsApp, which would break end-to-end encryption and fundamentally undermines people’s right to privacy.”

WhatsApp – Source: Reuters

So, what is the solution? I think WhatsApp should be more transparent about what moderators can see in various circumstances.

[ SOURCE, VIA ]
