[UPDATE] WhatsApp moderators can read your messages, if they’ve been reported

[ UPDATE 10/09/2021 16:00 ]: The ProPublica article we cited has been updated to be more accurate. ProPublica’s correction reads: “A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.”

We want to emphasise that WhatsApp does not break end-to-end encryption and only reviews reported messages, but it has still submitted metadata to law enforcement.

====

WhatsApp has long been proud of its privacy features, touting “end-to-end encryption” as a guarantee that your conversations stay private. As its security page explains, “End-to-end encryption ensures only you and the person you’re communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp.” According to a report by ProPublica, this is not exactly true.

Facebook, the company that owns WhatsApp, is no stranger to privacy controversies. In July 2019, the Federal Trade Commission (FTC) fined Facebook US$5 billion over privacy violations. Now WhatsApp is the one in hot water, as ProPublica reports that moderators can see your messages once they have been reported.

Specifically, once content has been reported, WhatsApp moderators can see the last five messages in the thread, regardless of whether the other party consents. You might ask, “What is the difference between that and submitting a screenshot of the chat to report it?” That comparison is essentially Facebook’s defense in this controversy.

Facebook claims that this does not break end-to-end encryption: tapping the “report” button creates a new chat between the reporter and WhatsApp, and essentially copy-pastes the reported content into it.
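To see why that argument hangs together, here is a minimal, purely illustrative sketch of how a client-side report could work. This is not WhatsApp’s actual code, and every function and field name below is a hypothetical stand-in; the point is only that the reporter’s own device already holds the decrypted thread, so it can forward copies of the last five messages over the usual encrypted channel without anything being decrypted in transit.

```python
# Hypothetical sketch of a client-side "report" flow.
# None of these names are real WhatsApp APIs; they only illustrate the idea
# that the reporter's device already has the plaintext and simply re-sends
# copies of it to a moderation recipient.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str  # already decrypted locally on the reporter's device

def report_thread(local_history: list[Message], reported_user: str, send_encrypted):
    """Bundle the last five messages of a thread and send them to moderators.

    `send_encrypted` stands in for the client's normal end-to-end encrypted
    send path; the report is just another encrypted message whose recipient
    happens to be the moderation service.
    """
    last_five = local_history[-5:]
    report = {
        "reported_user": reported_user,
        "messages": [{"sender": m.sender, "text": m.text} for m in last_five],
    }
    # A new "chat" to the moderation endpoint: the content is re-encrypted for
    # that recipient, so the original conversation's encryption is never broken.
    send_encrypted(recipient="moderation-service", payload=report)

# Example usage with a stand-in sender that just prints the payload:
if __name__ == "__main__":
    history = [Message("alice", f"message {i}") for i in range(8)]
    report_thread(history, reported_user="alice",
                  send_encrypted=lambda recipient, payload: print(recipient, payload))
```

In other words, under this framing the report is simply another message the reporter chooses to send, which is why Facebook argues no encryption is broken.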

WhatsApp also uses AI to flag content, which is necessary at this point: 400,000 reports were made to child safety authorities last year alone, and it would be almost impossible to keep up with the amount of inappropriate content on the app without a computer doing the first pass. The problem comes when moderators mistakenly receive innocent pictures, such as a child taking a bath. WhatsApp told ProPublica that it receives an inordinate amount of these images and that “a lot of the time, the artificial intelligence is not that intelligent.” That said, the company states that the AI cannot scan through all the messages because of the encryption.

In its terms of service, WhatsApp says that when content is reported, it “receives the most recent messages” from the conversation and “information on your recent interactions with the reported user.” This is very general and does not specify that moderators can also see users’ phone numbers, profile photos, status messages, and IP addresses.

The AI also finds inappropriate content by scanning unencrypted data, such as “the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”

WhatsApp is not afraid of sharing metadata with law enforcement either: it recently submitted proof that a government official was talking to a reporter at BuzzFeed, ultimately resulting in a six-month prison sentence.

All of this contradicts WhatsApp’s public image as a staunch defender of its users’ data. Earlier this year, the Indian government introduced rules that would require messaging apps to trace the originators of messages, and WhatsApp pushed back with this statement:

“Requiring messaging apps to ‘trace’ chats is the equivalent of asking us to keep a fingerprint of every single message sent on WhatsApp, which would break end-to-end encryption and fundamentally undermines people’s right to privacy.”

– WhatsApp, via Reuters

So, what is the solution? I think WhatsApp should be more transparent about what moderators can see in various circumstances.

[ SOURCE, VIA ]
