Think of the Children (And Ignore This Massive Power Grab) - EU Chat Control
Hello, everyone. It's been a while since I last wrote for the blog, but I have to come back and talk about what is genuinely a scary future for our digital freedom.
Now chat control is something that's happening in the EU. And you might be like, "But we're not from the EU. Why do you care?" Well, the EU is one of the largest zones economically. When Brussels adopts major frameworks such as GDPR or its digital competition rules, other democracies often follow, whether out of alignment or pressure.
First, some background: Chat Control is proposed EU legislation that would require tech companies like Google and Meta to scan every user’s messages, even encrypted ones, for illegal content such as CSAM (Child Sexual Abuse Material).
Stopping CSAM is good.
But giving governments the legal power to inspect every citizen’s private conversations is not good.
Why Chat Control Is Dangerous
The EU keeps pushing updated versions (Chat Control 2.0, etc.). They change the wording slightly each time, but the core problem stays:
A loophole in Article 4 forces email and messaging services, including encrypted ones like WhatsApp and Signal, to "mitigate risk."
Translation:
They can force client-side scanning on your phone.
They want AI to scan:
- photos
- videos
- links
- text conversations
But AI cannot understand context. It can’t tell the difference between:
- a flirt
- a joke
- a sarcastic message
- medical photos
- or actual criminal behavior
And we already saw how Google wrongly flagged a father's account after he took medical photos of his son at a doctor's request. His account was banned, the authorities were alerted, and his reputation was nearly ruined; even after he was cleared, Google refused to reinstate the account.
"AI" is often just pattern-matching against a database of known CSAM hashes, combined with newer classifiers looking for "grooming" text. It's the text-classification part that is incredibly unreliable and prone to false positives, especially across different cultures and languages. This isn't a system that will someday become flawless; its lack of human understanding is an inherent flaw.
To "age verify," they propose requiring every user to upload:
- a government ID
- or a facial scan
Just to use a messaging app.
We’ve already seen huge leaks of uploaded IDs, like the Discord breach in which large numbers of users' government IDs, including Australian ones, were exposed.
The pretext of "protecting children" has been a Trojan horse for mass surveillance. Yes, some criminals might be caught. But destroying encryption in messaging apps for millions of innocent citizens to catch a few bad actors is not democratic. It’s not proportional. And it creates a surveillance state that will absolutely be abused.
An alternative technology these platforms can use is Microsoft’s PhotoDNA: a tool that matches files against databases of hashes of already-confirmed CSAM imagery. While PhotoDNA is more reactive than proactive, this approach is the better practice: it respects the integrity of private communication while effectively fighting confirmed criminal material.
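To make the hash-matching model concrete, here is a toy sketch. Real systems like PhotoDNA use *perceptual* hashes that survive resizing and re-encoding; SHA-256 below is only a stand-in, and the "database" is a made-up example. The point is the matching model itself: the scanner compares fingerprints against confirmed material and never has to interpret the content of innocent files.

```python
import hashlib

# Hypothetical database of fingerprints of already-confirmed material.
# (The entry below is just the SHA-256 of b"test", used as a demo value.)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def matches_known_material(data: bytes) -> bool:
    """True only if the file matches a confirmed entry in the database."""
    return fingerprint(data) in KNOWN_BAD_HASHES

print(matches_known_material(b"test"))          # matches the demo entry
print(matches_known_material(b"holiday photo"))  # innocent file: no match
```

Note the contrast with an AI classifier: a hash lookup either matches confirmed material or it doesn't, so it never "guesses" that a family photo is criminal.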
The Threat to Privacy and Encryption
Encryption is important. If you take away encryption, or allow governments to have backdoors, the whole system becomes useless. Encryption ONLY works when it is strictly between the sender and the receiver. That's it. If you undermine it, you don't have security. Simple as that. So when they bring up Chat Control, it’s an all-or-nothing situation. Privacy is binary: either you have encryption and your personal privacy, or you let mass surveillance run over you.
Creating a 'backdoor' for the 'good guys' is a fantasy. A backdoor is a vulnerability, and once it exists, it can and will be discovered and exploited by hackers, hostile foreign governments, and abusive individuals within the system itself.
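The "a backdoor is just another copy of the key" point can be shown in a few lines. This is a toy one-time pad, not any real messaging protocol (Signal and WhatsApp use far more sophisticated ciphers); it exists purely to illustrate the principle.

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each message byte with a key byte.
    # Secure ONLY when the key is truly random, as long as the
    # message, and never reused -- a stand-in for real ciphers.
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known ONLY to sender and receiver

ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message  # the intended receiver can read it

# A "backdoor" is nothing more than another copy of that key. Whoever
# obtains the copy -- a hacker, a rogue insider, a hostile government --
# reads everything, exactly as the legitimate receiver does:
escrowed_key = key
assert decrypt(ciphertext, escrowed_key) == message
```

There is no mathematical way to make the escrowed copy work for "good guys" only; the cipher cannot tell who is holding the key.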
I believe in a safer and more informed Internet, so I fully support efforts to combat the spread of CSAM. However, online safety must also include the protection of users’ privacy and fundamental rights. These goals are not mutually exclusive. With thoughtful design, I believe it is possible (and necessary) to achieve both.
You might ask, "What is the actual fix here?" The actual fix is learning about and defending your right to communicate privately on the Internet. Because if you don't push back now, then ten years from now you'll be living in a surveillance state and using the Internet in constant fear.
