The new rector of Prague’s technical university, Michal Pěchounek, recently said in an interview with Forbes: “I want to know that it’s safe when my child is on a social network run by an algorithm. That the child won’t spiral into bulimia. Or that a depressed teenager won’t be able to persuade a chatbot into explaining how to take his own life. But those guarantees simply don’t exist.”
He is right that we have arrived at an utterly absurd situation. A handful of corporations legally operate systems that unquestionably affect the human brain and very likely damage it.
It is astonishing that in a world where everything else must pass endless tests—where half of all products never even make it to market, where the slightest suspicion of harm leads to immediate bans and constant litigation—no one even demands the most basic safeguard: disclosure of the algorithms used by social media platforms.
If someone were adding a substance to food that had the same effect on the brain, the product would be banned immediately and the manufacturer would likely end up in prison.
Appeals to consumer choice make no sense here. Social media algorithms are deliberately designed to bypass conscious decision-making—the very faculty that makes choice possible—and to target parts of the brain where users cannot effectively defend themselves.
Nor does the argument about protecting trade secrets hold up. Platforms like Facebook or TikTok have no real competitors that could exploit such information.
The situation becomes even more absurd when one considers that the European Commission and several European governments exert enormous pressure on social networks to prevent the spread of oppositional political views. Damaging people’s brains, apparently, is not a problem.
The development of artificial intelligence cannot be stopped and likely cannot even be slowed. Whatever is prohibited in one country will simply be developed somewhere else.
But confiscating the profits of social media companies—or blocking them in certain countries—is entirely possible. And such blocking would not even have to be complete. It would be enough to cut off a large portion of users; the platform would quickly become unattractive to most of the rest.
In the end, it is simply a question of political will.
