How algorithms lie

It's no longer spies or pamphlets that we should fear during election periods. Likes, clicks, and 30-second videos are enough. With a careless scroll, an election can be tainted, opinions shaped, and society divided. Political manipulation through algorithms is neither fiction nor futurology: it is the digital present of democracy.
The algorithms that run social media and content platforms were designed to capture our attention, prolong the time we spend on them, and anticipate our desires. The problem? What holds our attention isn't always what informs us best. On the contrary: the more emotional, polarizing, and provocative the content, the greater its reach. The algorithm learns from us, but it also shapes us. And when this cycle is exploited by external interests, it becomes a weapon of interference.
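To see why engagement optimization favors provocation over accuracy, consider a deliberately simplified sketch of how such ranking can work. Everything here is a hypothetical illustration: the post attributes, weights, and numbers are invented for the example and do not describe any real platform's system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float    # predicted probability the user clicks
    p_share: float    # predicted probability the user shares
    p_comment: float  # predicted probability the user comments

def engagement_score(post: Post) -> float:
    # Illustrative weights (assumptions): shares and comments count more
    # because they push content into other people's feeds.
    return 1.0 * post.p_click + 3.0 * post.p_share + 2.0 * post.p_comment

feed = [
    Post("Calm, factual explainer on electoral law", 0.05, 0.01, 0.01),
    Post("Outrage-bait claim that the vote is rigged", 0.20, 0.15, 0.12),
]

# Ranking purely by predicted engagement puts the provocative post first:
# nothing in the score measures accuracy or informational value.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

In this toy model, the only way the factual post could win is by provoking more engagement. Truth simply does not appear in the objective.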
This is what we saw with Russian interference in the US presidential election, the Brexit referendum, and several European elections. Fake profiles, bots, professionalized disinformation campaigns, and digital strategies designed to divide, generate distrust, and undermine public discourse. What's more serious: most of these campaigns would not be effective without the silent complicity of algorithms that amplify what shocks, what inflames, what lies.
The truth is that algorithms are not neutral. They follow an economic logic, with defined priorities and objectives. And when they lack mechanisms for transparency, accountability, or external auditing, they become black boxes with real power over opinions, behaviors, and political decisions. Whoever controls the algorithm partially controls the public sphere.
China is a paradigmatic case: platforms like TikTok obey a logic of censorship and content promotion aligned with the regime's interests. The danger isn't just for Chinese users: it's for everyone who uses these platforms as a primary source of information. The same applies to videos denying the existence of the war in Ukraine, historical revisionism promoted by fake accounts, or hate speech propagated by radical segments of these networks.
Portugal is not immune. Digital literacy remains low, dependence on social media is high, and platform scrutiny is virtually nonexistent. We've already seen disinformation circulating about vaccines, immigration, war, and national politics. External interference, through informational destabilization, poses a real risk to Portuguese society as well.
Europe is reacting. With the Digital Services Act, a regime of greater transparency and accountability for platforms is beginning to take hold. The AI Act could complement this response by imposing limits on generative artificial intelligence and requiring verification systems.
If algorithms shape what we see, only education can shape how we think. The response cannot be merely legislative—it must also be educational. Without digital literacy, we will always be easy targets on an invisible chessboard.
This is also the new form of hybrid warfare: constant and invisible. It's not just about protecting data. It's about protecting democracy. In a world where algorithms lie, defending the truth is a political choice. And a collective duty.