AI is often seen as a threat to democracies and a blessing to dictators. In 2025 it is likely that algorithms will continue to disrupt democratic discourse by spreading anger, disinformation, and conspiracy theories. In 2025 algorithms will also continue to speed up the creation of total surveillance regimes, in which entire populations are watched 24 hours a day.
More importantly, AI makes it possible to concentrate all information and power in a single hub. In the 20th century, distributed information networks like that of the USA worked better than centralized networks like that of the USSR, because the human apparatchiks at the center could not analyze all the information properly. Replacing apparatchiks with AIs could make a Soviet-style centralized network superior.
However, AI is not all good news for dictators. First, there is the well-known problem of control. Tyranny is based on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is legally defined as a "special military operation," and calling it a "war" is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls it a "war" or mentions war crimes committed by the Russian military, how can the regime punish that chatbot? The government could ban it and seek to punish its creators, but this is much more difficult than disciplining human users. Moreover, authorized bots might develop dissenting views on their own, simply by detecting patterns in the Russian information sphere. That is the alignment problem, Russian-style. Russia's engineers may do their best to create AIs that are fully compliant with the regime, but given the ability of AI to learn and change by itself, how can the engineers ensure that an AI that earned the regime's seal of approval in 2024 does not stray into illicit territory in 2025?
The Russian constitution makes grandiose promises that "everyone shall be guaranteed freedom of thought and speech" (Article 29.1) and that "censorship shall be prohibited" (Article 29.5). Hardly any Russian citizen is naive enough to take these promises literally. But bots don't understand doublespeak. A chatbot instructed to adhere to Russian law and values might read that constitution, conclude that freedom of speech is a core Russian value, and criticize the Putin regime for violating it. How might Russian engineers explain to a chatbot that although the constitution guarantees freedom of speech, the chatbot should not actually believe the constitution, nor should it ever mention the gap between theory and reality?
In the long run, authoritarian regimes may face an even bigger danger: instead of criticizing them, AIs might come to dominate them. Throughout history, the greatest threat to autocrats has usually come from their own subordinates. No Roman emperor or Soviet premier was toppled by a democratic revolution, but they were always in danger of being overthrown or turned into puppets by the people beneath them. A dictator who grants AIs too much authority in 2025 could become their puppet.
Dictatorships are far more vulnerable than democracies to this kind of algorithmic takeover. It would be difficult for even a super-Machiavellian AI to accumulate power in a decentralized democracy like the United States. Even if an AI learned to manipulate the US president, it might face opposition from Congress, the Supreme Court, state governors, the media, corporations, and various non-governmental organizations. How would an algorithm, for example, deal with a Senate filibuster? Seizing power in a highly centralized system is far easier. To hack an authoritarian network, the AI needs to manipulate only a single individual.