From useful tool to invisible system of individualized influence
– Technological dependence
– Thought homogenization
– Invisible surveillance
– Personalized influence
In our previous article, we questioned the AI popularized by the company Meta about the risks that could arise from using this tool, so generously made available to the public. It is a tool that, as street vendors proclaim while hawking their goods on passing buses, "should not be missing from the lady's purse or the gentleman's pocket."
The Spanish proverb says: “When the alms are large, even the blind distrust.”
And what a gift this product is, capable of resolving our greatest doubts!
A few months ago, a university professor gave his psychology students a remote test with difficult questions and was surprised by the speed and quality of the answers.
But when comparing them, he concluded that all had been produced by AI.
The students had merely copied and submitted them.
Naturally, the test was annulled.
And they returned to in-person exams, the old-fashioned way.
Let us imagine, as an intellectual exercise, that those answers had been validated and that the same had occurred throughout the entire degree.
What would be the real training of those psychologists graduating with the highest grades?
Or now that we have AI, do we no longer need psychologists?
Reasoning with AI
At the end of that article, we asked:
What is the real reason behind making AI available to everyone?
Could it be a psychopolitical tool?
Meta’s AI responded with the same points it had raised about risks.
Dependence: “it could become excessive and lead to a loss of skills and knowledge.”
The same dependence we have on our phones: we lose them and cannot even contact our children because we have not memorized their numbers.
In that sense, Borges’ character Funes the Memorious would pass the test, since his only memory was biological.
Obviously, we are not advocating illiteracy, but highlighting, through an example drawn from experience, how dependent we have become on the multitude of technological devices in which we are immersed.
The second aspect is:
Social homogenization: “people may stop thinking critically and creatively.”
"Don't even bother thinking, what for, if life goes on anyway, and besides, you risk being called a fool," says the tango by Discépolo.
Surveillance: “controlling people, which poses a risk to freedom and autonomy.”
We do not ignore the importance of surveillance, especially in a country like Uruguay, where its absence is increasingly noticeable.
But the surveillance demanded is to prevent crime.
To be able to walk freely through the streets.
To combat drug trafficking, organ trafficking, and human trafficking.
To allow children to play safely on the sidewalk.
The kind of surveillance whose absence forces us to put bars on windows, even on high floors.
Instead, the system knows what we spend, where we shop, what we buy, how often we go to the cinema or theater, which bus we take, and when we do these things.
Every time we use a credit or debit card, we leave a trace of our movements.
Our phone places us in one location or another…
So, what relationship does AI have with psychopolitics?
Questioning ChatGPT
We asked the question.
Surprisingly, this is what it replied:
“AI is not just an instrument of psychopolitics: it is its highest stage.”
Classical psychopolitics needed to know desires, predict behaviors, and influence without commanding.
AI, however, no longer works with the masses, but with micro-profiles, emotional states, and moments of vulnerability.
AI acts when the individual doubts, is sad, becomes more influenceable, or seeks meaning.
That is pure psychopolitics, because power no longer forces.
AI recommends, suggests, optimizes, never orders or threatens.
And precisely for that reason, it is obeyed.
Because ideal power is the one that presents itself as an assistant.
So, are these “risks,” as Meta claims, or objectives?
Continue reading at https://perspectivaliberal.com/global-order-geopolitics
