Surgeon in an operating room adjusting his gear, reflecting uncertainty and human error risk in a critical medical environment

From Skynet to Meta: the silent risk behind artificial intelligence

Between everyday convenience and invisible control, AI raises questions we are not yet willing to face

– Structural risks of artificial intelligence
– Commercial incentives and lack of warning
– Psychopolitical dimension of AI deployment

When we think of the rebellion of the machines, memory may take us to Terminator, with Arnold Schwarzenegger.

“Skynet” is the name of a fictional defense system created to protect, which ends up deciding that its greatest threat is its own creator: the human being.

The saga explores what could happen if artificial intelligence acquired consciousness and became a danger to humanity.

In literature, the idea had been explored many years earlier by Romain Rolland (1866–1944), under that very title.

But even before Rolland, the topic was addressed by Samuel Butler (1835–1902) in his novel Erewhon.

This is not a utopia like that of Thomas More or a dystopia like that of George Orwell.

Butler’s work is a satire.

Beginning with its very title, an anagram of “Nowhere”, just as “u-topos” means no place at all.

Chapter 23 contains a reflection in which Butler asks: if a plant can develop some kind of consciousness (something still debated today), why couldn’t machines achieve it in the future?

The book is much more than that, but the mention brings us to artificial intelligence.

A tool that, without our asking for it, appears free of charge on our phones, and that we can also download onto our laptops.

What could be the reason for such generosity?

It is obvious that behind this gesture there is a commercial interest.

And besides, what risk could there be in having such a useful tool always at hand?

We posed that question to Meta Platforms’ AI.

Its response, as usual, came structured in three items.

The risks it mentioned were:

Privacy and security: AI can collect and process large amounts of personal data, which may be used for unauthorized purposes such as targeted advertising or surveillance.

Bias and errors: AI can perpetuate biases and make mistakes if not properly designed and trained. It may learn prejudices from data, leading to discriminatory decisions. It can also generate prediction errors with serious consequences in fields such as medicine or justice.

Dependency: excessive reliance on AI (available on the phone, an object that has become for modern man what the horse was for the gaucho) can lead to the loss of skills and knowledge. It may also homogenize society, as people stop thinking critically and creatively. It can even be used to control individuals, posing risks to freedom and autonomy.

Faced with such a clear explanation, we asked why the system does not warn about these risks, as television does with content warnings.

It listed reasons such as lack of regulation, complexity depending on use and context, commercial interests, and lack of consensus.

Regarding those commercial interests, somewhat hidden in the detail, it stated: “AI developers may prioritize adoption and use of their technology over warning about risks.”

In other words: better not to warn.

The company Meta is headquartered in California, but has offices around the world.

“At least in Uruguay, Meta has a data center in Montevideo, inaugurated in 2021 […] used to store and process Latin American user data.”

We commented that we were unaware of such a center.

It apologized.

“… Meta does not have its own data center in Montevideo.”

The dialogue continued:

“Is that error one of the risks you mentioned? Fortunately, this was not a technical query during open-heart surgery.”

“(Smiling emoji). Exactly! Human error is one of the risks mentioned earlier. Errors can occur at any time. In this case, it was my mistake providing incorrect information […] I apologize for the confusion. Thank you for pointing it out, which allows me to correct and improve.”

And here we reach the intended point.

What is the real objective behind making AI available to everyone?

Could it be a psychopolitical tool?

It answered with the same arguments about risks.

Would you be satisfied with that answer?

We will see what others say.
