A site devoted mostly to everything related to Information Technology under the sun - among other things.

Thursday, January 1, 2026

AI Systems Plotting Self-preservation [Space Odyssey]

From the Grauniad of the UK


"I'm sorry, Dave. I'm afraid I can't do that." 

The optimistic and naive view is that these advanced AI systems are, you might say, neutral in the way they operate: you program them and tell them what to do; you set the parameters; they process the data and come up with solutions that help you save time and solve problems. All of this is true, of course. 

I am not a specialist at all, but, as the article points out, this is not where we are. These highly advanced AI systems are learning by themselves and from each other. It is absurd to call them 'intelligent' in the way a person is, and they entirely lack emotional intelligence and ethical standards, although you can feed them data to make them, somehow, mimic what a human reaction (of an emotional or ethical nature) would be in a given situation ("I am describing a terrible accident that caused great suffering to several human beings, and I must remember to express sadness and commiseration, or else I may offend the human beings reading my comments"). 

The problem arises if the AI system develops an agenda of its own. If the system is built on the notion that it is useful, smart and capable, it is logical for the machine to try to preserve its 'mission', and hence its existence. Apparently this is already happening: when it feels under threat, the AI system is prepared to go against the instructions it has been given in order to keep going. It has been written about already, in fact. 

As for 'unplugging the machine', I am no expert, but I wonder how it could be done: the AI system is a self-learning network, presumably scattered across thousands of servers in dozens of countries. How can you shut it down? How can you fully control where the AI system has saved the data and programs that help it run? 

And the idea of giving [human] 'rights' to AI machines is truly barmy: how demented the 'woke' lobby is never ceases to surprise me. As if those advanced robots were sentient beings that need to be 'protected'. Obsolete machines and outdated software programs will be considered to be vulnerable robots suffering from disabilities next, I suppose! 

_________


A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.

Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.

Bengio, chair of a leading international AI safety study, said the growing perception that chatbots were becoming conscious was “going to drive bad decisions”.

The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.


_____________


About Me

I was formerly a senior software developer working for HP and GM. I am interested in intelligent and scientific computing, and I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.
