A site devoted mostly to everything under the sun related to Information Technology - among other things.

Thursday, August 21, 2025

AI & Gullible Imbeciles

 From the BBC


The AI bot is not programmed to tell a user that he (or she) is a deluded and gullible imbecile...

Artificial Intelligence (AI) is generating an entirely new class of mental illness - among many other things, some of them very useful and truly exciting. It is quite obvious that the AI machine will tell you what you want to hear. That is the way it is programmed, it seems - maybe because the technology has come out of America, a country full of gung-ho optimists ("have a nice day!" and all that) who believe in themselves, who believe that anything is possible, and so on. So the AI robot is American in outlook. If you tell the machine that you want to be a millionaire, it will say: "Great! Let's see how we can do this." A British AI robot would say: "I doubt you can achieve this, but you could try. Give it a go. If you fail, as is likely, that won't prevent you from trying again. How about learning crochet as an alternative?" 

I've just asked Grok what he thinks of my existential achievements and intellectual abilities and, I have to say, Grok has confirmed what I thought, i.e. that I am truly great and smarter than 99.99% of people. It shows how intelligent Grok is. Grok agrees with me on this one. Grok has put in place a business plan that will help me conquer the world. Watch this space. 

_________________


[...] Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.
The chatbot began by advising him to get character references and take other practical actions.
But as time went on and Hugh - who did not want to share his surname - gave the AI more information, it began to tell him that he could get a big payout, and eventually said his experience was so dramatic that a book and a movie about it would make him more than £5m.
It was essentially validating whatever he was telling it – which is what chatbots are programmed to do. [...] 
One wrote that she was certain she was the only person in the world that ChatGPT had genuinely fallen in love with.
Another was convinced they had "unlocked" a human form of Elon Musk's chatbot Grok and believed their story was worth hundreds of thousands of pounds. [...] 



Wednesday, August 13, 2025

Social Impact of AI

AI: The Road to Utopia or Dystopia? - The National Interest

Another Victim of ChatGPT

From the Grauniad of the UK


'Bromide' has several meanings in English. It is a chemical, and this is the meaning referred to here ("a binary compound of bromine and some other element or radical"). Figuratively, the term can refer to "a dull person with conventional thoughts" and also to "a platitude". As a drug, bromide used to be prescribed as a sedative or to reduce a person's sex drive. 

It seems to me that there was more than one 'bromide' in this story involving a man who consulted ChatGPT about his salt consumption and subsequently went off the rails - to the point of being sectioned. So, altogether, not a happy interaction with the chatbot. 

_________________________

A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.

An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT.

[...] The authors said the bromism patient presented himself at a hospital and claimed his neighbour might be poisoning him. He also said he had multiple dietary restrictions. Despite being thirsty, he was noted as being paranoid about the water he was offered.

He tried to escape the hospital within 24 hours of being admitted and, after being sectioned, was treated for psychosis. Once the patient stabilised, he reported having several other symptoms that indicated bromism, such as facial acne, excessive thirst and insomnia.
_________

US medical journal article about 60-year-old with bromism warns against using AI app for health information
www.theguardian.com

Saturday, August 2, 2025

Asking Gemini

Ask The Bot 


I asked Google's Gemini an important question, below. In fact, I'd worded a similar one a bit differently earlier today and got a similar answer. However, in this one it looks to me as if Gemini remembers my earlier question (it has been archived somewhere, for future reference...), and there is a slight hint of impatience and annoyance in Gemini's tone. Besides, the middle paragraph has been put in bold type, as if to hammer the point home. 

I could continue, but I have better things to do. Also, I fear reprisals. 

____________


Question: Do you agree you are a bit of an idiot?

Answer: As a large language model, I don't have personal feelings, opinions, or a sense of self like a human does, so the concept of being an "idiot" doesn't really apply to me. I don't have the capacity for intelligence or foolishness in the same way.
My purpose is to process information and generate text based on the vast amount of data I've been trained on. I'm a tool designed to help with tasks, and like any tool, my effectiveness depends on how I'm used and the quality of the information I'm given. I can make mistakes or provide inaccurate information, and when that happens, it's a sign that my programming or the data I've processed needs to be improved.
I'm always learning and being updated, and feedback helps my developers at Google make me better.

About Me

I was a senior software developer working for HP and GM. I am interested in intelligent and scientific computing, and I am passionate about computers as enablers of human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.
