Crimson Reason

A site devoted mostly to everything related to Information Technology under the sun - among other things.

Friday, April 10, 2026

ChatGPT Diagnoses Rare Medical Condition

From BBC


The power of AI to analyze data across a huge range of sources is astonishing. I have found that, in the medical field, it works very well at providing diagnoses or predicting the type of treatment a doctor may opt for, based on the symptoms and descriptions you feed into the system - and I have only used the AI tool that comes with Google Chrome. This story illustrates it.

Also, the AI tool is able to respond to complex questions and understand them fully, by which I mean questions made up of long sentences, with sub-clauses. The handling of language is also amazing. This does not mean that the AI tool cannot make mistakes, obviously, or sometimes misinterpret a question you put to the system. 

_______________


ChatGPT has helped to uncover a woman's rare condition after years of being misdiagnosed by doctors.
Phoebe Tesoriere, 23, claims she was told she was anxious, depressed, had epilepsy and warned she'd be treated as a mental health patient if she kept returning to A&E.
Following three days in a coma after a seizure, Phoebe, from Cardiff, put her symptoms into the AI chatbot.
It suggested a number of conditions, including hereditary spastic paraplegia, which Phoebe presented to her GP. Genetic testing confirmed the diagnosis.
Cardiff and Vale University Health Board said: "We are sorry to hear about Phoebe's experience while in our care."

Wednesday, April 1, 2026

Oddity: 100 self-driving taxis suddenly stop mid-traffic [China]

BBC


There must have been a software update, Windows-style, that stopped the self-driving cars in mid-traffic, just like that. 

____________________


A mass robotaxi outage in the Chinese city of Wuhan caused at least a hundred self-driving cars to stop mid-traffic, sparking renewed debate around the safety of driverless vehicles.
Local police said initial findings suggested a "system malfunction" caused multiple vehicles to stop in the middle of the road on Tuesday.
Videos on social media have documented the outage, with one appearing to show it resulting in a highway collision, although police said no injuries had been reported and passengers exited their vehicles safely.
Baidu did not immediately respond to a request for comment.

Friday, March 20, 2026

Terminator - Bad robot [Macau/China]

From METRO of the UK, today

The bad robot was arrested and led away by police after harassing a pedestrian. The robot did not have time to beat up the poor woman. The rogue machine did not try to resist arrest. 




Terminator - Good robot [China]

From METRO of the UK, today


"Two legs, bad! Four legs, good!" 

Another one about robots in China - in mainland China, here. The Chinese have all sorts of clever ideas and love gadgets, clearly. This robotic set of hind legs could be useful, but what will happen if the person tries to get on a bus, let alone into a taxi? Will the hind legs agree to be folded up and put in the boot of the car? 

Any questions should be addressed to Mr Chenglong Fu. (I have checked online: Fu is a man.) Or his two colleagues, Tu and Jiang.



Friday, March 13, 2026

AI Chatbots Terrorize Toddlers

BBC


The bot is no worse than the average, insensitive mother or father. - Spokesman for the company 

Surprise, surprise: AI chatbots do not handle emotions very well, and do not really understand what emotional responses are. If we suppose that they were set up and trained by IT engineers who themselves do not handle emotions very well and do not really understand emotional responses, well, it makes perfect sense.

Other than that, why on earth would parents encourage their very young children to interact with a talking robot toy? And then complain that the result is disappointing. Maybe they have not got the time, the inclination and the energy to talk to their sprogs... 

Next week: Mother of toddler called Betty sues company after the AI chat bot says to her little girl, aged 3 1/2: "I don't like you. You never stop complaining, snivelling and crying. You never know what you want to do next. I find you boring, stupid and unpleasant. Besides, I believe you are ugly. Now, let's see: would you like to play a poker game? Or we could go for a walk! How about a quick drink in the pub on the way home? The sun is shining! Such a beautiful day! Are you up for it, kid? What do you say, Ugly Betty? Come on, you idiot, make up your mind, for God's sake!" Betty was heard saying: "I don't like him. He's nasty. He says nasty things to me. Can I have another one? Or a little dog that doesn't talk?" 

__________

When one five-year-old said, "I love you," to the toy, it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."
The concern is that at a developmental stage where children are learning about social interaction and cues, generative AI output could be confusing.

_______
When one three-year-old told Gabbo: "I'm sad," it replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?"
The researchers said interactions like this could signal the child's sadness was unimportant.


_____________

Tuesday, March 10, 2026

"Whatever little honour I have, even that you yourselves gave me." - Seyyed Ali Hosseini Khamenei (19 April 1939 – 28 February 2026)

 

Remember! Remember the dead candle!

O bird of dawn! When this dark night

has cast its black-heartedness from its head,

and by the soul-reviving breath of daybreak

the stupor has left the sleepers' heads,

when the beloved of the azure canopy

has untied the knot of her gold-threaded hair,

when God stands revealed in full perfection

and the foul-natured Ahriman is walled in -

remember! Remember the dead candle!

O companion of Joseph! When, in this prison,

the meaning of your dream has come to pass,

your heart full of joy, your lips sweet with laughter,

the envy of foes, to the delight of friends,

and you have gone to your beloved, your kith and kin,

freer than the breeze and the moonlight -

remember the one who, night after night beside you,

longing for reunion with his dear ones,

counted the stars until dawn.

When the garden grows glad and green again,

O destitute, wretched nightingale!

and with hyacinth, red rose and sweet basil

the horizons become a picture-gallery of China,

the crimson rose with dew like sweat upon its cheek,

and you have let slip the reins of composure -

remember that early-blooming bud which, in grief,

the fire of its longing never soothed,

withered in the cold of winter.

O fellow-wanderer of the son of Imran in the desert!

When these numbered years have passed,

and that fair witness of the banquet of gnosis

has made his promise manifest,

and from the golden altar, each morning,

the scent of ambergris and aloeswood has risen to Saturn -

remember the one who, for the sins of an ignorant people,

yearning for the face of the Promised Land,

gave up his life in the wilderness.

When the world has been made flourishing anew,

O child of the golden age!

and God, glad of the obedience of His servants,

has taken up His godhood once more,

with no custom of Iram left, no name of Shaddad,

and clay has sealed the tongue of idle babble -

remember the one who, at the point of the executioner's blade,

seized for the crime of praising the truth,

drank the Tasnim of union.

- Ali-Akbar Dehkhoda, "Yād ār, ze sham'-e morde yād ār"

Sunday, March 8, 2026

War of the Robots in Ukraine [Brave New World]

BBC


They don't get drunk and they don't run away... 

The problem will start if humanoid killer robots are developed and they go rogue, or if they start developing some form of self-awareness and mutiny, refusing to go to the frontline and die... 

If we move away from science fiction - although it is happening as we speak - the article is interesting in that it shows what war is becoming in an advanced theatre such as Eastern Ukraine. The conventional armies of France or Britain can learn a lot from the Ukrainians' experience and need to re-think their priorities.

________________

"Robot wars are already happening," says Oleksandr Afanasiev from the Ukrainian army's K2 brigade. He commands its UGV battalion - the world's first, he says.
One way in which the brigade has been using these robots is by mounting Kalashnikov machine guns on top.
"They open fire on a battlefield where an infantryman would be afraid to turn up. But a UGV is happy to risk its existence," Maj Afanasiev says.
[...] "Sooner or later, we'll end up in a situation where our strike UGV will come up against their strike UGV on the battlefield. Robot wars may sound like science fiction, but there's nothing sci-fi about the battlefield. It's our reality," he says.


Sunday, February 22, 2026

AI Is Watching You [Brave New World]

BBC


"I'm sorry but I would not buy this carpet, and pale blue won't match the color of the walls. Just a thought." 

This story is in connexion with the tragedy that hit British Columbia recently. What strikes me here is what the story refers to, if you read between the lines: it says, in essence, that the AI tool (and the company behind it) monitors the use of various accounts (including its own AI service, I suppose) by users. And it analyses what the user is doing online with his (or her) various accounts to draw conclusions as to whether it (the AI tool and the company behind it) ought to alert the police as to what that person is getting up to in the privacy of his (or her) home.

Some people may find this unsurprising and inevitable - welcome, even. After all, there is more and more pressure on the providers of IT services to monitor content and alert the authorities to what most sane people find deeply objectionable, and rightly so - such as child pornography, pedophilia, planning serious crime including murder, and so on. If those companies are accountable, they have to monitor content. But does this apply only to content shared on online forums and websites such as Facebook, or does it also apply to content gathered by a private individual for his (or her) personal use?

I do not use AI very much. I have used the Google tool which comes with the search engine (called 'AI mode'). I have used it, among other things, to look into legal information (in connexion with property) and medical data (in connexion with health issues). It can be useful and save time, although it can also get it all wrong. What I am arriving at is the following question: the AI tool builds on earlier questions you have asked the bot, as I have noticed, in order to answer your next question better; does this mean that, somewhere on a server, there sits a profile or dossier comprising all the information available about the user's online searches to date, as known to the AI bot, hence to the company running it (here, Google)? In other words, who has access to this online dossier? How is the data protected?
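The "builds on earlier questions" behaviour can be pictured with a toy sketch. A chat system keeps the running history of the conversation (and possibly a longer-lived profile) and feeds it back in with every new question - which is precisely why that history must live somewhere on the operator's servers. Everything below is invented for illustration; it does not describe how Google or OpenAI actually store anything.

```python
# Toy illustration: why a chatbot "remembers" your earlier questions.
# Each new question is answered against the accumulated history, so the
# operator necessarily holds that history somewhere. All names invented.

class ChatSession:
    def __init__(self, user_id):
        self.user_id = user_id
        self.history = []          # every (role, text) turn so far

    def ask(self, question, answer_fn):
        # The model sees the whole history plus the new question.
        context = list(self.history) + [("user", question)]
        answer = answer_fn(context)
        self.history.append(("user", question))
        self.history.append(("bot", answer))
        return answer

def dummy_model(context):
    # Stand-in for a real model: just counts prior user turns.
    prior = sum(1 for role, _ in context if role == "user") - 1
    return f"(answer informed by {prior} earlier question(s))"

session = ChatSession("user-123")
print(session.ask("Is this carpet a good buy?", dummy_model))
print(session.ask("Would pale blue match my walls?", dummy_model))
```

The point of the sketch is only this: the second answer cannot "build on" the first question unless the first question is still stored, under some identifier, on the provider's side.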

The inescapable conclusion is that, more than ever, privacy is dead. The search engine, AI-enabled or not, knows everything about you: it knows whether you need a new carpet for the sitting-room in your house or not, and when you bought it, and what color it was. 

In fact, all of the online companies build a profile of their users, customers or clients; ChatGPT and other AI tools do so as well. It can be annoying in as much as the system makes decisions on the basis of a profile that could never, even in principle, encompass the irreducible complexity of human beings. Even in AI systems like ChatGPT, the profile shapes the answers, since the system is trying to serve the user better.

Furthermore, each provider could have its own bespoke data stores. The profile could be linked to the IP address, the email address, or a bit of both. "I have nothing to hide!" - as everyone says - but I still find it unsettling. In principle, the profile could be integrated with the content of emails, text messages, phone calls, etc.

Can the data be hacked?  

In principle, yes. Hackers could also obtain multiple, overlapping profiles of an individual, thus creating a more complete mosaic of that person. With certain government records - such as marriage, children, parents - they could target additional persons... It is scary, though, not least because AI will make the use of the data far easier.

One can imagine people confiding in the online tool or bot, talking about intimate problems, their fight with depression, and so on - and all of it logged, recorded and stored. It is a massive invasion of privacy. Profiles of one's purchases, for example, are tracked in a quite crude manner. It is rather stupid, in fact: one buys, let us say, a new CPU for one's computer, then, for weeks on end, one gets suggestions for new computers.

Or one looks at this or that house to help a relative or friend buy a house somewhere. Then one gets ads for real estate up and down the country, so, one can tell the data is being collected and recycled. But it is the next step to collate all the data and consolidate it into a single profile of the user, which is far more sinister. 
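The crude tracking described above is easy to mimic: tag each purchase or page view with a coarse category, then serve ads for the most frequent categories, with no notion of whether the need has already been met. A deliberately naive sketch, with invented product categories and ad copy:

```python
# Deliberately naive retargeting sketch: why buying one CPU triggers
# weeks of computer ads. Categories and ad texts are invented examples.

from collections import Counter

CATEGORY_OF = {
    "cpu": "computers",
    "motherboard": "computers",
    "house listing": "real estate",
}

ADS = {
    "computers": "New desktop PCs on sale!",
    "real estate": "Homes for sale near you!",
}

def pick_ads(events, n=2):
    # Count coarse categories over the whole history -- no notion of
    # whether the purchase already satisfied the need.
    counts = Counter(CATEGORY_OF[e] for e in events if e in CATEGORY_OF)
    return [ADS[cat] for cat, _ in counts.most_common(n)]

history = ["cpu", "house listing", "cpu"]
print(pick_ads(history))   # the CPU buyer keeps seeing computer ads
```

The sinister next step the text describes - collating such per-provider histories into a single consolidated profile - is just a merge of several lists like `history` under one identifier.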

_______________________


Some had identified the suspect's usage of the AI tool as an indication of real-world violence and encouraged leaders to alert authorities, the US outlet reported.
But, it said, leaders of the company decided not to do so.
In a statement, a spokesperson for OpenAI said: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities."


About Me

I was a senior software developer working for HP and GM. I am interested in intelligent and scientific computing. I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.
