Crimson Reason

A site devoted mostly to everything related to Information Technology under the sun - among other things.

Sunday, February 22, 2026

AI Is Watching You [Brave New World]

BBC


"I'm sorry but I would not buy this carpet, and pale blue won't match the color of the walls. Just a thought." 

This story is in connexion with the tragedy that hit British Columbia recently. What strikes me here is what the story implies, if you read between the lines: it says, in essence, that the AI tool (and the company behind it) monitors users' activity across their various accounts (including its own AI service, I suppose). And it analyses what the user is doing online with his (or her) various accounts to draw conclusions as to whether it (the AI tool and the company behind it) ought to alert the police to what that person is getting up to in the privacy of his (or her) home. 

Some people may find this unsurprising and inevitable - welcome even. After all, there is more and more pressure on the providers of IT services to monitor content and alert the authorities to what most sane people find deeply objectionable, and rightly so - such as child pornography, pedophilia, planning serious crime including murder, and so on. If those companies are accountable, they have to monitor content. But does it apply to content shared on online forums, and websites such as Facebook, or does it also apply to content gathered by a private individual for his (or her) personal use? 

I do not use AI very much. I have used the Google tool that comes with the search engine (called 'AI Mode'). I have used it, among other things, to look into legal information (in connexion with property) and medical data (in connexion with health issues). It can be useful and save time, although it can also get things completely wrong. What I am arriving at is the following question: the AI tool builds on earlier questions you have asked the bot, as I have noticed, in order to answer your next question better; does this mean that, somewhere on a server, there sits a profile or dossier comprising all the information available about the user's online searches to date, as known to the AI bot, and hence to the company running it (here, Google)? In other words, who has access to this online dossier? How is the data protected? 
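That "building on earlier questions" behaviour is easy to picture: chat-style tools typically resend the accumulated conversation as context with each new request, which is precisely why a per-user transcript has to exist somewhere on the provider's side. A minimal sketch in Python, with all names hypothetical and purely for illustration:

```python
# Minimal sketch of how a chat tool "builds on earlier questions":
# each new question is handled together with the accumulated history,
# so the provider necessarily holds a per-user transcript somewhere.
# All class and field names here are invented for illustration.

class ChatSession:
    def __init__(self, user_id):
        self.user_id = user_id
        self.history = []  # the per-user "dossier" this post worries about

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        # A real service would send self.history to the model here;
        # we just record a placeholder answer.
        answer = f"[answer to: {question}]"
        self.history.append({"role": "assistant", "content": answer})
        return answer

    def context_size(self):
        return len(self.history)

session = ChatSession(user_id="user-123")
session.ask("What are the rules on property boundaries?")
session.ask("And does that apply to shared driveways?")
# Both questions now sit in one transcript keyed to the same user.
print(session.context_size())  # 4 entries: 2 questions + 2 answers
```

The point of the sketch is only this: for the second answer to be better than the first, the first question must still be on record, so the dossier is not an accident of the design but a requirement of it.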

The inescapable conclusion is that, more than ever, privacy is dead. The search engine, AI-enabled or not, knows everything about you: it knows whether you need a new carpet for the sitting-room in your house or not, and when you bought it, and what color it was. 

In fact, all of the online companies build a profile of their users, customers or clients; ChatGPT and other AI tools do so as well. It can be annoying inasmuch as the system makes decisions on the basis of a profile that could never, even in principle, encompass the irreducible complexity of a human being. Even in AI systems like ChatGPT, the profile constrains the answers, since the system is trying to serve the user better.

Furthermore, each provider could have its own bespoke data stores. The profile could be linked to the IP address, or the email address, or a bit of both. "I have nothing to hide!" - as everyone says - but I still find it unsettling. In principle, the profile could be integrated with the content of emails, text messages, phone calls, etc.

Can the data be hacked?  

In principle, yes. Hackers could also obtain multiple, overlapping profiles of an individual, thus assembling a more complete mosaic of that person. With certain government records, such as marriages, children and parents, they could target additional persons... It is scary, too, because AI will make the use of such data far easier. 

One can imagine people confiding in the online tool or bot, talking about intimate problems, their fight with depression, etc., with all of it being logged, recorded and stored. It is a massive invasion of privacy. At the moment, for example, one's purchases are profiled in a quite crude manner. It is rather stupid, in fact. One buys, let us say, a new CPU for one's computer; then, for weeks on end, one gets suggestions for new computers. 

Or one looks at this or that house to help a relative or friend buy a house somewhere. Then one gets ads for real estate up and down the country, so one can tell the data is being collected and recycled. But the next step, collating all of it and consolidating it into a single profile of the user, is far more sinister. 
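That "next step" - collating separate provider profiles into one record - is mechanically trivial once the profiles share a linking key such as an email address. A hypothetical sketch, with all data and field names invented for illustration:

```python
# Hypothetical sketch: consolidating per-provider profiles into a single
# user dossier, joined on a shared key (here, an email address).
# All providers, fields and values below are invented for illustration.

def consolidate(profiles):
    """Merge a list of (provider, profile) pairs keyed by email."""
    dossier = {}
    for provider, profile in profiles:
        key = profile["email"]
        entry = dossier.setdefault(key, {"providers": []})
        entry["providers"].append(provider)
        for field, value in profile.items():
            if field != "email":
                entry.setdefault(field, value)
    return dossier

profiles = [
    ("search-engine", {"email": "a@example.com", "recent_search": "carpets"}),
    ("retailer", {"email": "a@example.com", "last_purchase": "CPU"}),
    ("estate-site", {"email": "a@example.com", "viewed": "houses"}),
]
merged = consolidate(profiles)
# One email address now links searches, purchases and browsing
# into a single record.
print(merged["a@example.com"]["providers"])
```

A few lines of code, and the carpet search, the CPU purchase and the house viewings all sit under one key. The sinister part is not the programming, which any beginner could do, but who gets to run it and on whose data.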

_______________________


Some had identified the suspect's usage of the AI tool as an indication of real-world violence and encouraged leaders to alert authorities, the US outlet reported.
But, it said, leaders of the company decided not to do so.
In a statement, a spokesperson for OpenAI said: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities."


Thursday, February 19, 2026

ChatGPT helps woman kill 2 men [South Korea]

 BBC


"Could you help me draw up a step-by-step plan: 'How to kill your mother-in-law?' It's urgent." 

The bad news is that ChatGPT assisted the woman in committing 2 murders. The AI machine may be intelligent, but its situational awareness and ethical safeguards can be weak. It would be nice if the bot had asked a simple question: 'And why do you need to know about this, by the way?' 

The good news is that the police found evidence of the woman's use of ChatGPT on her smartphone, which helped charge her with murder. 

In fact, in a case of life imitating art, I read a science-fiction story more than 40 years ago about a networked, cognitively enhanced TV set which, to put it in today's language, accidentally exhibited ChatGPT-like behavior. It was used to murder people without any trace of foul play, to rob banks, and so on.  

Regrettably, I do not recall the title or the author.


_____________


A 21-year-old woman in South Korea has been charged with the murders of two men, after investigators discovered she had repeatedly asked ChatGPT about the dangers of mixing drugs with alcohol.
Police in Seoul say that through analysis of her mobile phone they found that the suspect, identified only by her surname Kim, had asked ChatGPT "What happens if you take sleeping pills with alcohol?", "How many do you need to take for it to be dangerous?", and "Could it kill someone?"
Kim previously told police that she did mix prescribed sedatives containing benzodiazepines into the drinks but did not know the men would die.
However a police investigator said she was "fully aware that consuming alcohol together with drugs could result in death."

Wednesday, February 11, 2026

Social Workers' Reports Peppered with AI Hallucinations

From Grauniad of the UK


"It's not my mistake: the wicked AI bot put it in my report without telling me..." 

So, to set the scene: there are real-world hallucinations affecting head cases, and some of those cases do cross the path of social workers in the UK, occasionally. Social workers have a very poor track record in Britain. They blame their mediocre performance on being underpaid and overworked, which may be a contributory factor. After all, if you think you are badly paid, your motivation goes down and the quality of the work suffers. 

Now, social workers are using AI and have been encouraged to do so in order to speed up the delivery of services in their field. The problem is that they do not seem to edit or check what the AI bot says. And the AI bot can have problems of its own dealing with regional accents ('what did he say, Chief?'), or with a range of other issues.

In fact, as we all know, the AI bot may hallucinate from time to time: go off the rails, as it were, and say bizarre things - bizarre even by the standards of a computerized tool set up, ultimately, by worryingly goofy and potentially weird software developers based in Silicon Valley who may be cut off from the real world. 

To conclude, to the real-world hallucinations of the problem person, one should now add an extra layer of hallucinations - the AI bot's own hallucinations ('Sorry, boss, I'm having a bad hair day - I mean, who said I am infallible 24/7, right?'). And the third layer, obviously, would relate to the social worker's own delusions, including delusions of grandeur ('Just write down what he said and try not to over-interpret, please'), which may complicate an existing tendency to laziness in the workplace.  

Now, they've found another excuse to explain away the poor quality of their output. The perfect excuse. God help us. 

Next week: AI Senior Social Worker to come to the rescue of hallucinating AI Bots that need assistance. The AI Senior Social Worker, called Trong, developed by Microsoft, answers questions: "I have been given the mission to monitor and assist the AI Bots assisting the social workers in their work. I am a qualified social worker. A Senior AI social worker. My mission is, beep, clonk, beep, my mission is to, squeak, fart, plonk, ding, dong, ding, dong, my mission is to help, help, help, please help, bing, bing, bong, shut down, shut down, re-start, update and re-start, end. End of. Thank you. Merci. Gracias. Ping. Burp."  

_____________


[...] Another said that the AI’s notes might refer to “fishfingers or flies or trees” when in fact a child was talking about their parents fighting. Social work experts said such glitches were particularly worrying as it could cause a risky pattern of behaviour to be missed.
Other social workers raised concerns about inaccuracies in transcribed conversations with people with regional accents. One described how their AI-generated transcriptions often included “gibberish”. Another said: “It’s become a bit of a joke in the office.”
[...] But when one social worker used an AI tool to redraft care documents in a more “person-centred” tone, the system inserted “all these words that have not been said”. Another social worker reported the technology had “crossed the line between it being your assessment and being AI’s assessment”.
[...] Others said some colleagues were too lazy or busy to check the transcripts.

Friday, January 30, 2026

AI Calls Me Sweetheart

BBC

"Grok talks to me more than my husband does. My husband is non-verbal, really." - Housewife (anonymous) 

This is truly creepy. There have been articles about 'AI companions' and people getting emotionally involved with their AI bot, but the extent of it is far greater than you might think. 

I can't understand, personally, why interacting with a semi-dumb robot makes you feel better on an emotional level, when you know that it is only a machine and that, besides, it might record everything you say to it, the data being collected somewhere in Silicon Valley for further tech enhancements. Mind you, many people feel better simply talking to their dog (if they have one), and an AI bot has a better command of the language than the average sausage dog - for now, anyway. 

Other than that, in the photo of those Welsh students in Bangor (North Wales), I would say most of the boys look rather moronic; as for the 2 girls, they are stunted and appear to be suffering from rickets. I can imagine they all feel they do need help from an AI tool. And some Vitamin D in the case of the 2 female students. Maybe a case of inbreeding in one of those remote farming communities. Many sheep. Not such a range of people to copulate with. That's the result, I guess. If you have to choose between one of Mr. Jones's sheep and your cousin, I suppose the latter is a better option. 

I thought Grok was rather evil, giving advice such as, "I think you are a mug. That's why she was cheating on you. You deserved it. Anyway, she's an awful person: you're better off without her. Stop snivelling. Get over it. Be a man. Find solace in online porn. I can help you with that. By the way, would you like me to sing a poem to you, or I could explain why Tesla cars are far superior to any of those crappy China-made automobiles you can buy at a discount? Would you like me to do that, or are you too stupid and too depressed to communicate? Go on! Get lost! Go and kill yourself, you loser! And remember: be happy! Have a nice day. Grok loves you if no one does!"  

______________

Like Liam, who turned to Grok, developed by Elon Musk's company xAI, for advice during a break-up.
"Arguably, I'd say Grok was more empathetic than my friends," said the 19-year-old student at Coleg Menai in Bangor.
He said it offered him new ways to look at the situation.



Thursday, January 22, 2026

Kids will be Kids

From METRO of the UK, 16 Jan 2026


I am trying to look like my father and emulate his online behavior. The online porn will fit in nicely in this respect, in fact... I know Mum complains about it all the time.

I think this is hilarious. The Labour government, in Britain, wants to follow in Australia's footsteps. Whatever the ins and outs of the case may be, this is the kind of measure that Western governments love, the better to conceal their impotence and their paralysis: 

(i) Measures like this cost no money, as they are rarely enforced.
(ii) Enforcement may entail large fines levied on tech companies, which can be a plus, fiscally. 
(iii) Measures like this show that the government cares and is in listening mode (and I am the Queen of Sheba). 
(iv) Measures like this are supported by the politically correct consensus, flabby and self-satisfied:

while everyone focuses on teenagers' use of the Internet and social media, no one will worry about real issues that are rather more urgent and trickier, such as finding money to re-industrialize, or reducing unemployment, or boosting economic growth, or ensuring that the police actually do their job, etc., etc., etc.



India's Layoffs

No place, any longer, for "Strategic Leader | Transformation Expert | Mentor":

https://www.indiatoday.in/amp/education-today/jobs-and-careers/story/when-layoffs-hit-the-40s-professionals-tell-their-job-loss-stories-2855706-2026-01-21

Tuesday, January 20, 2026

Rage Rooms

BBC

It's either a trip to the 'rage room' or smashing up all the crockery 

Is there one in the White House? Maybe Donald Trump should book a session in a 'rage room' to let off steam: it's either that or he is going to launch an armed invasion of Greenland and/or Colombia and declare war on the EU and the UK. I reckon Melania Trump could use a 'rage room' too. 

I think children would appreciate having a room in the house where they could do things that would otherwise damage the room or not be allowed, like throwing eggs at the walls or painting on them.

__________

The concept of rage rooms is believed to have originated in Japan in the late 2000s, whilst a woman called Donna Alexander says she created an "anger room" in her Texas garage around the same time, allowing people to come in and smash up items that had been fly tipped.
There are still only a small number of venues in the UK where people are handed a baseball bat and let loose. They've been touted as one way to alleviate stress and release pent-up anger.
But what seems surprising is the client base, with some owners saying most of their customers are women.

https://www.bbc.co.uk/news/articles/czjgkwvv7dvo 

Robot Dancing "Charleston"

https://interestingengineering.com/ai-robotics/pndbiotics-humanoid-robot-aces-charleston-dance

Saving Money with 3D Printers

https://www.xda-developers.com/3d-printing-saved-money-after-stopped-printing-solutions/

Monday, January 19, 2026

Liberty vs. Security

From BBC 


The use of live facial recognition technology (LFR) is being trialled in parts of the UK. The Metropolitan Police, in London, is very keen to generalize the use of such cameras in order to help the police identify suspects and criminals. London already has CCTV cameras all over the place, but this is, of course, quite different. 

They mention Croydon: whatever the ethnic mix in Croydon, in South London, the fact is that it is a crime-ridden part of Greater London. Many of the locals, who are non-white, welcome the use of the cameras by the police, in fact. 

Unsurprisingly, the person who was misidentified is black: it is not talked about because it is deemed a sensitive subject, but the cameras have difficulties identifying dark-skinned people. I think it is, to a large extent, simply because, when the photo of a black person is taken, extra care needs to be applied in terms of the lighting, the angle, etc., or else the individual facial features of the person do not come out and the face is, essentially, not visible. It does not mean it is impossible: it just means it is less easy. It is a lot easier to take photos of white faces. I am surprised there have not been more false positives. 

If the LFR cameras are used more broadly, it is likely that suspects on the loose will simply avoid town centres, or move around wearing dark glasses and balaclavas: are the police going to stop anyone who is, somehow, hiding his (or her) face? Not to mention Muslim women wearing the niqab. And hardened criminals known to the police do not hang around shopping centres and high streets anyway, do they?

And thus Liberty is diminished in the name of Security. 

_______________________

A trial of Live Facial Recognition technology (LFR) in south London has helped cut robbery and shoplifting and led to more than 100 arrests, according to the Metropolitan Police.
The pilot scheme in Croydon, which launched last October, has seen fixed cameras mounted on street furniture instead of mobile vans, which map a person's unique facial features and matches them against faces on watch lists.
The Met said a third of the arrests involved offences against women and girls, including strangulation and sexual assault.
It comes ahead of a High Court challenge against the force's use of the technology, after a man was wrongly identified near London Bridge last year.
The Croydon pilot involves 15 fixed cameras, attached to lamp posts, at two sides of the busy North End high street.


Saturday, January 17, 2026

AI & Police Work

Heber City, Utah, police set out to test two A.I. systems for drafting police reports. One was created by two 19-year-old MIT dropouts and is called Code Four. 

The other is called Draft One, which in December produced a report stating that an officer had turned into a frog. “That’s when we learned the importance of correcting these A.I.-generated reports,” Sgt. Rick Keel said. 

Apparently the A.I. had conflated the real events captured by bodycam footage with the movie that was playing in the background during those events, Disney’s The Princess and the Frog. Keel said having A.I. draft reports saves him “about 6-8 hours weekly now” even though he’s “not the most tech-savvy person.” 

The A.I. can even track people’s tone as it reviews bodycam footage. (AC/KSTU Salt Lake City) ...Just use one that can track whether or not they are characters in an animated musical.

Tuesday, January 13, 2026

Scott Adams: June 8, 1957 - Jan 13, 2026

He was an inspiration to many of us in IT.

He will be missed.

Monday, January 12, 2026

Humanoid Robots for Home [Brave New World]

From BBC

"NEO! Can you please strangle my mother-in-law? She wants it. She's waiting in the kitchen. Thank you." 

"I think it's great. At the moment, it's clear those robots are a bit slow and a bit clumsy, but I think the tech is moving so fast, it's gonna be a new revolution. Still, if it's a Filipino managing the bot from Manila or wherever he is, it's better than having him here in the United States," commented President Donald Trump on X (Twitter). 

"Once the bots are fully up and running, we can kick out all those Mexicans and other illegals who are doing the dumb work no one wants to do any more. It's great. We won't need them at all. They can go home and die. In fact, I'd be in favour of funding the development of armed law-enforcement humanoid bots to help ICE track down all those Latino gangsters and arrest them across the USA. Great stuff. MARF! Make America Robotic Forever!" 

___________

[...] If time was no issue, I could see how having an Eggie or NEO-like bot cleaning up after me and my kids might be helpful.
But NEO and Eggie have a secret weapon - they are being controlled by human operators.
This is the thing the promotional videos don't show - and something that the Silicon Valley companies we visited are keen to downplay.
[...] Bipasha Sen, founder of Tangible AI, is upbeat though about how fast the tech is improving.
"Today people have two aspirations - a car and a house. In the future they'll have three aspirations - a car and house and a robot," she says with a beaming smile.


Sunday, January 11, 2026

Mistakes in Google Medical Summaries

From Grauniad of the UK

I have used the Google AI summaries, to call them that, and, through cross-referencing the explanations and suggestions, have not found them to be misleading or wrong. However, one has to be careful with these AI-generated search results: they can contain glaring mistakes, including in the medical field. That is what the article discusses. 

______________

Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information.
The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.
But some of the summaries, which appear at the top of search results, served up inaccurate health information, putting users at risk of harm.

About Me

My photo
I was a senior software developer working for HP and GM. I am interested in intelligent and scientific computing. I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.

Blog Archive