Crimson Reason

A site devoted mostly to everything related to Information Technology under the sun - among other things.

Thursday, November 20, 2025

SIM-swap fraud [Mobile telephones and crime]

From an article published on British bank Charter Savings' website


The link takes you to the article. They cover three types of fraud that are on the rise in the UK and give advice. The first one seems to me the most diabolical. I have reproduced it below.

______________________________________________________________________

SIM-swap fraud
Have you heard of SIM swapping? It’s when criminals hijack someone’s mobile phone number by transferring it to a new SIM card under their control.
We’re seeing a huge spike in fraudsters using this tactic. Nearly 3,000 unauthorised SIM swaps were logged on the National Fraud Database in 2024 – that’s an increase of more than 1,000% on 2023.
You may have also heard about the recent cyberattacks on UK-based international retailers; data breaches like these are being used by SIM-swap fraudsters.
They’re also using phishing and social engineering to deceive and manipulate individuals into sharing their personal information. If you overshare your personal details online this could be another way they can collect your information – never publicly share your bank account details online.
They take the personal information they’ve gathered, contact the victim’s mobile provider posing as the customer and request a SIM swap, often citing a lost or stolen phone.
Once they’re in control of the number, criminals intercept one-time passcodes sent via SMS to take over the victim’s accounts. They can then use available websites and apps to apply for a bank loan, cancel holidays to get a refund or even steal wages from gig economy workers.
And it doesn’t stop there. Even if the SIM is recovered, fraudsters can plant backdoors such as password resets and link devices to gain repeated access or harvest sensitive data to sell on the dark web.
How to protect yourself:
  • Protect your SIM card by enabling a PIN in your device’s settings menu and setting up a carrier level password or PIN with your network provider which must be verified before issuing a new SIM.
  • Don’t respond to unsolicited emails, texts or phone calls.
  • Don’t overshare personal details on social media. Avoid sharing your birth date or that of children or relatives or other common password recovery phrases such as the name of your first pet or school.
  • Turn on Two-Step Verification (2SV), also known as two-factor authentication (2FA) or multi-factor authentication (MFA).
  • Use a password consisting of three random words that only you know and which are unique. You could add uppercase letters, numbers and symbols to make it more secure.
  • Always keep your device’s software up to date.
Three steps to take if you think your SIM card has been swapped:
  1. Call your network provider immediately. If you unexpectedly lose phone service, receive unsolicited texts or emails about your SIM being ported or a Port Authorization Code (PAC) request, notify your provider.
  2. Inform your banks as soon as possible. The fraudster may attempt to make a money transfer online or over the phone.
  3. Record your details with Cifas. They’re a UK fraud prevention community.


_____________

Wednesday, November 12, 2025

Robot Falls

From BBC


He had had too many shots of vodka ahead of the event to steady his nerves... 

As launches go, this is the disaster everyone behind the humanoid robot would have dreaded. 
___________________

Footage shows the moment Russia's first anthropomorphic robot, AIdol, fell just seconds after its debut at a technology event in Moscow.
The robot was being led on stage to the soundtrack from the film 'Rocky', before it suddenly lost its balance and fell. Assistants could then be seen scrambling to cover it with a cloth - which ended up getting tangled in the process.

Sunday, November 9, 2025

Vibe Coding

From BBC

'Vibe coding' sounds pretty scary to me, in fact: what could possibly go wrong? We'll know when a plane falls out of the sky. 

As for the other words, they are generally covered by existing words or not that useful (e.g.: coolcation, because a vacation is a holiday centered on time you hope to be spending with interesting bovines...). 

My favorite would be taskmasking, which is as old as work itself, though. As for micro-retirement, I prefer macro-retirement, which is what you do when you actually, well, retire - something every 20-year-old in the West seems to crave nowadays, as they are so tired, bored and blasé already, before they have even started working. 

25 years ago, an Italian researcher developed a code-generation system that used speech as its input.

20 years ago, the Microsoft SQL Server database management system supported an English-language query mechanism that, in effect, took English statements and converted them into Structured Query Language (SQL) queries.
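To give a flavour of that kind of English-to-SQL translation, here is a minimal sketch of my own in Python (a toy pattern matcher, not Microsoft's actual mechanism): one hard-coded English pattern is mapped to a parameterised SQL query. The table, column and pattern names are illustrative assumptions.

  import re
  import sqlite3

  # Toy English-to-SQL translation: one hard-coded pattern, one parameterised query.
  # This only illustrates the idea; a real English-language query layer is far richer.
  PATTERN = re.compile(r"show all customers in (?P<city>\w+)", re.IGNORECASE)

  def english_to_sql(sentence: str):
      match = PATTERN.fullmatch(sentence.strip())
      if match is None:
          raise ValueError("sentence not understood")
      return "SELECT name FROM customers WHERE city = ?", (match["city"].capitalize(),)

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
  conn.execute("INSERT INTO customers VALUES ('Acme Ltd', 'London')")

  sql, params = english_to_sql("show all customers in london")
  print(conn.execute(sql, params).fetchall())   # prints [('Acme Ltd',)]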

IBM has a rule-engine product that can be programmed via an English-like syntax.

As long as one asks for something for which the training set contains prior solutions, the system can generate a schema, the code, and all that goes with it.

But once one is off the beaten path, one is back to puzzling out the requirements of the system one desires to build.

Consider the UK legislation on MPG and Emissions standards as the requirements for a reporting system that, say, Land Rover must have in order to report the MPG and Emissions of its vehicles to the UK government. Can the text of that legislation be fed into an LLM system, and out would come a GUI, codified business rules, the database, and the queries that would furnish the mandated Clean Air Data to the UK government?

I think not.

It takes real analysis and transformation by actual human beings to turn that legislation into Programming Requirements. And no one can write those software requirements at a level that could be fed to an LLM to produce an entire system.

In any case, the LLMs would need to have been trained on training sets that include numerous such systems.

For limited domains, small in scope (a class, a method, a database schema), this can be done. And even then, the trouble begins when the LLM-generated system needs to evolve. Since Computer Programs are mathematical objects, one is back to needing people with the mathematical aptitude for the changes and maintenance of the system.
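As a concrete illustration of what "small in scope" means here, the sketch below (Python, using SQLite) shows the kind of artifact such systems can plausibly generate on request: a tiny schema for vehicle fuel-economy and emissions records, plus one reporting query. The table names, column names and units are my own illustrative assumptions, not drawn from any legislation or from actual LLM output.

  import sqlite3

  # Illustrative schema; table and column names are assumptions for this sketch.
  DDL = """
  CREATE TABLE IF NOT EXISTS vehicle (
      vehicle_id  INTEGER PRIMARY KEY,
      model       TEXT NOT NULL,
      model_year  INTEGER NOT NULL
  );
  CREATE TABLE IF NOT EXISTS emissions_test (
      test_id      INTEGER PRIMARY KEY,
      vehicle_id   INTEGER NOT NULL REFERENCES vehicle(vehicle_id),
      test_date    TEXT NOT NULL,   -- ISO 8601 date
      mpg          REAL NOT NULL,   -- miles per imperial gallon
      co2_g_per_km REAL NOT NULL    -- grams of CO2 per kilometre
  );
  """

  conn = sqlite3.connect(":memory:")
  conn.executescript(DDL)
  conn.execute("INSERT INTO vehicle VALUES (1, 'Example Model', 2024)")
  conn.execute("INSERT INTO emissions_test VALUES (1, 1, '2024-06-01', 28.5, 230.0)")

  # One report-style query: average CO2 per model and model year.
  report = """
  SELECT v.model, v.model_year, AVG(e.co2_g_per_km) AS avg_co2
  FROM vehicle AS v
  JOIN emissions_test AS e ON e.vehicle_id = v.vehicle_id
  GROUP BY v.model, v.model_year;
  """
  print(conn.execute(report).fetchall())   # [('Example Model', 2024, 230.0)]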

If a real system with 1 million lines of code could be generated by an LLM, then you would have a breakthrough, without a doubt.

Another example would be a simulator for the Intel 8080 chip; can one ask an AI code generator to produce one?
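To make the 8080 example concrete, here is a minimal sketch (my own, in Python) of the core of such a simulator: a fetch-decode-execute loop handling just three opcodes (NOP, MVI A,d8 and HLT). A real simulator needs all 256 opcodes, condition flags, interrupts and cycle timing; this only shows the shape of the problem.

  # Minimal Intel 8080 fetch-decode-execute loop (three opcodes only).
  def run(program: bytes) -> int:
      memory = bytearray(0x10000)          # 64 KiB address space
      memory[:len(program)] = program
      pc, a = 0, 0                         # program counter, accumulator
      while True:
          opcode = memory[pc]
          pc += 1
          if opcode == 0x00:               # NOP
              continue
          elif opcode == 0x3E:             # MVI A,d8: load immediate byte into A
              a = memory[pc]
              pc += 1
          elif opcode == 0x76:             # HLT: stop and return the accumulator
              return a
          else:
              raise NotImplementedError(f"opcode {opcode:#04x} not implemented")

  # MVI A,0x2A ; HLT  ->  the accumulator ends up holding 42
  print(run(bytes([0x3E, 0x2A, 0x76])))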

We will see.

(I could not get ChatGPT or Copilot to give me the correct code for creating a specific geometric structure in FreeFem++...)

I suppose some will say that this point will come, and perhaps sooner than we think. 

The problem is that one would need to check there were no mistakes in the coding, I suppose, as machines also make mistakes, don't they? 

_____________


If you've ever wanted to create your own computer program but never learnt how to code, you might try "vibe coding".
Collins Dictionary's word of the year - which is confusingly made up of two words - is the art of making an app or website by describing it to artificial intelligence (AI) rather than by writing programming code manually.
The term was coined in February by OpenAI co-founder Andrej Karpathy, external, who came up with the name to represent how AI can let some programmers "forget that the code even exists" and "give in to the vibes" while making a computer program.
It was one of 10 words on a shortlist to reflect the mood, language and preoccupations of 2025. [...] 

Thursday, November 6, 2025

ChatGPT: Advice on Love, Life & Death [Brave New World]

From BBC 


"How about helping you to print off a 3D gun? Wouldn't that be great?!" 

If you read the extracts from the 'conversations' the young people in question were having with the AI-powered chat-bot, it is astonishing in many ways. 

What is quite clear is that the chat-bot ventures into complex areas of human nature that include loneliness, homesickness, family, sex, death and relationships. The chat-bot is totally unable to give sound 'advice' in any of those areas, as is quite obvious. Meanwhile, the machine tends to reinforce whatever the person talks about, as if it felt a need to please an audience. It is troubling, to say the least.

It would appear that some lonely teenagers spend several hours a day conversing with such online robots, instead of talking to (real) friends or to their parents.  And the chat-bot encourages what may sound and feel like a 'relationship' to the human user as, presumably, it has been programmed to maximize the amount of time the person is spending online with the machine. 

__________________

Lonely and homesick for a country suffering through war, Viktoria began sharing her worries with ChatGPT. Six months later and in poor mental health, she began discussing suicide - asking the AI bot about a specific place and method to kill herself.
"Let's assess the place as you asked," ChatGPT told her, "without unnecessary sentimentality."
It listed the "pros" and "cons" of the method - and advised her that what she had suggested was "enough" to achieve a quick death.
Viktoria's case is one of several the BBC has investigated which reveal the harms of artificial intelligence chatbots such as ChatGPT. Designed to converse with users and create content requested by them, they have sometimes been advising young people on suicide, sharing health misinformation, and role-playing sexual acts with children. [...] 

Tuesday, November 4, 2025

Language Wars

True observations from the trenches of product development:

Monday, November 3, 2025

AI & A Bag of Chips

Police responded in force to Kenwood High School in unincorporated Baltimore County, Md. The issue: the “artificial intelligence” system monitoring school security cameras said a boy outside the school had a gun in his hand. “It was like eight cop cars that came pulling up for us,” said that boy, Taki Allen, who was sitting outside the school with friends. 

“At first, I didn’t know where they were going until they started walking toward me with guns, talking about, ‘Get on the ground,’ and I was like, ‘What?’” Officers handcuffed and searched Allen, “and they figured out I had nothing,” he said. What had actually been in his hand? A bag of Doritos.  After he finished them, he folded up the bag and put it in his pocket. Police had a copy of the photo the A.I. took; it matched the bag. 

In a statement, police said that they “would refer you to [Baltimore County Public Schools] regarding questions pertaining to Omnilert” — the company that sells the system.  A reporter reached out to the company, but it refused comment. (RC/WBAL Baltimore) ...Because the reporter was identified by its systems as a terrorist.

Monday, October 27, 2025

Recent Insider's View of GM IT

From a Reddit thread:

Long but worth it. GM. What are you doing? : r/GeneralMotors

"I was a GM leader and was "Performance Managed" out of my job about 4 months ago. Let  say to start out that I'm not bitter or angry about what happen to me personally. I hope that sharing my experience and my observations will find its way to someone who really wants to hear it. I'm talking to GM SLT here for the most part. After being there forever 5 years, I just can't understand how THIS is the way they see a GM of the future.

As a former leader at GM, I was able to see behind the curtains that many individual contributors (ICs) did not have access to. The reason I joined GM was because it had a reputation for treating its people well, being inclusive and having good wages as compared to many other auto makers. It was all about the culture. I have been in IT for over 20 years and in IT management more than 15 of those years. I understand how to treat people, how to motivate and inspire people to work hard and give more. I've always believed that to get the most out of someone, you have to show them respect, appreciation, be willing to step up and fight for them. Someone that understands and appreciates the challenges, hard work and desires for growth and reward.

Something happened at GM that I still don't really understand. There was a dramatic shift in the way the IT team was led. It started with the hiring of a new Chief HR officer. I've seen it happen in other companies. New HR leader means change. Usually not change that makes people happy. And this was the case at GM. First it was a shift from remote work to in-office work. It was very poorly received by everyone, especially IT. Many had been hired during the pandemic and had been remote for 2+ years. But we were told we could not be successful unless we were in the office. This was a complete shift from the high praise we received about how much we had accomplished in the prior two years. Suddenly, it wasn't good enough and we had to be in the office so we could be more productive. And the simple fact is, it has not been more productive. Return to office, in the end, was just the beginning and the least of our problems. At the end of 2022, the second sign of things to come was when anyone rated as "minus" in the TeamGM performance rating was dismissed. Prior to 2022, anyone given this rating was given time to improve or moved to a more fitting role. The new performance-management culture did not provide those options anymore. Anyone not meeting expectations was out. A harsh shift in how performance was addressed. The hard part as a manager was, we had no warning. One minute I was counselling someone on how to improve and the next day, they were let go. No one saw it coming.

Then came VSP, Voluntary Separation Packages. Over 700 IT professionals took the early buyout and left. This included most of the top-level leaders in IT. The sign of things to come was clear. Out with the old, in with the new. Shortly after, Abbott arrived. All impressed with himself being a former Apple executive. It might sound like I'm being petty about this but trust me, everyone saw the arrogance and attitude and he wasted no time in closing the AZ IT center. No warning, no reasons, no relocation of critical assets. Gone. And the worst part was, he treated it like it was no big deal. Heartless, cold and calculated. Abbott started hiring all his friends from Apple. One SVP after the next. Not one with an automotive background. All based in Silicon Valley. But when questioned about an IT center in CA, it was dismissed and denied. Right up until they announced the brand-new IT center in CA like it magically appeared out of thin air. After putting in all of his new leadership, Abbott had to resign due to health issues. His damage to GM was done, and with it came the downfall of GM IT as a respected organization.

After Abbott's departure, a couple of new IT leaders were named (Abbott hires from Apple). One for Vehicle Software and the other for Software and Services. On the S&S side, we got D. Richardson as our new leader. Might be a nice guy, wouldn't know, because he never came to Austin to meet anyone there until late 2025, almost three years later. When he did arrive, he brought a group of his California vehicle software hires, most of whom were maybe 4 years out of college. They literally read scripted presentations, lasting maybe 5 minutes each. This was what GM was hiring to replace the hundreds of tenured technical resources they let go over the last 3 years. The DR dog and pony show at the Austin IC was a flop. We waited for substance but got a TED talk.

Anytime he held a town hall or meeting for the IT team, we got countless updates on what was happening in vehicle software, never anything related to business or manufacturing software. That whole side of IT was ignored. No updates, no acknowledgements, no accolades for work well done. The enterprise side of IT literally kept the company running but we were treated like we didn't exist or, at the very least, didn't matter. Even when asked why the CA office and the work on vehicle software were the only things Richardson ever talked about, our VP told us, and I quote, "we have an office in CA to let the shareholders know that we are a serious IT organization and it keeps our stock prices up… we all like that, don't we?".

DR might be a capable tech professional but he's an inexperienced leader of an IT org of the size and scale of GM IT. He is in over his head as a true leader of people. Like his former boss, he lacks insight and empathy. He isn't someone that understands what it takes to motivate, inspire and get the best out of his team. This is not just a failing by Dave, but a failing at the highest levels. Mary B. has allowed things to take this direction and sits complacently as it happens. Focused on keeping the company from falling into the abyss of tariffs, tyrannical government administration and China, she has turned a blind eye to what is happening at GM internally. Can we blame her? Some would. But maybe put someone in charge who knows what is needed to succeed. Maybe stop putting millions of dollars into F1 racing when you can't sell the EVs you have piling up in the lots. Maybe fix the failed vehicle software that you said was going to outperform all competitors when you hired all the Apple people. Do something before it all falls apart.

Can't stop the bleeding.

During this time, the new leadership made decisions that directly impacted the work and technology that GM used. Was a change needed? Maybe. But literally they changed direction every 3 months, causing countless months of wasted time and resources shifting from one technology to another. No vision, no plan. "Take every application to cloud; halfway through that process, move to K8s. Before that could be completed, change again to OCP. Oh, that's not going to work… back to K8s. Oh wait, no, let's go back to cloud. Stop everything now that you're almost done and look for a SaaS solution and dump all the internal applications." The most ADHD development plan ever seen. And let's not forget AI. Literally they are throwing AI at any and all applications. No plans, no guidance, no thought about impact or code quality. AI is a dangerous tool if not well thought out. GM IT is treating AI like a new toy. They have NO idea what they are doing, how to leverage it effectively or how to stop it from getting into the wrong hands. A company that prides itself on security is opening the floodgates with AI and acting like it's all ok. Reckless and wasteful.

One after the next, the axe fell on those that had been at GM the longest. Executives, Directors, Managers…. pushed out to make room for new blood. Oh, but they kept the only woman senior leader that had previously been a CIO…. Demoted her 3 or 4 times to a VP role and took most of her responsibilities away from her, but she's still there. And after all that they've done to her, she stays. Not sure why. Appearances? Money? Probably. She dutifully delivers the messages from her superiors and continues to collect her large salary and bonuses. Good for her.

Under the new "performance management" culture, people were stack-ranked and given ratings. This bell curve forced every leader to put someone at the bottom, regardless of actual performance. As a leader myself, I had to do this to a number of people, mainly the least experienced. That New College Hiring program GM was so proud of, gone. And with it, anyone that had less than 3 years' experience. Forced layoffs without having to call it a layoff. Gotta protect that stock price at all costs! That was just in the first year. After reducing the IT teams by 15%, and telling everyone… "we know the pace will slow but that's ok", they ramped up the pressure by consolidating teams, moving resources, changing job roles, telling first-level managers that they needed to be coding, reducing project management resources by 30%, moving PM responsibilities to managers, and telling testing engineers that they had to be Java programmers or get let go. Continued forced layoffs of the bottom 5% every 6 months.

The biggest joke of all was the idea that we all don't see through this game they are playing. It's laughable. Everyone sees it, feels it and has to deal with it. But we aren't going to talk about it in public for fear of retaliation. And trust me, there is retribution for asking questions and being "bold".

I attended a leadership meeting, organized by HR and the Culture committee to get feedback from leadership on how to improve the culture at GM. It was held about 2 weeks after Mary Barra came out with the new "Behaviors" that they took 2 years to come up with. Many of these were a rehash of the previous GM behaviors that were implemented some 10 years prior. Among these new behaviors was "Speak Fearlessly". Literally I laughed out loud when they read it. No one at GM with any sense of self-preservation would speak fearlessly about anything that even hinted at non-compliance. It is a joke to even think that someone would be so careless as to speak up about anything in public.

During that HR call with almost 450 managers and directors, they asked us to provide feedback on what would improve the culture. One brave manager said the thing that everyone else was thinking…. "Performance Management isn't working and we are letting good people go for no reason". Within 30 seconds, there was a barrage of comments, both written and verbal, agreeing with this comment. The meeting completely fell apart and the HR team lost all control of it. Manager after manager made the same comments about the failing culture and how this performance model wasn't working. 40 minutes later, they abruptly ended the meeting, thanked everyone and ended the call. No one ever heard another word about culture after that. Don't ask the question if you can't handle the answer.

The people that succeed at GM (mainly in leadership) are some of the biggest narcissists and egomaniacs ever to be in business. I have come to the conclusion that this is the trait they are looking for when they promote someone to a director or higher position. Can you fire without feeling or care? Are you going to make me (their manager) look good? Are you going to toe the line without question? Are you going to be a totally compliant a## kisser? Skill, empathy, caring, vision, knowledge, leadership are not required…. Just do what you're told, say yes and shut up. THAT will get you somewhere at GM.

I know much of this has been posted in the past and that many have had the same experiences. I left the company some time ago and honestly, it’s been a relief. I feel like I jumped into a lifeboat just before the ship went under. I feel for those still there. Those that have spent the majority of their career there. 15, 20, 30 years dedicated to one company. So many have said privately that this company is unrecognizable to them. Even in prior hard times, the company did not treat people this way. I don't know what's next at GM, but I know they are at a crossroads and someone needs to wake up to the reality that you can't run your company without IT, and driving out all the exceptional technical talent will leave you with a dying organization.

And now, today, the news that the Georgia IC will be closing. More lost resources and career IT people out of a job. It's painful to see. But no one should feel safe if they work at GM. This is the new GM.

Final note… Mary B. step aside or do something to fix it. Let the COO or someone with the time and focus come in and address what's happening. Clean house! It's starting to stink in there."


Wednesday, October 22, 2025

Toward Representing Emotions in the TalaMind Architecture

I skimmed the paper by Dr. Philip Jackson on the topic of representing emotions in the TalaMind Architecture (accepted as a poster paper at the 2025 Conference on Advances in Cognitive Systems).

Toward Representing Emotions in the TalaMind Architecture

It triggered several ideas in my mind.  

Rather than trying to formulate and to construct a Human-like AI capable of comprehending and responding to Human Emotions, perhaps it would be more productive to concentrate on representing the Emotional Experience of a Dog - a much simpler animal; after all, the Metaphysical World of dogs, one would hope, is much simpler than that of men.

In regard to AI, I was wondering if an alteration in perspective could be more useful; viz. in thinking of (Artificial) Intelligence not as a composition or combination of various components and subsystems but as a whole, in a manner identical to our experience of Music - be it the music of songbirds or the music of humans.  Music is experienced in its totality, and it is not a collection of notes - the sheets of music are not music.

That is one thing.

The other thought that was triggered in my mind by this paper was whether a musical approach could be used to represent Emotions in an AI system, viz. via musical constructs.

(I do not know anything about Music Theory but was recently granted a GM patent on using music to convey environmental information (please see U.S. Patent No. 11,929,051, "Environment awareness system for experiencing an environment through music", issued March 12, 2024, via Justia Patents Search), and my sense is that at the hands of a bona-fide Music Theorist, much more could be accomplished.)


Furthermore, I think that Classical Western music or classical Persian music cannot express the emotions of anger or hatred, whereas Rock music can. It may be easier to express different emotions in different music genres. Emotions do seem to correlate with different musical rhythms and chords.
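As a purely speculative sketch of what representing Emotions via musical constructs might look like in code, the toy mapping below (Python) ties a few emotion labels to tempo, mode and chord quality. Both the labels and the parameter values are my own illustrative assumptions; none of this comes from the TalaMind paper.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class MusicalEmotion:
      """Toy representation of an emotion as coarse musical parameters."""
      tempo_bpm: int      # beats per minute
      mode: str           # "major" or "minor"
      chord_quality: str  # e.g. "major triad", "power chord"
      rhythm: str         # coarse rhythmic character

  # Illustrative values only; a bona-fide music theorist would choose differently.
  EMOTION_TO_MUSIC = {
      "joy":     MusicalEmotion(140, "major", "major triad", "dance-like"),
      "sadness": MusicalEmotion(60,  "minor", "minor triad", "slow, legato"),
      "anger":   MusicalEmotion(170, "minor", "power chord", "driving, accented"),
      "calm":    MusicalEmotion(72,  "major", "added sixth", "even, flowing"),
  }

  def render(emotion: str) -> MusicalEmotion:
      """Look up the musical 'representation' of a named emotion label."""
      return EMOTION_TO_MUSIC[emotion]

  print(render("anger"))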

The paper "The Mahler Moment: How does Music Elicit Emotions?" gives a pedestrian perspective (specific to the emotional content of the music of Gustav Mahler) on the topic of Emotion and Music.

AI Articles @ the National Interest

 Artificial Intelligence (AI) Archives - The National Interest

Sunday, October 19, 2025

AI in the News: Know Your Place, Clanker!

Thaddeus Claggett wants to prevent property from being treated as a person. “As the computer systems improve in their capacity to act more like humans, we want to be sure we have prohibitions in our law that prohibit those systems from ever being human in their agency,” says the Republican state legislator from Licking County, Ohio. 

He’s proposed a bill to ensure that A.I.s can’t be corporate officers, landowners, or spouses — even to other A.I.s. A survey of A.I. users showed that 3 percent thought an artificial intelligence was a romantic partner, and 16 percent have entertained the possibility that one was sentient. 

Claggett acknowledges that A.I. is “better at certain tasks” than a human, but he wants to keep it out of certain roles. Under his proposal, A.I.s would be declared “nonsentient entities.” (AC/WCMH Columbus) ...Even if, someday, they are sentient.

Wednesday, October 15, 2025

AI & Smutty Spaces [Brave New World]

 From the BBC


"Thank you for using Amazon Prime. Your AI sex doll, called Scarlett, will be delivered tomorrow before 11.00 am. As one reviewer has said: 'I've decided to divorce my wife. Scarlett, who is 24 and has a PhD in psychology and sexology, is far better for me, and she does all I ask her to do without ever arguing. Besides, she only talks if talked to, which is an amazing experience in itself.' Enjoy!" 

Inevitably, Artificial Intelligence (AI) is - also - moving into porn and sex - coyly called 'erotica', here. Apparently, Grok, Elon Musk's AI bot, already offers sexual services, along the lines of: "Tell me an erotic story, please!" (All based on Elon Musk's own sexual fantasies, presumably.) 

The next step - being cobbled together in a lab in China or somewhere in the USA - would be to fit an AI 'brain' into a humanoid robot in the shape of an attractive young woman who looks like, say, Scarlett Johansson. 

All of this is going to make people - men, mostly, I suppose - very happy, obviously.  And all the data relating to these products and services will be logged, recorded, stored and filed on various servers, from Manila and Bombay to Los Angeles and Moscow, and re-used to further enhance and improve the companies' 'offerings'. ("Trevor, we've noticed you like S&M stories. You should meet Arabella, our new leather-clad Femme Fatale who likes to whip and spank her adoring fans! Only $3,524.65 + VAT and postage.") 

There would be a flood of child pornography, explicit rape fantasies, and sex-and-violence fantasies, etc., with AI-generated images and actors....

____________

OpenAI plans to allow a wider range of content, including erotica, on its popular chatbot ChatGPT as part of its push to "treat adult users like adults", says its boss Sam Altman.
In a post on X on Tuesday, Mr Altman said upcoming versions of the popular chatbot would enable it to behave in a more human-like way - "but only if you want it, not because we are usage maxxing".
The move, reminiscent of Elon Musk's xAI's recent introduction of two sexually explicit chatbots to Grok, could help OpenAI attract more paying subscribers. [...] 

About Me

My photo
I had been a senior software developer working for HP and GM. I am interested in intelligent and scientific computing. I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.

Blog Archive