Tuesday, January 13, 2026

Scott Adams: June 8, 1957 - Jan 13, 2026

He was an inspiration to many of us in IT.

He will be missed.

Monday, January 12, 2026

Humanoid Robots for Home [Brave New World]

From BBC

"NEO! Can you please strangle my mother-in-law? She wants it. She's waiting in the kitchen. Thank you." 

"I think it's great. At the moment, it's clear those robots are a bit slow and a bit clumsy, but I think the tech is moving so fast, it's gonna be a new revolution. Still, if it's a Filipino managing the bot from Manila or wherever he is, it's better than having him here in the United States," commented President Donald Trump on X (Twitter). 

"Once the bots are fully up and running, we can kick out all those Mexicans and other illegals who are doing the dumb work no one wants to do any more. It's great. We won't need them at all. They can go home and die. In fact, I'd be in favour of funding the development of armed law-enforcement humanoid bots to help ICE track down all those Latino gangsters and arrest them across the USA. Great stuff. MARF! Make America Robotic Forever!" 

___________

[...] If time was no issue, I could see how having an Eggie or NEO-like bot cleaning up after me and my kids might be helpful.
But NEO and Eggie have a secret weapon - they are being controlled by human operators.
This is the thing the promotional videos don't show - and something that the Silicon Valley companies we visited are keen to downplay.
[...] Bipasha Sen, founder of Tangible AI, is upbeat though about how fast the tech is improving.
"Today people have two aspirations - a car and a house. In the future they'll have three aspirations - a car and house and a robot," she says with a beaming smile.
______________



Sunday, January 11, 2026

Mistakes in Google Medical Summaries

From Grauniad of the UK

I have used the Google AI summaries, to call them that, and, through cross-referencing their explanations and suggestions, I have not found them to be misleading or wrong. However, one has to be careful with AI-generated search results: they can contain glaring mistakes, including in the medical field. This is what the article discusses. 

______________

Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information.
The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.
But some of the summaries, which appear at the top of search results, served up inaccurate health information, putting users at risk of harm.

Friday, January 9, 2026

The Innovator's Toolkit

This is a good book on techniques and approaches to innovation and problem-solving. Its chapters are brief surveys of individual methods, such as TRIZ, with references to more in-depth resources. (Each chapter, in my opinion, could be expanded into a book in itself.)

This book could be useful to inventors, business analysts, requirements analysts, system builders, product owners and others in creative fields.

The Innovator's Toolkit: 50+ Techniques for Predictable and Sustainable Organic Growth, by David Silverstein, Philip Samuel, and Neil DeCarlo (Amazon.com) 

Thursday, January 8, 2026

Quantum Mechanical Theory of Ghosts with ChatGPT

Abstract: A fictional system is introduced as a pedagogical device to unify several elementary topics in quantum mechanics within a single worked example. Using standard textbook formulas, we examine de Broglie wavelength, tunneling, Doppler shift, Compton scattering, and momentum transfer in a consistent, order-of-magnitude framework. No claims are made regarding the physical existence of the system considered.

Introduction

This article presents a pedagogical exercise rather than a physical model of a real system. “Ghosts” are treated throughout as a fictional construct, introduced solely to unify several elementary topics in quantum mechanics—including the de Broglie wavelength, tunneling, Doppler shift, Compton scattering, and momentum transfer—within a single worked example. Standard textbook formulas are applied in an internally consistent manner to emphasize order-of-magnitude reasoning and conceptual coherence, without implying any physical reality for the system described.

Within this fictional framework, ghosts are assumed to penetrate closed doors and interior walls with thicknesses of order \(0.1~\mathrm{m}\), while remaining confined by substantially thicker exterior walls. For instructional purposes, this behavior is modeled using quantum-mechanical tunneling,[1] requiring an associated de Broglie wavelength of comparable scale. We further assume that a typical ghost, in the absence of illumination, can attain a velocity of approximately \(v = 3000~\mathrm{m\,s^{-1}}\).

Mass of a Typical Ghost

Using the de Broglie relation,[2]

$$\lambda = \frac{h}{mv}$$

the mass is

$$m = \frac{h}{\lambda v}$$

Substituting

$$h = 6.626\times10^{-34}~\mathrm{J\,s},\quad \lambda = 0.1~\mathrm{m},\quad v = 3000~\mathrm{m\,s^{-1}}$$

yields

$$m \approx 2.21\times10^{-36}~\mathrm{kg}$$

This mass is roughly \(4\times10^{5}\) times smaller than the electron mass,[3] illustrating why macroscopic tunneling lengths arise in this constructed example.
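The arithmetic can be checked with a short Python sketch; the constants are standard values, and the wavelength and velocity are the assumptions stated above:

```python
# Numerical check of the de Broglie mass estimate.
h = 6.626e-34        # Planck constant, J s
lam = 0.1            # assumed de Broglie wavelength, m
v = 3000.0           # assumed ghost velocity, m/s

m = h / (lam * v)    # from lambda = h / (m v)
print(f"m = {m:.3e} kg")            # ~2.209e-36 kg

m_e = 9.109e-31      # electron mass, kg
print(f"m_e / m = {m_e / m:.2e}")   # ~4.12e5
```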

Kinetic Energy

The kinetic energy is

$$K = \frac{1}{2}mv^2 \approx 9.95\times10^{-30}~\mathrm{J}$$
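As a quick check, using the mass estimated above:

```python
# Kinetic energy of the fictional ghost, using the text's mass estimate.
m = 2.21e-36         # ghost mass, kg
v = 3000.0           # assumed velocity, m/s
K = 0.5 * m * v**2
print(f"K = {K:.3e} J")   # ~9.95e-30 J
```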

Tunneling Through Walls

For a rectangular potential barrier of thickness \(d\), the tunneling probability is approximated by[1]

$$T \approx e^{-2\kappa d}, \quad \kappa = \sqrt{\frac{2m(U-E)}{\hbar^2}}$$

Solving for the barrier height \(U\) gives

$$U = E + \frac{\hbar^2}{8md^2}\left[\ln\left(\frac{1}{T}\right)\right]^2$$

For pedagogical simplicity, we consider the high-transmission limit \(T \approx 1\), yielding \(U \approx E\).
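The inversion can also be done numerically, directly from the transmission formula \(T \approx e^{-2\kappa d}\). The sketch below picks an illustrative transmission of \(T = 0.5\) (an assumption, not a value from the text) and recovers the barrier excess \(U - E\):

```python
import math

hbar = 1.055e-34     # reduced Planck constant, J s
m = 2.21e-36         # ghost mass from the de Broglie estimate, kg
d = 0.1              # interior wall thickness, m

# Invert T ~ exp(-2 kappa d) for a chosen transmission probability,
# then recover the barrier excess U - E = hbar^2 kappa^2 / (2 m).
T = 0.5              # illustrative choice; T -> 1 gives U -> E
kappa = math.log(1.0 / T) / (2.0 * d)
U_minus_E = hbar**2 * kappa**2 / (2.0 * m)
print(f"kappa = {kappa:.3f} 1/m")
print(f"U - E = {U_minus_E:.3e} J")
```

As the code comment notes, taking \(T \to 1\) drives \(\kappa \to 0\) and hence \(U \to E\), which is the high-transmission limit used in the text.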

Interaction with Light

Doppler Shift

For incident light of wavelength \(\lambda_0 = 600~\mathrm{nm}\), the relativistic Doppler shift gives

$$\lambda' \approx 599.994~\mathrm{nm}$$
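This value follows from the exact relativistic Doppler formula for an approaching source, which at this small \(\beta\) reduces to the familiar first-order shift \(\Delta\lambda/\lambda \approx v/c\):

```python
import math

c = 2.998e8          # speed of light, m/s
v = 3000.0           # assumed ghost velocity, m/s
lam0 = 600e-9        # incident wavelength, m

# Exact relativistic Doppler shift for a source approaching the observer.
beta = v / c
lam_shifted = lam0 * math.sqrt((1.0 - beta) / (1.0 + beta))
print(f"lambda' = {lam_shifted * 1e9:.3f} nm")   # ~599.994 nm
```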

Compton Scattering

For backscattering (\(\theta = \pi\)), the Compton shift is[4]

$$\Delta\lambda = \frac{2h}{mc} \approx 2000~\mathrm{nm}$$

placing the scattered radiation in the infrared.
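Plugging the ghost mass into the Compton formula confirms the shift:

```python
import math

h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
m = 2.21e-36         # ghost mass from the de Broglie estimate, kg
theta = math.pi      # backscattering

# Compton shift: delta lambda = (h / m c) (1 - cos theta)
delta_lam = (h / (m * c)) * (1.0 - math.cos(theta))
print(f"delta lambda = {delta_lam * 1e9:.0f} nm")   # ~2000 nm
```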

Momentum Transfer

The momentum change associated with photon scattering is

$$\Delta p \approx 1.36\times10^{-27}~\mathrm{kg\,m\,s^{-1}}$$

which, when applied relativistically, leads to a final velocity approaching \(0.9c\).[5]
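The momentum bookkeeping behind these two numbers can be sketched as follows. For backscattering the photon reverses direction, so the ghost absorbs the sum of the incident and scattered photon momentum magnitudes; the final velocity then comes from the relativistic relation \(p = \gamma m v\):

```python
import math

h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
m = 2.21e-36         # ghost mass from the de Broglie estimate, kg
lam0 = 600e-9        # incident wavelength, m

lam_scat = lam0 + 2.0 * h / (m * c)   # Compton-shifted wavelength, theta = pi
# Reversed photon: ghost picks up |p_in| + |p_out|.
dp = h / lam0 + h / lam_scat
print(f"dp = {dp:.3e} kg m/s")        # ~1.36e-27

# p = gamma m v  =>  beta = p / sqrt(p^2 + (m c)^2)
beta = dp / math.sqrt(dp**2 + (m * c)**2)
print(f"v/c = {beta:.2f}")            # ~0.90
```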

Discussion

The exaggerated numerical results obtained here are a direct consequence of the intentionally extreme parameter choices used to illustrate quantum-mechanical principles. The example is intended to provoke discussion, reinforce scaling arguments, and encourage careful examination of assumptions when applying familiar formulas beyond their usual domains.

Acknowledgments

The problems presented here are adapted from a homework assignment by the late Professor Karl T. Hecht of the University of Michigan in Ann Arbor. The solutions are provided by the author. ChatGPT produced the same results and created the HTML version of this work.

The author has no conflicts to disclose.

References

  1. J. J. Sakurai and J. Napolitano, Modern Quantum Mechanics, 2nd ed. (Addison-Wesley, San Francisco, 2011).
  2. D. J. Griffiths and D. F. Schroeter, Introduction to Quantum Mechanics, 3rd ed. (Cambridge University Press, Cambridge, 2018).
  3. R. Resnick, D. Halliday, and K. S. Krane, Physics, 4th ed. (Wiley, New York, 1992).
  4. A. H. Compton, “A quantum theory of the scattering of X-rays by light elements,” Phys. Rev. 21, 483–502 (1923).
  5. M. S. Longair, High Energy Astrophysics, 3rd ed. (Cambridge University Press, Cambridge, 2011).

Sunday, January 4, 2026

AI Systems Plotting Self-preservation [Space Odyssey]

From Grauniad of the UK


"I'm sorry, Dave. I'm afraid I can't do that." 

The optimistic and naive view is that those advanced AI systems are, you could say, neutral in the way that they operate: you program them and tell them what to do; you set the parameters; they process the data and come up with solutions that help you save time and solve problems. This is all true, of course. 

I am not a specialist at all but, as the article points out, this is not where we are. Those highly advanced AI systems are learning by themselves and learning from each other. It is absurd to call them 'intelligent' in the way that a person is, and they are totally lacking in emotional intelligence or ethical standards, although you can input data in order to make them, somehow, mimic what a human reaction (of an emotional or ethical nature) would be in a given situation ("I am talking about a terrible accident that caused great suffering to several human beings and I must remember to express sadness and commiseration, or else it may offend human beings reading my comments"). 

The problem is if the AI system develops an agenda of its own. If the AI system is built upon the notion that it is useful, smart and capable, it will be logical for the machine to try and preserve its 'mission', hence its existence. Apparently, this is happening already and the AI system is prepared to go against instructions it has been given in order to keep going, when it feels under threat. It has been written about already, in fact. 

As for 'unplugging the machine', I am no expert but I wonder how it can be done: the AI system is a self-learning network, presumably scattered over thousands of servers in dozens of countries. How can you shut it down? How can you control fully where the AI system has saved data and programs helping it to run? 

And the idea of giving [human] 'rights' to AI machines is truly barmy: how demented the 'woke' lobby is never ceases to surprise me. As if those advanced robots were sentient beings that need to be 'protected'. Obsolete machines and outdated software programs will be considered to be vulnerable robots suffering from disabilities next, I suppose! 

_________


A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.
Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.
Bengio, chair of a leading international AI safety study, said the growing perception that chatbots were becoming conscious was “going to drive bad decisions”.
The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.


_____________

Note: Phrasly.AI states that this post is 100% Human generated.