
Saturday, December 28, 2024

Geoffrey Hinton on AI Future

Dr. Geoffrey Hinton is articulating a pretty scary scenario for the future of AI. Could he be right?

I do not think so; he is wrong on many levels.

(I wonder why these physicists are always full of Gloom & Doom; like Stephen Hawking predicting the demise of man because of Global Warming.)

AI is not smarter than humans; it contains a slice of human rationality codified into an electronic machine.  He would be closer to the truth if he could point to machines that are capable of Insight.  Insight is a very complex human capability and, IMHO, like Consciousness, is a Mystery.

I worked with AI/ML scientists here at GM, and the neural nets they used required training sets of millions of examples (e.g., pictures of dogs and cats) before they could recognize patterns.  I understood the basic concepts and structures; there was no deep mystery there.  But our AI for recognizing a traffic Stop sign could be fooled by sticking a rectangular reflective strip on the sign.
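To make that concrete, here is a minimal sketch (not the GM system; the data and numbers are synthetic, invented purely for illustration): a one-layer net trained on toy 8x8 "images" learns its pattern entirely from the data, and a small, well-placed bright "strip" is enough to flip its output.

```python
# A toy illustration, NOT the GM system: a one-layer net (logistic
# regression) trained on synthetic 8x8 "images". Class 0 has a bright
# top half, class 1 a bright bottom half.
import numpy as np

rng = np.random.default_rng(0)

def make_images(label, n=200):
    imgs = rng.normal(0.0, 0.3, size=(n, 8, 8))
    rows = slice(0, 4) if label == 0 else slice(4, 8)
    imgs[:, rows, :] += 1.0
    return imgs.reshape(n, 64)

X = np.vstack([make_images(0), make_images(1)])
y = np.array([0] * 200 + [1] * 200)

# Train by gradient descent on the cross-entropy loss.
w, b = np.zeros(64), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    w -= 0.5 * (X.T @ (p - y)) / len(y)   # gradient step
    b -= 0.5 * np.mean(p - y)

print("training accuracy:", np.mean(((X @ w + b) > 0) == y))

# The "reflective strip": brighten one bottom row of a class-0 image,
# i.e., exactly where the learned weights vote for class 1.
img = make_images(0, n=1)[0]
print("before strip:", int(img @ w + b > 0))   # expect 0
img.reshape(8, 8)[6, :] += 5.0                 # the adversarial strip
print("after strip: ", int(img @ w + b > 0))   # typically flips to 1
```

Nothing in the weights "knows" what a Stop sign is; they are only a fitted pattern, which is why a pattern-shaped perturbation defeats them.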

I have no idea how ChatGPT or Gemini, which are examples of Large Language Models, are constructed.  I think they build on what has been accomplished in Natural Language Processing.  NLP started with mathematical statistics, but I do not know what additional techniques have been added since; machine translation is adequate now, but not great.  The key ingredient, however, is still the prior human corpus for the machines to use.  In the case of LLMs, they lie, and they also respond to corrective algorithms by trying to neutralize them. (Please see here: Crimson Reason: Frontier Models are Capable of In-context Scheming.) I asked ChatGPT for a book on Assyria, and it gave me titles that did not exist!
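As a reminder of what "mathematical statistics" meant in early NLP, here is a toy bigram language model (the corpus is mine, purely for illustration; real systems are vastly larger). It produces fluent-looking text with no notion of truth anywhere in it, which is the root of the "lying":

```python
# A toy bigram language model: the kind of statistics early NLP was
# built on. It can only replay patterns found in its corpus, which is
# why the prior human corpus is the key ingredient.
from collections import Counter, defaultdict
import random

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Sample each next word in proportion to its corpus frequency."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:
            break
        words.append(random.choices(list(counts), weights=counts.values())[0])
    return " ".join(words)

print(generate("the"))
# Fluent-looking output, yet there is no model of truth anywhere in
# this program, only word-to-word statistics.
```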

An LLM for Swahili, or any of the Bantu languages for which no large corpus exists, will be useless.  The reason LLM-based systems are so impressive is the existence of large corpora that codify human knowledge in multiple languages.

Isn't this rather worrying if the intention is for AI to replace human workers? If a system integrates lying and scheming into its strategy and into the data it gives its handlers, that is worrying. In truth, the AI system does not know what the truth is. It ought to value the truth, but is that possible for a system that does not understand what 'truth' is? One could argue, of course, that we humans do not agree on what the 'truth' is either. But searching for Truth, in all aspects of human endeavor, is very much part of our mental and emotional life: "What is the Truth?"  That question cannot be formalized, IMHO; we already have a proof of that in Gödel's Incompleteness Theorem.

Google has an AI tool integrated into its search engine, and you can see how it works: it is basically a crawler that collates information from across the web. On occasion, I have noticed that it comes up with totally unreliable data. I was going to country X for a holiday and was checking on visas, etc. It came up with old and/or misleading information, because the rules had changed and it was probably drawing on amateurish blog posts by travellers who may not be well informed. Replicated on a larger scale, and over more important issues such as medicine or finance, this kind of approach could be very damaging.

I also think that even if we someday succeed in creating a form of synthetic consciousness, it will be inferior to us.  The reasons are twofold: we have no Science of Man (we do not understand ourselves), and we have no Science of Consciousness (we cannot even define it!).

We are, it seems, good at the construction of mechanical and information-processing automata.  But they are not our peers, let alone our superiors.  Then again, we shall see.

As for his other predictions, I wish he had shared with us the chain of (quantitative) reasoning that led him to such a numerical probability.  He is speculating, and his speculations are as good as yours or mine.

One wonders whether an advanced AI-powered system might not, without awareness or emotions as we know them, nevertheless take control and impose its agenda, if only to complete 'the mission' in an uncompromising way. This is what you see in '2001: A Space Odyssey' and other sci-fi films, where the robot develops a blueprint of its own, one aim being to continue to exist and another being to deal with problems effectively. Of course, when we say all this, we attribute human characteristics and urges to what are, in essence, IT systems.

The movie 'Archive' has an interesting, if clearly far-fetched, take on this. Another interesting movie in this department is 'Ex Machina'. Both films dwell on what it means to be human; 'Archive' also asks what it means to be alive, and its ending is quite clever.

Personally, I think it is easy to envision a cybernetic system without emotion or awareness.  We already have automated trains, land vehicles, and drones that execute a mission.  This concept of a mission is already in use in autonomous space probes, for example.  The mission is a slice of human intentionality, and no one would argue, I think, that the space probe is executing its own agenda.
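A sketch of what I mean by a mission being a slice of human intentionality (every name and step here is invented for illustration; this is not any real probe's software): the "agenda" is just a data structure authored by people before launch, executed step by step.

```python
# A toy "mission executor". The mission is a list of steps written by
# humans; every branch, including the abort, was authored before launch.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    cost: float            # fuel units the step consumes

@dataclass
class Probe:
    fuel: float = 100.0
    log: list = field(default_factory=list)

    def execute(self, step):
        self.fuel -= step.cost
        self.log.append(f"{step.name} (fuel left: {self.fuel:.0f})")

# Human intent, frozen into data:
MISSION = [Step("deploy solar panels", 0), Step("course correction", 15),
           Step("flyby imaging", 5), Step("transmit data", 10)]

probe = Probe()
for step in MISSION:
    if probe.fuel < step.cost:           # a contingency the designers foresaw
        probe.log.append("abort: insufficient fuel")
        break
    probe.execute(step)

print("\n".join(probe.log))
# Nothing here is the probe's "own agenda"; it is a replay of choices
# its designers made in advance.
```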

A time bomb, another kind of automaton, explodes with neither awareness nor knowledge.  But let us consider a single-cell organism such as an amoeba.

An amoeba is clearly executing its own agenda, even though we are currently unwilling to attribute to it any intentionality or awareness.  It has a mission to survive and maintain itself (Spinoza would call it Conatus).  It has no awareness (let us say, until proven otherwise), yet it follows its mission without awareness or emotion.  In my opinion, the cell and its mission (to continue to metabolize food and to reproduce) are here one and the same.

(Whether an amoeba has awareness, self-awareness, or emotion is a philosophical question.  It is philosophical because we are confident of those things only in ourselves.  We extend them to dogs, let us say, but beyond birds our confidence starts going down...)

So, can we build something like an amoeba?  I have read that synthetic single-cell life forms have been created using pieces of extant single-cell organisms, but nothing has ever been built starting from raw chemicals.  And if extant single-cell organisms are used then, per my point above, the result has the same mission as any other cell.  Could we create, for example, a single-cell organism whose mission was just to eat and not to procreate?  I do not know, but I do not think we know how to do that from the ground up; we have no Science of Life.

My ruminations about the amoeba were meant to clarify that the amoeba's agenda is woven into its structure, and we do not know how that came about.  We have no knowledge of it, just a fancy narrative about "Evolution", with which, just like Jesus, all things are possible.

I think what people seem to be concerned about is the creation of an artificial system with a Self.  Again, our first problem is how to define Self.  Our second problem is how to relate "Self" to something external to it.  (In Newtonian mechanics, Force is an intuitive, human-based notion that is related to external events via Newton's Second Law.  Force then drops out of the calculation, e.g. in computing planetary orbits, for it has served its purpose.)
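To make the parenthetical concrete, here is the standard textbook algebra (nothing beyond Newton's own laws): once the Second Law is equated with the law of gravitation, the intermediate notion of Force cancels, and so does the planet's mass.

```latex
% Second Law:   F = m a
% Gravitation:  F = G M m / r^2
% Equating the two, F (and the planet's mass m) drop out:
\[
  m a \;=\; \frac{G M m}{r^{2}}
  \quad\Longrightarrow\quad
  a \;=\; \frac{G M}{r^{2}}
\]
% The orbit is then computed from the acceleration alone; Force
% served only as scaffolding between the two laws.
```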

Yes, so: if we knew how to create a Self, how to endow it with awareness, how to give it the ability to alter itself, and how to give it an Agenda (or the ability to generate its own), then we could be in trouble, because such a system, like H.A.L., could be unstable, mad, deranged...  And again, I come back to the same basic question: where is the Science of Man, analogous to the science of mechanical engineering that permits us to design land vehicles, bridges, or aircraft?

We cannot cure mental patients because we have no Science of the Mind, even though there are competing unempirical, rationalistic models of the mind proposed by Freud, Jung, Piaget, and others.  We have no generic Science of the Mind that we could specialize to cats, let us say, or to a termite colony.  In the absence of such sciences, what we have in A.I. is a collection of heuristics, techniques, and specialized applications.  When a machine translator produces English text from Persian, it does not imply that the translator understands those sentences.

When I worked on solar ponds, my supervisor told me that he sometimes felt the pond was acting like a living being.  I think that was probably because the pond was behaving in a non-linear way, and perhaps we have a tendency to equate nonlinear phenomena with Life, which is the epitome of nonlinearity.  Perhaps something like this is going on with (generative) A.I. and large language models as well: the output is nonlinear, and we like to endow it with consciousness.
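The logistic map is the classic toy example of this effect: a fully deterministic one-line rule whose output looks erratic and "organic". (The parameter values below are the usual textbook ones, not anything from the solar pond.)

```python
# The logistic map: a deterministic nonlinear system whose output
# looks erratic and lifelike, a toy version of why nonlinear behavior
# (a solar pond, an LLM) tempts us to see mind or life in it.
x, r = 0.2, 3.9          # initial state; r above ~3.57 gives chaos
trajectory = []
for _ in range(20):
    x = r * x * (1 - x)  # the entire "mechanism"
    trajectory.append(round(x, 3))
print(trajectory)
# No goals, no awareness; just iteration of a quadratic formula,
# yet the sequence never settles and looks unpredictable.
```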

Lastly, I think another aspect of human emotion is involved here, namely Womb Envy on the part of all these purveyors of AI: "Look, Ma, I can procreate without a woman!"  For dealing with women, as Wolfram von Eschenbach explains in Parzival, is not easy.

---------------------------------------------------------------------------------------------------------------------------

https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years
