A site devoted mostly to everything related to Information Technology under the sun - among other things.

Sunday, June 1, 2025

AI, Careers, and Some Thoughts

Following up on my earlier posts about possible job losses due to the adoption of AI: the range of jobs being replaced is surprising, and it is happening already.  

Content creators will be downgraded to the role of proofreader at lower fees, I'd imagine - like translators, who are increasingly asked to tidy up something that has been processed by Machine Translation (MT). 


Last year, a new college graduate at General Motors told me that during his rotations, the duties of one of the roles could be done entirely via ChatGPT.  

Would there still be a need for someone to check the robot's output? 

What if the AI system makes a mistake, no one is checking, and wrong information is sent to a decision-maker higher up in the organizational hierarchy?

That is a valid concern, in my opinion; see, as an example: https://futurism.com/amazon-programmers-ai-dark.  Many observers had anticipated this: less creating and more proofreading of mediocre code churned out by the AI system, more pressure from management to work fast, and so on. 

The output of an LLM-based AI system can, perhaps, get you 80% of the way there.  That is a huge gain in productivity!  Even 50% of the way to a finished output would be an enormous gain!

In my work, I have detected numerous programming mistakes in the output of ChatGPT.  As an experienced programmer, I could detect and try to correct them.  For a less experienced programmer, the situation would be far more challenging.

As it is, senior developers like me are neither appreciated nor considered necessary, and new college graduates are not hired since it is presumed that the LLM could do their jobs, or so the current thinking goes.

Programming and IT are activities in which one automates or otherwise solves different types of business problems.  There is the understanding of the problem by talking to those who need a solution (Requirements Gathering), then modeling, then programming, and then testing.  

The core of any system is its data model, which embodies the ontology of the problem domain.  It models the things that exist in the world according to that particular business activity, and the relationships among them.  This data model is the intellectual property of the business and a form of trade secret.  Data models can be quite complex, and it takes real expertise and knowledge to develop them.  I have often been a consumer of this or that data model realized in a database.  At times, it has taken me weeks to fully understand the data I was working with.  Any odd programmer can write a UI, but it takes real effort to understand a data model, and it is even more difficult to extend one to accommodate changes in the way the business is conducted.
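To make this concrete, here is a toy sketch in Python of what a data model looks like at its simplest: entities and the relationships among them. The domain (a hypothetical order-management system) and all names are invented for illustration; a real business data model would run to hundreds of entities and subtle constraints.

```python
from dataclasses import dataclass, field

# Toy ontology for a hypothetical order-management domain.
# Entities: Customer, Product, Order; relationships: an Order belongs
# to one Customer and contains many OrderLines, each referencing a Product.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Product:
    sku: str
    description: str
    unit_price: float

@dataclass
class OrderLine:
    product: Product
    quantity: int

@dataclass
class Order:
    order_id: int
    customer: Customer
    lines: list = field(default_factory=list)

    def total(self) -> float:
        # A derived fact, computed from the relationships above.
        return sum(line.product.unit_price * line.quantity for line in self.lines)

widget = Product("SKU-1", "Widget", 2.50)
acme = Customer(1, "Acme Corp")
order = Order(100, acme, [OrderLine(widget, 4)])
print(order.total())
```

Even in this miniature form, the hard part is not the code; it is deciding which entities exist, which attributes belong where, and which relationships the business actually needs.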

I do not believe LLMs are ready to tackle complex data modeling tasks, for two reasons.  One, no one is making public the IP embodied in the data models of the systems they use; ergo, insufficient training data.  And two, I think the development of a data model requires insightful thinking.

Even in procedural coding tasks, LLMs are very good at creating code snippets for well-known problems, thanks to their training data sets.  I do not know whether a ChatGPT-like system could write an entire system using programming language A and class library B.  To do that, the first step would be to write the system's requirements without ambiguity, i.e. so that one does not need to go back to the Business for further clarifications.  
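Classic binary search is the kind of well-known problem an LLM handles reliably, precisely because countless worked examples of it exist in the training data. A minimal sketch of such a snippet:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Producing a correct, idiomatic snippet like this is a very different task from writing an entire bespoke system against ambiguous requirements.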

I think that is an impossibility for a human requirements analyst.  Can a ChatGPT-like Business Analyst be created that generates all the right questions combinatorially, presents them to the business users, picks their brains, creates the software system requirements, and generates the system?  I think not, since Human Insight, Human Creativity, and Human Imagination are ingredients that go into the development of a new system.

As a test, I would suggest an interactive system that converses with a human who is trying to invent the first airplane and spits out detailed blueprints and a Bill of Materials for it.  

I mean, had ChatGPT and similar LLM-based AI systems been available in 1900, could someone have asked one to build a flying machine?  Or to supply clear, actionable instructions for what would be needed?  I think not, but I am likely a minority of one among the AI naysayers!

What I am saying could be a bit scary, especially to a layman. The AI system will have difficulties doing the job; organizations are increasingly relying on AI systems nonetheless; an experienced IT engineer such as myself can detect and correct its errors; my work is not valued; I will not be replaced when I leave the company; younger and less experienced programmers wouldn't know better and may not be hired, as AI is 'replacing' them. In conclusion, this sounds like a recipe for disaster. 

I found ChatGPT very good for simple programming tasks - not exceeding a few hundred lines of code - as well as for guidance on how to use an unfamiliar language or programming library; it was like having an assistant nearby.

In 1997, when programming, I had to search a Microsoft CD-ROM for samples and examples of how to use their software.  Microsoft also provided a very extensive sample data model, called the Northwind Database, which many of my former colleagues used as a starting point for creating a bespoke data model for this or that client project.
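For readers who never saw it, the flavor of a Northwind-style starting point can be sketched with Python's built-in sqlite3. The fragment below is a tiny, illustrative subset inspired by Northwind's Customers and Orders tables, not the actual full schema:

```python
import sqlite3

# A minimal, Northwind-inspired schema sketch (illustrative subset,
# not the real Northwind schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (
    CustomerID   TEXT PRIMARY KEY,
    CompanyName  TEXT NOT NULL
);
CREATE TABLE Orders (
    OrderID     INTEGER PRIMARY KEY,
    CustomerID  TEXT REFERENCES Customers(CustomerID),
    OrderDate   TEXT
);
""")
conn.execute("INSERT INTO Customers VALUES ('ALFKI', 'Alfreds Futterkiste')")
conn.execute("INSERT INTO Orders VALUES (10248, 'ALFKI', '1997-07-04')")

# Count orders per customer across the relationship.
row = conn.execute("""
    SELECT c.CompanyName, COUNT(o.OrderID)
    FROM Customers c JOIN Orders o ON o.CustomerID = c.CustomerID
    GROUP BY c.CustomerID
""").fetchone()
print(row)
```

The value of Northwind was never the SQL itself; it was that someone had already done the modeling work of deciding what the entities and relationships should be.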

ChatGPT is doing similar things and more, based on the pre-existing work of human beings.  It has a generative feature that creates new things, e.g. ask it to write a poem praising France in Swahili...

Also, at that time, the Microsoft Visual Studio software wizards generated skeleton applications for Windows programs using the Microsoft Foundation Classes (MFC) framework.  Presently, you can ask ChatGPT to set up a skeleton program using, say, the Akka libraries.

But everything that LLMs do depends crucially on the availability of existing material and solved problems.  Genuinely new things will not come out of ChatGPT; ask it, for example, for the electric field outside of a charged metal torus.

To be honest, I think this degradation of intellectual/white-collar work has been underway across the board since the early to mid 1990s, but due to AI it is now affecting new fields. Email and the Internet meant that people were supposedly able to work fast or faster; rising targets, more pressure, more reporting (to justify your job! - which, in the parlance of Agile and Scrum, would be called the Daily Standup), etc. 

There is a destruction of the value and enjoyment of work. This is very clear. In a way, that is also what young people are rebelling against. I've heard the same from experienced people in TV: a designer/producer in his 50s who couldn't wait to retire and said all the enjoyment of working at the TV network had been drained away.

Employers don't realize that they are destroying not only jobs but also 'work' as a social and cultural good.  

An optimist would say that certain jobs will disappear, and others will emerge. You're still going to need people to watch and monitor the machines, as it were - then again, fewer than before, I'd imagine. 

Some professions are in meltdown. I've read about and talked to designers (including web designers), film-industry specialists, and others, many of them freelancers; suddenly, 80% of the work they used to get is done by AI. And the results are supposedly OK (I can't judge). The companies concerned don't care anyway: the output doesn't have to be very good, just good enough, and the average manager can't tell the difference and doesn't give a damn. 

I actually quite like LLMs, and I think they will revolutionize research and teaching...but they have their limitations (Where is Insight?) ...

The problem, as well, is that the non-specialists who make decisions on budgets and projects (politicians, officials, economists, bankers, etc.) are most probably not able to fully understand all of these issues. They have only a vague idea, and they probably over-estimate the potential of AI, and perhaps under-estimate its potential for failure and error. 

Then again, that's not new. The 2008-2010 banking crisis was also caused in part by the fact that many of the senior directors on the boards of banks didn't actually understand the technology behind the trading programs their traders used. I have read this more than once. They didn't know, didn't understand, or perhaps didn't care what their teams of specialists and traders were actually doing.

But my opinions are and will be dismissed as those of an aged Luddite whose useful shelf life has been long over in any case...


About Me

I had been a senior software developer working for HP and GM. I am interested in intelligent and scientific computing. I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.
