Crimson Reason

A site devoted mostly to everything related to Information Technology under the sun - among other things.

Tuesday, October 8, 2024

Loss of Confidence in IT

This article very much jibes with my own experiences and observations as a Senior Software Developer at GM IT during my years with General Motors.  What the article fails to mention, and what I experienced at GM IT, is the Fear and Hierarchy that permeated that entire IT organization.

At GM, the Business leaders' loss of confidence in IT manifested itself in the creation and staffing of an entirely new GM IT facility in Mountain View and the reduction of headcount at the other GM IT locations by 2,300 persons in about a year.

Failure always has consequences.

https://www.cio.com/article/3550623/many-c-suite-execs-have-lost-confidence-in-it-including-cios.html

Wednesday, September 25, 2024

AI and Philosophy

We face many problems and issues for which we would like to have answers or even partial answers.

We also have available to us the works of all the Thinkers and Philosophers of Europe and Western Asia written over the last 2500 years.

I think it would be a good idea to develop (train) Large Language Models for Plato, Aristotle, Ibn Sina, Ibn Rushd, Saint Thomas Aquinas, and others.  In this manner, we could pose questions to them, in a metaphorical way, and get their answers.  Needless to say, such LLMs would need to include our contemporary context as well: the revolutionary changes that the Empirical Sciences have caused in our understanding of the world, as well as the archaeological and historical knowledge gained over the last 200 years.

Furthermore, such an approach may be extended to the very large corpus of religious commentaries and expositions of the extant religions: Judaism, Christianity, Islam, the Hindu Spiritual Doctrines, and Buddhism.

Ideally, one could pose moral, metaphysical, epistemological, and ontological questions to these LLMs and see what they come up with.  They are meant as hypothesis generators and discussion starters, not as substitutes for Thinkers and Philosophers.

News of IBM (and AI)

https://www.theregister.com/2024/09/24/ibm_layoffs_ai_talent/

We read: "Senior software engineers stopped being developed in the US around 2012... No country on Earth is producing new coders faster than old ones retire. India and Brazil were the last countries and both stopped growing new devs circa 2023. China stopped growing new devs in 2020."

Yet no employers have been begging me to join their organization!

I must be missing something crucial.

Tuesday, September 10, 2024

Pointing a User Story

When one attempts to point a User Story, one needs to take into account the three major factors pertaining to its delivery and completion, viz. Complexity, Effort, and Uncertainty, in order to arrive at its points.

Factors for Estimating a User Story

One way of doing so would be to point each of the 3 factors above separately, using the usual Fibonacci sequence to assign an independent value to each factor.

The next step would be to compute the geometric mean of the three factors and map the result back onto the Fibonacci scale.  For three factors, the geometric mean is simply the cube root of their product.  A geometric mean is used because Life is, in general, rather non-linear.

The advantage of this approach would be to explicitly call out each factor in the pointing process and to cause Developers to consider each factor independently.

The implementation could be a simple HTML page with 3 dropdowns (that are prepopulated by Fibonacci numbers): sel-options-complexity, sel-options-effort, and sel-options-uncertainty.

The JavaScript code - using jQuery - would be as follows:

<script>

    // Invoked whenever the user changes any of the three dropdowns.
    function GetSelectedValue() {
        var complexity = Number($('#sel-options-complexity').val());
        var effort = Number($('#sel-options-effort').val());
        var uncertainty = Number($('#sel-options-uncertainty').val());

        // Geometric mean of the three factors: the cube root of their product.
        var score = Math.cbrt(complexity * effort * uncertainty);

        score = fitToFibonacciNumber(score);

        $('#combined-points').text(score);
    }

    // Maps the geometric mean onto the Fibonacci number at or below it,
    // capping the result at 34.
    function fitToFibonacciNumber(value) {
        if (value < 2) return 1;
        if (value < 3) return 2;
        if (value < 5) return 3;
        if (value < 8) return 5;
        if (value < 13) return 8;
        if (value < 21) return 13;
        if (value < 34) return 21;
        return 34;
    }

</script>

The method GetSelectedValue() would be invoked every time the user selects a value from any of the dropdowns.  The result is then displayed on the UI (not shown here). 

In principle, any other non-linear function could be used, such as a logarithm or even the Sigmoid function.  One has to experiment with a stable Agile team over time to determine suitable alternatives to the geometric mean utilized here.

In this treatment, the three factors Complexity, Effort, and Uncertainty have been given equal weights.  However, there could be situations in which the Agile team is tackling User Stories that are more uncertain (ambiguous), is working with lower-than-needed staffing levels, or is dealing with very complex User Stories.  Under such circumstances, it might make sense to assign a different weight to each factor and then calibrate the weights over several Sprints with the Agile team.
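A weighted variant of the geometric mean can be sketched as follows; the weights wC, wE, and wU are illustrative assumptions that a team would calibrate over several Sprints, and with equal weights the function reduces to the plain cube root used above:

```javascript
// Weighted geometric mean of the three pointing factors.
// The weights are hypothetical; calibrate them with the team over time.
function weightedScore(complexity, effort, uncertainty, wC, wE, wU) {
    var totalWeight = wC + wE + wU;
    var product = Math.pow(complexity, wC) *
                  Math.pow(effort, wE) *
                  Math.pow(uncertainty, wU);
    return Math.pow(product, 1 / totalWeight);
}
```

For example, doubling the weight on Uncertainty pulls the score toward the Uncertainty value, which may suit a team facing ambiguous requirements.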

Thursday, September 5, 2024

News of Cyborgs

In a case of Life imitating Art, we learn of a cyborg whose biological component is a fungus:

https://www.cnn.com/2024/09/04/science/fungus-robot-mushroom-biohybrid/index.html

Sunday, September 1, 2024

Programmer Joke

Question: How can you tell when you are talking to an extrovert programmer?

Answer: He stares at your shoes when he talks to you!

Saturday, August 24, 2024

Using AI for Software Maintenance

As someone who used ChatGPT and Copilot last year to migrate a suite of Spark Streaming applications from Java 8 to Java 17, as well as to address GitHub Dependabot issues, I can attest to the plausibility of this report:

https://www.benzinga.com/markets/equities/24/08/40524790/amazon-ceo-andy-jassy-says-companys-ai-assistant-has-saved-260m-and-4-5k-developer-years-of-work

Friday, August 23, 2024

Data Monetization Estimation

 

William Thomson, Lord Kelvin

"When you can measure what you are speaking about, and express it in numbers, you know something about it; when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science."

Lecture on "Electrical Units of Measurement" (3 May 1883), published in Popular Lectures Vol. I, p. 73

Estimating Data Value

Introduction

There has been a lot of interest in data monetization over the last decade or so, and this discussion is meant as a way of thinking about estimating the potential value of data, in contradistinction to realizing that value.  Think of this as the analogue of a geological survey without any guarantee of finding economically viable mines; an attempt at a quantitative analysis of what value data have, along the lines that the late Lord Kelvin suggested.

Column Value Model

We assume a company's value to be a combination of human, physical, and data capital: a fully automated company with no human resources is a pipe dream, a company must have physical assets to conduct business (even if rented), and it must make decisions based on some information or other.  (For simplicity, Goodwill valuation is excluded from this model.)

So we start with the following formula for the company's valuation and then proceed to estimate its data assets in a more defined manner.

            Company Valuation = Human Capital + Physical Capital + Data Capital

Assume all three parts have the same value and that the company's valuation is $3 billion:

               Data Capital = Company Valuation / 3 = $1 billion

In this model, we would like to estimate the average value of each column of data in all the relational schemas that the company uses, i.e.:

              Average Column Value = Data Capital / Total Number of Columns
Let us further assume that there are 50 relational schemas in this company and that each has 50 tables of 20 columns each.  This gives 50,000 columns in total, which yields an average value of $20,000 per column.  In this approach, all columns are considered to be of equal value, be they customer names (say) or such ubiquitous columns as "Last Updated Date".
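The arithmetic above can be checked with a few lines of JavaScript, using the illustrative figures from the text:

```javascript
// Column Value Model with the illustrative figures from the text.
var dataCapital = 1e9;        // $1 billion of data capital
var schemas = 50;             // relational schemas
var tablesPerSchema = 50;
var columnsPerTable = 20;

var totalColumns = schemas * tablesPerSchema * columnsPerTable;
var valuePerColumn = dataCapital / totalColumns;

console.log(totalColumns);    // 50000 columns
console.log(valuePerColumn);  // $20,000 per column
```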

We can then proceed to estimate an average value for each schema:

              Average Schema Value = Data Capital / Number of Schemas
In this discussion, since all schemas are assumed to be identical in their number of tables and columns, their average data value is $20 million each.  In practice, however, schemas differ from one another, and this type of model serves to identify the most valuable schemas that a company has.

Alternative Models

Data Volume Model

In this model, the value of each schema is estimated based on its data volume.  That is:

              Total Data Volume = Sum of the Data Volumes of all Schemas
The total Data Valuation is divided by this number to extract an average value per Gigabyte, and the value of each schema is then computed by multiplying its data volume by that average.
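A minimal sketch of this computation; the per-schema volumes (in Gigabytes) are hypothetical inputs:

```javascript
// Data Volume Model: apportion the data capital across schemas in
// proportion to their data volumes. The volumes are hypothetical.
function schemaValuesByVolume(dataCapital, volumesGb) {
    var totalGb = volumesGb.reduce(function (sum, v) { return sum + v; }, 0);
    var valuePerGb = dataCapital / totalGb;   // average value per Gigabyte
    return volumesGb.map(function (v) { return v * valuePerGb; });
}
```

For instance, $1 billion of data capital spread over three schemas of 100, 300, and 600 GB yields $100M, $300M, and $600M respectively.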

Time Dependent Model

Another model is one with time-dependent variable weights for value of each part of the company’s valuation, i.e.:

 Valuation = h(t) Human Capital + p(t) Physical Capital + d(t) Data Capital

With constraints:

h(t) + p(t) + d(t) = 1 and h(t) ≥ 0, p(t) ≥ 0, d(t) ≥ 0
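One hypothetical choice of such weights, purely for illustration (the functional forms below are assumptions, not part of the model; they merely satisfy the constraints):

```javascript
// Time-dependent weights: data capital's share d(t) grows with t while
// the human and physical shares split the remainder 60/40. These
// functional forms are illustrative assumptions only.
function weights(t) {
    var d = 0.5 - 0.25 * Math.exp(-t);  // rises from 0.25 toward 0.5
    var h = 0.6 * (1 - d);
    var p = 0.4 * (1 - d);
    return { h: h, p: p, d: d };
}

// Valuation = h(t)·Human Capital + p(t)·Physical Capital + d(t)·Data Capital
function valuation(t, human, physical, data) {
    var w = weights(t);
    return w.h * human + w.p * physical + w.d * data;
}
```

By construction h(t) + p(t) + d(t) = 1 and all three weights stay non-negative, as the constraints require.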

Weighted Schema Model

In this model, we capture the importance of each schema via the factor a(n), and the historical data volume available for each schema via the second sum and the factor b(m).  The exp(1 − m) factor is intended to model the aging and staleness of the data:

              Data Capital = Σₙ a(n) × Σₘ b(m) × exp(1 − m) × Volume(n, m)

where n ranges over the schemas and m over historical periods, with m = 1 being the most recent.
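The weighted model can be sketched as follows; the weight vectors a and b and the volume matrix are hypothetical inputs, and since the arrays are 0-indexed the exp(1 − m) factor (periods numbered from 1) becomes exp(−m):

```javascript
// Weighted Schema Model: sum over schemas n of a(n) times a
// staleness-weighted sum over historical periods m of b(m)·exp(-m)·volume.
// volumes[n][m] is the data volume of schema n in period m, with m = 0
// being the most recent period.
function weightedSchemaValue(a, b, volumes) {
    var total = 0;
    for (var n = 0; n < volumes.length; n++) {
        var schemaSum = 0;
        for (var m = 0; m < volumes[n].length; m++) {
            schemaSum += b[m] * Math.exp(-m) * volumes[n][m];
        }
        total += a[n] * schemaSum;
    }
    return total;
}
```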



 

About Me

I have worked as a senior software developer for HP and GM. I am interested in intelligent and scientific computing. I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.
