Crimson Reason
A site devoted mostly to everything related to Information Technology under the sun - among other things.
Monday, March 17, 2025
Flaws in Smartphone Face-recognition Software [Tech News]
From the British consumer magazine Which?
The problem with face recognition: apparently, with some smartphones, holding an ordinary photograph of the owner up to the phone can unlock it. I suppose this means that the person stealing your mobile phone knows you, as they would need to have the photo in the first place.
I have avoided activating the face-recognition functionality on my smartphone (which is a basic Samsung) because I was wondering what you do if it doesn't work: the software has not been updated, or the place where you are is too dark and the phone cannot 'see' you, and so on.
An extract is below. The article may be behind a paywall.
____________________________
In 2023, we revealed that over 40% of phones we tested had face recognition security that could be easily fooled and successfully unlocked by a 2D printed photograph. These included popular handsets from Samsung, Motorola, Xiaomi and others. Unfortunately, the issue isn't going away, and we were surprised to find that the same issue affects two phones in the Samsung Galaxy S24 series and all phones in the S25 series.
Fortunately, Samsung offers a clear warning during the face recognition setup that tells users the system can be fooled with a photograph. But to avoid this vulnerability, we recommend you don't enable the feature and use the fingerprint sensor or a password/PIN instead. Longer PINs (six digits or more) are generally more secure, and if you can set up a password, use a mixture of different characters so it's harder to guess. We recommend setting up protections on your apps that contain sensitive information too – this could involve logging out when you're not using them, or setting up passwords or fingerprint locks.
We approached Samsung for comment on our findings about face recognition on the Samsung Galaxy S24 and Galaxy S24+ in 2024. It said: 'We provide various levels of biometric authentication, with the highest level of authentication from the fingerprint reader. In addition, we provide users with multiple options to unlock their smartphones through both biometric security methods, and convenient options such as swipe or facial recognition. Further information about facial recognition can be found via the settings on Samsung Galaxy smartphones.'
Monday, March 10, 2025
Conflict & Reconciliation Feature for Social Networks - A Proposal
Subject Matter & Problem
In the course of interactions among human beings, it happens, perhaps inadvertently and perhaps otherwise, that one or more parties take offense at, or are angered by, the utterances or behavior of another party or parties. Human societies have historically developed mechanisms to cope with social friction among their members, to limit its negative consequences, and to facilitate reconciliation among the affected parties.
Virtual communities and social networks such as Facebook, Google Groups, etc. are novel, computer-based ways for humans to interact and form groups. However, they are not immune to occurrences of social friction among their members and the attendant issues of anger management and reconciliation among “warring” parties. Facebook, for example, offers a binary choice between “Friend” and “Un-Friend”; one may “Un-Friend” a “Friend” if one has been angered – or otherwise disappointed – by that person. There is no intermediate state between “Friend” and “Un-Friend” that could distinguish total strangers from former friends and acquaintances who could, when one’s anger cools, become one’s friends again.
Solution
The gist of this solution is to endow social networking sites and virtual communities with a feature that mediates and effects a reconciliation among former friends and acquaintances, beyond the binary choice of “Friend” and “Un-Friend”, and that further facilitates the resolution of the conflict through selected “Intercessors”.
Use Case 1 – Normal Use
- The user navigates to a screen that contains a list of his contacts.
- He selects one or more of them and designates them as: “Not on Speaking Terms”, or “Sulking”, or “Furious” or some such phrase (or its equivalent in other languages). That is, he sets their “Conflict” status.
- The system updates their profiles accordingly.
- The system displays the text above and an appropriate icon indicating that these individuals have been designated as being in “Conflict” with this or that user. This will be visible to all. This designation is not conceived to be equivalent to “Un-Friend”.
- The user could further designate a time period for that status: from “Never Expires” to a date and time range (either selectable from a screen widget or entered manually by the user).
- The user may optionally decide if he wants to block communication from the “Conflict” contacts.
- The system will notify those contacts of a change in their status. That communication could be in the form of email, internal communication, text messages, phone calls, etc. – together with an appropriate text message.
- The system will notify others who share the affected contacts with the user and are also in the user’s social network of the “conflict” status change above.
- When the “Conflict” status time interval expires, the system reverts the status to “Friend”.
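The conflict-status record described in Use Case 1 can be sketched as a small data model. This is a minimal illustration, assuming hypothetical field names; it covers the status label, the optional expiry ("Never Expires" as `None`), and the optional communication block.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical conflict-status record for one contact (all names illustrative).
@dataclass
class ConflictStatus:
    contact: str
    label: str                      # e.g. "Not on Speaking Terms", "Sulking", "Furious"
    expires_at: Optional[datetime]  # None means "Never Expires"
    block_communication: bool = False

    def is_active(self, now: datetime) -> bool:
        # The status reverts to "Friend" once the interval expires (Use Case 1).
        return self.expires_at is None or now < self.expires_at

# Example: mark a contact as "Sulking" for 7 days.
status = ConflictStatus(
    contact="alice",
    label="Sulking",
    expires_at=datetime(2025, 3, 10) + timedelta(days=7),
)
```

The system would then publish the active status to the user's network and, on expiry, silently restore the "Friend" state.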
Use Case 2 – User Selected Intercessor Option
- The user could further designate one or more “Intercessors” for each and any of the affected contacts from among their common contacts.
- The role of “Intercessor”, should they accept it, would be to try to mediate and resolve the conflict among the affected parties within the stated period of the conflict.
- This is an option that the user may or may not exercise.
- The system will query the “Intercessors” through any of several communication channels, asking whether they consent to play that role (conflict resolution and mediation).
- If the “Intercessor” agrees, the system will notify the user who had initially requested the help of the “Intercessor” as well as the affected “Conflict” contacts.
- It is up to the “Intercessor” then to initiate the process of reconciliation.
Use Case 3 – User Elects to Be an “Intercessor”
Optionally, any user may elect to be available as an “Intercessor” in a social network or virtual community.
Use case 4 – System Recommends “Intercessors”
Optionally, the system recommends a list of available “Intercessors” to the user.
Use Case 5 – “Conflict” Contact Looks for an “Intercessor”
Optionally, the system recommends a list of available “Intercessors” to a “Conflict” contact after the system has informed him of the change in his “Conflict” status by another user.
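The intercessor handshake in Use Cases 2 through 5 reduces to two operations: picking candidates from the contacts both parties share, and collecting consent. A minimal sketch, with all function names and the `accepts` callback as illustrative assumptions:

```python
# Sketch of the intercessor selection and consent flow (names illustrative).

def pick_candidate_intercessors(user_contacts: set, conflict_contacts: set) -> set:
    """Intercessors are chosen from contacts the two parties have in common."""
    return user_contacts & conflict_contacts

def request_intercession(candidates, accepts):
    """Query each candidate through some channel; keep those who consent."""
    return [c for c in candidates if accepts(c)]

# Example: two parties share carol and dave; only carol agrees to mediate.
shared = pick_candidate_intercessors({"bob", "carol", "dave"}, {"carol", "dave", "erin"})
agreed = request_intercession(sorted(shared), accepts=lambda c: c != "dave")
```

Once `agreed` is non-empty, the system would notify both the requesting user and the affected "Conflict" contacts, per Use Case 2.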
Description
This is envisioned as a software add-on to existing as well as new social networking sites and virtual communities. The operation of the invention is described in the above use cases.
Advantages
The chief advantage of this invention is that it enables one to retain one's "Friends" in a social network even when one is cross with them. To wit: one announces publicly that one is cross with this or that person; others take note of it and, depending on the desire of both sides and their own inclinations, could help them reconcile. So one does not necessarily wind up losing one's friend in a network due to a temporary emotional outburst or state of anger.
System Architecture & Design
The architecturally significant components of the system are illustrated below:
Possible Modifications
Something analogous to this may be incorporated into email clients – a “conflict” icon or button which enables a user to designate one or some of his/her contacts as “In Conflict” until further notice. The email server can then alert the person so designated of the change in his/her status by another user. Optionally, all members of a person’s social network could be advised of the status. Furthermore, the email client can expire the status and revert it back to “Friend” at a date and time designated by the user. Likewise, the email client or the email server could, for the duration, junk or otherwise archive email messages from such users.
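The email-client variant above amounts to a routing rule. A minimal sketch, assuming a hypothetical `route_message` hook and a map from sender to expiry time (`None` standing for "until further notice"):

```python
from datetime import datetime

# Sketch of the email-client modification: archive mail from contacts
# marked "In Conflict" until the status expires (names illustrative).

def route_message(sender, conflict_until, now):
    """Return the folder a message should land in.
    conflict_until maps sender -> expiry datetime, or None for 'until further notice'."""
    if sender not in conflict_until:
        return "inbox"
    expiry = conflict_until[sender]
    if expiry is not None and now >= expiry:
        return "inbox"            # status expired; sender reverts to "Friend"
    return "archive"              # for the duration, archive their mail

# Example: alice is "In Conflict" until April 1; her March mail is archived.
folder = route_message("alice", {"alice": datetime(2025, 4, 1)}, datetime(2025, 3, 10))
```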
Sunday, March 2, 2025
Automated Exploration of Knowledge with Large Language Models, Concept Extraction, and Cyc - A Proposal
The process is as follows:
Overview of Steps:
- The user poses a question to the LLM
- The LLM generates text
- Concept extraction is applied to the LLM's text
- Cyc uses those extracted concepts to find related concepts (please see below)
- Questions are created from those related concepts
- Those questions are posed to the LLM and the answers summarized
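The six steps above form a loop that can be sketched as follows. Everything here is a stub standing in for the real components: `ask_llm`, `extract_concepts`, and `cyc_neighbors` are hypothetical callables, not real APIs.

```python
# Sketch of the exploration loop; only the control flow is real.

def explore(question, ask_llm, extract_concepts, cyc_neighbors, max_rounds=2):
    """Iteratively turn Cyc-related concepts into follow-up questions."""
    answers = []
    questions = [question]
    for _ in range(max_rounds):
        next_questions = []
        for q in questions:
            text = ask_llm(q)                           # Steps 1-2: LLM generates text
            answers.append(text)
            for concept in extract_concepts(text):      # Step 3: concept extraction
                for related in cyc_neighbors(concept):  # Step 4: Cyc lookup
                    next_questions.append(f"What is {related}?")  # Step 5
        questions = next_questions
    return answers                                      # Step 6: summarize downstream

# Toy stubs standing in for the real components:
result = explore(
    "What is a cow?",
    ask_llm=lambda q: f"Answer about {q}",
    extract_concepts=lambda t: ["Cow"],
    cyc_neighbors=lambda c: ["Mammal"],
    max_rounds=1,
)
```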
Step 4: Finding Related Concepts with Cyc
Overview of Steps:
- Start with a collection of initial concepts.
- For each concept, find its nearest related concepts based on the relationships in the Cyc knowledge base (e.g., “isa,” “part-of,” “related-to”).
- Navigate to these related concepts, recursively, if necessary, to explore further connections.
- Generate new concepts by expanding from the nearest neighbors.
- Visualization or compilation
Key Queries in CycL for Navigating Concepts:
- Isa (Is-A) relationships: This tells you what class a concept belongs to. It’s often used to explore a concept’s category.
- Part-Of: This is useful to find what parts or components a concept belongs to.
- Related-To: This can be used to find other concepts that are semantically related to a given concept.
- Sub-collection: This allows you to find all instances of a specific concept.
Example: Navigation from Concepts
Let's say you have a collection of starting concepts, such as Cow, Milk, and Farm. You can navigate the relationships by querying Cyc for the nearest concepts based on different relationships. We'll perform a step-by-step expansion for each of these concepts, showing how to retrieve the "nearest" related concepts.
Step A: Start with a Collection of Concepts
Let’s say you have the following concepts to begin with:
- Cow
- Milk
- Farm
These are the seed concepts from which you want to expand.
Step B: Define Queries to Explore Nearest Concepts
Now, you can create queries for each starting concept to find the nearest related concepts. Here's how you can perform some basic queries in CycL for this task.
Query 1: Find the "Is-A" (Classification) Relationships
This query identifies what category the concept belongs to. For example, for Cow, you can check for its broader classification.
(isa Cow ?X)
This asks, “What categories does Cow belong to?” and will return categories such as Mammal, DomesticAnimal, etc.
Query 2: Find "Related-To" Concepts
Next, you can explore relationships with other concepts. For example, you could ask:
(relatedTo Cow ?X)
This will give you concepts that are semantically related to Cow, such as Milk, Udder, Farm, etc.
Query 3: Find "Part-Of" Relationships
The Part-Of relationship is useful for exploring components or parts of a concept. For example:
(partOf Cow ?X)
This will return things that are part of a Cow, like Udder or Hoof.
Step C: Process the Results and Expand
Once you have the results from these queries, you can expand the set of concepts by finding the nearest related concepts from each of the starting concepts.
Example Expansion (for the concept "Cow"):
- Starting Concept: Cow
- Query: (isa Cow ?X) → Result: Mammal, DomesticAnimal
- Query: (relatedTo Cow ?X) → Result: Milk, Farm, Udder
- Query: (partOf Cow ?X) → Result: Udder, Hoof
From Cow, the nearest related concepts are: Mammal, DomesticAnimal, Milk, Farm, Udder, and Hoof.
- Starting Concept: Milk
- Query: (isa Milk ?X) → Result: SubstanceProducedByMammals
- Query: (relatedTo Milk ?X) → Result: Dairy, Farm
- Query: (partOf Milk ?X) → Result: Cow
From Milk, the nearest related concepts are: SubstanceProducedByMammals, Dairy, and Farm.
- Starting Concept: Farm
- Query: (isa Farm ?X) → Result: AgriculturalFacility
- Query: (relatedTo Farm ?X) → Result: Milk, Cattle, DairyFarm
- Query: (partOf Farm ?X) → Result: Field, Barn
From Farm, the nearest related concepts are: AgriculturalFacility, Cattle, DairyFarm, Milk, Field, and Barn.
Step D: Recursively Expand to New Concepts
If you want to explore even further from each of the nearest concepts, you can repeat the same queries for each new concept that was found. For instance:
- If Milk led you to Dairy, you can now run: (relatedTo Dairy ?X)
This might give you related concepts like Cheese, Butter, Yogurt, etc.
- Similarly, if Cattle was found under Farm, you could run: (relatedTo Cattle ?X)
And this might lead you to concepts like Beef, Livestock, etc.
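Steps A through D can be sketched as a breadth-first expansion. The relation table below is fabricated from the Cow/Milk/Farm examples above; a real implementation would issue the CycL queries instead of reading a dict.

```python
# Tiny in-memory stand-in for Cyc's isa / relatedTo / partOf relations.
RELATIONS = {
    "Cow":  {"isa": ["Mammal", "DomesticAnimal"],
             "relatedTo": ["Milk", "Farm", "Udder"],
             "partOf": ["Udder", "Hoof"]},
    "Milk": {"isa": ["SubstanceProducedByMammals"],
             "relatedTo": ["Dairy", "Farm"]},
    "Farm": {"isa": ["AgriculturalFacility"],
             "relatedTo": ["Milk", "Cattle", "DairyFarm"],
             "partOf": ["Field", "Barn"]},
}

def neighbors(concept):
    """All concepts one hop away, across every relation type."""
    out = []
    for related in RELATIONS.get(concept, {}).values():
        out.extend(related)
    return out

def expand(seeds, depth):
    """Collect related concepts up to `depth` hops from the seeds (Step D)."""
    seen = set(seeds)
    frontier = list(seeds)
    for _ in range(depth):
        frontier = [n for c in frontier for n in neighbors(c) if n not in seen]
        seen.update(frontier)
    return seen

concepts = expand(["Cow"], depth=1)
```

At depth 1 this reproduces the "From Cow" list above; at depth 2, concepts like Dairy appear via Milk.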
Step E: Visualization or Compilation
Once you’ve collected a large set of related concepts, you can organize or visualize them as a concept graph or network. This allows you to see how the concepts are connected, and explore new insights based on their relationships. It also enables a user to prune the concepts and to keep those that are germane to his interests.
Example Flow:
- Start with: Cow
- Expand: Mammal, DomesticAnimal, Milk, Farm, Udder, Hoof
- Next: Explore Milk → Dairy, SubstanceProducedByMammals, Farm
- Next: Explore Farm → AgriculturalFacility, DairyFarm, Cattle, Field, Barn
- Continue expanding based on relations and parts...
Final Notes:
- Recursion: You can use recursion to keep expanding from one set of related concepts to the next. For each concept, you check its is-a, related-to, and part-of relationships to find more concepts.
- Prioritization: Depending on your task, you might prioritize certain relationships (e.g., “is-a” vs. “related-to”) to focus on more hierarchical or semantic links.
- Reasoning: The ability to reason about relationships (e.g., if "A produces B" and "B is needed for C", then "A may be needed for C") helps enrich the exploration.
Step 5: Creating Questions from Related Concepts
- What: This is generally applied to objects, entities, or types. If the concept is a class or object, "what" could be used to query its characteristics or define what it is. For instance, querying "What is a dog?" or "What are the properties of a cat?"
- Where: Applied when the concept involves a location, spatial information, or events tied to geographical or spatial contexts. If the concept is related to a place or location, "where" would likely apply. For instance, "Where is the Eiffel Tower located?"
- Who: Applied to concepts that refer to people, individuals, or specific agents. If the concept represents a person or an actor in a particular event, "who" would apply. Example: "Who is the president of the United States?"
- When: Used for temporal concepts, events, or instances that are related to time. If the concept has a temporal dimension (such as an event or an occurrence), then "when" would apply. Example: "When did World War II start?"
- How: Typically used when the concept is associated with processes, methods, or causes. If the concept is a process, causal relationship, or method of doing something, "how" would be applicable. Example: "How does photosynthesis work?"
- Why: Applied for causal explanations or reasons. This question type is most often relevant to concepts related to reasons, causes, or justifications. For instance, "Why do leaves turn yellow in the fall?"
To determine which of these questions can be applied, you would need to check the class or type of concept and its relationships in the knowledge base and then build a query accordingly. For example:
- What could be queried with: (#$isa #$myConcept #$Thing)
- Where could be queried with: (#$isa #$myConcept #$Location)
- Who could be queried with: (#$isa #$myConcept #$Person)
- When could be queried with: (#$isa #$myConcept #$Time)
- How could be queried with: (#$isa #$myConcept #$Process)
- Why could be queried with: (#$cause #$myConcept ?Cause)
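The type checks above reduce to picking question templates from a concept's types. A minimal sketch, with a plain dict standing in for the real (#$isa ...) queries; the template strings are illustrative:

```python
# Map each Cyc type check to a question template (stand-in for CycL queries).
QUESTION_TEMPLATES = {
    "Thing":    "What is {c}?",
    "Location": "Where is {c}?",
    "Person":   "Who is {c}?",
    "Time":     "When did {c} occur?",
    "Process":  "How does {c} work?",
}

def questions_for(concept, concept_types):
    """Build the applicable question list for a concept given its types."""
    return [tpl.format(c=concept)
            for typ, tpl in QUESTION_TEMPLATES.items()
            if typ in concept_types]

# Example: a concept that is both a Thing and a Process gets two questions.
qs = questions_for("photosynthesis", {"Thing", "Process"})
```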
Wednesday, February 26, 2025
Enhancing Large Language Models with Concept Extraction & Cyc - A Proposal
The initial idea is to create a more structured, semantically rich representation of the generated text of Large Language Models (LLM) such as ChatGPT by leveraging CYC (please see Cyc - Wikipedia) together with Concept Extraction & Concept Mining approaches (please see Concept Extraction - an overview | ScienceDirect Topics & Concept mining - Wikipedia) as described below:
Here's how you could integrate LLM output, concept extraction tools, and a reasoning system like CYC for structured knowledge matching and enhanced reasoning:
Workflow Overview:
- Generate Text with LLM:
  - First, you generate text with a large language model (e.g., GPT) based on a given prompt. For example, the prompt could be something like: "Describe the relationship between cows and milk production."
- Concept Extraction:
  - Use concept extraction tools to process the LLM's output. This step could involve:
  - Named Entity Recognition (NER) to extract specific entities such as "cow," "milk," "production," etc.
  - Relation extraction to identify the relationships between those entities, such as "cow -> produces -> milk."
  - Coreference resolution to link pronouns and references to the same entity or concept (e.g., "it" refers to "cow").
  - Keyword/concept extraction to identify key concepts and broader ideas from the text (e.g., "herbivores," "dairy farming").
- Mapping Extracted Concepts to CYC:
  - After extracting the relevant concepts and relationships from the LLM's output, you can try to match these concepts against the CYC ontology. The idea is to map the raw concepts to CYC’s structured knowledge to ensure that the relationships make sense and align with CYC’s predefined logic.
For example:
- Cow: Match it to CYC's concept of a domesticated mammal (or whatever relevant class CYC has for cows).
- Milk production: Map it to a relationship where cow produces milk, which could be defined in CYC's ontology as a causal relationship or as part of cows' behavior.
- You can also ensure semantic accuracy by checking whether the extracted relationships (like "cow produces milk") align with CYC's formal knowledge base.
- Reasoning with CYC:
  - Once you have the concepts and relationships mapped to CYC's structured knowledge, you can use CYC's logical reasoning capabilities to:
  - Validate the accuracy of the extracted information by checking if it aligns with existing facts in CYC's knowledge base.
  - Derive new knowledge by reasoning based on the extracted concepts and CYC's existing knowledge. For example, if CYC knows that cows are herbivores and produce milk, it could infer or help answer questions like: "What are the dietary needs of cows?" or "How do cows affect dairy farming?"
  - Enrich the information by connecting it to related concepts. For instance, if you extract a concept like "milk," CYC might link this to other concepts like lactation or dairy production.
- Generate Enhanced or Verified Output:
  - Finally, you could generate an enhanced output that combines the strengths of both LLMs and CYC. For example:
  - LLM-generated text: "Cows produce milk, which is a staple food item in many cultures."
  - CYC-enhanced text: After matching concepts and running logic, you might get: "Cows, as herbivores, produce milk, which is used as a source of nutrition in dairy farming. Lactation in cows is supported by a diet rich in grass and supplemented with minerals."
This would not only provide the original output but also incorporate fact-checking, contextual reasoning, and related knowledge from CYC's ontology.
Advantages of This Approach:
- Accuracy: By matching extracted concepts to CYC's structured knowledge, you can ensure the correctness of the information and reduce the chances of errors or inconsistencies.
- Contextual Reasoning: CYC can help generate more contextually relevant and logically coherent answers by reasoning about how concepts are related (e.g., if the cow produces milk, it must also have a diet that supports lactation).
- Enrichment: This hybrid approach allows the model to fill in gaps in knowledge. If an LLM output lacks detail or has ambiguities, CYC can supplement the response with additional structured facts.
- Domain-specific Knowledge: If you are working in a specialized domain (e.g., medicine, law, engineering), CYC can provide a deep understanding of the underlying concepts and relationships, ensuring that the model's output is highly relevant to the domain.
Example Scenario:
Step 1: Generate Text with LLM
Prompt: "How does a cow produce milk?"
LLM Output:
- "Cows produce milk as part of their natural biological processes. The milk is produced in the udder, and the process is triggered after the cow has given birth."
Step 2: Extract Concepts
From the LLM output, you extract:
- Entities: "cow," "milk," "udder," "birth."
- Relationships: "cow produces milk," "milk is produced in udder," "milk production is triggered after birth."
Step 3: Map Concepts to CYC
- Cow: Map it to CYC's concepts of domestic animal, mammal, and cow.
- Milk: Map it to a type of fluid produced by mammals.
- Udder: Map it to the part of a mammal's body associated with milk production.
- Birth: Link it to the reproduction process in CYC.
Step 4: Reasoning with CYC
- CYC can infer additional knowledge, such as:
  - Lactation in cows typically begins after childbirth and is supported by specific dietary needs (e.g., grass, water).
  - CYC may also note that milk production is a key feature of dairy farming.
Step 5: Enhanced Output
Using the knowledge from CYC, you get:
- "Cows, as mammals, produce milk through lactation, which occurs in the udder after birth. Lactation is a biological process that requires specific nutrients and a conducive environment, typically found in dairy farming practices."
Let's break down the implementation steps in more detail, focusing on how to integrate concept extraction with CYC to reason about LLM-generated output.
Step 1: Generate Text with an LLM (e.g., GPT)
First, you'll need to generate text using a large language model (LLM) based on your prompt.
Example:
- Prompt: "How does a cow produce milk?"
- LLM Output (e.g., GPT): "Cows produce milk as part of their natural biological processes. The milk is produced in the udder, and the process is triggered after the cow has given birth."
At this point, the text generated by the LLM contains useful concepts and information, but it may be unstructured or lack formal logical relationships. So, the next step is to extract meaningful concepts.
Step 2: Extract Concepts from LLM Output
Now, you'll want to extract meaningful concepts, entities, and relationships from the LLM output. For this, we can use various NLP tools to analyze and process the text. I'll explain some key techniques and libraries you can use to implement this:
Tools for Concept Extraction:
- Named Entity Recognition (NER): To extract entities (e.g., "cow," "milk," "udder," "birth").
  - spaCy is a powerful NLP library that can be used for NER.
- Relation Extraction: Identifies relationships between entities (e.g., "cow produces milk").
  - OpenIE (Stanford NLP) is a popular tool for relation extraction, or you can fine-tune a transformer-based model for this task.
- Coreference Resolution: Resolves which pronouns or phrases refer to the same entity (e.g., "it" referring to "cow").
  - Coreference resolution is available for spaCy through extensions such as neuralcoref or spacy-experimental.
- Dependency Parsing: To analyze the grammatical structure of sentences and extract relationships.
  - spaCy also provides dependency parsing.
- Key Phrase Extraction: Identifying important concepts or phrases from the text.
  - RAKE (Rapid Automatic Keyword Extraction) can be useful for this.
Example Code to Extract Concepts (Using spaCy):
import spacy

# Load spaCy's pre-trained English model
nlp = spacy.load("en_core_web_sm")

# LLM-generated text
text = "Cows produce milk as part of their natural biological processes. The milk is produced in the udder, and the process is triggered after the cow has given birth."

# Process the text with spaCy's NLP pipeline
doc = nlp(text)

# Extract entities using NER
entities = [(entity.text, entity.label_) for entity in doc.ents]
print("Entities:", entities)

# Extract relations (simple approach using dependency parsing)
relations = []
for token in doc:
    if token.dep_ in ('nsubj', 'dobj', 'prep'):
        relations.append((token.head.text, token.dep_, token.text))
print("Relations:", relations)

Example output (labels vary with the model version):
Entities: [('Cows', 'NORP'), ('milk', 'PRODUCT'), ('udder', 'LOC'), ('birth', 'TIME')]
Relations: [('produce', 'nsubj', 'Cows'), ('produce', 'dobj', 'milk'), ('produced', 'nsubj', 'milk'), ('triggered', 'prep', 'after')]
In the output, we extract entities like Cows (mislabeled here as "NORP", i.e., a nationality, religious, or political group), milk (as a "PRODUCT"), and udder (as a "LOC", a location). Additionally, relations are identified based on grammatical dependencies, like Cows produce milk.
Step 3: Map Extracted Concepts to CYC
After extracting the relevant concepts and relationships, we now need to map them to CYC's ontology. CYC uses a formal knowledge base with concepts and relationships that represent human knowledge.
Steps to Integrate CYC:
- CYC Knowledge Base: You will need access to CYC's knowledge base, which contains concepts like "cow," "milk," and relationships such as "produces."
- Mapping Concepts: The extracted entities and relationships should be mapped to CYC concepts. For example:
  - Cows → Map to the CYC concept DomesticAnimal (or something more specific in CYC's ontology).
  - Milk → Map to the CYC concept SubstanceProducedByMammals.
  - Produces → Map to a relationship like produces in CYC's ontology, which connects a producer (cow) to the product (milk).
This can be accomplished via the Cyc API, searching for terms like "milk" and "cow" directly.
Here's a general approach you could follow:
- Use Cyc's API or Query System: Cyc provides a formal querying system where you can search for terms or concepts. You might use CycL (the Cyc language) or the API to look for "milk" and "cow."
- Look for Specific Terms or Relationships: For "milk," you'd search for the concept or any relationships it has with other concepts like "cow" or "dairy." For "cow," you'd search for relationships such as "produces" or "gives birth to" and check how "milk" is connected.
- Conceptual Searches: Cyc supports both direct and indirect queries, so you might find that "milk" is related to other concepts like "dairy product," "nutrition," or "cow," while "cow" could be connected to "animal," "mammal," or even more specific categories like "livestock."
- Using Semantic Reasoning: Cyc's reasoning engine can also help you find indirect relationships. For example, even if there isn't a direct link between "cow" and "milk," Cyc could deduce it based on other knowledge.
- Use CYC's Reasoning Capabilities: CYC is not just a database of facts but also a reasoning engine that can infer new information. For example, you can use CYC's reasoning engine to:
  - Infer that cows are herbivores.
  - Check if the fact "cow produces milk" is consistent with CYC's knowledge base.
  - Enrich the information by adding related facts (e.g., the cow's dietary needs for lactation).
Example of Concept Mapping (Hypothetical):
- Cow → CYC Concept: DomesticAnimal
- Milk → CYC Concept: SubstanceProducedByMammals
- Produces → CYC Relationship: produces
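The mapping step above can be sketched with a plain lookup table. `CONCEPT_MAP` and `map_triple` are illustrative stand-ins; a real system would query the Cyc API for these terms instead.

```python
# Hand-written mapping from extracted surface terms to Cyc concept names,
# fabricated from the examples above (not real Cyc data).
CONCEPT_MAP = {
    "cow": "DomesticAnimal",
    "milk": "SubstanceProducedByMammals",
    "produces": "produces",
}

def map_triple(subject, relation, obj):
    """Translate an extracted (subject, relation, object) triple to Cyc terms.
    Returns None when any part has no known mapping."""
    try:
        return (CONCEPT_MAP[subject.lower()],
                CONCEPT_MAP[relation.lower()],
                CONCEPT_MAP[obj.lower()])
    except KeyError:
        return None

# Example: the extracted "Cow produces milk" triple in Cyc terms.
triple = map_triple("Cow", "produces", "milk")
```

Triples that come back as `None` would be queued for manual review or a fuzzier API search rather than silently dropped.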
Step 4: Use CYC for Reasoning
Once the concepts are mapped to CYC's ontology, you can perform reasoning tasks, such as:
- Fact Validation:
  - Ensure that the statement "Cows produce milk" is logically consistent with CYC's rules.
  - Use CYC's inference engine to check relationships like cow → produces → milk.
- Enhanced Output Generation:
  - With the reasoning capabilities of CYC, you can generate an enhanced answer to the original question. For example:
  - LLM Output: "Cows produce milk after giving birth."
  - CYC-enhanced Output: "Cows produce milk through lactation after giving birth. Lactation requires a specific diet that includes grass and water."
- Fact Augmentation:
  - You can use CYC's knowledge to fill in gaps, enrich the text, or ensure that all necessary relationships and entities are included.
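The fact-validation step can be sketched as a membership check against a knowledge base. The set below is a tiny fabricated stand-in; real Cyc would run the triple through its inference engine rather than a lookup.

```python
# Tiny stand-in knowledge base of mapped triples (fabricated for illustration).
KNOWN_FACTS = {
    ("DomesticAnimal", "produces", "SubstanceProducedByMammals"),
    ("DomesticAnimal", "isa", "Mammal"),
}

def validate(triple):
    """True if the mapped triple is consistent with the knowledge base."""
    return triple in KNOWN_FACTS

# Example: the cow-produces-milk triple passes; a nonsense triple fails.
ok = validate(("DomesticAnimal", "produces", "SubstanceProducedByMammals"))
bad = validate(("DomesticAnimal", "produces", "Metal"))
```

Validated triples feed the enhanced-output step; rejected ones flag the LLM sentence for correction.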
Example Integration of CYC and Concept Extraction:
- Extracted Concept: "Cow produces milk."
- Map to CYC: Cow → DomesticAnimal, Milk → SubstanceProducedByMammals, Relationship → produces.
- Reason with CYC: CYC checks whether a DomesticAnimal (cow) can logically produce a SubstanceProducedByMammals (milk). CYC may also infer that cows are herbivores and require specific nutrition for lactation.
- Output Enhanced Information: Based on CYC's knowledge, generate the final output, like:
  - "Cows, as domesticated animals, produce milk after giving birth. Lactation is supported by a diet of grass and water, which is a key aspect of dairy farming."
Step 5: Generate Final Enhanced Output
Finally, after reasoning with CYC, you can generate an output that combines LLM-generated creativity with CYC-enhanced factual correctness.
Final Thoughts
- Concept extraction can be automated using NLP tools like spaCy, Stanford NLP, or Hugging Face transformers, which allow you to identify entities, relationships, and concepts in the text.
- By mapping the extracted concepts to CYC's ontology, you can ensure that the information is logically consistent and relevant.
- CYC's reasoning capabilities then help you validate and enrich the extracted information to generate a more accurate and comprehensive response.
This hybrid approach—LLM output combined with concept extraction and reasoning via CYC—can dramatically improve the quality, consistency, and reliability of AI-generated text. It provides a way to validate, enrich, and reason about the concepts presented in the LLM’s output, ensuring that it aligns with structured knowledge and logical reasoning.
About Me

- Babak Makkinejad
- I had been a senior software developer working for HP and GM. I am interested in intelligent and scientific computing. I am passionate about computers as enablers for human imagination. The contents of this site are not in any way, shape, or form endorsed, approved, or otherwise authorized by HP, its subsidiaries, or its officers and shareholders.