Latex Maths

Saturday, October 20, 2012

Playing with WordNet (first impressions)

What I want is to build an ontology for use with common-sense inference.  An ontology induces a distance measure amongst words (or concepts), which can be used to distribute the concepts inside a high-dimensional sphere, with the concept of "everything" at the center.  This will be useful for my "matrix trick" (which I have explained in some slides, but I'll write about it later in this blog).

So, what I had in my mind was an ontology that looked like this (I just made this up):


But I quickly found out that the reality of ontologies is very different from what I expected!

The first problem is that WordNet has too many words, whereas I want to load an upper ontology into main memory and still be able to do inference search.  So my idea is to mine the ontology on demand.  For example, if I need an upper ontology of 1000 basic English words, then I can just construct the ontology by looking up the relevant relations in WordNet.

The idea seemed nice, but I found out that WordNet has too many hypernyms (= super-classes) for nouns and almost no hypernyms for adverbs.  For example, look at this visualization of the hypernyms of "cat" and "dog":

We have some surprises such as "a dog is a kind of unpleasant woman".

Also, some meanings are not what we expect: "cat" can mean a whip, and "dog" a mechanical device.  Because WordNet has no fuzziness or probabilities, it is hard to distinguish common from uncommon usage.

The red lines are links that connect "cat" to "dog".  There is the obvious connection via "carnivore" but there are also 2 other connections based on slang usage.

This is just the visualization of 2 iterations of hypernyms.  If we continue like this, the number of hypernyms will grow very large before they start to converge (because everything will ultimately end up as "entity").
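The growth-then-convergence behavior is easy to see even on a toy graph.  Here is a minimal sketch of the iterated hypernym lookup; the mini hypernym table is hand-made for illustration, standing in for real WordNet queries:

```python
# Toy hypernym table standing in for WordNet (made up, not real WordNet data).
HYPERNYMS = {
    "cat": ["feline"], "dog": ["canine"],
    "feline": ["carnivore"], "canine": ["carnivore"],
    "carnivore": ["mammal"], "mammal": ["animal"],
    "animal": ["organism"], "organism": ["entity"],
}

def hypernym_closure(words, max_steps=10):
    """Iterate hypernym lookup breadth-first until no new nodes appear."""
    seen = set(words)
    frontier = set(words)
    for step in range(max_steps):
        frontier = {h for w in frontier for h in HYPERNYMS.get(w, [])} - seen
        if not frontier:          # converged -- every chain ends at "entity"
            return seen, step + 1
        seen |= frontier
    return seen, max_steps

closure, steps = hypernym_closure(["cat", "dog"])
print(sorted(closure), steps)
```

On real WordNet data the frontier balloons for several steps before the chains merge, which is exactly the pruning problem described below.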

This is a visualization of 1 step of hypernyms (blue) for 100 basic words (red):

Notice that at the bottom there are some loners left out of the game because they have no hypernyms (mostly adverbs).  Nevertheless, they are important words and should be included in a good ontology of concepts.

This is the visualization of 2 steps of hypernyms (blue) for 100 basic words (red):

The iteration will terminate in ~10 steps for most branches, leaving us with thousands of nodes and hundreds of thousands of links.  Pruning this graph is a non-trivial algorithmic problem.  Also disappointing is the fact that this graph will have many more intermediate nodes than the basic English words we started with -- something that I didn't expect.

At this point I gave up and started to try another method:  WordNet provides some similarity measures (such as "path_similarity") between words.  Maybe I can use these distances to cluster the words flatly?

This is a visualization of the similarities between 100 words:

As you can see, again, some adverbs or special words are left out at the bottom.

Also, there is no strong link between "left" and "right" (the strength is only 0.333, relatively low).  This is where I think WordNet's way of defining similarity in [0,1] is wrong.  Similarity should be measured in [-1,1] so that opposite concepts can be measured properly.
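What a signed similarity could look like is sketched below; the antonym pairs and the base scores are illustrative numbers, not real WordNet output:

```python
# Sketch of a signed similarity in [-1, 1]: an ordinary path-based score
# for related words, negated for antonym pairs.  The antonym list and the
# base scores below are made up for illustration.
ANTONYMS = {frozenset(("left", "right")), frozenset(("up", "down"))}
BASE_SIMILARITY = {frozenset(("left", "right")): 0.333,
                   frozenset(("cat", "dog")): 0.2}

def signed_similarity(a, b):
    pair = frozenset((a, b))
    s = BASE_SIMILARITY.get(pair, 0.0)
    return -s if pair in ANTONYMS else s

print(signed_similarity("left", "right"))   # negative: opposite concepts
```

With this convention "left" and "right" come out strongly related but opposed, instead of just weakly similar.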

My conclusion so far:  "In artificial intelligence, avoid hand-crafting low-level data; it's usually better to machine-learn!"

Looking forward: we may have to set up Genifer like a baby to learn a small vocabulary.  Or, we can use a flawed ontology but somehow supplement it to correct the deficiencies...  (Perhaps WordNet wasn't designed for using adverbs in a way similar to nouns.)

Sunday, February 12, 2012

unification = the calculus of concepts

Today I just had an insight:

Haskell Curry proposed combinatory logic as a logica universalis, but it ran into inconsistency problems.  (I'm trying to use fuzzy-probabilistic truth values to get around that problem, but that's a different topic.)

So, in 1965 J. A. Robinson published resolution, which is really unification + propositional resolution.  Unification decides whether 2 terms can be made equationally identical.  Propositional resolution deals with the "calculus of thinking" at the proposition level.

Combinatory logic provides a free way to compose terms via "application".  I regard terms as concepts.  For example:
   "tall handsome guy"
is the combination of the concepts
   tall ∙ (handsome ∙ guy).

Now, a few examples:
"tall handsome guy" is equivalent to "handsome tall guy";
"very tall guy" implies "tall guy";  but
"very tall guy" does not equal "tall very guy";

Thus the unification theory is modulo some special rules akin to commutativity, associativity, etc., plus certain reduction rules.  In other words, unification modulo a theory = "the calculus of concepts".
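A tiny sketch of such a concept calculus (the representation below is invented for illustration): a concept is a head plus a *set* of modifier chains, so adjective order commutes while order within a chain does not.

```python
# Concepts as (head, set of modifier chains).  Sets make adjective order
# commute ("tall handsome guy" == "handsome tall guy"); tuples keep order
# *inside* a chain significant ("very tall" != "tall very").

def concept(head, *modifier_chains):
    return (head, frozenset(modifier_chains))

def equal(c1, c2):
    return c1 == c2            # set equality = equality modulo commutativity

def implies(c1, c2):
    """c1 implies c2 if every modifier chain of c2 is the tail of some
    chain of c1 (dropping intensifiers like "very" weakens a concept)."""
    h1, m1 = c1
    h2, m2 = c2
    if h1 != h2:
        return False
    return all(any(chain[-len(m):] == m for chain in m1 if len(chain) >= len(m))
               for m in m2)

a = concept("guy", ("tall",), ("handsome",))
b = concept("guy", ("handsome",), ("tall",))
c = concept("guy", ("very", "tall"))
d = concept("guy", ("tall",))
print(equal(a, b))    # True:  commutativity
print(implies(c, d))  # True:  "very tall guy" implies "tall guy"
```

This is only a toy model of the equational theory, but it exhibits all three examples above.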

So we have a neat decomposition:
    calculus of thoughts = calculus of concepts
                                    + calculus of propositions

Video: Genifer in 4 minutes

This was made 2 months ago.  Just posting it here so people can find it from our blog:

Saturday, July 23, 2011

Distributive agents, illustrated

The following illustrates how deduction (backward-chaining) is performed.  Forward-chaining works very similarly.  I have left out how the agents find others to answer queries -- this is the routing strategy which is an optimization problem.

Agent2 performs only one step, namely, the resolution of:
  • P\/Q (the query Agent2 is being asked)
    with
  • ~P\/Q (the rule that Agent2 has in its KB)
yielding the resolvent Q.



This is another illustration, using a common-sense example:


By the way, the implication statement "A implies B":
     nice(X) ← sandals(X)
is classically equivalent to "not A or B":
     ~sandals(X) \/ nice(X)
and therefore the goal
    nice(matt)
can be resolved against it by the substitution { X / matt }, yielding the sub-goal:
    sandals(matt).
This is why the resolution step works.
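This single step can be sketched in a few lines.  The toy unifier below handles only one predicate with one argument (variables are uppercase), which is all the example needs:

```python
# Minimal one-variable unification and a single backward-chaining step:
# the goal nice(matt) against the clause ~sandals(X) \/ nice(X),
# producing the sub-goal sandals(matt).

def unify(pattern, ground):
    """Unify p(X) with p(c): returns {X: c}, {} on exact match, else None.
    Toy version: one predicate, one argument, variables are uppercase."""
    p_pred, p_arg = pattern
    g_pred, g_arg = ground
    if p_pred != g_pred:
        return None
    if p_arg.isupper():                 # variable
        return {p_arg: g_arg}
    return {} if p_arg == g_arg else None

def substitute(literal, theta):
    pred, arg = literal
    return (pred, theta.get(arg, arg))

rule_body = ("sandals", "X")            # from ~sandals(X) \/ nice(X)
rule_head = ("nice", "X")
goal      = ("nice", "matt")

theta = unify(rule_head, goal)          # {X: matt}
subgoal = substitute(rule_body, theta)  # sandals(matt)
print(theta, subgoal)
```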

Thursday, June 16, 2011

Distributive architecture for inference engine (deduction)

Eureka!! This new architecture is much simpler:



Each agent responds to queries and spits out solutions. For example, if you believe that "professors who wear sandals are nice to students" then you listen to queries about "who is nice to students". When there is a hit, you either:
  1. return an answer, if you know as a fact that XYZ is nice to students. 
  2. return a sub-goal, in this case, "does XYZ wear sandals?" and wait for others to answer. 
In case #2, if you got an answer "Professor Matt Mahoney wears sandals", say with TV = 0.9, then you decide how to calculate the TV of the conclusion given that TV of premise = 0.9. The only calculation you need to perform is for the rule that you own. Then you return the answer to the asker.
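The agent's one calculation can be sketched as follows; combining truth values by a simple product with a rule strength is an illustrative choice here, not Genifer's actual formula:

```python
# Sketch of a rule-owning agent: it listens for queries matching its
# conclusion and, given the premise's TV, computes the conclusion's TV
# for the one rule it owns.  The product combination is an assumption.

class RuleAgent:
    def __init__(self, premise, conclusion, rule_strength):
        self.premise = premise            # sub-goal it asks others about
        self.conclusion = conclusion      # query pattern it listens for
        self.rule_strength = rule_strength

    def answer(self, query, premise_tv):
        """Return (conclusion, TV) if the query matches this agent's rule."""
        if query != self.conclusion:
            return None
        return (self.conclusion, self.rule_strength * premise_tv)

agent = RuleAgent("sandals(matt)", "nice(matt)", rule_strength=0.8)
print(agent.answer("nice(matt)", premise_tv=0.9))
```

Each agent only ever performs this local computation; chaining the answers across agents is what implicitly builds the proof tree.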

This architecture is so wonderful because there is no need to construct the proof tree anymore. The proof tree seems to have disappeared but it is really implicitly constructed within the network of agents!

Thanks to Matt Mahoney for proposing the CMR (competitive message routing) architecture.

For reference, this is an older design that reveals my thinking:  (This can be seen as a single agent, building the proof tree internally while trying to answer 1 query.  In the new architecture each agent is responsible for applying only one rule at a time).

Saturday, May 28, 2011

Self-programming architecture

This is not a new idea -- Ben and Jared in OpenCog have mentioned it before (in the context of MOSES):



Abram seems to have an idea where the GP is replaced by RL (reinforcement learning).

Yesterday I was analyzing the GP + IE idea in more detail:
  1. Let the GP side and the IE side gradually evolve in cycles, starting with $GP_1 + IE_1$.
  2. The key question is whether the cyclic route is faster than hand-coding IE.  Initially, it would involve more work because the GP side needs to be custom-made (we cannot use off-the-shelf GP software).  It may pay off only if $GP_1 + IE_1$ increases programming productivity significantly.
  3. A very weak $IE_1$ cannot increase programming productivity because GP + weak IE is still too slow to be usable.  For example, one idea is to have IE suggest a number of primitive functions when given a goal, so GP can include those primitives in the genes for that population.  But, even with current state-of-the-art GP, this cannot efficiently solve programs longer than one line, even if primitives are suggested.
  4. $IE_*$ (the ideal form of IE) will be able to deduce the program when given the desired goal: $$ G:goal \mapsto \{ P:program | \quad P \vdash G \}. $$ Whereas the above $IE_1$ is too weak (suggesting primitives similar to the goal): $$ G:goal \mapsto \{ x | \quad x \approx G \}. $$ Perhaps we need to find something in between weak $IE_1$ and $IE_*$.
  5. In other words, we simply have to hand-code $IE_1$ to reach a certain level of functionality before putting it to use with GP.  That basic level seems to include:
    • Ability to express simple plans (so that human teachers can supply basic programming knowledge as decomposition of tasks into sub-tasks)
    • Ability to express similarity and to perform simple associative recall.
    Interestingly, the ability to perform deduction seems not to be required for $IE_1$, nor the ability to calculate truth values.
The new insight may change our priorities during implementation...

Wednesday, September 8, 2010

Genifer: Facts and Rules Visualization

depth=3, 2D

depth=4, 3D


This interactive visualization is created by a GeniferGraph adapter class which exposes GeniferLisp's internal data to dANN.  dANN then uses the graph representation to dimensionally embed it in N-dimensional space.

Here is the source code for the GeniferGraph class.  Notice that it uses recursive descent to add all LISP symbols to a finite depth.


Wednesday, September 1, 2010

Genifer Logic Graphs Visualization

Visualized by creating a dANN directed graph from the ABCL (Armed Bear Common Lisp) data structures, and visualizing it with SpaceGraph-J2 + dANN Hyperassociative Map.