Latex Maths

Sunday, February 12, 2012

unification = the calculus of concepts

Today I just had an insight:

Haskell Curry proposed combinatory logic as a logica universalis, but it ran into inconsistency problems.  (I'm trying to use fuzzy-probabilistic truth values to get around that problem, but that's a different topic.)

So, in 1965 J. A. Robinson discovered resolution, which is really unification + propositional resolution.  Unification decides whether two terms can be made equationally identical by substitution.  Propositional resolution deals with the "calculus of thinking" at the proposition level.

Combinatory logic provides a free way to compose terms via "application".  I regard terms as concepts.  For example:
   "tall handsome guy"
is the combination of the concepts
   tall ∙ (handsome ∙ guy).

Now, a few examples:
"tall handsome guy" is equivalent to "handsome tall guy";
"very tall guy" implies "tall guy";  but
"very tall guy" does not equal "tall very guy";

Thus unification here must be performed modulo some special rules akin to commutativity, associativity, etc., as well as certain reduction rules.  In other words, unification modulo a theory = "the calculus of concepts".
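To make this concrete, here is a minimal Python sketch of unification modulo one such rule (commutativity of stacked modifiers).  The term representation and the names unify / unify_comm are my own toy assumptions, not Genifer's actual code:

    # Toy representation: variables are strings starting with '?';
    # the application m∙x is the Python pair (m, x).  No occurs check.

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def walk(t, s):
        while is_var(t) and t in s:
            t = s[t]
        return t

    def unify(a, b, s=None):
        """Plain syntactic unification; returns a substitution or None."""
        s = {} if s is None else s
        a, b = walk(a, s), walk(b, s)
        if a == b:
            return s
        if is_var(a):
            return {**s, a: b}
        if is_var(b):
            return {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) == 2:
            s1 = unify(a[0], b[0], s)
            return None if s1 is None else unify(a[1], b[1], s1)
        return None

    def unify_comm(a, b):
        """Unification modulo one equation: m1∙(m2∙x) = m2∙(m1∙x)."""
        s = unify(a, b)
        if s is not None:
            return s
        if isinstance(a, tuple) and isinstance(a[1], tuple):
            m1, (m2, x) = a
            return unify((m2, (m1, x)), b)    # try the commuted form of a
        return None

    # "tall (handsome guy)" vs "handsome (tall ?x)": equal modulo commutativity
    print(unify_comm(('tall', ('handsome', 'guy')), ('handsome', ('tall', '?x'))))
    # -> {'?x': 'guy'}

Plain unification fails on this pair because the outer modifiers differ; the commuted attempt succeeds, capturing exactly the "tall handsome guy" = "handsome tall guy" equivalence above.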

So we have a neat decomposition:
    calculus of thoughts = calculus of concepts
                         + calculus of propositions

Video: Genifer in 4 minutes

This was made 2 months ago.  Just posting it here so people can find it from our blog:

Saturday, July 23, 2011

Distributive agents, illustrated

The following illustrates how deduction (backward-chaining) is performed.  Forward-chaining works very similarly.  I have left out how the agents find others to answer queries -- that is the routing strategy, which is an optimization problem in its own right.

Agent2 performs only one step, namely the resolution of:
  • P\/Q (the query Agent2 is being asked)
    with
  • ~P\/Q (the rule that Agent2 has in its KB)
which yields the resolvent Q.



This is another illustration, using a common-sense example:


By the way, the implication statement "A implies B":
     nice(X) ← sandals(X)
is classically equivalent to "not A or B":
     ~sandals(X) \/ nice(X).
In backward chaining the query nice(matt) is negated, giving
    ~nice(matt)
which can be resolved against the nice(X) literal by the substitution { X / matt }, yielding the resolvent:
    ~sandals(matt)
i.e. the sub-goal "does matt wear sandals?".  This is why the resolution step works.
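As a minimal Python sketch of this negate-unify-subgoal step (the tuple encoding and the names unify / subst are toy assumptions, not Genifer's actual code):

    # Atoms are tuples ('pred', arg, ...); variables are strings starting with '?'.

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def unify(a, b, s):
        """Tiny first-order unification (no occurs check); returns dict or None."""
        if is_var(a) and a in s:
            return unify(s[a], b, s)
        if is_var(b) and b in s:
            return unify(a, s[b], s)
        if a == b:
            return s
        if is_var(a):
            return {**s, a: b}
        if is_var(b):
            return {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                s = unify(x, y, s)
                if s is None:
                    return None
            return s
        return None

    def subst(t, s):
        """Apply substitution s to term t."""
        if is_var(t):
            return subst(s[t], s) if t in s else t
        if isinstance(t, tuple):
            return tuple(subst(x, s) for x in t)
        return t

    # the rule nice(X) <- sandals(X), i.e. the clause ~sandals(X) \/ nice(X)
    head, body = ('nice', '?X'), ('sandals', '?X')

    # negate the query nice(matt) and resolve it against the head;
    # what remains of the clause is the (negated) body: the new sub-goal
    s = unify(head, ('nice', 'matt'), {})
    if s is not None:
        print(subst(body, s))    # -> ('sandals', 'matt'), the sub-goal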

Thursday, June 16, 2011

Distributive architecture for inference engine (deduction)

Eureka!! This new architecture is much simpler:



Each agent responds to queries and spits out solutions. For example, if you believe that "professors who wear sandals are nice to students" then you listen to queries about "who is nice to students". When there is a hit, you either:
  1. return an answer, if you know as a fact that XYZ is nice to students. 
  2. return a sub-goal, in this case, "does XYZ wear sandals?" and wait for others to answer. 
In case #2, if you get an answer "Professor Matt Mahoney wears sandals", say with TV = 0.9, then you decide how to calculate the TV of the conclusion given that the TV of the premise is 0.9. The only calculation you need to perform is for the rule that you own. Then you return the answer to the asker.
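Here is a minimal sketch of that behaviour, with made-up names (Agent, ask) and a toy multiplicative TV rule standing in for whatever formula the rule's owner actually uses:

    class Agent:
        """Owns one rule head <- body (or just facts) and one TV calculation."""
        def __init__(self, head, body=None, facts=None, tv_rule=0.8):
            self.head, self.body = head, body   # body=None: fact-only agent
            self.facts = facts or {}            # e.g. {'matt': 0.9}
            self.tv_rule = tv_rule              # strength of the owned rule
            self.peers = []                     # routing is not modelled here

        def ask(self, query):
            pred, x = query
            if pred != self.head:
                return None                     # not my kind of query
            if x in self.facts:                 # case 1: answer from a fact
                return self.facts[x]
            if self.body is not None:           # case 2: emit a sub-goal, wait
                for peer in self.peers:
                    tv = peer.ask((self.body, x))
                    if tv is not None:
                        return tv * self.tv_rule   # the only calculation I own
            return None

    nice = Agent('nice', body='sandals', tv_rule=0.8)
    sandals = Agent('sandals', facts={'matt': 0.9})
    nice.peers.append(sandals)
    print(nice.ask(('nice', 'matt')))           # -> 0.72

Note that no agent ever sees the whole proof tree: it emerges from the chain of ask calls, which is the point made below.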

This architecture is so wonderful because there is no need to construct the proof tree anymore. The proof tree seems to have disappeared but it is really implicitly constructed within the network of agents!

Thanks to Matt Mahoney for proposing the CMR (competitive message routing) architecture.

For reference, this is an older design that reveals my thinking:  (This can be seen as a single agent building the proof tree internally while trying to answer one query.  In the new architecture, each agent is responsible for applying only one rule at a time.)

Saturday, May 28, 2011

Self-programming architecture

This is not a new idea -- Ben and Jared of OpenCog have mentioned it before (in the context of MOSES):



Abram seems to have an idea where the GP is replaced by RL (reinforcement learning).

Yesterday I was analyzing the GP + IE idea in more detail:
  1. Let the GP side and the IE side gradually evolve in cycles, starting with $GP_1 + IE_1$.
  2. The key question is whether the cyclic route is faster than hand-coding IE.  Initially it would involve more work, because the GP side needs to be custom-made (we cannot use off-the-shelf GP software).  It may pay off only if $GP_1 + IE_1$ increases programming productivity significantly.
  3. A very weak $IE_1$ cannot increase programming productivity, because GP + weak IE is still too slow to be usable.  For example, one idea is to have IE suggest a number of primitive functions when given a goal, so GP can include those primitives in the genes for that population (a toy sketch of this appears after the list).  But even with current state-of-the-art GP, programs longer than one line cannot be solved efficiently, even if primitives are suggested.
  4. $IE_*$ (the ideal form of IE) will be able to deduce the program when given the desired goal: $$ G:goal \mapsto \{\, P:program \mid P \vdash G \,\} $$ whereas the above $IE_1$ is too weak (it merely suggests primitives similar to the goal): $$ G:goal \mapsto \{\, x \mid x \approx G \,\}. $$ Perhaps we need to find something in between the weak $IE_1$ and $IE_*$.
  5. In other words, we simply have to hand-code $IE_1$ to reach a certain level of functionality before putting it to use with GP.  That basic level seems to include:
    • Ability to express simple plans (so that human teachers can supply basic programming knowledge as decompositions of tasks into sub-tasks)
    • Ability to express similarity and to perform simple associative recall.
    Interestingly, neither the ability to perform deduction nor the ability to calculate truth values seems to be required for $IE_1$.
The new insight may change our priorities during implementation...
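As a toy sketch of the weak-IE idea in point 3: all names here (ASSOC, ie_suggest, gp_search) are hypothetical, and the "GP" is just random search over two-primitive compositions, standing in for a real evolutionary loop:

    import random

    # hypothetical associative memory: goal keyword -> candidate primitives
    ASSOC = {'sort': [sorted, reversed, list], 'sum': [sum, abs, len]}

    def ie_suggest(goal):
        return ASSOC.get(goal, [])

    def gp_search(goal, fitness, generations=200):
        prims = ie_suggest(goal)        # IE seeds the gene pool
        best, best_fit = None, float('-inf')
        for _ in range(generations):
            prog = random.sample(prims, k=min(2, len(prims)))  # a tiny "program"
            f = fitness(prog)
            if f > best_fit:
                best, best_fit = prog, f
        return best

    # example goal: behave like descending sort on a test case
    def fitness(prog):
        x = [3, 1, 2]
        try:
            for fn in prog:
                x = fn(x)
            return 1.0 if list(x) == [3, 2, 1] else 0.0
        except Exception:
            return -1.0

    print(gp_search('sort', fitness))   # typically finds [sorted, reversed]

Even this toy version shows the division of labour: the IE only narrows the primitive set, and the GP searches compositions of what was suggested.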

Wednesday, September 8, 2010

Genifer: Facts and Rules Visualization

depth=3, 2D

depth=4, 3D


This interactive visualization is created by a GeniferGraph adapter class which exposes GeniferLisp's internal data to dANN.  dANN then uses the graph representation to dimensionally embed it in N-dimensional space.

Here is the source code for the GeniferGraph class.  Notice that it uses recursive descent to add all LISP symbols up to a finite depth.
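The class itself is Java (a dANN adapter), so here is only a rough Python sketch of the recursive-descent idea it describes; add_term and the adjacency-dict graph are made up for illustration:

    def add_term(graph, term, depth=0, max_depth=3):
        """Add term and its sub-terms to graph (an adjacency dict), to a finite depth."""
        node = term if isinstance(term, str) else '(' + ' '.join(map(str, term)) + ')'
        graph.setdefault(node, set())
        if depth < max_depth and isinstance(term, (list, tuple)):
            for sub in term:                      # recursive descent into sub-terms
                graph[node].add(add_term(graph, sub, depth + 1, max_depth))
        return node

    g = {}
    add_term(g, ('mother', 'john', 'mary'))       # a nested term standing in for Lisp data
    for node, children in g.items():
        print(node, '->', children)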


Wednesday, September 1, 2010

Genifer Logic Graphs Visualization

Visualized by creating a dANN directed graph from the ABCL (Armed Bear Common Lisp) data structures, and visualizing it with SpaceGraph-J2 + dANN Hyperassociative Map.




Tuesday, August 31, 2010

Self-Reflective Interactive Realities


A.I. SpaceGraph

another reason for going directly to JOGL is that i see an opportunity to make an AI scenegraph
a scenegraph is just a framework for managing what drawing commands are performed
so what i'm saying is that certain parts of dANN could form that framework
which is why i wondered if dANN had anything like a quadtree or octree
it seemed similar to that VectorSet or VectorMap we were discussing
for using GraphDrawing to find contents in a hypersphere
a scenegraph is usually a DAG
i think a more appropriate term is 'SpaceGraph'
not scene
described here http://automenta.com/spacegraph

How does space graph differ from scene graph?

scenegraph is just a term used in computer graphics
i'm making a connection between AI and 2d/3d computer rendering
the other side of it, is input sensing
the mouse cursor for example (and multitouch too) generates a sequence of events
and changes in state
the input logic too could benefit from AI

2D Prototype

just working on a 2D model right now
which would be suitable for graph drawing (and interaction)
it does zooming and panning and now i can do intersection with rectangular areas (a toy sketch of this follows below)

what i'm saying is that we can do even better than ordinary ‘scenegraphs’ potentially
whether it optimizes it or replaces it entirely
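(A toy illustration of the rectangle intersection mentioned above, assuming axis-aligned (x, y, w, h) boxes; the names are made up, not the prototype's actual code:)

    def intersects(a, b):
        """Axis-aligned overlap test for boxes given as (x, y, w, h)."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def pick(nodes, selection):
        """Every node whose box overlaps a dragged selection rectangle."""
        return [n for n, box in nodes.items() if intersects(box, selection)]

    nodes = {'a': (0, 0, 10, 10), 'b': (50, 50, 10, 10)}
    print(pick(nodes, (5, 5, 20, 20)))   # -> ['a']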


Why focus on user-interfaces rather than AI “work”?

because the graph drawing stuff could form the basis for a new desktop UI
and as a result, apps like netbeans would be obsolete
because we could in essence program directly into it
i'm thinking deeply about it because it may hold a chance of accelerating development, in the long long term
if the system can reason about its drawing commands, its input sense, and what it is working with, it will be completely reflectively AI
then the entire system state can be persisted in graphs and shared over the network
no need for the file system, trac, etc anymore
Git might be a good versioning system to use for serialized graphs
(there is JGit pure java impl of Git which can be used to make a virtual filesystem interface)
for example, code and issues would be linked explicitly
so realizing all this, i'm trying to make a simple 2D interactive framework that dANN can hook into this way
for at minimum, graph layout, text input, and drawing connections between things
i mean a user-inputted drawn connection
like sketches or wires
i think i've found a minimum list of UI functionality necessary to achieve this
full graph reflexivity
even typing the letter 'a' would start to bring up associations
'...is the first letter of English alphabet'


Don't you think that for that to happen the AI needs to reach a more mature level?

everything necessary exists right now. it's not really AI, just designing it in a way that it can be utilized by AI components like graphs, neural networks, genetic algorithms, etc
then as soon as we can construct those AI components from within the system, we can accelerate experimentation
because all of the results will be right there


Why move to JOGL instead of Java3D?

for interactive graph panning, zooming, and clicking buttons
JOGL is more direct
Java3D's scenegraph is nearly made obsolete when compared with Ardor3D or JME3
some of its terminology is really unwieldy, like 'viewplatform' etc
anyway Ardor3D and JME use JOGL as their underlying OpenGL interface
https://jogl.dev.java.net/