Mixing Machine Learning and Rule Systems

Rule systems are powerful, flexible tools for describing an agent’s behaviour. However, such a system usually learns only facts; the rules that govern the agent’s behaviour normally remain fixed. In this post we will couple a rule system with machine learning methods to augment an agent’s capabilities. This was basically the main goal of my master’s thesis, about 10 years ago, but we’ll see that the tools for this task have evolved enormously since then.

Drools

Drools is JBoss’ suite for business logic. It includes Drools Expert, a nice rule engine that is usable in other areas, like, of course, AI for online games. Other rule engines are CLIPS and Jess, but CLIPS is slightly harder to integrate with other packages since it’s a C package (please don’t yell too loud at the poor author), and Jess is not open source. Drools also seems to be gaining more and more momentum, adding features like concurrency in the rule engine.

Weka

Weka is Pentaho’s (and originally the University of Waikato’s) component for machine learning. Some will object that Java is not the best choice for statistical number crunching, but Java is perfect for system integration, and Weka includes a lot of different methods one can try out on a specific problem. Furthermore, JNI makes it possible to plug in your top-notch, processor-matched, fine-tuned ATLAS or LAPACK libraries should you need them. The University of Waikato has also moved into adaptive, CEP-like methods with the MOA package, but that’s another story that will be told in another post.

Mixing it all

The starting point will be a very simple Drools rule file:

package com.brainific.learningDemo

import com.brainific.learningDemo.Opponent;
import com.brainific.learningDemo.ClassifierWrapper;
import com.brainific.learningDemo.Action;

rule "Decide what to do when you come across an opponent"
    when
        $opponent: Opponent(identifier:id)
        $classifier: ClassifierWrapper(id == "sample")
    then
        String cl = $classifier.classifyInstance($opponent);
        Action a = new Action();
        a.setOppId(identifier);
        a.setAction(cl);
        insert(a);
end

rule "Act!"
    when
        $action: Action()
    then
        System.out.println("Acting!!! " + $action);
end

Basically, this ruleset takes an opponent in the agent’s “cognitive focus” (aka fact base), classifies it using some Java function, and asserts an action to take. Asserting the action, instead of sending it straight to some underlying engine, allows us to reason further about it. Note that the classifying function could be anything; we’ll see how to derive it from learning examples.
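The ClassifierWrapper class itself is not shown in this post. In the real system it holds a Weka Classifier, turns the Opponent fact into a Weka Instance, calls the classifier (which returns the index of the predicted class value), and maps that index back to a label. Here is a minimal plain-Java sketch of that pattern; the Model interface is a hypothetical stand-in for the Weka classifier so the example stands alone, and the real classifyInstance would take the Opponent fact and extract the features itself:

```java
// Hypothetical stand-in for weka.classifiers.Classifier: takes a feature
// vector and returns the index of the predicted class value.
interface Model {
    int predict(double[] features);
}

public class ClassifierWrapper {
    // Matched by the rule's LHS: ClassifierWrapper(id == "sample")
    private String id;
    // In the real code this would be a weka.classifiers.Classifier.
    private Model clf;
    // Class values in the same order as the ARFF class attribute.
    private final String[] classLabels = { "attack", "flee" };

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public void setClf(Model clf) { this.clf = clf; }

    // Called from the rule's RHS: build a feature vector, ask the model
    // for a class index, and translate it back into an action label.
    public String classifyInstance(double level, double armor, double weapon) {
        double[] features = { level, armor, weapon };
        return classLabels[clf.predict(features)];
    }
}
```

The id field is what lets the rules select among several wrapped classifiers living in the same fact base.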

I’ve seen that ogre before…

Our “sensory system” will provide us with the following information about the opponent:

  • Level: some overall measure of the opponent’s fighting capability
  • Armor: how well protected the opponent is
  • Weapon: what will the opponent use against us
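On the Java side these three attributes, plus an identifier, become fields of a plain Opponent fact. The original class is not listed in the post; this is a minimal sketch with field names assumed from the setters the rule file and the agent code call on it:

```java
// Weapon values mirror the nominal WEAPON attribute in the ARFF file.
enum Weapon { bow, sling, axe, sword, dagger }

public class Opponent {
    private String id;      // e.g. "ogre"; bound by the rule as 'identifier'
    private int level;      // overall measure of fighting capability
    private int armor;      // how well protected the opponent is
    private Weapon weapon;  // what it will use against us

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public int getLevel() { return level; }
    public void setLevel(int level) { this.level = level; }
    public int getArmor() { return armor; }
    public void setArmor(int armor) { this.armor = armor; }
    public Weapon getWeapon() { return weapon; }
    public void setWeapon(Weapon weapon) { this.weapon = weapon; }
}
```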

We will in turn either “attack” the opponent or “flee” from it.
Our agent will have some time to experience different opponents, and whether attacking the opponent was a successful strategy. After a couple of combats, we can summarize the experience in the following ARFF file:

@relation 'opponents'
@attribute LEVEL integer
@attribute ARMOR integer
@attribute WEAPON {bow, sling, axe, sword, dagger}
@attribute class {attack, flee}
@data
1,3,sword,attack
2,3,axe,attack
9,0,bow,flee
8,1,sword,flee

This allows us to create a J48 classifier with this dataset. J48 is Weka’s own implementation of the well-known C4.5 decision tree extraction algorithm. This algorithm will take some classified examples and derive a decision tree that tries to account for as many classified examples as it can, while at the same time keeping the tree simple.

package com.brainific.learningDemo;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;

import weka.classifiers.Classifier;
import weka.classifiers.trees.J48;
import weka.core.Attribute;
import weka.core.Instances;
import weka.core.converters.ArffLoader;

public class LearningSensor {
    //...some code omitted...
    public static Classifier loadClassifier(String file) throws Exception
    {
        // Load the training examples from the ARFF file.
        ArffLoader myLoader = new ArffLoader();
        myLoader.setFile(new File(file));
        Instances opponents = myLoader.getDataSet();
        // The class (label) is the fourth attribute, at index 3.
        opponents.setClassIndex(3);
        // Build an unpruned C4.5 tree; the confidence factor
        // only takes effect when pruning is enabled.
        J48 opponentTree = new J48();
        opponentTree.setUnpruned(true);
        opponentTree.setConfidenceFactor(0.1f);
        opponentTree.buildClassifier(opponents);
        return opponentTree;
    }
}

The LearningSensor class allows us to easily create a new Classifier (which will be wrapped in another class, ClassifierWrapper) from existing examples. Let’s see what happens when this classifier is added to the rule engine with the current example set:

package com.brainific.learningDemo;
// imports removed for clarity

public class RuleEngineAgent {
    public static void main(String[] argv) throws Exception
    {
        // Compile the rule file into a knowledge base.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newInputStreamResource(new FileInputStream("opponents.drl")), ResourceType.DRL);

        if (kbuilder.hasErrors()) {
            System.out.println(kbuilder.getErrors());
            return;
        }
        Collection kpkgs = kbuilder.getKnowledgePackages();
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages( kpkgs );

        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession);

        // Train a classifier from the recorded examples and wrap it as a fact.
        Classifier c1 = LearningSensor.loadClassifier("opponents1.arff");
        ClassifierWrapper cw1 = new ClassifierWrapper();
        cw1.setId("sample");
        cw1.setClf(c1);

        // A freshly sensed opponent.
        Opponent opp = new Opponent();
        opp.setArmor(4);
        opp.setLevel(2);
        opp.setWeapon(Weapon.axe);
        opp.setId("ogre");

        // Insert both facts and let the rules decide.
        ksession.insert(opp);
        ksession.insert(cw1);
        ksession.fireAllRules();
    }
}

The agent’s decision tree, as extracted from the examples, is:

J48 unpruned tree
------------------

LEVEL <= 2: attack (2.0)
LEVEL > 2: flee (2.0)

Quite simply, we attack any opponent of level 2 or below. When we come across the ogre described in the agent class, the agent thinks like this:

OBJECT ASSERTED value:com.brainific.learningDemo.Opponent@14275d4 factId: 1
ACTIVATION CREATED rule:Decide what to do when you come across an opponent activationId:Decide what to do when you come across an opponent [2, 1] declarations: $opponent=com.brainific.learningDemo.Opponent@14275d4(1); $classifier=com.brainific.learningDemo.ClassifierWrapper@2c17f7(2); identifier=ogre(1)
OBJECT ASSERTED value:com.brainific.learningDemo.ClassifierWrapper@2c17f7 factId: 2
BEFORE ACTIVATION FIRED rule:Decide what to do when you come across an opponent activationId:Decide what to do when you come across an opponent [2, 1] declarations: $opponent=com.brainific.learningDemo.Opponent@14275d4(1); $classifier=com.brainific.learningDemo.ClassifierWrapper@2c17f7(2); identifier=ogre(1)
ACTIVATION CREATED rule:Act! activationId:Act! [3] declarations: $action=(3)
OBJECT ASSERTED value: factId: 3
AFTER ACTIVATION FIRED rule:Decide what to do when you come across an opponent activationId:Decide what to do when you come across an opponent [2, 1] declarations: $opponent=com.brainific.learningDemo.Opponent@14275d4(1); $classifier=com.brainific.learningDemo.ClassifierWrapper@2c17f7(2); identifier=ogre(1)
BEFORE ACTIVATION FIRED rule:Act! activationId:Act! [3] declarations: $action=(3)
Acting!!! 
AFTER ACTIVATION FIRED rule:Act! activationId:Act! [3] declarations: $action=(3)

Our brave agent grabs its weapon and dashes head on toward the approaching monster. Shazam!

Another one bites the dust

Unfortunately, our agent has not seen enough action yet. It delves into the action… and fails miserably. After returning to the spawn point, it gathers some more information and updates its example list:

@relation 'opponents'
@attribute LEVEL integer
@attribute ARMOR integer
@attribute WEAPON {bow, sling, axe, sword, dagger}
@attribute class {attack, flee}
@data
1,0,bow,attack
3,1,bow,attack
1,3,sword,attack
2,0,dagger,attack
8,0,dagger,attack
2,1,sword,attack
7,2,dagger,attack
9,0,bow,flee
8,1,sling,flee
7,1,bow,flee
3,3,sword,flee
2,4,axe,flee
4,3,sword,flee
7,4,axe,flee

The agent then loads the classifier obtained from this second dataset, and it gets the following decision tree:

J48 unpruned tree
------------------

WEAPON = bow
|   LEVEL <= 4: attack (2.0)
|   LEVEL > 4: flee (2.0)
WEAPON = sling: flee (1.0)
WEAPON = axe: flee (2.0)
WEAPON = sword
|   LEVEL <= 2: attack (2.0)
|   LEVEL > 2: flee (2.0)
WEAPON = dagger: attack (3.0)
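Read as code, this tree is just nested conditionals. A plain-Java transcription, purely for illustration (in the running agent Weka evaluates the tree internally):

```java
public class SecondTree {
    // Direct transcription of the J48 output above.
    static String decide(String weapon, int level) {
        switch (weapon) {
            case "bow":    return level <= 4 ? "attack" : "flee";
            case "sling":  return "flee";
            case "axe":    return "flee";
            case "sword":  return level <= 2 ? "attack" : "flee";
            case "dagger": return "attack";
            default:       throw new IllegalArgumentException("unknown weapon: " + weapon);
        }
    }
}
```

For the ogre in the example (a level-2 axe wielder) this yields "flee".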

Suddenly the world seems a much more complicated place. Apparently, most of our prior judgments were biased towards swordsmen! Since most combats against axe-wielding opponents ended in failure, the learning phase has taught us that this time we should avoid the ogre in the example:

OBJECT ASSERTED value:com.brainific.learningDemo.Opponent@15b0333 factId: 1
ACTIVATION CREATED rule:Decide what to do when you come across an opponent activationId:Decide what to do when you come across an opponent [2, 1] declarations: $opponent=com.brainific.learningDemo.Opponent@15b0333(1); $classifier=com.brainific.learningDemo.ClassifierWrapper@13b9fae(2); identifier=ogre(1)
OBJECT ASSERTED value:com.brainific.learningDemo.ClassifierWrapper@13b9fae factId: 2
BEFORE ACTIVATION FIRED rule:Decide what to do when you come across an opponent activationId:Decide what to do when you come across an opponent [2, 1] declarations: $opponent=com.brainific.learningDemo.Opponent@15b0333(1); $classifier=com.brainific.learningDemo.ClassifierWrapper@13b9fae(2); identifier=ogre(1)
ACTIVATION CREATED rule:Act! activationId:Act! [3] declarations: $action=(3)
OBJECT ASSERTED value: factId: 3
AFTER ACTIVATION FIRED rule:Decide what to do when you come across an opponent activationId:Decide what to do when you come across an opponent [2, 1] declarations: $opponent=com.brainific.learningDemo.Opponent@15b0333(1); $classifier=com.brainific.learningDemo.ClassifierWrapper@13b9fae(2); identifier=ogre(1)
BEFORE ACTIVATION FIRED rule:Act! activationId:Act! [3] declarations: $action=(3)
Acting!!! 
AFTER ACTIVATION FIRED rule:Act! activationId:Act! [3] declarations: $action=(3)

Our agent flees from the ogre… and lives to gather more information about the world.

Conclusion

In this example, we have seen that machine learning can be integrated into our high-level AI systems to make use of past experience and improve our agents’ actions. Many other methods, like COBWEB’s conceptual clustering or an SVM’s nonlinear classification capabilities, could be useful in other situations that will hopefully be explored in future posts.
