Introducing the Action Framework

After studying many behaviour tree implementations and GOAP prototypes, we decided that Brainific could build a tool that takes the best from both and, hopefully, allows for other interesting structures as well. As we explained in our previous post, they share some common aspects, mainly low-level reactive behaviours, a mechanism to detect when an action succeeds or fails, and a way to select the next action, which makes this approach feasible.

Layered tastes better

DISCLAIMER: As with any work in progress, the architecture of the Action Framework is subject to change. Please comment as much as you want so we can design the kind of engine you really need.

In the Action Framework, we intend to separate the “planning” aspects of the engine from the “reactive” aspects. Planning can be amortized over several frames and interrupted, or even scheduled on a separate core, whereas reactive behaviours need to run very fast and synchronously. So we will fork the “sensory” input when it arrives: one copy will proceed synchronously to the reactive engine, and another will be used to update the planning engine. Note that we will sometimes want to act asynchronously on the planning engine, for example to abort the execution of a planning stage that is no longer necessary. Note also that this separation has been inspired by cognitive architectures.
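
As a rough illustration (in Haskell, the language we settle on below), the fork could look like the following sketch. The types and names here are ours, not a fixed API: one copy of the input drives the reactive engine synchronously within the frame, while a second copy is queued for a planner thread.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM

    data SensoryInput = SensoryInput { health :: Int, enemiesVisible :: Int }

    -- The reactive step runs synchronously inside the caller's frame.
    reactiveStep :: SensoryInput -> IO ()
    reactiveStep input = putStrLn ("reactive step, health = " ++ show (health input))

    -- The planner consumes queued snapshots at its own pace, on another thread.
    plannerLoop :: TQueue SensoryInput -> IO ()
    plannerLoop queue = do
      input <- atomically (readTQueue queue)
      putStrLn ("planner sees " ++ show (enemiesVisible input) ++ " enemies")
      plannerLoop queue

    -- Fork the sensory input: one copy goes straight to the reactive engine,
    -- another is queued for the planning engine.
    onFrame :: TQueue SensoryInput -> SensoryInput -> IO ()
    onFrame queue input = do
      reactiveStep input
      atomically (writeTQueue queue input)

    main :: IO ()
    main = do
      queue <- newTQueueIO
      _ <- forkIO (plannerLoop queue)
      onFrame queue (SensoryInput 42 3)
      threadDelay 100000  -- give the planner thread time to run before exit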

[Figure: Action Framework architecture diagram]

Apart from the reactive engine and the planning engine, we will add an internal memory that both can access. This internal memory acts as a more flexible alternative to the places where one would otherwise use a state machine.
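
A minimal sketch of that idea, assuming the internal memory is modelled as a transactional key/value store that both engines can touch (our illustration, not a fixed interface):

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as Map

    -- The internal memory as a transactional key/value store.
    type InternalMemory = TVar (Map.Map String Double)

    -- A perceptor in the reactive engine writes a derived value...
    noteDangerLevel :: InternalMemory -> Double -> STM ()
    noteDangerLevel mem level = modifyTVar' mem (Map.insert "danger" level)

    -- ...and the planning engine reads it later, possibly from another thread.
    readDangerLevel :: InternalMemory -> STM Double
    readDangerLevel mem = Map.findWithDefault 0 "danger" <$> readTVar mem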

The reactive engine contains two types of reactors: perceptors and behaviours. Behaviours are your bread and butter: they take an input state and output an action, and contain conditions for success and failure. Perceptors, on the other hand, build an abstraction over the purely sensory data, potentially across frames. For example, one can build a “danger” perception that builds up across frames and maxes out when 10 frames pass with health below a 5% threshold. We favor “perceptors” over “sensors” to reflect the structure and high-level function of perception compared to sensation in cognitive psychology. Perceptors can act over internal memory, or send events asynchronously to the planning engine.
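
In code, behaviours and perceptors could be typed along these lines; the record fields, including the “danger” example, are our illustration rather than the framework’s final API.

    data WorldState = WorldState { healthPct :: Double }
    data Action     = Idle | Flee deriving Show

    -- A behaviour maps the input state to an action and carries its own
    -- success and failure conditions.
    data Behaviour = Behaviour
      { act       :: WorldState -> Action
      , succeeded :: WorldState -> Bool
      , failed    :: WorldState -> Bool
      }

    -- A perceptor folds raw sensory data across frames into an abstraction.
    data Perceptor s = Perceptor
      { perceptorState :: s
      , step           :: s -> WorldState -> s
      , output         :: s -> Double
      }

    -- The "danger" perceptor from the text: builds up while health stays
    -- below 5% and maxes out after 10 consecutive such frames.
    dangerPerceptor :: Perceptor Int
    dangerPerceptor = Perceptor
      { perceptorState = 0
      , step   = \framesLow w -> if healthPct w < 0.05 then min 10 (framesLow + 1) else 0
      , output = \framesLow -> fromIntegral framesLow / 10
      }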

The planning engine acts whenever the reactive engine tells it to: either a perceptor detects some kind of “end of action” event (“run to point A” includes the running behaviour, but also the end condition of reaching point A), or someone adds a goal to be fulfilled (e.g. “remove that soldier”, which can translate into different plans, like shooting at him or just distracting him so he runs somewhere else). The planner comes up with the “best thing to do next” and updates the reactive engine accordingly. A planner could be a utility-based behaviour selector, a behaviour tree execution, or a planner proper. Its only requirement is to take the external state, the internal memory, the reactive engine state and its own state, and come up with a change in the active behaviours. The planning engine will usually work asynchronously, unless the whole current planning stage no longer makes sense. This usually happens when a fundamental behaviour has failed (e.g. when an agent cannot reach point A, the current plan beyond point A is now useless), or when there is some more important goal at stake (e.g. throwing away flanking plans to come up with a retreat plan when health is low).
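
That requirement can be captured in a single type. The following is a sketch with placeholder types, not the real interface:

    data ExternalState   = ExternalState
    data MemorySnapshot  = MemorySnapshot
    data ReactiveState   = ReactiveState
    data BehaviourChange = NoChange | Activate String | Deactivate String
      deriving Show

    -- A planner maps the external state, the internal memory, the reactive
    -- engine state and its own hidden state to a change in the active
    -- behaviours plus an updated hidden state.
    newtype Planner s = Planner
      { plan :: ExternalState -> MemorySnapshot -> ReactiveState -> s -> (BehaviourChange, s) }

A utility selector, a behaviour tree tick or a planner proper all fit this shape; only the hidden state differs.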

Enter Haskell

To develop this architecture we need a language that lends itself naturally to the kind of AI mechanisms we want to use. We need both flexibility and as much speed as we can get. In the end the choice was Haskell.

Haskell is a pure, lazy, declarative functional language with a lot of mileage. The language structure itself is deceptively simple, yet it provides a rich and expressive type system (the Hindley-Milner type system, with several extensions available) that catches a lot of errors at compile time. Most constructs in Haskell are either type- or function-related. To newcomers it might look a bit like an outdated Scala; in fact, Scala borrows several of its features from Haskell, making some compromises with uneven results. Haskell also compiles to native code via LLVM, making it appropriate for applications (e.g. videogames or embedded environments) where a virtual machine’s characteristics, such as memory usage, may not be acceptable.

Haskell provides a lot of interesting features:

  • Easy concurrency via several mechanisms (our favourites are Channels and STM transactions), with user threads scheduled by the Haskell runtime (see the sketch after this list)
  • Performance within roughly 2x of equivalent hand-tuned C++ on toy benchmarks (e.g. the Computer Language Benchmarks Game, formerly the language shootout)
  • Multithreaded execution that schedules decoupled user threads (à la a Java Executor, but with preemptive user-thread scheduling as in Erlang)
  • Monads that encapsulate side effects, making it easy to know which of your functions can execute in which contexts, e.g. inside a transaction, inside an imperative computation…
  • Advanced compilation and runtime system providing small CPU and memory footprint (priceless for computer games)
  • Backends for several architectures, including… OpenCL!
  • A powerful, if relatively complex, Foreign Function Interface (FFI) mechanism, usable on both Linux and Windows (with some small tweaks)
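
To make the first point concrete, here is the sketch referenced above: a small example combining a Chan and an STM transaction, where a planner user thread blocks on events while an abort flag (our own invention for the example) lets another thread cancel a planning stage.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Concurrent.STM (atomically, newTVarIO, readTVar, writeTVar)

    data PlannerEvent = GoalAdded String | ActionFinished String deriving Show

    main :: IO ()
    main = do
      events    <- newChan
      abortFlag <- newTVarIO False

      -- Planner user thread: blocks on the channel, checks the abort flag per event.
      _ <- forkIO $
             let loop = do
                   ev    <- readChan events
                   abort <- atomically (readTVar abortFlag)
                   if abort
                     then putStrLn "planning stage aborted"
                     else do
                       putStrLn ("planning for " ++ show ev)
                       loop
              in loop

      writeChan events (GoalAdded "remove that soldier")
      atomically (writeTVar abortFlag True)   -- e.g. a fundamental behaviour failed
      writeChan events (ActionFinished "run to point A")
      threadDelay 100000                      -- let the user thread drain the channel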

After passing the point of maximum slope on the learning curve, advancing in the language was fairly easy. The concurrency constructs are clear and easy to use, user threading just works, and the compiler gives mostly useful diagnostics, mainly related to typing. But we need to tackle the FFI if we want the framework to be used in production environments. In the next post we will detail exactly how the Action Framework can be used from a Windows C++ project.
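
As a small preview (a sketch only; the real integration is the subject of that post), exporting a Haskell entry point through the FFI looks roughly like this. The tickAgent name and signature are placeholders, not the framework’s actual exported API.

    {-# LANGUAGE ForeignFunctionInterface #-}
    module ActionFW where

    import Foreign.C.Types (CInt (..))

    -- A hypothetical per-frame entry point callable from the C++ side.
    tickAgent :: CInt -> IO CInt
    tickAgent agentId = return agentId   -- placeholder body

    foreign export ccall tickAgent :: CInt -> IO CInt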
