PyMLNs: Markov logic networks in Python

by Dominik Jain (jain@cs.tum.edu)
This package consists of:

Prerequisites:

The Graphical Tools

The package includes two graphical tools, whose usage is hopefully self-explanatory: an inference tool (queryTool.py) and a parameter learning tool (learningTool.py). Simply invoke them using the Python interpreter. (On Windows, do not use pythonw.exe to run them, because the console output is an integral part of these tools.)

python queryTool.py
python learningTool.py

General Usage

Both tools work with .mln and .db files in the current directory and will by default write output files to the current directory, too. (Note that when you invoke the tools, the working directory need not be the directory in which the tools themselves are located, which is why I recommend that you create appropriate shortcuts.) The tools are designed to be invoked from a console. Simply change to the directory in which the files you want to work with are located and then invoke the tool you want to use.
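For example, assuming your .mln and .db files live in a directory such as ~/mln/smokers (a made-up path for illustration), you would run:

cd ~/mln/smokers
python queryTool.py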

The general workflow is then as follows: You select the files you want to work with, edit them as needed, or even create new files directly from within the GUI. Then you set any further options (e.g. the number of inference steps to take) and click the button at the very bottom to start the procedure.

Once you start the actual algorithm, the tool window itself will be hidden as long as the job is running, while the output of the algorithm is written to the console for you to follow. At the beginning, the tools list the main input parameters for your convenience, and, once the task is completed, the query tool additionally outputs the inference results to the console (so even if you are using the Alchemy system, there is not really a need to open the results file that is generated).

Configuration

You may want to modify the configuration settings in config.py:

Integrated Editors

The tools feature integrated editors for .db and .mln files. If you modify a file in an internal editor, it will automatically be saved as soon as you invoke the learning or inference method (i.e. when you press the button at the very bottom) or whenever you press the save button to the right of the dropdown menu. If you want to save to a different filename, you may do so by changing the filename in the text input directly below the editor (which is activated as soon as the editor content changes) and then clicking on the save button.

Session Management

The tools will save all the settings you made whenever the learning or inference method is invoked, so that you can easily resume a session (all the information is saved to a configuration file). Moreover, the query tool will save context-specific information:

Command-Line Options

When started from the command line, the tools will, to some degree, interpret and take over Alchemy-style command-line parameters; for example, you can directly select the input MLN file by passing "-i <mln file>" to learningTool.py. Options that cannot be interpreted are added to the "additional options" input.
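For example, the following invocation (with a placeholder file name) preselects the input MLN when the learning tool starts up:

python learningTool.py -i smoking.mln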

Tool-Specific Fields

Query Tool

File Formats

The MLN and database file formats processed by our Python implementation are, for the most part, compatible with those used by the Alchemy system.

General conventions

MLN Files

An MLN file may contain:

Limitations:
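As a simple illustration of the Alchemy-compatible syntax, a small MLN file might look as follows (the predicates, formulas and weights are just the usual made-up smokers example and are not part of the package):

// predicate declarations
Friends(person, person)
Smokes(person)
Cancer(person)

// weighted formulas
1.5  Smokes(x) => Cancer(x)
1.1  Friends(x, y) ^ Smokes(x) => Smokes(y)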

Database/Evidence files

A database file may contain:
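For instance, an evidence database matching the example MLN sketched above might simply list ground atoms (again with made-up constants):

Friends(Anna, Bob)
Friends(Bob, Anna)
Smokes(Anna)
Cancer(Anna)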

Modules

The main functionality of PyMLNs is contained in MLN.py (everything directly related to Markov logic, including inference and parameter learning) and FOL.py (first-order logic). The graphical tools expose only a small fraction of the full functionality. Use Python's built-in help function on the modules to find out more about what is there, or simply take a look at the source files; there is quite a bit of documentation available (though not quite enough).
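For instance, from a Python prompt started in the PyMLNs directory:

import MLN, FOL
help(MLN)   # documentation of the Markov logic classes and functions
help(FOL)   # documentation of the first-order logic classes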

The MLN module also contains a main app – a little helper script – that offers some basic functions that may be useful.

Contact

If you have any questions or comments, please don't hesitate to contact me.