Current git codebases, sorted alphabetically

  • Git codebases have been gathered manually with RADAR from online sources, or from GitHub or similar sites. RADAR is not spidering yet, and we have not yet automatically processed all systems for descriptions, hence only some descriptions are displayed.


The Catalyst Advent Calendar uses the [POD]( format. For each day of the month there is a corresponding POD file in the `root` directory. If you don't feel comfortable writing the article in POD, don't worry: the `examples/` directory of this repository contains a few examples from previous years.


Abot (pronounced *Eh-Bot*, like the Canadians) is a digital assistant framework that enables anyone to easily build a digital assistant similar to Apple's Siri, Microsoft's Cortana, Google Now, or Amazon Alexa. Further, Abot supports a human-aided training backend enabling anyone to build services like Facebook M.


This creates two tfrecord files under the data folder.


This package contains scripts and tools for doing unsupervised acceptability prediction. For a full description of the software, please refer to the publication listed at the bottom of this document. Datasets are hosted on our project website.


This repository hosts a program that derives, validates, and corrects the financial information that it is given. The program uses redundancy to carry out its validations and corrections. By this it is meant that knowledge of parts of a company's financial data imposes certain constraints on the company's other financial data. If the program is given a company's ledger, then it knows what the balance sheet should look like. If the program is given a company's balance sheet, then it has a rough idea of what the ledger should look like.
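The redundancy idea can be sketched with the simplest accounting constraint (a minimal illustration with hypothetical field names; the actual program's data model is far richer):

```python
def check_balance_sheet(assets, liabilities, equity, tol=0.01):
    """Validate the accounting identity: Assets = Liabilities + Equity."""
    return abs(assets - (liabilities + equity)) <= tol

def derive_equity(assets, liabilities):
    """If assets and liabilities are trusted, the identity determines equity,
    so a missing or corrupted equity figure can be corrected."""
    return assets - liabilities
```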


This project implements a subset of the syntax of Attempto Controlled English (ACE) version 6.7 in Grammatical Framework (GF) and ports it to ~20 natural languages (see the Makefile for the currently supported languages). Note that this project does not implement the mapping of ACE sentences to discourse representation structures.


AceRules is a rule engine based on Attempto Controlled English (ACE).


AceWiki is a semantic wiki based on controlled natural language.


* The actual database (a flat sqlite file) needs to be unzipped, of course.
* The database-schema.txt file (in this directory) contains information regarding the database.
* See for instructions on how to reproduce our analyses. This also gives examples of working with the database in Python (and in SQL, since we issue queries directly). Note that this requires the sklearn, numpy, & statsmodels modules to be installed.
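Querying the unzipped SQLite file directly from Python looks like this (the file name is a placeholder, and any real query would use the tables described in database-schema.txt; the `sqlite_master` query below is schema-independent):

```python
import sqlite3

# Path to the unzipped database file (adjust to your local copy).
conn = sqlite3.connect("database.sqlite")
cur = conn.cursor()

# List the tables present; consult database-schema.txt for their columns.
for (table_name,) in cur.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(table_name)

conn.close()
```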


> **Abstract:** We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large dataset of 16k pairs of arguments over 32 topics and investigate whether the relation "A is more convincing than B" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman's correlation in a cross-topic evaluation. We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.


This repository contains code for experiments described in my ACL paper.


In this project, an approximation of ROUGE-N is derived. This approximation is linearly factorizable into the individual scores of sentences, which can then be optimized via Integer Linear Programming (ILP). This repository contains the code for our optimizer, which takes scored sentences and extracts the best summary according to the ROUGE approximation.
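Because the objective decomposes into per-sentence scores under a length budget, the ILP reduces to a knapsack-style selection. A brute-force stand-in (not the repository's optimizer, just an illustration of the objective) looks like this:

```python
from itertools import combinations

def best_summary(sentences, budget):
    """Pick the subset of scored sentences maximizing total score under a
    length budget. `sentences` is a list of (text, score, length) triples
    (a hypothetical input format); an ILP solver replaces this enumeration
    for realistic inputs."""
    best, best_score = [], float("-inf")
    for r in range(len(sentences) + 1):
        for subset in combinations(sentences, r):
            if sum(s[2] for s in subset) <= budget:
                score = sum(s[1] for s in subset)
                if score > best_score:
                    best, best_score = list(subset), score
    return [s[0] for s in best]
```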


> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.


There is a copy of the paper in this repository in the file called `Wilson_ACL_2019.pdf`.


The Adaptive Skip-gram (AdaGram) model is a nonparametric extension of the famous Skip-gram model implemented in the word2vec software, able to learn multiple representations per word capturing different word meanings. This project implements AdaGram in the Julia language.


This repository contains the implementation of algorithms PP, RPP, SDPP, SDRPP, ADPP, and ADRPP described in the article


Aetheria Game Engine is a system for playing text adventure (interactive fiction) games, written in Java. Game worlds are represented in XML, with Beanshell code to account for complex object behaviour. PUCK (Playable Universe Construction Kit) is a graphical IDE that can be used to build such XML files.



Agentpolis is a fully agent-based platform for modeling transportation systems. It comprises a high-performance discrete-event simulation core, a cohesive set of high-level abstractions for building extensible agent-based models and a library of predefined components frequently used in transportation and mobility models. Together with a suite of supporting tools, Agentpolis enables rapid prototyping and execution of data-driven simulations of a wide range of mobility and transportation phenomena.


In this repository, we demonstrate how to use [Agentpolis]( to simulate urban transportation scenarios. It contains a Python script that illustrates how to convert raw OpenStreetMap data to geoJSON format used by Agentpolis. Further, it contains an example Java code that exploits the functionality of Agentpolis to simulate and visualize movement of several vehicles over the road network specified in the input geoJSON files.


* Semantically, a goal marks a certain state of the world an agent _wishes to bring about_ [AgentSpeak, p.40]
* _Achievement goals_ trigger an _achievement goal addition_, which leads to the execution of a corresponding [plan](#plan)
* On agent start, there can exist only one _initial goal_ (like the ```main``` function in Java or C/C++)
* Each agent can track _more than one goal_ at the same time; otherwise the agent idles (the suspending state is not used)
* Goals are triggered by external events, which are matched by the goal name
* Goals are resolved into [plans](#plan) with an equal name (and allowed context); the [plan](#plan) is the instantiation of the goal
* Goals run in parallel, independently of other goals
* A goal is a sequence of [plans](#plan) which must all finish successfully
* A goal is part of exactly one [intention](#intention)
* If a goal can match a [desire](#desire) (the goal is near to the desire), it can add an event to match the desire [belief](#belief)
* If the agent is in a sleeping/hibernate state and the ```wakeup``` method is called, it triggers the wakeup-goal


This repo contains an implementation of Foundation, a framework for flexible, modular, and composable environments that **model socio-economic behaviors and dynamics in a society with both agents and governments**.


This repository contains a [Jupyter Notebook](, which you can see live at []( It collects problems and metrics/datasets from the artificial intelligence and machine learning research literature, and tracks progress on them. You can use it to see how things are progressing in specific subfields or AI/ML as a whole, as a place to report new results you've obtained, as a place to look for problems that might benefit from having new datasets/metrics designed for them, or as a source to build on for data science projects.


# Ai_Papers

This is a catalog for the foundations and emergence of AI research. Understanding the historic development of computational logic from primary sources is useful in gaining insight into the current state of AI.


AIDA is a framework and online tool for entity detection and disambiguation. Given a natural-language text, it maps mentions of ambiguous names onto canonical entities (e.g., individual people or places) registered in the Wikipedia-derived [YAGO2][YAGO] knowledge base.


This repository was the original code base, back in 1995. Since then, the Java and Python versions have become more popular, and this Lisp version is no longer up-to-date. But it is here for whatever use you want to make of it.


This project includes skeletons for the classes and functions needed to solve deterministic logistics planning problems for an Air Cargo transport system using a planning search agent. With progression search algorithms like those in the navigation problem from lecture, optimal plans for each problem will be computed. Unlike the navigation problem, there is no simple distance heuristic to aid the agent. Instead, you will implement domain-independent heuristics. ![Progression air cargo search](images/Progression.PNG)
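One classic domain-independent heuristic counts the goal fluents not yet satisfied in the current state (a simplified sketch of the idea, not the project's required implementation):

```python
def h_unmet_goals(state, goals):
    """Relaxed-problem heuristic: number of goal fluents missing from the
    state. States and goals are sets of ground fluents, e.g. the tuple
    ("At", "C1", "JFK") standing for At(C1, JFK)."""
    return len(goals - state)
```

For example, with cargo C1 at SFO and goals At(C1, JFK) and At(C2, SFO), the heuristic returns 2 until either goal fluent becomes true.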


AIRIS is an Artificial General Intelligence (AGI) project that combines aspects of Reinforcement Learning (RL) with more traditional symbolic techniques (GOFAI).


AIWar is a game that lets you create artificial intelligences to control space ships. The goal is to assemble a fighter army to destroy the enemy base. To do that, you must gather minerals with mining ships and create fighters with your resources. You should also defend yourself against the enemy army. The first team that destroys the enemy base wins the match!


I put together a list of resources at [](, which is a public Instapaper folder I set up to make sharing the list of links easy. The slides will refer to each of these links. I’d recommend having this open in a tab so you can refer back to the links easily.


This project provides a step-by-step walkthrough to help you build a **hands-free** [Alexa Voice Service]( (AVS) prototype in 60 minutes, using wake word engines from [Sensory]( or [KITT.AI]( Now, in addition to pushing a button to "start listening", you can also just say the wake word "Alexa", much like the [Amazon Echo]( You can find step-by-step instructions to set up the hands-free prototype on [Raspberry Pi](../../wiki/Raspberry-Pi), or follow the instructions to set up the push-to-talk only prototype on [Linux](../../wiki/Linux), [Mac](../../wiki/Mac), or [Windows](../../wiki/Windows).


1-Minute Mindfulness from Walking Affirmations is a skill that allows you to take a break from the world around you & enter into a one minute sound meditation.


To use alien, you will need several other programs. Alien is a Perl program, and requires Perl version 5.004 or greater. If you use Slackware, make sure you get Perl 5.004; the Perl 5.003 in Slackware does not work with alien!


Language Acquisition ITS
ALL is a system that supports many tasks of language learning. Knowledge of other languages is deemed essential to education of the mind and, when combined with clear, opens the door to immense quantities of knowledge. ALL supports this task for both written and spoken language (a necessity). It interfaces with bard, clear, and picform.


An [Apache 2.0]( NLP research library, built on PyTorch, for developing state-of-the-art deep learning models on a wide variety of linguistic tasks.


A simplified, highly flexible, commented and (hopefully) easy to understand implementation of self-play based reinforcement learning based on the AlphaGo Zero paper (Silver et al). It is designed to be easy to adopt for any two-player turn-based adversarial game and any deep learning framework of your choice. A sample implementation has been provided for the game of Othello in PyTorch, Keras, TensorFlow and Chainer. An accompanying tutorial can be found [here]( We also have implementations for GoBang and TicTacToe.
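The self-play idea can be illustrated on a toy game (illustrative only; the actual framework drives move selection with MCTS guided by a neural network):

```python
import random

def self_play(n_stones, policy=None, seed=0):
    """One self-play game of toy Nim: players alternate taking 1-2 stones,
    and whoever takes the last stone wins. Returns the trajectory of
    (stones_left, player, move) tuples plus the winner; in AlphaZero-style
    training, these (state, outcome) pairs become the training data and a
    learned policy replaces the random move choice."""
    rng = random.Random(seed)
    player, history = 1, []
    while n_stones > 0:
        moves = [m for m in (1, 2) if m <= n_stones]
        move = policy(moves) if policy else rng.choice(moves)
        history.append((n_stones, player, move))
        n_stones -= move
        player = -player
    # The player who made the last move took the last stone and won.
    return history, history[-1][1]
```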


A list of existing pipelines can be found in `de.mpg.mpi_inf.ambiversenlu.nlu.entitylinking.uima.pipelines.PipelineType`, where you can also define new pipelines.


A Mind Forever Voyaging is a 1985 interactive fiction game written by Steve Meretzky and published by Infocom.


AMR-EAGER [1] is a transition-based parser for Abstract Meaning Representation (


AMR-EAGER [1] is a transition-based parser for Abstract Meaning Representation ( This repository provides an extension of AMR-EAGER to English, Italian, Spanish, German and Chinese. See [2] for a detailed explanation and experiments.


This is a fork of the Amzi! expert systems in Prolog, ported to SWI-Prolog and put in git instead of the awkward file-by-file download on the Amzi site.


This sample demonstrates how to use the Bluetooth LE Generic Attribute Profile (GATT) to transmit arbitrary data between devices.


* open a terminal in the directory with the release files
* set `` as executable and run it


Animanager_ is a command line program for advanced anime watching management.


If you'd like to annotate a file that contains a single document without any SGML markup, add "--sgml f". However, for annotating a large quantity of files this is inadvisable, because loading the Stanford models takes a couple of minutes. It is more efficient to include several documents in one file (and documents should be formatted like parses).


***** Release check-list

- make sure all the bugs are resolved in
- make sure ANTLRWorks is compiled against the correct version of ANTLR and ST sources
- update the ANTLR and ST jar files in main/lib
- change version number (and date when it applies) in these files:
  - main/
  - main/resources/properties/
  - main/plugin/src/org/antlr/works/plugin/properties/
- update history in:
  - main/History
- update online files (ask Terence for the path):
  - index.html
  - update.xml and such files for new versions
- push release notes and such to doc dir
- build ANTLRWorks by running ant on the main build file:
  $ cd main
  $ ant
- verify the following in the main/dist folder:
  - file versions are correct
  - jar file is running fine
  - OS X application is launching fine
- upload files online:
  -
  -
  - antlrworks-1.x.jar
- branch the release in p4 (main -> release/1.x)


This document explains how APE (ACE Parsing Engine) is compiled and used.



This is the source for building the core Amzi! Prolog + Logic Server system.


Aptly is a swiss army knife for Debian repository management.


This package provides a sequence tagger implementation customized for Arabic features, including a named entity detection model especially intended for Arabic Wikipedia. It was trained on labeled ACE and ANER data as well as an unlabeled Wikipedia corpus. Learning is with the structured perceptron, optionally in a cost-augmented fashion. Feature extraction is handled as a preprocessing step prior to learning/decoding.
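The structured perceptron's core update is simple (a sketch with dict-based feature vectors; the package layers cost-augmented decoding on top of this):

```python
def perceptron_update(weights, gold_feats, pred_feats, lr=1.0):
    """One structured-perceptron step: promote the features of the gold tag
    sequence and demote those of the (incorrectly) predicted sequence.
    Feature vectors are sparse dicts mapping feature name to count."""
    for f, v in gold_feats.items():
        weights[f] = weights.get(f, 0.0) + lr * v
    for f, v in pred_feats.items():
        weights[f] = weights.get(f, 0.0) - lr * v
    return weights
```

Features shared by the gold and predicted sequences cancel out, so only the disagreements move the weights.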


This program is a command-line based tool that can be used to analyze systems modelled using the AltaRica language.


[Argdown]( is a simple syntax for analyzing complex argumentation.


This repository contains code for our ACL19's paper [Argument Generation with Retrieval, Planning, and Realization](


> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.


* This site contains supplementary data for the Unshared Task * See [the corresponding call for papers](call-for-papers.txt) and visit the [official workshop website](


This program was created in order to explore Argumentation Logic, a concept created by Prof. Antonis Kakas, Dr. Francesca Toni and Prof. Paolo Mancarella.


# arisu

arisu is a bot for Discord, written for [Let's all love Lain]( in Python using!


Ark-SAGE is a Java library that implements the L1-regularized version of **S**parse **A**dditive **G**enerativ**E** models of Text (SAGE). SAGE is an algorithm for learning sparse representations of text. Details of the algorithm are described in


where the jar file is the one included in the release download. The tagger outputs tokens, predicted part-of-speech tags, and confidences. Use the "--help" flag for more information. On Unix systems, "./" invokes the tagger; e.g.


### Scraping Images from Wikiart

`` will allow you to scrape artworks from wikiart based on their genres. The usage is quite simple. In `` there is a variable called `genre_to_scrape` - simply change it to any of the genres listed on [this page](, or to any of the values in the huge list of comments right after `genre_to_scrape` is defined.


## Introduction

ASCENT is a pipeline for extracting and consolidating commonsense knowledge from the world wide web. ASCENT is capable of extracting facet-enriched assertions, for example, `lawyer; represents; clients; [LOCATION] in courts` or `elephant; uses; its trunk; [PURPOSE] to suck up water`. A web interface of the ASCENT knowledge base for 10,000 popular concepts can be found at


A minimalist framework for developing apps (skills) for the Amazon Echo's SDK: The Alexa Skills Kit (ASK).


ask-alexa-pykit is currently at version 0.3

Latest changes:
- The main change between v0.2 and v0.3 is the removal of the RequestHandler class. I found that its design was not very modular and did not lend itself well to easy use, since it had to be subclassed to add significantly new functionality. Instead I divided the function of the RequestHandler into 3 simple APIs: the Request, the VoiceHandler function, and the ResponseBuilder.
- The Request object contains information about the Alexa request, such as intent, slots, userId, etc.
- A VoiceHandler function (specified with an annotation) takes a request as input, performs some arbitrary logic on top of it, and returns a Response.
- The ResponseBuilder is an encapsulated way to construct responses for a VoiceHandler. A Response can be constructed by calling ResponseBuilder.create_response.
- This way each part of the code has an unambiguous responsibility, hopefully leading to an extremely easy API.
- I had to do a little magic using the inspect module in to make it happen; hopefully the code is not too hard to understand.
- Check out the voice handlers for the new way to map a VoiceHandler to an intent - the new handlers are more like AWS Lambda functions. When writing a new skill, you can simply copy this code, generate the intent schema, and fill out some custom functions in the file.
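The intent-to-handler mapping described above can be sketched like this (illustrative names only, not the library's exact API):

```python
def make_response(speech):
    """Stand-in for ResponseBuilder.create_response: wrap speech in a dict."""
    return {"outputSpeech": speech}

HANDLERS = {}

def intent(name):
    """Decorator registering a voice-handler function for a named intent."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@intent("HelloIntent")
def hello(request):
    # `request` is a plain dict here; the real Request object is richer.
    return make_response("Hello, " + request.get("userId", "friend"))

def dispatch(request):
    """Route an incoming request to the handler registered for its intent."""
    return HANDLERS[request["intent"]](request)
```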


Aurum is a work in progress; we expect to release its first open-source version in the 4th quarter of 2018. We are happy to accept contributions from the community. If you are interested in contributing, take a look at [CONTRIBUTING](../ and feel free to email We also have a code of conduct:


The Automated Programming Framework (APF) is a tool to generate compilations to PDDL such that off-the-shelf classical planners can compute solutions from which we can induce programs or controllers. This is a framework that covers several publications in generalized planning (see [references](#references)), so it includes different compilations in the same code that can be called with configuration files.


This repository holds the source code for the AutoMATES documentation and several component pipelines.


Autoplay is a learning environment for creating agents that play text-based games. Supported games include the popular Zork series and other z-machine interpretable files (specifically the .z5 format). These games are provided as part of this repository.


This diagram illustrates the data flows between components that comprise the AVS Device SDK for C++.


- [[][undo-tree]] - Visualize the whole undo history in a buffer as a tree, and access anywhere in it.
- [[][highlight-symbol]] - Auto/manually highlight the same symbols in code, navigate in them, or replace a string.
- [[][rainbow-delimiters]] - Highlight parentheses, brackets, and braces according to their depth.
- [[][rainbow-mode]] - Colorize color names in buffers.
- [[][visual-regexp]] - Replace via RegExp, with real-time visual feedback directly in the buffer.
- [[][visual-regexp-steroids]] - The same as visual-regexp, but uses modern regular expressions instead of Emacs-style.
- [[][whitespace]] - =[built-in]= Visualize blanks (tab/space/newline).
- [[][linum-relative]] - Display relative line numbers in the left margin.
- [[][prettify-symbol-mode]] - =[built-in]= Display characters as fancy symbols (e.g. =lambda= -> =λ=).
- [[][typo.el]] - Emacs extension for typographical editing.
- [[][highlight-thing]] - Light-weight minor mode to highlight the thing under point using built-ins.
- [[][focus]] - Dim the font color of text in surrounding paragraphs.
- [[][Solaire mode]] - Visually distinguish file-visiting windows from other types of windows (like popups or sidebars) by giving them a slightly different background.
- [[][beacon]] - Never lose your cursor again.
- [[][dimmer.el]] - Interactively highlight which buffer is active by dimming the others.
- [[][volatile-highlights.el]] - Minor mode for visual feedback on some operations in Emacs.
- [[][color-identifiers-mode]] - Minor mode for Emacs that highlights each source code identifier uniquely based on its name.
- [[][yascroll-el]] - Yet Another Scroll Bar Mode.
- [[][goto-line-preview]] - Preview the line when executing the `goto-line` command.
- [[][highlight-parentheses.el]] - Highlight surrounding parentheses.
- [[][literate-calc-mode]] - Display live =calc= results inline.
- [[][math-preview]] - Preview TeX equations inline.


* [AllegroGraph]( - high-performance, persistent graph database that scales to billions of quads
* [Apache Jena]( - open source Java framework for building Semantic Web and Linked Data applications
* [Eclipse RDF4J]( - (formerly known as Sesame) an open source Java framework for processing RDF data. This includes parsing, storing, inferencing and querying of/over such data. It offers an easy-to-use API that can be connected to all leading RDF storage solutions, and allows you to connect with SPARQL endpoints and create applications that leverage the power of linked data and the Semantic Web.
* [GraphDB]( - enterprise-ready Semantic Graph Database, compliant with W3C standards
* [Virtuoso]( - a "Data Junction Box" that drives enterprise and individual agility by deriving a Semantic Web of Linked Data from existing data silos
* [Hoply]( - explore bigger-than-RAM relational data in the comfort of Python


This repo is our research summary and playground for MRC. More features are coming.


This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).


Babel2 is a general framework for implementing and running your agent-based experiments, whether in a simulated environment or embodied in grounded robots. It connects our core technologies such as [Fluid Construction Grammar]( and Incremental Recruitment Language (IRL) with mechanisms for multi-agent interactions, robotic embodiment, cognitive processing and learning. An extensive monitoring system opens up every detail of Babel2’s intermediate representations and underlying dynamics. A modular design ensures that the system can be used in a wide variety of scenarios. It is therefore possible to use each component individually, according to your needs.


Baleen is an extensible text processing capability that allows entity-related information to be extracted from unstructured and semi-structured data sources. It makes available in a structured format things of interest otherwise stored in formats such as text documents - references to people, organisations, unique identifiers, location information.


The visual and textual mentions of a *man* shown in the red text and in the red box refer to the same entity, and they should be linked together. The other visual mentions, i.e., *racket*, *ball* and *logo*, should be linked to different entities. These three entities are not known (i.e., they are not part of the initial knowledgebase **K**), and therefore three new entities of type *racket, ball* and *logo* should be added to the knowledge base, i.e., the **A-box** of **K** should be extended with the assertions *Racket(enew1)*, *Ball(enew2)* and *Logo(enew3)*. The visual and textual mentions of *R.Federer* are also referring to the same entity. However, this time the entity is known (i.e., **YAGO** contains an entity for *R.Federer*) and therefore the two mentions should be linked to the same entity. For the other textual mentions, i.e., *Lukas Lacko*, *Wimbledon*, *London*, *2018*, we already have instances in the **knowledgebase**, so we have to link them to these entities. (For details read our papers: coming soon!)
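The two linking cases above (known entity vs. newly minted entity with a type assertion) can be sketched with a toy A-box (illustrative data structure, not the system's actual implementation):

```python
import itertools

class ABox:
    """Toy A-box storing type assertions such as Racket(enew1)."""

    def __init__(self):
        self.assertions = set()          # set of (type, entity) pairs
        self._ids = itertools.count(1)   # counter for fresh entity names

    def link(self, mention_type, known_entity=None):
        """Link a mention: reuse a known KB entity if given, otherwise
        mint a fresh entity and assert its type."""
        if known_entity is not None:
            return known_entity
        entity = f"enew{next(self._ids)}"
        self.assertions.add((mention_type, entity))
        return entity
```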


OpenAI Baselines is a set of high-quality implementations of reinforcement learning algorithms.


bashlex is a Python port of the parser used internally by GNU bash.


We have a new bottleneck: we're limited by how quickly we can partition/pump our dataset out to the nodes. awk and sort begin to show their limitations (our clever awk script is a bit cpu bound, and @sort -m@ can only merge so many files at once). So we use two little helper programs written in C (yes, I know! it's cheating! if you can think of a better partition/merge using core unix tools, contact me) to partition the data and merge it back.


run\_dha\ ([github]( is a basic example of analysis using BayesDB. For a first test, run the following from inside the top level BayesDB dir


# Bayou

Bayou is a data-driven program synthesis system for Java API idioms that uses the novel technique of Neural Sketch Learning.


The `` is a shell script that automates the [install steps]( for installing BigBlueButton 2.0.


bddem is a library for manipulating Binary Decision Diagrams in SWI-Prolog (


This software realises a mechanism for integrating Belief-Desire-Intention (BDI) reasoning into agents within an agent-based simulation (ABM). The concept is described in the following papers:


This project contains experiments for spelling error prediction. The pre-processing steps for error extraction from learner corpora could also be used for other error types. The experiments are described in detail in the paper "Predicting the Spelling Difficulty of Words for Language Learners". Please use the following citation:


BedSit is a **Bed**rock upon which to build your **Sit**uation driven application. It provides objects and categories that work with either [SitCalc]( or [STRIPState]( allowing you to get on with making your application without having to worry about such details.


- behaviac is a framework for game AI development, and it can also be used as a rapid game prototype design tool
- behaviac supports behavior trees, finite state machines and hierarchical task networks
- Behaviors can be designed and debugged in the designer, then exported and executed by the game
- The designer runs only on Windows platforms. The runtime library is implemented in C++ and C#, and it supports all major platforms (Windows, Linux, Android, iOS, Unity, etc.)
- The C++ version is suitable for the client and server side.
- [Website]( for documents, tutorials, API, FAQ, source code, downloads, etc.
- BehaviacSetup*.exe is the setup package with the binary editor and demo executable. You can download/clone the source code from [github behaviac](


This library includes the following core structures...


This __C++ 14__ library provides a framework to create BehaviorTrees. It was designed to be flexible, easy to use, reactive and fast.


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


The BFG is a simpler, faster ([10 - 720x]( faster) alternative to `git-filter-branch` for cleansing bad data out of your Git repository:


This project is a joint work by Nir Lipovetzky, and Hector Geffner.


# The Bibliotheca Anonoma

The **Bibliotheca Anonoma** is a wiki designed to collect, document, and safeguard the products and history of internet culture, which constitutes **the shared experience of humanity on a network that defines our lives**.


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.


Bixo is an open source Java web mining toolkit that runs as a series of Cascading pipes. It is designed to be used as a tool for creating customized web mining apps. By building a customized Cascading pipe assembly, you can quickly create a workflow using Bixo that fetches web content, parses, analyzes, and publishes the results.


Google's Blockly is a library that adds a visual code editor to web and mobile apps. The Blockly editor uses interlocking, graphical blocks to represent code concepts like variables, logical expressions, loops, and more. It allows users to apply programming principles without having to worry about syntax or the intimidation of a blinking cursor on the command line. All code is free and open source.


This is a simple little chatbot written in Clojure, mostly to have fun and learn about Clojure and also chatbots, AI, you name it. It can either talk through the command-line or connect to an irc server. For the moment, with its default brain, it only accepts simple facts described in SVO sentences with proper names, and simple general rules and queries, as depicted in the example interaction below.


A ttyrec of one Medusa run is in the repo:



## BoxCars116k dataset

The dataset was created for the paper and it is possible to download it from our [website]( The dataset contains 116k images of vehicles with fine-grained labels taken from surveillance cameras under various viewpoints. See the paper [**BoxCars: Improving Vehicle Fine-Grained Recognition using 3D Bounding Boxes in Traffic Surveillance**]( for more statistics and information about dataset acquisition. The dataset contains tracked vehicles with the same label and multiple images per track. The track is uniquely identified by its id `vehicle_id`, while each image is uniquely identified by `vehicle_id` and `instance_id`. It is possible to use the class `BoxCarsDataset` from `lib/` for working with the dataset; however, for convenience, we also describe the structure of the dataset here.

The dataset contains several files and folders:
* **images** - dataset images and masks
* **atlas.pkl** - *BIG* structure with jpeg encoded images, which can be convenient as the whole structure fits in memory and it is possible to get the images on the fly. To load the atlas (or any other pkl file), you can use the function `load_cache` from `lib/`. To decode an image (in RGB channel order), use the following statement.

```python
atlas = load_cache(path_to_atlas_file)
image = cv2.cvtColor(cv2.imdecode(atlas[vehicle_id][instance_id], 1), cv2.COLOR_BGR2RGB)
```


This is a RESTful API for Node.js (version >=0.10.x) as an attempt to create a crowdsourced database for boycotted venues, corporations, organizations, events, etc.


In an attempt to keep all user-facing documentation in one place, please visit the [brat homepage][brat] which contains extensive documentation and examples of how to use and configure brat. We apologise for only providing minimal documentation along with the installation package but the risk of having out-dated documentation delivered to our end-users is unacceptable.


## Data Release

This release consists of some data from a BRAWL prototype. We created a small enterprise network, described below. We then ran a single game using the MITRE CALDERA research project as a red bot.


This is prototype code for building a behaviour tree from examples of expert behaviour. This code is explained in the accompanying paper [Building Behavior Trees from Observations in Real-Time Strategy Games](


The BBN Speech, Language, and Multimedia Group uses an internal Java library of common utility functions written by many people, `bue-common`. We sometimes make releases of open-source software which depend on parts of this library, requiring that certain classes be open-sourced as well. This repository contains the (small) open-source portion of this library.


**Buka** is a modern software application that helps you manage your ebooks with ease. With a simple, clean and straightforward user interface, **Buka** aims to gather your ebooks for a reading experience without hassles. **Buka** currently supports the PDF format, with configurations that help users focus more on the content.


Bundler is a structure-from-motion system for unordered image collections (for instance, images from the Internet). Bundler takes a set of images, image features, and image matches as input, and produces a 3D reconstruction of the camera and (sparse) scene geometry as output. The system, described in [1] and [2], reconstructs the scene incrementally, a few images at a time, using a modified version of the Sparse Bundle Adjustment package of Lourakis and Argyros [3] as the underlying optimization engine.


This is the source code for the agent that [won]( the IEEE CIG 2016 Text-based adventure AI Competition. It has been formatted to work with [autoplay](


A set of analogy tasks of the form A:B::C:D, intended as a benchmark for analogical reasoning and planning. Analogies are augmented with Penn Treebank part-of-speech tags and include both one-to-many and many-to-one relationships. The dataset contains 23,692 analogies in all.


This software is released under the Apache License, Version 2.0. See LICENSE in the project root directory for all details. Portions of this software were originally developed at the United States Naval Academy as NavyTime, and then expanded into CAEVO at the 2013 SCALE Workshop at Johns Hopkins University. Software from Steven Bethard's ClearTK system is also included as separate sieves.


CALDERA is an automated adversary emulation system that performs post-compromise adversarial behavior within Windows Enterprise networks. It generates plans during operation using a [planning system](#planning-system) and a pre-configured adversary model based on the [Adversarial Tactics, Techniques & Common Knowledge]( (ATT&CK™) project. These features allow CALDERA to dynamically operate over a set of systems using variable behavior, which better represents how human adversaries perform operations than systems that follow prescribed sequences of actions.


[C&C tools]( "C\&C tools") is a suite of software for linguistic analysis of the English language, including a tokenizer, several taggers and a parser. [Boxer]( "Boxer") is a tool for deep semantic analysis that takes as input the output of the C\&C parser. Together, the C&C tools and Boxer form a pipeline toolchain to perform a complete analysis of English text. Here is an example:


This assignment considers the Situation Calculus and Planning. It focuses on:

- Formalizing a planning problem, using Situation Calculus to represent the world.
- Implementing the model and verifying its correctness using a planner based on the Golog syntax.
- Extending the model as well as its implementation in order to deal with additional aspects of the environment.


A Java-based Framework for Programming Environments in Agent-oriented Applications.


# CASCADE

CASCADE is a research project at MITRE which seeks to automate much of the investigative work a “blue team” would perform to determine the scope and maliciousness of suspicious behavior on a network using host data.


A simple access example for CAS.


This is the repository for the ACL 2020 paper [Embarrassingly Simple Unsupervised Aspect Extraction]( In this work, we extract aspects from restaurant reviews with attention that uses RBF kernels.


This is the propositional version of the game.


CatMUD is a MUD server (and MUD game) written in Prolog. It is not designed to be robust, nor widely used, so it's probably not going to stand up to a regular MUD environment.


This project uses several libraries that either need to be installed or


This version is able to handle forward and backward application, forward and backward composition and forward type-raising (which is enough to parse sentences written in French)


CEL is a lightweight Description Logic reasoner for large-scale biomedical ontologies. The CEL Plug-in uses the [OWL API]( and lets CEL be used as a plug-in for [Protege](


An incomplete parser for the Prolog programming language


A [Prolog-ish][Prolog] interpreter written in Rust, intended perhaps for use in the compiler, but also for experimentation.


This project is based on two main resources: 1) DeepMind's Oct 19th publication: [Mastering the Game of Go without Human Knowledge]( 2) The great Reversi development of the DeepMind ideas that @mokemokechicken did in his repo:


This is the RPI BLENDER Chinese slot filling system. Definition of slot filling: slot filling aims at collecting from a large-scale multi-source corpus the values (“slot fillers”) for certain attributes (“slot types”) of a query entity, which is a person or some type of organization.[1]


This repository contains a Web extension for Google Chrome/Chromium, Vivaldi, Opera (and other WebExtensions-capable browsers) and a native host messaging connector that provide integration with GNOME Shell and the corresponding extensions repository.


The chunked extractors project is a collection of three extractors.


Cicero is an Open Source implementation of the [Accord Project Template Specification][apspec]. It defines the structure of natural language templates, bound to a data model, that can be executed using request/response JSON messages.


This is the source code for the [CiteSeerX academic digital library.](


The GAMBOL package is a trivially modified extraction of the logic programming portion of the Frolic system written at the University of Utah. I have made a few changes to get it to compile under a modern Common Lisp, in addition to a few style changes that don't alter any functionality.


`cl-ggp` is a tiny framework for writing [general game players][GGP] in Common Lisp.


This is a realization of Marc Kuo's ["modelling approach to OR (operations research)"]( for the Prolog language.


Command Line Artificial Intelligence `CLAI` is an open-source project that aims to bring the power of AI to the command line. Using CLAI, users of Bash can access a wide range of skills that enhance their command line experience. This repository contains the source code and documentation to get you started.


This is the schedule:

```
_Event_0x7f3e9007a690:  0.00 s
global_start_event:     0.00 s
_Event_0x7f3e8f7dcd10:  5.00 s
_Event_0x7f3e8f7fc750: 15.16 s
_Event_0x7f3e8f797cd0: 20.16 s
_Event_0x7f3e8f7c33d0: 25.16 s
_Event_0x7f3e8f7d0a90: 30.16 s
_Event_0x7f3e8f77c410: 35.16 s
_Event_0x7f3e8f7471d0: 47.16 s
_Event_0x7f3e8f747e50: 57.16 s
_Event_0x7f3e8f6d4590: 69.16 s
_Event_0x7f3e8f704450: 79.16 s
_Event_0x7f3e8f695190: 79.16 s
global_end_event:      79.16 s
```


This repository is a simple collection of PDDL files. Currently only classical problems are included, but more are expected to be added in the future.


[]( is a large English lexicon derived from COMLEX. It conforms to the [ACE Lexicon Specification]( and can be used as a drop-in replacement for the (small) lexicon file included in the [APE source distribution](


ClioPatria is an extension of the SWI-Prolog RDF infrastructure (`semweb' package) that provides you with a ready-to-run web-server that can be extended into a full-fledged Semantic Web application. The semweb package provides reading and writing RDF (XML and Turtle), storage and querying by means of rdf(Subject, Predicate, Object). ClioPatria adds the following:


This project is the basis for []( -- a web service that provides access to an automated planner. Please report any bugs or feature requests you may have on the [issue list]( for the project.

This is the official website for CloudForFree.


A collection of tools for manipulating Common Logic texts. See


`--output-dir` is an optional switch that specifies an output directory for the extracted content. If it is not used, cluewebextractor will either not use a directory (if the input is a single file) or use the name of the input directory as the output directory (if the input is a directory).


This native Common Lisp version will be refactored, documented, and modernized yielding a much smaller and easier to modify system. It should also run inferences faster than the layered and semi-interpreted Java version, which emulates a Lisp-like environment (SubL/CycL).




**Coauthor** is a tool for group collaboration, discussion, keeping track of notes/results of meetings, etc., in particular to enable **[supercollaboration](**. Coauthor's primary goal is to ease multiauthor collaboration on unsolved problems in theoretical computer science, so e.g. you'll find LaTeX math support, but it has proved useful in other fields too.


The repository contains the PDDL<->MA-PDDL conversion scripts and competition running scripts.


This package contains COLIN-TRH, a planner for domains with time windows. For more details, see the papers:


This project contains experimental code for classifying opinion and persuasiveness from speech using a vanilla long short-term memory (LSTM) recurrent neural network implementation from Keras.


The oracle file is a YAML-serialised file of the following format:


Many tasks require correct and meaningful communication and integration among intelligent agents and information resources. A major barrier to such interoperability is semantic heterogeneity: different applications, databases, and agents may ascribe disparate meanings to the same terms or use distinct terms to convey the same meaning. Even when software applications use the same terminology, they often associate different semantics with the terms. This clash over the meaning of the terms prevents the seamless exchange of information among the applications. The development and application of ontologies play a central role in achieving semantic integration. An ontology is a computer-interpretable specification that is used by an agent, application, or other information resource to declare what terms it uses, and what the terms mean. Ontologies support the semantic integration of software systems through a shared understanding of the terminology in their respective ontologies.


## Author Compass is written by [Chris Eppstein](
Chris is a software engineer at [LinkedIn]( and a member of the [Sass]( core team.


## Overview The CompCert C verified compiler is a compiler for a large subset of the C programming language that generates code for the PowerPC, ARM, x86 and RISC-V processors.


### Latest release: 2.9.2, 2013-07-30 - cutesy release code name "Practice! Practice! Practice!"

### [Quick Start](

### [What's New?](

### [Road Map](

### Questions? Problems? Just want to talk about computational journalism?

* [Follow @znmeb on Twitter](
* [File an issue on Github](
* [Frontiers of Journalism on](
* [R for Journalists on](


The repository contains scripts and data used in the [Computational Semantics]( course at the University of Groningen.


Answer graph criteria to check for:

1. w is a well-formed CG.
2. w is true if the database is correct.
3. The entire query graph q is covered by a join from w.
4. For every concept in q that has a value, the corresponding concept in w has the same value.
5. For every concept in q that has a question mark, the corresponding concept in w has a value.


This Python package contains a toolset for loading new datasets into ConceptNet 5, and it serves the HTML and JSON Web APIs for it. You don't need it to simply access ConceptNet 5; see for more information.


Concerto is a lightweight 100% JavaScript schema language and runtime. It works in both a Node.js process and in your browser. The browserified version of Concerto is ±280KB. We are working on making it even smaller.


This repository contains the logic of the dialog planner. It is deployed as a Bluemix Python application with a NoSQL database that stores the solutions generated by the planner.


This is a tutorial to help first-time contributors to participate in a simple and easy project.


copernic is a web application implemented (mostly) in Python. It is backed by a versioned triple store database. Time-traveling queries are possible at any point in history, while the latest version remains efficient to query and modify. The versioned triple store is implemented using a novel approach dubbed the generic tuple store. copernic's goal is to demonstrate that versioned databases enable workflows that ease cooperation.
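To illustrate the idea of time-traveling queries, here is a deliberately naive versioned triple store in Python; copernic's generic tuple store is implemented differently, and the example data below is made up:

```python
class VersionedTripleStore:
    """Naive sketch: each triple carries the transaction ids at which it
    was added and (optionally) removed, so queries can target any point
    in history."""

    def __init__(self):
        self.rows = []  # [subject, predicate, object, added_tx, removed_tx]
        self.tx = 0

    def add(self, s, p, o):
        self.tx += 1
        self.rows.append([s, p, o, self.tx, None])

    def remove(self, s, p, o):
        self.tx += 1
        for row in self.rows:
            if row[:3] == [s, p, o] and row[4] is None:
                row[4] = self.tx  # mark as removed at this transaction

    def query(self, s=None, p=None, o=None, at=None):
        """Yield triples alive at transaction `at` (default: latest)."""
        at = self.tx if at is None else at
        for rs, rp, ro, added, removed in self.rows:
            if added <= at and (removed is None or removed > at):
                if all(x is None or x == y
                       for x, y in zip((s, p, o), (rs, rp, ro))):
                    yield (rs, rp, ro)

store = VersionedTripleStore()
store.add("project", "title", "hyperdev")     # tx 1
store.add("project", "title", "copernic")     # tx 2
store.remove("project", "title", "hyperdev")  # tx 3

latest = list(store.query("project", "title"))         # only 'copernic' survives
historic = list(store.query("project", "title", at=1)) # at tx 1, only 'hyperdev'
```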


An implementation of [Douglas Hofstadter]('s Copycat algorithm. The Copycat algorithm is explained [on Wikipedia](, and that page has many links for deeper reading. See also [Farglexandria](


Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.


# COSMOS

Cosmos is an open source semantic search engine that focuses on the retrieval of information from PDF documents. While created with the intention of automating the process of scientific discovery and analysis, the components can be applied generally to stacks of documents.


City of the Damned is a simple fast-paced coffee-break roguelike inspired by a 7DRL entry "City of the Condemned" by Tapio (


This is the GitHub repository for the CountryInfo.txt and related utility programs. CountryInfo.txt is a general purpose file intended to facilitate natural language processing of news reports and political texts. It was originally developed to identify states for the text filtering system used in the development of the Correlates of War project dataset MID4, then extended to incorporate CIA World Factbook and WordNet information for the development of TABARI dictionaries. The file contains about 32,000 lines with country names, synonyms and other alternative forms, major city and region names, and national leaders. It covers about 240 countries and administrative units (e.g. American Samoa, Christmas Island, Hong Kong, Greenland). It is internally documented and almost but not quite XML.


A Web Service for the CPAN
==========================


CPArec is a tool for verifying recursive C programs via source-to-source program transformation. It uses the recursion-free program analyzer CPAChecker as a black box and computes function summaries from the inductive invariants generated by CPAChecker. Such function summaries enable CPArec to check recursive programs.


Description: This program is an ncurses-based console tool to manage passwords and store them public-key encrypted in a file - even for more than one person. The encryption is handled via GnuPG, so the program's data can be accessed via gpg as well, in case you want to have a look inside. The data is stored as zlib-compressed XML, so it's even possible to reuse the data for some other purpose.


This repository contains code in Torch 7 for character-level text classification using convolutional networks. It can be used to reproduce the results in the following article:


This repository contains the code to reproduce the experimental results of the paper [CRF autoencoder for unsupervised dependency parsing]( on the WSJ and PASCAL datasets.


CROMER (CROss-document Main Events and entities Recognition) is a novel web-based tool to manually annotate event and entity coreference across clusters of documents. The tool has been developed so as to handle large collections of documents, perform collaborative annotation (several annotators can work on the same clusters), and enable the linking of the annotated data to external knowledge sources. Given the availability of semantic information encoded in Semantic Web resources, this tool is designed to support annotators in linking entities and events to DBPedia and Wikipedia, so as to facilitate the automatic retrieval of additional semantic information. In this way, event modelling and chaining is made easy, while guaranteeing the highest interconnection with external resources.


In order to compile, you will need to download the SDKs for the particular release you are trying to build. They can be found [here](


This is a small program to help you solve cryptograms.


Crystal is a natural language question answering program. It converts natural text into a semantic representation based on Discourse Representation Theory and performs inferences on the result. Its features include anaphora and presupposition resolution, semantic reasoning through the use of WordNet and VerbNet databases, and logical inference. The application currently covers only a small subset of English, but it is sufficiently interesting to mess around with.


QUASIMODO is a system to extract commonsense knowledge from query logs and QA forums.


Each problem is stored in the `Problems` directory. The best way to get a feeling for how a problem is stored is to look at an existing problem (`Problems/prob001` is a good start).


The RNN output matrix of the **Mini example** testcase contains 2 time-steps (t0 and t1) and 3 labels (a, b and - representing the CTC-blank). Best path decoding (see left figure) takes the most probable label per time-step, which gives the path "--" and therefore the recognized text "" with probability 0.6\*0.6=0.36. Beam search, prefix search and token passing calculate the probability of labelings. For the labeling "a" these algorithms sum over the paths "-a", "a-" and "aa" (see right figure) with probability 0.6\*0.4+0.4\*0.6+0.4\*0.4=0.64. The only path which gives "" still has probability 0.36, therefore "a" is the result returned by beam search, prefix search and token passing.
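The arithmetic above can be checked with a short script that enumerates every path; this brute-force sketch only works for toy examples (real decoders rely on beam search or dynamic programming rather than enumeration):

```python
from itertools import product

# Per-time-step label probabilities of the Mini example
# (labels: 'a', 'b', '-' where '-' is the CTC-blank).
probs = [
    {'a': 0.4, 'b': 0.0, '-': 0.6},  # t0
    {'a': 0.4, 'b': 0.0, '-': 0.6},  # t1
]

def collapse(path):
    """Standard CTC collapse: merge repeated labels, then drop blanks."""
    merged = []
    for c in path:
        if not merged or c != merged[-1]:
            merged.append(c)
    return ''.join(c for c in merged if c != '-')

# Best path decoding: most probable label per time-step -> path '--'.
best_path = ''.join(max(p, key=p.get) for p in probs)
best_text = collapse(best_path)  # recognized text: '' (empty)

# Sum-over-paths decoding: add up path probabilities per labeling.
scores = {}
for path in product('ab-', repeat=2):
    p = probs[0][path[0]] * probs[1][path[1]]
    text = collapse(path)
    scores[text] = scores.get(text, 0.0) + p

# scores[''] sums to 0.36 while scores['a'] sums to 0.64, so 'a' wins.
```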


This repository contains code for the [Contract Understanding Atticus Dataset (CUAD)](, a dataset for legal contract review curated by the Atticus Project. It is part of the associated paper [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review]( by Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball.


CVC4 is a tool for determining the satisfiability of a first order formula modulo a first order theory (or a combination of such theories). It is the fourth in the Cooperating Validity Checker family of tools (CVC, CVC Lite, CVC3) but does not directly incorporate code from any previous version.


This repository demonstrates how to train and test on the CycIC dataset using the popular transformers library from huggingface. The original example scripts can be found at [transformers/examples/multiple-choice/]( Here, they have been extended with an additional data processing class for the CycIC task.


A cross-platform C++ game based on the [D20 System]( from Dungeons and Dragons.


DALI is a meta interpreter built on top of Sicstus Prolog (R) (at the moment).


Dantalian is a Python 3 library to assist file organization and tagging using hard links.


# Darknet

Darknet is an open source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation.


This is the top-level repository for the DART project. Check out the [project webpage]( and [wiki]( for more details.


The DBpedia DataID Unit is a DBpedia group with the goal of describing LOD datasets via RDF files, to host and deliver these metadata files together with the dataset in a uniform way, create and validate such files and deploy the results for the DBpedia and its local chapters. Established vocabularies like [DCAT](, [VoID](, [Prov-O]( and [SPARQL Service Description]( are to be reused for maximum compatibility. This way, we hope to establish a uniform and accepted way to describe and deliver dataset metadata for arbitrary LOD datasets and to put existing standards into practice.


# DataId-Ontology The DBpedia DataID core vocabulary is a meta-data system for detailed descriptions of datasets and their different manifestations. Established vocabularies like DCAT, VoID, Prov-O and FOAF are reused for maximum compatibility to establish a uniform and accepted way to describe and deliver dataset metadata for arbitrary datasets and to put existing standards into practice. In addition DataID can describe the relations of Agents (like persons or organizations) to datasets in regard to their rights and responsibilities.


DATALOG_SOLVE is a new static analyzer which implements a powerful, fully automatable method to evaluate Datalog queries by using Boolean Equation Systems (BESs).


Dataverse is an [open source][] web application for sharing, citing, analyzing, and preserving research data (developed by the [Data Science and Products team]( at the [Institute for Quantitative Social Science]( and the [Dataverse community][]).


DAYDREAMER is a trademark of Erik T. Mueller.


All the original code produced for DBpedia Spotlight is licensed under [Apache License, 2.0]( Some modules have dependencies on [LingPipe]( under the [Royalty Free License]( Some of our original code (currently) depends on GPL-licensed or LGPL-licensed code and is therefore also GPL or LGPL, respectively. We are currently cleaning up the dependencies to release two builds, one purely GPL and one purely Apache License, 2.0.


This module is a collection of predicates and combinators for working with Prolog's definite clause grammars (DCG). As much as possible, I've tried to make these rules symmetric so that you can use them for both parsing and generating.


This takes a long time and isn't friendly for debugging.


Dictionary ---- - Bilingual Dictionary - [CC-CEDICT]( A bilingual dictionary between English and Chinese. - Pronouncing Dictionary - [CMUdict]( The Carnegie Mellon University Pronouncing Dictionary is an open-source machine-readable pronunciation dictionary for North American English that contains over 134,000 words and their pronunciations.


```
100%|████████████████████| 100/100 [00:00<00:00, 147.26it/s]
summary: solved 53/100 (53.0%)
          nb_steps     wall_ms
count   100.000000  100.000000
mean    628.420000   50.401287
std     412.020181   32.888645
min       1.000000    0.053883
25%     175.500000   15.448511
50%     830.000000   70.028543
75%    1000.000000   77.140987
max    1002.000000  102.509022
```

(gas is a limit on the number of nodes explored per problem)


  • Java, with Spring as the framework
  • Machine learning. I use it to determine whether the questions are good enough to examine your understanding in reading the article you submitted
  • Distractor generator algorithm. I use it to generate 4 (four) options as the possible answers. They can be tricky and I think it's good to check whether you really understand the main concept of the article
  • Content extractor. I use it to extract only the important and suitable parts of an article that comes from the URL you submitted
  • Text summarizer. It is a part of Classifier4J, a Java library for text classification. I use it to create a summary of your article
  • Web crawler (spider). I use it to find all pages in a website that contain your requested keyword


This GIT repository accompanies the UKP lectures and seminars on Deep Learning for Natural Language Processing. In contrast to other tutorials, this tutorial focuses on the usage of deep learning methods.


This project contains the source code of DQN 3.0, a Lua-based deep reinforcement learning architecture, necessary to reproduce the experiments described in the paper "Human-level control through deep reinforcement learning", Nature 518, 529–533 (26 February 2015) doi:10.1038/nature14236.


This repository contains code necessary for designing and evolving type systems, and for training neural type systems. To read more about this technique and our results, [see this blog post]( or [read the paper](


A Fact-Validation framework :x: :white_check_mark:


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.


This software (and data set) is intended for students as a way to experiment with machine learning with Weka, and for researchers as a reproducible experiment of DEFT 2013. THIS IS NOT SOMETHING EASY TO USE AND DEPLOY. You need machine learning, Java and Weka skills to use this package. We can provide a little help ... but not too much :-)


This work was developed as the final project for the AI Course Fall 2019/2020 offering at AlexU Faculty of Engineering. It is our official contribution for [Deft Eval Competition Subtask 1]( and runs on its official [dataset]( It was an amazing experience and a great opportunity to learn and explore the NLP world! We would like to thank the organizers of the competition for their great work and for their willingness to help through the forum.


DELiC4MT is a piece of software that performs diagnostic evaluation of Machine Translation systems over linguistic checkpoints, i.e. source-language lexical elements and grammatical constructions specified by the user. For more details, see our paper in the Credits section.


This was inspired by the opencyc bot that @aindalis has set up in #logicmoo on freenode. There is an interesting synergy with the Zulip group chat UX that I think could play well with a knowledge-base-REPL type gizmo.


Depdep is a merciless sentinel which will seek sensitive files containing critical info leaking through your network. Basically, it is a fast and practical sensitive-data search tool maintaining personal & commercial data privacy for companies and institutions. It can very well be used by auditors making sure that their network doesn't leak any unauthorized non-compliant data through Windows & Unix/Linux shares. The usage is easy and configurable; however, certain technical knowledge is necessary, such as using the Linux console and the ability to write and understand basic regular expressions, though the configuration file comes with several sensitive-information patterns, etc.


This package provides a framework for calculating similarity between a pair of dependency parses according to *path overlap*. A very simple example can be run using SolverExample.scala.


We at [NatS]( have a long history of visualizing dependency trees. This library is a spin-off from our dependency parser [jwcdg](, which comes with its own editing and visualization tools.


### Fact Database Fact database is a collection of typed tuples, representing domain knowledge about the world.


A complete list of all the identity labels available can be found [here](


This repository contains implementations of dialog games for abstract argumentation frameworks and for two extensions that I developed during my PhD, namely *abductive* argumentation frameworks and *property-based* argumentation frameworks.


There are five separate input dictionaries or lists that PETRARCH makes use of: the verb dictionary, the actor dictionary, the agent dictionary, the discard list, and the issues list. The following sections describe these files in greater detail. In addition to this documentation, which is intended for individuals planning to work on dictionaries, the source code contains internal documentation on how the dictionary information is stored by the program.


myDIG is a tool to build pipelines that crawl the web, extract information, build a knowledge graph (KG) from the extractions and provide an easy-to-use interface to query the KG. The project web page is [DIG](


This repository contains a set of easy-to-use tools for training, evaluating and using neural WSD models.


This repository contains code for a shift-reduce discourse parser based on rhetorical structure theory. A detailed system description can be found at


The src/ibr/ directory contains the discriminative IBR code. Run src/ibr/ for usage instructions.


Ontology has attracted much attention from both academia and industry. Handling uncertainty reasoning is important in research on ontology. For example, when a patient is suffering from cirrhosis, the appearance of abdominal vein varices is four times more likely than the presence of bitter taste. Such medical knowledge is crucial for decision-making in various medical applications but is missing from existing medical ontologies. In this paper, we aim to discover medical knowledge probabilities from electronic medical record (EMR) texts to enrich ontologies. We first build an ontology by discovering meaningful entity mentions from EMRs. Then, we propose a symptom dependency-aware naïve Bayes classifier that is built on the assumption that there is a particular level of dependency among symptoms. To ensure the accuracy of diagnostic classification, we add the value of the probability of a disease to the ontology in innovative ways.

Results: We conduct a series of experiments to demonstrate that the proposed method can discover meaningful and accurate probabilities for medical knowledge. Based on over 30,000 deidentified medical records, we explore 336 abdominal diseases and 81 related symptoms. Among these 336 gastrointestinal diseases, the probabilities of 31 diseases are obtained through our method. These 31 disease probabilities and 189 conditional probabilities between diseases and symptoms are added to the generated ontology.

Conclusion: In this paper, we propose a medical knowledge probability discovery method based on the analysis and extraction of EMR text data to enrich a medical ontology with probability information. The experimental results show that the proposed method can effectively discover accurate medical knowledge probability information from EMR data. Further, the proposed method can efficiently and accurately calculate the probability of a patient suffering from a specific disease, revealing the advantage of combining the ontology with the symptom dependency-aware naïve Bayes classifier.
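As a rough illustration of the kind of probability the classifier computes, here is a plain naive Bayes posterior over two diseases; all numbers are invented for illustration, and the paper's symptom dependency-aware variant is not reproduced here:

```python
from math import exp, log

# Hypothetical prior and conditional probabilities (NOT from the paper).
priors = {"cirrhosis": 0.01, "gastritis": 0.05}           # P(disease)
cond = {                                                   # P(symptom | disease)
    ("abdominal vein varices", "cirrhosis"): 0.20,
    ("bitter taste", "cirrhosis"): 0.05,
    ("abdominal vein varices", "gastritis"): 0.001,
    ("bitter taste", "gastritis"): 0.10,
}

def posterior(symptoms):
    """Plain naive Bayes: P(d | symptoms) proportional to P(d) * prod P(s | d)."""
    log_scores = {
        d: log(p) + sum(log(cond[(s, d)]) for s in symptoms)
        for d, p in priors.items()
    }
    z = sum(exp(v) for v in log_scores.values())
    return {d: exp(v) / z for d, v in log_scores.items()}
```

With these made-up numbers, observing "abdominal vein varices" makes cirrhosis by far the more probable diagnosis, which is the kind of inference the enriched ontology is meant to support.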


This specific problem has 4 agents called "rover[0-3]", so open agents file and insert the following:


This project is no longer maintained ====================================


The class hierarchy contains two central classes, ``ArgumentComponent`` and ``ArgumentRelation``.


# DKPro Uby DKPro Uby is a Java framework for creating and accessing sense-linked lexical resources in accordance with the UBY-LMF lexicon model, an instantiation of the ISO standard Lexicon Markup Framework (LMF). The software library includes the following modules:


* Go to the [Nvidia website]( and find the latest drivers for your graphics card and system setup. You can download the driver from the website and install it, but doing so makes updating to newer drivers and uninstalling a little messy. Also, doing this will require you to quit your X server session and install from a terminal session, which is a hassle.
* We will install the drivers using apt-get. Check if your latest driver exists in the ["Proprietary GPU Drivers" PPA]( Note that the latest drivers are not necessarily the most stable. It is advisable to install the driver version recommended on that page. Add the "Proprietary GPU Drivers" PPA repository. At the time of this writing, the latest version is 361.42; however, the recommended version is 352:


WebNav is a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site consisting of web pages and hyperlinks to find a web page in which a query appears.


DMOZ is the largest, most comprehensive human-edited directory of the Web. It was historically known as the Open Directory Project (ODP). It contains a categorized list of Web URLs. Its listings are updated on a monthly basis and published in [RDF files](


GDL Translator Readme
---------------------

This directory contains a Python program that translates game definitions from Game Description Language (GDL; usually stored in files with .kif extension) into self-contained Soar agents that simulate the mechanics of the game in working memory and productions. See


This directory contains a Java program that can translate a domain specification written in the Planning Domain Definition Language (PDDL) 1.2 into a Python SML environment and a set of Soar rules that propose legal operators. The program was generated by ANTLR v3.1.3 from a PDDL grammar written by Zeyn Saigol at University of Birmingham.


This file contains the following sections:


An open database of companies, focused on determining subsidiary and branch relationships.


The book is now available in German. It is written in NoWeb and contains


1. Run Stanford CoreNLP with the given bash script **** using the command "*./ path_to_dplp/data*" - This is a little awkward, as I am not sure how to call the Stanford parser from any other directory.


An extension of Prolog that allows rules to be labelled with a belief (a real number between 0 and 1 inclusive) and given a label, so that proofs can be generated with a belief attached to them and rules can be argued about.


This project was originally inspired by Kohlschütter et al, [Boilerplate Detection using Shallow Text Features]( and Weninger et al [CETR -- Content Extraction with Tag Ratios](, and more recently by [Readability](


An extensible network forensic analysis framework. Enables rapid development of plugins to support the dissection of network packet captures.


A story generation system (with choices(!)).


**Everything on the master branch is broken due to the ongoing redesign. And unluckily the latest release is outdated. Please look forward to the next major release.**


ABOUT
`````

Dwarf Fortress is a single-player fantasy game. You can control a dwarven outpost or an adventurer in a randomly generated, persistent world.


This bundle contains the source code for a general game player (for more information on general game playing see written in Java. The player is based on a framework written by Sam Schreiber ( Build files are provided for use with Apache Ant.


### POS data

1. ``{domain}_dependency.pkl`` contains the part-of-speech data for the action name extractor
2. ``{domain}_arg_pos.pkl`` contains the part-of-speech data for the action argument extractor


A pretrained model is available [here](


DreamCoder is a wake-sleep algorithm that finds programs to solve a given set of tasks in a particular domain.


A Web-Mining Causal Relation Butler


This repository contains a partial mapping of Jerry Hobbs and Andrew Gordon's [background theory axioms](, and additional spatial axioms, all developed at USC, for inclusion on the CwC program's [ECIpedia](


edb is a cross platform x86/x86-64 debugger. It was inspired by [Ollydbg]( "Ollydbg"), but aims to function on x86 and x86-64 as well as multiple OS's. Linux is the only officially supported platform at the moment, but FreeBSD, OpenBSD, OSX and Windows ports are underway with varying degrees of functionality.


This software is a new tool based on the Edit Distance Textual Entailment Suite (EDITS). The original version of EDITS can still be found in the SourceForge svn ( Version 2.1 of EDITS is integrated in the system developed by the Excitement project (


The basic concept of an agent used in EIS is that of an agent that performs actions in its environment and receives percepts from that environment. This is a [standard and generic definition of an agent]( as used in Artificial Intelligence.


EISBot is a [StarCraft: Brood War]( bot developed by Ben Weber at [UC Santa Cruz]( as part of his dissertation research. The main objective for the project is to identify the capabilities necessary for expert Starcraft gameplay and to realize these capabilities in a game-playing agent.


Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:


On the technical side of things, EL:DIABLO provides the information and scripts necessary to set up a [virtual machine]( on a user's computer. For those not familiar, this can be thought of as a computer within a computer. EL:DIABLO relies on [Vagrant](, and by extension [VirtualBox](, to set up this virtual environment. These two pieces of software allow for the easy setup and use of a virtual machine. Thus, two of the files contained within EL:DIABLO are a `Vagrantfile`, which gives instructions to Vagrant on how to setup the virtual machine, and ``, which is a [shell script]( that installs the necessary software within the virtual machine.


ELF is an Extensive, Lightweight, and Flexible platform for game research. We have used it to build our Go playing bot, `ELF OpenGo`__, which achieved a 14-0 record versus four global top-30 players in April 2018; the final score was 20-0 (each professional Go player played 5 games).


ELK is an ontology reasoner that aims to support the OWL 2 EL profile. See for further information.


elle (codename lulu) is a simple program that manages and helps clean your computer (currently it only supports Windows). This program is in its infancy and is in no way complete. If you would like to try the current program, do the following:


Elsa is a tool that analyses your code without loading or running it. It can track types and provide helpful hints when things don't match up before you even try to run the code.


This directory tree holds version 27.0.50 of GNU Emacs, the extensible, customizable, self-documenting real-time display editor.


A simpler and more complete alternative to bash-completion.el is to run a bash shell in a buffer in term mode (M-x `ansi-term'). Unfortunately, many Emacs editing features are not available when running in term mode. Also, term mode is not available in shell-command prompts.


chess.el is an Emacs Lisp library and several clients on top of the underlying library functionality for performing various activities related to the game of chess.


This is an FFI for Emacs. It is based on libffi and relies on the dynamic modules work (available on the Emacs 25 branch) in order to be loaded into Emacs. It is relatively full-featured, but for the time being low-level.


Gargoyle is an Emacs module


This is an implementation of the Glulx virtual machine in Emacs Lisp. Since all input and output from Glulx is via the GLK library there is also an Emacs Lisp implementation of the GLK specification.


A simple library for navigating the global and local mark rings in Emacs. Simply execute M-x list-marks for a navigable list of the global-mark-list. The prefix argument can be used to limit the list to the buffer's local mark list.


Emacs Refactor (EMR) is a framework for providing language-specific refactoring in Emacs. It includes refactoring commands for a variety of languages, including elisp itself!


;; -*- mode:org -*- * Emacs-Shroud Interface :PROPERTIES: :ALT_TITLE: Introduction :DESCRIPTION: Shroud secrets manager :END: Shroud is a password manager written in Guile which uses GnuPG in the backend. See Shroud's website at [[][this link.]] This package is an Emacs interface to Shroud using the Buffers User Interface library.


## Overview

**YamlMod** is an emacs-module to parse YAML, written in Rust.


This repository contains the code used to perform the classification experiments described in section 4.2 of our EMNLP15 paper. Please use the following citation:


This project runs experiments comparing the benefit of soft labeling and filtering with label aggregation for learning a classification model on natural language tasks. This project is the experiment code described in the paper "Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks" (Jamison and Gurevych, 2015).


>This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.


> **Abstract:** This article tackles a new challenging task in computational argumentation. Given a pair of two arguments to a certain controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other one. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe convincingness of arguments in the given argument pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, which was cleaned and curated by employing several strict quality measures. We propose two tasks on this data set, namely (1) predicting the full label distribution and (2) classifying types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and Bidirectional LSTM neural networks with convolution and attention mechanism reveal that such a novel fine-grained analysis of Web argument convincingness is a very challenging task. We release the new UKPConvArg2 corpus and software under permissive licenses to the research community.


Empire is a post-exploitation framework that includes a pure-PowerShell2.0 Windows agent, and a pure Python 2.6/2.7 Linux/OS X agent. It is the merge of the previous PowerShell Empire and Python EmPyre projects. The framework offers cryptologically-secure communications and a flexible architecture. On the PowerShell side, Empire implements the ability to run PowerShell agents without needing powershell.exe, rapidly deployable post-exploitation modules ranging from key loggers to Mimikatz, and adaptable communications to evade network detection, all wrapped up in a usability-focused framework. PowerShell Empire premiered at [BSidesLV in 2015]( and Python EmPyre premiered at HackMiami 2016.


This is a collaborative and open Encyclopedia of Proof Systems.


This step requires the entity vectors and the word embeddings to exist. An essential part of our system is the entity vectors (the equivalent of word embeddings for entities). You can create your entity vectors by following the instructions in the [next chapter](#gerbil-evaluation); otherwise you can use the provided pretrained ones. We have pretrained 502661 entity vectors. Specifically, we have trained entity vectors for all the candidate entities from all possible spans of AIDA-TestA, AIDA-TestB, AIDA-Training 1, ACE2004, AQUAINT, MSNBC, Clueweb, DBpediaSpotlight, Derczynski, ERD2014, GERDAQ-Dev, GERDAQ-Test, GERDAQ-TrainingA, GERDAQ-TrainingB, KORE50, Microposts2016-Dev, Microposts2016-Test, Microposts2016-Train, N3-RSS-500, N3-Reuters-128, OKE 2015 Task1, OKE 2016 Task1, and the entity relatedness dataset of (Ceccarelli et al., 2013). In more detail, this is done by considering all possible spans of the document as candidate spans and querying our p(e|m) dictionary for all the candidate entities for each span (we keep only the top 30 for each candidate span).
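The candidate-generation procedure described above can be sketched as follows (a simplification with invented names: `p_e_m` stands in for the p(e|m) dictionary, and the span-length cap is arbitrary):

```python
def candidate_entities(tokens, p_e_m, max_span_len=5, top_k=30):
    """Enumerate every span of up to max_span_len tokens, look up its
    candidate entities in the p(e|m) dictionary, and keep the top_k
    candidates per span ranked by p(e|m)."""
    candidates = {}
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_span_len, len(tokens)) + 1):
            mention = " ".join(tokens[start:end])
            entities = p_e_m.get(mention, {})  # {entity: p(e|m)}
            if entities:
                top = sorted(entities.items(), key=lambda kv: -kv[1])[:top_k]
                candidates[(start, end)] = top
    return candidates

# Tiny toy dictionary (hypothetical values).
p_e_m = {"new york": {"New_York_City": 0.7, "New_York_(state)": 0.3},
         "york": {"York": 0.9}}
print(candidate_entities("i love new york".split(), p_e_m))
```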


This repository contains ENHSP, which stands for Expressive Numeric Heuristic Planner. It is a forward heuristic search planner, but it is expressive in that it can handle:


This repo hosts the code associated with my O'Reilly article, "Textual entailment with TensorFlow: Using neural networks to explore natural language," published on July 17, 2017.


+ is a script to run experiments on "bomb in the toilet" problems.


A collection of [OpenEphyra]( components necessary for question analysis. **Dependencies**: Java, Maven, WordNet. **You may need to set the right locale**, see []( Unlike initial versions relying on LTI repositories, this is a self-sufficient one.


Single-Agent Planner is a complete, logic-based epistemic planner for a single agent that does not make the epistemic closed-world assumption.


This is the source code for the Ergo compiler. Ergo is the [Accord Project][apmain] language for Smart Legal Contracts.


ESBMC, the efficient SMT based model checker, is a software verification tool for C and C++ code bases. The technique is sound but incomplete -- an error found by ESBMC will be correct (modulo errors in the tool), but a successful verification does not guarantee there are no errors.



EternalRocks is a network worm (i.e. self-replicating) that emerged in the first half of May 2017. It spreads through public ([The Shadow Brokers NSA dump]( SMB exploits: `ETERNALBLUE`, `ETERNALCHAMPION`, `ETERNALROMANCE` and `ETERNALSYNERGY`, along with related programs: `DOUBLEPULSAR`, `ARCHITOUCH` and `SMBTOUCH`.


Welcome! EUROPA is a framework to model and tackle problems in Planning, Scheduling and Constraint Programming. EUROPA is typically embedded in a host application. It is designed to be expressive, efficient, extendable and configurable. It includes:

- **A Plan Database:** The technology cornerstone of EUROPA for storage and manipulation of plans as they are initialized and refined. The EUROPA Plan Database integrates a rich representation for actions, states, objects and constraints with powerful algorithms for automated reasoning, propagation, querying and manipulation.
- **A Problem Solver:** A core solver to automatically find and fix flaws in the plan database. It can be configured to plan, schedule or both. It can be easily customized to integrate specialized heuristics and resolution operations.
- **A Tool Box:** EUROPA includes a debugger for instrumentation and visualization of applications. It also includes a very high-level, declarative modeling language for describing problem domains and partial-plans.


# Semantic Typing of Event Processes

This is the repository for the resources in the CoNLL 2020 paper "What Are You Trying To Do? Semantic Typing of Event Processes". This repository contains the source code and links to some datasets used in our paper.


This repository contains both the code and the documentation (i.e. wiki pages) of the next Excitement Open Platform (EOP) release, which is an open source software platform containing state-of-the-art algorithms for recognizing textual entailment relations: _given two text fragments, one named text and the other named hypothesis, the task consists in recognizing whether the hypothesis can be inferred from the text_


This repository contains both the code and the documentation (i.e. wiki pages) of the next Excitement Open Platform (EOP) release. EOP is an open source software platform containing state-of-the-art algorithms for recognizing textual entailment relations: _given two text fragments, one named text and the other named hypothesis, the task consists in recognizing whether the hypothesis can be inferred from the text_


EXEMPLAR is an open relation extraction system originating from a research project at the University of Alberta. Relation extraction is the task of, given a text corpus, identifying relations (e.g., acquisition, spouse, employment) among named entities (e.g., people, organizations). While traditional systems are limited to the relations predetermined by the user, open relation extraction systems like EXEMPLAR are able to identify instances of any relation described in the text.


## What's this?

ExiL (Expert System in Lisp) is a **CLIPS-based expert system building tool** written in Common Lisp, with forward chaining and a very basic backward chaining inference engine. It was developed alongside my computer science master's thesis and is meant for **academic purposes**, not for real-case scenarios (at least yet).


explainshell is a tool (with a web interface) capable of parsing man pages, extracting options, and explaining a given command line by matching each argument to the relevant help text in the man page.
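The core idea, matching each command-line argument against help text extracted from a man page, can be sketched like this (a toy illustration with a hand-written help table, not explainshell's actual parser):

```python
def explain(cmdline, help_table):
    """Match each argument of a command line to its help text, if known."""
    args = cmdline.split()[1:]  # drop the program name
    return [(arg, help_table.get(arg, "(no match in man page)")) for arg in args]

# Toy help table, abbreviated by hand from `ls`'s man page.
ls_help = {"-l": "use a long listing format",
           "-a": "do not ignore entries starting with ."}
for arg, text in explain("ls -l -a", ls_help):
    print(f"{arg}: {text}")
```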


## About DBpedia DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
To check out the projects of DBpedia, visit the [official DBpedia website](


### Acknowledgement

We thank [Choi et al.]( for the release of the Ultra-Fine dataset and the basic model: [](


The package contains the following files:


This directory contains the source of FACTORIE, a toolkit for probabilistic modeling based on imperatively-defined factor graphs. For more information, see [the FACTORIE webpage](


Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.


An easy-to-use and efficient system to support the Mixture of Experts (MoE) model for PyTorch.


Fault Tolerant Router is a daemon, running in background on a Linux router or firewall, monitoring the state of multiple internet uplinks/providers and changing the routing accordingly. LAN/DMZ internet traffic (outgoing connections) is load balanced between the uplinks using Linux *multipath routing*. The daemon monitors the state of the uplinks by routinely pinging well known IP addresses (Google public DNS servers, etc.) through each outgoing interface: once an uplink goes down, it is excluded from the *multipath routing*, when it comes back up, it is included again. All of the routing changes are notified to the administrator by email.
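The monitoring loop boils down to: probe the test IPs through each uplink, then rebuild the multipath route from the uplinks that answered. A minimal sketch of that decision step (invented interface names and helper signatures; not the daemon's actual code):

```python
TEST_IPS = ["", ""]  # well-known IPs, e.g. Google public DNS

def ping(ip, interface):
    """Placeholder: send one ICMP echo out of `interface`; return True on reply."""
    raise NotImplementedError

def up_uplinks(uplinks, probe=ping):
    """An uplink is considered up if any test IP answers through it."""
    return [u for u in uplinks if any(probe(ip, u) for ip in TEST_IPS)]

def multipath_route(active):
    """Build the `ip route` nexthop arguments for the active uplinks."""
    return " ".join(f"nexthop dev {u} weight 1" for u in active)
```

A run of the loop would call `up_uplinks` with a real ping implementation, compare the result to the previous state, and reinstall the multipath route (and email the administrator) only when the set of active uplinks changes.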


Fawkes is a component-based Software Framework for Robotic Real-Time Applications for various Platforms and Domains.


The Facebook CTF is a platform to host Jeopardy and “King of the Hill” style Capture the Flag competitions.


FIBO is a trademark of EDM Council, Inc. It is also standardized by the [Object Management Group](


FIDO is an orchestration layer used to automate the incident response process by evaluating, assessing and responding to malware. FIDO’s primary purpose is to handle the heavy manual effort needed to evaluate threats coming from today's security stack and the large number of alerts generated by them. As an orchestration platform FIDO can make using your existing security tools more efficient and accurate by heavily reducing the manual effort needed to detect, notify and respond to attacks against a network.


This distribution contains the source code for the experiments presented in the following research publication ([PDF](


This is an extension to the old [FIGMENT](


**Android users:** [download the current version of the app]( _Sorry iPhone users, but [the Apple store prevents apps that access WiFi information](, so I will be unable to release an iPhone version._


# FireNet

FireNet is an artificial intelligence project for real-time fire detection.

FireNet is a real-time fire detection project containing an annotated dataset, pre-trained models and inference codes, all created to ensure that machine learning systems can be trained to detect fires instantly and eliminate false alerts. This is part of DeepQuest AI's effort to train machine learning systems to perceive, understand and act accordingly in solving problems in any environment they are deployed in.


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


The f.lux indicator applet `fluxgui` is an indicator applet to control `xflux`, an application that makes the color of your computer's display adapt to the time of day: warm at night, and like sunlight during the day. Reducing blue light exposure in the evening can help you fall asleep at night. See for more details.


This project uses the general clock output to produce frequency modulated radio communication. It is based on an idea originally posted here: [](, but does not use the DMA controller to distribute samples to the output (clock generator), so sound quality is worse than in the PiFm project and only mono transmission is available, but this makes it possible to run it on all kinds of boards.


Fonduer is a framework for building knowledge base construction (KBC) applications from *richly formatted data* and is implemented as a library on top of a modified version of Snorkel_.


Maturaarbeit 2018: This work makes use of deep convolutional neural networks with Keras to classify images into 230 food categories and to output a matching recipe. The dataset contains >400'000 food images and >300'000 recipes from


This is the set of tools and configurations used by the YOLO Real-Time Food Detection article at

This dataset includes mappings to some of the concepts found in: - DBpedia - - FoodOn - Units Ontology - ChEBI


An easy place to browse FoodOn is at []( As well the URI's of terms in the ontology resolve to the comprehensive [Ontobee ontology lookup service]( It is organized according to the upper level BFO ontology, so most terms can be browsed by starting at the OBI "entity" term (e.g. in [Ontobee](


FOSSology is an open source license compliance software system and toolkit. As a toolkit, you can run license, copyright and export control scans from the command line. As a system, a database and web UI are provided to give you a compliance workflow. In one click you can generate an SPDX file, or a ReadMe with all the copyright notices from your software. FOSSology deduplication means that you can scan an entire distro, rescan a new version, and only the changed files will get rescanned. This is a big time saver for large projects.


* If fpm is not helping you make packages easily, then there is a bug in fpm. * If you are having a bad time with fpm, then there is a bug in fpm. * If the documentation is confusing, then this is a bug in fpm.


A CSV transaction export from any of the following banks can be processed by `fpos`


F´ (F Prime) is a component-driven framework that enables rapid development and deployment of spaceflight and other embedded software applications. Originally developed at the Jet Propulsion Laboratory, F´ has been successfully deployed on several space applications. It is tailored but not limited to small-scale spaceflight systems such as CubeSats, SmallSats, and instruments.


This repository contains the Framester resource, the main outcome of the Framester project ( All the RDF files are serialized in TURTLE format. The corresponding triples can also be found uploaded on Framester's SPARQL endpoint available at ( A series of statistics (e.g. number of triples, predicates, classes) are available at (


The FRDCSA ( has been under development for 20 years as of writing ([2020-03-29,02:53:26]). It is a comprehensive free/libre artificial intelligence system. Mainly it collects other A.I. systems and gets them all to talk to each other. However, it has quite a lot of original code as well, maybe over 2 million lines of code. The most important individual project is the Free Life Planner (


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2, as published by the Free Software Foundation.


`FS` is a classical planner that works with the Functional STRIPS planning language [[Geffner, 2000]](#ref-geffner-fstrips-2000), a modeling language based on the quantifier-free fragment of first-order logic that includes constant, function and predicate symbols, but no variable symbols. The increased expressiveness of the Functional STRIPS language with respect to propositional languages such as standard STRIPS (which is indeed subsumed by Functional STRIPS) often results in problem encodings which are more compact, more readable, have fewer ground actions and preserve the structural properties of the problem in a manner which allows the derivation of more effective heuristics.


FSearch is a fast file search utility, inspired by Everything Search Engine. It's written in C and based on GTK+3.


FUEL is a succinct Scala framework for implementing metaheuristic algorithms, in particular evolutionary algorithms. It originated in my work on the book "Behavioral Program Synthesis with Genetic Programming" (Springer 2016).


A read-only tag-filesystem overlay for hierarchical filesystems


Gadgetbridge is an Android (4.4+) application which will allow you to use your Pebble or Mi Band without the vendor's closed source application and without the need to create an account and transmit any of your data to the vendor's servers.


This repository contains resources to support the AIDA Interchange Format (AIF). It consists of:


There is a small interpreter in the statemachine to do the propagation, which has inlined code depending on the number of outputs to be triggered. The ordering of basic blocks generated by the compiler is forced in a way that follows the common code path (about 90% of the time, i.e. when there are no triggers). Ultimately, the implementation has quite a large overlap with Sancho's propnet statemachine, which, since it is documented in detail and seems to be the fastest way to propagate (at this point in time), made it very hard to do anything else. Nevertheless, I experimented a bit with some hybrid propnet/state machines and still think that, given more meta-gaming time, games such as speed chess could get an order of magnitude faster via splitting the network up some more and generating code to replace some of the propnet.


* What? galvanise is a [[][General Game Player]], where games are written in [[][GDL]]. The original galvanise code was converted to a library [[][ggplib]] and galvanise_zero adds AlphaZero style learning. Much inspiration was from Deepmind's related papers, and the excellent Expert Iteration [[][paper]]. A number of Alpha*Zero open source projects were also inspirational: LeelaZero and KataGo (XXX add links).


A hack-and-slash style multi-player dungeon crawl blending the heuristics of NetHack with a combat engine inspired by Minnesota Dungeon (Minneapolis Dungeon, Larry's Maze, et al.).


GAMS is an extension of an earlier project called SMASH.


Gateway is a movement and a project to create a service for cooperative storywriting and textual roleplaying that is free software and belongs to the community.


# A General-Purpose Algorithm for Constrained Sequential Inference

This repository contains the archived code for the CoNLL 2019 paper [A General-Purpose Algorithm for Constrained Sequential Inference](


I am no longer associated with the GDELT project as noted [here](, so I will not continue to update this package. There is a fork of this project [here]( that has some updates available.


This is a parser for GDL (game description language). GDL is a subset of [Datalog](, but when used for GGP (general game playing) it is sent in KIF (knowledge interchange format). This parser focuses on GDL and not KIF for the purpose of GGP and is currently being used in [ggp-rs](


This is a framework for testing the performance of Game Description Language (GDL) interpreters and reasoners used in General Game Playing. It allows for automatically running tests on a wide variety of reasoners across a wide variety of games, with minimal human intervention. It also supplies tools for analyzing the outputs of these tests.




This repository provides code for training and testing state-of-the-art models for grammatical error correction with the official PyTorch implementation of the following paper: > [GECToR – Grammatical Error Correction: Tag, Not Rewrite](
> [Kostiantyn Omelianchuk](, [Vitaliy Atrasevych](, [Artem Chernodub](, [Oleksandr Skurzhanskyi](
> Grammarly
> [15th Workshop on Innovative Use of NLP for Building Educational Applications (co-located with ACL 2020)](


Gekko is a Bitcoin TA trading and backtesting platform that connects to popular Bitcoin exchanges. It is written in JavaScript and runs on [Node.js](


This is the README file for libbash


This library expects latitude and longitude in EPSG:4326 (WGS84). To convert between different projections check out [Proj4js](
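As a concrete example of why the projection matters, converting Web Mercator (EPSG:3857) metres back to EPSG:4326 (WGS84) degrees is a closed-form calculation; below is a minimal Python version using the standard spherical Web Mercator formulas (independent of this library and of Proj4js):

```python
import math

R = 6378137.0  # WGS84 semi-major axis, used as the sphere radius by EPSG:3857

def mercator_to_wgs84(x, y):
    """Convert EPSG:3857 (x, y) in metres to EPSG:4326 (lon, lat) in degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(math.atan(math.sinh(y / R)))
    return lon, lat

print(mercator_to_wgs84(0.0, 0.0))  # → (0.0, 0.0)
```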


This project is a benchmarking platform for entity annotation and disambiguation tools. It has also been extended for Question Answering (see the [`QuestionAnswering` branch](


The Grammatical Framework (=GF) is a grammar formalism based on type theory. It consists of



A simple Prover-based state machine implementation is included in GGP Base, so you don't need to worry about the details of converting a game description into a state machine. To write a gamer based on StateMachineGamer, derive your class from players.gamer.statemachine.StateMachineGamer. Applications like the PlayerPanel should automatically recognize your new class and it should appear in their lists of available players right away.


GGP-Botter is a GGP Bot framework written in SWI-Prolog. It provides an interface for communication with GGP Server, as well as some helper functions (TODO) which will come in handy when creating your own bot.


`ggp-rs` is a library for creating GGP (general game playing) players in Rust that is based on [GGP Base]( While GGP Base allows the creation of players backed by a propositional network or a logic prover, this library currently only supports logic prover based players. The performance of this logic prover is comparable to the one in GGP Base.


Although many games have been trained, there is a multitude of games left to try. There are some game types which are completely unsupported right now, for starters:


A General Game Playing Engine using YAP Prolog


Sometimes forensic investigators need to process digital images as evidence. There are some tools around, but otherwise it is difficult to carry out forensic analysis when lots of images are involved. Images contain tons of information; Ghiro extracts this information from the provided images and displays it in a nicely formatted report. Dealing with tons of images is pretty easy: Ghiro is designed to scale to support gigs of images. All tasks are totally automated; you just have to upload your images and let Ghiro do the work. Understandable reports and great search capabilities allow you to find a needle in a haystack. Ghiro is a multi-user environment, and different permissions can be assigned to each user. Cases allow you to group image analyses by topic, and you can choose which users are allowed to see your case via a permission schema.


`git-secret` is a bash tool which stores private data inside a git repo. `git-secret` encrypts files with permitted users' public keys, allowing users you trust to access encrypted data using pgp and their secret keys.


gitfs is a [FUSE]( file system that fully integrates with git. You can mount a remote repository's branch locally, and any subsequent changes made to the files will be automatically committed to the remote.


An analysis and visualization of collaboration between top GitHub repositories, focused on the relationship between programming languages used and the network structure.


gitRecommender
==============

Final project for Artificial Intelligence. It is a recommender system that will suggest GitHub repositories you might be interested in.


Gitrob is a tool to help find potentially sensitive files pushed to public repositories on Github. Gitrob will clone repositories belonging to a user or organization down to a configurable depth and iterate through the commit history and flag files that match signatures for potentially sensitive files. The findings will be presented through a web interface for easy browsing and analysis.


This command downloads the latest GNES image (based on [Alpine Linux]( and runs it in a container. When the container runs, it prints an informational message and exits.


This is a set of scripts for manipulating GnuCash XML files.


A fast VNC driver.


This repository contains datasets for goal and plan recognition as planning.


This repository contains computer-assisted formalizations of ontological proofs.


This is a Golog interpreter written in Haskell and applications of it. [Golog]( is an action language based on the [situation calculus]( There are many dialects of Golog; this is one of them.


GOPHI (*Generation Of Parenthesized Human Input*) is a system for generating a literal reading of Abstract Meaning Representation (AMR) structures. The system, written in [SWI-Prolog]( "SWI-Prolog"), uses a symbolic approach to transform the original rooted graph into a tree of constituents that is transformed into an English sentence by [jsRealB]( "GitHub - rali-udem/JSrealB: A JavaScript bilingual text realizer for web development").


Gourmet Recipe Manager is a manager, editor, and organizer for recipes. It has a plugin architecture which allows you to enable extensions to Gourmet's base functionality. For example, there is a nutritional plugin that allows Gourmet to help you calculate nutritional information for any recipe. There are also a wide variety of import and export plugins that let Gourmet read and write recipes in various formats.


`gp-ark-tweet-nlp` is a PL/Java Wrapper for [`Ark-Tweet-NLP`]( - a state-of-the-art parts-of-speech tagger for Twitter. This package enables you to perform part-of-speech tagging on Tweets, using SQL. If your environment is an MPP system like Pivotal's Greenplum Database you can piggyback on the MPP architecture and achieve implicit parallelism in your part-of-speech tagging tasks.


## Introduction

GOAP, or Goal-Oriented Action Planning, is a powerful tool for creating game AI. For the details I will refer to [Jeff Orkin's collection of articles]( But in short: GOAP lets computer-controlled characters (NPCs) make action plans that achieve desired goals, and it does so in a highly maintainable, easily extendible, highly modular fashion. A naive implementation of AI code will invariably blow up for any non-trivial problem; GOAP, on the other hand, is robust and unlikely to buckle under large complexity. This software implements GOAP in the C programming language, and does so in a generic fashion that makes it suitable for many projects.
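The planning loop described above can be sketched in a few lines of Python (the library itself is C; the function and the toy "get warm" domain below are invented purely for illustration):

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search from the start state to any state satisfying the goal.

    States are frozensets of facts; each action is a tuple
    (name, preconditions, facts added, facts removed)."""
    start = frozenset(start)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:              # every goal fact holds
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:           # action is applicable here
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                        # goal unreachable

# Toy domain: an NPC that wants to get warm.
actions = [
    ("get_axe",   frozenset(),             frozenset({"has_axe"}),  frozenset()),
    ("chop_wood", frozenset({"has_axe"}),  frozenset({"has_wood"}), frozenset()),
    ("make_fire", frozenset({"has_wood"}), frozenset({"warm"}),     frozenset({"has_wood"})),
]
print(plan(set(), frozenset({"warm"}), actions))
# ['get_axe', 'chop_wood', 'make_fire']
```

Real GOAP implementations typically use A* with action costs rather than plain breadth-first search, but the state/precondition/effect structure is the same.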


This project is a multi-core GPGPU (general purpose graphics processing unit) IP core, implemented in SystemVerilog. Documentation is available here: Pull requests/contributions are welcome.


You can read about GPT-2 and its staged release in our [original blog post](, [6 month follow-up post](, and [final post](


This dataset contains:
- 250K documents from the WebText test set
- For each GPT-2 model (trained on the WebText training set), 250K random samples (temperature 1, no truncation) and 250K samples generated with Top-K 40 truncation


GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.


An implementation of training for [GPT2]( that supports both GPUs and TPUs. The dataset scripts are a bit hacky and will probably need to be adapted to your needs.

## Requirements

For GPUs:


Building intelligent systems starts at the database. Grakn is an intelligent database: a knowledge graph engine to organise complex networks of data and make it queryable.


A collection of grammars to write lexers, parsers, compilers for various languages and purposes.


This repository is a collection of Antlr4 grammars.


# Graph2Seq

Graph2Seq is simple code for building a graph encoder and sequence decoder for NLP and other AI/ML/DL tasks.


Graphbrain is an Artificial Intelligence open-source software library and scientific research tool. Its aim is to facilitate automated meaning extraction and text understanding, as well as the exploration and inference of knowledge.


This directory contains code necessary to run the GraphSage algorithm. GraphSage can be viewed as a stochastic generalization of graph convolutions, and it is especially useful for massive, dynamic graphs that contain rich feature information. See our [paper]( for details on the algorithm.
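The neighbourhood-aggregation idea behind GraphSage can be shown with a toy NumPy sketch (this is not the library's API; the function name and the tiny graph are invented, and the mean aggregator here is untrained where the real layers learn weight matrices):

```python
import numpy as np

def mean_aggregate(features, neighbours):
    # Each node's new representation concatenates its own feature vector
    # with the mean of its neighbours' feature vectors.
    out = []
    for node, nbrs in enumerate(neighbours):
        nbr_mean = features[nbrs].mean(axis=0) if nbrs else np.zeros_like(features[node])
        out.append(np.concatenate([features[node], nbr_mean]))
    return np.stack(out)

feats = np.eye(3)              # 3 nodes with one-hot features
adj = [[1], [0, 2], [1]]       # a simple path graph 0-1-2
print(mean_aggregate(feats, adj).shape)  # (3, 6)
```

GraphSage's "stochastic" aspect comes from sampling a fixed-size subset of `nbrs` instead of using every neighbour, which keeps the cost bounded on massive graphs.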


GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents such as PDFs into structured TEI-encoded documents, with a particular focus on technical and scientific publications. Development started in 2008 as a hobby, and in 2011 the tool was made available as open source. Work on GROBID has been a steady side project since the beginning and is expected to continue until at least 2020 :)


## Motivation

A household needs to be managed. I did this so far (almost 10 years) with my first self-written software (a C# Windows Forms application) and with a bunch of Excel sheets. The software is a pain to use, and Excel is Excel. So I searched for and tried different things for a (very) long time; nothing fitted 100 %, so this is my attempt at a "complete household management" thing. ERP your fridge!


Welcome to the hack.guides() content repository. This repository contains published and unpublished versions of awesome technical guides written by our community. You can browse all the guides right here or head over to our [companion site]( for a more focused reading experience.


__GUILE_LOG__

What it is: Guile-log is a logic programming framework with strong continuation support, meaning that stalling an algorithm is well supported. It also sports most of the logic programming features you see in common Prolog systems such as SWI-Prolog. Guile-log comes with a Prolog engine and a miniKanren engine, as well as an internal Scheme interface to logic programming, which is the guile-log interface.



This is the framework for the General Video Game Competition 2014 -


This is the framework for the General Video Game Competition 2014 -


**OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms.** This is the ``gym`` open-source library, which gives you access to a standardized set of environments.


This paper introduces an approach to human-aware epistemic planning in which a rational intelligent agent plans its actions for encouraging a human to proceed in a social virtual reality (VR) environment. In order to persuade the human user to execute specific actions, the agent adapts the virtual environment by adjusting motivators in the environment. The agent's model of the human is based on the theory of planned behavior (TPB), a cognitive theory to explain and predict human behavior. The intelligent agent manipulates the environment, a process where the agent conducts epistemic actions, i.e., adapting the environment and observing human responses, in order to understand the human's behavior and encourage human actions. An action reasoning framework is introduced that defines transitions between goal-oriented human activities in the virtual scenario. The proposed human-aware planning architecture can also be applied in environments that are not virtual, by utilizing modern mobile devices which have built-in sensors that measure motion, orientation, and various environmental conditions.


This is a work-in-progress repository for the CLiPS HAte speech DEtection System (HADES).


This repository contains the code and data to reproduce the experiments of the paper "[Fine-grained Entity Recognition with Reduced False Negatives and Large Type Coverage](".


This repository is the Tensorflow implementation of the Handwriting Recognition System described in [Handwriting Recognition of Historical Documents with Few Labeled Data]( Please cite the paper if you use this code in your research paper.


This environment creates a simple whiteboard showing messages that can be written there by the entity that it creates.


Helm is a fork of `anything.el`, originally written by Tamas Patrovic, and can be considered its successor. `Helm` sets out to clean up the legacy code in `anything.el` and provide a cleaner, leaner and more modular tool that's not caught in the trap of backward compatibility.


The **Peter Moss Leukemia AI Research HIAS Network** is an open-source Hospital Intelligent Automation System. The system's server powers an intelligent network using a locally hosted, encrypted IoT server and proxy.


A Hadoop script for automatically extracting the needed messages and cleaning them is available in prepare_data/hadoop/. It expects to find reddit_comments and reddit_submission in the user's home directory. If you opt to extract the messages yourself rather than using Hadoop, you will need to run prepare_data/ to clean the messages' text.


This is the distribution directory for the Kananaskis release of HOL4. See for online resources.


Home Assistant is a home automation platform running on Python 3. The goal of Home Assistant is to be able to track and control all devices at home and offer a platform for automating control.

This is the source for the [ website](


In his book *Proofs and Refutations*, Lakatos identifies seven methods by which mathematical discovery and justification can occur. These methods suggest ways in which concept definitions, conjectures and proofs gradually evolve via interaction between mathematicians. Different mathematicians may have different interpretations of a conjecture, examples or counterexamples of it, and beliefs regarding its value or theoremhood. Through discussion, concepts are refined and conjectures and proofs modified. For instance, when a counterexample is found, one might look for general properties which make it fail a conjecture, and then modify the conjecture by excluding that type of counterexample (piecemeal exclusion). Alternatively, one might generalise from the positives and then limit the conjecture to examples of that type (strategic withdrawal). Another reaction might be to deny that the object is a counterexample on the grounds that the conjecture refers to objects of a different type (monster barring). Given a faulty proof, a counterexample may be used to highlight areas of weakness in the proof, and to either modify the proof or the conjecture which it purports to prove (lemma incorporation).


The [tp-link Wi-Fi Smart Plug model HS100]( is an embedded Linux computer with a Wifi chip, a 110/220 V AC relay with a 15 A current limit, and a US-style grounded electrical socket. You pair with it by establishing an ad-hoc network between the plug and a smartphone (also called Wifi direct). After giving your router's SSID and access information, the plug connects to it and you can control the plug with the app provided by tp-link, called Kasa. One downside of using Kasa is that it's really not much more than a wall-switch in an app, though it does have pretty rich timer features which are nice. But you can't do things like turn the light on or off in response to events on the internet. Tp-link does provide a network control mode, but you have to pass control of your plug over to them, which isn't particularly great if you endeavor to remain the master of your own domain, haha only serious.


This is HT 2.1.0; Have fun...


HTNTranslation is a program for translating [Hierarchical Task Network]( problems into [PDDL]( This is an extension of the work described in "[Translating HTNs to PDDL](," handling both totally ordered and partially ordered subtasks.


This module is a pure Perl HTTP proxy.


This is a package for GNU Emacs that can be used to tie related commands into a family of short bindings with a common prefix - a Hydra.


A vectorial representation for every ingredient and recipe was generated using Word2Vec. An SVC model was trained to predict recipes' cuisines from their sets of ingredients. South Asian, East Asian and North American cuisines were predicted with more than 73% accuracy. African, Southern European and Middle Eastern cuisines contain the highest number of cancer-beating molecules. Finally, a web application was developed that can predict the ingredients from an image, suggest new combinations, and retrieve the cuisine the recipe belongs to, along with a score for the expected number of negative interactions with antineoplastic drugs (


This is a mid-development snapshot. The target release date for Version 2 is 1 September 2013.





Contained within the Example Project folder of this repository, there is an example Java Eclipse project, which contains a minimal `Agent` that explores the game through random movements.


This repository contains the first version of the IGGP dataset, which is discussed in detail in the paper:



- The end result should have two files and one directory (names can be changed in ``):
  - `im2latex.lst`
    - Each line is in the format `formula_idx image_name render_type`
      - formula_idx is the line number where the formula is in `im2latex_formulas.lst`
      - image_name is the name of the image connected to this rendering (without '.png')
      - render_type is the name of the render setup used, defined in ``
  - `im2latex_formulas.lst`
    - Each line contains one formula
  - `/formula_images`
    - Directory where images are stored
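A minimal Python reader for the `im2latex.lst` mapping format described above (the sample line below is fabricated for illustration):

```python
def parse_mapping_line(line):
    """Split one im2latex.lst line into (formula_idx, image_name, render_type)."""
    formula_idx, image_name, render_type = line.split()
    return int(formula_idx), image_name, render_type

# Made-up example: formula on line 42, rendered to 7a9f3b1c2d.png with setup "basic".
idx, image, render = parse_mapping_line("42 7a9f3b1c2d basic")
print(idx, image + ".png", render)
```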


A general-purpose, deep learning-based system to decompile an image into presentational markup. For example, we can infer the LaTeX or HTML source from a rendered image.


This repository contains the code to train and evaluate models from the paper: _Learning Cross-modal Embeddings for Cooking Recipes and Food Images_


IMPLIE (IMPLicit relation Information Extraction) is a program that extracts binary relations from English sentences where the relationship between the two entities is not explicitly stated in the text. IMPLIE supports the following target relations out of the box: *has nationality*, *has job title*, *has province*, *has city*, and *has religion*. However, other relations can be supported by providing a list of keywords for a new target relation. This is possible because IMPLIE uses a target-independent syntactic language model.


This is the root of the IndiGolog system. There are a few things you should


Inductor Parser
===============

The Inductor Parser is a simple-to-use C++ template-based parser. It is small and easy to understand, debug and extend.


The following features are for sure *not* in the Inductor Prolog engine (this is not an exhaustive list):
- asserting or retracting anything besides a fact
- declaring a function as dynamic like `dynamic(myRule/1)`: anything can be changed in IndProlog, so this declaration is not necessary
- `;` (or)
- `->` (if)
- syntax like `a == b` instead of `==(a, b)`
- `"` inside comments. Use `"This is a quote 'inside another quote' "` instead
- any metaprogramming features or rules like `call`


INDUS is a project for knowledge acquisition and data integration from heterogeneous distributed data, particularly from bioinformatics databases. This is migrated from


Infer is a static analysis tool for Java, Objective-C and C, written in [OCaml]( Check out the documentation at . See []( for a quick overview of the files in `infer/bin`.


*InferSent* is a *sentence embeddings* method that provides semantic representations for English sentences. It is trained on natural language inference data and generalizes well to many different tasks.


This is version 6.33 of the Inform compiler, copyright (c) Graham Nelson 1993-2014. Full release notes and instructions are available at


This is a Java command line application encapsulated within an Eclipse project. It provides a TCP/IP-based server for communication with the [R5 Robot] and, within it, the Instinct Planner. The R5 Robot also requires the [Instinct Planner].


This module provides similar functionality for Prolog. It uses the same syntax as the Unix shell, Perl, PHP, Tcl, etc.: namely, a local variable name prefixed with `$`. Interpolation is supported in all of the following string types:
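For readers more familiar with Python than Prolog, the `$`-prefixed convention described above is the same one Python's standard `string.Template` implements, shown here purely as an illustration of the syntax:

```python
from string import Template

# Illustration only: string.Template substitutes $-prefixed variable names,
# the same style as the shell/Perl/PHP/Tcl interpolation described above.
greeting = Template("Hello, $name!").substitute(name="World")
print(greeting)  # Hello, World!
```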


The Planning Domain Definition Language (PDDL) is a modelling language for expressing AI planning problems, and is used as the input language of a large number of general-purpose AI planning systems. The role of a plan validator is to check whether a plan (generated by an AI planner or written manually) is valid according to the domain and problem specification. A validator is a very useful tool for debugging a domain/problem specification, a planner implementation, and indeed the specification of PDDL itself.
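The validator's job can be illustrated with a tiny Python sketch: replay the plan's actions against a STRIPS-like precondition/effect model and check that the goal holds at the end (this is a conceptual toy with an invented blocks domain, not the actual validator's implementation):

```python
def validate(plan, state, domain, goal):
    """Replay `plan` from `state`; return True iff every precondition
    holds along the way and the goal holds in the final state."""
    state = set(state)
    for step in plan:
        pre, add, delete = domain[step]
        if not pre <= state:
            return False                     # precondition violated: invalid plan
        state = (state - delete) | add       # apply the action's effects
    return goal <= state

# Invented one-block toy domain.
domain = {
    "pick_up":  ({"clear_a", "handempty"}, {"holding_a"}, {"handempty", "clear_a"}),
    "put_down": ({"holding_a"}, {"handempty", "clear_a"}, {"holding_a"}),
}
print(validate(["pick_up", "put_down"], {"clear_a", "handempty"},
               domain, {"handempty"}))       # True: the plan is valid
```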


This code uses Python 3.6 and PyTorch 0.4.1 with CUDA 9.0.


This file is part of itSIMPLE.


This file is part of itSIMPLE.


An intelligent workflow management system, aimed especially at modelling the workflow of a hospital and the drug distribution process.


What is JABBAH?

JABBAH is a Java Application framework for the translation Between BPM (Business Process Models) And HTN-PDDL (Hierarchical Planning Domains).

The JABBAH system provides a neat tool for analysts who need to perform resource allocation analysis on business workflows, embedding a non-trivial transformation of BPMN-expressed workflows into Hierarchical Task Networks. By providing fully automated support for the analysis, allowing engineers to exploit the widely used Business Process Management Notation (BPMN) standard for workflow specification, and neatly presenting the results, this system may appeal to a very wide and relevant audience. Hence, JABBAH may have considerable potential impact outside the planning community.

Where can I find further details?

A scientific paper about JABBAH was presented at ICKEPS 2009 (Award of Excellence), and further improvements were presented in the BPM 2010 Demo Track.

An extended scientific paper has recently been published in the Knowledge Engineering Review journal, available here.

Have a look at the new video screencast as well.

Who developed it?

Arturo González Ferrer created JABBAH under the supervision of professors Juan Fernández Olivares and Luis Castillo Vidal. See Contact Info for details.


#### Introduction

jacana-align is a token-based word aligner for English parallel sentences, described in the following paper:


This is the JAMR Parser, updated for SemEval 2016 Task 8.


The Code Janitor is a utility for finding "objectionable" content in source code trees before releasing them to the public. These can be things your developers wrote (like profanity, insults, confessions, and so on), or things that indicate code that might be inappropriate to use in the project (like copyright notices or license statements).


Jason is an interpreter for an extended version of AgentSpeak. It implements the operational semantics of that language, and provides a platform for the development of multi-agent systems, with many user-customisable features. Jason is available as Open Source, and is distributed under GNU LGPL.


Deep learning is a form of state-of-the-art machine learning that can learn to recognize patterns in data unsupervised.


A Java language client for Torbjörn Lager's _Pengines_ distributed computing library for [SWI-Prolog](.


JBT is a Java framework for building and running behaviour trees. In the past few years, behaviour trees have been widely accepted as a tool for defining the behaviour of video game characters. However, to the best of our knowledge, there is no free-software Java implementation of the concept. With JBT we intend to provide a solid framework to build and run behaviour trees in Java.
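The core composite-node idea behind behaviour trees can be sketched compactly (JBT itself is Java; the Python toy below, with invented node names and a two-behaviour NPC, only illustrates the concept):

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    # Succeeds only if every child succeeds, ticked in order.
    def tick(state):
        for child in children:
            if child(state) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    # Fallback node: succeeds as soon as one child succeeds.
    def tick(state):
        for child in children:
            if child(state) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def condition(key):
    return lambda state: SUCCESS if state.get(key) else FAILURE

def action(key):
    def tick(state):
        state[key] = True        # pretend to perform the action
        return SUCCESS
    return tick

# "Attack if an enemy is visible, otherwise patrol."
tree = selector(sequence(condition("enemy_visible"), action("attacked")),
                action("patrolled"))
state = {"enemy_visible": False}
tree(state)
print(state)  # {'enemy_visible': False, 'patrolled': True}
```

Full frameworks like JBT add a RUNNING status and re-tick the tree every frame, which is what makes the technique suitable for real-time game characters.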


JDageem is an extensible Java package that includes several implementations of parsing and training algorithms for dependency grammar induction. More specifically, JDageem includes:


There is no comprehensive documentation; if you have questions, please ask. A [guide]( was written for the [VHTK]( It is a work in progress, so some aspects are still undocumented and may not be fully in sync with the current capabilities in the trunk. If you have any questions, please submit an [Issue](


This is a small library that sits on top of jQuery for communicating with XML-RPC services - without worrying about the horrible bloat of XML-RPC. Using this library, you can pass JSON parameters to the library, and receive responses in JSON. Encoding the JSON document is handled for you, intelligently mapping types between the two languages.
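The type mapping such a bridge performs can be seen with Python's standard `xmlrpc.client` marshaller standing in for the JavaScript side (the method name `demo.echo` is made up):

```python
import xmlrpc.client

# Plain values in, XML-RPC markup out: this is the encoding step the
# jQuery library hides from you when you hand it JSON-style parameters.
payload = xmlrpc.client.dumps((42, "hello", [1, 2]), methodname="demo.echo")
print("<int>42</int>" in payload)   # integers are encoded as <int> elements
```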


**Natural Language Generation (NLG)** is a field of artificial intelligence that focuses on the development of systems that produce text for different applications, for example the textual description of massive datasets or the automation of routine text creation.


This module uses [semantic versioning](


Julien is a retrieval stack built for performing experiments in Information Retrieval research. The current version of Julien is 0.1, mostly because it's been under development and I haven't had time to *really* set it up for a release. Right now the documentation is spotty, but I will be shoring it up in the coming weeks. The scaladocs can be found at .


This library converts KAF to NAF and NAF to KAF. It also contains a webservice for doing exactly this.


WARNING! Do not install before starting TF code. In this case there is an incompatibility between using the TPU via TF and via PyTorch in the same instance runtime. The valid sequence of steps (including package installation) is in ./ and ./


Kaku is a highly integrated music player that supports different online platforms like YouTube, SoundCloud, Vimeo and more. Available on `Windows`, `Linux` and `macOS`!


# Code

* `metaqa/original/` The original MetaQA vanilla dataset for 2-hop and 3-hop training and testing questions (
* `metaqa/rectified/` The rectified version of the MetaQA vanilla dataset. The original MetaQA dataset contains erroneous answers (as discussed in our paper). We inspected the errors from the original MetaQA dataset and created a rectified version which contains the correct answers for the multi-hop questions in MetaQA.
* `metaqa/cnl_input/` The MetaQA vanilla dataset in ACE CNL grammar. Note, this dataset only contains the multi-hop questions (not answers). It is used as the input to KALM-QA to get the corresponding queries in Prolog.
* `tools/metaqa_to_cnl/` Java code that converts MetaQA n-hop English questions (NL) to CNL format. The input files (e.g., are found in the `metaqa/cnl_input/` directory.
* `tools/intermediate_query_processing/` Java code that processes the intermediate MetaQA Prolog query generated by the Prolog program. This program replaces singleton variables with anonymous variables.
* `query/template/2_hop_template/` In this directory, query_template.txt contains the unique query templates for 2-hop MetaQA queries (testing). query_group_by_template.txt groups the 2-hop MetaQA queries (testing) by query template. 2_hop_template.txt shows the query template for each query in query_group_by_template.txt.
* `query/template/3_hop_template/` In this directory, query_template.txt contains the unique query templates for 3-hop MetaQA queries (testing). query_group_by_template.txt groups the 3-hop MetaQA queries (testing) by query template. 3_hop_template.txt shows the query template for each query in query_group_by_template.txt.
* `query/2_hop_test/` This directory contains the MetaQA 2-hop Prolog queries (, the MetaQA KB encoded in Prolog (, the MetaQA 2-hop testing question-answer pairs encoded in Prolog (, background rules (, a program checking whether the query returns the correct answers (, and an entrypoint program ( Running the program generates a file comparing KALM-QA answers with MetaQA answers (metaqa_result.txt). **Note that** the question-answer pairs are from the original MetaQA vanilla dataset. As discussed in the paper, there are errors in this dataset; as a result, once you run the program, you may find mismatches between KALM-QA answers and MetaQA answers. Error analysis is displayed in a separate directory. The directories `2_hop_training`, `3_hop_testing`, and `3_hop_training` follow the same structure.
* `error_analysis/2_hop` This directory contains the errors for the 2-hop testing data. total_errors.txt has all the errors. fild_id_errors.txt has the errors caused by the issue that MetaQA doesn't distinguish different films sharing the same film ID. others_error.txt has all the remaining errors, caused by unknown reasons. We have manually checked 736 (50%) of the "other errors" and added the reasons why MetaQA doesn't return the correct answers. The analysis is in metaqa_error_analysis.txt.
* `error_analysis/3_hop` This directory contains the errors for the 3-hop testing data. total_errors.txt has all the errors. fild_id_errors.txt has the errors caused by the issue that MetaQA doesn't distinguish different films sharing the same film ID. others_error.txt has all the remaining errors, caused by unknown reasons. We have manually checked 1628 (50%) of the "other errors" and added the reasons why MetaQA doesn't return the correct answers. The analysis is in metaqa_error_analysis.txt.
* `kalm-qa/` The source code for KALM-QA (Prolog).


1. Place your KAnnSpec into the KAnnSpec/ directory.
2. Place your document into the content/ directory. Make sure it only contains the actual document content (inside the body tag).
3. Edit line 3 in js/index.js and change "content/sample1.html" to the path of the document you want to use, and change "KAnnSpecs/omdoc-annotations.xml" to the annotation you want to create.
4. Run ```grunt run``` if it is not already running.
5. Navigate to localhost:3000 and see the demo at work.


This is code developed by BBN to support the [2014 KBP Event Argument Shared Task]( A draft of the description of this task may be found [here](


This GitHub project contains the Java code (based on [Lucene](, [Sesame](, and [RDFpro]( implementing a simple evaluation system that allows configuring and evaluating KE4IR on arbitrary document collections and queries for which relevance judgments are known. You can use this code, together with the data available on the KE4IR [webpage](, to replicate the evaluation results reported in the KE4IR paper. You can also use this code as a basis for experimenting with a variation of KE4IR, or even with a different approach that can be cast in the framework of KE4IR (augmentation of term vectors with semantic terms obtained via knowledge extraction).


# KEEL

KEEL (Knowledge Extraction based on Evolutionary Learning) is an open source (GPLv3) Java software tool that can be used for a large number of different knowledge data discovery tasks. KEEL provides a simple GUI based on data flow to design experiments with different datasets and computational intelligence algorithms (paying special attention to evolutionary algorithms) in order to assess the behavior of the algorithms. It contains a wide variety of classical knowledge extraction algorithms, preprocessing techniques (training set selection, feature selection, discretization, imputation methods for missing values, among others), computational intelligence based learning algorithms, hybrid models, and statistical methodologies for contrasting experiments. It allows a complete analysis of new computational intelligence proposals in comparison to existing ones.


Kerkerkruip is a short-form roguelike in the interactive fiction medium, featuring meaningful tactical and strategic depth, innovative game play, zero grinding, and a sword & sorcery setting that does not rehash tired clichés.


This is a keylogger for Linux written in Rust, ported from my [original keylogger]( in C. It works by reading directly from the keyboard device in `/dev/input/`. The keylogger attempts to detect the keyboard device upon startup, but if one cannot be detected or if multiple are detected, you must specify the path to the device file manually.
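The records read from `/dev/input/` follow Linux's `input_event` struct layout; a Python sketch of the decoding (the keylogger itself is Rust, and the sample record below is fabricated rather than read from a real device):

```python
import struct

# struct input_event: two native C longs for the timestamp (seconds,
# microseconds), then unsigned short type and code, and an unsigned int value.
EVENT_FORMAT = "llHHI"                  # native sizes: 24 bytes on x86-64
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
EV_KEY = 0x01                           # key press/release events

def decode(record):
    _sec, _usec, etype, code, value = struct.unpack(EVENT_FORMAT, record)
    return etype, code, value

# A fabricated press of KEY_A (code 30): type EV_KEY, value 1 = key down.
sample = struct.pack(EVENT_FORMAT, 0, 0, EV_KEY, 30, 1)
print(decode(sample))  # (1, 30, 1)
```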


At present this repo contains one project: [*Knowledge Graph Convolutional Networks* (KGCNs)](


- The *Process - Inputs* datasets contain detailed information about the inputs of the sets of instructions, including links to [DBpedia]( resources
- The *Process - Outputs* datasets contain detailed information about the outputs of the sets of instructions, including links to [DBpedia]( resources
- The *Process - Step Links* datasets contain links between different sets of instructions


Data
----

This repository contains the following datasets for experiments:


A more rigorous description of the framework is given in Célia da Costa Pereira and Andrea G. B. Tettamanzi. "An Integrated Possibilistic Framework for Goal Generation in Cognitive Agents". In Proceedings of the 9th International conference on autonomous agents and multiagent systems (AAMAS 2010), pages 1239–1246.


For example, you would vote for that tiny progressive political party if you knew your vote would matter. So let's get to work to make it matter. Don't waste your vote until you know there is a mass large enough to make it count.


An NLP framework for large scale processing using Hadoop. KOSHIK supports parsing of text in multiple languages including English, Swedish, and Chinese.


We release [OpenKE](, an open source toolkit for KRL/KE. This repository provides a standard KRL/KE training and testing framework. Currently, the implemented models in OpenKE include TransE, TransH, TransR, TransD, RESCAL, DistMult, ComplEx and HolE.
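For instance, TransE (the first model in that list) scores a triple by how close the head embedding plus the relation embedding lands to the tail embedding; a NumPy illustration of the scoring function (not OpenKE's code, with made-up 2-d embeddings):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: a triple (h, r, t) is plausible when t ≈ h + r,
    # i.e. when the translation distance ||h + r - t|| is small.
    return -np.linalg.norm(h + r - t, ord=1)

h = np.array([0.1, 0.2])
r = np.array([0.3, 0.1])
t_good = np.array([0.4, 0.3])   # exactly h + r
t_bad = np.array([-0.4, -0.3])
print(transe_score(h, r, t_good) > transe_score(h, r, t_bad))  # True
```

The other listed models (TransH, TransR, DistMult, ComplEx, ...) swap in different scoring functions over the same triple structure.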


Then, we can perform training with `lamtram-train`. Here is a typical way to run it with options:


# LangPro: Natural Language Theorem Prover

LangPro is a tableau-based theorem prover for natural logic and language. See the [online demo]( (not the latest version).


In order to compile some of the examples, you will also need a version >= 1.49 of the Boost C++ libraries available on your system. You can check the version you have either manually by looking at the macro defined in `boost/version.hpp` or, on debian systems, by running `dpkg -s libboost-dev`. Be aware that systems such as the Ubuntu 12.04LTS release ship with older versions of Boost.


Description
----

The __LaZagne project__ is an open source application used to __retrieve lots of passwords__ stored on a local computer. Each piece of software stores its passwords using different techniques (plaintext, APIs, custom algorithms, databases, etc.). This tool has been developed to find these passwords for the most commonly used software. At the moment, it supports 22 programs on Microsoft Windows and 12 on Linux/Unix-like operating systems.


This is our entry for Ludum Dare 41, a silly text based minesweeper game.


The project is a co-operation between [Andreas Harth]( at [AIFB]( and [Juergen Umbrich]( at [DERI]( [Aidan Hogan](, Tobias Kaefer and [Robert Isele]( are contributing.


This playground is a PyTorch implementation of a learning framework for implementing different models for neural abstractive text summarization and beyond. It is an extension of the [NATS]( toolkit for Neural Abstractive Text Summarization. The goal of this framework is to make it convenient to try out new ideas in abstractive text summarization and other language generation tasks.


This is the Emacs mode for the [Lean theorem prover][lean].


A Learning by Reading pipeline of NLP and Entity Linking tools.


This is intended to eventually be a set of reusable components, something like daveray's bebot. However, I'm amazingly incompetent at Soar programming, so first I need to learn.


![]( # LEGOEval LEGOEval is a toolkit for dialogue system evaluation via crowdsourcing, see our [demo video](


> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.


Leo-III [SWB16] is an automated theorem prover for (polymorphic) higher-order logic which supports all common TPTP dialects, including THF, TFF and FOF as well as their rank-1 polymorphic derivatives [SWB17]. It is based on a paramodulation calculus with ordering constraints and, in tradition of its predecessor LEO-II [BP15], heavily relies on cooperation with external (mostly first-order) theorem provers for increased performance. Nevertheless, Leo-III can also be used as a stand-alone prover without employing any external cooperation.


This project contains the data structure framework LeoPARD underlying the Leo-III prover.


Add symbols for anaphoric macro internals, `IT`, `THIS`, and `SELF` to package exports for better end-user experience. Will be available in April 2015 release of Quicklisp.


This distribution bundle includes the following components:

* libarchive: a library for reading and writing streaming archives
* tar: the 'bsdtar' program is a full-featured 'tar' implementation built on libarchive
* cpio: the 'bsdcpio' program is a different interface to essentially the same functionality
* cat: the 'bsdcat' program is a simple replacement tool for zcat, bzcat, xzcat, and such
* examples: Some small example programs that you may find useful.
* examples/minitar: a compact sample demonstrating use of libarchive.
* contrib: Various items sent to me by third parties; please contact the authors with any questions.


For example, the `libreoffice-templates` package (description: "Additional set of templates for LibreOffice") that is available in Ubuntu, only contains the 8 default templates that come with LibreOffice itself. Installing this package thus has no effect on the templates available to the user in Impress, and no other template packages appear to be available.


An AI running on [NuPIC]( using the CLA to build a model of language, and predict the rest of a user's word, phrase, sentence.


For languages other than English, you need to download and install TreeTagger. There is a special file in the GATE directory, plugins/Tagger_Framework/resources/TreeTagger/tree-tagger-LANG-gate, which must be specified and pointed at the installed TreeTagger application (this file is generated during the TreeTagger installation step in the cmd/ directory).


Linkipedia is an entity extraction and linking service that you can set up yourself against a set of ontologies and other RDF datasets you choose. It will use the interlinks available in the RDF to score the overall informativeness of each term and use the context of the text you submit to find the closest matches.


### Random split (Unmaintained) Create a symlink from ``training/random_split/datasets/video`` to your video dataset folder (which contains the ``s*`` directories).


A small dialect of Common Lisp based upon lisp500


LLAMA is a graph storage and analysis system that supports mutability and out-of-memory execution built on top of the compressed sparse row (CSR) representation. Its goal is to perform comparably to immutable main-memory analysis systems for graphs that fit in memory and to match or outperform existing out-of-memory analysis systems for graphs that exceed main memory.


--- At its core, **llamapun** is a [Rust]( implementation that aims at a minimal footprint and optimal runtime, in order to safely scale to corpora of millions of documents and tens of billions of tokens.


To start a training run, use **** with custom parameters such as the number of LSTM units, dropout, the IOB file path, etc. You can call the important scripts with -h to get help. All output of a training run will land in the *modelzoo* directory. To configure and run a training via a parameter grid, use (just change for docker) ****. To get an overview of the performance of the trained models, use ****; it will generate a csv-formatted file containing metrics that can be visualised with ****. The *modelzoo* directory contains examples (only one model was committed to this repo).


![#f03c15]( **NOTICE**: This is a work in progress and is being updated weekly.


After quite a bit more output, about 123 seconds later, you will see something like:

````
% List of possible data transformations
% /home/nlutest/.local/share/swi-prolog/pack/logicmoo_nlu/prolog/logicmoo_nlu/
% installed_converter(parser_all, input_to_acetext(+input, -acetext)).
% installed_converter(parser_all, tokens_to_acetext(+tokens, -acetext)).
% installed_converter(get_ape_results, ace_to_pkif(+acetext, -kif(p))).
% installed_converter(ace_to_drs, call_tokenizer(+acetext, guess+on, -sentences:set, -sentencesToParse)).
% installed_converter(ace_to_drs, paragraphs_to_drs(+sentences:list, guess+on, catch+off, startID+1, -sentences, -syntaxTrees, -drs0, -messages, -time)).
% installed_converter(ace_to_drs, call_parser(+sentences:list, startID+1, -syntaxtrees, -drs0:reversed_set)).
% installed_converter(ace_to_drs, acetext_to_drs(+acetext, -sentences:set, -syntaxTrees, -drs0, -messages)).
% installed_converter(tokenizer, tokenize(+input, -tokens)).
% installed_converter(tokens_to_sentences, tokens_to_sentences(+tokens:set, -sentences:set)).
% installed_converter(tokens_to_sentences, tokens_to_paragraphs(+tokens:set, -sentences:set)).
% installed_converter(drs_fol_pnf, drs_pnf(+drs, -fol)).
% installed_converter(drs_fol_pnf, drs_fol(+drs, -pnf)).
% installed_converter(get_ape_results, fol_to_pkif(+pnf, -kif(p))).
% installed_converter(get_ape_results, fol_to_pkif(+fol, -kif(f))).
% installed_converter(get_ape_results, fol_to_pkif(+drs, -kif(d))).
% installed_converter(get_ape_results, fol_to_pkif(+sdrs, -kif(s))).
% installed_converter(drs_to_ace, drs_to_ace(+drs0, -paraphrase:set)).
% installed_converter(drs_to_drslist, drslist_to_ace(+drs0:list, -paraphrase:set)).
% installed_converter(drs_to_drslist, drs_to_drslist(+drs0, -drs:set)).
% installed_converter(drs_to_sdrs, drs_to_sdrs(+drs, -sdrs)).
% installed_converter(parser_chat80, into_text80(+tokens, -text80)).
% installed_converter(parser_chat80, sent_to_parsed(+text80, -syntaxTree80)).
% installed_converter(parser_chat80, i_sentence(+syntaxTree80, -i_sentence)).
% installed_converter(parser_chat80, clausify80(+i_sentence, -clausify80)).
% installed_converter(parser_chat80, simplify80(+clausify80, -simplify80)).
% installed_converter(parser_chat80, qplan(+simplify80, -qplan)).
% installed_converter(parser_chat80, results80(+qplan, -results80)).
% /home/nlutest/.local/share/swi-prolog/pack/logicmoo_nlu/prolog/logicmoo_nlu/
% parser_all_complete.......
chat80("Which countries have a population exceeding 10 million?").
chat80("Which countries contain a city?").
chat80("Which countries contain 2 cities?").
chat80("Which countries contain 3 cities?").
chat80("Which countries contain more than 3 cities?").
chat80("Which countries contain more than 2 cities?").
chat80("Which continents contain more than 4 cities?").
chat80("Which asian countries have a population exceeding 10 million?").
chat80("What is the average area of the countries in each continent?").
chat80("What is a river?").
chat80("What is a river that is in asia?").
chat80("Which rivers are not in asia?").
chat80("What is a river that is not happy?").
chat80("does afghanistan border china?").
chat80("what is the capital of upper_volta?").
chat80("where is the largest country?").
chat80("which countries are european?").
chat80("which country's capital is london?").
chat80("which is the largest african country?").
chat80("how large is the smallest american country?").
chat80("what is the ocean that borders african countries and that borders asian countries?").
chat80("what are the capitals of the countries bordering the baltic?").
chat80("how many countries does the danube flow through?").
chat80("what is the total area of countries south of the equator and not in australasia?").
chat80("what is the average area of the countries in each continent?").
chat80("is there more than one country in each continent?").
chat80("is there some ocean that does not border any country? ").
chat80("what are the countries from which a river flows into the black_sea?").
chat80("what are the continents no country in which contains more than two cities whose population exceeds 1 million? ").
chat80("which country bordering the mediterranean borders a country that is bordered by a country whose population exceeds the population of india?").
chat80("which countries have a population exceeding 10 million?").
chat80("which countries with a population exceeding 10 million border the atlantic?").
chat80("what percentage of countries border each ocean?").
chat80("what countries are there in europe?").
chat80([which, is, the, largest, african, country, ?]).
chat80("which countries are bordered by two seas?", [[egypt, iran, israel, saudi_arabia, turkey]]).
chat80("How many rivers are not in asia?", 25).
chat80("How many rivers are in asia?", 16).
chat80("How many asian countries have a population exceeding 10 million?", 20).
chat80("How many countries have a population exceeding 10 million?", 50).
chat80("What are the continents in which no country contains more than 3 cities?", [africa, antarctica, australasia, europe]).
chat80("What are the continents not containing a country?", [antarctica]).
chat80("What are the continents no country in which contains more than two cities whose population exceeds 1 million ?", [africa, antarctica, australasia]).
chat80("What are the continents in which no country contains more than two cities whose population exceeds 1 million?", [africa, antarctica, australasia]).
chat80("What are the continents containing a country in which contains more than two cities whose population exceeds 1 million?", [america, asia, europe]).
````


This NLU/NLG ToolKit combines the following projects into a usable pipeline


With PDDL, Boolean variables are created from the PDDL predicates. Variables are named after the PDDL predicates, `variable().` Each variable contains exactly two values (one `true`, one `false`) of the form `value(, )`. Note that with PDDL, variables and values are named identically.
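The encoding described above can be illustrated with a toy sketch. Both the helper function and the `variable`/`value` fact templates below are hypothetical, since the exact argument templates are not shown in this description; the point is only that each Boolean predicate yields one variable fact plus exactly two value facts:

```python
def encode_predicates(predicates):
    """Toy illustration: each Boolean PDDL predicate becomes one variable
    with exactly two values (one true, one false). Fact naming is assumed."""
    facts = []
    for pred in predicates:
        facts.append(f"variable({pred}).")        # one variable per predicate
        facts.append(f"value({pred}, true).")     # the 'true' value
        facts.append(f"value({pred}, false).")    # the 'false' value
    return facts

facts = encode_predicates(["holding_block_a", "door_open"])
assert facts[0] == "variable(holding_block_a)."
assert len(facts) == 3 * 2  # three facts per predicate
```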


[![latest release version](]( [![License](]( [![Twitter follow](]( [![discord](]( [![total](](


The overall copyright and permission notice for Logtalk can be found in the "LICENSE.txt" file in this directory. Logtalk follows the Artistic License 2.0. The copyright notice and license applies to all files in this release (including sources, documentation, and examples) unless otherwise explicitly stated.


This file is part of Logtalk Copyright 1998-2016 Paulo Moura


This is a collection of solutions to exercises found in Learn Prolog Now! textbook by Patrick Blackburn, Johan Bos, and Kristina Striegnitz.


This repository holds the frontend web app for the [lps.js]( demonstration website, made using [Angular framework]( and bundled with Webpack. The server-side repository of the web app can be found at


Ubyline is an Apache-licensed, web-based sense annotation tool whose user interface is optimized for lexical sample data. Ubyline supports a wide range of sense inventories in several languages, including WordNet and GermaNet.



This repository contains the code needed to reproduce the results reported in Bugert et al., *LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test*.


This is a signal library for Lua 5.1. It depends on ANSI C signals and has some extensions that are available in POSIX, such as kill().


Lucida is a speech and vision based intelligent personal assistant inspired by [Sirius]( Visit [our website]( for tutorials, and [Lucida-users](!forum/lucida-users) for help. The project is released under the [BSD license](LICENSE), except that certain submodules contain their own specific licensing information. We would love to have your help on improving Lucida; see [CONTRIBUTING]( for more details.


Ludii is a general game system being developed as part of the [ERC-funded Digital Ludeme Project (DLP)]( This repository hosts the publicly available source code for Ludii. A precompiled build (Ludii.JAR) can be downloaded from [Ludii's downloads page](


This repository is now deprecated; all AI source code for Ludii is included in the main open-source Ludii repo at


This repository, as well as the [Ludii Example AI repository](, are written for the latest public pre-release of Ludii available at the time of this writing: **Ludii 0.9.3**. **This is the version of Ludii that we will use for the AI competition at CoG 2020**. We do plan to release newer versions of Ludii in between, but the API may not remain 100% the same. Therefore **we now fix the version that will be used for the competition at CoG 2020 at 0.9.3**. --> ---


This makes a settings.db file in your LD install root. Please don't check this file in.


In order to compile some of the examples, you will also need a version >= 1.49 of the Boost C++ libraries available on your system. You can check the version you have either manually by looking at the macro defined in `boost/version.hpp` or, on debian systems, by running `dpkg -s libboost-dev`. Be aware that systems such as the Ubuntu 12.04LTS release ship with older versions of Boost.


Source code of the MaastCTS2 agent for General Video Game playing. Champion of the 2016 GVG-AI Single-Player Track, and runner-up of the 2016 GVG-AI Two-Player Track. This repository contains code for both the Single-Player and Two-Player variants.


Here is an example of the Telegram interface for Macaw. It supports multi-modal interactions (text, speech, click, etc.).


MultiAgentDecisionProcess (MADP) is a toolbox for scientific research in decision-theoretic planning and learning in multiagent systems. It is designed to be rather general, but most effort has been put in planning algorithms for discrete Dec-POMDPs.


Magentix2 is an agent platform for open Multiagent Systems. Its main objective is to bring agent technology to real domains: business, industry, logistics, e-commerce, health-care, etc.


# MAGPIE Corpus This is the **MAGPIE Corpus**, a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work *at the end of the day*.' for the idiom 'at the end of the day'. The corpus contains 56,622 instances, covering 1,756 different idiom types, all of which have crowdsourced meaning labels. For details, see our [LREC paper](


A collection of chess engines that play like humans, from ELO 1100 to 1900.


- Setting the correct parameters
  1. Define the parameter space in the ATP.ini
  2. Check the settings in setup.ini. In particular, the PROBLEMS parameter under search must be a file which contains the training problems.
  3. Define the start strategies in strategies.ini


This will install marelle for all users, putting the executable in `/usr/local/bin/marelle`.


MarI/O is a program made of neural networks and genetic algorithms that kicks butt at Super Mario World.


This is a working repository for RELEASE 2 of Marpa.


This is the fastest Internet port scanner. It can scan the entire Internet in under 6 minutes, transmitting 10 million packets per second.


[docs/](docs/ contains the description of the current scenario.


This is my master's thesis with presentation slides.


1. Make sure you have `python3.6` and the `pip` module installed. We recommend using [conda environments](
1. Navigate to the root folder of this repository (the same folder that contains this README file) and run `pip install -r requirements.txt`. Note: If you are using a conda env and any packages fail to compile during this step, you may need to first install those packages separately with `conda install package_name`.
1. Wait for all the requirements to be downloaded and installed.
1. Run `python install` to install this module. This will also download the Word2vec model files. If the download fails, manually download the [model](, [word embeddings]( and [output embeddings]( and put them in mat2vec/training/models.
1. Finalize your chemdataextractor installation by executing ``cde data download`` (you may need to restart your virtual environment for the cde command line interface to be found).
1. You are ready to go!


[Mathlib]( is a user-maintained library for the [Lean theorem prover]( It contains both programming infrastructure and mathematics, as well as tactics that use the former to help develop the latter.


This software package consists of a simple implementation of MC-AIXI-CTW, an intelligent agent that learns from experience how to perform well in a wide variety of environments. This includes, but is not limited to, the example games provided in this package, such as Tic Tac Toe, Pacman, and Kuhn Poker.


This software distribution consists of:


MDR is a library to detect and extract listing data from HTML pages. It is implemented based on `Finding and Extracting Data Records from Web Pages `_, but changes the similarity measure to the tree alignment proposed by `Web Data Extraction Based on Partial Tree Alignment `_ and `Automatic Wrapper Adaptation by Tree Edit Distance Matching `_.


MDSWriter is a software for manually creating multi-document summarization corpora and a platform for developing complex annotation tasks spanning multiple steps.


This repository contains the metadata for all articles in the Media Frames Corpus (version 2), along with the beginning and end (and associated framing dimension) of all annotated spans of text. All of this information is in a single JSON file in the annotations/ directory, with one file for each issue (immigration, smoking, and same-sex marriage). To obtain the actual articles, however, it is necessary to have access to Lexis-Nexis academic.


[Megatron]( is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor and pipeline), and multi-node pre-training of [GPT]( and [BERT]( using mixed precision.


# Multi-agent Epistemic Planner with Knowledge This is a planner for multi-agent epistemic planning. This code is continuously updated. We are planning to release a brand new version of MEPK and more details about it will be presented. You are welcome to follow this work.


A haiku library using the `xmap` operator in Jax for model parallelism of transformers.


This repository contains accompanying code for the article introducing Meta-Dataset, [](


Metagol is an inductive logic programming (ILP) system based on the meta-interpretive learning framework. Please contact Andrew Cropper ( with any questions / bugs.


miBand.customVibration(times, on_time, off_time); where `times` is an int value that determines **how many times** it will vibrate (I recommend using between 1 and 3 times only), `on_time` is the time in milliseconds that each vibration will be **on** (not more than 500 milliseconds), and `off_time` is the **pause** between each consecutive vibration. ### LED Color To change the LED color, you can use


MIAOW is an open source implementation of the AMD Southern Islands GPU ISA.


MicroCCG ======== MicroCCG is an adversarial Combinatory Categorial Grammar (CCG) planner for the Real-Time Strategy (RTS) Game microRTS. This agent was developed to participate in the CIG 2018 microRTS tournament. Details about microRTS can be found on the [microRTS Github page](


microRTS is a small implementation of an RTS game, designed to perform AI research. The advantage of using microRTS with respect to using a full-fledged game like Wargus or StarCraft (using BWAPI) is that microRTS is much simpler, and can be used to quickly test theoretical ideas, before moving on to full-fledged RTS games.


Miser is a Python library that can be used for writing scripts that'll help you project costs and figure out how to accumulate money. It's in unstable alpha.


MISP, the Malware Information Sharing Platform and Threat Sharing, is an open source software solution for collecting, storing, distributing and sharing cyber security indicators and threats relating to cyber security incident analysis and malware analysis. MISP is designed by and for incident analysts, security and ICT professionals, and malware reversers to support their day-to-day operations in sharing structured information efficiently.


This project provides free (even for commercial use) [state-of-the-art](../../wiki/Evaluation) information extraction tools. The current release includes tools for performing [named entity extraction]( and [binary relation detection]( as well as tools for training custom extractors and relation detectors.


A further problem is to report the results. The goal is a dependency table that shows, for each Mizar "item", which Mizar items it depends upon.


This is the source code of the paper [CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues]( In this work, we propose a novel joint learning framework of modeling coreference resolution and query rewriting for complex, multi-turn dialogue understanding. The coreference resolution [MuDoCo]( dataset augmented with our query rewrite annotation is released as well.


This repository consists of the code used to run the experiment and three zip files:



Perl is a popular, powerful, and widely used programming language. Over its twenty year lifespan, it's powered millions of systems worldwide, moving trillions of dollars. More importantly, it's helped countless people get their work done effectively.


This is a set of Perl Modules designed to implement parts of the Discord public API, build on Mojo::UserAgent and Mojo::IOLoop.


A tiny wrapper around [DBD::Pg]( that makes [PostgreSQL]( a lot of fun to use with the [Mojolicious]( real-time web framework.


This repo contains the code described in the publication: *MOLIERE: Automatic Biomedical Hypothesis Generation System*


Morbig is a parser for shell scripts written in the POSIX shell script language. It parses the scripts statically, that is without executing them, and constructs a concrete syntax tree for each of them. The concrete syntax trees are built using constructors according to the shell grammar of the POSIX standard.


MOSES is a machine-learning tool; it is an "evolutionary program learner". It is capable of learning short programs that capture patterns in input datasets. These programs can be output in either the `combo` programming language, or in python. For a given data input, the programs will roughly recreate the dataset on which they were trained.


Mowgli-in-the-jungle is a library of functionalities that help building a commonsense QA solution on a variety of tasks.


mp++ is a C++11 library for multiprecision arithmetic, currently supporting arbitrary-precision integers, rationals and floats, and quadruple-precision floats.



# muc3 This is the text corpus created by the [DARPA TIPSTER Program]( for the third [Message Understanding Conference (MUC-3)]( in 1991, and reused for MUC-4 in 1992, before finding a permanent home at the [National Institute of Standards and Technology (NIST)]( when the TIPSTER Program finished. The corpus contains news reports covering terrorist activities in Latin America.


This program is an implementation of it.


A collection of modules for parsing and manipulating OWL2 ontologies in Prolog. It is developed with SWI-Prolog in mind, but the goal is to maximize portability with other prologs, such as Yap and XSB.


MulVAL is a cybersecurity reasoning engine that can be applied on top of multiple contexts (cloud, IoT, enterprise network, etc.)


A commented and [documented]( implementation of MuZero based on the Google DeepMind [paper]( (Nov 2019) and the associated [pseudocode]( It is designed to be easily adaptable to any game or reinforcement learning environment (like [gym]( You only need to add a [game file]( with the hyperparameters and the game class. Please refer to the [documentation]( and the [example](
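The "game file with the hyperparameters and the game class" mentioned above can be pictured roughly as follows. This is a hypothetical skeleton for illustration only; the toolkit's actual interface (method names, return shapes, required hyperparameters) may differ:

```python
class TinyCountdownGame:
    """Hypothetical minimal game in the spirit of a MuZero game file:
    the agent must count a state down to zero. Action 1 decrements the
    state, action 0 does nothing; reaching zero ends the episode."""

    def __init__(self, start=3):
        self.start = start
        self.state = start

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.state = self.start
        return [self.state]

    def legal_actions(self):
        """Actions available in the current state."""
        return [0, 1]

    def step(self, action):
        """Apply an action; return (observation, reward, done)."""
        if action == 1:
            self.state -= 1
        done = self.state == 0
        reward = 1 if done else 0
        return [self.state], reward, done

game = TinyCountdownGame()
game.reset()
obs, reward, done = game.step(1)  # one decrement: state 3 -> 2
```

A real game file would additionally bundle the hyperparameters (network sizes, simulation counts, etc.) next to the class.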


Mycroft is a hackable open source voice assistant.


If you now take a look at the local Meteor MongoDB (with a GUI like Robomongo or the meteor mongo-shell), you will see a field named "message_enc" that contains the encryption of the message. There should be no "message" field, which before contained the unencrypted data; it will only appear on the client when the message is successfully decrypted.

This is the official website for MyShinyTemplate


InViEdit is a web-based writing environment for evaluating methods in intelligent writing assistance.


This document describes NAF, the NLP Annotation Format. NAF is a stand-off, multilayered annotation schema for representing linguistic annotations.


This project contains the Abs. neural abstractive summarization system from the paper


**Tasks** can arrive at any time. There are no restrictions on their content as long as they can be expressed in __Narsese__ (the I/O language of NARS).

- By default, NARS makes *no assumptions* about the meaning or truth value of input beliefs and goals.
- How to choose proper inputs and interpret possible outputs for each application is an *open problem* to be solved by its users. :warning:


This is a SWI-Prolog pack that runs Narsese like OpenNARS


Nativefier is a command line tool that allows you to easily create a desktop application for any web site with succinct and minimal configuration. Apps are wrapped by [Electron]( in an OS executable (`.app`, `.exe`, etc.) for use on Windows, OSX and Linux.


Natron is a free open-source (MPLv2 license) video compositing software, similar in functionality to Adobe After Effects or Nuke by The Foundry.


This repository contains source code of the project for sentiment analysis of a given text using the publicly available lexical resource called [SentiWordNet]( SentiWordNet files are to be downloaded and added to the folder to compile this source code. Input is to be given in a file named input, which is to be placed in the project folder.


An experimental form that uses natural language instead of the usual form layout. Values are entered using custom input elements.


NaturalLI is a Natural Logic reasoning engine aimed at fast inference from a large database of known facts. The project's primary goal is to infer whether arbitrary common-sense facts are true, given a large database of known facts. The system is described in:


This repo contains:


After this runs, it will print a plot of the hypothesis error against the size of the training set the weights were learned on. Below is an example graph plotted from the iris dataset.


A list of example command lines you can use with the pre-trained models provided in the GitHub releases:


An implementation of [neural style][paper] in TensorFlow.


This is a TensorFlow implementation of several techniques described in the papers: * [Image Style Transfer Using Convolutional Neural Networks]( by Leon A. Gatys, Alexander S. Ecker, Matthias Bethge * [Artistic style transfer for videos]( by Manuel Ruder, Alexey Dosovitskiy, Thomas Brox * [Preserving Color in Neural Artistic Style Transfer]( by Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman


This folder contains scripts to use our neural seq2seq model to produce DRSs. It contains code to reproduce either our [TACL paper](, our [IWCS paper]( or our [EMNLP paper]( The models rely on [OpenNMT](, [Marian]( and [AllenNLP](, respectively.


"Newspaper is an amazing python library for extracting & curating articles." -- `tweeted by`_ Kenneth Reitz, Author of `requests`_


A project that, at its core, scrapes news data from the internet and extracts binary relations from the news using ReVerb.


ngPAWS (pronounced n-g-paws) is an authoring system based on the Professional Adventure Writing System; thus the name ngPAWS stands for "next generation PAWS".


This repository contains the data and source code release of the paper: [NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System](


A syntactic neural model for parsing natural language to executable code [paper](


A lot of these names were places, and many were of little importance or were not proper nouns at all, so only the first 39 names and 27 places were kept, in `names-edited.txt` and `places-edited.txt`.


This document aims to track the progress in Natural Language Processing (NLP) and give an overview of the state-of-the-art (SOTA) across the most common NLP tasks and their corresponding datasets.


This is an implementation of [NLProlog](todo), a method for approaching Question Answering tasks with Prolog-like reasoning over natural language statements.


A server that supplies web-services for NLU (Natural Language Understanding) and NLG (Natural Language Generation) for a negotiation agent.


There is an [implementation]( of joint training of slot filling and intent detection for SLU, which is evaluated on ATIS and SNIPS datasets.


Lastly, we haven't mentioned *projection_layer*, which is a dense matrix that turns the top hidden states into logit vectors of dimension V. We illustrate this process at the top of Figure 2.
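The projection layer described above is just a dense map from the hidden size to the vocabulary size V, applied at every decoder time step. A minimal NumPy sketch (the shapes and names here are assumed for illustration, not taken from the tutorial's code):

```python
import numpy as np

def projection_layer(hidden, W, b):
    """Dense projection: each hidden state (size H) becomes a logit
    vector over the vocabulary (size V)."""
    return hidden @ W + b

H, V = 4, 10                       # hidden size, vocabulary size
hidden = np.random.randn(3, H)     # top hidden states for 3 time steps
W = np.random.randn(H, V)          # the projection matrix
b = np.zeros(V)

logits = projection_layer(hidden, W, b)
assert logits.shape == (3, V)      # one length-V logit vector per step
```

A softmax over each logit vector then yields the next-word distribution.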


This is an instance of the game [Nomic]( driven by Github interactions:


A Nomic game in Haskell


This script implements the two most common algorithms for database normalization, BCNF decomposition and 3NF synthesis. It was written as an exercise while studying for an exam in a databases class.
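The core test behind BCNF decomposition is whether the left-hand side of each functional dependency is a superkey, which is checked via attribute closure. A hedged Python sketch of that check (illustrative only, not the repository's actual code):

```python
def closure(attrs, fds):
    """Attribute closure of attrs under functional dependencies fds,
    where each fd is a pair (lhs_set, rhs_set)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the LHS is determined and the RHS adds something new, absorb it
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def bcnf_violations(relation, fds):
    """FDs X -> Y violating BCNF: X is not a superkey of the relation."""
    return [(lhs, rhs) for lhs, rhs in fds
            if closure(lhs, fds) != set(relation)]

R = {"A", "B", "C"}
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
# A -> B is fine (closure of A is all of R), but B -> C violates BCNF
assert bcnf_violations(R, fds) == [({"B"}, {"C"})]
```

BCNF decomposition then splits the relation on each violating FD; 3NF synthesis instead builds relations from a minimal cover of the FDs.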


The initial code is based on Yakuake which is a drop down terminal emulator based on KDE Konsole technology.


# NOUS: Construction, Querying and Reasoning in Dynamic Knowledge Graphs Automated construction of knowledge graphs (KG) remains an expensive technical challenge that is beyond the reach for most enterprises and academic institutions. NOUS is an end-to-end framework for developing custom knowledge graphs driven analytics for arbitrary application domains. The uniqueness of our system lies A) in its combination of curated KGs along with knowledge extracted from unstructured text, B) support for advanced trending and explanatory questions on a dynamic KG, and C) the ability to answer queries where the answer is embedded across multiple data sources.


# NOUS : Construction and Querying of Dynamic Knowledge Graphs Automated construction of knowledge graphs remains an expensive technical challenge that is beyond the reach for most enterprises and academic institutions. NOUS is an end-to-end framework for developing custom knowledge graphs driven analytics for arbitrary application domains. The uniqueness of our system lies A) in its combination of curated KGs along with knowledge extracted from unstructured text, B) support for advanced trending and explanatory questions on a dynamic KG, and C) the ability to answer queries where the answer is embedded across multiple data sources.


This repository presents a collection of previous research papers of Neural Text Generation (NTG), as well as a taxonomy constructed according to publication time, method paradigm or paper type.


The Numenta Platform for Intelligent Computing (**NuPIC**) is a machine intelligence platform that implements the [HTM learning algorithms]( HTM is a detailed computational theory of the neocortex. At the core of HTM are time-based continuous learning algorithms that store and recall spatial and temporal patterns. NuPIC is suited to a variety of problems, particularly anomaly detection and prediction of streaming data sources.


An implementation of Cross-Language Structural Correspondence Learning (CLSCL). See [Prettenhofer2010]_ for a detailed description and [Prettenhofer2011]_ for more experiments and enhancements.


This is a web component that takes nutrition facts in JSON format and outputs a nicely formatted Nutrition Facts label with live text.


A signed copy of the [Contributor License Agreement]( needs to be provided to before any change can be accepted.


This folder contains guidelines and materials for the Open Knowledge Extraction challenge at [ESWC 2016](


Lucida is a speech and vision based intelligent personal assistant based on Sirius. Visit the provided readmes in [lucida](lucida) for instructions to build Lucida, and follow the instructions to build [lucida-suite here]( Post to [Lucida-users](!forum/sirius-users) for more information and answers to questions. The project is released under the [BSD license](LICENSE), except certain submodules contain their own specific licensing information. We would love to have your help in improving Lucida; see [CONTRIBUTING]( for more details.


``OLED`` is an online ('single-pass') Inductive Logic Programming system for learning logical theories from data streams. It has been designed with the construction of knowledge bases for event recognition applications in mind, in the form of domain-specific axioms in the Event Calculus, i.e. rules that specify the conditions under which simple, low-level events initiate or terminate complex events. However, ``OLED`` can practically be used within any domain where ILP is applicable (preferably, large volumes of sequential data with a time-like structure).


Ontological Pathfinding (OP) is a scalable first-order rule mining algorithm. It achieves scalability via a series of parallelization and optimization techniques: a relational knowledge base model to apply inference rules in batches, a new rule mining algorithm that parallelizes the join queries, a novel partitioning algorithm to break the mining tasks into smaller independent sub-tasks, and a pruning strategy to eliminate unsound and resource-consuming rules before applying them. Combining these techniques, OP is the first rule mining algorithm that mines 36,625 inference rules from Freebase, the largest public knowledge base with 112 million entities and 388 million facts.


This script allows opening your text editor from a link on a webpage or within a browser extension via MIME. See a short [[][demo]].


A frame-semantic parser for automatically detecting [FrameNet]( frames and their frame-elements from sentences. The model is based on softmax-margin segmental recurrent neural nets, described in our paper [Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold]( An example of a frame-semantic parse is shown below


OpenALPR is an open source *Automatic License Plate Recognition* library written in C++ with bindings in C#, Java, Node.js, and Python. The library analyzes images and video streams to identify license plates. The output is the text representation of any license plate characters.


OpenCCG is a system for parsing and generating text using [combinatory categorial grammar]( for syntax and [hybrid logic dependency semantics]( for, well, the semantic representation.


OpenCog is a framework for developing AI systems, especially appropriate for integrative multi-algorithm systems, and artificial general intelligence systems. Though much work remains to be done, it currently contains a functional core framework, and a number of cognitive agents at varying levels of completion, some already displaying interesting and useful functionalities alone and in combination.


The OpenCyc Platform is your gateway to the full power of Cyc, the world's largest and most complete general knowledge base and commonsense reasoning engine. OpenCyc contains hundreds of thousands of Cyc terms organized in a carefully designed ontology. Cycorp offers this ontology at no cost and encourages you to make use of, and extend, this ontology rather than starting your own from scratch. OpenCyc can be used as the basis of a wide variety of intelligent applications such as:


OpenEats is a recipe management site that allows users to create, share, and store recipes. OpenEats was created using Django, a Python web framework, and several Django plugins. Some of the features of OpenEats are:


This repository contains a resurrected and repaired version of OpenEphyra. It was branched from the latest version of OpenEphyra on SourceForge, as of March 2014, for use in the OpenCog artificial intelligence system (Copyright (C) 2014 [OpenCog Foundation](


This research was supported by the National Science Foundation (NSF) under grant number CNS-1518865. Additional support was provided by the Intel Corporation, Google, Vodafone, NVIDIA, and the Conklin Kistler family fund. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and should not be attributed to their employers or funding sources.


# Open IE

This project contains the principal Open Information Extraction (Open IE) system from the University of Washington (UW). An Open IE system runs over sentences and creates extractions that represent relations in text. For example, consider the following sentence.


OpenIoT is a joint effort of prominent open source contributors towards enabling a new range of open large-scale intelligent IoT (Internet-of-Things) applications according to a utility cloud computing delivery model.


This is the [PyTorch]( version of the [OpenNMT]( project, an open-source (MIT) neural machine translation framework. It is designed to be research friendly to try out new ideas in translation, summary, morphology, and many other domains. Some companies have proven the code to be production ready.


This README is somewhat outdated. Please see this page for more up-to-date information, in particular with respect to installation, which is now quite easy using robotpkg.


See the license files for the original and updated contributions. The initial open-source release of Open SPIFe is governed by the NASA Open Source Agreement and third-party licenses including Apache License 2.0, Eclipse Public License 1.0, Mozilla Public License 2.0, and GNU General Public License 3.0.


**** is a small Linux application written in Python, built to help you **quickly find and download subtitles for your favorite videos**. It can be used as a Nautilus script, or as a regular application working under GNOME or KDE desktop environments. You can also use it in full CLI mode (Command Line Interface) on your NAS, Raspberry Pi, or wherever you want to bundle it, really!


OpenTimelineIO is an interchange format and API for editorial cut information. OTIO is not a container format for media, rather it contains information about the order and length of cuts and references to external media.


OpenWiFiMap is a database and map for free network WiFi routers (freifunk and others, too!).


# Open nsfw model

This repo contains code for running Not Suitable for Work (NSFW) classification deep neural network Caffe models. Please refer to our [blog]( post, which describes this work and experiments in more detail.


OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi- agent) zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect and imperfect information games, as well as traditional multiagent environments such as (partially- and fully- observable) grid worlds and social dilemmas. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. Games are represented as procedural extensive-form games, with some natural extensions. The core API and games are implemented in C++ and exposed to Python. Algorithms and tools are written both in C++ and Python. There is also a branch of pure Swift in the `swift` subdirectory.


This repository contains code for the following paper:


Opinion miner based on machine learning that can be trained using a list of KAF/NAF files. It is important to note that the opinion miner module will not call any external module to obtain features. It will read all the features from the input KAF/NAF file, so you have to make sure that your input file contains all the required information in advance (tokens, terms, polarities, constituents, entities, dependencies...).


A greatly reduced dataset of only images that have eye-bending patterns is here (**569** images, hand picked):


This is a repository for the code and data from the paper _Open Question Answering Over Curated and Extracted Knowledge Bases_ from KDD 2014. If you use any of these resources in a published paper, please use the following citation:


You can think of =org-brain= as a combination of a wiki and a mind map, where each wiki page / mind map node is an =org-mode= file which resides in your =org-brain-path=, or a headline with an ID property in one of those files. These are called /entries/. Entries can be linked together, and you can then view the network of links as a mind map, using =M-x org-brain-visualize=. Here's [[][a video introducing =org-brain=]].


# org-mind-map

This is an Emacs package that creates Graphviz directed graphs from org-mode files. This project is currently unmaintained! If anyone would like to take this over and fix up my (very messy) code, please let me know.




#### Pulse, for last year/quarter/month (amount + delta from total)

- Open and Closed Issues
- Open and Merged PRs
- Releases Count
- Downloads divergence
- Downloads degradation per release (will come later)
- Stale Branches Count


OSSMETER is an EU-funded research project that is developing a platform for monitoring the quality of open-source software projects.


ECL, like many other free programs, can be built and installed with a GNU tool called Autoconf. This is a set of automatically generated scripts that detect the features of your machine, such as the compiler type, existing libraries, and desired installation path, and configure ECL accordingly. The following procedure describes how to build ECL this way, and it applies to all platforms except for the Windows ports.


5. Generic tactic considerations: The ATTACK, HUNT and DEFEND tactics also share some common heuristic features. Each tactic penalises the two teammate agents for moving too close together. This means that they cover more area both when attacking (allowing more food to be eaten) and when defending (cornering the enemy more easily). This is also advantageous when hunting, as there is a greater chance that one of the agents will directly spot the invader.


This code is a port of the Common Lisp programs found in the book [Paradigms of Artificial Intelligence Programming]( written by Peter Norvig. The goal of this project is to enable Emacs extension developers to easily use the programming techniques in PAIP. The project focuses on providing the developers with good modular software tools, rather than helping them to understand AI programming techniques. If you would like to learn it, I recommend you install the [SBCL](, a Common Lisp implementation, run and hack the original code by Norvig in Common Lisp.


This is an open-source repository for the book *Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp* by Peter Norvig (1992), and the code contained therein. The copyright has reverted to the author, who has shared it here under MIT license.


This repository contains open source board and FPGA designs associated with the Parallella project.


Note: There is a newer version of this codebase [here](, and this should be considered deprecated.


Parma is a Predicate ARguMent Alignment tool, described in the following publications:


This is the result of my thesis for graduating on Electrical Engineering. It is a simple classification system with the following specs:


1. max_program_len dictates the maximum depth of the search.
2. The result file has a JSON dictionary line for each program predicted. The dictionary contains the predicted program and some details about the search, like the amount of time the search took and the final beam size.
3. Use --search_method to change the method from the default CAB search to DFS.


This repository contains PDDL benchmark instances in a **consistent structure.**


This code is made publicly available by SIFT, LLC under the terms of the 3-clause BSD license, attached as [[file:license.txt][license.txt]].


This is a collection of translators from PDDL format to SMV format. They are all based on Fabio Patrizi's first version for translating PDDL files to [TLV]( files.


A short-term memory module for AI planning


PDDLtoGraph is a simple program for visualising PDDL files as relatedness and causal graphs, written in python. It also determines the diameter and the radius of the graph.
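The diameter and radius mentioned above are the maximum and minimum eccentricity over all vertices, where a vertex's eccentricity is its greatest shortest-path distance to any other vertex. As a rough sketch (not PDDLtoGraph's actual code), they can be computed for an unweighted, connected graph with one BFS per vertex:

```python
from collections import deque

def eccentricities(adj):
    """BFS from every vertex; eccentricity = greatest shortest-path distance."""
    ecc = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[src] = max(dist.values())
    return ecc

def diameter_and_radius(adj):
    ecc = eccentricities(adj)
    return max(ecc.values()), min(ecc.values())

# A 4-node path graph: the endpoints have eccentricity 3, the middle nodes 2.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diameter_and_radius(path))  # (3, 2)
```

For the graphs PDDLtoGraph draws, the vertices would be the predicates or actions of the PDDL task; the sketch above only shows the graph-theoretic computation.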


This is an implementation of the formal framework of Projective Discourse Representation Theory (Venhuizen et al. 2013; 2014), which is an extension of standard Discourse Representation Theory (Kamp 1981; Kamp & Reyle 1993) with projection pointers.


Replace the argument `examples/wsj_2300.txt` with the file or the folder containing the text files you want to parse. The resulting pipe and auxiliary files will be in a folder named `output` in each folder containing text files. Note that when the argument is a folder, the parser will search for files ending in `.txt` in the folder and all of its subfolders.


Pegasus WMS is a configurable system for mapping and executing scientific workflows over a wide range of computational infrastructures including laptops, campus clusters, supercomputers, grids, and commercial and academic clouds. Pegasus has been used to run workflows with up to 1 million tasks that process tens of terabytes of data at a time.


** Vision

At its heart, emacs is an operating system based on a =tty=, which is a text stream.


The PERICLES Extraction Tool (PET) is an open source (Apache 2 licensed) Java software for the extraction of significant information from the environment where digital objects are created and modified. This information supports object use and reuse, e.g. for a better long-term preservation of data. The Tool was developed entirely for the PERICLES EU project []( by Fabio Corubolo, University of Liverpool, and Anna Eggers, Göttingen State and University Library.


This will install the program with a command-line hook. You can now run the program using:


This is a modification of Tim Finin's PFC.


The Pharos static binary analysis framework is a project of the Software Engineering Institute at Carnegie Mellon University. The framework is designed to facilitate the automated analysis of binary programs. It uses the ROSE compiler infrastructure developed by Lawrence Livermore National Laboratory for disassembly, control flow analysis, instruction semantics, and more.


This system links a series of Python programs to convert the files which have been downloaded by to coded event data which is uploaded to a web site designated in the config file. The system processes a single day of information, but this can be derived from multiple text files.


This repository contains a pytorch implementation of "Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization".


Piranha is a C++11-based computer algebra library for the manipulation of algebraic objects, such as polynomials and Poisson series, commonly encountered in celestial mechanics.


This code borrows heavily from [pytorch-CycleGAN-and-pix2pix](


A Prolog grammar written in Prolog, for parsing and serialising Prolog code.


This project provides the open source implementation of the PlaNet agent introduced in [Learning Latent Dynamics for Planning from Pixels][paper]. PlaNet is a purely model-based reinforcement learning algorithm that solves control tasks from images by efficient planning in a learned latent space. PlaNet competes with top model-free methods in terms of final performance and training time while using substantially less interaction with the environment.


A tool to animate plans generated from PDDL definitions.


This project intends to be the most comprehensive and robust platform possible for extracting scalar features from PDDL domains and problem instances for AI planning problems.


[Unison]( is a new programming platform, currently under active development. This repo contains the code for the Unison node backend (written in Haskell, lives in the `node` directory, with source in `src`), and the Unison editor (currently written in Elm, found in the folder `editor-elm`).


This project makes use of two external repositories:


# What is plOpenGL

plOpenGL is an open source project that aims to develop a complete cross-platform SWI-Prolog binding for the OpenGL, GLU and GLUT libraries.


You might be interested in . This is an early version of my structure-discovery program, to which I gave a Prolog-TLI-style interface with a command language that could pass spreadsheets around as values and operate on them.


This README is a work in progress, please feel very free to post issues - we are happy to help. To save computational power, you can find checkpoints here: (feel free to open an issue to discuss which checkpoint you should use for which game/problem!).


This project is an attempt to increase the accuracy of such queries by reducing the problems associated with polysemy by identifying the meaning of each word in a document (a process called sense tagging) and using those senses in place of words to search for a document.


Portia is a tool that allows you to visually scrape websites without any programming knowledge required. With Portia you can annotate a web page to identify the data you wish to extract, and Portia will understand based on these annotations how to scrape data from similar pages.


The PRAXICON is a conceptual knowledge base in which concepts have both symbolic


This is an attempt to predict diseases from the given symptoms. A decision tree was trained on two datasets, one had the scraped data from [here](


This will run the whole training for one epoch and regularly output the current progress, while saving the network.


PredictionIO is an open source machine learning framework for developers and data scientists. It supports event collection, deployment of algorithms, evaluation, and querying of predictive results via REST APIs.


This is a [situation calculus][SitCalc]- and [Golog][Golog]-based system written in [Mercury][Mercury]. See this [paper][Paper] or [these slides][Slides] for more information.


This repository contains a computer-assisted formalization of Ed Zalta's Principia Metaphysica, which is based on Zalta's theory of abstract objects. This work uses a second-order modal logic which employs relational type theory as a foundation.


**ProbCog** is a statistical relational learning and reasoning system that supports efficient learning and inference in relational domains. We provide an extensive set of open-source tools for both undirected and directed statistical relational models.


This script contains the settings used for PROBE in IPC-7.


This code provides a framework for extracting procedural information from documents. Please refer to our ACL paper ([arXiv]( for further descriptions.


A decade ago, Marc Andreessen [famously wrote]( that "software is eating the world." Software now permeates every part of our existence; Google services combine for [2 billion lines of code](, and a modern vehicle [contains around]( 100 million lines of code. It's a monumental challenge to create, debug, maintain, and update these complex software systems. Recently, a fast-growing discipline known as AI for Code aims to help software developers improve their productivity by automating the software engineering process. AI for Code researchers have been leveraging technologies like NLP and augmenting them with code analysis and compilation techniques to perform a myriad of practical tasks, such as code search, summarization, and completion, as well as code-to-code translation. The discipline isn't limited to academic research either: Ruchir Puri, IBM Research's chief research scientist, discussed in a recent [podcast]( how technologies from AI for Code are being used to modernize legacy software by helping migrate monolithic applications to microservices for IBM's enterprise clients.


A static analyzing tool for Prolog written in Clojure and Prolog. The tool uses specs for predicates based on [plspec]( to find errors statically.


A Player vs AI game of checkers implemented in Prolog.


Locked Door: As seen in the map above, there is a locked door just before the dragon.


The [Graphplan algorithm]( is an [automatic planning]( algorithm that can compute, given a set of rules, a plan of action to go from an initial state to a final state.
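Not Graphplan itself (which builds a layered planning graph and reasons over mutex constraints), but a minimal breadth-first STRIPS planner sketch can illustrate the problem Graphplan solves: given a set of rules (actions with preconditions and effects), find a plan of action from an initial state to a goal state. The actions and facts below are made-up examples:

```python
from collections import deque

# Each action: (name, preconditions, add-effects, delete-effects), all sets of facts.
ACTIONS = [
    ("open_door", {"has_key"},  {"door_open"}, set()),
    ("pick_key",  {"at_table"}, {"has_key"},   set()),
    ("go_table",  set(),        {"at_table"},  set()),
]

def plan(initial, goal):
    """Breadth-first search over fact sets; returns a shortest action sequence."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, add, dele in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan(set(), {"door_open"}))  # ['go_table', 'pick_key', 'open_door']
```

Graphplan's contribution over this naive search is its compact layered reachability structure, which prunes the search dramatically on larger problems.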


This project is part of the course Declarative Programming taught at Vrije Universiteit Brussel. It can be executed by running the _swipl_ program in the directory of this project. SWI-Prolog is available [here]( First, one of the instances should be loaded. This can be done by one of the following commands:


This is the compiler's output:


This version is derived from the original via Quintus Prolog after some compatibility modifications for SWI-Prolog and adding a module header that allows using it safely together with other applications.


## Proof-Number Search

Proof-Number Search (PNS) is a best-first tree search algorithm used to determine the definite value of AND/OR trees. PNS does not require domain knowledge; only terminal positions need to be recognized. PNS can be used to solve games and endgame positions.
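The bookkeeping behind PNS can be illustrated with a toy sketch, independent of this repository: proved leaves get (pn, dn) = (0, inf) and disproved leaves (inf, 0); an OR node takes the minimum pn and the sum of dn over its children, an AND node the reverse. Real PNS repeatedly expands the most-proving node; here the tree is fully expanded for brevity:

```python
INF = float("inf")

class Node:
    def __init__(self, kind, children=None, value=None):
        self.kind = kind          # "OR", "AND", or "LEAF"
        self.children = children or []
        self.value = value        # True = proved leaf, False = disproved leaf
        self.set_numbers()

    def set_numbers(self):
        if self.kind == "LEAF":
            self.pn, self.dn = (0, INF) if self.value else (INF, 0)
        elif self.kind == "OR":   # one provable child suffices
            self.pn = min(c.pn for c in self.children)
            self.dn = sum(c.dn for c in self.children)
        else:                     # AND: every child must be proved
            self.pn = sum(c.pn for c in self.children)
            self.dn = min(c.dn for c in self.children)

def update(node):
    """Recompute proof/disproof numbers bottom-up."""
    for c in node.children:
        update(c)
    node.set_numbers()

# OR root: the first move runs into a refutation, the second move wins.
root = Node("OR", [
    Node("AND", [Node("LEAF", value=True), Node("LEAF", value=False)]),
    Node("AND", [Node("LEAF", value=True), Node("LEAF", value=True)]),
])
update(root)
print(root.pn, root.dn)  # 0 inf -> the root is proved
```

During search, pn = 0 at the root means the tree is proved and dn = 0 means it is disproved; until then, PNS descends towards the child with minimal pn at OR nodes and minimal dn at AND nodes to pick the next node to expand.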


This release updates the annotations for OntoNotes data and the English Web Treebank. An additional 160,000 predicates of data have been annotated in the BOLT corpora, and will be made public when LDC releases BOLT to the general catalog. This will also host other English PropBank annotations whenever we are able to post them.


Prova is an economic and efficient, Java JVM based, open source rule language for reactive agents and event processing. It combines imperative, declarative and functional programming styles. It is designed to work in distributed Enterprise Service Bus and OSGi environments.


A tool to automatically generate pseudo-code from source code.


examples.align contains the example alignments described in the paper above.


Puck is a high-speed, high-accuracy parser for natural languages. It's (currently) designed for use with grammars trained with the Berkeley Parser and on NVIDIA cards. On recent-ish NVIDIA cards (e.g. a GTX 680), it parses around 400 sentences a second with a full Berkeley grammar for sentences of length <= 40.



NASALib is a continuing collaborative effort, spanning over three decades, to aid in NASA-sponsored research related to theorem proving ( It consists of a collection of formal developments (i.e., libraries) written in the Prototype Verification System ([PVS](, contributed by SRI, NASA, NIA, and the PVS community, and maintained by the [NASA/NIA Formal Methods Team at LaRC](


Pyhop is a simple HTN planner written in Python. It works in both Python 2 and 3.
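As a rough illustration of the HTN idea, here is a minimal planner sketch: operators change the state directly, while methods decompose a compound task into subtasks. This mimics the flavor of HTN planning but is not Pyhop's actual API; `walk`, `travel`, and the tiny domain are hypothetical examples:

```python
# Operator: applies directly to the state.
def walk(state, dest):
    state = dict(state)
    state["loc"] = dest
    return state

OPERATORS = {"walk": walk}

# Method: decomposes the "travel" task into a list of subtasks.
def travel(state, dest):
    return [("walk", dest)]

METHODS = {"travel": [travel]}

def seek_plan(state, tasks, plan):
    """Depth-first HTN decomposition: return a list of operator applications."""
    if not tasks:
        return plan
    name, *args = tasks[0]
    if name in OPERATORS:
        new_state = OPERATORS[name](state, *args)
        return seek_plan(new_state, tasks[1:], plan + [tasks[0]])
    for method in METHODS[name]:
        subtasks = method(state, *args)
        if subtasks is not None:
            result = seek_plan(state, subtasks + tasks[1:], plan)
            if result is not None:
                return result
    return None

print(seek_plan({"loc": "home"}, [("travel", "park")], []))
# [('walk', 'park')]
```

Pyhop itself follows the same scheme (operators, methods, and a task list) with a small registration API; see its documentation for the real entry points.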


This program was authored by John Beieler (jbeieler@caerusassociates).


Pyro is a flexible, scalable deep probabilistic programming library built on PyTorch. Notably, it was designed with these principles in mind:

- **Universal**: Pyro is a universal PPL -- it can represent any computable probability distribution.
- **Scalable**: Pyro scales to large data sets with little overhead compared to hand-written code.
- **Minimal**: Pyro is agile and maintainable. It is implemented with a small core of powerful, composable abstractions.
- **Flexible**: Pyro aims for automation when you want it, control when you need it. This is accomplished through high-level abstractions to express generative and inference models, while allowing experts easy access to customize inference.


python-kasa is a Python library to control TPLink smart home devices (plugs, wall switches, power strips, and bulbs) using asyncio. This project is a maintainer-made fork of [pyHS100]( project.


**PyTodoist** is a Python package for interacting with `Todoist `_. It hides the underlying API calls with higher-level abstractions that make it easy to use Todoist with Python.


PyTrees is a python implementation of behaviour trees designed to facilitate the rapid development of medium sized decision making engines for use in fields like robotics. Brief feature list:
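The behaviour-tree idea can be sketched with the two classic composites, Sequence and Selector. This is a minimal, hypothetical sketch of the concept, not PyTrees' actual API:

```python
SUCCESS, FAILURE = "success", "failure"

class Leaf:
    """A behaviour: runs a function against a shared context (blackboard)."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, ctx):
        return self.fn(ctx)

class Sequence:
    """Succeeds only if every child succeeds, ticking left to right."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Succeeds as soon as one child succeeds (a fallback)."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

# A robot tries the door, and if it is closed, falls back to knocking.
tree = Selector(
    Sequence(Leaf(lambda c: SUCCESS if c["door_open"] else FAILURE),
             Leaf(lambda c: SUCCESS)),   # walk through
    Leaf(lambda c: SUCCESS),             # knock
)
print(tree.tick({"door_open": False}))  # success (falls back to knocking)
```

PyTrees adds the pieces this sketch omits: a RUNNING status for long-lived actions, blackboards, decorators, and visualisation tools.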


qgrep is an implementation of a grep database, which allows you to perform grepping (i.e. full-text searches using regular expressions) over a large set of files. Searches use the database, which is a compressed and indexed copy of the source data, and are thus much faster than vanilla `grep -R`.
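The general idea behind such an index can be sketched with a toy trigram index: only files containing every trigram of the query can possibly match, so most files are skipped before any regex scanning happens. This illustrates the concept only; it is not qgrep's actual on-disk format or implementation:

```python
import re

def trigrams(text):
    return {text[i:i + 3] for i in range(len(text) - 2)}

def build_index(files):
    """Map each trigram to the set of files whose contents contain it."""
    index = {}
    for name, text in files.items():
        for t in trigrams(text):
            index.setdefault(t, set()).add(name)
    return index

def search(files, index, literal):
    # Narrow to files containing every trigram of the query...
    candidates = set(files)
    for t in trigrams(literal):
        candidates &= index.get(t, set())
    # ...then verify the survivors with a real scan, as a grep would.
    return sorted(n for n in candidates if re.search(re.escape(literal), files[n]))

files = {"a.txt": "hello world", "b.txt": "goodbye world"}
idx = build_index(files)
print(search(files, idx, "hello"))  # ['a.txt']
```

Real systems (qgrep included) additionally compress the indexed text and handle regular-expression queries by extracting required substrings from the pattern.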


This is the implementation of the approach described in the paper: > Dario Pavllo, David Grangier, and Michael Auli. [QuaterNet: A Quaternion-based Recurrent Model for Human Motion]( In arXiv preprint arXiv:1805.06485, 2018.


Racer is a knowledge representation system that implements a highly optimized tableau calculus for the description logic SRIQ(D). Racer is provided with a BSD-3 license (see the file LICENSE.txt).


r2 is a rewrite from scratch of radare in order to provide a set of libraries and tools to work with binary files.


#### Overview

ReAgent is an open source end-to-end platform for applied reinforcement learning (RL) developed and used at Facebook. ReAgent is built in Python and uses PyTorch for modeling and training and TorchScript for model serving. The platform contains workflows to train popular deep RL algorithms and includes data preprocessing, feature transformation, distributed training, counterfactual policy evaluation, and optimized serving. For more detailed information about ReAgent see the white paper [here](


# Real-Time Voice Cloning

This repository is an implementation of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis]( (SV2TTS) with a vocoder that works in real-time. Feel free to check [my thesis]( if you're curious or if you're looking for info I haven't documented. Mostly I would recommend giving a quick look to the figures beyond the introduction.


Reasonable Python is a module which adds F-Logic to Python. This is an initial package and is still pretty unstable. Any bug report is very much appreciated.


This is a baseline implementation. General use cases could guide restrictions that still permit tractable inference. See the slides for more conclusions.


Implementation of [ReBeL](, an algorithm that generalizes the paradigm of self-play reinforcement learning and search to imperfect-information games. This repository contains implementation only for [Liar's Dice]( game.


Updating your housekeeping book is a tedious task: You need to manually find the shop name, the date and the total from every receipt. Then you need to write it down. At the end you want to calculate a sum of all bills. Nasty. So why not let a machine do it?


# Recipe Interpretation This repository contains the code for [*Mise en Place*: Unsupervised Interpretation of Instructional Recipes]( by Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi.


A PHP library for parsing recipe data from HTML.


The _OwnTracks Recorder_ is a lightweight program for storing and accessing location data published via [MQTT]( (or HTTP) by the [OwnTracks]( apps. It is a compiled program which is easy to install and operate even on low-end hardware, and it doesn't require an external database.


A tool to extract knowledge from syntactic and semantic relations.


REL is a modular Entity Linking package that is provided as a Python package as well as a web API. REL has various meanings -- one might first notice that it stands for relation, which is a fitting name for the problems that can be tackled with this package. Additionally, in Dutch a 'rel' means a disturbance of the public order, which is exactly what we aim to achieve with the release of this package.


RelationFactory is a relation extraction and knowledge-base population system. It was the top-ranked system in TAC KBP 2013 English Slot-filling ( If you want to use RelationFactory in a TAC benchmark, please contact the authors (see LICENSE for details). RelationFactory uses SVMLight ( for classification, so you must agree to the License of SVMLight, especially to it being restricted to scientific use only.


Repairnator is an open-source project for [automated program repair]( All kinds of repair are considered: test failure repair, compilation error repair, static warning repair, crash repair, etc. Repairnator is integrated with continuous integration (Travis CI, Jenkins, etc.) and makes pull-requests with fixes. The project is hosted at the [Eclipse]( open-source foundation.


This tool allows you to set up a `webhook` that waits for pull requests and scans all interesting files to check for leaked secrets. Every time the PR is updated, it rescans the latest changes and generates a report.


ReqWiki is a novel open source web-based approach for software requirements engineering. It is based on a semantic wiki that includes natural language processing (NLP) assistants, which work collaboratively with humans on the requirements specification documents. It is the first Requirements Engineering tool that combines wiki technology for collaborative use and semantic knowledge representation for formal queries and reasoning with natural language processing assistants within a single, cohesive interface.


A resolution theorem prover written in Lisp for UMaine's COS470: Artificial Intelligence course.


This is a set of baseline algorithms for the [Retro Contest](


Gym Retro is a wrapper for video game emulator cores using the Libretro API to turn them into Gym environments. It includes support for multiple classic game consoles and a dataset of different games. It runs on Linux, macOS and Windows with Python 3.5 and 3.6 support.


ReVerb is a program that automatically identifies and extracts binary relationships from English sentences. ReVerb is designed for Web-scale information extraction, where the target relations cannot be specified in advance and speed is important.


A python package for detecting rhyme schemes in poetry. With standard configuration, it achieves about 65% accuracy in the `rhymedata `_ corpus.
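A naive version of rhyme-scheme labeling can be sketched by giving the same letter to lines whose last words share a spelling suffix. This is only a crude orthographic proxy (it misses rhymes like "blue"/"you") and is not the package's actual probabilistic method:

```python
def rhyme_scheme(lines, suffix_len=2):
    """Label lines A, B, C...; lines whose last words share a suffix get the same letter."""
    labels, seen = [], {}
    for line in lines:
        # Take the last word, strip punctuation, and keep only its ending.
        key = line.split()[-1].lower().strip(".,!?;:")[-suffix_len:]
        if key not in seen:
            seen[key] = chr(ord("A") + len(seen))
        labels.append(seen[key])
    return "".join(labels)

poem = [
    "I went outside today,",
    "to watch the children play,",
    "they stayed out half the night,",
    "beneath the pale moonlight.",
]
print(rhyme_scheme(poem))  # AABB
```

The package improves on this by modeling rhyme probabilistically rather than through raw spelling, which is what lifts accuracy on real corpora where spelling and sound diverge.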


This dataset contains 260 cooking recipe texts which are the same as [CURD]( and [SIMMR]( The corpus development is detailed in [our short paper]( If our work contributes to your research, please cite the paper.

```
@inproceedings{jiang-etal-2020-recipe,
  title     = "Recipe Instruction Semantics Corpus ({RIS}e{C}): {R}esolving Semantic Structure and Zero Anaphora in Recipes",
  author    = "Jiang, Yiwei and Zaporojets, Klim and Deleu, Johannes and Demeester, Thomas and Develder, Chris",
  booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
  month     = dec,
  year      = "2020",
  address   = "Suzhou, China",
  publisher = "Association for Computational Linguistics",
  url       = "",
  pages     = "821--826"
}
```


Please solve: 1/2 + 3/4
|: 4/6.
This is wrong. You cannot just sum the numerators when the denominators are different! Let us first find a common multiple of 2 and 4!
Please enter a common multiple of 2 and 4:
|: 2.
This is wrong. 2 is no common multiple of 2 and 4, since 2 is not divisible by 4! So, let's try again!
Please enter a common multiple of 2 and 4:
|: 3.
This is wrong. 3 is not a common multiple of 2 and 4, since 3 is not divisible by 2! So, let's try again!
Please enter a common multiple of 2 and 4:
|: 5.
This is wrong. I see you are having a hard time with this. Hint: 2 * 4 = 8 is a possible solution. So, let's try again!
Please enter a common multiple of 2 and 4:
|: 8.
Good, the solution is correct. There is also a smaller solution! Now apply this knowledge to the original task!
Please solve: 1/2 + 3/4
|: 10/8.
Good, the solution is correct, but not minimal.
Please cancel common divisors in: 10/8
|: 1/4.
This is wrong! Unfortunately, I cannot give any useful hints here. So, let's try again!
Please cancel common divisors in: 10/8
|: 5/0.
The denominator of a fraction cannot be 0. So, let's try again!
Please cancel common divisors in: 10/8
|: 5/4.
Good, the solution is correct and also minimal. Very nice!

The interaction history:
[solve(1/2+3/4),internal(1/2+3/4=4/6),solve(cm(2,4)),internal(cm(2,4)=2),
 solve(cm(2,4)),internal(cm(2,4)=3),solve(cm(2,4)),internal(cm(2,4)=5),
 solve(cm(2,4)),internal(cm(2,4)=8),solve(1/2+3/4),internal(1/2+3/4=10/8),
 solve(cancel(10/8)),internal(cancel(10/8)=1/4),solve(cancel(10/8)),
 internal(cancel(10/8)=5/0),solve(cancel(10/8)),internal(cancel(10/8)=5/4)]
true.
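The arithmetic in the session above is easy to check mechanically. A minimal Python sketch of the same steps (pick a common multiple of the denominators, sum over it, then cancel common divisors):

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 over the common denominator d1*d2, then cancel."""
    common = d1 * d2                                   # 2 * 4 = 8, as hinted in the dialogue
    num = n1 * (common // d1) + n2 * (common // d2)    # 4 + 6 = 10, i.e. 10/8
    g = gcd(num, common)                               # cancel common divisors: gcd(10, 8) = 2
    return num // g, common // g

print(add_fractions(1, 2, 3, 4))  # (5, 4), i.e. 5/4
```

Using the least common multiple (4 instead of 8) would give 2/4 + 3/4 = 5/4 directly, which is the "smaller solution" the tutor mentions.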


The Record Linkage ToolKit (RLTK) is a general-purpose open-source record linkage platform that allows users to build powerful Python programs that link records referring to the same underlying entity. Record linkage is an extremely important problem that shows up in domains extending from social networks to bibliographic data and biomedicine. Current open platforms for record linkage have problems scaling even to moderately sized datasets, or are just not easy to use (even by experts). RLTK attempts to address all of these issues.


A small collection of utilities for making roguelikes


This document describes version 0.0.15.


This Git repository contains all the code samples available on, along with instructions and supplemental tools to help get them running on your local machine.


[Rosette]( is a solver-aided programming language that extends [Racket]( with language constructs for program synthesis, verification, and more. This repository includes the source code for Rosette, as well as several example solver-aided DSLs.


**rpitx** is a radio transmitter for the Raspberry Pi (B, B+, Pi 2, Pi 3B, Pi 3B+, Pi Zero, Pi Zero W) that transmits RF directly on a GPIO pin. It can handle frequencies from 5 kHz up to 1500 MHz.


The code is an example of implementing a custom MovieOS-style interface for your RaspberryPi projects that include the RaspberryPi touch screen (e.g. home automation control panel). The LCARS assets can be replaced with assets from any other style of user interface (e.g. from games, cartoons, or TV series).


RTEC is an extension of the [Event Calculus]( that supports highly-scalable stream processing. It is written in Prolog and has been tested under [YAP 6.2](


This software has been tested on Debian Linux 7.1, but should work on any Linux distribution, and might run on OS X and other POSIX-compliant operating systems.


rtl_433 (despite the name) is a generic data receiver, mainly for the 433.92 MHz, 868 MHz (SRD), 315 MHz, 345 MHz, and 915 MHz ISM bands.


#### Test case

The following is a cube map for a solved cube with the Left side rotated 90 degrees:


Rudel is a collaborative editing environment for GNU Emacs. Its purpose is to share buffers with other users in order to edit the contents of those buffers collaboratively. Rudel supports multiple backends to enable communication with other collaborative editors using different protocols, though currently Obby (for use with the Gobby editor) is the only fully-functional one.


A video demonstrating rudibugger can be found [here](


This repo contains tools and utilities to: 1. Generate datasets of theories and assertions meant to test the logical reasoning capabilities of a model. For details see the paper [Transformers as Soft Reasoners over Language]( 2. Run existing theories through a theorem proving engine to obtain labels.


This project contains the GOAL runtime (standalone)


Safehouse is a __headless__ (I didn't write any js or templates), __developer-focused__ (you config it by editing the source code), __scale-invariant__ (it only has one user) django server. You text it or (eventually) email it codewords and parameters, and it does stuff. Like send you a joke. Or text a bunch of your friends saying you're having a serious mental episode and need to talk to someone _right now_ before you cut off your hands.


This is documentation on how to install and use the code of **SafeLearner**. It is licensed under the [Apache-2.0 license](


This repository contains the code to deploy and run the Sapa Replan planner, which derives from the Sapa codebase.


This project depends on NLTK, the Natural Language Toolkit, which also depends on other libraries. Please follow the instructions for installing this library at . (Windows users may need to consult


Saul is a modeling language implemented as a domain specific language (DSL) in Scala. The main goal of Saul is to facilitate designing machine learning models with arbitrary configurations for the application programmer, including:


This is a proof-of-concept implementation of a (very!) small fragment of an English Sign-Based Construction Grammar, adapted to adhere to classic CxG assumptions. The grammar is implemented in ProFIT, a Prolog extension with Features, Inheritance, and Templates originally developed by Gregor Erbach (Universitaet des Saarlandes) in 1994. The present version of ProFIT has been ported to modern SICStus Prolog (3.8 or higher) by Mats Carlson. None of these individuals have any knowledge of the present project or share any of the blame for any of its shortcomings.


There is a new version of science-parse out that works in a completely different way. It has fewer features, but higher quality in the output. Check out the details at


* [triageServer]( generates the web archive (*.war) file that runs on a web application container (such as Jetty, Tomcat, Glassfish, etc.).
* [skmTriage]( contains the server-side logic for all administrative commands to generate, populate, and edit the underlying database.
* [triageClientApp]( generates the *.swf file for the Flex web application.
* [triageClientComponents]( generates the *.swc library containing all the logic of the triageModule Flex component.
* [skmCore]( provides a basic layer on top of the digitalLibrary for other text-mining applications using UIMA.
* [digitalLibraryDao]( provides data access to the system for base citation and document functions.
* [lapdftext]( is the core library for manipulating PDF documents.
* [lapdftextVpdmf]( links the lapdftext library to the VPDMf framework via the FTD model.
* [bmkeg-as-parent]( manages Maven metadata for AS projects.
* [bmkeg-parent]( manages Maven metadata for Java projects.


This is the human-annotated AI2 Reasoning Challenge (ARC) dataset (ARCADE198) from the following paper:


Scone is a knowledge representation and reasoning system – a knowledge-base system or KBS – that has been developed by Scott Fahlman’s research group in the Language Technologies Institute of Carnegie Mellon University. Scone, by itself, is not a complete AI or decision-making system, and does not aspire to be; rather, it is a software component – a sort of smart active memory system – that is designed to be used in a wide range of software applications, both in AI and in other areas. Scone deals just with symbolic knowledge. Things like visualization, motor memory, and memory for sound sequences are also important for human-like intelligence, but we believe that those will have specialized representations of their own, linked in various ways to the symbolic memory.


This is a project to provide Semantic Web programmers with [Information Extraction]( (IE) functionalities. SCOOBIE can be initialised with any kind of RDF graph. It interprets the occurrence of URI references being described with RDF properties as descriptions of formal instances. On the basis of an RDF graph with contained instances, SCOOBIE offers following methods:


We face a tradeoff between seeking the broadest geographic coverage we can get (meaning including every local paper we can find) and accuracy and relevance (which would lead us to include only large, well-known, and high-quality news outlets). We're trying to balance the two objectives by including a third column indicating whether the source is a wire service, a dependable news source with solid international coverage, or a local source that may contribute extra noise to the data and may require specialized actor dictionaries. The distinction between the latter two is hazy and requires a judgement call. Eventually, these labels can be used to build event datasets that are either optimized for accuracy and stability (at the cost of sparseness), or for micro-level, geographically dispersed (but noisy) coverage.


Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.


## How Redaction Works

The redaction process is currently mostly static and fairly simple. In the future the process will be more flexible, allowing submission of photos for processing or even regions of photos. The process initially uses Tesseract OCR to find words inside the image. Once this process is finished, users are notified of completion. If a user chooses to view the redactions, the currently enabled word dictionaries are applied to the results. Dictionaries can choose to whitelist or blacklist with their own internal rules. The end result is a screenshot with zero or more words wrapped in boxes and blacked out.
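The dictionary step described above can be sketched as follows. This is a hypothetical illustration, not the project's actual API: OCR has already produced (word, box) pairs, and each enabled dictionary decides which words get redacted.

```python
def apply_dictionaries(ocr_words, dictionaries):
    """ocr_words: list of (word, box) pairs from OCR.
    dictionaries: list of (mode, terms) where mode is 'blacklist' or 'whitelist'.
    Returns the (word, box) pairs that should be blacked out."""
    redact = set()
    for mode, terms in dictionaries:
        if mode == "blacklist":    # redact exactly the words the dictionary names
            redact |= {w for w, _ in ocr_words if w.lower() in terms}
        elif mode == "whitelist":  # redact everything the dictionary does NOT allow
            redact |= {w for w, _ in ocr_words if w.lower() not in terms}
    return [(w, box) for w, box in ocr_words if w in redact]

words = [("password", (0, 0, 80, 12)), ("hello", (90, 0, 40, 12))]
print(apply_dictionaries(words, [("blacklist", {"password"})]))
# [('password', (0, 0, 80, 12))]
```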


This is an attempt to build a graphical user interface for editing SCXML finite state machines.


Structured Data Extractor (SDE) is an implementation of DEPTA (Data Extraction based on Partial Tree Alignment), a method to extract data from web pages (HTML documents). DEPTA was invented by Yanhong Zhai and Bing Liu of the University of Illinois at Chicago and was published in their paper "Structured Data Extraction from the Web based on Partial Tree Alignment" (IEEE Transactions on Knowledge and Data Engineering, 2006). Given a web page, SDE will detect the data records contained in the page and extract them into a table structure (rows and columns). You can download the application from this link: Download Structured Data Extractor.


  1. Extract
  2. Make sure that the Java Runtime Environment (version 5 or higher) is already installed on your computer.
  3. Open a command prompt (Windows) or shell (UNIX).
  4. Go to the directory where you extracted
  5. Run this command: java -jar sde-runnable.jar URI_input path_to_output_file
  6. The URI_input parameter may refer to a local or remote file, as long as it is a valid URI. A URI referring to a local file must be preceded by "file:///". For example, on Windows: "file:///D:/Development/Proyek/structured_data_extractor/bin/input/input.html"; on UNIX: "file:///home/seagate/input/input.html".
  7. The path_to_output_file parameter must be a valid path in the host operating system, like "D:\Data\output.html" (Windows) or "/home/seagate/output/output.html" (UNIX).
  8. Extracted data can be viewed in the output file, which is an HTML document presenting the extracted data in HTML tables.

Source Code

SDE source code is available at GitHub.


SDE was developed using these libraries:

  • Neko HTML Parser by Andy Clark and Marc Guillemot. Licensed under Apache License Version 2.0.
  • Xerces by The Apache Software Foundation. Licensed under Apache License Version 2.0.


SDE is licensed under the MIT license.


Sigit Dewanto, sigitdewanto11[at]yahoo[dot]co[dot]uk, 2009.


This project will improve the Game Development tutorials for Perl using the SDL library. The primary goal is to introduce newcomers to Game Development in Perl. The secondary goal is to attract people to try Perl as a Game Scripting and Prototyping language.


Approach0 is a math-aware search engine.


A curated list of awesome Public Zettelkastens 🗄️ / Second Brains 🧠 / Digital Gardens 🌱


A bridge to help increase your ability to detect secrets shared on GitHub.


[Selenium WebDriver][wd] is a test tool that allows you to write automated web application UI tests in any programming language against any HTTP website using any mainstream JavaScript-enabled browser. This module is a Perl implementation of the client for the WebDriver [JSONWireProtocol][jsonwire] that Selenium provides.


This project is meant to automate Debian packaging for selenium-server. It will automatically download selenium-server from the Google Code file repository and package it with init.d scripts.


# The Self-dialogue Corpus

This is an early release of the Self-dialogue Corpus containing 24,165 conversations, or 3,653,313 words, across 23 topics. For more information on the data, please see [our corpus paper]( or [our submission to the Alexa Prize](


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


A *semagram* is a flexible structure for encoding the semantics of a given concept via a slot-filler structure.


This will produce the following output files, saved in the directory models/semeval-winning-model/answers/friends_test_scene/ :


This repository aims to solve DeftEval: Extracting term-definition pairs in free text


- `configs`: yaml configs for the system
- `datasets`: contains the task datasets, which can be downloaded from the team competition webpage
- `results`: the folder for submissions
- `span_identification`: code for the task SI
  - `ner`: pytorch-transformers RoBERTa model with CRF (end-to-end)
  - `dataset`: the scripts for loading and preprocessing the source dataset
  - `submission`: the scripts for obtaining and evaluating results
- `technique_classification`: code for the task TC (the folder has the same structure as `span_identification`)
- `tools`: tools provided by the competition organizers; contain useful functions for reading datasets and evaluating submissions
- `visualization_example`: example of visualization of results for both tasks


A semantic parser maps natural language utterances into an intermediate logical form, which is "executed" to produce a denotation that is useful for some task.
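As a toy illustration of the pipeline described above (all names and the "grammar" are invented for this sketch): an utterance is mapped to a logical form, which is then executed against a tiny database to produce a denotation.

```python
# A minimal utterance -> logical form -> denotation pipeline.
CAPITALS = {"france": "paris", "italy": "rome"}

def parse(utterance):
    """Map an utterance to a minimal logical form (a predicate-argument pair)."""
    country = utterance.lower().replace("what is the capital of ", "").rstrip("?")
    return ("capital", country)

def execute(logical_form):
    """Execute the logical form against the database to produce a denotation."""
    predicate, arg = logical_form
    if predicate == "capital":
        return CAPITALS.get(arg)

print(execute(parse("What is the capital of France?")))  # paris
```

Real semantic parsers replace the string matching with a learned grammar, but the parse/execute split is the same.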


A web demo for visualizing Semafor parses


A Clojure library designed to scrub sensitive data such as social security numbers and credit card numbers from strings.
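The library itself is Clojure; as a language-neutral illustration of the idea, a regex-based scrub might look like this in Python (the patterns here are simplified examples, not the library's actual rules):

```python
import re

# Simplified patterns: a US SSN (ddd-dd-dddd) and a 16-digit card number.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

def scrub(text):
    """Replace sensitive substrings with fixed placeholders."""
    text = SSN.sub("XXX-XX-XXXX", text)
    return CARD.sub("[CARD]", text)

print(scrub("SSN 123-45-6789, card 4111 1111 1111 1111"))
# SSN XXX-XX-XXXX, card [CARD]
```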


SentiWordNet is a lexical resource for opinion mining. SentiWordNet assigns to each synset of WordNet three sentiment scores: positivity, negativity, objectivity. SentiWordNet is described in detail in the papers:


This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


A shell parser, formatter, and interpreter. Supports [POSIX Shell], [Bash], and [mksh]. Requires Go 1.12 or later.


[SHALMANESER]( is a SHALlow seMANtic parSER.

SharpWit is an online service that takes a natural language sentence, e.g. 'I have a meeting tomorrow', and sends back data that can be easily interpreted by software, e.g. 'intent: appointment, datetime: 2014-03-02T00:00:00.000+01:00'.


2. USAGE: ./scripts/ [options... -o $OUTPUT_FILE] $GRAPH $NUM_PARTITIONS

   $GRAPH may be a .net (SNAP) or a .dat (XSS/Graph500 binary) file. There is a snap2xss conversion utility in llama/utils. By default, $GRAPH = test/hep-th.dat and $NUM_PARTITIONS = 2. If $NUM_PARTITIONS = 0, then we skip the partitioning phase.


ShellCheck is a GPLv3 tool that gives warnings and suggestions for bash/sh shell scripts:


The following is an example of the command line to run all the tests for Sherlock. This invocation hides the progress text that Sherlock normally outputs, and instead shows the verbose output of the tests.


ShinyCMS is an open source CMS built in Perl using the Catalyst framework.


SHOP2 -- Simple Hierarchical Ordered Planner 2 -- is a domain-independent planning system based on Hierarchical Task Network (HTN) planning. In the 2002 International Planning Competition, SHOP2 received one of the top four awards, one of the two awards for distinguished performance.


This repository contains the open source version of the SHOP3 planner.


Shroud is a simple secret manager with a command-line interface. The password database is stored as a Scheme s-expression and encrypted with a GnuPG key.


This software was used to extract, clean, annotate, and evaluate the corpus described in our SIGIR 2016 article.


Sigma is an integrated development environment for logical theories that extend the Suggested Upper Merged Ontology. There is a public installation with read-only functions enabled linked from


A new version of Sikuli(X) has been available since 2013 as a follow-up development.


SimGen is a simulation language, originally created by Simularity, Inc.


This is a very shitty Emacs mode for **basic** displaying and editing of Isabelle files (.thy). The idea is to avoid opening a fully fledged JEdit for trivial stuff.


SKeylogger is a simple keylogger. I had previously been using a few other open source keyloggers, but they stopped working when I upgraded my operating system. I tried to look through the code of those keyloggers, but it was undocumented, messy, and complex. I decided to make my own highly documented and very simple keylogger.


Lucida is a speech and vision based intelligent personal assistant based on Sirius. Visit the provided readmes in [lucida](lucida) for instructions to build Lucida and follow the instructions to build [lucida-suite here]( Post to [Lucida-users](!forum/sirius-users) for more information and answers to questions. The project is released under [BSD license](LICENSE), except certain submodules contain their own specific licensing information. We would love to have your help on improving Lucida, and see [CONTRIBUTING]( for more details.


SitCalc is a framework for managing state in an application without mutation based on situation calculus.


This is a reasoning engine for multi-agent epistemic queries in the situation calculus. It was developed as part of the PhD thesis (and subsequent journal paper submission) for:


This repository provides the top-level definition for interpretations of Situations in Logtalk.


The Skills Extractor is a Named Entity Recognition (NER) model that takes text as input, extracts skill entities from that text, then matches these skills to a knowledge base (in this sample a simple JSON file) containing metadata on each skill. It then returns a flat list of the skills identified.
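The matching step described above can be sketched as follows. This is a hypothetical illustration (the sample's actual schema and function names may differ): entities coming out of the NER model are looked up in a JSON knowledge base keyed by skill name, producing the flat list of identified skills.

```python
import json

# A tiny stand-in for the JSON knowledge base of skill metadata.
KB = json.loads('{"python": {"type": "language"}, "docker": {"type": "tool"}}')

def match_skills(entities):
    """Keep only entities found in the KB, attaching their metadata."""
    return [{"skill": e, **KB[e.lower()]} for e in entities if e.lower() in KB]

print(match_skills(["Python", "Docker", "juggling"]))
# [{'skill': 'Python', 'type': 'language'}, {'skill': 'Docker', 'type': 'tool'}]
```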


The SLING project is still work in progress. We do not yet have a full system that can extract facts from arbitrary text, but we have built a number of the subsystems needed for such a system. The SLING frame store is our basic framework for building and manipulating frame semantic graph structures. The [Wiki flow pipeline](doc/guide/ can take a raw dump of Wikidata and [convert](doc/guide/ this into one big frame graph. This can be loaded into memory so we can do fast graph traversal for inference and reasoning over the knowledge base. The Wiki flow pipeline can also take raw Wikipedia dumps and [convert](doc/guide/ these into a set of documents with structured annotations extracted from the Wiki markup. This also produces [phrase tables](doc/guide/ that are used for mapping names to entities. There is a [SLING Python API](doc/guide/ for accessing all this information and we also have a [bot](python/wikibot) for uploading extracted facts to Wikidata.


SMACK is a *bounded software verifier*, verifying the assertions in its input programs up to a given bound on loop iterations and recursion depth. SMACK can verify C programs, such as the following:


This version is derived from the original via Quintus Prolog after some compatibility modifications for SWI-Prolog and adding a module header that allows using it safely together with other applications.


[Smatch]( is an evaluation tool for [AMR]( (Abstract Meaning Representation). It computes the Smatch score (defined below) of two AMR graphs in terms of their matching triples (edges) by finding a variable (node) mapping that maximizes the count, `M`, of matching triples, then:
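Given the maximizing mapping, the Smatch score is the F-score over matching triples: precision is `M` over the first graph's triple count, recall is `M` over the second's. A quick sketch:

```python
def smatch_score(m, t1, t2):
    """F-score over matching triples: m matches, t1 and t2 triples per graph."""
    precision, recall = m / t1, m / t2
    return 2 * precision * recall / (precision + recall)

print(smatch_score(5, 8, 10))  # 0.5555... (precision 5/8, recall 5/10)
```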


A symbolic model checker for [Dynamic Epistemic Logic](


This work includes data from NextKB, which was compiled by the Qualitative Reasoning Group at Northwestern University. NextKB is freely available under the Creative Commons Attribution 4.0 license from The included data was created by contributors to the Qualitative Reasoning Group, contributors to Cycorp's OpenCyc, University of California at Berkeley's FrameNet project, the VerbNet project, and Princeton University's WordNet project. For details of attributions, please see



SNARK, SRI's New Automated Reasoning Kit, is a theorem prover intended for applications in artificial intelligence and software engineering. SNARK is geared toward dealing with large sets of assertions; it can be specialized with strategic controls that tune its performance; and it has facilities for integrating special-purpose reasoning procedures with general-purpose inference.

Snowman is a native code to C/C++ decompiler, supporting x86, AMD64, and ARM architectures. You can use it as a standalone GUI application, a command-line tool, an IDA plug-in, or a library. Snowman is link:doc/licenses.asciidoc[free software].


We are building innovative products for various social networks which fill a critical gap: social networks were meant for users, not for businesses. Our tools and products view social from a business point of view and fill those gaps which social networks cannot fill. Businesses should own their social data and be in charge of what they want to do with it: generate reports and analyze data to make informed and improved business decisions. This is possible when things are open and businesses have the freedom to choose; we believe open source is the way to make this possible, so that brands and businesses can embrace social technology with an open mind in an open and connected world.


An open source social media data mining software (event detection + influence analysis)


This is a code refactoring of LIMSI's source extractor program in order to expose source extraction as a web service. It is a Spring Boot application deployed in a Docker image.


In the ./sources directory are subdirectories for each language you wish to be able to identify. Each subdirectory contains examples of programs written in that language. The name of the directory is significant - it is the value returned by the SourceClassifier.identify() method.
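The directory convention described above can be sketched in a few lines (a hypothetical illustration in Python, not the library's own code): the training label for each example file is simply its parent directory's name.

```python
import os
import tempfile

def training_pairs(sources_dir):
    """Yield (language, path) pairs; the label is the subdirectory name."""
    for language in sorted(os.listdir(sources_dir)):
        lang_dir = os.path.join(sources_dir, language)
        if not os.path.isdir(lang_dir):
            continue
        for name in sorted(os.listdir(lang_dir)):
            yield language, os.path.join(lang_dir, name)

# Build a tiny ./sources-style tree and walk it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "ruby"))
with open(os.path.join(root, "ruby", "hello.rb"), "w") as f:
    f.write("puts 'hi'\n")
print([lang for lang, _ in training_pairs(root)])  # ['ruby']
```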


spaCy is a library for advanced natural language processing in Python and Cython. See the spaCy documentation for details. spaCy is built on the very latest research, but it isn't researchware. It was designed from day 1 to be used in real products. It's commercial open-source software, released under the MIT license.


An experiment with parsing natural language and classifying the [speech act]( of the sentence. This is especially important when a machine is trying to understand the meaning of a sentence in an environment, like a chat session, where missing punctuation is common.


This repository is a final archived version of Please contact that repository's maintainer for further information.


The framework contains an example experiment using the GeoQuery corpus. To use development fold 0 for testing, and training on the other folds, use: ``java -jar dist/spf-1.4.jar geoquery/experiments/template/dev.cross/dev.fold0.exp`` The log and output files are written to a newly generated directory in the experiment directory: ``geoquery/experiments/template/dev.cross/``


This is the implementation of an agent that considers and handles the game states just as humans do. It checks the cards on the board, the nobles, and its owned coins and development cards, then takes an action.


A JavaScript Multiagent Board Game Framework Based On Monte Carlo Methods. German: "Ein multiagentenbasiertes JavaScript-Framework zur flexiblen Implementation digitaler browserbasierter Brettspiele und spielübergreifender künstlicher Intelligenz."



spread0r is a txt reader, which makes your reading twice as fast as usual


SRLIE
=====

SRLIE is a component of Open IE 4.x that automatically identifies n-ary extractions from English sentences. SRLIE is designed for Web-scale information extraction, where target relations are not specified in advance.


This software is used for generating PDDL files out of model descriptions. PDDL is a well-known artificial intelligence planning language. Please note that even though this application generates PDDL, it is not used to interpret PDDL. Users of this software are referred to open-source PDDL planners such as the OPTIC planner for this task (see [link](


SimpleScreenRecorder is a screen recorder for Linux. Despite the name, this program is actually quite complex. It's 'simple' in the sense that it's easier to use than ffmpeg/avconv or VLC :).


# Star Ruler 2

Star Ruler 2 is a massive scale 4X/RTS set in space. Explore dozens, hundreds, or even thousands of systems in a galaxy of your choosing, expand across its planets, exploit the resources you find, and ultimately exterminate any who stand in your way. The fate of your empire depends on your ability to master the economy, field a military, influence galactic politics, and learn what you can about the universe.


[Agency]( is a one page agency portfolio theme for [Bootstrap]( created by [Start Bootstrap]( This theme features several content sections, a responsive portfolio grid with hover effects, full page portfolio item modals, a responsive timeline, and a working PHP contact form.


A relatively brief manual can be found in resources/introduction/index.html

# Description

Statechum is a framework that implements a number of regular grammar inference algorithms. Regular grammars can be represented as finite state machines. Once the grammar / state machine has been generated, StateChum can visualise it, and provides a selection of state-machine analysis and testing algorithms.


This is an entirely preliminary, undocumented, unsupported release of stet. Files may be missing. Scatology may be unexpurgated. I don't have much time to help you with this right now. You need RT; we're using version 3.2. There are perl dependencies. There are unstated assumptions. But you asked for it. You got it.


# Improving Neural Story Generation by Targeted Common Sense Grounding

This repository contains the code to replicate the paper "Improving Neural Story Generation by Targeted Common Sense Grounding".


This project is a demo of using the artificial intelligence automated planning library [strips](, in node.js.


STRIPState is a framework for managing state in an application without mutation based on STRIPS and situation calculus.


This directory contains knowledge base files written in KIF, and files in WordNet data file format (see ). Several alternative WordNet mapping files are present.


Simple library and command-line utility for extracting summaries from HTML pages or plain text. The package also contains a simple evaluation framework for text summaries. Implemented summarization methods:


Superglus is an interactive fiction (text adventures) authoring system strongly based on the Professional Adventure Writing System.


This collection of verification tasks is constructed and maintained as a common benchmark for evaluating the effectiveness and efficiency of state-of-the-art verification technology.


This repository contains the scaffolding to initialize and keep a local development environment for the Software Heritage Python stack. In particular, it contains pointers to the Git repositories of all Software Heritage Python modules. The repositories are managed using [myrepos][1] (see the .mrconfig file), and the `mr` command.


SWIM is a compact library that implements the basic functionality of [Genetic Programming (GP)](#fg), a popular stochastic approach to program synthesis. I developed its early version in the process of preparing my recent [book](#bps) on behavioral program synthesis using GP.


SWING (Summarizer from WING) is a multiple-document news summarization system by the Web Information Retrieval/Natural Language Group (WING) at the National University of Singapore.


A tutorial for DCGs in SWI-Prolog


This model is used to predict symptoms that are closely related to a given symptom. It can be used in cases (read: apps) where the user enters a symptom and a list of similar symptoms pops up; the user selects the ones they are suffering from, and these can be fed into a further model that predicts the disease the person is suffering from and redirects them to the associated specialist. That latter part isn't included here.


This function reads and processes the data file, then initializes the SymptomTree class using the processed data. The class contains attributes for:

- the DecisionTreeClassifier model (`model`)
- the cleaned NAMCS dataset (`data`)
- a dictionary mapping diagnoses to unique identifier codes (`diagnosis_dict`)
- a dictionary mapping unique codes to diagnosis strings (`rev_diagnosis_dict`)
- the x and y training datasets (`x_train`, `y_train`)
- the x and y testing datasets (`x_test`, `y_test`)
- predicted diagnoses (`y_hat`)
- a lookup attribute
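The code-and-dictionary flow described above can be sketched as follows. This is an illustrative stand-in, not the project's API: diagnosis strings map to integer codes for training, a classifier (here a trivial 1-nearest-neighbour in place of the DecisionTreeClassifier) predicts a code, and the reverse dictionary maps it back to a diagnosis string.

```python
# Hypothetical names mirroring the attributes described above.
diagnosis_dict = {"influenza": 0, "common cold": 1}
rev_diagnosis_dict = {code: name for name, code in diagnosis_dict.items()}

def predict(symptoms, x_train, y_train):
    """1-nearest-neighbour stand-in: pick the training row with most overlap."""
    def overlap(row):
        return sum(a == b for a, b in zip(symptoms, row))
    best = max(range(len(x_train)), key=lambda i: overlap(x_train[i]))
    return y_train[best]

x_train = [[1, 0, 1], [0, 1, 0]]   # symptom indicator vectors
y_train = [0, 1]                   # diagnosis codes
y_hat = predict([1, 0, 1], x_train, y_train)
print(rev_diagnosis_dict[y_hat])   # influenza
```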


SyntheaTM is a Synthetic Patient Population Simulator. The goal is to output synthetic, realistic (but not real), patient data and associated health records in a variety of formats.


SyPet is a novel type-directed tool for component-based synthesis. The key novelty of our approach is the use of a compact Petri-net representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petri-net model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user.


# Sytora

Sytora is a multilingual symptom-disease classification app. Translation is managed through the UMLS coding standard. A multinomial Naive Bayes classifier is trained on a handpicked dataset, which is freely available under CC4.0.


(6) Run T2 as follows (replace "Debug" by "Release" for the release build):

    $ mono "$T2DIR/src/bin/Debug/T2.exe"

For example, to execute the testsuite:

    $ pushd "$T2DIR/test" && mono "$T2DIR/src/bin/Debug/T2.exe" -tests


Source code for the TABARI C++ event coding program. This is a GitHub mirror for the code found at


A more extensive set of dictionaries can be found incorporated into the zipped files of the various data sets at


# Event Nugget Extraction using Deep Neural Networks

This repository contains the files for our Event Nugget Detection system that was submitted to the TAC 2015 shared task on Event Nugget Detection. It is described in the paper [Event Nugget Detection, Classification and Coreference Resolution using Deep Neural Networks and Gradient Boosted Decision Trees](


This repository contains the code for automated labeling of FrameNet roles in arbitrary sense-labeled and linguistically preprocessed text as described in section 4 of our TACL paper.


This AI is packaged using [Dist::Zilla](


Tagsistant is a semantic file system for Linux, a personal tool to catalog files using tags (labels, mnemonic information) rather than directories.


This is my version of the project for the Introduction to SWI-Prolog class.


The [Test Anything Protocol]( is a text-based interface between test scripts and a test harness. A wide range of tools exist for running, rendering and analyzing test results. By writing your Prolog tests with TAP, you get access to all this testing infrastructure. For example, [interactive HTML output](
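For readers unfamiliar with the format, a minimal TAP stream is just a plan line followed by one `ok`/`not ok` line per test. The sketch below prints such a stream from Python purely as an illustration of the protocol; the test descriptions are invented.

```python
# Emit a minimal TAP (Test Anything Protocol) stream: a "1..N" plan
# line, then one "ok"/"not ok" line per test, numbered from 1.
def tap_report(results):
    """results: list of (description, passed) pairs -> TAP text."""
    lines = [f"1..{len(results)}"]
    for n, (desc, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {n} - {desc}")
    return "\n".join(lines)

print(tap_report([("member/2 finds element", True),
                  ("append/3 handles empty list", True)]))
```

Any TAP-aware harness (e.g. `prove`) can consume such a stream regardless of the language that produced it, which is what makes the protocol useful as a bridge between Prolog tests and existing tooling.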


## What is Tarski

Tarski is a framework for the specification, modeling and manipulation of [AI planning]( problems. Tarski is written in Python and includes parsers for major modeling languages (e.g., [PDDL](, [FSTRIPS](, [RDDL](, along with modules to perform other common tasks such as logical transformations, reachability analysis, grounding of first-order representations and problem reformulations.


TML (Tau Meta-Language) is a variant of Datalog. It is intended to serve as a translator between formal languages (and has more uses; see the Philosophy section). The main difference between TML and common Datalog implementations is that TML works under the Partial Fixed-Point (PFP) semantics, unlike common implementations that follow the Well-Founded Semantics (WFS) or stratified Datalog. Like WFS, TML therefore imposes no syntactic restrictions on negation; however, unlike WFS or stratified Datalog, it is PSPACE-complete rather than P-complete. TML's implementation relies heavily on BDDs (Binary Decision Diagrams) in its internals. This gives it extraordinary performance in both time and space, and makes negation feasible even over large universes. In fact, thanks to the BDD mechanism, negated bodies, as below, consume no more time or space than positive bodies.
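The PFP idea can be sketched in a few lines: the rule-application step is iterated from the empty database, and the result is defined only if the sequence reaches a state that maps to itself (a true fixed point); if the sequence enters a cycle instead, the result is undefined. The toy evaluator below, with an unrestricted negated body in its single rule, is an invented illustration of this semantics, not TML's actual (BDD-based) evaluator.

```python
# Toy illustration of Partial Fixed-Point (PFP) semantics: iterate a
# step function over databases (sets of facts) until either a true
# fixed point is reached or a cycle is detected (result undefined).
def pfp(step):
    state, seen = frozenset(), set()
    while state not in seen:
        seen.add(state)
        nxt = frozenset(step(state))
        if nxt == state:      # genuine fixed point: PFP result defined
            return state
        state = nxt
    return None               # entered a cycle: PFP result undefined

# One rule with a negated body: q(x) :- e(x), not p(x).
facts = {("e", 1), ("e", 2), ("p", 1)}
def step(db):
    derived = set(facts)
    for x in (1, 2):
        if ("e", x) in db and ("p", x) not in db:
            derived.add(("q", x))
    return derived
```

Running `pfp(step)` converges to a database containing `q(2)` but not `q(1)`, since `p(1)` blocks the negated body for `x = 1`.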


The [TEI]( is an international and interdisciplinary standard used by libraries, museums, publishers, and academics to represent all kinds of literary and linguistic texts, using an encoding scheme that is maximally expressive and minimally obsolescent.


This is version 3 of the TEI-EMACS installation: a more or less complete SGML/XML authoring system, which combines GNU-Emacs with PSGML and a host of other relevant emacs customizations for writing and validating SGML or XML documents. Most XML-emacs subsystems have their own help system or documentation.


This is a rewrite of a userspace USB driver for TEMPer devices presenting a USB ID like this: `0c45:7401 Microdia` My device came from [M-Ware ID7747]( and also reports itself as 'RDing TEMPerV1.2'.


Temperance is a logic programming library for Common Lisp.


This documentation aims to explain how experiments with the planners introduced by [[Jiménez, Jonsson and Palacios, 2015]](#ref-tmp-planning-icaps15) and [[Furelos-Blanco, Jonsson, Palacios and Jiménez, 2018]](#ref-tmp-planning-coplas18) can be run.


# Tensorflow RNN to Events Prediction

**[NOTE]**: *This notebook was made with [Tensorflow v.0.8.0]( and the code is not compatible with the newest release of Tensorflow. For the moment I don't have time to upgrade the code, so treat the notebook more as an illustration of the GDELT dataset and time-series analysis.*


This is a TensorFlow implementation of the [WaveNet generative neural network architecture]( for text generation.


This is a TensorFlow implementation of the [WaveNet generative neural network architecture]( for audio generation.


TerminusDB is an open source model driven graph database for knowledge graph representation designed specifically for the web-age.


This is the code for the Tetrad Project; an introduction can be found here:


This repository contains the original implementation of the evaluation methods presented in [Reference-less Quality Estimation of Text Simplification Systems]( (1st Workshop on Automatic Text Adaption, INLG 2018). The version that was used at submission time is on branch [submission](


# Text-to-LogicForm

Text-to-LogicForm is a simple code for leveraging a syntactic graph for semantic parsing using a nov


TextBelt Open Source is a REST API that sends outgoing SMS. It uses a free mechanism for sending texts, different from the more reliable paid version available at


[Textus][] is an open-source platform for presenting and working with cultural and historical texts.


This is the repository for the project for AI for Games course.


### Argument instructions

- bsize: batch size
- out: the output folder, which will contain the log, the best model, and the result report
- tie_embedding: `all` means tie the encoder/decoder/projection w embedding; we found it can speed up training
- bert_mode: the mode of using BERT. `bert_token` indicates we use the subtoken vocabulary from BERT; `bertbase` indicates we use the BERT base version (due to memory issues, we have not yet tried the BERT large version)
- environment: the path config of the experiment. Please change it in model/ to fit your system


This program takes data in the text-oriented ICEWS .tab files downloaded from DataVerse study 28075 and converts this to a more conventional data format using the CAMEO codes. The conversion process is described in detail in the file `text_to_CAMEO_documentation.pdf`.


Prolog is a **programming language** that is rooted in formal logic. It supports *backtracking* and *unification* as built-in features. Prolog allows us to elegantly solve many tasks with short and general programs.


This is the General Game Player used in the 2011 General Game Playing Competition


If you are not scared to blindly run the changed command, there is a `require_confirmation` [settings](#settings) option:


This repository serves for the development of the Theorema system, see also


A code searching tool similar to `ack`, with a focus on speed.


This will compile all classes and package them into a jar for use on a Hadoop cluster.


TIFMO (Textual Inference Forward-chaining MOdule) is an unsupervised Recognizing Textual Entailment (RTE) system based on Dependency-based Compositional Semantics (DCS) and logical inference.


TIFMO (Textual Inference Forward-chaining MOdule) is an unsupervised Recognizing Textual Entailment (RTE) system based on Dependency-based Compositional Semantics (DCS) and logical inference.


A plugin for visualizing a temporal planner's output as a timeline.


Tocc is a tag-based file management system. It also includes a tag-based file system called Toccfs. The goal of Tocc is to provide a better system for classifying files which is more flexible than classic file systems based on a tree of files and directories.


*torchnet* is a framework for [torch]( which provides a set of abstractions aiming at encouraging code re-use as well as encouraging modular programming.


[TORCS][TORCS] is a open-source racing car simulation. We use it as driving simulation to evaluate our [plan recognition][prGolog] system.


Disclaimer: the dataset for this competition contains text that may be considered profane, vulgar, or offensive.


TrailDB is an efficient tool for storing and querying series of events. This repository contains the core C library and the `tdb` command line tool.


This branch has the following patches:


This is a small interface built to play with small language models in the terminal.


Please note that only this git repo contains the most recent version of the addon. Due to the review process it may take a while before the updates show up on the page.


*Universal-transpiler* is a source-to-source compiler that translates a small subset of several programming languages into several others. It is also able to translate several metasyntax notations, such as EBNF and ABNF. The translation is not always 100% accurate, but I hope it will still be useful.


A general-purpose **Tran**sition-based abstract synta**X** parser that maps natural language queries into machine executable source code (e.g., Python) or logical forms (e.g., lambda calculus). **[Online Demo](**.


- The jig will print the feedback on the screen. Each feedback item is a JSON-dumped string.


# TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

This repo contains code for the paper: Mandar Joshi, Eunsol Choi, Daniel Weld, Luke Zettlemoyer. [TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension][triviaqa-arxiv]. In Association for Computational Linguistics (ACL) 2017, Vancouver, Canada.


This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.


This is the main repository for the Tarsqi Toolkit (TTK), a set of processing components for extracting temporal information from news wire texts. TTK extracts time expressions, events, subordination links and temporal links; in addition, it can ensure consistency of temporal information.


** Introduction

This is an experiment in building purely text-based user interfaces (TUIs). The ultimate goal is to explore new paradigms for user interface design and development using Emacs. To this end, tui.el implements an API based on the popular React JavaScript framework in order to reduce the demands involved in designing and building complex text-based UIs.


This tool is meant to be used as a web service running locally on your network or personal machine. It will load HIT template files generated by the Amazon Mechanical Turk web GUI provided to requesters for creating HITs. Input CSV files are also uploaded to create a HIT based on the template with each row of values in the CSV file.


This is an [OwnTracks]( TurnKey-Linux back-end, with the following features:


A library for communicating with devices that use the [Tuya]( cloud network. These devices are branded under many different names, but if port 6668 is open on your device chances are this library will work with it. Currently only supports smart plugs, but it should be fairly trivial to add other types of devices.


Output
------

The output contains the tokenized and tagged words separated by spaces, with each tag separated from its word by a forward slash '/'. Example output:
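The word/tag output format described above can be produced, and parsed back, with a couple of lines; this is a generic sketch of the format, not the tagger's own code, and the function names are illustrative.

```python
# Join (word, tag) pairs into the space-separated word/tag format
# described above, and split such a line back into pairs.
def format_tagged(pairs):
    return " ".join(f"{word}/{tag}" for word, tag in pairs)

def parse_tagged(line):
    # rsplit guards against tokens that themselves contain a slash
    return [tuple(tok.rsplit("/", 1)) for tok in line.split()]

line = format_tagged([("The", "DT"), ("dog", "NN"), ("barks", "VBZ")])
```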


UDepLambda is a framework to convert Universal Dependencies trees to Logical Forms. It maps natural language to logical forms in an almost language-independent framework. For more details, please refer to our papers below.


Prolog is a logical programming based on a variant of 1st order logic. To 'program' in Prolog you create a knowledge base of facts and rules about the problem. Then you may query the knowledge base. Prolog uses a modified backchaining algorithm to search the knowledge base in an attempt to prove the query.


### Running on raw text data

* Prepare a data directory `data` containing sub-directories `rsd` and `ltf`. The `rsd` sub-directory contains RSD (Raw Source Data, ending with `*.rsd.txt`) files, and the `ltf` sub-directory has LTF (Logical Text Format, ending with `*.ltf.xml`) files.
* If you have RSD files, please use [`aida_utilities/`]( to generate the LTF files.
```bash
docker run --rm -v ${ltf_dir}:${ltf_dir} -v ${rsd_dir}:${rsd_dir} -i limanling/uiuc_ie_m36 /opt/conda/envs/py36/bin/python /aida_utilities/ --seg_option nltk+linebreak --tok_option nltk_wordpunct --extension .rsd.txt ${rsd_dir} ${ltf_dir}
```
* If you have LTF files, please use the AIDA ltf2rsd tool (`LDC2018E62_AIDA_Month_9_Pilot_Eval_Corpus_V1.0/tools/ltf2txt/ltf2rsd.perl`) to generate the RSD files.
* Start the services:
```bash
sh
```
* Run the scripts. Note that the file paths are absolute paths.
```bash
sh ${data_root}
```
For example:
```bash
sh ${PWD}/data/testdata_dryrun
```


UKB is a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing knowledge base.


# The Upper Library Ontology (for metadata on theorem prover libraries) This repository contains the [OWL2]( implementation of the Upper Library Ontology [ulo.owl](ulo.owl) and [OWLDoc documentation](OWLDoc/).


First, it is a broad, general reference structure of 34,000 concepts, which provides a scaffolding to link and interoperate other datasets and domain vocabularies. Second, it is a base vocabulary for the construction of other concept-based domain ontologies, also designed for interoperation.


This Coq library aims to formalize a substantial body of mathematics using the univalent point of view.


[Unison]( is a new programming language, currently under active development. It's a modern, statically-typed purely functional language, similar to Haskell, but with the ability to describe entire distributed systems with a single program. Here's an example of a distributed map-reduce implementation:


An algorithm for parsing any planning problem in PDDL format.


An extension to the [Universal PDDL Parser]( to handle multi-agent domains.


`Universe `_ is a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications. This is the ``universe`` open-source library, which provides a simple `Gym `__ interface to each Universe environment.


The codebase implements a starter agent that can solve a number of `universe` environments. It contains a basic implementation of the [A3C algorithm](, adapted for real-time environments.


Overview
========

Repo contains the domains, generators, and scripts in general for the inaugural edition of the unsolvability IPC.


`montague` is a little CCG semantic parsing library for Scala.


# USC Distantly-supervised Relation Extraction System This repository puts together recent models and data sets for **sentence-level relation extraction** *using knowledge bases (i.e., distant supervision)*. In particular, it contains the source code for WWW'17 paper *[CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases](*.


The User Simulator is a tool designed to generate network and host activity for training purposes. It is intended for use in a closed network primarily consisting of Windows and Linux virtual machines. Other operating systems may be compatible, but are untested. The Linux version does not have access to all the features of the Windows version. In particular, the Windows version can run several of the programs in MS Office, while the Linux version obviously cannot.


This uses [rtlamr][rtlamr] to process the radio broadcasts sent by the meter. I live in a less dense location than the blog author, so I only picked up three meters using the `idm+` message. My meter included a serial number on its face that directly matched one of those three meters, so it was very easy to get the right reading.



Vagrant-mutate is a vagrant plugin to convert vagrant boxes to work with different providers.


*vagrant-vbguest* is a [Vagrant]( plugin which automatically installs the host's VirtualBox Guest Additions on the guest system.


This repository hosts tools for AI Planning plans and planning models.


![GitHub Workflow Status (branch)]( ![GitHub release (latest by date)](


Reproducible experiments are in `/vbsix-lang2program/paper_experiments/experiments/`, organized according to their domain, search algorithm, and random seed. Each experiment's directory contains its data as described above. The directory `/vbsix-lang2program/paper_experiments/code/` holds two versions of the source code:

- strongsup_baseline: the [source code]( accompanying the paper "[From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood](". This code should be used to reproduce the baseline beam-search experiments.
- strongsup_vbsix: the source code accompanying our paper. This code should be used to reproduce the VBSIX and ablation experiments.



Veewee is a tool for easily (and repeatedly) building custom [Vagrant]( base boxes, KVMs, and virtual machine images.


This is a simple bot for implemented in SWI-Prolog.


This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.


This project builds [Graphviz]( with [Emscripten]( and provides a simple wrapper for using it in the browser.


This is a list of ContentMine virtual machines, with descriptions, in reverse date order (i.e. most recent first).


VOnDA is a framework for the implementation of reactive dialogue management functionality in dialogue systems for virtual agents. Although domain-independent, VOnDA is tailored towards dialogue systems with a focus on social communication, which implies the need of a long-term memory and high user adaptivity.


A list of the default keybindings for VS Code is surprisingly hard to find, even in the VS Code source, so I collected them all here. I've also included `negative` keybindings, which unmap the keybindings.


This repository ("`Code - OSS`") is where we (Microsoft) develop the [Visual Studio Code]( product together with the community. Not only do we work on code and issues here, we also publish our [roadmap](, [monthly iteration plans](, and our [endgame plans]( This source code is available to everyone under the standard [MIT license](


This Visual Studio Code extension provides emacs-like keybindings and operations. This is inspired by [the great vscode extension by hiro-sun]( and its forks such as [vscode-emacs-friendly by Sebastian Zaha](, [vscode-emacs-improved by rkwan94]( and [vscode-emacs-neon by NotKyon](


This extension makes VS Code a great place for modeling planning domains.



* 0th rule: Any sufficiently complicated Lisp or Scheme program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of ISO Prolog.
* Translating Lisp to Prolog gives Prolog:
  * Metaobject Protocol
  * Common Lisp Object System
  * An instant Prolog ecosystem of development libraries (days, not years)
    * Several decades of Common Lisp libraries may be translated into usable Prolog development libraries.
    * Maintain your code from the original Lisp or the translated Prolog (though it won't translate back).
    * Settings to try to emulate handwritten code ([Examples](
* Forms (at the REPL) are transpiled to Prolog, compiled to WAM, and called/executed.
  * *Only* 2-3x slower than SBCL.
* Gives Prolog more than we can list!
  * Similar to how CLISP is indispensable sometimes.
* _A_ Common Lisp used for sanity testing
  * Makes debugging easy for Prolog and Lisp experts.
* Picks up freebies: whatever the host Prolog system offers, such as
  * Garbage collection
  * Memoization/coinduction
  * Dynamic extent
  * Exception handling
  * Unwind-protect/cleanup
  * Native locatives
  * Two-way calling and embedding from C/C++/Python/C#/Mono/Scala/Java/Haskell/LUA/Perl
  * Makes platform executables and DLL/.so files ([Quick Start](
  * (too enormous to go into)
* Developed/installed as a SWI-Prolog pack
  * []( ``

## Incompleteness: must fix for release worthiness

* Bugs running/translating:
  * Fully working LOOP (must-fix)
  * SWANK (must-fix)
  * PAIP book code (bug, in progress)
  * [DAYDREAMER]( (in progress)
  * [KNOWLEDGE MACHINE](
  * Quicklisp (bug, must-fix)
  * ASDF-INSTALL (bug, must-fix)
* Add missing impls
  * delete-package (must-fix)
  * (more to be listed) (not here)
* Tests ([in progress](
  * Must pass 70% or above of the CL-ANSI tests (bug, in progress)
  * Ensure it passes _all_ CL-ANSI tests (with --ansi) (feature, always in progress)
  * The hardest part is making sure it throws/complains about all the things it needs to.
  * Need more tests!
* FFI (bug, in progress)
  * Use ?
  * Using SWICLI as FFI (SWICLI's FFI itself still needs work, but works for YAP as well)

## TODO _Features_

* Document this pack's Prolog source code (indeed, a feature!)
* Keep later `copy_term/2`s cheap (feature, in progress)
* Experiment with ways to pass entire term object references as atoms (nb_current/2 allows access to the object's property map)
* [(FAKE TODO![Build Status](](
* Untangle the `pack` install deps
* Moving predicates to logicmoo_utils from logicmoo_base (still in progress)
* A de-packified version for portability?
* YAP-Prolog (in progress) (Lisp-to-Prolog benchmarking shows about a 5x speedup)
* TODO: SICStus, B-Prolog, Bin-Prolog, ECLiPSe Prolog and Jekejeke
* Low priority: PrologCafe, Yield-Prolog


Waybackpack is a command-line tool that lets you download the entire Wayback Machine archive for a given URL.


Karma is an information integration tool that enables users to quickly and easily integrate data from a variety of data sources including databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs. Users integrate information by modeling it according to an ontology of their choice using a graphical user interface that automates much of the process. Karma learns to recognize the mapping of data to ontology classes and then uses the ontology to propose a model that ties together these classes. Users then interact with the system to adjust the automatically generated model. During this process, users can transform the data as needed to normalize data expressed in different formats and to restructure it. Once the model is complete, users can publish the integrated data as RDF or store it in a database.


This repository contains all scripts associated with my research on topical Web-page classification. You can read the full paper describing the task, experiments, and results [here](paper.pdf).


Tap the screen then say a colour — the grammar string contains a large number of HTML keywords to choose from, although we've removed most of the multiple word colors to remove ambiguity. We did keep goldenrod, cos, well.


### What is weblegends?

weblegends is a DFHack plugin that runs a web server, inside Dwarf Fortress, that allows you to view your entire world's history, artifacts, settlements, heroes, and so much more... over the internet or just locally.


WebNav is a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site consisting of web pages and hyperlinks to find a web page in which a query appears.


WebODE is an extensible ontology-engineering suite based on an application server, whose development started in 1999 and whose **support was discontinued in 2006**. The core of WebODE was its ontology access service, used by all the services and applications plugged into the server. The WebODE's Ontology Editor allowed editing and browsing WebODE ontologies, and was based on HTML forms and Java applets.


The following would be the standard approach to identifying the ontological primitives in QR. An ontological primitive (e.g., a quantity) has:

* Exactly one **identifier**, consisting of an integer that is automatically assigned by an internal counter. The integer is appended to the path of the circle URI. Example: `localhost:5000/circle/17`. This is also used for dereferencing the circle and for sending HTTP requests from the client to the server.
* Zero or more **descriptive label**s. The most recently assigned descriptive label is set as the `rdfs:label` of the identifier and is displayed in the User Interface. All other descriptive labels are asserted as `qsim:old_label` literals (possibly including the timestamp of their abolition). Examples: `< localhost:5000/circle/17, rdfs:label, "boom" >`, `< localhost:5000/circle/17, qsim:old_label, 'Tree' >`. If the user types text in a circle that is not a URI, then we assume it is a descriptive label.
* Zero or more **concept name**s that are existing URIs in the LOD. If the user types text in a circle that is a URI, it is assumed to be a concept name. An `owl:sameAs` relation with the identifier is asserted.
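The identifier-and-labels bookkeeping described above can be sketched as follows. This is an invented illustration with hypothetical function names, not the actual server code: an internal counter assigns integer identifiers, the newest descriptive label becomes the current `rdfs:label`, and older labels become `qsim:old_label` triples.

```python
import itertools

# Sketch of the circle bookkeeping described above: an internal counter
# assigns integer identifiers appended to the circle URI; the newest
# descriptive label is current, older ones are kept as qsim:old_label.
_counter = itertools.count(1)

def new_circle(base="localhost:5000/circle"):
    return f"{base}/{next(_counter)}"

def label_triples(circle, labels):
    """labels: oldest-to-newest descriptive labels for one circle."""
    triples = [(circle, "qsim:old_label", old) for old in labels[:-1]]
    triples.append((circle, "rdfs:label", labels[-1]))
    return triples
```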


Wekan is a completely [Open Source][open_source] and [Free software][free_software] collaborative kanban board application with an MIT license.


A redaction tool for structured data. Run `wernicke` with JSON on stdin and get redacted values out. It preserves structure and (to some extent) semantics. You might want this because you have test data where the actual values are sensitive. Because the changes are consistent within the data and the overall data structure is preserved, there's a better chance your data will stay suitable for testing even though it's been scrubbed.
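The "consistent within the data" property can be achieved by deriving each replacement deterministically from the original value. The sketch below is not wernicke's actual implementation, just an illustration of the idea: walk JSON-like data, replace every string with a hash-derived token, and identical inputs yield identical tokens.

```python
import hashlib

# Sketch of structure-preserving, consistent redaction: the same input
# string always maps to the same token, so relationships survive.
def redact(value):
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, str):
        return hashlib.sha256(value.encode()).hexdigest()[:8]
    return value  # leave numbers/booleans/null untouched

data = {"user": "alice", "friends": ["bob", "alice"]}
out = redact(data)
```

After redaction, `out["user"]` still equals `out["friends"][1]`, so joins and cross-references in test data keep working even though the sensitive values are gone.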


Whirl is a toy esoteric language. See the [classic Whirl webpage]( for more info!


This project is provided as is and is missing dependencies. Feel free to re-use parts for your own system, but please do not expect it to run out of the box.



* _WNprolog-3.0BF.tar.gz_ is a bugfix release of _WNprolog-3.0_. It fixes some known problems, including the transitive hyponym bug.


world-universities-csv
======================

This is a forked copy of two CSV files with universities in the US and around the world.


To try to cure myself I've written a new WordStar mode for emacs: its name is **WorMstar** (because WordStar is like a worm in my head...) and the elisp file that contains it is named `wm-mode.el`.


WWW::Flatten is a web crawling tool for freezing pages into standalone form. I believe this works better than wget or browsers' "Save as, complete" feature.


XChange is a Java library providing a simple and consistent API for trading and accessing market data on 60+ Bitcoin and other cryptocurrency exchanges.


This unpacked version includes both patches, allowing execution on Android 4.0.4 devices and on devices without Bluetooth 4.0 (but remember: if you don't have Bluetooth 4.0, the app will crash and there is nothing we can do).


XLM supports multi-GPU and multi-node training, and contains code for:

- **Language model pretraining**:
  - **Causal Language Model** (CLM)
  - **Masked Language Model** (MLM)
  - **Translation Language Model** (TLM)
- **GLUE** fine-tuning
- **XNLI** fine-tuning
- **Supervised / Unsupervised MT** training:
  - Denoising auto-encoder
  - Parallel data training
  - Online back-translation


**XLNet** is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs [Transformer-XL]( as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.


This library contains several development tools; although not all are listed here, the most stable and relevant ones follow:


YAGO is a large semantic knowledge base, derived from Wikipedia, WordNet, WikiData, GeoNames, and other data sources. Currently, YAGO knows more than 17 million entities (like persons, organizations, cities, etc.) and contains more than 150 million facts about these entities.


This is the pipeline to run YAGO 4.


[Yancy]( is a simple content management system (CMS) for administering content in a database. Yancy accepts a configuration file that describes the data in the database and builds a website that lists all of the available data and allows a user to edit data, delete data, and add new data.


This file should not be added to git's managed files.


YodaQA is an open source Factoid Question Answering system that can produce answers both from databases and from text corpora using on-the-fly information extraction. By default, open domain question answering is performed on top of the Freebase and DBpedia knowledge bases as well as the texts of enwiki articles.


This repository represents Ultralytics open-source research into future object detection methods, and incorporates our lessons learned and best practices evolved over training thousands of models on custom client datasets with our previous YOLO repository. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.


# DESCRIPTION

**youtube-dl** is a command-line program to download videos from and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.


_Youtube-upload_ is a command-line Python script that uploads videos to YouTube using the YouTube [APIv3]( (it should work on any platform that runs Python: GNU/Linux, BSD, OS X, Windows, ...).



This should report that it passes all the tests. If not, something might be wrong with your configuration, or there may be some incompatibility between the script and your system. If you suspect the latter, let me know the details!


It is a predicate that has an Ordinal argument.


A WSS (Secure WebSockets) and/or MQTT based event notification server that broadcasts new events to any authenticated listeners. (As of 0.6, it also includes a non-secure WebSocket option, if that's how you want to run it.)


I can't believe it has been 20 years already since the release of The Matrix movie.