Current git codebases, sorted alphabetically

  • Git codebases have been gathered manually with RADAR from online sources, or from GitHub and similar hosts. RADAR is not spidering yet, and we have not yet automatically processed all systems for descriptions; hence only some descriptions are displayed.

2013-Advent-Staging


The Catalyst Advent Calendar uses the [POD](http://perldoc.perl.org/perlpod.html) format. For each day of the month there is a corresponding pod file in the `root` directory. If you don't feel comfortable writing the article in POD, don't worry: the `examples/` directory of this repository contains a few examples from previous years.

abot


[![GoDoc](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](https://godoc.org/github.com/itsabot/abot) [![Travis CI](https://img.shields.io/travis/itsabot/abot.svg?style=flat-square)](https://travis-ci.org/itsabot/abot) Abot (pronounced *Eh-Bot*, like the Canadians) is a digital assistant framework that enables anyone to easily build a digital assistant similar to Apple's Siri, Microsoft's Cortana, Google Now, or Amazon Alexa. Further, Abot supports a human-aided training backend enabling anyone to build services like Facebook M.

Abstractive-Summarization-With-Transfer-Learning


This creates two tfrecord files under the data folder.

acceptability_prediction


This package contains scripts and tools for doing unsupervised acceptability prediction. For a full description of the software, please refer to the publication listed at the bottom of this document. Datasets are hosted on our project website.

accounts-assessor


This repository hosts a program that derives, validates, and corrects the financial information that it is given. The program uses redundancy to carry out its validations and corrections. By this it is meant that knowledge of parts of a company's financial data imposes certain constraints on the company's other financial data. If the program is given a company's ledger, then it knows what the balance sheet should look like. If the program is given a company's balance sheet, then it has a rough idea of what the ledger should look like.
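A toy sketch of what that redundancy means in practice (hypothetical numbers, not the program's code): totals derived from one representation constrain the other, so a claimed figure can be checked mechanically.

```python
# Toy illustration of redundancy-based validation: a ledger determines what
# the balance sheet should say, so a claimed asset total can be checked
# against one derived from the ledger entries (all data here is made up).
ledger = [("cash", 1000), ("equipment", 500), ("loan payable", -1500)]
claimed_asset_total = 1500

derived_asset_total = sum(amount for _, amount in ledger if amount > 0)
status = "consistent" if derived_asset_total == claimed_asset_total else "inconsistent"
print(status)  # consistent: 1000 + 500 == 1500
```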

ACE-in-GF


This project implements a subset of the syntax of Attempto Controlled English (ACE) version 6.7 in Grammatical Framework (GF) and ports it to ~20 natural languages (see the Makefile for the currently supported languages). Note that this project does not implement the mapping of ACE sentences to discourse representation structures.

AceRules


AceRules is a rule engine based on Attempto Controlled English (ACE).

AceWiki


AceWiki is a semantic wiki based on controlled natural language.

ACL-2014-irony


* The actual database - a flat sqlite file - is ironate.db.zip; it needs to be unzipped, of course.
* The database-schema.txt file (in this directory) contains information regarding the database.
* See irony_stats.py for instructions on how to reproduce our analyses. It also gives examples of working with the database in Python (and in SQL, since we issue queries directly). Note that this requires the sklearn, numpy, and statsmodels modules to be installed.
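For orientation, a minimal sketch of opening the unzipped database from Python (only the filename comes from the notes above; the actual table layout is documented in database-schema.txt):

```python
import sqlite3

# List the tables in the unzipped ironate.db; consult database-schema.txt
# for what each table contains before writing real queries.
conn = sqlite3.connect("ironate.db")
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
conn.close()
```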

acl2016-convincing-arguments


> **Abstract:** We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large dataset of 16k pairs of arguments over 32 topics and investigate whether the relation "A is more convincing than B" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman's correlation in a cross-topic evaluation. We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.

acl2016-modality-verbclasses


This repository contains code for experiments described in my ACL paper.

acl2016-optimizing-rouge


In this project, an approximation of ROUGE-N is derived. This approximation is linearly factorizable into the individual scores of sentences, which can then be optimized via Integer Linear Programming (ILP). This repository contains the code for our optimizer, which takes scored sentences and extracts the best summary according to the ROUGE approximation.
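A toy sketch of the underlying idea (not the repository's optimizer, and with hypothetical scores): once the objective is a sum of per-sentence scores, summary extraction becomes a budgeted selection problem, here brute-forced instead of handed to an ILP solver.

```python
from itertools import combinations

# Hypothetical per-sentence scores from a ROUGE-N approximation, plus
# sentence lengths and a summary length budget.
scores = {"s1": 0.42, "s2": 0.35, "s3": 0.18, "s4": 0.27}
lengths = {"s1": 20, "s2": 15, "s3": 10, "s4": 12}
budget = 30

# Enumerate all feasible subsets and keep the one maximizing the summed
# score; an ILP solver does the same thing efficiently at realistic scale.
feasible = (c for r in range(len(scores) + 1)
            for c in combinations(scores, r)
            if sum(lengths[s] for s in c) <= budget)
best = max(feasible, key=lambda c: sum(scores[s] for s in c))
print(best)  # ('s2', 's4') under these numbers
```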

acl2016-supersense-embeddings


> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

activity_prediction


There is a copy of the paper in this repository in the file called `Wilson_ACL_2019.pdf`.

AdaGram.jl


The Adaptive Skip-gram (AdaGram) model is a nonparametric extension of the famous Skip-gram model implemented in the word2vec software, which is able to learn multiple representations per word capturing different word meanings. This project implements AdaGram in the Julia language.

adpp-journal


This repository contains the implementation of algorithms PP, RPP, SDPP, SDRPP, ADPP, and ADRPP described in the article

aetheria


Aetheria Game Engine is a system for playing text adventure (interactive fiction) games, written in Java. Game worlds are represented in XML, with Beanshell code to account for complex object behaviour. PUCK (Playable Universe Construction Kit) is a graphical IDE that can be used to build such XML files.

AFP


agentpolis


Agentpolis is a fully agent-based platform for modeling transportation systems. It comprises a high-performance discrete-event simulation core, a cohesive set of high-level abstractions for building extensible agent-based models and a library of predefined components frequently used in transportation and mobility models. Together with a suite of supporting tools, Agentpolis enables rapid prototyping and execution of data-driven simulations of a wide range of mobility and transportation phenomena.

agentpolis-demo


In this repository, we demonstrate how to use [Agentpolis](https://github.com/aicenter/agentpolis) to simulate urban transportation scenarios. It contains a Python script that illustrates how to convert raw OpenStreetMap data to geoJSON format used by Agentpolis. Further, it contains an example Java code that exploits the functionality of Agentpolis to simulate and visualize movement of several vehicles over the road network specified in the input geoJSON files.

AgentSpeak


* Semantically a goal marks a certain state of the world an agent _wishes to bring about_ [AgentSpeak, p.40]
* _Achievement goals_ trigger an _achievement goal addition_ which leads to the execution of a corresponding [plan](#plan)
* On agent start, there can exist only one _initial goal_ (like the ```main``` function in Java, C/C++)
* Each agent can track _more than one goal_ at the same time, otherwise the agent idles (the suspending state is not used)
* Goals are triggered by external events which are matched by the goal name
* Goals are resolved into [plans](#plan) with equal name (and allowed context); the [plan](#plan) is the instantiation of the goal
* Goals run in parallel, independently of other goals
* A goal is a sequence of [plans](#plan) which must all finish successfully
* A goal is part of exactly one [intention](#intention)
* If a goal can match a [desire](#desire) (the goal is near to the desire), it can add an event to match the desire [belief](#belief)
* If the agent is in sleeping / hibernate state and the ```wakeup``` method is called, it triggers the wakeup-goal

ai-economist


This repo contains an implementation of Foundation, a framework for flexible, modular, and composable environments that **model socio-economic behaviors and dynamics in a society with both agents and governments**.

AI-metrics


This repository contains a [Jupyter Notebook](http://jupyter.org/), which you can see live at [https://eff.org/ai/metrics](https://www.eff.org/ai/metrics). It collects problems and metrics / datasets from the artificial intelligence and machine learning research literature, and tracks progress on them. You can use it to see how things are progressing in specific subfields or AI/ML as a whole, as a place to report new results you've obtained, as a place to look for problems that might benefit from having new datasets/metrics designed for them, or as a source to build on for data science projects.

Ai-Papers


This is a catalog for the foundations and emergence of AI research. Understanding the historic development of computational logic from primary sources is useful in gaining insight into the current state of AI.

aida


AIDA is a framework and online tool for entity detection and disambiguation. Given a natural-language text, it maps mentions of ambiguous names onto canonical entities (e.g., individual people or places) registered in the Wikipedia-derived [YAGO2][YAGO] knowledge base.

aima-lisp


This repository was the original code base, back in 1995. Since then, the Java and Python versions have become more popular, and this Lisp version is no longer up-to-date. But it is here for whatever use you want to make of it.

AIND-Planning


This project includes skeletons for the classes and functions needed to solve deterministic logistics planning problems for an Air Cargo transport system using a planning search agent. With progression search algorithms like those in the navigation problem from lecture, optimal plans for each problem will be computed. Unlike the navigation problem, there is no simple distance heuristic to aid the agent. Instead, you will implement domain-independent heuristics. ![Progression air cargo search](images/Progression.PNG)

AIRIS_Public


AIRIS is an Artificial General Intelligence (AGI) project that combines aspects of Reinforcement Learning (RL) with more traditional symbolic techniques (GOFAI).

AIWar


AIWar is a game that lets you create artificial intelligences to control space ships. The goal is to assemble a fighter army to destroy the enemy base. To do that, you must gather minerals with mining ships and create fighters with your resources. And you should also defend yourself against the enemy army. The first team that destroys the enemy base wins the match!

alexa


I put together a list of resources at [https://bit.ly/alexaskill](https://bit.ly/alexaskill), which is a public Instapaper folder I set up to make sharing the list of links easy. The slides will refer to each of these links. I’d recommend having this open in a tab so you can refer back to the links easily.

alexa-avs-sample-app


This project provides a step-by-step walkthrough to help you build a **hands-free** [Alexa Voice Service](https://developer.amazon.com/avs) (AVS) prototype in 60 minutes, using wake word engines from [Sensory](https://github.com/Sensory/alexa-rpi) or [KITT.AI](https://github.com/Kitt-AI/snowboy). Now, in addition to pushing a button to "start listening", you can also just say the wake word "Alexa", much like the [Amazon Echo](https://amazon.com/echo). You can find step-by-step instructions to set up the hands-free prototype on [Raspberry Pi](../../wiki/Raspberry-Pi), or follow the instructions to set up the push-to-talk only prototype on [Linux](../../wiki/Linux), [Mac](../../wiki/Mac), or [Windows](../../wiki/Windows).

alexa-skills-list


1-Minute Mindfulness from Walking Affirmations is a skill that allows you to take a break from the world around you & enter into a one minute sound meditation.

alien


To use alien, you will need several other programs. Alien is a perl program, and requires perl version 5.004 or greater. If you use Slackware, make sure you get perl 5.004; the perl 5.003 in Slackware does not work with alien!

ALL

Language Acquisition ITS
ALL is a system that supports many tasks of language learning. Knowledge of other languages is deemed essential to the education of the mind and, when combined with clear, opens the door to immense quantities of knowledge. ALL supports this task for both written and spoken language (a necessity). It interfaces with bard, clear, and picform.

allennlp


An [Apache 2.0](https://github.com/allenai/allennlp/blob/master/LICENSE) NLP research library, built on PyTorch, for developing state-of-the-art deep learning models on a wide variety of linguistic tasks.

alpha-zero-general


A simplified, highly flexible, commented and (hopefully) easy to understand implementation of self-play based reinforcement learning based on the AlphaGo Zero paper (Silver et al.). It is designed to be easy to adopt for any two-player turn-based adversarial game and any deep learning framework of your choice. A sample implementation has been provided for the game of Othello in PyTorch, Keras, TensorFlow and Chainer. An accompanying tutorial can be found [here](http://web.stanford.edu/~surag/posts/alphazero.html). We also have implementations for GoBang and TicTacToe.
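As a rough illustration of what adopting a new game involves (a generic sketch, not the repository's exact interface; see its Game class for the actual required methods), the framework only needs the rules expressed through a handful of game-state methods:

```python
# Generic sketch of a pluggable two-player game definition (illustrative only).
class TicTacToeLike:
    def initial_state(self):
        return tuple([0] * 9)                      # empty 3x3 board

    def legal_moves(self, state):
        return [i for i, v in enumerate(state) if v == 0]

    def next_state(self, state, move, player):     # player is +1 or -1
        board = list(state)
        board[move] = player
        return tuple(board)

    def outcome(self, state):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if state[a] != 0 and state[a] == state[b] == state[c]:
                return state[a]                    # +1 or -1 won
        return 0 if 0 not in state else None       # draw or still running
```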

ambiverse-nlu


A list of existing pipelines can be found in `de.mpg.mpi_inf.ambiversenlu.nlu.entitylinking.uima.pipelines.PipelineType`, where you can also define new pipelines.

amfv


A Mind Forever Voyaging is a 1985 interactive fiction game written by Steve Meretzky and published by Infocom.

amr-eager


AMR-EAGER [1] is a transition-based parser for Abstract Meaning Representation (http://amr.isi.edu/).

amr-eager-multilingual


AMR-EAGER [1] is a transition-based parser for Abstract Meaning Representation (http://amr.isi.edu/). This repository provides an extension of AMR-EAGER to English, Italian, Spanish, German and Chinese. See [2] for a detailed explanation and experiments.

amziexpertsystemsinprolog


This is a fork of the Amzi! expert systems in Prolog, ported to SWI-Prolog and put in git instead of the awkward file-by-file download on the Amzi site.

android-BluetoothLeGatt


This sample demonstrates how to use the Bluetooth LE Generic Attribute Profile (GATT) to transmit arbitrary data between devices.

ANGRYsearch


* open terminal in the directory with the release files
* set `install.sh` as executable and run it

animanager


Animanager is a command line program for advanced anime watching management.

anno-pipeline


If you'd like to annotate a file that contains a single document without any SGML markup, add "--sgml f". However, for annotating a large quantity of files this is inadvisable, because loading the Stanford models takes a couple of minutes. It is more efficient to include several documents in one file (and documents should be formatted like parses).

antlrworks


***** Release check-list

- make sure all the bugs are resolved in http://www.antlr.org/jira/browse/AW
- make sure ANTLRWorks is compiled against the correct version of ANTLR and ST sources
- update the ANTLR and ST jar files in main/lib
- change version number (and date when it applies) in these files:
  - main/build.properties
  - main/resources/properties/strings.properties
  - main/plugin/src/org/antlr/works/plugin/properties/strings.properties
- update history in:
  - main/History
- update online files (ask Terence for the path):
  - index.html
  - update.xml and such files for new versions
- push release notes and such to the doc dir
- build ANTLRWorks by running ant on the main build file:
  $ cd main
  $ ant
- verify the following in the main/dist folder:
  - file versions are correct
  - jar file is running fine
  - OS X application is launching fine
- upload files online:
  - antlrworks-1.x.zip
  - antlrworks-1.x-src.zip
  - antlrworks-1.x.jar
- branch the release in p4 (main -> release/1.x)

APE


This document explains how APE (ACE Parsing Engine) is compiled and used.

api


apls


This is the source for building the core Amzi! Prolog + Logic Server system.

aptly


Aptly is a swiss army knife for Debian repository management.

arabic-tagger


This package provides a sequence tagger implementation customized for Arabic features, including a named entity detection model especially intended for Arabic Wikipedia. It was trained on labeled ACE and ANER data as well as an unlabeled Wikipedia corpus. Learning is with the structured perceptron, optionally in a cost-augmented fashion. Feature extraction is handled as a preprocessing step prior to learning/decoding.

arc


This program is a command-line based tool that can be used to analyze systems modelled using the AltaRica language.

argdown


[Argdown](https://christianvoigt.github.io/argdown) is a simple syntax for analyzing complex argumentation.

arggen-candela


This repository contains code for our ACL 2019 paper [Argument Generation with Retrieval, Planning, and Realization](http://xinyuhua.github.io/resources/acl2019/acl2019.pdf).

argmin2015-DiGAT


> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

argmin2016-unshared-task


* This site contains supplementary data for the Unshared Task
* See [the corresponding call for papers](call-for-papers.txt) and visit the [official workshop website](http://argmining2016.arg.tech/).

argumentation-logic-visualizer


This program was created in order to explore Argumentation Logic, a concept created by Prof. Antonis Kakas, Dr. Francesca Toni and Prof. Paolo Mancarella.

arisu


arisu is a bot for Discord written for [Let's all love Lain](https://discord.gg/JZwtnzJ) in Python using discord.py!

ark-sage


Ark-SAGE is a Java library that implements the L1-regularized version of **S**parse **A**dditive **G**enerativ**E** models of Text (SAGE). SAGE is an algorithm for learning sparse representations of text. Details of the algorithm are described in

ark-tweet-nlp


The jar file used is the one included in the release download. The tagger outputs tokens, predicted part-of-speech tags, and confidences. Use the "--help" flag for more information. On Unix systems, "./runTagger.sh" invokes the tagger.

art-DCGAN


### Scraping Images from Wikiart

`genre-scraper.py` will allow you to scrape artworks from wikiart based on their genres. The usage is quite simple. In `genre-scraper.py` there is a variable called `genre_to_scrape` - simply change that to any of the genres listed on [this page](https://www.wikiart.org/en/paintings-by-genre/), or to any of the values in the huge list of comments right after `genre_to_scrape` is defined.
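In other words, the single edit looks like this (the variable name comes from the text above; the value is just an example genre slug):

```python
# In genre-scraper.py: the one line selecting what to scrape.
genre_to_scrape = "portrait"  # example; any genre from the wikiart page works
```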

ascent


ASCENT is a pipeline for extracting and consolidating commonsense knowledge from the world wide web. ASCENT is capable of extracting facet-enriched assertions, for example, `lawyer; represents; clients; [LOCATION] in courts` or `elephant; uses; its trunk; [PURPOSE] to suck up water`. A web interface of the ASCENT knowledge base for 10,000 popular concepts can be found at https://ascent.mpi-inf.mpg.de/.

ask-alexa-pykit


A minimalist framework for developing apps (skills) for the Amazon Echo's SDK: The Alexa Skills Kit (ASK).

ask-alexa-twitter


ask-alexa-pykit is currently at version 0.3. Latest changes:
- The main change between v0.2 and v0.3 is the removal of the RequestHandler class. I found the design of that class was not very modular, and it didn't lend itself well to easy use, since it had to be subclassed to add significantly new functionality. Instead I divided up the function of the RequestHandler into 3 simple APIs - the Request, the VoiceHandler function, and the ResponseBuilder.
- The Request object contains information about the Alexa request - such as intent, slots, userId etc.
- A VoiceHandler function (specified with an annotation) takes a request as an input, performs some arbitrary logic on top of it, and returns a Response.
- The ResponseBuilder is an encapsulated way to construct responses for a VoiceHandler. A Response can be constructed by calling ResponseBuilder.create_response.
- This way each part of the code has an unambiguous responsibility, hopefully leading to an extremely easy API.
- I had to do a little magic using the inspect module in dialog.py to make it happen; hopefully the code is not too hard to understand.
- Check out voice handlers for the new way to map a VoiceHandler to an intent - the new Handlers are more like AWS Lambda functions. When writing a new skill, you can simply copy this code, generate the intent schema and fill out some custom functions in the voice_handlers.py file.
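A hedged sketch of that three-part design (the internals here are illustrative stand-ins; only the names Request, VoiceHandler and ResponseBuilder.create_response come from the description above):

```python
# Illustrative mock of the v0.3 architecture, not the library's source.
class Request:
    def __init__(self, intent, slots, user_id):
        self.intent, self.slots, self.user_id = intent, slots, user_id

class ResponseBuilder:
    @staticmethod
    def create_response(message):
        # Wraps a message in an Alexa-style response envelope.
        return {"response": {"outputSpeech": {"type": "PlainText", "text": message}}}

def voice_handler(intent):
    # The "annotation" that maps a VoiceHandler function to an intent.
    def register(fn):
        fn.handles_intent = intent
        return fn
    return register

@voice_handler(intent="HelloIntent")
def hello(request):
    return ResponseBuilder.create_response(f"Hello, user {request.user_id}!")

print(hello(Request("HelloIntent", {}, "u42")))
```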

aurum-datadiscovery


Aurum is a work in progress; we expect to release its first open-source version in the 4th quarter of 2018. We are happy to accept contributions from the community. If you are interested in contributing, take a look at [CONTRIBUTING](../CONTRIBUTING.md) and feel free to email raulcf@csail.mit.edu. We also have a code of conduct:

automated-programming-framework


The Automated Programming Framework (APF) is a tool to generate compilations to PDDL such that off-the-shelf classical planners can compute solutions from which we can induce programs or controllers. This is a framework that covers several publications in generalized planning (see [references](#references)), so it includes different compilations in the same code that can be called with configuration files.

automates


This repository holds the source code for the AutoMATES documentation and several component pipelines.

autoplay


Autoplay is a learning environment for creating agents that play text-based games. Supported games include the popular Zork series and other z-machine interpretable files (specifically the .z5 format). These games are provided as part of this repository.

avs-device-sdk


This diagram illustrates the data flows between components that comprise the AVS Device SDK for C++.

awesome-emacs


- [[https://www.emacswiki.org/emacs/UndoTree][undo-tree]] - Visualize the whole undo history in buffer as a tree, and you can access anywhere in it.
- [[https://github.com/nschum/highlight-symbol.el][highlight-symbol]] - Auto/manually highlight the same symbols in code, navigate in them, or replace string.
- [[https://github.com/Fanael/rainbow-delimiters][rainbow-delimiters]] - Highlights parentheses, brackets, and braces according to their depth.
- [[https://github.com/emacsmirror/rainbow-mode][rainbow-mode]] - Colorize color names in buffers.
- [[https://github.com/benma/visual-regexp.el][visual-regexp]] - Replace via RegExp, with real-time visual feedback directly in the buffer.
- [[https://github.com/benma/visual-regexp-steroids.el/][visual-regexp-steroids]] - The same as visual-regexp, but uses modern regular expressions instead of Emacs-style.
- [[https://www.emacswiki.org/emacs/WhiteSpace][whitespace]] - =[built-in]= Visualize blanks (tab/space/newline).
- [[https://github.com/coldnew/linum-relative][linum-relative]] - Display relative line numbers in the left margin in Emacs.
- [[https://emacsredux.com/blog/2014/08/25/a-peek-at-emacs-24-dot-4-prettify-symbols-mode/][prettify-symbol-mode]] - =[built-in]= Display characters as fancy symbols (e.g. =lambda= -> =λ=).
- [[https://github.com/jorgenschaefer/typoel][typo.el]] - Emacs extension for typographical editing.
- [[https://github.com/fgeller/highlight-thing.el][highlight-thing]] - Light-weight minor mode to highlight the thing under point using built-ins.
- [[https://github.com/larstvei/Focus][focus]] - Dim the font color of text in surrounding paragraphs.
- [[https://github.com/hlissner/emacs-solaire-mode][Solaire mode]] - Visually distinguish file-visiting windows from other types of windows (like popups or sidebars) by giving them a slightly different background.
- [[https://github.com/Malabarba/beacon][beacon]] - Never lose your cursor again.
- [[https://github.com/gonewest818/dimmer.el][dimmer.el]] - Interactively highlight which buffer is active by dimming the others.
- [[https://github.com/k-talo/volatile-highlights.el][volatile-highlights.el]] - Minor mode for visual feedback on some operations in Emacs.
- [[https://github.com/ankurdave/color-identifiers-mode][color-identifiers-mode]] - A minor mode for Emacs that highlights each source code identifier uniquely based on its name.
- [[https://github.com/emacsorphanage/yascroll][yascroll-el]] - Yet Another Scroll Bar Mode.
- [[https://github.com/jcs-elpa/goto-line-preview][goto-line-preview]] - Preview line when executing `goto-line` command.
- [[https://github.com/tsdh/highlight-parentheses.el][highlight-parentheses.el]] - Highlight surrounding parentheses.
- [[https://github.com/sulami/literate-calc-mode.el][literate-calc-mode]] - Display live =calc= results inline.
- [[https://gitlab.com/matsievskiysv/math-preview][math-preview]] - Preview TeX equations inline.

awesome-knowledge-graph


* [AllegroGraph](https://franz.com/agraph/allegrograph/) - high-performance, persistent graph database that scales to billions of quads
* [Apache Jena](https://jena.apache.org/) - open source Java framework for building Semantic Web and Linked Data applications
* [Eclipse RDF4J](http://rdf4j.org/) - (formerly known as Sesame) is an open source Java framework for processing RDF data. This includes parsing, storing, inferencing and querying of/over such data. It offers an easy-to-use API that can be connected to all leading RDF storage solutions. It allows you to connect with SPARQL endpoints and create applications that leverage the power of linked data and Semantic Web.
* [GraphDB](http://graphdb.ontotext.com/graphdb/) - enterprise ready Semantic Graph Database, compliant with W3C Standards
* [Virtuoso](https://virtuoso.openlinksw.com/) - a "Data Junction Box" that drives enterprise and individual agility by deriving a Semantic Web of Linked Data from existing data silos
* [Hoply](https://github.com/amirouche/hoply/) - explore bigger than RAM relational data in the comfort of Python.
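To make the "connect with SPARQL endpoints" point concrete, a small Python example (using the SPARQLWrapper package and DBpedia's public endpoint; neither is specific to the stores listed above):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query a public SPARQL endpoint the same way the stores above expose data.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/Knowledge_graph> rdfs:label ?label .
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["label"]["value"])
```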

AwesomeMRC


This repo is our research summary and playground for MRC. More features are coming.

axioms


This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).

Babel2


Babel2 is a general framework for implementing and running your agent-based experiments, both in a simulated environment or embodied in grounded robots. It connects our core technologies such as [Fluid Construction Grammar](www.fcg-net.org) and Incremental Recruitment Language (IRL) with mechanisms for multi-agent interactions, robotic embodiment, cognitive processing and learning. An extensive monitoring system opens up every detail of Babel2’s intermediate representations and underlying dynamics. A modular design ensures that the system can be used in a wide variety of scenarios. It is therefore possible to use each component individually, according to your needs.

baleen


Baleen is an extensible text processing capability that allows entity-related information to be extracted from unstructured and semi-structured data sources. It makes available in a structured format things of interest otherwise stored in formats such as text documents - references to people, organisations, unique identifiers, location information.

Baseline4VTKEL


The visual and textual mentions of a *man* shown in the red text and in the red box refer to the same entity, and they should be linked together. The other visual mentions, i.e. *racket*, *ball* and *logo*, should be linked to different entities. These three entities are not known (i.e., they are not part of the initial knowledgebase **K**), and therefore three new entities of type *racket, ball* and *logo* should be added to the knowledge base, i.e., the **A-box** of **K** should be extended with the assertions *Racket(enew1)*, *Ball(enew2)* and *Logo(enew3)*. The visual and textual mentions of *R.Federer* also refer to the same entity. However, this time the entity is known (i.e., **YAGO** contains an entity for *R.Federer*) and therefore the two mentions should be linked to the same entity. For the other textual mentions, i.e., *Lukas Lacko*, *Wimbledon*, *London*, *2018*, we already have instances in the **knowledgebase**, so we have to link them to these entities. (For details read our papers: coming soon!)

baselines


OpenAI Baselines is a set of high-quality implementations of reinforcement learning algorithms.

bashlex


bashlex is a Python port of the parser used internally by GNU bash.

bashreduce


We have a new bottleneck: we're limited by how quickly we can partition/pump our dataset out to the nodes. awk and sort begin to show their limitations (our clever awk script is a bit cpu bound, and `sort -m` can only merge so many files at once). So we use two little helper programs written in C (yes, I know! it's cheating! if you can think of a better partition/merge using core unix tools, contact me) to partition the data and merge it back.

BayesDB


run_dha_example.py ([github](https://github.com/mit-probabilistic-computing-project/BayesDB/blob/master/examples/dha/run_dha_example.py)) is a basic example of analysis using BayesDB. For a first test, run the following from inside the top level BayesDB dir

bayou


Bayou is a data-driven program synthesis system for Java API idioms that uses the novel technique of Neural Sketch Learning.

bbb-install


The `bbb-install.sh` is a shell script that automates the [install steps](http://docs.bigbluebutton.org/2.0/20install.html#step-by-step-install) for installing BigBlueButton 2.0.

bddem


bddem is a library for manipulating Binary Decision Diagrams in SWI-Prolog (http://www.swi-prolog.org/).

bdi-abm-integration


This software realises a mechanism for integrating Belief-Desire-Intention (BDI) reasoning into agents within an agent-based simulation (ABM). The concept is described in the following papers:

bea2016-spelling-difficulty


This project contains experiments for spelling error prediction. The pre-processing steps for error extraction from learner corpora could also be used for other error types. The experiments are described in detail in the paper "Predicting the Spelling Difficulty of Words for Language Learners". Please use the following citation:

BedSit


BedSit is a **Bed**rock upon which to build your **Sit**uation driven application. It provides objects and categories that work with either [SitCalc](https://github.com/PaulBrownMagic/SitCalc) or [STRIPState](https://github.com/PaulBrownMagic/STRIPState) allowing you to get on with making your application without having to worry about such details.

behaviac


- behaviac is a framework for game AI development, and it can also be used as a rapid game prototype design tool
- behaviac supports the behavior tree, finite state machine and hierarchical task network
- Behaviors can be designed and debugged in the designer, exported and executed by the game
- The designer can only run on Windows platforms; the runtime library is implemented in C++ and C#, and it supports all major platforms (Windows, Linux, Android, iOS, Unity etc.)
- The C++ version is suitable for the client and server side.
- [Website](http://www.behaviac.com/) for documents, tutorials, API, FAQ, source code, downloads, etc.
- BehaviacSetup*.exe is the setup package with the binary editor and demo executable. You can download/clone the source code from [github behaviac](https://github.com/Tencent/behaviac)

behavior3js


This library includes the following core structures...

BehaviorTree.CPP


This __C++14__ library provides a framework to create Behavior Trees. It was designed to be flexible, easy to use, reactive and fast.

berkeley-entity


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

bfg-repo-cleaner


The BFG is a simpler, faster ([10 - 720x](https://docs.google.com/spreadsheet/ccc?key=0AsR1d5Zpes8HdER3VGU1a3dOcmVHMmtzT2dsS2xNenc) faster) alternative to `git-filter-branch` for cleansing bad data out of your Git repository:

BFWS-public


This project is a joint work by Nir Lipovetzky, and Hector Geffner.

bibanon


The **Bibliotheca Anonoma** is a wiki designed to collect, document, and safeguard the products and history of internet culture, which constitutes **the shared experience of humanity on a network that defines our lives**.

bitlbee-discord


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.

bixo


Bixo is an open source Java web mining toolkit that runs as a series of Cascading pipes. It is designed to be used as a tool for creating customized web mining apps. By building a customized Cascading pipe assembly, you can quickly create a workflow using Bixo that fetches web content, parses, analyzes, and publishes the results.

blockly


Google's Blockly is a library that adds a visual code editor to web and mobile apps. The Blockly editor uses interlocking, graphical blocks to represent code concepts like variables, logical expressions, loops, and more. It allows users to apply programming principles without having to worry about syntax or the intimidation of a blinking cursor on the command line. All code is free and open source.

bobtailbot


This is a simple little chatbot written in Clojure, mostly to have fun and learn about Clojure and also chatbots, AI, you name it. It can either talk through the command line or connect to an IRC server. For the moment, with its default brain, it only accepts simple facts described in SVO sentences with proper names, and simple general rules and queries, as depicted in the example interaction below.

BotHack


A ttyrec of one Medusa run is in the repo: https://github.com/krajj7/BotHack/blob/master/ttyrec/wizmode-exploration-dlvl1-28medusa.ttyrec?raw=true

bow


BoxCars


## BoxCars116k dataset

The dataset was created for the paper and it is possible to download it from our [website](https://medusa.fit.vutbr.cz/traffic/data/BoxCars116k.zip). The dataset contains 116k images of vehicles with fine-grained labels taken from surveillance cameras under various viewpoints. See the paper [**BoxCars: Improving Vehicle Fine-Grained Recognition using 3D Bounding Boxes in Traffic Surveillance**](https://doi.org/10.1109/TITS.2018.2799228) for more statistics and information about dataset acquisition. The dataset contains tracked vehicles with the same label and multiple images per track. The track is uniquely identified by its id `vehicle_id`, while each image is uniquely identified by `vehicle_id` and `instance_id`. It is possible to use class `BoxCarsDataset` from `lib/boxcars_dataset.py` for working with the dataset; however, for convenience, we describe the structure of the dataset also here. The dataset contains several files and folders:

* **images** - dataset images and masks
* **atlas.pkl** - *BIG* structure with jpeg encoded images, which can be convenient as the whole structure fits the memory and it is possible to get the images on the fly. To load the atlas (or any other pkl file), you can use function `load_cache` from `lib/utils.py`. To decode the image (in RGB channel order), use the following statement.

```python
atlas = load_cache(path_to_atlas_file)
image = cv2.cvtColor(cv2.imdecode(atlas[vehicle_id][instance_id], 1), cv2.COLOR_BGR2RGB)
```

boycott-api


This is a RESTful API for Node.js (version >=0.10.x) as an attempt to create a crowdsourced database for boycotted venues, corporations, organizations, events, etc.

brat


In an attempt to keep all user-facing documentation in one place, please visit the [brat homepage][brat] which contains extensive documentation and examples of how to use and configure brat. We apologise for only providing minimal documentation along with the installation package but the risk of having out-dated documentation delivered to our end-users is unacceptable.

brawl-public-game-001


## Data Release

This release consists of some data from a BRAWL prototype. We created a small enterprise network, described below. We then ran a single game using the MITRE CALDERA research project as a red bot.

bt-builder


This is prototype code for building a behaviour tree from examples of expert behaviour. This code is explained in the accompanying paper [Building Behavior Trees from Observations in Real-Time Strategy Games](https://www.cs.auckland.ac.nz/research/gameai/publications.php).

bue-common-open


The BBN Speech, Language, and Multimedia Group uses an internal Java library of common utility functions written by many people, `bue-common`. We sometimes make releases of open-source software which depend on parts of this library, requiring that certain classes be open-sourced as well. This repository contains the (small) open-source portion of this library.

Buka


**Buka** is a modern software that helps you manage your ebooks with ease. With a simple, clean and straightforward user interface, **Buka** aims to gather your ebooks for a reading experience without hassles. **Buka** currently supports the .PDF format with configurations that help users focus more on the content.

bundler_sfm


Bundler is a structure-from-motion system for unordered image collections (for instance, images from the Internet). Bundler takes a set of images, image features, and image matches as input, and produces a 3D reconstruction of the camera and (sparse) scene geometry as output. The system, described in [1] and [2], reconstructs the scene incrementally, a few images at a time, using a modified version of the Sparse Bundle Adjustment package of Lourakis and Argyros [3] as the underlying optimization engine.

BYU-Agent-2016


This is the source code for the agent that [won](http://atkrye.github.io/IEEE-CIG-Text-Adventurer-Competition/2016/11/07/Results/) the IEEE CIG 2016 Text-based adventure AI Competition. It has been formatted to work with [autoplay](https://github.com/danielricks/autoplay).

BYU-Analogical-Reasoning-Dataset


A set of analogy tasks of the form A:B::C:D, intended as a benchmark for analogical reasoning and planning. Analogies are augmented with Penn Treebank part-of-speech tags and include both one-to-many and many-to-one relationships. The dataset contains 23,692 analogies in all.
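For context, the standard vector-offset way such A:B::C:D analogies are often scored (illustrative only; the repository provides the benchmark, not this method, and the vectors here are made up):

```python
import numpy as np

# Solve A:B::C:? via the usual embedding offset B - A + C (toy 2-D vectors).
vecs = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}
target = vecs["king"] - vecs["man"] + vecs["woman"]
answer = min(vecs, key=lambda w: np.linalg.norm(vecs[w] - target))
print(answer)  # queen
```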

caevo


This software is released under the Apache License, Version 2.0. See LICENSE in the project root directory for all details. Portions of this software were originally developed at the United States Naval Academy as NavyTime, and then expanded into CAEVO at the 2013 SCALE Workshop at Johns Hopkins University. Software from Steven Bethard's ClearTK system is also included as separate sieves.

caldera


CALDERA is an automated adversary emulation system that performs post-compromise adversarial behavior within Windows Enterprise networks. It generates plans during operation using a [planning system](#planning-system) and a pre-configured adversary model based on the [Adversarial Tactics, Techniques & Common Knowledge](https://attack.mitre.org) (ATT&CK™) project. These features allow CALDERA to dynamically operate over a set of systems using variable behavior, which better represents how human adversaries perform operations than systems that follow prescribed sequences of actions.

candcapi


[C&C tools](http://svn.ask.it.usyd.edu.au/trac/candc "C\&C tools") is a suite of software for linguistic analysis of the English language, including a tokenizer, several taggers and a parser. [Boxer](http://svn.ask.it.usyd.edu.au/trac/candc/wiki/boxer "Boxer") is a tool for deep semantic analysis that takes as input the output of the C&C parser. Together, the C&C tools and Boxer form a pipeline toolchain to perform a complete analysis of English text. Here is an example:

Car-Parking-Planner


This assignment considers the Situation Calculus and Planning. It focuses on:
- Formalizing a planning problem, using Situation Calculus to represent the world.
- Implementing the model and verifying its correctness using a planner based on the Golog syntax.
- Extending the model as well as its implementation in order to deal with additional aspects of the environment.

cartago


A Java-based Framework for Programming Environments in Agent-oriented Applications.

cascade-server


CASCADE is a research project at MITRE which seeks to automate much of the investigative work a “blue team” would perform to determine the scope and maliciousness of suspicious behavior on a network using host data.

cas_access_example


A simple access example for CAS.

cat


This is the repository for the ACL 2020 paper [Embarrassingly Simple Unsupervised Aspect Extraction](https://www.aclweb.org/anthology/2020.acl-main.290/). In this work, we extract aspects from restaurant reviews with attention that uses RBF kernels.

catabotrescue-unity


This is the propositional version of the game.

CatMUD


CatMUD is a MUD server (and MUD game) written in Prolog. It is not designed to be robust, nor widely used, so it's probably not going to stand up to a regular MUD environment.

causeofwhy


This project uses several libraries that either need to be installed or

CCG


This version is able to handle forward and backward application, forward and backward composition and forward type-raising (which is enough to parse sentences written in French)

cel


CEL is a lightweight Description Logic reasoner for large-scale biomedical ontologies. The CEL Plug-in uses the [OWL API](https://owlcs.github.io/owlapi/) and lets CEL be used as a plug-in for [Protege](https://protege.stanford.edu/).

ch.bfh.bti7064.w2013.PrologParser


An incomplete parser for the Prolog programming language

chalk


A [Prolog-ish][Prolog] interpreter written in Rust, intended perhaps for use in the compiler, but also for experimentation.

chess-alpha-zero


This project is based on two main resources: 1) DeepMind's Oct 19th publication: [Mastering the Game of Go without Human Knowledge](https://www.nature.com/articles/nature24270.epdf?author_access_token=VJXbVjaSHxFoctQQ4p2k4tRgN0jAjWel9jnR3ZoTv0PVW4gB86EEpGqTRDtpIz-2rmo8-KG06gqVobU5NSCFeHILHcVFUeMsbvwS-lxjqQGg98faovwjxeTUgZAUMnRQ). 2) The great Reversi development of the DeepMind ideas that @mokemokechicken did in his repo: https://github.com/mokemokechicken/reversi-alpha-zero

Chinese_SF


This is the RPI BLENDER Chinese slot filling system. Definition of slot filling: slot filling aims at collecting from a large-scale multi-source corpus the values (“slot fillers”) for certain attributes (“slot types”) of a query entity, which is a person or some type of organization. [1]

chrome-gnome-shell


This repository contains the Web extension for Google Chrome/Chromium, Vivaldi, Opera (and other WebExtensions-capable browsers) and the native host messaging connector that provides integration with GNOME Shell and the corresponding extensions repository https://extensions.gnome.org/.

chunkedextractor


The chunked extractors project is a collection of three extractors.

cicero


Cicero is an Open Source implementation of the [Accord Project Template Specification][apspec]. It defines the structure of natural language templates, bound to a data model, that can be executed using request/response JSON messages.

CiteSeerX


This is the source code for the [CiteSeerX academic digital library.](http://citeseerx.ist.psu.edu)

cl-gambol


The GAMBOL package is a trivially modified extraction of the logic programming portion of the Frolic system written at the University of Utah. I have made a few changes to get it to compile under a modern Common Lisp, in addition to a few style changes that don't alter any functionality.

cl-ggp


`cl-ggp` is a tiny framework for writing [general game players][GGP] in Common Lisp.

cl-prolog2


This is a realization of Marc Kuo's ["modelling approach to OR (operations research)"](https://kuomarc.wordpress.com/2012/03/05/the-uncommon-lisp-approach-to-operations-research/) for the Prolog language.

clai


Command Line Artificial Intelligence `CLAI` is an open-source project that aims to bring the power of AI to the command line. Using CLAI, users of Bash can access a wide range of skills that will enhance their command line experience. This repository contains the source code and documentation to get you started.

clark


This is the schedule:

```
_Event_0x7f3e9007a690: 0.00 s
global_start_event: 0.00 s
_Event_0x7f3e8f7dcd10: 5.00 s
_Event_0x7f3e8f7fc750: 15.16 s
_Event_0x7f3e8f797cd0: 20.16 s
_Event_0x7f3e8f7c33d0: 25.16 s
_Event_0x7f3e8f7d0a90: 30.16 s
_Event_0x7f3e8f77c410: 35.16 s
_Event_0x7f3e8f7471d0: 47.16 s
_Event_0x7f3e8f747e50: 57.16 s
_Event_0x7f3e8f6d4590: 69.16 s
_Event_0x7f3e8f704450: 79.16 s
_Event_0x7f3e8f695190: 79.16 s
global_end_event: 79.16 s
```

classical-domains


This repository is a simple collection of PDDL files. Currently only classical problems are included, but more are expected to be added in the future.

Clex


[clex_lexicon.pl](clex_lexicon.pl) is a large English lexicon derived from COMLEX. It conforms to the [ACE Lexicon Specification](http://attempto.ifi.uzh.ch/site/docs/ace_lexicon.html) and can be used as a drop-in replacement for the (small) lexicon file included in the [APE source distribution](https://github.com/Attempto/APE).

ClioPatria


ClioPatria is an extension of the SWI-Prolog RDF infrastructure (`semweb' package) that provides you with a ready-to-run web-server that can be extended into a full-fledged Semantic Web application. The semweb package provides reading and writing RDF (XML and Turtle), storage and querying by means of rdf(Subject, Predicate, Object). ClioPatria adds the following:

cloud-solver


This project is the basis for [solver.planning.domains](http://solver.planning.domains/) -- a web service that provides access to an automated planner. Please report any bugs or feature requests you may have on the [issue list](https://bitbucket.org/planning-researchers/cloud-solver/issues) for the project.

cloudforfree.org


This is the official website for CloudForFree http://cloudforfree.org

cltools


A collection of tools for manipulating Common Logic texts. See http://www.common-logic.org

cluewebextractor


`--output-dir` is an optional switch that specifies an output directory for the extracted content. If not used, cluewebextractor will either use no directory (if the input is a single file) or use the name of the input directory as the output directory (if the input is a directory).

clyc


This native Common Lisp version will be refactored, documented, and modernized yielding a much smaller and easier to modify system. It should also run inferences faster than the layered and semi-interpreted Java version, which emulates a Lisp-like environment (SubL/CycL).

clyc-old


This native Common Lisp version will be refactored, documented, and modernized yielding a much smaller and easier to modify system. It should also run inferences faster than the layered and semi-interpreted Java version, which emulates a Lisp-like environment (SubL/CycL).

coauthor


**Coauthor** is a tool for group collaboration, discussion, keeping track of notes/results of meetings, etc., in particular to enable **[supercollaboration](http://erikdemaine.org/supercollaboration/)**. Coauthor's primary goal is to ease multiauthor collaboration on unsolved problems in theoretical computer science, so e.g. you'll find LaTeX math support, but it has proved useful in other fields too.

codmap-2015


The repository contains the PDDL<->MA-PDDL conversion scripts and competition running scripts.

Colin2-TRH


This package contains COLIN-TRH, a planner for domains with time windows. For more details, see the papers:

coling-peoples2016-opinion-prediction


This project contains experimental code for classifying opinion and persuasiveness from speech using a vanilla long short-term memory (LSTM) recurrent neural network implementation from Keras.

colis-language


The oracle file is a Yaml-serialised file of the following format:

colore


Many tasks require correct and meaningful communication and integration among intelligent agents and information resources. A major barrier to such interoperability is semantic heterogeneity: different applications, databases, and agents may ascribe disparate meanings to the same terms or use distinct terms to convey the same meaning. Even when software applications use the same terminology, they often associate different semantics with the terms. This clash over the meaning of the terms prevents the seamless exchange of information among the applications. The development and application of ontologies play a central role in achieving semantic integration. An ontology is a computer-interpretable specification that is used by an agent, application, or other information resource to declare what terms it uses, and what the terms mean. Ontologies support the semantic integration of software systems through a shared understanding of the terminology in their respective ontologies.

compass


Compass is written by [Chris Eppstein](http://chriseppstein.github.io/). Chris is a software engineer at [LinkedIn](http://www.linkedin.com/) and a member of the [Sass](https://github.com/nex3/sass) core team.

CompCert


The CompCert C verified compiler is a compiler for a large subset of the C programming language that generates code for the PowerPC, ARM, x86 and RISC-V processors.

Computational-Journalism-Publishers-Workbench


### Latest release: 2.9.2, 2013-07-30 - cutesy release code name "Practice! Practice! Practice!"

### [Quick Start](https://github.com/znmeb/Computational-Journalism-Publishers-Workbench/wiki/Quick-Start)
### [What's New?](https://github.com/znmeb/Computational-Journalism-Publishers-Workbench/wiki/What%27s-New)
### [Road Map](https://github.com/znmeb/Computational-Journalism-Publishers-Workbench/wiki/Road-Map)

### Questions? Problems? Just want to talk about computational journalism?

* [Follow @znmeb on Twitter](https://twitter.com/znmeb)
* [File an issue on Github](https://github.com/znmeb/Computational-Journalism-Publishers-Workbench/issues/new)
* [Frontiers of Journalism on Scoop.it](http://www.scoop.it/t/computational-and-data-journalism)
* [R for Journalists on Scoop.it](http://www.scoop.it/t/r-for-journalists)

ComSem


The repository contains scripts and data used in the [Computational Semantics](https://www.rug.nl/ocasys/rug/vak/show?code=LIX021M05) course at the University of Groningen.

conceptGraph


Answer Graph Criteria to check for:
1. w is a well-formed CG
2. w is true if the database is correct
3. The entire query graph q is covered by a join from w
4. For every concept in q that has a value, the corresponding concept in w has the same value.
5. For every concept in q that had a question mark, the corresponding concept in w has a value.

conceptnet5


This Python package contains a toolset for loading new datasets into ConceptNet 5, and it serves the HTML and JSON Web APIs for it. You don't need it to simply access ConceptNet 5; see http://conceptnet5.media.mit.edu for more information.

concerto


Concerto is a lightweight 100% JavaScript schema language and runtime. It works in both a Node.js process and in your browser. The browserified version of Concerto is ±280KB. We are working on making it even smaller.

contingent-plan-executor


This repository contains the logic of the dialog planner. It is deployed as a Bluemix Python application with a NoSQL database that is supposed to store solutions generated by the planner.

Contribute-To-This-Project


This is a tutorial to help first-time contributors to participate in a simple and easy project.

copernic


copernic is a web application that is (mostly) implemented in the Python programming language. It is backed by a versioned triple store. It is possible to do time-traveling queries at any point in history while remaining efficient to query and modify the latest version. The versioned triple store is implemented using a novel approach dubbed the generic tuple store. copernic's goal is to demonstrate that versioned databases allow workflows that ease cooperation.

copycat


An implementation of [Douglas Hofstadter](http://prelectur.stanford.edu/lecturers/hofstadter/)'s Copycat algorithm. The Copycat algorithm is explained [on Wikipedia](https://en.wikipedia.org/wiki/Copycat_%28software%29), and that page has many links for deeper reading. See also [Farglexandria](https://github.com/Alex-Linhares/Farglexandria).

coq


Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.

Cosmos


Cosmos is an open source semantic search engine that focuses on the retrieval of information from PDF documents. While created with the intention of automating the process of scientific discovery and analysis, the components can be applied generally to stacks of documents.

CotD


City of the Damned is a simple fast-paced coffee-break roguelike inspired by a 7DRL entry "City of the Condemned" by Tapio (http://www.roguebasin.com/index.php?title=City_of_the_Condemned).

CountryInfo-1


This is the GitHub repository for the CountryInfo.txt and related utility programs. CountryInfo.txt is a general purpose file intended to facilitate natural language processing of news reports and political texts. It was originally developed to identify states for the text filtering system used in the development of the Correlates of War project dataset MID4, then extended to incorporate CIA World Factbook and WordNet information for the development of TABARI dictionaries. The file contains about 32,000 lines with country names, synonyms and other alternative forms, major city and region names, and national leaders. It covers about 240 countries and administrative units (e.g. American Samoa, Christmas Island, Hong Kong, Greenland). It is internally documented and almost but not quite XML.

cpan-api


A Web Service for the CPAN

CPArec


CPArec is a tool for verifying recursive C programs via source-to-source program transformation. It uses the recursion-free program analyzer CPAChecker as a black box and computes function summaries from the inductive invariants generated by CPAChecker. Such function summaries enable CPArec to check recursive programs.

cpm


Description: This program is an ncurses-based console tool to manage passwords and store them public-key encrypted in a file - even for more than one person. The encryption is handled via GnuPG so the program's data can be accessed via gpg as well, in case you want to have a look inside. The data is stored as zlib-compressed XML so it's even possible to reuse the data for some other purpose.
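Based on that description, one might read the data outside cpm roughly like this (a sketch under the stated assumptions; the filename is hypothetical and a matching GnuPG key must be available):

```python
import subprocess
import zlib

# Decrypt with gpg, then inflate the zlib-compressed XML payload.
decrypted = subprocess.run(
    ["gpg", "--decrypt", "passwords.cpm"],  # hypothetical file name
    capture_output=True, check=True,
).stdout
xml_text = zlib.decompress(decrypted)
print(xml_text[:200])
```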

Crepe


This repository contains code in Torch 7 for text classification from character-level using convolutional networks. It can be used to reproduce the results in the following article:

CRFAE-Dep-Parser


This repository contains the code to reproduce the experiment result of the paper [CRF autoencoder for unsupervised dependency parsing](http://sist.shanghaitech.edu.cn/faculty/tukw/emnlp17CJT.pdf) on WSJ data set and PASCAL dataset.

CROMER


CROMER (CROss-document Main Events and entities Recognition) is a novel web-based tool to manually annotate event and entity coreference across clusters of documents. The tool has been developed so as to handle large collections of documents, perform collaborative annotation (several annotators can work on the same clusters), and enable the linking of the annotated data to external knowledge sources. Given the availability of semantic information encoded in Semantic Web resources, this tool is designed to support annotators in linking entities and events to DBPedia and Wikipedia, so as to facilitate the automatic retrieval of additional semantic information. In this way, event modelling and chaining is made easy, while guaranteeing the highest interconnection with external resources.

CRYENGINE


In order to compile, you will need to download the SDKs for the particular release you are trying to build. They can be found [here](https://github.com/CRYTEK-CRYENGINE/CRYENGINE/releases).

cryptogram


This is a small program to help you solve cryptograms.

crystal


Crystal is a natural language question answering program. It converts natural text into a semantic representation based on Discourse Representation Theory and performs inferences on the result. Its features include anaphora and presupposition resolution, semantic reasoning through the use of WordNet and VerbNet databases and logical inference. The application currently covers only a small subset of English, but it is sufficiently interesting to mess around with.

CSK


QUASIMODO is a system to extract commonsense knowledge from query logs and QA forums.

csplib


Each problem is stored in the `Problems` directory. The best way to get a feeling for how a problem is stored is to look at an existing problem (Problems/prob001 is a good start).

CTCDecoder


The RNN output matrix of the **Mini example** testcase contains 2 time-steps (t0 and t1) and 3 labels (a, b and - representing the CTC-blank). Best path decoding (see left figure) takes the most probable label per time-step, which gives the path "--" and therefore the recognized text "" with probability 0.6\*0.6=0.36. Beam search, prefix search and token passing calculate the probability of labelings. For the labeling "a" these algorithms sum over the paths "-a", "a-" and "aa" (see right figure) with probability 0.6\*0.4+0.4\*0.6+0.4\*0.4=0.64. The only path which gives "" still has probability 0.36; since 0.64 > 0.36, "a" is the result returned by beam search, prefix search and token passing.
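
The arithmetic is small enough to check by brute force. Here is a minimal sketch (independent of this repository's code, which implements the decoders themselves) that enumerates every path of the Mini example and compares best path decoding with the summed labeling probabilities:

```python
from collections import defaultdict
from itertools import product

# Mini example: 2 time-steps with P(a)=0.4, P(b)=0.0, P(-)=0.6 at each step.
probs = [{"a": 0.4, "b": 0.0, "-": 0.6},  # t0
         {"a": 0.4, "b": 0.0, "-": 0.6}]  # t1

def collapse(path):
    """CTC collapse: merge repeated labels, then drop blanks."""
    out, prev = [], None
    for label in path:
        if label != prev and label != "-":
            out.append(label)
        prev = label
    return "".join(out)

# Best path decoding: take the most probable label per time-step.
best = [max(p, key=p.get) for p in probs]          # ["-", "-"]
best_prob = 1.0
for p, label in zip(probs, best):
    best_prob *= p[label]
print(repr(collapse(best)), best_prob)             # '' 0.36

# Probability of a labeling = sum over all paths that collapse to it.
labelings = defaultdict(float)
for path in product(*(p.keys() for p in probs)):
    pr = 1.0
    for p, label in zip(probs, path):
        pr *= p[label]
    labelings[collapse(path)] += pr
print(labelings["a"])                              # 0.24 + 0.24 + 0.16 = 0.64
```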

cuad


This repository contains code for the [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad), a dataset for legal contract review curated by the Atticus Project. It is part of the associated paper [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268) by Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball.

CVC4


CVC4 is a tool for determining the satisfiability of a first order formula modulo a first order theory (or a combination of such theories). It is the fourth in the Cooperating Validity Checker family of tools (CVC, CVC Lite, CVC3) but does not directly incorporate code from any previous version.

cycic-transformers


This repository demonstrates how to train and test on the CycIC dataset using the popular transformers library from huggingface. The original example scripts can be found at [transformers/examples/multiple-choice/](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice). Here, they have been extended with an additional data processing class for the CycIC task.

d20-rpg


A cross-platform C++ game based on the [D20 System](http://en.wikipedia.org/wiki/D20_System) from Dungeons and Dragons.

DALI


DALI is a meta interpreter built on top of Sicstus Prolog (R) (at the moment).

dantalian


Dantalian is a Python 3 library to assist file organization and tagging using hard links.

darknet


Darknet is an open source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation.

dart


This is the top-level repository for the DART project. Check out the [project webpage](http://cps-sei.github.io/dart) and [wiki](https://github.com/cps-sei/dart/wiki) for more details.

dataid


The DBpedia DataID Unit is a DBpedia group with the goal of describing LOD datasets via RDF files, to host and deliver these metadata files together with the dataset in a uniform way, create and validate such files and deploy the results for DBpedia and its local chapters. Established vocabularies like [DCAT](http://www.w3.org/TR/vocab-dcat/), [VoID](http://vocab.deri.ie/void), [Prov-O](http://www.w3.org/TR/prov-o/) and [SPARQL Service Description](http://www.w3.org/TR/sparql11-service-description/) are to be reused for maximum compatibility. This way, we hope to establish a uniform and accepted way to describe and deliver dataset metadata for arbitrary LOD datasets and to put existing standards into practice.

DataId-Ontology


The DBpedia DataID core vocabulary is a meta-data system for detailed descriptions of datasets and their different manifestations. Established vocabularies like DCAT, VoID, Prov-O and FOAF are reused for maximum compatibility to establish a uniform and accepted way to describe and deliver dataset metadata for arbitrary datasets and to put existing standards into practice. In addition, DataID can describe the relations of Agents (like persons or organizations) to datasets with regard to their rights and responsibilities.

datalogsolve


DATALOG_SOLVE is a new static analyzer which implements a powerful, fully automatable method to evaluate Datalog queries by using Boolean Equation Systems (BESs).

dataverse


Dataverse is an open source web application for sharing, citing, analyzing, and preserving research data (developed by the [Data Science and Products team](http://www.iq.harvard.edu/people/people/data-science-products) at the [Institute for Quantitative Social Science](http://iq.harvard.edu/) and the Dataverse community).

daydreamer


DAYDREAMER is a trademark of Erik T. Mueller.

dbpedia-spotlight


All the original code produced for DBpedia Spotlight is licensed under [Apache License, 2.0](http://www.apache.org/licenses/LICENSE-2.0.html). Some modules have dependencies on [LingPipe](http://alias-i.com/lingpipe/) under the [Royalty Free License](http://alias-i.com/lingpipe/licenses/lingpipe-license-1.txt). Some of our original code (currently) depends on GPL-licensed or LGPL-licensed code and is therefore also GPL or LGPL, respectively. We are currently cleaning up the dependencies to release two builds, one purely GPL and one purely Apache License, 2.0.

dcg_util


This module is a collection of predicates and combinators for working with Prolog's definite clause grammars (DCG). As much as possible, I've tried to make these rules symmetric so that you can use them for both parsing and generating.

debmake-doc


This takes a long time and isn't debug-friendly.

Deep-NLP-Resources


Dictionary
- Bilingual Dictionary
  - [CC-CEDICT](https://cc-cedict.org/wiki/start) A bilingual dictionary between English and Chinese.
- Pronouncing Dictionary
  - [CMUdict](http://www.speech.cs.cmu.edu/cgi-bin/cmudict) The Carnegie Mellon University Pronouncing Dictionary is an open-source machine-readable pronunciation dictionary for North American English that contains over 134,000 words and their pronunciations.

deepcoder


```
100%|████████████████████| 100/100 [00:00<00:00, 147.26it/s]
summary: solved 53/100 (53.0%)
           nb_steps     wall_ms
count    100.000000  100.000000
mean     628.420000   50.401287
std      412.020181   32.888645
min        1.000000    0.053883
25%      175.500000   15.448511
50%      830.000000   70.028543
75%     1000.000000   77.140987
max     1002.000000  102.509022
```
(gas is a limit on the number of nodes explored per problem)

DeepEnglishV01


  • Java, with Spring as the framework
  • Machine learning. I use it to determine whether the questions are good enough to examine your understanding in reading the article you submitted
  • Distractor generator algorithm. I use it to generate 4 (four) options as the possible answers. They can be tricky and I think it's good to check whether you really understand the main concept of the article
  • Content extractor. I use it to extract only the important and suitable parts of an article that comes from the URL you submitted
  • Text summarizer. It is a part of Classifier4J, a Java library for text classification. I use it to create a summary of your article
  • Web crawler (spider). I use it to find all pages in a website that contain your requested keyword

deeplearning4nlp-tutorial


This Git repository accompanies the UKP lectures and seminars on Deep Learning for Natural Language Processing. In contrast to other tutorials, this tutorial focuses on the usage of deep learning methods.

DeepMind-Atari-Deep-Q-Learner


This project contains the source code of DQN 3.0, a Lua-based deep reinforcement learning architecture, necessary to reproduce the experiments described in the paper "Human-level control through deep reinforcement learning", Nature 518, 529–533 (26 February 2015) doi:10.1038/nature14236.

deeptype


This repository contains code necessary for designing, evolving, and training neural type systems. To read more about this technique and our results [see this blog post](https://blog.openai.com/discovering-types-for-entity-disambiguation/) or [read the paper](https://arxiv.org/abs/1802.01021).

DeFacto


A Fact-Validation framework :x: :white_check_mark:

DefMiner


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

deft2013


This software (and data set) is intended for students as a way to experiment with machine learning using Weka, and for researchers as a reproducible experiment of DEFT 2013. THIS IS NOT SOMETHING EASY TO USE AND DEPLOY. You need machine learning, Java and Weka skills to use this package. We can provide a little help ... but not too much :-)

DeftEval


This work was developed as a final project for the AI Course Fall 2019/2020 offering at the AlexU Faculty of Engineering. It is our official contribution for [Deft Eval Competition Subtask 1](https://competitions.codalab.org/competitions/22759), running on its official [dataset](https://github.com/adobe-research/deft_corpus). It was an amazing experience and a great opportunity to learn and explore the NLP world! We would like to thank the organizers of the competition for their great work and their willingness to help through the forum.

DELiC4MT


DELiC4MT is a piece of software for performing diagnostic evaluation of Machine Translation systems over linguistic checkpoints, i.e. source-language lexical elements and grammatical constructions specified by the user. For more details see our paper in the Credits section.

dendrite


This was inspired by the opencyc bot that @aindalis and I have set up in #logicmoo on freenode. There is an interesting synergy in the Zulip group-chat UX that I think could play well with a knowledge-base-REPL type gizmo.

depdep


Depdep is a merciless sentinel which will seek sensitive files containing critical info leaking through your network. Basically, it is a fast and practical sensitive-data search tool maintaining personal & commercial data privacy for companies and institutions. It can very well be used by auditors making sure that their network doesn't leak any unauthorized non-compliant data through Windows & Unix/Linux shares. The usage is easy and configurable; however, certain technical knowledge is necessary, such as using a Linux console and the ability to write and understand basic regular expressions, though the configuration file comes with several sensitive-information patterns.
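
As a rough illustration of the kind of regex-driven scan described above (the patterns, mount point and output format below are invented for the example; the real tool drives everything from its configuration file):

```python
import os
import re

# Toy patterns; depdep ships its own, more careful ones in its config file.
PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "password assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan(root):
    """Walk a mounted share and flag files matching any sensitive pattern."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    print(f"{path}: possible {label}")

scan("/mnt/audited-share")  # hypothetical mount point of a network share
```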

dependency-graph-similarity-measure


This package provides a framework for calculating similarity between a pair of dependency parses according to *path overlap*. A very simple example can be run using SolverExample.scala.

deptreeviz


We at [NatS](https://nats-www.informatik.uni-hamburg.de) have a long history of visualizing dependency trees. This library is a spin-off from our dependency parser [jwcdg](https://gitlab.com/nats/jwcdg), which comes with its own editing and visualization tools.

derplanner


A fact database is a collection of typed tuples representing domain knowledge about the world.

detoxify


A complete list of all the identity labels available can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).

dialog_games


This repository contains implementations of dialog games for abstract argumentation frameworks and for two extensions that I developed during my PhD, namely *abductive* argumentation frameworks and *property-based* argumentation frameworks.

Dictionaries


There are five separate input dictionaries or lists that PETRARCH makes use of: the verb dictionary, the actor dictionary, the agent dictionary, the discard list, and the issues list. The following sections describe these files in greater detail. In addition to this documentation, which is intended for individuals planning to work on dictionaries, the source code contains internal documentation on how the dictionary information is stored by the program.

dig-etl-engine


myDIG is a tool to build pipelines that crawl the web, extract information, build a knowledge graph (KG) from the extractions and provide an easy-to-use interface to query the KG. The project web page is [DIG](http://usc-isi-i2.github.io/dig/).

disambiguate


This repository contains a set of easy-to-use tools for training, evaluating and using neural WSD models.

discourse-parsing


This repository contains code for a shift-reduce discourse parser based on rhetorical structure theory. A detailed system description can be found at http://arxiv.org/abs/1505.02425.

discriminative-ibr


The src/ibr/ directory contains the discriminative IBR code. Run src/ibr/discrim_ibr.py for usage instructions.

disease_owl


Ontology has attracted much attention from both academia and industry. Handling uncertainty reasoning is important in research on ontology. For example, when a patient is suffering from cirrhosis, the appearance of abdominal vein varices is four times more likely than the presence of bitter taste. Such medical knowledge is crucial for decision-making in various medical applications but is missing from existing medical ontologies. In this paper, we aim to discover medical knowledge probabilities from electronic medical record (EMR) texts to enrich ontologies. We first build an ontology by discovering meaningful entity mentions from EMRs. Then, we propose a symptom dependency-aware naïve Bayes classifier that is built on the assumption that there is a particular level of dependency among symptoms. To ensure the accuracy of diagnostic classification, we add the value of the probability of a disease to the ontology in innovative ways.

Results: We conduct a series of experiments to demonstrate that the proposed method can discover meaningful and accurate probabilities for medical knowledge. Based on over 30,000 deidentified medical records, we explore 336 abdominal diseases and 81 related symptoms. Among these 336 gastrointestinal diseases, the probabilities of 31 diseases are obtained through our method. These 31 probabilities of disease and 189 conditional probabilities between diseases and symptoms are added to the generated ontology.

Conclusion: In this paper, we propose a medical knowledge probability discovery method based on the analysis and extraction of EMR text data to enrich a medical ontology with probability information. The experimental results show that the proposed method can effectively discover accurate medical knowledge probability information from EMR data. Further, the proposed method can efficiently and accurately calculate the probability of a patient suffering from a specific disease, revealing the advantage of the combination of ontology and the symptom dependency-aware naïve Bayes classifier.

dist-selfish-fd


This specific problem has 4 agents called "rover[0-3]", so open the agents file and insert the following:

divisi2


This project is no longer maintained.

dkpro-argumentation


The class hierarchy contains two central classes, ``ArgumentComponent`` and ``ArgumentRelation``.

dkpro-uby


DKPro Uby is a Java framework for creating and accessing sense-linked lexical resources in accordance with the UBY-LMF lexicon model, an instantiation of the ISO standard Lexicon Markup Framework (LMF). The software library includes the following modules:

dl-setup


* Go to the [Nvidia website](http://www.geforce.com/drivers) and find the latest drivers for your graphics card and system setup. You can download the driver from the website and install it, but doing so makes updating to newer drivers and uninstalling it a little messy. Also, doing this will require you to quit your X server session and install from a Terminal session, which is a hassle.
* We will install the drivers using apt-get. Check if your latest driver exists in the ["Proprietary GPU Drivers" PPA](https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa). Note that the latest drivers are not necessarily the most stable; it is advisable to install the driver version recommended on that page. Add the "Proprietary GPU Drivers" PPA repository. At the time of this writing, the latest version is 361.42, however, the recommended version is 352:

dl4ir-webnav


WebNav is a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site consisting of web pages and hyperlinks to find a web page in which a query appears.

dmoz-urlclassifier


DMOZ is the largest, most comprehensive human-edited directory of the Web. It was historically known as the Open Directory Project (ODP). It contains a categorized list of Web URLs. Their listings are updated on a monthly basis and published in [RDF files](http://rdf.dmoz.org/rdf/).

Domains-General-Game-Playing


This directory contains a Python program that translates game definitions from Game Description Language (GDL; usually stored in files with a .kif extension) into self-contained Soar agents that simulate the mechanics of the game in working memory and productions. See

Domains-Planning-Domain-Definition-Language


This directory contains a Java program that can translate a domain specification written in the Planning Domain Definition Language (PDDL) 1.2 into a Python SML environment and a set of Soar rules that propose legal operators. The program was generated by ANTLR v3.1.3 from a PDDL grammar written by Zeyn Saigol at the University of Birmingham.

DOOM-3-BFG


This file contains the following sections:

dossier


An open database of companies, focused on determining subsidiary and branch relationships.

dpb


The book is available in German now. It is written in NoWeb and contains

DPLP


1. Run the Stanford CoreNLP with the given bash script **corenlp.sh** with the command "*./corenlp.sh path_to_dplp/data*" - This is a little awkward, as I am not sure how to call the Stanford parser from any other directory.

dprolog


An extension of Prolog that allows rules to be labelled with a belief (a real number between 0 and 1 inclusive) and given a label, so that proofs can be generated with a belief attached to them and rules can be argued about.
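
The description leaves the belief calculus itself unspecified; purely as an illustration, here is one toy reading in which a proof's belief is the product of the beliefs of the rules it uses (an assumption for the example, not dprolog's documented semantics):

```python
# Toy model: each labelled rule carries a belief in [0, 1], and a proof's
# belief is assumed (for illustration only) to be the product of the
# beliefs of the rules it uses.
rules = {
    "r1": ("flies(X) :- bird(X)", 0.9),
    "r2": ("bird(tweety)", 1.0),
}

proof = ["r2", "r1"]  # derive flies(tweety) from bird(tweety) via r1
belief = 1.0
for label in proof:
    belief *= rules[label][1]
print(belief)  # 0.9, the belief attached to this proof of flies(tweety)
```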

dragnet


This project was originally inspired by Kohlschütter et al, [Boilerplate Detection using Shallow Text Features](http://www.l3s.de/~kohlschuetter/publications/wsdm187-kohlschuetter.pdf) and Weninger et al [CETR -- Content Extraction with Tag Ratios](http://web.engr.illinois.edu/~weninge1/cetr/), and more recently by [Readability](https://github.com/buriy/python-readability).

Dshell


An extensible network forensic analysis framework. Enables rapid development of plugins to support the dissection of network packet captures.

dunyazad


A story generation system (with choices(!)).

duprkit


**Everything on the master branch is broken due to the ongoing redesign. And unluckily the latest release is outdated. Please look forward to the next major release.**

dwarf-fortress


Dwarf Fortress is a single-player fantasy game. You can control a dwarven outpost or an adventurer in a randomly generated, persistent world.

eager-beaver


This bundle contains the source code for a general game player (for more information on general game playing see http://www.ggp.org) written in Java. The player is based on a framework written by Sam Schreiber (http://code.google.com/p/ggp-base/). Build files are provided for use with Apache Ant.

EASDRL


POS data:
1. ``{domain}_dependency.pkl`` contains the part-of-speech data for the action name extractor
2. ``{domain}_arg_pos.pkl`` contains the part-of-speech data for the action argument extractor

EasySRL


A pretrained model is available [here](https://drive.google.com/file/d/0B7AY6PGZ8lc-R1E3aTA5WG54bWM/view?usp=sharing).

ec


DreamCoder is a wake-sleep algorithm that finds programs to solve a given set of tasks in a particular domain.

eCause


A Web-Mining Causal Relation Butler

ecipedia-usc


This repository contains a partial mapping of Jerry Hobbs and Andrew Gordon's [background theory axioms](https://isi.edu/~hobbs/csk.html), and additional spatial axioms, all developed at USC, for inclusion on the CwC program's [ECIpedia](https://ecipedia.sift.net/eci-web).

edb-debugger


edb is a cross platform x86/x86-64 debugger. It was inspired by [Ollydbg](http://www.ollydbg.de/ "Ollydbg"), but aims to function on x86 and x86-64 as well as multiple OS's. Linux is the only officially supported platform at the moment, but FreeBSD, OpenBSD, OSX and Windows ports are underway with varying degrees of functionality.

edits


This software is a new tool based on the Edit Distance Textual Entailment Suite - EDITS. The original version of EDITS can still be found on the SourceForge svn (http://sourceforge.net/p/edits/code/HEAD/tree/). Version 2.1 of EDITS is integrated in the system developed by the Excitement project (http://www.excitement-project.eu/).

eis


The basic concept of an agent used in EIS is that of an agent that performs actions in the environment and receives percepts from its environment. This is a [standard and generic definition of an agent](http://en.wikipedia.org/wiki/Intelligent_agent) as used in Artificial Intelligence.

eisbot


EISBot is a [StarCraft: Brood War](http://us.blizzard.com/en-us/games/sc/) bot developed by Ben Weber at [UC Santa Cruz](http://games.soe.ucsc.edu/) as part of his dissertation research. The main objective for the project is to identify the capabilities necessary for expert Starcraft gameplay and to realize these capabilities in a game-playing agent.

elasticsearch


Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:

eldiablo


On the technical side of things, EL:DIABLO provides the information and scripts necessary to set up a [virtual machine](https://en.wikipedia.org/wiki/Virtual_machine) on a user's computer. For those not familiar, this can be thought of as a computer within a computer. EL:DIABLO relies on [Vagrant](https://www.vagrantup.com/), and by extension [VirtualBox](https://www.virtualbox.org/), to set up this virtual environment. These two pieces of software allow for the easy setup and use of a virtual machine. Thus, two of the files contained within EL:DIABLO are a `Vagrantfile`, which gives instructions to Vagrant on how to setup the virtual machine, and `bootstrap.sh`, which is a [shell script](https://en.wikipedia.org/wiki/Shell_script) that installs the necessary software within the virtual machine.

ELF


ELF is an Extensive, Lightweight, and Flexible platform for game research. We have used it to build our Go playing bot, ELF OpenGo, which achieved a 14-0 record versus four global top-30 players in April 2018; the final score was 20-0 (each professional Go player played 5 games).

elk-reasoner


ELK is an ontology reasoner that aims to support the OWL 2 EL profile. See http://elk.semanticweb.org/ for further information.

elle


elle (codename lulu) is a simple program that manages and helps clean your computer (currently it only supports Windows). The program is in its infancy and is in no way complete. If you'd like to try the current program, do the following:

Elsa


Elsa is a tool that analyses your code without loading or running it. It can track types and provide helpful hints when things don't match up before you even try to run the code.

emacs


This directory tree holds version 27.0.50 of GNU Emacs, the extensible, customizable, self-documenting real-time display editor.

emacs-bash-completion


A simpler and more complete alternative to bash-completion.el is to run a bash shell in a buffer in term mode (M-x `ansi-term'). Unfortunately, many Emacs editing features are not available when running in term mode. Also, term mode is not available in shell-command prompts.

emacs-chess


chess.el is an Emacs Lisp library and several clients on top of the underlying library functionality for performing various activities related to the game of chess.

emacs-ffi


This is an FFI for Emacs. It is based on libffi and relies on the dynamic modules work (available on the Emacs 25 branch) in order to be loaded into Emacs. It is relatively full-featured, but for the time being low-level.

emacs-gargoyle


Gargoyle is an Emacs module

emacs-glulx


This is an implementation of the Glulx virtual machine in Emacs Lisp. Since all input and output from Glulx is via the GLK library there is also an Emacs Lisp implementation of the GLK specification.

emacs-mark-tools


A simple library for navigating the global and local mark rings in Emacs. Simply execute M-x list-marks for a navigable list of the global-mark-list. The prefix argument can be used to limit the list to the buffer's local mark list.

emacs-refactor


Emacs Refactor (EMR) is a framework for providing language-specific refactoring in Emacs. It includes refactoring commands for a variety of languages, including elisp itself!

emacs-shroud


Shroud is a password manager written in Guile which uses GnuPG in the backend. See Shroud's website at [this link](https://dthompson.us/projects/shroud.html). This package is an Emacs interface to Shroud using the Buffers User Interface library.

emacs-yamlmod


**YamlMod** is an Emacs module to parse YAML, written in Rust.

emnlp15-dim4auc


This repository contains the code used to perform the classification experiments described in section 4.2 of our EMNLP15 paper. Please use the following citation:

emnlp2015-crowdsourcing


This project runs experiments comparing the benefit of soft labeling and filtering with label aggregation for learning a classification model on natural language tasks. This project is the experiment code described in the paper, "Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks" (Jamison and Gurevych, 2015).

emnlp2015-ih-ig


>This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

emnlp2016-empirical-convincingness


> **Abstract:** This article tackles a new challenging task in computational argumentation. Given a pair of two arguments to a certain controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other one. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe convincingness of arguments in the given argument pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, which was cleaned and curated by employing several strict quality measures. We propose two tasks on this data set, namely (1) predicting the full label distribution and (2) classifying types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and Bidirectional LSTM neural networks with convolution and attention mechanism reveal that such a novel fine-grained analysis of Web argument convincingness is a very challenging task. We release the new UKPConvArg2 corpus and software under permissive licenses to the research community.

Empire


Empire is a post-exploitation framework that includes a pure-PowerShell2.0 Windows agent, and a pure Python 2.6/2.7 Linux/OS X agent. It is the merge of the previous PowerShell Empire and Python EmPyre projects. The framework offers cryptologically-secure communications and a flexible architecture. On the PowerShell side, Empire implements the ability to run PowerShell agents without needing powershell.exe, rapidly deployable post-exploitation modules ranging from key loggers to Mimikatz, and adaptable communications to evade network detection, all wrapped up in a usability-focused framework. PowerShell Empire premiered at [BSidesLV in 2015](https://www.youtube.com/watch?v=Pq9t59w0mUI) and Python EmPyre premiered at HackMiami 2016.

Encyclopedia


This is a collaborative and open Encyclopedia of Proof Systems.

end2end_neural_el


This step requires the entity vectors and the word-embeddings to exist. An essential part of our system are the entity vectors (the equivalent of word-embeddings for entities). You can create your entity vectors by following the instructions of the [next chapter](#gerbil-evaluation), otherwise you can use the provided pretrained ones. We have pretrained 502661 entity vectors. Specifically, we have trained entity vectors for all the candidate entities from all possible spans of AIDA-TestA, AIDA-TestB, AIDA-Training 1, ACE2004, AQUAINT, MSNBC, Clueweb, DBpediaSpotlight, Derczynski, ERD2014, GERDAQ-Dev, GERDAQ-Test, GERDAQ-TrainingA, GERDAQ-TrainingB, KORE50, Microposts2016-Dev, Microposts2016-Test, Microposts2016-Train, N3-RSS-500, N3-Reuters-128, OKE 2015 Task1, OKE 2016 Task1, and the entity relatedness dataset of (Ceccarelli et al., 2013). In more detail, this is done by considering all possible spans of the document as a candidate span and querying our p(e|m) dictionary for all the candidate entities for this span (we keep only the top 30 for each candidate span).
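
A sketch of that candidate-generation step, with a two-entry stand-in for the real p(e|m) dictionary (the span enumeration and top-30 cutoff follow the description above):

```python
# Stand-in p(e|m) dictionary: mention string -> [(entity, p(e|m)), ...].
P_E_M = {
    "new york": [("New_York_City", 0.71), ("New_York_(state)", 0.22)],
    "york": [("York", 0.48), ("New_York_City", 0.11)],
}

def candidate_spans(tokens, max_span_len=4, top_k=30):
    """Treat every token span as a candidate mention and keep the top-k
    candidate entities the p(e|m) dictionary returns for it."""
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_span_len, len(tokens) + 1)):
            mention = " ".join(tokens[i:j]).lower()
            candidates = P_E_M.get(mention, [])[:top_k]
            if candidates:
                yield i, j, mention, candidates

for span in candidate_spans("I moved to New York".split()):
    print(span)  # (3, 5, 'new york', ...) and (4, 5, 'york', ...)
```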

ENHSP-Public


This repository contains ENHSP, which stands for Expressive Numeric Heuristic Planner. It is a forward heuristic search planner, but it is expressive in that it can handle:

Entailment-with-Tensorflow


This repo hosts the code associated with my O'Reilly article, "Textual entailment with TensorFlow: Using neural networks to explore natural language," published on July 17, 2017.

EP-ASP


+ script_bomb.sh is a script to run experiments on "bomb in the toilet" problems.

EphyraQuestionAnalysis


A collection of [OpenEphyra](http://sourceforge.net/projects/openephyra/) components necessary for question analysis. **Dependencies**: Java, Maven, WordNet. **You may need to set the right locale**, see [build.sh](build.sh). Unlike initial versions relying on LTI repositories, this is a self-sufficient one.

EPK


Single-Agent Planner is a complete, logic-based epistemic planner for a single agent that does not rely on the epistemic closed-world assumption.

ergo


This is the source code for the Ergo compiler. Ergo is the Accord Project language for Smart Legal Contracts.

esbmc


ESBMC, the efficient SMT based model checker, is a software verification tool for C and C++ code bases. The technique is sound but incomplete -- an error found by ESBMC will be correct (modulo errors in the tool), but a successful verification does not guarantee there are no errors.

eso


EternalRocks


EternalRocks is a network worm (i.e. self-replicating) that emerged in the first half of May 2017. It spreads through public ([The Shadow Brokers NSA dump](https://steemit.com/shadowbrokers/@theshadowbrokers/lost-in-translation)) SMB exploits: `ETERNALBLUE`, `ETERNALCHAMPION`, `ETERNALROMANCE` and `ETERNALSYNERGY`, along with related programs: `DOUBLEPULSAR`, `ARCHITOUCH` and `SMBTOUCH`.

europa


Welcome! EUROPA is a framework to model and tackle problems in Planning, Scheduling and Constraint Programming. EUROPA is typically embedded in a host application. It is designed to be expressive, efficient, extendable and configurable. It includes:
- **A Plan Database:** The technology cornerstone of EUROPA for storage and manipulation of plans as they are initialized and refined. The EUROPA Plan Database integrates a rich representation for actions, states, objects and constraints with powerful algorithms for automated reasoning, propagation, querying and manipulation.
- **A Problem Solver:** A core solver to automatically find and fix flaws in the plan database. It can be configured to plan, schedule or both. It can be easily customized to integrate specialized heuristics and resolution operations.
- **A Tool Box:** EUROPA includes a debugger for instrumentation and visualization of applications. It also includes a very high-level, declarative modeling language for describing problem domains and partial-plans.

Event_Process_Typing


This is the repository for the resources in the CoNLL 2020 paper "What Are You Trying To Do? Semantic Typing of Event Processes". It contains the source code and links to some datasets used in our paper.

Excitement-Open-Platform


This repository contains both the code and the documentation (i.e. wiki pages) of the next Excitement Open Platform (EOP) release, which is an open source software platform containing state-of-the-art algorithms for recognizing textual entailment relations: _given two text fragments, one named text and the other named hypothesis, the task consists in recognizing whether the hypothesis can be inferred from the text_

Excitement-Open-Platform-old


This repository contains both the code and the documentation (i.e. wiki pages) of the next Excitement Open Platform (EOP) release. EOP is an open source software platform containing state-of-the-art algorithms for recognizing textual entailment relations: _given two text fragments, one named text and the other named hypothesis, the task consists in recognizing whether the hypothesis can be inferred from the text_

exemplar


EXEMPLAR is an open relation extraction system originating from a research project at the University of Alberta. Relation extraction is the task of, given a text corpus, identifying relations (e.g., acquisition, spouse, employment) among named entities (e.g., people, organizations). While traditional systems are limited to the relations predetermined by the user, open relation extraction systems like EXEMPLAR are able to identify instances of any relation described in the text.

ExiL


ExiL (Expert System in Lisp) is a **CLIPS-based expert system building tool** written in Common Lisp, with a forward-chaining and a very basic backward-chaining inference engine. It was developed alongside my computer science master's thesis and is meant for **academic purposes**, not for real-world scenarios (at least yet).

explainshell


explainshell is a tool (with a web interface) capable of parsing man pages, extracting options and explaining a given command line by matching each argument to the relevant help text in the man page.

extraction-framework


## About DBpedia DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
To check out the projects of DBpedia, visit the [official DBpedia website](http://dbpedia.org).

Extremely-Fine-Grained-Entity-Typing


### Acknowledgement We thank [Choi et al](https://homes.cs.washington.edu/~eunsol/papers/acl_18.pdf) for the release of the Ultra-Fine dataset and the basic model: [https://github.com/uwnlp/open_type](https://github.com/uwnlp/open_type).

f2lp


The package contains the following files:

factorie


This directory contains the source of FACTORIE, a toolkit for probabilistic modeling based on imperatively-defined factor graphs. For more information, see [the FACTORIE webpage](http://factorie.cs.umass.edu).

fairseq


Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

fastmoe


An easy-to-use and efficient system to support the Mixture of Experts (MoE) model for PyTorch.

fault_tolerant_router


Fault Tolerant Router is a daemon, running in background on a Linux router or firewall, monitoring the state of multiple internet uplinks/providers and changing the routing accordingly. LAN/DMZ internet traffic (outgoing connections) is load balanced between the uplinks using Linux *multipath routing*. The daemon monitors the state of the uplinks by routinely pinging well known IP addresses (Google public DNS servers, etc.) through each outgoing interface: once an uplink goes down, it is excluded from the *multipath routing*, when it comes back up, it is included again. All of the routing changes are notified to the administrator by email.
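
The monitoring loop is simple to picture. A simplified sketch of the idea (the real daemon is written in Ruby; the interface names, probe addresses and interval below are placeholders):

```python
import subprocess
import time

UPLINKS = {"eth1": "8.8.8.8", "eth2": "8.8.4.4"}  # interface -> probe address

def uplink_alive(iface, probe_ip):
    """Ping the probe address out of one specific interface (Linux ping -I)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-I", iface, probe_ip],
        stdout=subprocess.DEVNULL)
    return result.returncode == 0

while True:
    alive = sorted(i for i, ip in UPLINKS.items() if uplink_alive(i, ip))
    # The real daemon would now rewrite the multipath default route to use
    # only the live uplinks (e.g. via `ip route replace default ...`)
    # and email the administrator about any state change.
    print("usable uplinks:", alive)
    time.sleep(30)
```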

fawkes


Fawkes is a component-based Software Framework for Robotic Real-Time Applications for various Platforms and Domains.

fbctf


The Facebook CTF is a platform to host Jeopardy and “King of the Hill” style Capture the Flag competitions.

fibo


FIBO is a trademark of EDM Council, Inc. It is also standardized by the [Object Management Group](https://www.omg.org/index.htm).

Fido


FIDO is an orchestration layer used to automate the incident response process by evaluating, assessing and responding to malware. FIDO’s primary purpose is to handle the heavy manual effort needed to evaluate threats coming from today's security stack and the large number of alerts generated by them. As an orchestration platform FIDO can make using your existing security tools more efficient and accurate by heavily reducing the manual effort needed to detect, notify and respond to attacks against a network.

figer


This distribution contains the source code for the experiments presented in the following research publication ([PDF](http://xiaoling.github.com/pubs/ling-aaai12.pdf)):

figment-multi


This is an extension to the old [FIGMENT](https://github.com/yyaghoobzadeh/figment/).

find


**Android users:** [download the current version of the app](https://play.google.com/store/apps/details?id=com.hcp.find). _Sorry iPhone users, but [the Apple store prevents apps that access WiFi information](https://doc.internalpositioning.com/faq/#can-i-use-an-iphone), so I will be unable to release an iPhone version._

FireNET


FireNet is an artificial intelligence project for real-time fire detection.


FireNet is a real-time fire detection project containing an annotated dataset, pre-trained models and inference code, all created to ensure that machine learning systems can be trained to detect fires instantly and eliminate false alerts. This is part of DeepQuest AI's mission to train machine learning systems to perceive, understand and act accordingly in solving problems in any environment they are deployed in.

FirstAid


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

fluxgui


The f.lux indicator applet `fluxgui` is an indicator applet to control `xflux`, an application that makes the color of your computer's display adapt to the time of day: warm at night, and like sunlight during the day. Reducing blue light exposure in the evening can help you fall asleep at night. See https://justgetflux.com/research.html for more details.

FM_Transmitter_RPi3


This project uses the general clock output to produce frequency-modulated radio communication. It is based on an idea originally posted here: [http://icrobotics.co.uk/wiki/index.php/Turning_the_Raspberry_Pi_Into_an_FM_Transmitter](http://icrobotics.co.uk/wiki/index.php/Turning_the_Raspberry_Pi_Into_an_FM_Transmitter), but does not use the DMA controller to distribute samples to the output (clock generator), so the sound quality is worse than in the PiFm project and only mono transmission is available; on the other hand, this makes it possible to run it on all kinds of boards.

fonduer


Fonduer is a framework for building knowledge base construction (KBC) applications from *richly formatted data* and is implemented as a library on top of a modified version of Snorkel.

Food-Recipe-CNN


Matura thesis 2018: This work uses deep convolutional neural networks with Keras to classify images into 230 food categories and to output a matching recipe. The dataset contains >400,000 food images and >300,000 recipes from chefkoch.de.

Food100_YOLO_Tools


This is the set of tools and configurations used by the YOLO Real-Time Food Detection article at

foodkg.github.io


This dataset includes mappings to some of the concepts found in:
- DBpedia
- schema.org
- FoodOn
- Units Ontology
- ChEBI

foodon


An easy place to browse FoodOn is at [https://www.ebi.ac.uk/ols/ontologies/foodon](https://www.ebi.ac.uk/ols/ontologies/foodon). The URIs of terms in the ontology also resolve to the comprehensive [Ontobee ontology lookup service](http://www.ontobee.org/). It is organized according to the upper-level BFO ontology, so most terms can be browsed by starting at the OBI "entity" term (e.g. in [Ontobee](http://www.ontobee.org/ontology/FOODON?iri=http://purl.obolibrary.org/obo/BFO_0000001)).

fossology


FOSSology is an open source license compliance software system and toolkit. As a toolkit you can run license, copyright and export control scans from the command line. As a system, a database and web UI are provided to give you a compliance workflow. In one click you can generate an SPDX file, or a ReadMe with all the copyright notices from your software. FOSSology deduplication means that you can scan an entire distro, rescan a new version, and only the changed files will get rescanned. This is a big time saver for large projects.

fpm


* If fpm is not helping you make packages easily, then there is a bug in fpm. * If you are having a bad time with fpm, then there is a bug in fpm. * If the documentation is confusing, then this is a bug in fpm.

fpos


A CSV transaction export from any of the following banks can be processed by `fpos`

fprime


F´ (F Prime) is a component-driven framework that enables rapid development and deployment of spaceflight and other embedded software applications. Originally developed at the Jet Propulsion Laboratory, F´ has been successfully deployed on several space applications. It is tailored but not limited to small-scale spaceflight systems such as CubeSats, SmallSats, and instruments.

Framester


This repository contains the Framester resource, the main outcome of the Framester project (https://w3id.org/framester). All the RDF files are serialized in TURTLE format. The corresponding triples can also be found on Framester's SPARQL endpoint, available at (https://w3id.org/framester/sparql). A series of statistics (e.g. number of triples, predicates, classes) are available at (https://w3id.org/framester/stats).

frdcsa-panoply-git-20200329


The FRDCSA (https://frdcsa.org) has been under development for 20 years as of writing ([2020-03-29,02:53:26]). It is a comprehensive free/libre artificial intelligence system. Mainly it collects other A.I. systems and gets them all to talk to each other. However, it has quite a lot of original code as well, maybe over 2 million lines of code. The most important individual project is the Free Life Planner (https://github.com/aindilis/free-life-planner).

frozen-bubble


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2, as published by the Free Software Foundation.

fs


`FS` is a classical planner that works with the Functional STRIPS planning language [[Geffner, 2000]](#ref-geffner-fstrips-2000), a modeling language based on the quantifier-free fragment of first-order logic that includes constant, function and predicate symbols, but no variable symbols. The increased expressiveness of the Functional STRIPS language with respect to propositional languages such as standard STRIPS (which is indeed subsumed by Functional STRIPS) often results in problem encodings which are more compact, more readable, have fewer ground actions and preserve the structural properties of the problem in a manner which allows the derivation of more effective heuristics.

fsearch


FSearch is a fast file search utility, inspired by Everything Search Engine. It's written in C and based on GTK+3.

fuel


FUEL is a succinct Scala framework for implementing metaheuristic algorithms, in particular evolutionary algorithms. It originated in my work on the book "Behavioral Program Synthesis with Genetic Programming" (Springer 2016).

fuse-taglayer


A read-only tag-filesystem overlay for hierarchical filesystems

Gadgetbridge


Gadgetbridge is an Android (4.4+) application which will allow you to use your Pebble or Mi Band without the vendor's closed source application and without the need to create an account and transmit any of your data to the vendor's servers.

gaia-interchange


This repository contains resources to support the AIDA Interchange Format (AIF). It consists of:

galvanise_v2


There is a small interpreter in the state machine to do the propagation, which has inlined code depending on the number of outputs to be triggered. The ordering of basic blocks generated by the compiler is forced in a way that follows the common code path (about 90% of the time, i.e. when there are no triggers). Ultimately, the implementation has quite a large overlap with Sancho's propnet state machine, which, since it is documented in detail and seems to be the fastest way to propagate (at this point in time), made it very hard to do anything else. Nevertheless, I experimented a bit with some hybrid propnet/state machines, and I still think that, given more meta-timing, games such as speed chess could get an order of magnitude faster via splitting the network up some more and generating code to replace some of the propnet.

galvanise_zero


galvanise is a [General Game Player](https://en.wikipedia.org/wiki/General_game_playing), where games are written in [GDL](https://en.wikipedia.org/wiki/Game_Description_Language). The original galvanise code was converted to a library, [ggplib](https://github.com/richemslie/ggplib), and galvanise_zero adds AlphaZero-style learning. Much inspiration came from DeepMind's related papers and the excellent Expert Iteration [paper](https://arxiv.org/abs/1705.08439). A number of Alpha*Zero open source projects were also inspirational: LeelaZero and KataGo (XXX add links).

game


A hack-and-slash style multi-player dungeon crawl blending the heuristics of NetHack with a combat engine inspired by Minnesota Dungeon (Minneapolis Dungeon, Larry's Maze, et al.).

gams


GAMS is an extension of an earlier project called SMASH.

Gateway


Gateway is a movement and a project to create a service for cooperative storywriting and textual roleplaying that is free software and belongs to the community.

gcd


This repository contains the archived code for the CoNLL 2019 paper [A General-Purpose Algorithm for Constrained Sequential Inference](https://cogcomp.seas.upenn.edu/papers/DeutschUpRo19.pdf).

gdelt_download


I am no longer associated with the GDELT project as noted [here](http://blog.gdelt.org/2014/01/20/gdelt-suspension/), so I will not continue to update this package. There is a fork of this project [here](https://github.com/00krishna/gdelt_download) that has some updates available.

gdl-parser


This is a parser for GDL (Game Description Language). GDL is a subset of [Datalog](https://en.wikipedia.org/wiki/Datalog), but when used for GGP (general game playing) it is sent in KIF (Knowledge Interchange Format). This parser focuses on GDL and not KIF for the purpose of GGP and is currently being used in [ggp-rs](https://github.com/gsingh93/ggp-rs).

gdl-perf


This is a framework for testing the performance of Game Description Language (GDL) interpreters and reasoners used in General Game Playing. It allows for automatically running tests on a wide variety of reasoners across a wide variety of games, with minimal human intervention. It also supplies tools for analyzing the outputs of these tests.

gdl-perf--asking-for-password


This is a framework for testing the performance of Game Description Language (GDL) interpreters and reasoners used in General Game Playing. It allows for automatically running tests on a wide variety of reasoners across a wide variety of games, with minimal human intervention. It also supplies tools for analyzing the outputs of these tests.

gector


This repository provides code for training and testing state-of-the-art models for grammatical error correction with the official PyTorch implementation of the following paper: > [GECToR – Grammatical Error Correction: Tag, Not Rewrite](https://arxiv.org/abs/2005.12592)
> [Kostiantyn Omelianchuk](https://github.com/komelianchuk), [Vitaliy Atrasevych](https://github.com/atrasevych), [Artem Chernodub](https://github.com/achernodub), [Oleksandr Skurzhanskyi](https://github.com/skurzhanskyi)
> Grammarly
> [15th Workshop on Innovative Use of NLP for Building Educational Applications (co-located with ACL 2020)](https://sig-edu.org/bea/current)

gekko


Gekko is a Bitcoin TA trading and backtesting platform that connects to popular Bitcoin exchanges. It is written in JavaScript and runs on [Node.js](http://nodejs.org).

gentoo-libbash


This is the README file for libbash

geopoint


This library expects latitude and longitude in EPSG:4326 (WGS84). To convert between different projections check out [Proj4js](http://proj4js.org/).
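
For Python users, the equivalent conversion can be done with pyproj (my suggestion, not something the library itself depends on), producing the EPSG:4326 latitude/longitude this library expects:

```python
from pyproj import Transformer

# Convert Web-Mercator (EPSG:3857) coordinates to WGS84 lon/lat (EPSG:4326).
to_wgs84 = Transformer.from_crs("EPSG:3857", "EPSG:4326", always_xy=True)
lon, lat = to_wgs84.transform(-8238310.24, 4970071.58)
print(lat, lon)  # roughly 40.7, -74.0 (New York City)
```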

gerbil


This project is a benchmarking platform for entity annotation and disambiguation tools. It also has been extended for Question Answering (see [`QuestionAnswering` branch](https://github.com/dice-group/gerbil/tree/QuestionAnswering)).

GF


The Grammatical Framework (=GF) is a grammar formalism based on type theory. It consists of

ggp


ggp-base


A simple Prover-based state machine implementation is included in GGP Base, so you don't need to worry about the details of converting a game description into a state machine. To write a gamer based on StateMachineGamer, derive your class from players.gamer.statemachine.StateMachineGamer. Applications like the PlayerPanel should automatically recognize your new class and it should appear in their lists of available players right away.

GGP-Botter


GGP-Botter is a GGP Bot framework written in SWI-Prolog. It provides an interface for communication with GGP Server, as well as some helper functions (TODO) which will come in handy when creating your own bot.

ggp-rs


`ggp-rs` is a library for creating GGP (general game playing) players in Rust that is based off of [GGP Base](https://github.com/ggp-org/ggp-base). While GGP Base allows the creation of players backed by a propositional network or a logic prover, this library currently only supports logic prover based players. The performance of this logic prover is comparable to the one in GGP Base.

ggp-zero


Although many games have been trained, there is a multitude of games left to try. There are some game types which are completely unsupported right now, for starters:

ggpe


A General Game Playing Engine using YAP Prolog

ghiro


Sometimes forensic investigators need to process digital images as evidence. There are some tools around, but otherwise it is difficult to deal with forensic analysis when lots of images are involved. Images contain tons of information; Ghiro extracts this information from the provided images and displays it in a nicely formatted report. Dealing with tons of images is pretty easy: Ghiro is designed to scale to support gigs of images. All tasks are totally automated; you just have to upload your images and let Ghiro do the work. Understandable reports and great search capabilities allow you to find a needle in a haystack. Ghiro is a multi-user environment; different permissions can be assigned to each user. Cases allow you to group image analyses by topic, and you can choose which users are allowed to see your case with a permission schema.

git-secret


`git-secret` is a bash tool which stores private data inside a git repo. `git-secret` encrypts files with permitted users' public keys, allowing users you trust to access encrypted data using pgp and their secret keys.

gitfs


gitfs is a [FUSE](http://fuse.sourceforge.net/) file system that fully integrates with git. You can mount a remote repository's branch locally, and any subsequent changes made to the files will be automatically committed to the remote.

github-network-analysis


An analysis and visualization of collaboration between top GitHub repositories, focused on the relationship between programming languages used and the network structure.

gitRecommender


gitRecommender is a final project for Artificial Intelligence. It is a recommender system that will suggest GitHub repositories you might be interested in.

gitrob


Gitrob is a tool to help find potentially sensitive files pushed to public repositories on Github. Gitrob will clone repositories belonging to a user or organization down to a configurable depth and iterate through the commit history and flag files that match signatures for potentially sensitive files. The findings will be presented through a web interface for easy browsing and analysis.

gnes


This command downloads the latest GNES image (based on [Alpine Linux](https://alpinelinux.org/)) and runs it in a container. When the container runs, it prints an informational message and exits.

gnucash-perl


This is a set of scripts that will be able to manipulate the Gnucash XML files.

go-vncdriver


A fast VNC driver.

goal-plan-recognition-dataset


This repository contains datasets for goal and plan recognition as planning.

GoedelGod


This repository contains computer-assisted formalizations of ontological proofs.

golog


This is a Golog interpreter written in Haskell and applications of it. [Golog](http://www.cs.toronto.edu/cogrobo/main/) is an action language based on the [situation calculus](http://en.wikipedia.org/wiki/Situation_calculus). There are many dialects of Golog; this is one of them.

gophi


GOPHI (*Generation Of Parenthesized Human Input*) is a system for generating a literal reading of Abstract Meaning Representation (AMR) structures. The system, written in [SWI-Prolog](http://www.swi-prolog.org "SWI-Prolog"), uses a symbolic approach to transform the original rooted graph into a tree of constituents that is transformed into an English sentence by [jsRealB](https://github.com/rali-udem/JSrealB "GitHub - rali-udem/JSrealB: A JavaScript bilingual text realizer for web development").

gourmet


Gourmet Recipe Manager is a manager, editor, and organizer for recipes. It has a plugin architecture which allows you to enable extensions to Gourmet's base functionality. For example, there is a nutritional plugin that allows Gourmet to help you calculate nutritional information for any recipe. There are also a wide variety of import and export plugins that let Gourmet read and write recipes in various formats.

gp-ark-tweet-nlp


`gp-ark-tweet-nlp` is a PL/Java Wrapper for [`Ark-Tweet-NLP`](http://www.ark.cs.cmu.edu/TweetNLP/) - a state-of-the-art parts-of-speech tagger for Twitter. This package enables you to perform part-of-speech tagging on Tweets, using SQL. If your environment is an MPP system like Pivotal's Greenplum Database you can piggyback on the MPP architecture and achieve implicit parallelism in your part-of-speech tagging tasks.

GPGOAP


GOAP, or Goal-Oriented Action Planning, is a powerful tool to create game AI. For all the details I will refer to [Jeff Orkin's collection of articles](http://web.media.mit.edu/~jorkin/goap.html). But in short: GOAP will let computer-controlled characters (NPCs) make action plans that can achieve desired goals. It will do so in a highly maintainable, easily extendible, highly modular fashion. A naive implementation of AI code will invariably blow up for any non-trivial problem. GOAP, on the other hand, is robust and is unlikely to buckle under large complexity. This software implements GOAP in the C programming language. It does so in a generic fashion, which makes it suitable for many projects.
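
As a compact illustration of the idea (not this library's C API: the facts and actions are invented, and the search below is a plain forward breadth-first search rather than the A* described in Orkin's articles):

```python
from collections import deque

# Each action: name -> (preconditions, effects), both sets of boolean facts.
ACTIONS = {
    "get_axe": ({"axe_available"}, {"has_axe"}),
    "chop_tree": ({"has_axe"}, {"has_wood"}),
    "build_fire": ({"has_wood"}, {"warm"}),
}

def plan(state, goal):
    """Search from the current world state to any state satisfying the goal."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        facts, steps = frontier.popleft()
        if goal <= facts:
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:
                successor = frozenset(facts | eff)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, steps + [name]))
    return None

print(plan({"axe_available"}, {"warm"}))
# ['get_axe', 'chop_tree', 'build_fire']
```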

GPGPU


This project is a multi-core GPGPU (general purpose graphics processing unit) IP core, implemented in SystemVerilog. Documentation is available here: https://github.com/jbush001/GPGPU/wiki. Pull requests/contributions are welcome.

gpt-2


You can read about GPT-2 and its staged release in our [original blog post](https://blog.openai.com/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/).

gpt-2-output-dataset


This dataset contains:

- 250K documents from the WebText test set
- For each GPT-2 model (trained on the WebText training set), 250K random samples (temperature 1, no truncation) and 250K samples generated with Top-K 40 truncation

gpt-neo-2.7B


GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.

GPT2


An implementation of training for [GPT2](https://openai.com/blog/better-language-models/) that supports both GPUs and TPUs. The dataset scripts are a bit hacky and will probably need to be adapted to your needs.

grakn


Building intelligent systems starts at the database. Grakn is an intelligent database: a knowledge graph engine to organise complex networks of data and make it queryable.

grammars


A collection of grammars to write lexers, parsers, compilers for various languages and purposes.

grammars-v4


This repository is a collection of Antlr4 grammars.

Graph2Seq


Graph2Seq is a simple codebase for building a graph encoder and sequence decoder for NLP and other AI/ML/DL tasks.

graphbrain


Graphbrain is an Artificial Intelligence open-source software library and scientific research tool. Its aim is to facilitate automated meaning extraction and text understanding, as well as the exploration and inference of knowledge.

GraphSAGE


This directory contains code necessary to run the GraphSage algorithm. GraphSage can be viewed as a stochastic generalization of graph convolutions, and it is especially useful for massive, dynamic graphs that contain rich feature information. See our [paper](https://arxiv.org/pdf/1706.02216.pdf) for details on the algorithm.
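The core idea can be sketched in a few lines of numpy. This is an illustrative mean-aggregator layer with neighbour sampling, not the authors' TensorFlow implementation; shapes, weights, and the toy graph are hypothetical:

```python
# One GraphSAGE-style layer: sample neighbours, aggregate their
# features by mean, combine with the node's own features, then
# apply a nonlinearity and l2-normalise.
import numpy as np

rng = np.random.default_rng(0)

def sage_layer(h, adj, W_self, W_neigh, num_samples=5):
    """h: (N, d) node features; adj: list of neighbour lists."""
    out = []
    for v, neigh in enumerate(adj):
        sampled = rng.choice(neigh, size=min(num_samples, len(neigh)),
                             replace=False)
        agg = h[sampled].mean(axis=0)             # mean of sampled neighbours
        z = np.concatenate([h[v] @ W_self, agg @ W_neigh])
        out.append(np.maximum(z, 0.0))            # ReLU
    out = np.array(out)
    return out / np.linalg.norm(out, axis=1, keepdims=True)  # l2-normalise

h = rng.normal(size=(4, 8))
adj = [[1, 2], [0, 2, 3], [0, 1], [1]]
W_self = rng.normal(size=(8, 4))
W_neigh = rng.normal(size=(8, 4))
print(sage_layer(h, adj, W_self, W_neigh).shape)  # (4, 8)
```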

grobid


GROBID is a machine learning library for extracting, parsing and re-structuring raw documents such as PDF into structured TEI-encoded documents, with a particular focus on technical and scientific publications. First developments started in 2008 as a hobby. In 2011 the tool was made available as open source. Work on GROBID has been steady as a side project since the beginning and is expected to continue until at least 2020 :)

grocy


A household needs to be managed. So far (for almost 10 years) I did this with my first self-written piece of software (a C# Windows Forms application) and a bunch of Excel sheets. The software is a pain to use and Excel is Excel. So I searched for and tried different things for a (very) long time; nothing fitted 100 %, so this is my attempt at a "complete household management" thing. ERP your fridge!

guides


Welcome to the hack.guides() content repository. This repository contains published and unpublished versions of awesome technical guides written by our community. You can browse all the guides right here or head over to our [companion site](http://www.pluralsight.com/guides) for a more focused reading experience.

guile-log


Guile log is a logic programming framework with strong continuation support, meaning that stalling an algorithm is well supported. It also sports most of the logic programming features you see in common Prolog systems such as SWI-Prolog, and it comes with a Prolog engine, a miniKanren engine, and an internal Scheme interface to logic programming (the guile-log interface).

gum


gvgai-cig-2015


This is the framework for the General Video Game Competition 2014 - http://www.gvgai.net/

gvgai-private


This is the framework for the General Video Game Competition 2014 - http://www.gvgai.net/

gym


**OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms.** This is the ``gym`` open-source library, which gives you access to a standardized set of environments.

ha-tpb-planner


This paper introduces an approach to human-aware epistemic planning in which a rational intelligent agent plans its actions for encouraging a human to proceed in a social virtual reality (VR) environment. In order to persuade the human user to execute specific actions, the agent adapts the virtual environment by adjusting motivators in the environment. The agent's model of the human is based on the theory of planned behavior (TPB), a cognitive theory to explain and predict human behavior. The intelligent agent manipulates the environment, a process where the agent conducts epistemic actions, i.e., adapting the environment and observing human responses, in order to understand the human's behavior and encourage human actions. An action reasoning framework is introduced that defines transitions between goal-oriented human activities in the virtual scenario. The proposed human-aware planning architecture can also be applied in environments that are not virtual, by utilizing modern mobile devices which have built-in sensors that measure motion, orientation, and various environmental conditions.

hades


This is a work-in-progress repository for the CLiPS HAte speech DEtection System (HADES).

HAnDS


This repository contains the code and data to reproduce the experiments of the paper "[Fine-grained Entity Recognition with Reduced False Negatives and Large Type Coverage](https://openreview.net/forum?id=HylHE-9p6m)".

HandwritingRecognitionSystem


This repository is the Tensorflow implementation of the Handwriting Recognition System described in [Handwriting Recognition of Historical Documents with Few Labeled Data](https://www.researchgate.net/publication/325993975_Handwriting_Recognition_of_Historical_Documents_with_Few_Labeled_Data). Please cite the paper if you use this code in your research paper.

HelloWorldEnvironment


This environment creates a simple whiteboard showing messages that can be written there by the entity that it creates.

helm


Helm is a fork of `anything.el`, originally written by Tamas Patrovic, and can be considered its successor. `Helm` sets out to clean up the legacy code in `anything.el` and provide a cleaner, leaner and more modular tool that is not caught in the trap of backward compatibility.

HIAS


The **Peter Moss Leukemia AI Research HIAS Network** is an open-source Hospital Intelligent Automation System. The system's server powers an intelligent network using a locally hosted, encrypted IoT server and proxy.

HiddenAttributeModels


A Hadoop script for automatically extracting the needed messages and cleaning them is available in prepare_data/hadoop/. It expects to find reddit_comments and reddit_submissions in the user's home directory. If you opt to extract the messages yourself rather than using Hadoop, you will need to run prepare_data/clean_input_msg.py to clean the messages' text.

HOL


This is the distribution directory for the Kananaskis release of HOL4. See http://hol-theorem-prover.org for online resources.

home-assistant


Home Assistant is a home automation platform running on Python 3. The goal of Home Assistant is to be able to track and control all devices at home and offer a platform for automating control.

home-assistant.github.io


This is the source for the [Home-Assistant.io website](https://home-assistant.io).

HRLPlus


In his book *Proofs and Refutations*, Lakatos identifies seven methods by which mathematical discovery and justification can occur. These methods suggest ways in which concept definitions, conjectures and proofs gradually evolve via interaction between mathematicians. Different mathematicians may have different interpretations of a conjecture, examples or counterexamples of it, and beliefs regarding its value or theoremhood. Through discussion, concepts are refined and conjectures and proofs modified. For instance, when a counterexample is found, one might look for general properties which make it fail a conjecture, and then modify the conjecture by excluding that type of counterexample (piecemeal exclusion). Alternatively, one might generalise from the positives and then limit the conjecture to examples of that type (strategic withdrawal). Another reaction might be to deny that the object is a counterexample on the grounds that the conjecture refers to objects of a different type (monster barring). Given a faulty proof, a counterexample may be used to highlight areas of weakness in the proof, and to either modify the proof or the conjecture which it purports to prove (lemma incorporation).

hs100


The [tp-link Wi-Fi Smart Plug model HS100](http://www.tp-link.us/products/details/HS100.html) is an embedded Linux computer with a Wifi chip, a 110/220 V AC relay with a 15 A current limit, and a US-style grounded electrical socket. You pair with it by establishing an ad-hoc network between the plug and a smartphone (also called Wifi direct). After giving it your router's SSID and access information, the plug connects to your network and you can control it with the app provided by tp-link, called Kasa. One downside of using Kasa is that it's really not much more than a wall switch in an app, though it does have pretty rich timer features, which are nice. But you can't do things like turn the light on or off in response to events on the internet. Tp-link does provide a network control mode, but you have to hand control of your plug over to them, which isn't particularly great if you endeavor to remain the master of your own domain, haha only serious.
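The community has documented a simple local control protocol for these plugs (TCP port 9999, a 4-byte big-endian length prefix, and an autokey-XOR obfuscation with initial key 171). Here is a minimal Python sketch of that protocol, assuming the plug's IP address is known; this repository's own interface may differ:

```python
# A sketch of the community-documented local protocol: JSON commands,
# obfuscated with an autokey XOR (initial key 171), over TCP port 9999.
import json
import socket
import struct

def encrypt(plaintext: bytes) -> bytes:
    key, out = 171, bytearray()
    for b in plaintext:
        key = key ^ b
        out.append(key)
    return bytes(out)

def decrypt(ciphertext: bytes) -> bytes:
    key, out = 171, bytearray()
    for c in ciphertext:
        out.append(key ^ c)
        key = c
    return bytes(out)

def send_command(host: str, command: dict) -> dict:
    payload = encrypt(json.dumps(command).encode())
    with socket.create_connection((host, 9999), timeout=5) as sock:
        sock.sendall(struct.pack(">I", len(payload)) + payload)
        (length,) = struct.unpack(">I", sock.recv(4))
        data = b""
        while len(data) < length:
            data += sock.recv(length - len(data))
    return json.loads(decrypt(data))

# Turn the relay on (the IP address is a placeholder):
# send_command("192.168.0.10", {"system": {"set_relay_state": {"state": 1}}})
```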

ht


This is HT 2.1.0; Have fun...

HTN-Translation


HTNTranslation is a program for translating [Hierarchical Task Network](http://www.aaai.org/Papers/AAAI/1994/AAAI94-173.pdf) problems into [PDDL](http://www.jair.org/media/1129/live-1129-2132-jair.pdf). This is an extension of the work described in "[Translating HTNs to PDDL](http://www.umiacs.umd.edu/publications/translating-htns-pddl-small-amount-domain-knowledge-can-go-long-way)," handling both totally ordered and partially ordered subtasks.

HTTP-Proxy


This module is a pure Perl HTTP proxy.

hydra


This is a package for GNU Emacs that can be used to tie related commands into a family of short bindings with a common prefix - a Hydra.

HyperFoods


A vector representation for every ingredient and recipe was generated using Word2Vec. An SVC model was trained to predict recipes' cuisines from their sets of ingredients. South Asian, East Asian and North American cuisines were predicted with more than 73% accuracy. African, Southern European and Middle Eastern cuisines contain the highest number of cancer-beating molecules. Finally, a web application was developed that predicts the ingredients in an image, suggests new combinations and identifies the cuisine a recipe belongs to, along with a score for the expected number of negative interactions with antineoplastic drugs (github.com/warcraft12321/HyperFoods).
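A minimal sketch of that pipeline using gensim and scikit-learn on a toy corpus (the project's real data, features, and hyperparameters are not reproduced here):

```python
# Toy version of the described pipeline: Word2Vec ingredient vectors,
# recipe vector = mean of its ingredient vectors, SVC for the cuisine.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

recipes = [
    (["soy_sauce", "rice", "ginger"], "east_asian"),
    (["tortilla", "beans", "chili"], "north_american"),
    (["rice", "curry", "ginger"], "south_asian"),
    (["beans", "corn", "chili"], "north_american"),
]

# One Word2Vec vector per ingredient, treating recipes as "sentences".
w2v = Word2Vec([ings for ings, _ in recipes],
               vector_size=16, min_count=1, seed=0)

def recipe_vector(ingredients):
    return np.mean([w2v.wv[i] for i in ingredients], axis=0)

X = np.array([recipe_vector(ings) for ings, _ in recipes])
y = [cuisine for _, cuisine in recipes]
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([recipe_vector(["rice", "ginger", "soy_sauce"])]))
```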

i7grip


This is a mid-development snapshot. The target release date for Version 2 is 1 September 2013.

iaj


ICE


ide


IEEE-CIG-Text-Adventurer-Competition


Contained within the Example Project folder of this repository, there is an example Java Eclipse project, which contains a minimal `Agent` that explores the game through random movements.

iggp


This repository contains the first version of the IGGP dataset, which is discussed in detail in the paper:

ike


im2latex-dataset


The end result should have two files and one directory (names can be changed in `formula2image.py`):

- `im2latex.lst`
  - Each line is in the format `formula_idx image_name render_type`
    - `formula_idx` is the line number where the formula is in `im2latex_formulas.lst`
    - `image_name` is the name of the image connected to this rendering (without '.png')
    - `render_type` is the name of the render setup used, defined in `formula2image.py`
- `im2latex_formulas.lst`
  - Each line contains one formula
- `/formula_images`
  - Directory where images are stored
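A small Python sketch of reading these files, assuming the default names and that `formula_idx` is 0-based:

```python
# Join im2latex.lst entries with their formulas and image paths.
def load_im2latex(lst_path="im2latex.lst",
                  formulas_path="im2latex_formulas.lst"):
    with open(formulas_path, encoding="utf-8", errors="replace") as f:
        formulas = f.read().splitlines()
    samples = []
    with open(lst_path) as f:
        for line in f:
            formula_idx, image_name, render_type = line.split()
            samples.append({
                "formula": formulas[int(formula_idx)],  # assumed 0-based
                "image": f"formula_images/{image_name}.png",
                "render_type": render_type,
            })
    return samples
```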

im2markup


A general-purpose, deep learning-based system to decompile an image into presentational markup. For example, we can infer the LaTeX or HTML source from a rendered image.

im2recipe


This repository contains the code to train and evaluate models from the paper: _Learning Cross-modal Embeddings for Cooking Recipes and Food Images_

implie


IMPLIE (IMPLicit relation Information Extraction) is a program that extracts binary relations from English sentences where the relationship between the two entities is not explicitly stated in the text. IMPLIE supports the following target relations out of the box: *has nationality*, *has job title*, *has province*, *has city*, and *has religion*. However, other relations can be supported by providing a list of keywords for a new target relation. This is possible because IMPLIE uses a target-independent syntactic language model.

indigolog-code


This is the root of the IndiGolog system. There are a few things you should

InductorParser


The Inductor Parser is a simple-to-use C++ template-based parser. It is small and easy to understand, debug and extend.

InductorProlog


The following features are for sure *not* in the Inductor Prolog engine (this is not an exhaustive list):

- asserting or retracting anything besides a fact
- declaring a function as dynamic like `dynamic(myRule/1)`: anything can be changed in IndProlog, so this declaration is not necessary
- `;` (or)
- `->` (if)
- syntax like `a == b` instead of `==(a, b)`
- `"` inside comments. Use `"This is a quote 'inside another quote' "` instead
- any metaprogramming features or rules like `call`

indus


INDUS is a project for knowledge acquisition and data integration from heterogeneous distributed data, particularly from bioinformatics databases. It was migrated from http://sourceforge.net/projects/indus-project/

infer


Infer is a static analysis tool for Java, Objective-C and C, written in [OCaml](https://ocaml.org/). Check out the online documentation. See [FILES.md](FILES.md) for a quick overview of the files in `infer/bin`.

InferSent


*InferSent* is a *sentence embeddings* method that provides semantic representations for English sentences. It is trained on natural language inference data and generalizes well to many different tasks.

Inform6


This is version 6.33 of the Inform compiler, copyright (c) Graham Nelson 1993-2014. Full release notes and instructions are available at http://www.ifarchive.org/indexes/if-archiveXinfocomXcompilersXinform6.html

Instinct-Server


This is a Java command line application encapsulated within an Eclipse project. It provides a TCP/IP based server for communication with the [R5 Robot], and within it the Instinct Planner. The R5 Robot also requires the [Instinct Planner].

interpolate


This module provides similar functionality for Prolog. It uses the same syntax as the Unix shell, Perl, PHP, Tcl, etc.: a local variable name prefixed with `$`. Interpolation is supported in all of the following string types:
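This is a Prolog module, but the `$name` convention it implements is the same one Python's standard `string.Template` uses, which makes for a quick illustration of the idea (not this module's API):

```python
# Shell-style $name interpolation, shown with Python's stdlib for comparison.
from string import Template

greeting = Template("Hello, $name! You have $count new messages.")
print(greeting.substitute(name="Alice", count=3))
# -> Hello, Alice! You have 3 new messages.
```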

INVAL


The Planning Domain Definition Language (PDDL) is a modelling language for expressing AI planning problems, and used as the input language of a large number of general-purpose AI planning systems. The role of a plan validator is to check if a plan (generated by an AI planner or manually written) is valid, according to the domain and problem specification. A validator is a very useful tool for debugging a domain/problem specification, a planner implementation, and indeed the specification of PDDL itself.

inversecooking


This code uses Python 3.6 and PyTorch 0.4.1 with CUDA 9.0.

itsimple


This file is part of itSIMPLE.

itsimple-orig


This file is part of itSIMPLE.

iwfms


An intelligent workflow management system, looking specifically at modelling the workflow of a hospital and the drug distribution process.

jabbah


What is JABBAH?

JABBAH is a Java application framework for the translation between Business Process Models (BPM) and HTN-PDDL (hierarchical planning domains).

The JABBAH system provides a neat tool for analysts who need to perform resource allocation analysis on business workflows, embedding a non-trivial transformation of BPMN-expressed workflows into Hierarchical Task Networks. By providing fully automated support for the analysis, allowing engineers to exploit the widely adopted Business Process Management Notation (BPMN) standard for workflow specification, and neatly presenting the results, this system may appeal to a very wide and relevant audience. Hence, JABBAH may have a considerable potential impact outside the planning community.

Where can I find further details?

A scientific paper about JABBAH was presented at ICKEPS 2009 (Award of Excellence), and further improvements were presented in the BPM 2010 Demo Track.


An extended scientific paper has recently been published in the Knowledge Engineering Review journal.

Have a look at the new video screencast as well.

Who developed it?

Arturo González Ferrer created JABBAH under the supervision of professors Juan Fernández Olivares and Luis Castillo Vidal. See Contact Info for details.


jacana


jacana-align is a token-based word aligner for English parallel sentences, described in the following paper:

jamr


This is the JAMR Parser, updated for SemEval 2016 Task 8.

janitor


The Code Janitor is a utility for finding "objectionable" content in source code trees before releasing them to the public. These can be things your developers wrote (like profanity, insults, confessions, and so on), or things that indicate code that might be inappropriate to use in the project (like copyright notices or license statements).

jason


Jason is an interpreter for an extended version of AgentSpeak. It implements the operational semantics of that language, and provides a platform for the development of multi-agent systems, with many user-customisable features. Jason is available as Open Source, and is distributed under GNU LGPL.

java-deeplearning


Deep learning is a form of state-of-the-art machine learning that can learn to recognize patterns in data unsupervised.

JavaPengine-old


A Java language client for Torbjörn Lager's _Pengines_ distributed computing library for _[SWI-Prolog](http://swi-prolog.org)_ .

jbt


JBT is a Java framework for building and running behaviour trees. In the past few years, behaviour trees have been widely accepted as a tool for defining the behaviour of video game characters. However, to the best of our knowledge, there is no free-software Java implementation of this concept. With JBT we intend to provide a solid framework for building and running behaviour trees in Java.

jdageem


JDageem is an extensible Java package that includes several implementations of parsing and training algorithms for dependency grammar induction. More specifically, JDageem includes:

jmNL


There is no comprehensive documentation; if you have questions, please ask. A [guide](https://confluence.ict.usc.edu/display/VHTK/Creating+a+New+Virtual+Human+with+the+FLoReS+Dialogue+Manager) was written for the [VHTK](https://vhtoolkit.ict.usc.edu/). It is a work in progress, so some aspects are still undocumented and may not be fully in sync with the current capabilities in the trunk. If you have any questions please submit an [Issue](https://github.com/fmorbini/jmNL/issues).

jquery-xmlrpc


This is a small library that sits on top of jQuery for communicating with XML-RPC services - without worrying about the horrible bloat of XML-RPC. Using this library, you can pass JSON parameters to the library, and receive responses in JSON. Encoding the JSON document is handled for you, intelligently mapping types between the two languages.
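For a feel of the underlying protocol, here is the analogous automatic type mapping in Python's built-in `xmlrpc.client`; the endpoint URL and method name below are placeholders:

```python
# XML-RPC with automatic type mapping: native values are encoded to
# XML-RPC types on the way out and decoded back on the way in.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://example.com/RPC2")

# Ints, strings, lists, and dicts are mapped to XML-RPC types for you;
# the response is decoded back into native Python values:
# result = proxy.examples.getStateName(41)
```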

JSrealB


**Natural Language Generation (NLG)** is a field of artificial intelligence that focuses on the development of systems that produce text for different applications, for example the textual description of massive datasets or the automation of routine text creation.

julian


This module uses [semantic versioning](http://semver.org/).

julien


Julien is a retrieval stack built for performing experiments in Information Retrieval research. The current version of Julien is 0.1, mostly because it's been under development and I haven't had time to *really* set it up for a release. Right now the documentation is spotty, but I will be shoring it up in the coming weeks. The scaladocs can be found at http://ayr.cs.umass.edu/julien-docs/julien .

kaf-naf-parser


This library converts KAF to NAF and NAF to KAF. It also contains a webservice for doing exactly this.

kaggle-jigsaw-multilingual-toxic-comment-classification-3rd-place-solution


WARNING! Do not install pytorch-xla-env-setup.py before starting TF code: there is an incompatibility between using the TPU via TF and via PyTorch in the same instance runtime. The valid sequence of steps (including package installation) is in ./train.py and ./inference.py.

Kaku


Kaku is a highly integrated music player that supports different online platforms like YouTube, SoundCloud, Vimeo and more. Available on `Windows`, `Linux` and `macOS`!

kalm-qa


# Code

* `metaqa/original/` The original MetaQA vanilla dataset for 2-hop and 3-hop training and testing questions (https://github.com/yuyuz/MetaQA).
* `metaqa/rectified/` The rectified version of the MetaQA vanilla dataset. The original MetaQA dataset contains erroneous answers (as discussed in our paper). We inspected the errors from the original MetaQA dataset and created a rectified version which contains the correct answers for the multi-hop questions in MetaQA.
* `metaqa/cnl_input/` The MetaQA vanilla dataset in ACE CNL grammar. Note, this dataset only contains the multi-hop questions (not answers). It is used as the input to KALM-QA to get the corresponding queries in Prolog.
* `tools/metaqa_to_cnl/` Java code that converts MetaQA n-hop English questions (NL) to CNL format. The input files (e.g., 2_hop_training.pl) are found in the `metaqa/cnl_input/` directory.
* `tools/intermediate_query_processing/MetaQABatch.java` Java code that processes the intermediate MetaQA Prolog query generated by the Prolog program. This program replaces singleton variables with anonymous variables.
* `query/template/2_hop_template/` In this directory, query_template.txt contains the unique query templates for 2-hop MetaQA queries (testing). query_group_by_template.txt groups the 2-hop MetaQA queries (testing) by query template. 2_hop_template.txt shows the query template for each query in query_group_by_template.txt.
* `query/template/3_hop_template/` In this directory, query_template.txt contains the unique query templates for 3-hop MetaQA queries (testing). query_group_by_template.txt groups the 3-hop MetaQA queries (testing) by query template. 3_hop_template.txt shows the query template for each query in query_group_by_template.txt.
* `query/2_hop_test/` This directory contains the MetaQA 2-hop Prolog queries (metaqa_query.pl), the MetaQA KB encoded in Prolog (metaqa_fact.pl), MetaQA 2-hop testing question-answer pairs encoded in Prolog (metaqa_answer.pl), background rules (background.pl), a program checking whether the query returns the correct answers (metaqa_check_answer.pl), and an entry-point program (mk.pl). Running the program generates a file with results comparing KALM-QA answers with MetaQA answers (metaqa_result.txt). **Note that** the question-answer pairs are from the original MetaQA vanilla dataset. As discussed in the paper, there are errors in this dataset; as a result, once you run the program you may find mismatches between KALM-QA answers and MetaQA answers. Error analysis is provided in a separate directory. The directories `2_hop_training`, `3_hop_testing`, and `3_hop_training` follow the same structure.
* `error_analysis/2_hop` This directory contains the errors for the 2-hop testing data. total_errors.txt has all the errors. fild_id_errors.txt has the errors caused by the issue that MetaQA doesn't distinguish different films sharing the same film ID. others_error.txt has the remaining errors, caused by unknown reasons. We have manually checked 736 (50%) of the "other errors" and added the reasons why MetaQA doesn't return the correct answers. The analysis is in metaqa_error_analysis.txt.
* `error_analysis/3_hop` This directory contains the errors for the 3-hop testing data, organized in the same way. We have manually checked 1628 (50%) of the "other errors" and added the reasons why MetaQA doesn't return the correct answers. The analysis is in metaqa_error_analysis.txt.
* `kalm-qa/` The source code for KALM-QA (Prolog).

KAT


1. Place your KAnnSpec into the KAnnSpec/ directory.
2. Place your document into the content/ directory. Make sure it only contains the actual document content (inside the body tag).
3. Edit line 3 in js/index.js: change "content/sample1.html" to the path of the document you want to use, and change "KAnnSpecs/omdoc-annotations.xml" to the annotation you want to create.
4. Run ```grunt run``` if it is not already running.
5. Navigate to localhost:3000 and see the demo at work.

kbp-2014-event-arguments


This is code developed by BBN to support the [2014 KBP Event Argument Shared Task](http://www.nist.gov/tac/2014/KBP/Event/index.html). A draft of the description of this task may be found [here](https://docs.google.com/document/d/1NRrRciiPMEZfqdjXEljyzWn-Zlw-jEm0PBqT-t1owJ0/edit?usp=sharing).

ke4ir-evaluation


This GitHub project contains the Java code (based on [Lucene](http://lucene.apache.org/c), [Sesame](http://rdf4j.org/), and [RDFpro](http://rdfpro.fbk.eu/)) implementing a simple evaluation system that allows configuring and evaluating KE4IR on arbitrary document collections and queries for which relevance judgments are known. You can use this code, together with the data available on the KE4IR [webpage](http://pikes.fbk.eu/ke4ir), to replicate the evaluation results reported in the KE4IR paper. You can also use this code as a basis for experimenting with variations of KE4IR, or even with a different approach that can be cast in the framework of KE4IR (augmentation of term vectors with semantic terms obtained via knowledge extraction).

KEEL


KEEL (Knowledge Extraction based on Evolutionary Learning) is an open source (GPLv3) Java software tool that can be used for a large number of different knowledge data discovery tasks. KEEL provides a simple GUI based on data flow to design experiments with different datasets and computational intelligence algorithms (paying special attention to evolutionary algorithms) in order to assess the behavior of the algorithms. It contains a wide variety of classical knowledge extraction algorithms, preprocessing techniques (training set selection, feature selection, discretization, imputation methods for missing values, among others), computational intelligence based learning algorithms, hybrid models, and statistical methodologies for contrasting experiments, and so forth. It makes it possible to perform a complete analysis of new computational intelligence proposals in comparison with existing ones.

kerkerkruip


Kerkerkruip is a short-form roguelike in the interactive fiction medium, featuring meaningful tactical and strategic depth, innovative game play, zero grinding, and a sword & sorcery setting that does not rehash tired clichés.

keylogger


This is a keylogger for Linux written in Rust, ported from my [original keylogger](https://github.com/gsingh93/simple-key-logger) in C. It works by reading directly from the keyboard device in `/dev/input/`. The keylogger attempts to detect the keyboard device upon startup, but if one cannot be detected or if multiple are detected, you must specify the path to the device file manually.
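For illustration, the same device-reading approach can be sketched in Python, assuming a 64-bit platform, a known device path, and root privileges; the repository's Rust code and its device-detection logic are more involved:

```python
# Read raw key events from a Linux input device. Each event is a
# struct input_event: a timeval (two longs), then type, code, value.
import struct

EVENT_FORMAT = "llHHi"                      # 64-bit layout, 24 bytes
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
EV_KEY, KEY_PRESS = 1, 1

def read_key_codes(device="/dev/input/event0"):
    """Yield the raw key code of every key-press event on `device`."""
    with open(device, "rb") as f:
        while True:
            _sec, _usec, ev_type, code, value = struct.unpack(
                EVENT_FORMAT, f.read(EVENT_SIZE))
            if ev_type == EV_KEY and value == KEY_PRESS:
                yield code  # map via a keymap to obtain characters
```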

kglib


At present this repo contains one project: [*Knowledge Graph Convolutional Networks* (KGCNs)](https://github.com/graknlabs/kglib/tree/master/kglib/kgcn).

KnowHowDataset


- The *Process - Inputs* datasets contain detailed information about the inputs of the sets of instructions, including links to [DBpedia](http://wiki.dbpedia.org/) resources
- The *Process - Outputs* datasets contain detailed information about the outputs of the sets of instructions, including links to [DBpedia](http://wiki.dbpedia.org/) resources
- The *Process - Step Links* datasets contain links between different sets of instructions

knowledge-expansion


Data
----

This repository contains the following datasets for experiments:

kobdig


A more rigorous description of the framework is given in Célia da Costa Pereira and Andrea G. B. Tettamanzi. "An Integrated Possibilistic Framework for Goal Generation in Cognitive Agents". In Proceedings of the 9th International conference on autonomous agents and multiagent systems (AAMAS 2010), pages 1239–1246.

koordinator2000


For example, you would vote for that tiny progressive political party if you knew your vote would matter. So let's get to work to make it matter. Don't waste your vote until you know there is a mass large enough to make it count.

KOSHIK


An NLP framework for large scale processing using Hadoop. KOSHIK supports parsing of text in multiple languages including English, Swedish, and Chinese.

KRLPapers


We release [OpenKE](https://github.com/thunlp/openKE), an open source toolkit for KRL/KE. This repository provides a standard KRL/KE training and testing framework. Currently, the implemented models in OpenKE include TransE, TransH, TransR, TransD, RESCAL, DistMult, ComplEx and HolE.

lamtram


Then, we can perform training with `lamtram-train`. Here is a typical way to run it with options:

LangPro


# [LangPro](https://github.com/kovvalsky/LangPro): Natural [Lang](https://github.com/kovvalsky/LangPro)uage Theorem [Pro](https://github.com/kovvalsky/LangPro)ver LangPro is a tableau-based theorem prover for natural logic and language. See the [online demo](https://naturallogic.pro/LangPro/) (not the latest version).

LAPKT-public


In order to compile some of the examples, you will also need a version >= 1.49 of the Boost C++ libraries available on your system. You can check the version you have either manually by looking at the macro defined in `boost/version.hpp` or, on debian systems, by running `dpkg -s libboost-dev`. Be aware that systems such as the Ubuntu 12.04LTS release ship with older versions of Boost.

LaZagne


Description
----

The __LaZagne project__ is an open source application used to __retrieve lots of passwords__ stored on a local computer. Each piece of software stores its passwords using different techniques (plaintext, APIs, custom algorithms, databases, etc.). This tool has been developed for the purpose of finding these passwords for the most commonly used software. At the moment, it supports 22 programs on Microsoft Windows and 12 on Linux/Unix-like OSes.

ld41


This is our entry for Ludum Dare 41, a silly text based minesweeper game.

ldspider


The project is a co-operation between [Andreas Harth](http://harth.org/andreas/) at [AIFB](http://www.aifb.kit.edu/) and [Juergen Umbrich](http://umbrich.net) at [DERI](http://www.deri.ie/). [Aidan Hogan](http://sw.deri.org/~aidanh/), Tobias Kaefer and [Robert Isele](http://www.wiwiss.fu-berlin.de/en/institute/pwo/bizer/team/IseleRobert.html) are contributing.

LeafNATS


This playground is a PyTorch implementation of a learning framework for implementing different models for neural abstractive text summarization and beyond. It is an extension of the [NATS](https://github.com/tshi04/NATS) toolkit, a toolkit for Neural Abstractive Text Summarization. The goal of this framework is to make it convenient to try out new ideas in abstractive text summarization and other language generation tasks.

lean-mode


This is the Emacs mode for the [Lean theorem prover][lean].

learningbyreading


A Learning by Reading pipeline of NLP and Entity Linking tools.

learn_to_soar


This is intended to eventually be a set of reusable components, something like daveray's bebot. However, I'm amazingly incompetent at Soar programming, so first I need to learn.

LEGOEval


![](https://github.com/yooli23/LEGOEval/blob/master/banner.png) # LEGOEval LEGOEval is a toolkit for dialogue system evaluation via crowdsourcing, see our [demo video](https://www.youtube.com/watch?v=Dg6mafRGOpg&ab_channel=JoshArnold).

lemonUbyExport


> This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

Leo-III


Leo-III [SWB16] is an automated theorem prover for (polymorphic) higher-order logic which supports all common TPTP dialects, including THF, TFF and FOF as well as their rank-1 polymorphic derivatives [SWB17]. It is based on a paramodulation calculus with ordering constraints and, in tradition of its predecessor LEO-II [BP15], heavily relies on cooperation with external (mostly first-order) theorem provers for increased performance. Nevertheless, Leo-III can also be used as a stand-alone prover without employing any external cooperation.

LeoPARD


This project contains the data structure framework LeoPARD underlying the Leo-III prover.

let-over-lambda


Add symbols for anaphoric macro internals, `IT`, `THIS`, and `SELF` to package exports for better end-user experience. Will be available in April 2015 release of Quicklisp.

libarchive


This distribution bundle includes the following components:

* libarchive: a library for reading and writing streaming archives
* tar: the 'bsdtar' program is a full-featured 'tar' implementation built on libarchive
* cpio: the 'bsdcpio' program is a different interface to essentially the same functionality
* cat: the 'bsdcat' program is a simple replacement tool for zcat, bzcat, xzcat, and such
* examples: some small example programs that you may find useful
* examples/minitar: a compact sample demonstrating use of libarchive
* contrib: various items sent to me by third parties; please contact the authors with any questions

libreoffice-impress-templates


For example, the `libreoffice-templates` package (description: "Additional set of templates for LibreOffice") that is available in Ubuntu, only contains the 8 default templates that come with LibreOffice itself. Installing this package thus has no effect on the templates available to the user in Impress, and no other template packages appear to be available.

linguist


An AI running on [NuPIC](https://github.com/numenta/nupic) using the CLA to build a model of language and predict the rest of a user's word, phrase, or sentence.

LinkedHypernymsDataset


For languages other than English you need to download TreeTagger from http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/ and install it. There is a special file in the GATE directory, plugins/Tagger_Framework/resources/TreeTagger/tree-tagger-LANG-gate, which must be specified and pointed at the installed TreeTagger application (this file is generated during the TreeTagger installation step in the cmd/ directory).

linkipedia


Linkipedia is an entity extraction and linking service that you can set up yourself against a set of ontologies and other RDF datasets you choose. It will use the interlinks available in the RDF to score the overall informativeness of each term and use the context of the text you submit to find the closest matches.

LipNet


### Random split (Unmaintained)

Create a symlink from ``training/random_split/datasets/video`` to your video dataset folder (which contains the ``s*`` directories).

lisp5000


A small dialect of Common Lisp based upon lisp500

llama


LLAMA is a graph storage and analysis system that supports mutability and out-of-memory execution built on top of the compressed sparse row (CSR) representation. Its goal is to perform comparably to immutable main-memory analysis systems for graphs that fit in memory and to match or outperform existing out-of-memory analysis systems for graphs that exceed main memory.
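A minimal sketch of the CSR layout itself, which stores each vertex's neighbours as a slice of one flat array indexed by an offsets array (the toy graph is hypothetical):

```python
# Build a CSR adjacency from an edge list: offsets[v]..offsets[v+1]
# delimits vertex v's slice of the flat `neighbors` array.
edges = [(0, 1), (0, 2), (1, 2), (2, 0), (2, 3)]
num_nodes = 4

counts = [0] * num_nodes
for src, _ in edges:
    counts[src] += 1

offsets = [0]
for c in counts:
    offsets.append(offsets[-1] + c)

cursor = offsets[:-1].copy()
neighbors = [0] * len(edges)
for src, dst in edges:
    neighbors[cursor[src]] = dst
    cursor[src] += 1

def out_neighbors(v):
    return neighbors[offsets[v]:offsets[v + 1]]

print(out_neighbors(2))  # -> [0, 3]
```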

llamapun


At its core, **llamapun** is a [Rust](http://rust-lang.org/) implementation that aims at a minimal footprint and optimal runtime, in order to safely scale to corpora of millions of documents and tens of billions of tokens.

Logical-Document-Structure


To start a training run, use **lstm_training.py** with custom parameters like the number of LSTM units, dropout, the IOB file path, etc. You can call the important scripts with -h to get help. All output of a training run will land in the *modelzoo* directory. To configure and run training over a parameter grid, use **runner.py** (just change it for docker). To get an overview of the performance of the models trained via **runner.py**, it generates a CSV-formatted file containing metrics that can be visualised with **swarmplot_scores.py**. The *modelzoo* directory contains examples (only one model was committed to this repo).

logicmoo_ec


![#f03c15](https://placehold.it/15/f03c15/000000?text=+) **NOTICE**: This is a work in progress and is being updated weekly.

logicmoo_nlu


Quite a bit more output; about 123 seconds later you will see something like:

````
% List of possible data transformations
% /home/nlutest/.local/share/swi-prolog/pack/logicmoo_nlu/prolog/logicmoo_nlu/nl_pipeline.pl:592
% installed_converter(parser_all, input_to_acetext(+input, -acetext)).
% installed_converter(parser_all, tokens_to_acetext(+tokens, -acetext)).
% installed_converter(get_ape_results, ace_to_pkif(+acetext, -kif(p))).
% installed_converter(ace_to_drs, call_tokenizer(+acetext, guess+on, -sentences:set, -sentencesToParse)).
% installed_converter(ace_to_drs, paragraphs_to_drs(+sentences:list, guess+on, catch+off, startID+1, -sentences, -syntaxTrees, -drs0, -messages, -time)).
% installed_converter(ace_to_drs, call_parser(+sentences:list, startID+1, -syntaxtrees, -drs0:reversed_set)).
% installed_converter(ace_to_drs, acetext_to_drs(+acetext, -sentences:set, -syntaxTrees, -drs0, -messages)).
% installed_converter(tokenizer, tokenize(+input, -tokens)).
% installed_converter(tokens_to_sentences, tokens_to_sentences(+tokens:set, -sentences:set)).
% installed_converter(tokens_to_sentences, tokens_to_paragraphs(+tokens:set, -sentences:set)).
% installed_converter(drs_fol_pnf, drs_pnf(+drs, -fol)).
% installed_converter(drs_fol_pnf, drs_fol(+drs, -pnf)).
% installed_converter(get_ape_results, fol_to_pkif(+pnf, -kif(p))).
% installed_converter(get_ape_results, fol_to_pkif(+fol, -kif(f))).
% installed_converter(get_ape_results, fol_to_pkif(+drs, -kif(d))).
% installed_converter(get_ape_results, fol_to_pkif(+sdrs, -kif(s))).
% installed_converter(drs_to_ace, drs_to_ace(+drs0, -paraphrase:set)).
% installed_converter(drs_to_drslist, drslist_to_ace(+drs0:list, -paraphrase:set)).
% installed_converter(drs_to_drslist, drs_to_drslist(+drs0, -drs:set)).
% installed_converter(drs_to_sdrs, drs_to_sdrs(+drs, -sdrs)).
% installed_converter(parser_chat80, into_text80(+tokens, -text80)).
% installed_converter(parser_chat80, sent_to_parsed(+text80, -syntaxTree80)).
% installed_converter(parser_chat80, i_sentence(+syntaxTree80, -i_sentence)).
% installed_converter(parser_chat80, clausify80(+i_sentence, -clausify80)).
% installed_converter(parser_chat80, simplify80(+clausify80, -simplify80)).
% installed_converter(parser_chat80, qplan(+simplify80, -qplan)).
% installed_converter(parser_chat80, results80(+qplan, -results80)).
% /home/nlutest/.local/share/swi-prolog/pack/logicmoo_nlu/prolog/logicmoo_nlu/nl_pipeline.pl:595
% parser_all_complete.......
chat80("Which countries have a population exceeding 10 million?").
chat80("Which countries contain a city?").
chat80("Which countries contain 2 cities?").
chat80("Which countries contain 3 cities?").
chat80("Which countries contain more than 3 cities?").
chat80("Which countries contain more than 2 cities?").
chat80("Which continents contain more than 4 cities?").
chat80("Which asian countries have a population exceeding 10 million?").
chat80("What is the average area of the countries in each continent?").
chat80("What is a river?").
chat80("What is a river that is in asia?").
chat80("Which rivers are not in asia?").
chat80("What is a river that is not happy?").
chat80("does afghanistan border china?").
chat80("what is the capital of upper_volta?").
chat80("where is the largest country?").
chat80("which countries are european?").
chat80("which country's capital is london?").
chat80("which is the largest african country?").
chat80("how large is the smallest american country?").
chat80("what is the ocean that borders african countries and that borders asian countries?").
chat80("what are the capitals of the countries bordering the baltic?").
chat80("how many countries does the danube flow through?").
chat80("what is the total area of countries south of the equator and not in australasia?").
chat80("what is the average area of the countries in each continent?").
chat80("is there more than one country in each continent?").
chat80("is there some ocean that does not border any country? ").
chat80("what are the countries from which a river flows into the black_sea?").
chat80("what are the continents no country in which contains more than two cities whose population exceeds 1 million? ").
chat80("which country bordering the mediterranean borders a country that is bordered by a country whose population exceeds the population of india?").
chat80("which countries have a population exceeding 10 million?").
chat80("which countries with a population exceeding 10 million border the atlantic?").
chat80("what percentage of countries border each ocean?").
chat80("what countries are there in europe?").
chat80([which, is, the, largest, african, country, ?]).
chat80("which countries are bordered by two seas?", [[egypt, iran, israel, saudi_arabia, turkey]]).
chat80("How many rivers are not in asia?", 25).
chat80("How many rivers are in asia?", 16).
chat80("How many asian countries have a population exceeding 10 million?", 20).
chat80("How many countries have a population exceeding 10 million?", 50).
chat80("What are the continents in which no country contains more than 3 cities?", [africa, antarctica, australasia, europe]).
chat80("What are the continents not containing a country?", [antarctica]).
chat80("What are the continents no country in which contains more than two cities whose population exceeds 1 million ?", [africa, antarctica, australasia]).
chat80("What are the continents in which no country contains more than two cities whose population exceeds 1 million?", [africa, antarctica, australasia]).
chat80("What are the continents containing a country in which contains more than two cities whose population exceeds 1 million?", [america, asia, europe]).
````

logicmoo_nlu_old


This NLU/NLG toolkit combines the following projects into a usable pipeline

logicmoo_planners


With PDDL, Boolean variables are created from the PDDL predicates. Variables are named after the PDDL predicates, `variable().` Each variable contains exactly two values (one `true`, one `false`) of the form `value(, )`. Note that with PDDL, variables and values are named identically.

logseq


[![latest release version](https://img.shields.io/github/v/release/logseq/logseq)](https://github.com/logseq/logseq/releases) [![License](https://img.shields.io/github/license/logseq/logseq?color=blue)](https://github.com/logseq/logseq/blob/master/LICENSE.md) [![Twitter follow](https://img.shields.io/badge/follow-%40logseq-blue.svg?style=flat&logo=twitter)](https://twitter.com/logseq) [![discord](https://img.shields.io/discord/725182569297215569?label=discord&logo=Discord&color=blue)](https://discord.gg/KpN4eHY) [![total](https://opencollective.com/logseq/tiers/badge.svg?color=blue)](https://opencollective.com/logseq)

logtalk2


The overall copyright and permission notice for Logtalk can be found in the "LICENSE.txt" file in this directory. Logtalk follows the Artistic License 2.0. The copyright notice and license applies to all files in this release (including sources, documentation, and examples) unless otherwise explicitly stated.

logtalk3


This file is part of Logtalk Copyright 1998-2016 Paulo Moura

lpnes


This is a collection of solutions to exercises found in the Learn Prolog Now! textbook by Patrick Blackburn, Johan Bos, and Kristina Striegnitz.

lps-demo-web


This repository holds the frontend web app for the [lps.js](https://github.com/mauris/lps.js) demonstration website, made using [Angular framework](https://angular.io/) and bundled with Webpack. The server-side repository of the web app can be found at https://github.com/mauris/lps-demo-web-api

lrec2016-ubyline


Ubyline is an Apache-licensed, web-based sense annotation tool whose user interface is optimized for lexical sample data. Ubyline supports a wide range of sense inventories in several languages, including WordNet and GermaNet.

LS2


lsdsem2017-story-cloze


This repository contains the code needed to reproduce the results reported in Bugert et al., *LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test*.

lua-signal


This is a signal library for Lua 5.1. It depends on ANSI C signals and has some extensions that are available in POSIX, such as kill().

lucida


Lucida is a speech and vision based intelligent personal assistant inspired by [Sirius](http://sirius.clarity-lab.org). Visit [our website](http://lucida.ai) for tutorial, and [Lucida-users](http://groups.google.com/forum/#!forum/lucida-users) for help. The project is released under [BSD license](LICENSE), except certain submodules contain their own specific licensing information. We would love to have your help on improving Lucida, and see [CONTRIBUTING](CONTRIBUTING.md) for more details.

Ludii


Ludii is a general game system being developed as part of the [ERC-funded Digital Ludeme Project (DLP)](http://ludeme.eu/). This repository hosts the publicly available source code for Ludii. A precompiled build (Ludii.JAR) can be downloaded from [Ludii's downloads page](https://ludii.games/download.php).

LudiiAI


This repository is now deprecated; all AI source code for Ludii is included in the main open-source Ludii repo at https://github.com/Ludeme/Ludii.

LudiiAICompetition


This repository, as well as the [Ludii Example AI repository](https://github.com/Ludeme/LudiiExampleAI), are written for the latest public pre-release of Ludii available at the time of this writing: **Ludii 0.9.3**. **This is the version of Ludii that we will use for the AI competition at CoG 2020**. We do plan to release newer versions of Ludii in between, but the API may not remain 100% the same. Therefore **we now fix the version that will be used for the competition at CoG 2020 at 0.9.3**.

LudumDare


This makes a settings.db file in your LD install root. Please don't check this file in.

lwaptk


In order to compile some of the examples, you will also need a version >= 1.49 of the Boost C++ libraries available on your system. You can check the version you have either manually by looking at the macro defined in `boost/version.hpp` or, on debian systems, by running `dpkg -s libboost-dev`. Be aware that systems such as the Ubuntu 12.04LTS release ship with older versions of Boost.

MaastCTS2


Source code of the MaastCTS2 agent for General Video Game playing. Champion of the 2016 GVG-AI Single-Player Track, and runner-up of the 2016 GVG-AI Two-Player Track. This repository contains code for both the Single-Player and Two-Player variants.

macaw


Here is an example of the Telegram interface for Macaw. It supports multi-modal interactions (text, speech, click, etc).

MADP


MultiAgentDecisionProcess (MADP) is a toolbox for scientific research in decision-theoretic planning and learning in multiagent systems. It is designed to be rather general, but most effort has been put in planning algorithms for discrete Dec-POMDPs.

magentix


Magentix2 is an agent platform for open Multiagent Systems. Its main objective is to bring agent technology to real domains: business, industry, logistics, e-commerce, health-care, etc.

magpie-corpus


This is the **MAGPIE Corpus**, a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work *at the end of the day*.' for the idiom 'at the end of the day'. The corpus contains 56,622 instances, covering 1,756 different idiom types, all of which have crowdsourced meaning labels. For details, see our [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.35.pdf).

maia-chess


A collection of chess engines that play like humans, from ELO 1100 to 1900.

males


Setting the correct parameters:

1. Define the parameter space in the ATP.ini.
2. Check the settings in setup.ini. In particular, the PROBLEMS parameter under search must be a file which contains the training problems.
3. Define the start strategies in strategies.ini.

marelle


This will install marelle for all users, putting the executable in `/usr/local/bin/marelle`.

MarI-O


MarI/O is a program made of neural networks and genetic algorithms that kicks butt at Super Mario World.

Marpa--R2


This is a working repository for RELEASE 2 of Marpa.

masscan


This is the fastest Internet port scanner. It can scan the entire Internet in under 6 minutes, transmitting 10 million packets per second: at that rate, one probe to each of the roughly 3.7 billion routable IPv4 addresses takes on the order of 370 seconds.

massim_2020


[scenario.md](docs/scenario.md) contains the description of the current scenario.

Master-Thesis


This is my master's thesis with presentation slides.

mat2vec


1. Make sure you have `python3.6` and the `pip` module installed. We recommend using [conda environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).
2. Navigate to the root folder of this repository (the same folder that contains this README file) and run `pip install -r requirements.txt`. Note: if you are using a conda env and any packages fail to compile during this step, you may need to first install those packages separately with `conda install package_name`.
3. Wait for all the requirements to be downloaded and installed.
4. Run `python setup.py install` to install this module. This will also download the Word2vec model files. If the download fails, manually download the [model](https://storage.googleapis.com/mat2vec/pretrained_embeddings), [word embeddings](https://storage.googleapis.com/mat2vec/pretrained_embeddings.wv.vectors.npy) and [output embeddings](https://storage.googleapis.com/mat2vec/pretrained_embeddings.trainables.syn1neg.npy) and put them in mat2vec/training/models.
5. Finalize your chemdataextractor installation by executing ``cde data download`` (you may need to restart your virtual environment for the cde command line interface to be found).
6. You are ready to go!
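Once installed, the pretrained embeddings can presumably be loaded with gensim, along the lines of the sketch below; the model path follows step 4 above, and the query term is just an illustration:

```python
# Load the downloaded embeddings (assumed to be in gensim's Word2Vec
# format) and query for nearest neighbours of a term.
from gensim.models import Word2Vec

model = Word2Vec.load("mat2vec/training/models/pretrained_embeddings")
print(model.wv.most_similar("thermoelectric", topn=5))
```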

mathlib


[Mathlib](https://leanprover-community.github.io) is a user-maintained library for the [Lean theorem prover](https://leanprover.github.io). It contains both programming infrastructure and mathematics, as well as tactics that use the former and allow developing the latter.

mc-aixi


This software package consists of a simple implementation of MC-AIXI-CTW, an intelligent agent that learns from experience how to perform well in a wide variety of environments. This includes, but is not limited to, the example games provided in this package, such as Tic Tac Toe, Pacman, and Kuhn Poker.

mcapl


This software distribution consists of:

mdr


MDR is a library to detect and extract listing data from HTML pages. It is based on *Finding and Extracting Data Records from Web Pages*, but replaces the similarity measure with the tree alignment proposed in *Web Data Extraction Based on Partial Tree Alignment* and *Automatic Wrapper Adaptation by Tree Edit Distance Matching*.
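For illustration, here is a minimal Python sketch of the "simple tree matching" similarity that partial tree alignment builds on; trees are `(tag, children)` tuples here, whereas MDR's own implementation works on parsed HTML:

```python
# Simple tree matching: count the maximum number of matching node
# pairs under an order-preserving alignment of children (dynamic
# programming over child sequences, recursing into subtrees).
def simple_tree_match(a, b):
    if a[0] != b[0]:            # roots with different tags don't match
        return 0
    m, n = len(a[1]), len(b[1])
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = max(M[i][j - 1], M[i - 1][j],
                          M[i - 1][j - 1]
                          + simple_tree_match(a[1][i - 1], b[1][j - 1]))
    return M[m][n] + 1          # +1 for the matched roots

t1 = ("tr", [("td", []), ("td", []), ("td", [])])
t2 = ("tr", [("td", []), ("td", [])])
print(simple_tree_match(t1, t2))  # -> 3 (root plus two aligned cells)
```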

mdswriter


MDSWriter is a software for manually creating multi-document summarization corpora and a platform for developing complex annotation tasks spanning multiple steps.

media_frames_corpus


This repository contains the metadata for all articles in the Media Frames Corpus (version 2), along with the beginning and end (and associated framing dimension) of all annotated spans of text. All of this information is stored as JSON in the annotations/ directory, with one file for each issue (immigration, smoking, and same-sex marriage). To obtain the actual articles, however, it is necessary to have access to Lexis-Nexis academic.

Megatron-LM


[Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor and pipeline), and multi-node pre-training of [GPT](https://arxiv.org/abs/2005.14165) and [BERT](https://arxiv.org/pdf/1810.04805.pdf) using mixed precision.

MEPK


# Multi-agent Epistemic Planner with Knowledge

This is a planner for multi-agent epistemic planning. The code is continuously updated; we are planning to release a brand-new version of MEPK, and more details about it will be presented. You are welcome to follow this work.

mesh-transformer-jax


A Haiku library using the `xmap` operator in JAX for model parallelism of transformers.

meta-dataset


This repository contains accompanying code for the article introducing Meta-Dataset, [arxiv.org/abs/1903.03096](https://arxiv.org/abs/1903.03096).

metagol


Metagol is an inductive logic programming (ILP) system based on the meta-interpretive learning framework. Please contact Andrew Cropper (a.cropper13@imperial.ac.uk) with any questions / bugs.

Mi-Band


`miBand.customVibration(times, on_time, off_time);` where `times` is an int value determining **how many times** the band will vibrate (I recommend using between 1 and 3 times only), `on_time` is the time in milliseconds that each vibration will be **on** (not more than 500 milliseconds), and `off_time` is the **pause** between each consecutive vibration.

### LED Color

To change the LED color, you can use

miaow


MIAOW is an open source implementation of the AMD Southern Islands GPU ISA.

microccg


MicroCCG ======== MicroCCG is an adversarial Combinatory Categorial Grammar (CCG) planner for the Real-Time Strategy (RTS) Game microRTS. This agent was developed to participate in the CIG 2018 microRTS tournament. Details about microRTS can be found on the [microRTS Github page](https://github.com/santiontanon/microrts).

microrts


microRTS is a small implementation of an RTS game, designed to perform AI research. The advantage of using microRTS with respect to using a full-fledged game like Wargus or StarCraft (using BWAPI) is that microRTS is much simpler, and can be used to quickly test theoretical ideas, before moving on to full-fledged RTS games.

Miser


Miser is a Python library that can be used for writing scripts that'll help you project costs and figure out how to accumulate money. It's in unstable alpha.

MISP


MISP, Malware Information Sharing Platform and Threat Sharing, is an open source software solution for collecting, storing, distributing and sharing cyber security indicators and threats about cyber security incident analysis and malware analysis. MISP is designed by and for incident analysts, security and ICT professionals and malware reversers to support their day-to-day operations in sharing structured information efficiently.

MITIE


This project provides free (even for commercial use) [state-of-the-art](../../wiki/Evaluation) information extraction tools. The current release includes tools for performing [named entity extraction](http://blog.dlib.net/2014/04/mitie-completely-free-and-state-of-art.html) and [binary relation detection](http://blog.dlib.net/2014/07/mitie-v02-released-now-includes-python.html) as well as tools for training custom extractors and relation detectors.

mizar-items


A further problem is to report the results. The goal is a dependency table that shows, for each Mizar "item", which Mizar items it depends upon.

ml-cread


This is the source code of the paper [CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues](https://arxiv.org/abs/2105.09914). In this work, we propose a novel joint learning framework of modeling coreference resolution and query rewriting for complex, multi-turn dialogue understanding. The coreference resolution [MuDoCo](https://github.com/facebookresearch/mudoco) dataset augmented with our query rewrite annotation is released as well.

mlj19-iggp


This repository consists of the code used to run the experiment and three zip files:

MMT


modern_perl_book


Perl is a popular, powerful, and widely used programming language. Over its twenty year lifespan, it's powered millions of systems worldwide, moving trillions of dollars. More importantly, it's helped countless people get their work done effectively.

Mojo-Discord


This is a set of Perl Modules designed to implement parts of the Discord public API, built on Mojo::UserAgent and Mojo::IOLoop.

mojo-pg


A tiny wrapper around [DBD::Pg](https://metacpan.org/pod/DBD::Pg) that makes [PostgreSQL](https://www.postgresql.org) a lot of fun to use with the [Mojolicious](https://mojolicious.org) real-time web framework.

MOLIERE


This repo contains the code described in the publication: *MOLIERE: Automatic Biomedical Hypothesis Generation System*

morbig


Morbig is a parser for shell scripts written in the POSIX shell script language. It parses the scripts statically, that is without executing them, and constructs a concrete syntax tree for each of them. The concrete syntax trees are built using constructors according to the shell grammar of the POSIX standard.

moses


MOSES is a machine-learning tool; it is an "evolutionary program learner". It is capable of learning short programs that capture patterns in input datasets. These programs can be output in either the `combo` programming language, or in python. For a given data input, the programs will roughly recreate the dataset on which they were trained.

mowgli-in-the-jungle


Mowgli-in-the-jungle is a library of functionalities that help build commonsense QA solutions on a variety of tasks.

mppp


mp++ is a C++11 library for multiprecision arithmetic, currently supporting arbitrary-precision integers, rationals and floats, and quadruple-precision floats.

MPS


muc3


# muc3

This is the text corpus created by the [DARPA TIPSTER Program](http://www.itl.nist.gov/iaui/894.02/related_projects/tipster/) for the third [Message Understanding Conference (MUC-3)](https://en.wikipedia.org/wiki/Message_Understanding_Conference) in 1991, and reused for MUC-4 in 1992, before finding a permanent home at the [National Institute of Standards and Technology (NIST)](http://www.nist.gov/) when the TIPSTER Program finished. The corpus contains news reports covering terrorist activities in Latin America.

MUD_Interpretors


This program is an implementation of it.

MUD_WebTHEA


A collection of modules for parsing and manipulating OWL2 ontologies in Prolog. It is developed with SWI-Prolog in mind, but the goal is to maximize portability with other prologs, such as Yap and XSB.

MulVAL


MulVAL is a cybersecurity reasoning engine that can be applied on top of multiple contexts (cloud, IoT, enterprise networks, etc.).

muzero-general


A commented and [documented](https://github.com/werner-duvaud/muzero-general/wiki/MuZero-Documentation) implementation of MuZero based on the Google DeepMind [paper](https://arxiv.org/abs/1911.08265) (Nov 2019) and the associated [pseudocode](https://arxiv.org/src/1911.08265v2/anc/pseudocode.py). It is designed to be easily adaptable to any game or reinforcement learning environment (like [gym](https://github.com/openai/gym)). You only need to add a [game file](https://github.com/werner-duvaud/muzero-general/tree/master/games) with the hyperparameters and the game class. Please refer to the [documentation](https://github.com/werner-duvaud/muzero-general/wiki/MuZero-Documentation) and the [example](https://github.com/werner-duvaud/muzero-general/blob/master/games/cartpole.py).

mycroft-core


Mycroft is a hackable open source voice assistant.

mylar


If you now take a look at the local Meteor MongoDB (with a GUI like Robomongo or the Meteor mongo shell), you will see a field named "message_enc" that contains the encryption of the message. There should be no field "message", which before contained the unencrypted data and will only appear on the client when the message is successfully decrypted.

myshinytemplate.com


This is the official website for MyShinyTemplate http://myshinytemplate.com

naacl-bea2016-writing-study


InViEdit is a web-based writing environment for evaluating methods in intelligent writing assistance.

NAF


This document describes NAF, the NLP Annotation Format. NAF is a stand-off, multilayered annotation schema for representing linguistic annotations.

NAMAS


This project contains the Abs. neural abstractive summarization system from the paper

narchy


**Tasks** can arrive at any time. There are no restrictions on their content as long as they can be expressed in __Narsese__ (the I/O language of NARS).

- By default, NARS makes *no assumptions* about the meaning or truth value of input beliefs and goals.
- How to choose proper inputs and interpret possible outputs for each application is an *open problem* to be solved by its users.

narsese


This is a SWI-Prolog pack that runs Narsese, like OpenNARS.

nativefier


Nativefier is a command line tool that allows you to easily create a desktop application for any web site with succinct and minimal configuration. Apps are wrapped by [Electron](http://electron.atom.io) in an OS executable (`.app`, `.exe`, etc.) for use on Windows, OSX and Linux.

Natron


Natron is a free open-source (MPLv2 license) video compositing software, similar in functionality to Adobe After Effects or Nuke by The Foundry.

Natural-Language-Processing


This repository contains the source code of the project for sentiment analysis of a given text using the publicly available lexical resource called [SentiWordNet](http://sentiwordnet.isti.cnr.it/). SentiWordNet files are to be downloaded and added to the folder to compile this source code. Input is to be given in a file named input which is to be placed in the project folder.

NaturalLanguageForm


An experimental form that uses natural language instead of the usual form layout. Values are entered using custom input elements.

NaturalLI


NaturalLI is a Natural Logic reasoning engine aimed at fast inference from a large database of known facts. The project's primary goal is to infer whether arbitrary common-sense facts are true, given a large database of known facts. The system is described in:

naturalproofs


This repo contains:

neural-cli


After this runs, it will print a plot of the hypothesis error against the size of the training set that the weights were learned on. Below is an example graph plotted from the iris dataset.
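
The learning-curve idea described here is easy to reproduce; below is a minimal sketch, assuming scikit-learn and matplotlib as illustrative stand-ins rather than neural-cli's own code:

```python
# Minimal learning-curve sketch: train on growing subsets of the iris
# data and plot validation error against training-set size. Library
# choices (scikit-learn, matplotlib) are illustrative, not neural-cli's.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

sizes, errors = range(10, len(X_train), 10), []
for n in sizes:
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    errors.append(1.0 - model.score(X_val, y_val))  # validation error

plt.plot(list(sizes), errors)
plt.xlabel("training set size")
plt.ylabel("validation error")
plt.show()
```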

neural-enhance


A list of example command lines you can use with the pre-trained models provided in the GitHub releases:

neural-style


An implementation of [neural style][paper] in TensorFlow.

neural-style-tf


This is a TensorFlow implementation of several techniques described in the papers: * [Image Style Transfer Using Convolutional Neural Networks](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf) by Leon A. Gatys, Alexander S. Ecker, Matthias Bethge * [Artistic style transfer for videos](https://arxiv.org/abs/1604.08610) by Manuel Ruder, Alexey Dosovitskiy, Thomas Brox * [Preserving Color in Neural Artistic Style Transfer](https://arxiv.org/abs/1606.05897) by Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman

Neural_DRS


This folder contains scripts to use our neural seq2seq model to produce DRSs. It contains code to reproduce either our [TACL paper](https://www.aclweb.org/anthology/Q18-1043.pdf), our [IWCS paper](https://www.aclweb.org/anthology/W19-0504/) or our [EMNLP paper](https://www.aclweb.org/anthology/2020.emnlp-main.371.pdf). The models rely on [OpenNMT](http://opennmt.net/), [Marian](https://marian-nmt.github.io/) and [AllenNLP](https://allennlp.org/), respectively.

newspaper


"Newspaper is an amazing python library for extracting & curating articles." -- `tweeted by`_ Kenneth Reitz, Author of `requests`_

NewsScraper


A project that, at its core, scrapes news data from the internet and extracts binary relations from the news using ReVerb.

ngPAWS


ngPAWS (pronounced n-g-paws) is an authoring system based on the Professional Adventure Writing System, thus the name ngPAWS stands for "next generation PAWS".

nl2bash


This repository contains the data and source code release of the paper: [NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System](http://victorialin.net/pubs/nl2bash.pdf).

NL2code


A syntactic neural model for parsing natural language to executable code [paper](https://arxiv.org/abs/1704.01696).

nlp-lotr


A lot of these names were places, and many were of little importance or were not proper nouns at all, so only the first 39 names and 27 places were kept, in `names-edited.txt` and `places-edited.txt`.

NLP-progress


This document aims to track the progress in Natural Language Processing (NLP) and give an overview of the state-of-the-art (SOTA) across the most common NLP tasks and their corresponding datasets.

nlprolog


This is an implementation of [NLProlog](todo), a method for approaching Question Answering tasks with Prolog-like reasoning over natural language statements.

nlu-server


A server that supplies web-services for NLU (Natural Language Understanding) and NLG (Natural Language Generation) for a negotiation agent.

NLU_datasets_with_task_oriented_dialogue


There is an [implementation](https://github.com/sz128/slot_filling_and_intent_detection_of_SLU) of joint training of slot filling and intent detection for SLU, which is evaluated on ATIS and SNIPS datasets.

nmt


Lastly, we haven't mentioned *projection_layer*, which is a dense matrix that turns the top hidden states into logit vectors of dimension V. We illustrate this process at the top of Figure 2.
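
As a rough illustration of that projection (a plain-NumPy sketch with made-up sizes, not the tutorial's actual TensorFlow code):

```python
# Sketch of a projection layer: map a decoder hidden state of size H
# to logits over a vocabulary of size V via one dense weight matrix.
import numpy as np

H, V = 512, 10000                 # hidden size, vocab size (illustrative)
W = np.random.randn(H, V) * 0.01  # dense projection matrix (learned in practice)

hidden_state = np.random.randn(H)  # top decoder hidden state at one time step
logits = hidden_state @ W          # unnormalized score per vocabulary word
probs = np.exp(logits - logits.max())
probs /= probs.sum()               # softmax over the vocabulary
```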

nomic


This is an instance of the game [Nomic](https://en.wikipedia.org/wiki/Nomic) driven by Github interactions:

Nomyx


A Nomic game in Haskell

normalization


This script implements the two most common algorithms for database normalization, BCNF decomposition and 3NF synthesis. It was written as an exercise while studying for an exam in a databases class.
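
Both algorithms rest on computing attribute closures under a set of functional dependencies; a minimal sketch of that shared core (a generic illustration, not the script's actual interface):

```python
# Attribute-closure computation, the primitive underlying both BCNF
# decomposition and 3NF synthesis. FDs are (lhs, rhs) pairs of frozensets.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example: R(A, B, C) with A -> B and B -> C; A+ = {A, B, C},
# so A is a key and the FD B -> C violates BCNF.
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(closure({"A"}, fds))  # {'A', 'B', 'C'}
```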

notably


The initial code is based on Yakuake which is a drop down terminal emulator based on KDE Konsole technology.

NOUS


# NOUS: Construction, Querying and Reasoning in Dynamic Knowledge Graphs

Automated construction of knowledge graphs (KG) remains an expensive technical challenge that is beyond the reach of most enterprises and academic institutions. NOUS is an end-to-end framework for developing custom knowledge-graph-driven analytics for arbitrary application domains. The uniqueness of our system lies in A) its combination of curated KGs with knowledge extracted from unstructured text, B) its support for advanced trending and explanatory questions on a dynamic KG, and C) its ability to answer queries where the answer is embedded across multiple data sources.

NOUS-KG


# NOUS: Construction and Querying of Dynamic Knowledge Graphs

Automated construction of knowledge graphs remains an expensive technical challenge that is beyond the reach of most enterprises and academic institutions. NOUS is an end-to-end framework for developing custom knowledge-graph-driven analytics for arbitrary application domains. The uniqueness of our system lies in A) its combination of curated KGs with knowledge extracted from unstructured text, B) its support for advanced trending and explanatory questions on a dynamic KG, and C) its ability to answer queries where the answer is embedded across multiple data sources.

NTG-Papers


This repository presents a collection of previous research papers of Neural Text Generation (NTG), as well as a taxonomy constructed according to publication time, method paradigm or paper type.

nupic


The Numenta Platform for Intelligent Computing (**NuPIC**) is a machine intelligence platform that implements the [HTM learning algorithms](http://numenta.com/learn/hierarchical-temporal-memory-white-paper.html). HTM is a detailed computational theory of the neocortex. At the core of HTM are time-based continuous learning algorithms that store and recall spatial and temporal patterns. NuPIC is suited to a variety of problems, particularly anomaly detection and prediction of streaming data sources.

nut


An implementation of Cross-Language Structural Correspondence Learning (CLSCL). See [Prettenhofer2010]_ for a detailed description and [Prettenhofer2011]_ for more experiments and enhancements.

nutrition-facts


This is a web component that takes nutrition facts in JSON format and outputs a nicely formatted Nutrition Facts label with live text.

nvidia-docker


A signed copy of the [Contributor License Agreement](https://raw.githubusercontent.com/NVIDIA/nvidia-docker/master/CLA) needs to be provided to digits@nvidia.com before any change can be accepted.

oke-challenge-2016


This folder contains guidelines and materials for the Open Knowledge Extraction challenge at [ESWC 2016](http://2016.eswc-conferences.org/).

old-sirius


Lucida is a speech and vision based intelligent personal assistant based on Sirius. Visit the provided readmes in [lucida](lucida) for instructions to build Lucida and follow the instructions to build [lucida-suite here](http://sirius.clarity-lab.org/sirius-suite/). Post to [Lucida-users](http://groups.google.com/forum/#!forum/sirius-users) for more information and answers to questions. The project is released under [BSD license](LICENSE), except certain submodules contain their own specific licensing information. We would love to have your help on improving Lucida; see [CONTRIBUTING](CONTRIBUTING.md) for more details.

OLED


``OLED`` is an online ('single-pass') Inductive Logic Programming system for learning logical theories from data streams. It has been designed with the construction of knowledge bases for event recognition applications in mind, in the form of domain-specific axioms in the Event Calculus, i.e. rules that specify the conditions under which simple, low-level events initiate or terminate complex events. However, ``OLED`` can practically be used within any domain where ILP is applicable (preferably, large volumes of sequential data with a time-like structure).

ontological-pathfinding


Ontological Pathfinding (OP) is a scalable first-order rule mining algorithm. It achieves scalability via a series of parallelization and optimization techniques: a relational knowledge base model to apply inference rules in batches, a new rule mining algorithm that parallelizes the join queries, a novel partitioning algorithm to break the mining tasks into smaller independent sub-tasks, and a pruning strategy to eliminate unsound and resource-consuming rules before applying them. Combining these techniques, OP is the first rule mining algorithm that mines 36,625 inference rules from Freebase, the largest public knowledge base with 112 million entities and 388 million facts.

open-in-editor


This script allows opening your text editor from a link on a webpage or within a browser extension via MIME. See a short [[https://karlicoss.github.io/promnesia-demos/jump_to_editor.webm][demo]].

open-sesame


A frame-semantic parser for automatically detecting [FrameNet](https://framenet.icsi.berkeley.edu/fndrupal/) frames and their frame-elements from sentences. The model is based on softmax-margin segmental recurrent neural nets, described in our paper [Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold](https://arxiv.org/abs/1706.09528). An example of a frame-semantic parse is shown below

openalpr


OpenALPR is an open source *Automatic License Plate Recognition* library written in C++ with bindings in C#, Java, Node.js, and Python. The library analyzes images and video streams to identify license plates. The output is the text representation of any license plate characters.

openccg


OpenCCG is a system for parsing and generating text using [combinatory categorial grammar](https://en.wikipedia.org/wiki/Combinatory_categorial_grammar) for syntax and [hybrid logic dependency semantics](https://www.aclweb.org/anthology/P02-1041) for, well, the semantic representation.

opencog


OpenCog is a framework for developing AI systems, especially appropriate for integrative multi-algorithm systems, and artificial general intelligence systems. Though much work remains to be done, it currently contains a functional core framework, and a number of cognitive agents at varying levels of completion, some already displaying interesting and useful functionalities alone and in combination.

opencyc


The OpenCyc Platform is your gateway to the full power of Cyc, the world's largest and most complete general knowledge base and commonsense reasoning engine. OpenCyc contains hundreds of thousands of Cyc terms organized in a carefully designed ontology. Cycorp offers this ontology at no cost and encourages you to make use of, and extend, this ontology rather than starting your own from scratch. OpenCyc can be used as the basis of a wide variety of intelligent applications such as:

OpenEats


OpenEats is a recipe management site that allows users to create, share, and store recipes. OpenEats was created using Django, a Python web framework, and several Django plugins. Some of the features of OpenEats are:

OpenEphyra


This repository contains a resurrected and repaired version of OpenEphyra. It was branched from the latest version of OpenEphyra on SourceForge, as of March 2014, for use in the OpenCog artificial intelligence system (Copyright (C) 2014 [OpenCog Foundation](http://www.opencog.org/)).

openface


This research was supported by the National Science Foundation (NSF) under grant number CNS-1518865. Additional support was provided by the Intel Corporation, Google, Vodafone, NVIDIA, and the Conklin Kistler family fund. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and should not be attributed to their employers or funding sources.

openie


Open IE
======

This project contains the principal Open Information Extraction (Open IE) system from the University of Washington (UW). An Open IE system runs over sentences and creates extractions that represent relations in text. For example, consider the following sentence.

openiot


OpenIoT is a joint effort of prominent open source contributors towards enabling a new range of open large scale intelligent IoT (Internet-of-Things) applications according to a utility cloud computing delivery model.

OpenNMT-py


This is the [PyTorch](https://github.com/pytorch/pytorch) version of the [OpenNMT](https://opennmt.net) project, an open-source (MIT) neural machine translation framework. It is designed to be research friendly to try out new ideas in translation, summary, morphology, and many other domains. Some companies have proven the code to be production ready.

openprs


This README is somewhat outdated. Please see this page for more up-to-date information, in particular with respect to installation, which is now quite easy using robotpkg.

OpenSPIFe


See the license files for the original and updated contributions. The initial release of Open SPIFe to open source is given by the NASA Open Source Agreement and third-party licenses including Apache License 2.0, Eclipse Public License 1.0, Mozilla Public License 2.0, and GNU General Public License 3.0.

OpenSubtitlesDownload


**OpenSubtitlesDownload.py** is a small Linux software written in python, built to help you **quickly find and download subtitles for your favorite videos**. It can be used as a nautilus script, or as a regular application working under GNOME or KDE desktop environments. You can also use it in full CLI mode (Command Line Interface) on your NAS, Raspberry Pi or wherever you want to bundle it really!

OpenTimelineIO


OpenTimelineIO is an interchange format and API for editorial cut information. OTIO is not a container format for media, rather it contains information about the order and length of cuts and references to external media.

openwifimap-api


OpenWiFiMap is a database and map for free network WiFi routers (freifunk and others, too!).

open_nsfw


# Open NSFW model

This repo contains code for running Not Suitable for Work (NSFW) classification deep neural network Caffe models. Please refer to our [blog](https://yahooeng.tumblr.com/post/151148689421/open-sourcing-a-deep-learning-solution-for) post, which describes this work and experiments in more detail.

open_spiel


OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi- agent) zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect and imperfect information games, as well as traditional multiagent environments such as (partially- and fully- observable) grid worlds and social dilemmas. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. Games are represented as procedural extensive-form games, with some natural extensions. The core API and games are implemented in C++ and exposed to Python. Algorithms and tools are written both in C++ and Python. There is also a branch of pure Swift in the `swift` subdirectory.

open_type


This repository contains code for the following paper:

opinion_miner_deluxe


Opinion miner based on machine learning that can be trained using a list of KAF/NAF files. It is important to notice that the opinion miner module will not call any external module to obtain features. It will read all the features from the input KAF/NAF file, so you have to make sure that your input file contains all the required information in advance (tokens, terms, polarities, constituents, entities, dependencies...).

optical-illusion-dataset


A greatly reduced dataset of only images that have eye-bending patterns is here (**569** images, hand picked):

oqa


This is a repository for the code and data from the paper _Open Question Answering Over Curated and Extracted Knowledge Bases_ from KDD 2014. If you use any of these resources in a published paper, please use the following citation:

org-brain


You can think of =org-brain= as a combination of a wiki and a mind map, where each wiki page / mind map node is an =org-mode= file which resides in your =org-brain-path=, or a headline with an ID property in one of those files. These are called /entries/. Entries can be linked together, and you can then view the network of links as a mind map, using =M-x org-brain-visualize=. Here's [[https://www.youtube.com/watch?v=3EGOwfWok5s&t=][a video introducing =org-brain=]].

org-mind-map


# org-mind-map

This is an emacs package that creates graphviz directed graphs from org-mode files. This project is currently unmaintained! If anyone would like to take this over and fix up my (very messy) code, please let me know.

oro


ors


ossert


#### Pulse, for last year/quarter/month (amount + delta from total)

- Open and Closed Issues
- Open and Merged PRs
- Releases Count
- Downloads divergence
- Downloads degradation per release (will come later)
- Stale Branches Count

ossmeter


OSSMETER is an EU-funded research project that is developing a platform for monitoring the quality of open-source software projects.

packages-eclisp


ECL, like many other free programs, can be built and installed using a GNU tool called Autoconf. This is a set of automatically generated scripts that detect the features of your machine, such as the compiler type, existing libraries, and desired installation path, and configure ECL accordingly. The following procedure describes how to build ECL this way, and it applies to all platforms except the Windows ports.

Pacman


5. Generic tactic considerations: The ATTACK, HUNT and DEFEND tactics also share some common heuristic features. Each tactic penalises the two teammate agents for moving too close together. This means that they cover more area both when attacking - allowing more food to be eaten - and when defending - cornering the enemy more easily. This is also advantageous when hunting, as there is a greater chance that one of the agents will directly spot the invader.

paip-el


This code is a port of the Common Lisp programs found in the book [Paradigms of Artificial Intelligence Programming](http://norvig.com/paip.html) written by Peter Norvig. The goal of this project is to enable Emacs extension developers to easily use the programming techniques in PAIP. The project focuses on providing the developers with good modular software tools, rather than helping them to understand AI programming techniques. If you would like to learn it, I recommend you install the [SBCL](http://www.sbcl.org/), a Common Lisp implementation, run and hack the original code by Norvig in Common Lisp.

paip-lisp


This is an open-source repository for the book *Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp* by Peter Norvig (1992), and the code contained therein. The copyright has reverted to the author, who has shared it here under MIT license.

parallella-hw


This repository contains open source board and FPGA designs associated with the Parallella project.

parma


Note: There is a newer version of this codebase [here](https://github.com/hltcoe/parma2), and this should be considered deprecated.

parma2


Parma is a Predicate ARguMent Alignment tool, described in the following publications:

pattern-recognition-for-text-documents-classification


This is the result of my thesis for graduating in Electrical Engineering. It is a simple classification system with the following specs:

PCCoder


1. max_program_len dictates the maximum depth of the search.
2. The result file has a JSON dictionary line for each program predicted. The dictionary contains the predicted program and some details about the search, like the amount of time the search took and the final beam size.
3. Use --search_method to change the method from the default CAB search to DFS.

pddl-instances


This repository contains PDDL benchmark instances in a **consistent structure.**

pddl-tools


This code is made publicly available by SIFT, LLC under the terms of the 3-clause BSD license, attached as [[file:license.txt][license.txt]].

pddl2smv


This is a collection of translators from PDDL format to SMV format. They are all based on Fabio Patrizi's first version for translating PDDL files to [TLV](https://cs.nyu.edu/acsys/tlv/) files.

PDDLMemory


A short-term memory module for AI planning

PDDLtoGraph


PDDLtoGraph is a simple program for visualising PDDL files as relatedness and causal graphs, written in python. It also determines the diameter and the radius of the graph.

pdrt-sandbox


This is an implementation of the formal framework of Projective Discourse Representation Theory (Venhuizen et al. 2013; 2014), which is an extension of standard Discourse Representation Theory (Kamp 1981; Kamp & Reyle 1993) with projection pointers.

pdtb-parser


Replace the argument `examples/wsj_2300.txt` with the file or the folder containing the text files you want to parse. The resulting pipe and auxiliary files will be in a folder named `output` inside each folder containing text files. Note that when the argument is a folder, the parser will search for files ending in `.txt` in that folder and all of its subfolders.

pegasus


Pegasus WMS is a configurable system for mapping and executing scientific workflows over a wide range of computational infrastructures including laptops, campus clusters, supercomputers, grids, and commercial and academic clouds. Pegasus has been used to run workflows with up to 1 million tasks that process tens of terabytes of data at a time.

pen.el


** Vision

At its heart, emacs is an operating system based on a =tty=, which is a text stream.

pet


The PERICLES Extraction Tool (PET) is an open source (Apache 2 licensed) Java software for the extraction of significant information from the environment where digital objects are created and modified. This information supports object use and reuse, e.g. for a better long-term preservation of data. The Tool was developed entirely for the PERICLES EU project [http://www.pericles-project.eu/](http://www.pericles-project.eu/) by Fabio Corubolo, University of Liverpool, and Anna Eggers, Göttingen State and University Library.

petrarch


This will install the program with a command-line hook. You can now run the program using:

pfc


This is a modification of Tim Finin's PFC.

pharos


The Pharos static binary analysis framework is a project of the Software Engineering Institute at Carnegie Mellon University. The framework is designed to facilitate the automated analysis of binary programs. It uses the ROSE compiler infrastructure developed by Lawrence Livermore National Laboratory for disassembly, control flow analysis, instruction semantics, and more.

phoenix_pipeline


This system links a series of Python programs to convert the files which have been downloaded by scraper_connection.py to coded event data which is uploaded to a web site designated in the config file. The system processes a single day of information, but this can be derived from multiple text files.

pifuhd


This repository contains a pytorch implementation of "Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization".

piranha


Piranha is a C++11-based computer algebra library for the manipulation of algebraic objects, such as polynomials and Poisson series, commonly encountered in celestial mechanics.

pix2pixHD


This code borrows heavily from [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix).

plammar


A Prolog grammar written in Prolog, for parsing and serialising Prolog code.

planet


This project provides the open source implementation of the PlaNet agent introduced in [Learning Latent Dynamics for Planning from Pixels][paper]. PlaNet is a purely model-based reinforcement learning algorithm that solves control tasks from images by efficient planning in a learned latent space. PlaNet competes with top model-free methods in terms of final performance and training time while using substantially less interaction with the environment.

Planimation


A tool to animate plans generated from PDDL definitions.

planning-features


This project intends to be the most comprehensive and robust platform possible for extracting scalar features from PDDL domains and problem instances for AI planning problems.

platform


[Unison](http://unisonweb.org) is a new programming platform, currently under active development. This repo contains the code for the Unison node backend (written in Haskell, lives in the `node` directory, with source in `src`), and the Unison editor (currently written in Elm, found in the folder `editor-elm`).

plcop


This project makes use of two external repositories:

plOpenGL


What is plOpenGL
----------------

plOpenGL is an open source project that aims to develop a complete cross-platform SWI-Prolog binding for the OpenGL, GLU and GLUT libraries.

plsheet


You might be interested in http://www.j-paine.org/excelsior_2004/intro.html . This is an early version of my structure-discovery program, to which I gave a Prolog-TLI-style interface with a command language that could pass spreadsheets around as values and operate on them.

Polygames


This README is a work in progress, please feel very free to post issues - we are happy to help. To save computational power, you can find checkpoints here: http://dl.fbaipublicfiles.com/polygames/checkpoints/list.txt (feel free to open an issue to discuss which checkpoint you should use for which game/problem!).

polysemous


This project is an attempt to increase the accuracy of such queries by reducing the problems associated with polysemy by identifying the meaning of each word in a document (a process called sense tagging) and using those senses in place of words to search for a document.

portia


Portia is a tool that allows you to visually scrape websites without any programming knowledge required. With Portia you can annotate a web page to identify the data you wish to extract, and Portia will understand based on these annotations how to scrape data from similar pages.

PraxiconDB


The PRAXICON is a conceptual knowledge base in which concepts have both symbolic

Predicting-Diseases-From-Symptoms


This is an attempt to predict diseases from the given symptoms. A decision tree was trained on two datasets, one of which contained data scraped from [here](http://people.dbmi.columbia.edu/~friedma/Projects/DiseaseSymptomKB/index.html).

Predicting-Human-Card-Selection-in-Magic-The-Gathering-with-Contextual-Preference-Ranking


This will run the whole training for one epoch and regularly output the current progress, while saving the network.

PredictionIO


PredictionIO is an open source machine learning framework for developers and data scientists. It supports event collection, deployment of algorithms, evaluation, querying predictive results via REST APIs.

prgolog-old


This is a [situation calculus][SitCalc]- and [Golog][Golog]-based system written in [Mercury][Mercury]. See this [paper][Paper] or [these slides][Slides] for more information.

PrincipiaMetaphysica


This repository contains a computer-assisted formalization of Ed Zalta's Principia Metaphysica, which is based on Zalta's theory of abstract objects. This work is based on a second-order modal logic which employs relational type theory as a foundation.

ProbCog


**ProbCog** is a statistical relational learning and reasoning system that supports efficient learning and inference in relational domains. We provide an extensive set of open-source tools for both undirected and directed statistical relational models.

probe


This script contains the settings used for PROBE in IPC-7.

procedural-extraction


This code provides a framework for extracting procedural information from documents. Please refer to our ACL paper ([arXiv](https://arxiv.org/abs/1906.11384)) for further descriptions.

Project_CodeNet


A decade ago, Marc Andreessen [famously wrote](https://a16z.com/2011/08/20/why-software-is-eating-the-world/) that "software is eating the world." Software now permeates every part of our existence; Google services combine for [2 billion lines of code](https://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/), and a modern vehicle [contains around](https://www.technologyreview.com/2012/12/03/181350/many-cars-have-a-hundred-million-lines-of-code/) 100 million lines of code. It's a monumental challenge to create, debug, maintain, and update these complex software systems. Recently, a fast-growing discipline known as AI for Code aims to help software developers improve their productivity by automating the software engineering process. AI for Code researchers have been leveraging technologies like NLP and augmenting them with code analysis and compilation techniques to perform a myriad of practical tasks, such as code search, summarization, and completion, as well as code-to-code translation. The discipline isn't limited to academic research either: Ruchir Puri, IBM Research's chief research scientist, discussed in a recent [podcast](https://open.spotify.com/episode/7gHPbVBHEgSdrACTow7Gql) how technologies from AI for Code are being used to modernize legacy software by helping migrate monolithic applications to microservices for IBM's enterprise clients.

prolog-analyzer


A static analyzing tool for Prolog written in Clojure and Prolog. The tool uses specs for predicates based on [plspec](https://github.com/wysiib/plspec) to find errors statically.

prolog-checkers


A Player vs AI game of checkers implemented in Prolog.

prolog-dungeon-battle


Locked Door: As seen in the map above, there is a locked door just before the dragon.

Prolog-Graphplan


The [Graphplan algorithm](http://en.wikipedia.org/wiki/Graphplan) is an [automatic planning](http://en.wikipedia.org/wiki/Automated_planning) algorithm that can compute, given a set of rules, a plan of action to go from an initial state to a final state.
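
For a feel of the planning-graph machinery, here is a deliberately simplified sketch in Python rather than the repository's Prolog; it ignores Graphplan's mutex constraints and plan extraction, and the actions are hypothetical:

```python
# Simplified planning-graph expansion: grow fact levels by applying every
# action whose preconditions are reachable, until the goals appear or the
# graph levels off. Real Graphplan also tracks mutexes and extracts a plan
# by backward search; this sketch omits both.
def reachable_level(init, goals, actions, max_levels=20):
    facts = set(init)
    for level in range(max_levels):
        if goals <= facts:
            return level           # goals first reachable at this level
        new = set(facts)
        for pre, add in actions:   # action = (preconditions, add effects)
            if pre <= facts:
                new |= add
        if new == facts:
            return None            # levelled off: goals unreachable
        facts = new
    return None

actions = [(frozenset({"at_home"}), frozenset({"at_shop"})),
           (frozenset({"at_shop"}), frozenset({"have_milk"}))]
print(reachable_level({"at_home"}, {"have_milk"}, actions))  # 2
```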

Prolog-Scheduling-Problem


This project is part of the course Declarative Programming taught at Vrije Universiteit Brussel. It can be executed by running the _swipl_ program in the directory of this project. SWI-Prolog is available [here](http://www.swi-prolog.org/). First, one of the instances should be loaded. This can be done by one of the following commands:

prolog-to-minizinc


This is the compiler's output:

prologmud_I7


This version is derived from the original via Quintus Prolog after some compatibility modifications for SWI-Prolog and adding a module header that allows using it safely together with other applications.

ProofNumber-Search


## Proof-Number Search

Proof-Number search (PNS) is a best-first tree search algorithm applied to determine the definite value of AND/OR trees. PNS does not require domain knowledge; only terminal positions need to be recognized. PNS can be used to solve games and endgame positions.
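
To make the bookkeeping concrete, here is a hedged Python sketch that computes proof and disproof numbers bottom-up on a fully known AND/OR tree; the actual algorithm expands the tree best-first by repeatedly growing the "most-proving" leaf, which this sketch does not do:

```python
# Proof/disproof numbers on a static AND/OR tree. At an OR node the prover
# needs only one child proved (pn = min, dn = sum); at an AND node it needs
# all children proved (pn = sum, dn = min). Proved terminals get (0, inf),
# disproved ones (inf, 0).
INF = float("inf")

def pn_dn(node):
    kind, value = node[0], node[1]
    if kind == "leaf":
        return (0, INF) if value else (INF, 0)
    child_numbers = [pn_dn(c) for c in value]
    pns = [p for p, _ in child_numbers]
    dns = [d for _, d in child_numbers]
    if kind == "or":
        return min(pns), sum(dns)
    return sum(pns), min(dns)      # "and" node

tree = ("or", [("and", [("leaf", True), ("leaf", False)]),
               ("leaf", True)])
print(pn_dn(tree))  # (0, inf): the root is proved via the second child
```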

propbank-release


This release updates the annotations for the OntoNotes data and the English Web Treebank. An additional 160,000 predicates of data have been annotated in the BOLT corpora and will be made public when LDC releases BOLT to the general catalog. This repository will also host other English PropBank annotations whenever we are able to post them.

prova


Prova is an economic and efficient, Java JVM based, open source rule language for reactive agents and event processing. It combines imperative, declarative and functional programming styles. It is designed to work in distributed Enterprise Service Bus and OSGi environments.

pseudogen


A tool to automatically generate pseudo-code from source code.

Public


examples.align contains the example alignments described in the paper above.

puck


Puck is a high-speed, high-accuracy parser for natural languages. It is (currently) designed for use with grammars trained with the Berkeley Parser and runs on NVIDIA cards. On recent-ish NVIDIA cards (e.g. a GTX 680), it parses around 400 sentences a second with a full Berkeley grammar for sentences of length <= 40.

pvs


pvslib


NASALib is a continuing collaborative effort, spanning over three decades, to aid in research related to theorem proving sponsored by NASA (https://shemesh.larc.nasa.gov/fm/pvs/). It consists of a collection of formal developments (i.e., libraries) written in the Prototype Verification System ([PVS](http://pvs.csl.sri.com)), contributed by SRI, NASA, NIA, and the PVS community, and maintained by the [NASA/NIA Formal Methods Team at LaRC](http://shemesh.larc.nasa.gov/fm).

pyhop


Pyhop is a simple HTN planner written in Python. It works in both Python 2 and 3.
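
To give a flavour of HTN decomposition, here is a minimal sketch in plain Python; the task, method, and operator names are hypothetical, and this is not Pyhop's actual API:

```python
# Minimal HTN flavour: tasks are either operators (directly executable)
# or decomposed by methods into subtask lists. Names and structure are
# illustrative, not Pyhop's real interface.
operators = {
    "walk": lambda state, a, b: dict(state, loc=b) if state["loc"] == a else None,
}
methods = {
    "travel": lambda state, a, b: [("walk", a, b)],  # one trivial method
}

def seek_plan(state, tasks, plan):
    if not tasks:
        return plan
    name, *args = tasks[0]
    if name in operators:                     # primitive task: apply it
        new_state = operators[name](state, *args)
        if new_state is None:
            return None
        return seek_plan(new_state, tasks[1:], plan + [tasks[0]])
    subtasks = methods[name](state, *args)    # compound task: decompose
    return seek_plan(state, subtasks + tasks[1:], plan)

print(seek_plan({"loc": "home"}, [("travel", "home", "park")], []))
# [('walk', 'home', 'park')]
```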

PyPhox


This program was authored by John Beieler (jbeieler@caerusassociates).

pyro


Pyro is a flexible, scalable deep probabilistic programming library built on PyTorch. Notably, it was designed with these principles in mind:

- **Universal**: Pyro is a universal PPL -- it can represent any computable probability distribution.
- **Scalable**: Pyro scales to large data sets with little overhead compared to hand-written code.
- **Minimal**: Pyro is agile and maintainable. It is implemented with a small core of powerful, composable abstractions.
- **Flexible**: Pyro aims for automation when you want it and control when you need it. This is accomplished through high-level abstractions to express generative and inference models, while allowing experts easy access to customize inference.
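
For flavour, a minimal Pyro model using the library's documented `sample` and `plate` primitives (the model itself is a made-up example):

```python
# A tiny generative model in Pyro: a latent mean with a Normal prior
# and conditionally independent Normal observations.
import torch
import pyro
import pyro.distributions as dist

def model(data):
    mu = pyro.sample("mu", dist.Normal(0.0, 1.0))       # latent variable
    with pyro.plate("data", len(data)):                 # independent observations
        pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

model(torch.tensor([0.5, 1.2, -0.3]))
```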

python-kasa


python-kasa is a Python library to control TP-Link smart home devices (plugs, wall switches, power strips, and bulbs) using asyncio. This project is a maintainer-made fork of the [pyHS100](https://github.com/GadgetReactor/pyHS100) project.

pytodoist


**PyTodoist** is a Python package for interacting with `Todoist `_. It hides the underlying API calls with higher-level abstractions that make it easy to use Todoist with Python.

py_trees


PyTrees is a python implementation of behaviour trees designed to facilitate the rapid development of medium sized decision making engines for use in fields like robotics. Brief feature list:

qgrep


qgrep is an implementation of a grep database, which allows you to perform grepping (i.e. full-text searches using regular expressions) over a large set of files. Searches use the database, which is a compressed and indexed copy of the source data, and are thus much faster than vanilla `grep -R`.
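
The speedup rests on the classic indexed-grep idea; here is a toy sketch of the principle (qgrep's actual on-disk format is binary, compressed, and incremental, so this is only an illustration):

```python
# Toy trigram index: only files whose indexed trigrams cover the query
# are actually regex-searched, which is what makes an indexed grep far
# faster than scanning every file with plain grep -R.
import re
from collections import defaultdict

def trigrams(s):
    return {s[i:i + 3] for i in range(len(s) - 2)}

index = defaultdict(set)   # trigram -> set of file names
files = {"a.txt": "hello world", "b.txt": "goodbye moon"}
for name, text in files.items():
    for t in trigrams(text):
        index[t].add(name)

def search(literal):
    candidates = set(files)
    for t in trigrams(literal):             # prune via the index
        candidates &= index.get(t, set())
    return [n for n in candidates if re.search(re.escape(literal), files[n])]

print(search("world"))  # ['a.txt']
```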

QuaterNet


This is the implementation of the approach described in the paper: > Dario Pavllo, David Grangier, and Michael Auli. [QuaterNet: A Quaternion-based Recurrent Model for Human Motion](https://arxiv.org/abs/1805.06485). In arXiv preprint arXiv:1805.06485, 2018.

Racer


Racer is a knowledge representation system that implements a highly optimized tableau calculus for the description logic SRIQ(D). Racer is provided with a BSD-3 license (see the file LICENSE.txt).

radare2


r2 is a rewrite from scratch of radare in order to provide a set of libraries and tools to work with binary files.

ReAgent


#### Overview

ReAgent is an open source end-to-end platform for applied reinforcement learning (RL) developed and used at Facebook. ReAgent is built in Python and uses PyTorch for modeling and training and TorchScript for model serving. The platform contains workflows to train popular deep RL algorithms and includes data preprocessing, feature transformation, distributed training, counterfactual policy evaluation, and optimized serving. For more detailed information about ReAgent see the white paper [here](https://research.fb.com/publications/horizon-facebooks-open-source-applied-reinforcement-learning-platform/).

Real-Time-Voice-Cloning


# Real-Time Voice Cloning

This repository is an implementation of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis](https://arxiv.org/pdf/1806.04558.pdf) (SV2TTS) with a vocoder that works in real-time. Feel free to check [my thesis](https://matheo.uliege.be/handle/2268.2/6801) if you're curious or if you're looking for info I haven't documented. Mostly I would recommend giving a quick look to the figures beyond the introduction.

reasonablepy


Reasonable Python is a module which adds F-Logic to Python. This is an initial package and is still pretty unstable. Any bug report is very appreciated.

reasoning-smem-soar


This is a baseline implementation. General use cases could guide restrictions that still permit tractable inference. See the slides for more conclusions.

rebel


Implementation of [ReBeL](https://arxiv.org/abs/2007.13544), an algorithm that generalizes the paradigm of self-play reinforcement learning and search to imperfect-information games. This repository contains implementation only for [Liar's Dice](https://en.wikipedia.org/wiki/Liar%27s_dice) game.

receipt-parser


Updating your housekeeping book is a tedious task: You need to manually find the shop name, the date and the total from every receipt. Then you need to write it down. At the end you want to calculate a sum of all bills. Nasty. So why not let a machine do it?

recipe-interpretation


# Recipe Interpretation

This repository contains the code for [*Mise en Place*: Unsupervised Interpretation of Instructional Recipes](http://homes.cs.washington.edu/~yejin/Papers/emnlp15_cooking.pdf) by Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi.

RecipeParser


A PHP library for parsing recipe data from HTML.

recorder


The _OwnTracks Recorder_ is a lightweight program for storing and accessing location data published via [MQTT](https://mqtt.org/) (or HTTP) by the [OwnTracks](http://owntracks.org) apps. It is a compiled program which is easy to install and operate even on low-end hardware, and it doesn't require an external database.

REFRACTIVE


A tool to extract knowledge from syntactic and semantic relations.

REL


REL is a modular Entity Linking package that is provided as a Python package as well as a web API. REL has various meanings - one might first notice that it stands for relation, which is a fitting name for the problems that can be tackled with this package. Additionally, in Dutch a 'rel' means a disturbance of the public order, which is exactly what we aim to achieve with the release of this package.

relationfactory


RelationFactory is a relation extraction and knowledge-base population system. It was the top-ranked system in TAC KBP 2013 English Slot-filling (http://www.nist.gov/tac/2013/KBP/index.html). If you want to use RelationFactory in a TAC benchmark, please contact the authors (see LICENSE for details). RelationFactory uses SVMLight (http://svmlight.joachims.org/) for classification, so you must agree to the License of SVMLight, especially to it being restricted to scientific use only.

repairnator


Repairnator is an open-source project for [automated program repair](https://en.wikipedia.org/wiki/Automatic_bug_fixing). All kinds of repair are considered: test failure repair, compilation error repair, static warning repair, crash repair, etc. Repairnator is integrated with continuous integration (Travis CI, Jenkins, etc.) and makes pull-requests with fixes. The project is hosted at the [Eclipse](https://www.eclipse.org/) open-source foundation.

repo-supervisor


This tool allows you to set up a `webhook` that waits for pull requests and scans all interesting files to check for leaked secrets. Every time the PR is updated, it rescans the latest changes and generates a report.
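
The heart of such a scanner can be sketched in a few lines; this is a generic entropy heuristic in Python, not repo-supervisor's actual JavaScript rules, and the thresholds and sample string are made up:

```python
# Generic leaked-secret heuristic: flag long tokens whose Shannon entropy
# is high, since random-looking strings are often API keys or tokens.
import math
import re

def entropy(s):
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def suspicious_tokens(text, min_len=20, min_entropy=4.0):
    tokens = re.findall(r"[A-Za-z0-9+/_\-]{%d,}" % min_len, text)
    return [t for t in tokens if entropy(t) >= min_entropy]

diff = 'AWS_SECRET = "9f4Kz+Qm72bXwRt0eLnYc81UvGhJ3sPd"'
print(suspicious_tokens(diff))  # flags the high-entropy value, not the name
```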

ReqWiki


ReqWiki is a novel open source web-based approach for software requirements engineering. It is based on a semantic wiki that includes natural language processing (NLP) assistants, which work collaboratively with humans on the requirements specification documents. It is the first Requirements Engineering tool that combines wiki technology for collaborative use and semantic knowledge representation for formal queries and reasoning with natural language processing assistants within a single, cohesive interface.

resolution-theorem-prover


A resolution theorem prover written in Lisp for UMaine's COS470: Artificial Intelligence course.

retro-baselines


This is a set of baseline algorithms for the [Retro Contest](https://github.com/openai/retro-contest).

retro-gym


Gym Retro is a wrapper for video game emulator cores using the Libretro API to turn them into Gym environments. It includes support for multiple classic game consoles and a dataset of different games. It runs on Linux, macOS and Windows with Python 3.5 and 3.6 support.
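
Interacting with a wrapped game follows the standard Gym loop; a minimal sketch (`Airstriker-Genesis` is the ROM bundled with Gym Retro, per its documentation):

```python
# Standard Gym interaction loop over a Libretro-emulated game.
# Requires gym-retro and its bundled Airstriker-Genesis ROM.
import retro

env = retro.make(game="Airstriker-Genesis")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()           # random agent
    obs, reward, done, info = env.step(action)
env.close()
```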

retro-gym-orig


Gym Retro is a wrapper for video game emulator cores using the Libretro API to turn them into Gym environments. It includes support for multiple classic game consoles and a dataset of different games. It runs on Linux, macOS and Windows with Python 3.5 and 3.6 support.

reverb-core


ReVerb is a program that automatically identifies and extracts binary relationships from English sentences. ReVerb is designed for Web-scale information extraction, where the target relations cannot be specified in advance and speed is important.

rhymediscovery


A python package for detecting rhyme schemes in poetry. With standard configuration, it achieves about 65% accuracy in the `rhymedata `_ corpus.

RISeC


This dataset contains 260 cooking recipe texts which are the same as [CURD](https://www.cs.cmu.edu/~ark/CURD/) and [SIMMR](https://camel.abudhabi.nyu.edu/simmr/). The corpus development is detailed in [our short paper](https://www.aclweb.org/anthology/2020.aacl-main.82). If our work contributes to your research, please cite the paper.

```
@inproceedings{jiang-etal-2020-recipe,
  title = "Recipe Instruction Semantics Corpus ({RIS}e{C}): {R}esolving Semantic Structure and Zero Anaphora in Recipes",
  author = "Jiang, Yiwei and Zaporojets, Klim and Deleu, Johannes and Demeester, Thomas and Develder, Chris",
  booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
  month = dec,
  year = "2020",
  address = "Suzhou, China",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.aacl-main.82",
  pages = "821--826"
}
```

rits


Please solve: 1/2 + 3/4
|: 4/6.
This is wrong. You cannot just sum the numerators when the denominators are different! Let us first find a common multiple of 2 and 4!
Please enter a common multiple of 2 and 4:
|: 2.
This is wrong. 2 is no common multiple of 2 and 4, since 2 is not divisible by 4! So, let's try again!
Please enter a common multiple of 2 and 4:
|: 3.
This is wrong. 3 is not a common multiple of 2 and 4, since 3 is not divisible by 2! So, let's try again!
Please enter a common multiple of 2 and 4:
|: 5.
This is wrong. I see you are having a hard time with this. Hint: 2 * 4 = 8 is a possible solution. So, let's try again!
Please enter a common multiple of 2 and 4:
|: 8.
Good, the solution is correct. There is also a smaller solution! Now apply this knowledge to the original task!
Please solve: 1/2 + 3/4
|: 10/8.
Good, the solution is correct, but not minimal.
Please cancel common divisors in: 10/8
|: 1/4.
This is wrong! Unfortunately, I cannot give any useful hints here. So, let's try again!
Please cancel common divisors in: 10/8
|: 5/0.
The denominator of a fraction cannot be 0. So, let's try again!
Please cancel common divisors in: 10/8
|: 5/4.
Good, the solution is correct and also minimal. Very nice!
the interaction history:
[solve(1/2+3/4),internal(1/2+3/4=4/6),solve(cm(2,4)),internal(cm(2,4)=2),solve(cm(2,4)),internal(cm(2,4)=3),solve(cm(2,4)),internal(cm(2,4)=5),solve(cm(2,4)),internal(cm(2,4)=8),solve(1/2+3/4),internal(1/2+3/4=10/8),solve(cancel(10/8)),internal(cancel(10/8)=1/4),solve(cancel(10/8)),internal(cancel(10/8)=5/0),solve(cancel(10/8)),internal(cancel(10/8)=5/4)]
true.

rltk


The Record Linkage ToolKit (RLTK) is a general-purpose open-source record linkage platform that allows users to build powerful Python programs that link records referring to the same underlying entity. Record linkage is an extremely important problem that shows up in domains extending from social networks to bibliographic data and biomedicine. Current open platforms for record linkage have problems scaling even to moderately sized datasets, or are just not easy to use (even by experts). RLTK attempts to address all of these issues.

rogueutils


A small collection of utilities for making roguelikes

rosettacode-pm


This document describes version 0.0.15.

RosettaCodeData


This Git repository contains all the code samples available on http://rosettacode.org, along with instructions and supplemental tools to help get them running on your local machine.

rosette


[Rosette](http://emina.github.io/rosette/) is a solver-aided programming language that extends [Racket](http://racket-lang.org) with language constructs for program synthesis, verification, and more. This repository includes the source code for Rosette, as well as several example solver-aided DSLs.

rpitx


**rpitx** is a radio transmitter for the Raspberry Pi (B, B+, Pi 2, Pi 3B, Pi 3B+, Pi Zero, Pi Zero W) that transmits RF directly on GPIO. It can handle frequencies from 5 kHz up to 1500 MHz.

rpi_lcars


The code is an example of implementing a custom MovieOS-style interface for your RaspberryPi projects that include the RaspberryPi touch screen (e.g. home automation control panel). The LCARS assets can be replaced with assets from any other style of user interface (e.g. from games, cartoons, or TV series).

RTEC


RTEC is an extension of the [Event Calculus](https://en.wikipedia.org/wiki/Event_calculus) that supports highly-scalable stream processing. It is written in Prolog and has been tested under [YAP 6.2](http://www.dcc.fc.up.pt/~vsc/Yap/).

rtl-entropy


This software has been tested on debian linux 7.1, but should work on any linux distribution, and might run on OS X and other POSIX compliant operating systems.

rtl_433


rtl_433 (despite the name) is a generic data receiver, mainly for the 433.92 MHz, 868 MHz (SRD), 315 MHz, 345 MHz, and 915 MHz ISM bands.

rubikssolver


#### Test case

The following is a cube map for a solved cube with the Left side rotated 90 degrees:

rudel


Rudel is a collaborative editing environment for GNU Emacs. Its purpose is to share buffers with other users in order to edit the contents of those buffers collaboratively. Rudel supports multiple backends to enable communication with other collaborative editors using different protocols, though currently Obby (for use with the Gobby editor) is the only fully-functional one.

rudibugger


A video demonstrating rudibugger can be found [here](https://youtu.be/nSotEVZUEyw).

ruletaker


This repo contains tools and utilities to:

1. Generate datasets of theories and assertions meant to test the logical reasoning capabilities of a model. For details see the paper [Transformers as Soft Reasoners over Language](https://arxiv.org/abs/2002.05867).
2. Run existing theories through a theorem proving engine to obtain labels.

runtime


This project contains the GOAL runtime (standalone)

safehouse


Safehouse is a __headless__ (I didn't write any js or templates), __developer-focused__ (you config it by editing the source code), __scale-invariant__ (it only has one user) django server. You text it or (eventually) email it codewords and parameters, and it does stuff. Like send you a joke. Or text a bunch of your friends saying you're having a serious mental episode and need to talk to someone _right now_ before you cut off your hands.

SafeLearner


This is documentation on how to install and use the code of **SafeLearner**. It is licensed under the [Apache-2.0 license](https://github.com/arcchitjain/SafeLearner/blob/master/LICENSE).

sapareplan


This repository contains the code to deploy and run the Sapa Replan planner (http://rakaposhi.eas.asu.edu/kartik-dissertation.pdf), which derives from the Sapa codebase.

sasa-tool


This project depends on NLTK, the natural language toolkit, which also depends on other libraries. Please follow the instructions for installing this library at http://nltk.org/install.html . (windows users may need to consult http://selfsolved.com/problems/setuptools-06c11-fails-to-instal)

saul


Saul is a modeling language implemented as a domain specific language (DSL) in Scala. The main goal of Saul is to facilitate designing machine learning models with arbitrary configurations for the application programmer, including:

SBCG


This is a proof-of-concept implementation of a (very!) small fragment of an English Sign-Based Construction Grammar, adapted to adhere to classic CxG assumptions. The grammar is implemented in ProFIT, a Prolog extension with Features, Inheritance, and Templates originally developed by Gregor Erbach (Universitaet des Saarlandes) in 1994. The present version of ProFIT has been ported to modern SICStus Prolog (3.8 or higher) by Mats Carlson. None of these individuals have any knowledge of the present project or share any of the blame for any of its shortcomings.

science-parse


There is a new version of science-parse out that works in a completely different way. It has fewer features, but higher quality in the output. Check out the details at https://github.com/allenai/spv2.

sciKnowMineProject


* [triageServer](https://github.com/BMKEG/triageServer) generates the web archive (*.war) file that runs on a web application container (such as Jetty, Tomcat, Glassfish, etc).
* [skmTriage](https://github.com/BMKEG/skmTriage) contains the server-side logic for all administrative commands to generate, populate and edit the underlying database.
* [triageClientApp](https://github.com/BMKEG/triageClientApp) generates the *.swf file for the Flex web-application.
* [triageClientComponents](https://github.com/BMKEG/triageClientComponents) generates the *.swc library containing all the logic of the triageModule Flex component.
* [skmCore](https://github.com/BMKEG/skmCore) provides a basic layer on top of the digitalLibrary for other text mining applications using UIMA.
* [digitalLibraryDao](https://github.com/BMKEG/digitalLibraryDao) provides data access to the system for base citation and document functions.
* [lapdftext](https://github.com/BMKEG/lapdftext) is the core library for manipulating PDF documents.
* [lapdftextVpdmf](https://github.com/BMKEG/lapdftextVpdmf) links the lapdftext library to the VPDMf framework via the FTD model.
* [bmkeg-as-parent](https://github.com/BMKEG/bmkeg-as-parent) manages maven meta-data for AS projects.
* [bmkeg-parent](https://github.com/BMKEG/bmkeg-parent) manages maven meta-data for Java projects.

sciqa-arcade198-dataset


This is the human-annotated AI2 Reasoning Challenge (ARC) dataset (ARCADE198) from the following paper:

scone


Scone is a knowledge representation and reasoning system – a knowledge-base system or KBS – that has been developed by Scott Fahlman’s research group in the Language Technologies Institute of Carnegie Mellon University. Scone, by itself, is not a complete AI or decision-making system, and does not aspire to be; rather, it is a software component – a sort of smart active memory system – that is designed to be used in a wide range of software applications, both in AI and in other areas. Scone deals just with symbolic knowledge. Things like visualization, motor memory, and memory for sound sequences are also important for human-like intelligence, but we believe that those will have specialized representations of their own, linked in various ways to the symbolic memory.

scoobie


This is a project to provide Semantic Web programmers with [Information Extraction](http://gate.ac.uk/ie/) (IE) functionality. SCOOBIE can be initialised with any kind of RDF graph. It interprets the occurrence of URI references described with RDF properties as descriptions of formal instances. On the basis of an RDF graph with contained instances, SCOOBIE offers the following methods:

scraper


We face a tradeoff between seeking the broadest geographic coverage we can get (meaning including every local paper we can find) and accuracy and relevance (which would lead us to include only large, well-known, and high-quality news outlets). We're trying to balance the two objectives by including a third column indicating whether the source is a wire service, a dependable news source with solid international coverage, or a local source that may contribute extra noise to the data and may require specialized actor dictionaries. The distinction between the latter two is hazy and requires a judgement call. Eventually, these labels can be used to build event datasets that are either optimized for accuracy and stability (at the cost of sparseness) or for micro-level, geographically dispersed (but noisy) coverage.

scrapy


Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

Screenshot-Redaction


## How Redaction Works

The redaction process is currently mostly static and fairly simple. In the future the process will be more flexible, allowing submission of photos for processing or even regions of photos. The process initially uses Tesseract OCR to find words inside the image. Once this process is finished, users are notified of completion. If a user chooses to view the redactions, the currently enabled word dictionaries are applied to the results. Dictionaries can choose to white-list or black-list with their own internal rules. The end result is a screenshot with zero or more words wrapped in boxes and blacked out.

scxmlgui


This is an attempt to build a graphical user interface for editing SCXML finite state machines.

sde


Structured Data Extractor (SDE) is an implementation of DEPTA (Data Extraction based on Partial Tree Alignment), a method to extract data from web pages (HTML documents). DEPTA was invented by Yanhong Zhai and Bing Liu of the University of Illinois at Chicago and published in their paper "Structured Data Extraction from the Web based on Partial Tree Alignment" (IEEE Transactions on Knowledge and Data Engineering, 2006). Given a web page, SDE will detect the data records contained in the page and extract them into a table structure (rows and columns).

Usage

  1. Extract sde.zip.
  2. Make sure that a Java Runtime Environment (version 5 or higher) is already installed on your computer.
  3. Open a command prompt (Windows) or shell (UNIX).
  4. Go to the directory where you extracted sde.zip.
  5. Run this command: `java -jar sde-runnable.jar URI_input path_to_output_file` (see the consolidated example after this list).
  6. The URI_input parameter may refer to a local or remote file, as long as it is a valid URI. A URI referring to a local file must be preceded by "file:///". For example, in a Windows environment: "file:///D:/Development/Proyek/structured_data_extractor/bin/input/input.html", or in a UNIX environment: "file:///home/seagate/input/input.html".
  7. The path to the output file is formatted as a valid path in the host operating system, like "D:\Data\output.html" (Windows) or "/home/seagate/output/output.html" (UNIX).
  8. Extracted data can be viewed in the output file. The output file is an HTML document and the extracted data is presented in HTML tables.
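Putting steps 5-7 together, a complete invocation using the UNIX example paths above would look like this:

```bash
java -jar sde-runnable.jar "file:///home/seagate/input/input.html" "/home/seagate/output/output.html"
```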

Source Code

SDE source code is available at GitHub.

Dependencies

SDE was developed using these libraries:

  • Neko HTML Parser by Andy Clark and Marc Guillemot. Licensed under Apache License Version 2.0.
  • Xerces by The Apache Software Foundation. Licensed under Apache License Version 2.0.

License

SDE is licensed under the MIT license.

Author

Sigit Dewanto, sigitdewanto11[at]yahoo[dot]co[dot]uk, 2009.

SDL_Manual


This project will improve the Game Development tutorials for Perl using the SDL library. The primary goal is to introduce newcomers to Game Development in Perl. The secondary goal is to attract people to try Perl as a Game Scripting and Prototyping language.

search-engine


Approach0 is a math-aware search engine.

Second-Brain


A curated list of awesome Public Zettelkastens 🗄️ / Second Brains 🧠 / Digital Gardens 🌱

secret-bridge


A bridge to help increase your ability to detect secrets shared on Github.

Selenium-Remote-Driver


[Selenium WebDriver][wd] is a test tool that allows you to write automated web application UI tests in any programming language against any HTTP website using any mainstream JavaScript-enabled browser. This module is a Perl implementation of the client for the WebDriver [JSONWireProtocol that Selenium provides][jsonwire].

selenium-server-deb-package


This project is meant to automate building a Debian package for selenium-server. It will automatically download selenium-server from the Google Code file repository and package it with init.d scripts.

self_dialogue_corpus


# The Self-dialogue Corpus

This is an early release of the Self-dialogue Corpus containing 24,165 conversations, or 3,653,313 words, across 23 topics. For more information on the data, please see [our corpus paper](https://arxiv.org/pdf/1809.06641.pdf) or [our submission to the Alexa Prize](http://alexaprize.s3.amazonaws.com/2017/technical-article/edina.pdf).

semafor


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

semafor-semantic-parser


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

semagrams


A *semagram* is a flexible structure for encoding the semantics of a given concept via a slot-filler structure.

semeval2018-task4


This will produce the following output files, saved in the directory `models/semeval-winning-model/answers/friends_test_scene/`:

SemEval2020-Task6


This repository aims to solve DeftEval: Extracting term-definition pairs in free text

semeval2020_task11


- `configs`: yaml configs for the system
- `datasets`: contains the task datasets, which can be downloaded from the team competition webpage
- `results`: the folder for submissions
- `span_identification`: code for the task SI
  - `ner`: pytorch-transformers RoBERTa model with CRF (end-to-end)
  - `dataset`: the scripts for loading and preprocessing the source dataset
  - `submission`: the scripts for obtaining and evaluating results
- `technique_classification`: code for the task TC (the folder has the same structure as `span_identification`)
- `tools`: tools provided by the competition organizers; contain useful functions for reading datasets and evaluating submissions
- `visualization_example`: example of visualization of results for both tasks

sempre


A semantic parser maps natural language utterances into an intermediate logical form, which is "executed" to produce a denotation that is useful for some task.
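For example (an illustrative sketch in the GeoQuery style, not taken from the SEMPRE distribution):

```
utterance:     "How many states border Texas?"
logical form:  count(λx. state(x) ∧ borders(x, texas))
denotation:    4
```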

semviz


A web demo for visualizing Semafor parses

sensitive-data-scrubber


A Clojure library designed to scrub sensitive data such as social security numbers and credit card numbers from strings.

SentiWordNet


SentiWordNet is a lexical resource for opinion mining. SentiWordNet assigns to each synset of WordNet three sentiment scores: positivity, negativity, objectivity. SentiWordNet is described in detail in the papers:

servo-platform


This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

sh


A shell parser, formatter, and interpreter. Supports [POSIX Shell], [Bash], and [mksh]. Requires Go 1.12 or later.
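The repository ships its formatter as the `shfmt` command; a minimal usage sketch (the script name is hypothetical):

```bash
# Rewrite deploy.sh in place, using 2-space indentation.
shfmt -i 2 -w deploy.sh
```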

shalmaneser


[SHALMANESER](http://www.coli.uni-saarland.de/projects/salsa/shal/) is a SHALlow seMANtic parSER.

SharpWit


Wit.ai is an online service that takes a natural language sentence, e.g. 'I have a meeting tomorrow', and sends back data that can be easily interpreted by software, e.g. 'intent: appointment, datetime: 2014-03-02T00:00:00.000+01:00'.

sheep


2. USAGE: `./scripts/dist-partition.sh [options... -o $OUTPUT_FILE] $GRAPH $NUM_PARTITIONS`
   - $GRAPH may be a .net (SNAP) or a .dat (XSS/Graph500 binary) file. There is a snap2xss conversion utility in llama/utils.
   - By default, $GRAPH = test/hep-th.dat and $NUM_PARTITIONS = 2.
   - If $NUM_PARTITIONS = 0, then we skip the partitioning phase.

shellcheck


ShellCheck is a GPLv3 tool that gives warnings and suggestions for bash/sh shell scripts:

sherlock


The following is an example of the command line to run all the tests for Sherlock. This invocation hides the progress text that Sherlock normally outputs, and instead shows the verbose output of the tests.

ShinyCMS


ShinyCMS is an open source CMS built in Perl using the Catalyst framework.

shop2


SHOP2 -- Simple Hierarchical Ordered Planner 2 -- is a domain-independent planning system based on Hierarchical Task Network (HTN) planning. In the 2002 International Planning Competition, SHOP2 received one of the top four awards, one of the two awards for distinguished performance.

shop3


This repository contains the open source version of the SHOP3 planner.

shroud


Shroud is a simple secret manager with a command line interface. The password database is stored as a Scheme s-expression and encrypted with a [[gnupg.org][GnuPG]] key.

sigir2016-collection-for-focused-retrieval


This software was used to extract, clean, annotate, and evaluate the corpus described in our SIGIR 2016 article.

sigmakee


Sigma is an integrated development environment for logical theories that extend the Suggested Upper Merged Ontology. There is a public installation with read-only functions enabled linked from http://www.ontologyportal.org

sikuli


A new version of Sikuli (SikuliX) has been available since 2013 as a follow-up development.

SimGen


SimGen is a simulation language, originally created by Simularity, Inc.

simp-isar-mode


This is a very shitty Emacs mode for **basic** displaying and editing of Isabelle files (.thy). The idea is to avoid opening the fully fledged JEdit for trivial stuff.

simple-key-logger


SKeylogger is a simple keylogger. I had previously been using a few other open source keyloggers, but they stopped working when I upgraded my operating system. I tried to look through the code of those keyloggers, but it was undocumented, messy, and complex. I decided to make my own highly documented and very simple keylogger.

sirius


Lucida is a speech- and vision-based intelligent personal assistant based on Sirius. Visit the provided readmes in [lucida](lucida) for instructions to build Lucida, and follow the instructions to build [lucida-suite here](http://sirius.clarity-lab.org/sirius-suite/). Post to [Lucida-users](http://groups.google.com/forum/#!forum/sirius-users) for more information and answers to questions. The project is released under the [BSD license](LICENSE), except that certain submodules contain their own specific licensing information. We would love to have your help in improving Lucida; see [CONTRIBUTING](CONTRIBUTING.md) for more details.

SitCalc


SitCalc is a framework for managing state in an application without mutation based on situation calculus.

sitcalc_async_knowledge


This is a reasoning engine for multi-agent epistemic queries in the situation calculus. It was developed as part of the PhD thesis (and subsequent journal paper submission) for:

Situations


This repository provides the top-level definition for interpretations of Situations in Logtalk.

SkillsExtractorCognitiveSearch


The Skills Extractor is a Named Entity Recognition (NER) model that takes text as input, extracts skill entities from that text, then matches these skills to a knowledge base (in this sample a simple JSON file) containing metadata on each skill. It then returns a flat list of the skills identified.

sling


The SLING project is still work in progress. We do not yet have a full system that can extract facts from arbitrary text, but we have built a number of the subsystems needed for such a system. The SLING frame store is our basic framework for building and manipulating frame semantic graph structures. The [Wiki flow pipeline](doc/guide/wikiflow.md) can take a raw dump of Wikidata and [convert](doc/guide/wikiflow.md#wikidata-import) this into one big frame graph. This can be loaded into memory so we can do fast graph traversal for inference and reasoning over the knowledge base. The Wiki flow pipeline can also take raw Wikipedia dumps and [convert](doc/guide/wikiflow.md#wikipedia-import-and-parsing) these into a set of documents with structured annotations extracted from the Wiki markup. This also produces [phrase tables](doc/guide/wikiflow.md#name-and-phrase-tables) that are used for mapping names to entities. There is a [SLING Python API](doc/guide/pyapi.md) for accessing all this information and we also have a [bot](python/wikibot) for uploading extracted facts to Wikidata.

smack


SMACK is a *bounded software verifier*, verifying the assertions in its input programs up to a given bound on loop iterations and recursion depth. SMACK can verify C programs, such as the following:

small_adventure_games


This version is derived from the original (via Quintus Prolog) after some compatibility modifications for SWI-Prolog and the addition of a module header that allows using it safely together with other applications.

smatch


[Smatch](http://amr.isi.edu/evaluation.html) is an evaluation tool for [AMR](http://amr.isi.edu/) (Abstract Meaning Representation). It computes the Smatch score (defined below) of two AMR graphs in terms of their matching triples (edges) by finding a variable (node) mapping that maximizes the count, `M`, of matching triples, then:
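Restating the standard definition, with $t$ and $g$ the numbers of triples in the candidate and gold graphs respectively:

$$
P = \frac{M}{t}, \qquad R = \frac{M}{g}, \qquad F_{\text{Smatch}} = \frac{2PR}{P + R}
$$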

SMCDEL


A symbolic model checker for [Dynamic Epistemic Logic](https://plato.stanford.edu/entries/dynamic-epistemic).

smem-question-answering


This work includes data from NextKB, which was compiled by the Qualitative Reasoning Group at Northwestern University. NextKB is freely available under the Creative Commons Attribution 4.0 license from http://qrg.northwestern.edu/nextkb/index.html. The included data was created by contributors to the Qualitative Reasoning Group, contributors to Cycorp's OpenCyc, University of California at Berkeley's FrameNet project, the VerbNet project, and Princeton University's WordNet project. For details of attributions, please see http://www.qrg.northwestern.edu/nextkb/license.html

smm


SNARK


SNARK, SRI's New Automated Reasoning Kit, is a theorem prover intended for applications in artificial intelligence and software engineering. SNARK is geared toward dealing with large sets of assertions; it can be specialized with strategic controls that tune its performance; and it has facilities for integrating special-purpose reasoning procedures with general-purpose inference.

snowman


http://derevenets.com/[Snowman] is a native code to C/C++ decompiler, supporting x86, AMD64, and ARM architectures. You can use it as a standalone GUI application, command-line tool, IDA plug-in, or a library. Snowman is link:doc/licenses.asciidoc[free software].

socioboard-core


We are building innovative products for various social networks to fill a critical gap: social networks were meant for users, not for businesses. Our tools and products view social from a business point of view and fill those gaps which social networks cannot fill exquisitely. Businesses should own their social data and should be in charge of what they want to do with it: generating reports and analyzing data to make informed and improved business decisions. This is possible when things are open and businesses have the freedom to choose; we believe open source is the way to make this possible, so that brands and businesses can embrace social technology with an open mind in an open and connected world.

SONDY


An open source social media data mining software (event detection + influence analysis)

source-extractor


This is a code refactoring of Limsi's source extractor program in order to expose source extraction as a web service. This is a Spring Boot application deployed in a Docker image.

sourceclassifier


In the ./sources directory are subdirectories for each language you wish to be able to identify. Each subdirectory contains examples of programs written in that language. The name of the directory is significant - it is the value returned by the SourceClassifier.identify() method.

spaCy


spaCy is a library for advanced natural language processing in Python and Cython. See the documentation for details. spaCy is built on the very latest research, but it isn't researchware. It was designed from day 1 to be used in real products. It's commercial open-source software, released under the MIT license.

speech-acts-classifier


An experiment with parsing natural language and classifying the [speech act](https://en.wikipedia.org/wiki/Speech_act) of the sentence. This is especially important when a machine is trying to understand the meaning of a sentence in an environment, like a chat session, where missing punctuation is common.

spelling-experiments


This repository is a final archived version of https://github.com/zesch/spelling-experiments. Please contact that repository's maintainer for further information.

spf


The framework contains an example experiment using the GeoQuery corpus. To use development fold 0 for testing, and train on the other folds, use:

``java -jar dist/spf-1.4.jar geoquery/experiments/template/dev.cross/dev.fold0.exp``

The log and output files are written to a newly generated directory in the experiment directory: ``geoquery/experiments/template/dev.cross/``

splendor-prolog-agent


This is the implementation of an agent that considers and handles game states much as a human does. It checks the cards on the board, the nobles, and its own coins and development cards, then takes an action.

SpoookyJS


A JavaScript multiagent board game framework based on Monte Carlo methods. (In German: "a multiagent-based JavaScript framework for the flexible implementation of digital browser-based board games and cross-game artificial intelligence.")

SPR


spread0r


spread0r is a txt reader, which makes your reading twice as fast as usual

srlie


SRLIE
=====

SRLIE is a component of Open IE 4.x that automatically identifies n-ary extractions from English sentences. SRLIE is designed for Web-scale information extraction, where target relations are not specified in advance.

ssciPDDLPlanner


This software is used for generating PDDL files out of model descriptions. PDDL is a well-known artificial intelligence planning language. Please note that even though this application generates PDDL, it is not used to interpret PDDL. Users of this software are referred to open-source PDDL planners such as the OPTIC planner for this task (see [link](https://github.com/Dunes/janitor/tree/master/optic)).

ssr


SimpleScreenRecorder is a screen recorder for Linux. Despite the name, this program is actually quite complex. It's 'simple' in the sense that it's easier to use than ffmpeg/avconv or VLC :).

StarRuler2-Source


# Star Ruler 2

Star Ruler 2 is a massive-scale 4X/RTS set in space. Explore dozens, hundreds, or even thousands of systems in a galaxy of your choosing, expand across its planets, exploit the resources you find, and ultimately exterminate any who stand in your way. The fate of your empire depends on your ability to master the economy, field a military, influence galactic politics, and learn what you can about the universe.

startbootstrap-agency


[Agency](http://startbootstrap.com/template-overviews/agency/) is a one page agency portfolio theme for [Bootstrap](http://getbootstrap.com/) created by [Start Bootstrap](http://startbootstrap.com/). This theme features several content sections, a responsive portfolio grid with hover effects, full page portfolio item modals, a responsive timeline, and a working PHP contact form.

statechum


A relatively brief manual can be found in resources/introduction/index.html

# description

Statechum is a framework that implements a number of regular grammar inference algorithms. Regular grammars can be represented as finite state machines. Once the grammar / state machine has been generated, StateChum can visualise it, and provides a selection of state-machine analysis and testing algorithms.

stet


This is an entirely preliminary, undocumented, unsupported release of stet. Files may be missing. Scatology may be unexpurgated. I don't have much time to help you with this right now. You need RT; we're using version 3.2. There are perl dependencies. There are unstated assumptions. But you asked for it. You got it.

story-generation


# Improving Neural Story Generation by Targeted Common Sense Grounding

This repository contains the code to replicate the paper "Improving Neural Story Generation by Targeted Common Sense Grounding".

strips


This project is a demo of using the artificial intelligence automated planning library [strips](https://www.npmjs.com/package/strips), in node.js.

STRIPState


STRIPState is a framework for managing state in an application without mutation based on STRIPS and situation calculus.

sumo


This directory contains knowledge base files written in KIF, and files in the WordNet data file format. Several alternative WordNet mapping files are present.

sumy


Simple library and command line utility for extracting summaries from HTML pages or plain texts. The package also contains a simple evaluation framework for text summaries. Implemented summarization methods include Luhn, Edmundson, LSA, LexRank, TextRank, SumBasic, and KL-Sum, among others.
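A typical command-line run, adapted from sumy's README (the URL is just an example):

```bash
# Summarize an online article to 10 sentences using LexRank.
sumy lex-rank --length=10 --url=https://en.wikipedia.org/wiki/Automatic_summarization
```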

Superglus


Superglus is an interactive fiction (text adventure) authoring system strongly based on the Professional Adventure Writing System.

sv-benchmarks


This collection of verification tasks is constructed and maintained as a common benchmark for evaluating the effectiveness and efficiency of state-of-the-art verification technology.

swh-environment


This repository contains the scaffolding to initialize and keep a local development environment for the Software Heritage Python stack. In particular, it contains pointers to the Git repositories of all Software Heritage Python modules. The repositories are managed using [myrepos][1] (see the .mrconfig file), and the `mr` command.

swim


SWIM is a compact library that implements the basic functionality of [Genetic Programming (GP)](#fg), a popular stochastic approach to program synthesis. I developed its early version in the process of preparing my recent [book](#bps) on behavioral program synthesis using GP.

SWING


SWING (Summarizer from WING) is a multiple-document news summarization system by the Web Information Retrieval/Natural Language Group (WING) at the National University of Singapore.

swipldcgtut


A tutorial for DCGs in SWI-Prolog

symptom-disease


This model is used to predict symptoms that are closely related to a given symptom. It can be used in apps where the user enters a symptom and a list of similar symptoms pops up, from which the user can select the ones they are suffering from; these can then be fed into a further model that predicts the disease the person is suffering from and redirects them to the associated specialist. That latter part isn't included here.

symptom-tree


This function reads and processes the data file, then initializes the SymptomTree class using the processed data. This class contains attributes for the DecisionTreeClassifier model (model), the cleaned NAMCS dataset (data), a dictionary mapping diagnoses to unique identifier codes (diagnosis_dict), a dictionary mapping unique codes to diagnosis strings (rev_diagnosis_dict), the x training dataset (x_train), the y training dataset (y_train), the x testing dataset (x_test), the y testing dataset (y_test), predicted diagnoses (y_hat), and a lookup attribute.

synthea


SyntheaTM is a Synthetic Patient Population Simulator. The goal is to output synthetic, realistic (but not real), patient data and associated health records in a variety of formats.

sypet


SyPet is a novel type-directed tool for component-based synthesis. The key novelty of our approach is the use of a compact Petri-net representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petri-net model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user.

sytora


# Sytora

Sytora is a multilingual symptom-disease classification app. Translation is managed through the UMLS coding standard. A multinomial Naive Bayes classifier is trained on a handpicked dataset, which is freely available under CC4.0.

T2


(6) Run T2 as follows (replace "Debug" by "Release" for the release build):

    $ mono "$T2DIR/src/bin/Debug/T2.exe"

For example, to execute the testsuite:

    $ pushd "$T2DIR/test" && mono "$T2DIR/src/bin/Debug/T2.exe" -tests

TABARI-Code


Source code for the TABARI C++ event coding program. This is a GitHub mirror for the code found at

TABARI-Dictionaries


A more extensive set of dictionaries can be found incorporated into the zipped files of the various data sets at

tac2015-event-detection


# Event Nugget Extraction using Deep Neural Networks

This repository contains the files for our Event Nugget Detection system that was submitted to the TAC 2015 shared task on Event Nugget Detection. It is described in the paper [Event Nugget Detection, Classification and Coreference Resolution using Deep Neural Networks and Gradient Boosted Decision Trees](https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2015/2015_TAC_Event_Nugget_Detection.pdf).

tacl2016-trainingdata4srl


This repository contains the code for automated labeling of FrameNet roles in arbitrary sense-labeled and linguistically preprocessed text as described in section 4 of our TACL paper.

TAEB-AI-Behavioral


This AI is packaged using [Dist::Zilla](http://dzil.org).

tagsistant


Tagsistant is a semantic file system for Linux, a personal tool to catalog files using tags (labels, mnemonic information) rather than directories.

talespin-annie


This is my version of the project for the Introduction to SWI-Prolog class.

tap


The [Test Anything Protocol](http://testanything.org/) is a text-based interface between test scripts and a test harness. A wide range of tools exist for running, rendering and analyzing test results. By writing your Prolog tests with TAP, you get access to all this testing infrastructure. For example, [interactive HTML output](http://www.spurkis.org/TAP-Formatter-HTML/test-output.html).
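TAP itself is plain text, so any harness can consume it; a tiny example of what a test script emits (the test descriptions are invented):

```
1..3
ok 1 - parses empty list
ok 2 - parses nested terms
not ok 3 - rejects unbalanced parens
```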

tarski


## What is Tarski

Tarski is a framework for the specification, modeling and manipulation of [AI planning](https://en.wikipedia.org/wiki/Automated_planning_and_scheduling) problems. Tarski is written in Python and includes parsers for major modeling languages (e.g., [PDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language), [FSTRIPS](https://dl.acm.org/citation.cfm?id=566359), [RDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language#RDDL)), along with modules to perform other common tasks such as logical transformations, reachability analysis, grounding of first-order representations and problem reformulations.

tauchain_prolog


TML (Tau Meta-Language) is a variant of Datalog. It is intended to serve as a translator between formal languages (and has more uses; see under the Philosophy section). The main difference between TML and common Datalog implementations is that TML works under the Partial Fixed-Point (PFP) semantics, unlike common implementations that follow the Well-Founded Semantics (WFS) or stratified Datalog. By that, TML (like WFS) imposes no syntactic restrictions on negation; however, unlike WFS or stratified Datalog, it is PSPACE complete rather than P complete. TML's implementation relies heavily on BDDs (Binary Decision Diagrams) in its internals. This gives it extraordinary performance in time and space, and allows negation to be feasible even over large universes. In fact, negated bodies, as below, do not consume more time or space than positive bodies, thanks to the BDD mechanism.

TEI


The [TEI](https://www.tei-c.org) is an international and interdisciplinary standard used by libraries, museums, publishers, and academics to represent all kinds of literary and linguistic texts, using an encoding scheme that is maximally expressive and minimally obsolescent.

tei-emacs


This is version 3 of the TEI-EMACS installation: a more or less complete SGML/XML authoring system, which combines GNU-Emacs with PSGML and a host of other relevant emacs customizations for writing and validating SGML or XML documents. Most XML-emacs subsystems have their own help system or documentation.

temper-python


This is a rewrite of a userspace USB driver for TEMPer devices presenting a USB ID like this: `0c45:7401 Microdia` My device came from [M-Ware ID7747](http://www.m-ware.de/m-ware-usb-thermometer-40--120-c-emailbenachrichtigung-id7747/a-7747/) and also reports itself as 'RDing TEMPerV1.2'.

temperance


Temperance is a logic programming library for Common Lisp.

temporal-planning


This documentation aims to explain how experiments with the planners introduced by [[Jiménez, Jonsson and Palacios, 2015]](#ref-tmp-planning-icaps15) and [[Furelos-Blanco, Jonsson, Palacios and Jiménez, 2018]](#ref-tmp-planning-coplas18) can be run.

tensorflow-rnn-events-prediction


# Tensorflow RNN to Events Prediction

**[NOTE]**: *This notebook was made with [Tensorflow v.0.8.0](https://github.com/tensorflow/tensorflow/releases/tag/v0.8.0) and the code is not compatible with the newest release of Tensorflow. For the moment I don't have time to upgrade the code, so you can use the notebook more as an illustration of the GDELT dataset and time series analysis.*

tensorflow-tex-wavenet


This is a TensorFlow implementation of the [WaveNet generative neural network architecture](https://deepmind.com/blog/wavenet-generative-model-raw-audio/) for text generation.

tensorflow-wavenet


This is a TensorFlow implementation of the [WaveNet generative neural network architecture](https://deepmind.com/blog/wavenet-generative-model-raw-audio/) for audio generation.

terminus-server


TerminusDB is an open source, model-driven graph database for knowledge graph representation, designed specifically for the web age.

tetrad


This is the code for the Tetrad Project; an introduction can be found here:

text-simplification-evaluation


This repository contains the original implementation of the evaluation methods presented in [Reference-less Quality Estimation of Text Simplification Systems](https://www.aclweb.org/anthology/W18-7005) (1st Workshop on Automatic Text Adaption, INLG 2018). The version that was used at submission time is on branch [submission](https://github.com/facebookresearch/text-simplification-evaluation/tree/submission).

Text-to-LogicForm


# Text-to-LogicForm

Text-to-LogicForm is a simple codebase for leveraging a syntactic graph for semantic parsing.

textbelt


TextBelt Open Source is a REST API that sends outgoing SMS. It uses a free mechanism for sending texts, different from the more reliable paid version available at https://textbelt.com.

textus


[Textus][] is an open-source platform for presenting and working with cultural and historical texts.

text_rpg_ai


This is the repository for the project for the AI for Games course.

text_simplification


### Argument instructions

- bsize: batch size
- out: the output folder will contain the log, the best model, and the result report
- tie_embedding: "all" means tying the encoder/decoder/projection-weight embeddings; we found this can speed up training
- bert_mode: the mode of using BERT; "bert_token" indicates we use the subtoken vocabulary from BERT, and "bertbase" indicates we use the BERT base version (due to memory issues, we have not tried the BERT large version yet)
- environment: the path config of the experiment. Please change it in model/model_config.py to fit your system

text_to_CAMEO


This program takes data in the text-oriented ICEWS .tab files downloaded from DataVerse study 28075 and converts this to a more conventional data format using the CAMEO codes. The conversion process is described in detail in the file `text_to_CAMEO_documentation.pdf`.

the-power-of-prolog


Prolog is a **programming language** that is rooted in formal logic. It supports *backtracking* and *unification* as built-in features. Prolog allows us to elegantly solve many tasks with short and general programs.

The-Turk


This is the General Game Player used in the 2011 General Game Playing Competition http://games.stanford.edu

thefuck


If you are not scared to blindly run the changed command, there is a `require_confirmation` [settings](#settings) option:
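A sketch of turning it on, assuming the settings-file location documented in the project's README:

```bash
# Append the option to thefuck's user settings file.
mkdir -p ~/.config/thefuck
echo "require_confirmation = True" >> ~/.config/thefuck/settings.py
```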

Theorema


This repository serves for the development of the Theorema system, see also http://www.risc.jku.at/research/theorema/software/.

the_silver_searcher


A code searching tool similar to `ack`, with a focus on speed.

thrax


This will compile all classes and package them into a jar for use on a Hadoop cluster.

tifmo


TIFMO (Textual Inference Forward-chaining MOdule) is an unsupervised Recognizing Textual Entailment (RTE) system based on Dependency-based Compositional Semantics (DCS) and logical inference.

tifmo-old


TIFMO (Textual Inference Forward-chaining MOdule) is an unsupervised Recognizing Textual Entailment (RTE) system based on Dependency-based Compositional Semantics (DCS) and logical inference.

timeline-viewer


A planning.domains plugin for visualizing a temporal planner's output as a timeline.

tocc


Tocc is a tag-based file management system. It also includes a tag-based file system called Toccfs. The goal of Tocc is to provide a better system for classifying files which is more flexible than classic file systems based on a tree of files and directories.

torchnet


*torchnet* is a framework for [torch](http://torch.ch) which provides a set of abstractions aiming at encouraging code re-use as well as encouraging modular programming.

torcs-drivers


[TORCS][TORCS] is an open-source racing car simulation. We use it as a driving simulation to evaluate our [plan recognition][prGolog] system.

toxic-comments-classification


Disclaimer: the dataset for this competition contains text that may be considered profane, vulgar, or offensive.

traildb


TrailDB is an efficient tool for storing and querying series of events. This repository contains the core C library and the `tdb` command line tool.

transformers


This branch has the following patches:

transformstorm


This is a small interface built to play with small language models in the terminal.

transparentwindows


Please note that only this git repo contains the most recent version of the addon. Due to the review process it may take a while before the updates show up on the extensions.gnome.org page.

transpiler


*Universal-transpiler* is a source-to-source compiler that translates a small subset of several programming languages into several others. It is also able to translate several metasyntax notations, such as EBNF and ABNF. The translation is not always 100% accurate, but I hope it will still be useful.

tranX


A general-purpose **Tran**sition-based abstract synta**X** parser that maps natural language queries into machine executable source code (e.g., Python) or logical forms (e.g., lambda calculus). **[Online Demo](http://moto.clab.cs.cmu.edu:8081/)**.

trec-dd-jig


- The jig will print the feedback on the screen. Each piece of feedback is a JSON-dumped string.

triviaqa


# TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

This repo contains code for the paper: Mandar Joshi, Eunsol Choi, Daniel Weld, Luke Zettlemoyer. [TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension][triviaqa-arxiv]. In Association for Computational Linguistics (ACL) 2017, Vancouver, Canada.

ts


This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

ttk


This is the main repository for the Tarsqi Toolkit (TTK), a set of processing components for extracting temporal information from news wire texts. TTK extracts time expressions, events, subordination links and temporal links; in addition, it can ensure consistency of temporal information.

tui.el


** Introduction

This is an experiment in building purely text-based user interfaces (TUI's). The ultimate goal is to explore new paradigms for user interface design and development using Emacs. To this end, tui.el implements an API based on the popular React JavaScript framework in order to reduce the demands involved with designing and building complex text-based UI's.

turkle


This tool is meant to be used as a web service running locally on your network or personal machine. It will load HIT template files generated by the Amazon Mechanical Turk web GUI provided to requesters for creating HITs. Input CSV files are also uploaded to create a HIT based on the template with each row of values in the CSV file.

turnkey-owntracks


This is an [OwnTracks](http://owntracks.org) TurnKey-Linux back-end, with the following features:

tuyapi


A library for communicating with devices that use the [Tuya](http://tuya.com) cloud network. These devices are branded under many different names, but if port 6668 is open on your device chances are this library will work with it. Currently only supports smart plugs, but it should be fairly trivial to add other types of devices.

twitter_nlp


Output:

The output contains the tokenized and tagged words separated by spaces, with tags separated by a forward slash '/'. Example output:
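A hypothetical line in this format (the tokens and tags are invented for illustration):

```
Gotta/VB love/VB London/NNP in/IN the/DT spring/NN !/.
```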

UDepLambda


UDepLambda is a framework to convert Universal Dependencies trees to Logical Forms. It maps natural language to logical forms in an almost language-independent framework. For more details, please refer to our papers below.

uidaho-cs470-prolog


Prolog is a logic programming language based on a variant of first-order logic. To 'program' in Prolog you create a knowledge base of facts and rules about the problem; you may then query the knowledge base. Prolog uses a modified backchaining algorithm to search the knowledge base in an attempt to prove the query.

uiuc_ie_pipeline_fine_grained


### Running on raw text data

* Prepare a data directory `data` containing sub-directories `rsd` and `ltf`. The `rsd` sub-directory contains RSD (Raw Source Data, ending with `*.rsd.txt`), and the `ltf` sub-directory has LTF (Logical Text Format, ending with `*.ltf.xml`) files.
* If you have RSD files, please use [`aida_utilities/rsd2ltf.py`](https://github.com/limanling/uiuc_ie_pipeline_finegrained_source_code/blob/master/aida_utilities/rsd2ltf.py) to generate the LTF files:
```bash
docker run --rm -v ${ltf_dir}:${ltf_dir} -v ${rsd_dir}:${rsd_dir} -i limanling/uiuc_ie_m36 /opt/conda/envs/py36/bin/python /aida_utilities/rsd2ltf.py --seg_option nltk+linebreak --tok_option nltk_wordpunct --extension .rsd.txt ${rsd_dir} ${ltf_dir}
```
* If you have LTF files, please use the AIDA ltf2rsd tool (`LDC2018E62_AIDA_Month_9_Pilot_Eval_Corpus_V1.0/tools/ltf2txt/ltf2rsd.perl`) to generate the RSD files.
* Start services:
```bash
sh set_up_m36.sh
```
* Run the scripts. Note that the file paths are absolute paths:
```bash
sh pipeline_full_en.sh ${data_root}
```
For example:
```bash
sh pipeline_full_en.sh ${PWD}/data/testdata_dryrun
```

ukb


UKB is a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing knowledge base.

ulo


# The Upper Library Ontology (for metadata on theorem prover libraries)

This repository contains the [OWL2](https://www.w3.org/TR/owl2-overview/) implementation of the Upper Library Ontology [ulo.owl](ulo.owl) and [OWLDoc documentation](OWLDoc/).

UMBEL


First, it is a broad, general reference structure of 34,000 concepts, which provides a scaffolding to link and interoperate other datasets and domain vocabularies. Second, it is a base vocabulary for the construction of other concept-based domain ontologies, also designed for interoperation.

UniMath


This Coq library aims to formalize a substantial body of mathematics using the univalent point of view.

unison


[Unison](https://unisonweb.org) is a new programming language, currently under active development. It's a modern, statically-typed purely functional language, similar to Haskell, but with the ability to describe entire distributed systems with a single program. Here's an example of a distributed map-reduce implementation:

universal-pddl-parser


An algorithm for parsing any planning problem in PDDL format.

universal-pddl-parser-multiagent


An extension to the [Universal PDDL Parser](https://github.com/aig-upf/universal-pddl-parser) to handle multi-agent domains.

universe


Universe is a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications. This is the ``universe`` open-source library, which provides a simple Gym interface to each Universe environment.

universe-starter-agent


The codebase implements a starter agent that can solve a number of `universe` environments. It contains a basic implementation of the [A3C algorithm](https://arxiv.org/abs/1602.01783), adapted for real-time environments.

unsolve-ipc-2016


Overview
========

Repo contains the domains, generators, and scripts in general for the inaugural edition of the unsolvability IPC.

upshot-montague


`montague` is a little CCG semantic parsing library for Scala.

USC-DS-RelationExtraction


# USC Distantly-supervised Relation Extraction System

This repository puts together recent models and data sets for **sentence-level relation extraction** *using knowledge bases (i.e., distant supervision)*. In particular, it contains the source code for the WWW'17 paper *[CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases](https://arxiv.org/pdf/1610.08763.pdf)*.

usersim


The User Simulator is a tool designed to generate network and host activity for training purposes. It is intended for use in a closed network primarily consisting of Windows and Linux virtual machines. Other operating systems may be compatible, but are untested. The Linux version does not have access to all the features of the Windows version. In particular, the Windows version can run several of the programs in MS Office, while the Linux version obviously cannot.

utility-monitor


This uses [rtlamr][rtlamr] to process the radio broadcasts by the meter. I live in a less dense location than the blog author so only picked up three meters using the `idm+` message. My meter included a serial number on its face that directly matched one of those three meters so it was very easy to get the right reading.
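A sketch of the kind of rtlamr invocation this implies (the message type, meter ID, and output format here are illustrative assumptions):

```bash
# Listen for IDM broadcasts, keep only the meter whose ID matches the
# serial number printed on the meter's face, and emit JSON.
rtlamr -msgtype=idm -filterid=12345678 -format=json
```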

UVI


vagrant-mutate


Vagrant-mutate is a vagrant plugin to convert vagrant boxes to work with different providers.

vagrant-vbguest


*vagrant-vbguest* is a [Vagrant](http://vagrantup.com) plugin which automatically installs the host's VirtualBox Guest Additions on the guest system.

VAL


This repository hosts tools for AI Planning plans and planning models.

vampire


![GitHub Workflow Status (branch)](https://img.shields.io/github/workflow/status/vprover/vampire/CI/master) ![GitHub release (latest by date)](https://img.shields.io/github/v/release/vprover/vampire)

vbsix-lang2program


Reproducible experiments are in `/vbsix-lang2program/paper_experiments/experiments/`, organized according to their domain, search algorithm, and random seed. Each experiment's directory contains its data as described above. The directory `/vbsix-lang2program/paper_experiments/code/` holds two versions of the source code:

- strongsup_baseline - the [source code](https://github.com/kelvinguu/lang2program) accompanying the paper "[From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood](https://arxiv.org/abs/1704.07926)". This code should be used to reproduce the baseline beam-search experiments.
- strongsup_vbsix - the source code accompanying our paper. This code should be used to reproduce the VBSIX and ablation experiments.

vct


veewee


Veewee is a tool for easily (and repeatedly) building custom [Vagrant](https://github.com/mitchellh/vagrant) base boxes, KVMs, and virtual machine images.

vindinium-swi


This is a simple bot for http://vindinium.org/ implemented in SWI-Prolog.

virtstoryteller


This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

viz.js


This project builds [Graphviz](http://www.graphviz.org) with [Emscripten](http://kripken.github.io/emscripten-site/) and provides a simple wrapper for using it in the browser.

vms


This is a list of ContentMine virtual machines, with descriptions, in reverse date order (i.e. most recent first).

vonda


VOnDA is a framework for the implementation of reactive dialogue management functionality in dialogue systems for virtual agents. Although domain-independent, VOnDA is tailored towards dialogue systems with a focus on social communication, which implies the need for a long-term memory and high user adaptivity.

vs-code-default-keybindings


A list of the default keybindings for VS Code is surprisingly hard to find, even in the VS Code source, so I collected them all here. I've also included `negative` keybindings, which unmap the keybindings.

vscode


This repository ("`Code - OSS`") is where we (Microsoft) develop the [Visual Studio Code](https://code.visualstudio.com) product together with the community. Not only do we work on code and issues here, we also publish our [roadmap](https://github.com/microsoft/vscode/wiki/Roadmap), [monthly iteration plans](https://github.com/microsoft/vscode/wiki/Iteration-Plans), and our [endgame plans](https://github.com/microsoft/vscode/wiki/Running-the-Endgame). This source code is available to everyone under the standard [MIT license](https://github.com/microsoft/vscode/blob/main/LICENSE.txt).

vscode-emacs-mcx


This Visual Studio Code extension provides emacs-like keybindings and operations. This is inspired by [the great vscode extension by hiro-sun](https://github.com/hiro-sun/vscode-emacs) and its forks such as [vscode-emacs-friendly by Sebastian Zaha](https://github.com/SebastianZaha/vscode-emacs-friendly), [vscode-emacs-improved by rkwan94](https://github.com/rkwan94/vscode-emacs) and [vscode-emacs-neon by NotKyon](https://github.com/NotKyon/vscode-emacs-neon).

vscode-pddl


This extension makes VS Code a great place for modeling planning domains.

VST


wam_common_lisp


* 0th rule: Any sufficiently complicated Lisp or Scheme program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of ISO Prolog.
* Translating Lisp to Prolog gives Prolog:
  * Metaobject Protocol
  * Common Lisp Object System
  * Instant Prolog ecosystem/development libraries (days, not years)
    * Several decades of Common Lisp libraries may be translated to usable Prolog development libraries.
  * Maintain your code from the original Lisp or the translated Prolog (though it won't translate back)
    * Settings to try to emulate handwritten code ([Examples](https://github.com/TeamSPoon/wam_common_lisp/tree/master/prolog/wam_cl/README.md))
* Forms (at the REPL) are transpiled to Prolog, compiled to WAM, called/executed.
  * *only* 2-3x slower than SBCL
* Gives to Prolog more than we can list!
  * Similar to how CLISP is indispensable sometimes.
  * _a_ Common Lisp used for sanity testing
  * Makes debugging easy for Prolog and Lisp experts
* Picks up freebies: whatever the host Prolog system offers, such as
  * Garbage collection
  * Memoization/coinduction
  * Dynamic extent
  * Exception handling
  * Unwind-protect/cleanup
  * Native locatives
  * Two-way calling and embedding from C/C++/Python/C#/Mono/Scala/Java/Haskell/LUA/Perl
  * Makes platform executables and DLL/So files ([Quick Start](https://github.com/TeamSPoon/wam_common_lisp/blob/master/README.md#makeanexecutableandrunit))
  * (too enormous to go into)
* Developed/installed as a SWI-Prolog pack: [http://www.swi-prolog.org/pack/list?p=wam_common_lisp](http://www.swi-prolog.org/pack/list?p=wam_common_lisp)

## Incompleteness: must fix for release worthiness

* Bugs running/translating:
  * Fully working LOOP (must-fix)
  * SWANK (must-fix)
  * PAIP book code (bug, in-progress)
  * [DAYDREAMER](https://github.com/eriktmueller/daydreamer) (in-progress)
  * [KNOWLEDGE MACHINE](http://www.cs.utexas.edu/users/mfkb/RKF/km.html)
  * Quicklisp (bug, must-fix)
  * ASDF-INSTALL (bug, must-fix)
* Add missing impls:
  * delete-package (must-fix)
  * (more to be listed)
* Tests ([in-progress](https://github.com/TeamSPoon/wam_common_lisp/tree/master/t))
  * Must pass 70% or above of the CL-ANSI tests (bug, in-progress)
  * Ensure it passes _all_ CL-ANSI tests (with --ansi) (feature, always in-progress)
    * The hardest part is making sure it throws/complains about all the things it needs to
  * Need more tests!
* FFI (bug, in-progress)
  * Use https://github.com/JanWielemaker/ffi ?
  * Using SWICLI as FFI (SWICLI's FFI itself still needs work, but works for YAP as well)

## TODO _Features_

* Document this pack's Prolog source code (indeed, a feature!)
* Keep later `copy_term/2`'s cheap (feature, in-progress)
  * Experiment with ways to pass entire term object references as atoms (nb_current/2 allows access to the object's property map)
* Untangle the `pack` install deps
  * Moving predicates to logicmoo_utils from logicmoo_base (still in progress)
* De-packified version for portability?
  * YAP-Prolog (in-progress) (Lisp-to-Prolog benchmarking shows about a 5x speedup)
  * TODO: Sicstus, B-Prolog, Bin-Prolog, EcLiPSe Prolog and Jekejeke
  * Low priority: PrologCafe, Yield-Prolog

waybackpack


Waybackpack is a command-line tool that lets you download the entire Wayback Machine archive for a given URL.
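Basic usage, per its README (the target URL and output directory are examples):

```bash
# Download every archived snapshot of example.com into ./example-wayback/.
waybackpack example.com -d ./example-wayback
```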

Web-Karma


Karma is an information integration tool that enables users to quickly and easily integrate data from a variety of data sources including databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs. Users integrate information by modeling it according to an ontology of their choice using a graphical user interface that automates much of the process. Karma learns to recognize the mapping of data to ontology classes and then uses the ontology to propose a model that ties together these classes. Users then interact with the system to adjust the automatically generated model. During this process, users can transform the data as needed to normalize data expressed in different formats and to restructure it. Once the model is complete, users can publish the integrated data as RDF or store it in a database.

Web-page-classification


This repository contains all scripts associated with my research on topical Web-page classification. You can read the full paper describing the task, experiments, and results [here](paper.pdf).

web-speech-api


Tap the screen then say a colour — the grammar string contains a large number of HTML keywords to choose from, although we've removed most of the multiple word colors to remove ambiguity. We did keep goldenrod, cos, well.

weblegends


### What is weblegends?

weblegends is a DFHack plugin that runs a web server, inside Dwarf Fortress, that allows you to view your entire world's history, artifacts, settlements, heroes, and so much more... over the internet or just locally.

WebNav


WebNav is a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site consisting of web pages and hyperlinks to find a web page in which a query appears.

webODE


WebODE is an extensible ontology-engineering suite based on an application server, whose development started in 1999 and whose **support was discontinued in 2006**. The core of WebODE was its ontology access service, used by all the services and applications plugged into the server. The WebODE's Ontology Editor allowed editing and browsing WebODE ontologies, and was based on HTML forms and Java applets.

WebQR


The following would be the standard approach to identifying the ontological primitives in QR. An ontological primitive (e.g., a quantity) has:

* Exactly one **identifier**, consisting of an integer that is automatically assigned by an internal counter. The integer is appended to the path of the circle URI. Example: `localhost:5000/circle/17`. This is also used for dereferencing the circle and for sending HTTP requests from the client to the server.
* Zero or more **descriptive label**s. The most recently assigned descriptive label is set as the `rdfs:label` of the identifier and is displayed in the User Interface. All other descriptive labels are asserted as `qsim:old_label` literals (possibly including the timestamp of their abolition). Example: `< localhost:5000/circle/17, rdfs:label, "boom" >`, `< localhost:5000/circle/17, qsim:old_label, 'Tree' >`. If the user types text in a circle that is not a URI, then we assume it is a descriptive label.
* Zero or more **concept name**s that are existing URIs in the LOD. If the user types text in a circle that is a URI, this is assumed to be a concept name. An `owl:sameAs` relation with the identifier is asserted.

wekan


Wekan is a completely [Open Source][open_source] and [Free software][free_software] collaborative kanban board application with an MIT license.

wernicke


A redaction tool for structured data. Run `wernicke` with JSON on stdin, get redacted values out. Preserves structure and (to some extent) semantics. You might want this because you have test data where the actual values are sensitive. Because the changes are consistent within the data and the overall data structure is preserved, there's a better chance your data will stay suitable for testing even though it's been scrubbed.
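A sketch of the stdin/stdout flow described above (the input JSON and the redacted output values are invented for illustration):

```bash
echo '{"name": "Ada Lovelace", "ssn": "123-45-6789"}' | wernicke
# => {"name": "Fke Qzwmvahx", "ssn": "847-29-1053"}   (illustrative output)
```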

Whirl


Whirl is a toy esoteric language. See the [classic Whirl webpage](http://bigzaphod.github.com/Whirl/) for more info!

wikulu


This project is provided as is and is missing dependencies. Feel free to re-use parts for your own system, but please do not expect it to run out of the box.

won


wordnet-prolog


* _WNprolog-3.0BF.tar.gz_ is a bugfix release of _WNprolog-3.0_. It fixes some known problems, including the transitive hyponym bug.

world-universities-csv


world-universities-csv
======================

This is a forked copy of two CSV files with universities in the US and around the world.

wormstar


To try to cure myself I've written a new WordStar mode for emacs: its name is **WorMstar** (because WordStar is like a worm in my head...) and the elisp file that contains it is named `wm-mode.el`.

WWW-Flatten


WWW::Flatten is a web crawling tool for freezing pages into standalone files. I believe this works better than wget or "Save As, complete" in browsers.

XChange


XChange is a Java library providing a simple and consistent API for interacting with 60+ Bitcoin and other cryptocurrency exchanges, covering both trading and market data access.

XiaomiMiBand


This unpacked version includes patches to allow execution on Android 4.0.4 devices and on devices without Bluetooth 4.0 (but remember: if you don't have Bluetooth 4.0, the app will crash and there is nothing we can do).

XLM


XLM supports multi-GPU and multi-node training, and contains code for:

- **Language model pretraining**:
  - **Causal Language Model** (CLM)
  - **Masked Language Model** (MLM)
  - **Translation Language Model** (TLM)
- **GLUE** fine-tuning
- **XNLI** fine-tuning
- **Supervised / Unsupervised MT** training:
  - Denoising auto-encoder
  - Parallel data training
  - Online back-translation
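As a rough illustration of the masked-LM objective in that list (a generic sketch, not XLM's actual code; the mask index, ratio, and ignore value are assumptions):

```python
# Generic MLM masking: hide ~15% of tokens and train the model to recover them.
import random

MASK = 0  # hypothetical mask token index

def mask_tokens(tokens, ratio=0.15):
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < ratio:
            inputs.append(MASK)    # model sees the mask...
            targets.append(tok)    # ...and must predict the original token here
        else:
            inputs.append(tok)
            targets.append(-1)     # position ignored by the loss
    return inputs, targets
```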

xlnet


**XLNet** is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs [Transformer-XL](https://arxiv.org/abs/1901.02860) as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
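A toy sketch of the permutation idea behind that objective (illustrative only, not the repository's implementation): a random factorization order is sampled, and each token may attend only to tokens that come earlier in that order.

```python
# Sample a factorization order and compute what a position may attend to.
import random

def sample_factorization_order(seq_len):
    order = list(range(seq_len))
    random.shuffle(order)          # a random permutation of positions
    return order

def visible_context(order, position):
    """Positions a token at `position` may attend to under this order."""
    rank = order.index(position)
    return order[:rank]
```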

xtools


This library contains several development tools. Not all are listed here, but the most stable and relevant ones follow:

yago3


YAGO is a large semantic knowledge base, derived from Wikipedia, WordNet, WikiData, GeoNames, and other data sources. Currently, YAGO knows more than 17 million entities (like persons, organizations, cities, etc.) and contains more than 150 million facts about these entities.

yago4


This is the pipeline to run YAGO 4.

Yancy


[Yancy](https://metacpan.org/pod/Yancy) is a simple content management system (CMS) for administering content in a database. Yancy accepts a configuration file that describes the data in the database and builds a website that lists all of the available data and allows a user to edit data, delete data, and add new data.

yesbot


This file should not be added to git's managed files.

yodaqa


YodaQA is an open source Factoid Question Answering system that can produce answers both from databases and from text corpora, using on-the-fly information extraction. By default, open domain question answering is performed on top of the Freebase and DBpedia knowledge bases as well as the texts of enwiki articles.

yolov5


This repository represents Ultralytics open-source research into future object detection methods, and incorporates our lessons learned and best practices evolved over training thousands of models on custom client datasets with our previous YOLO repository https://github.com/ultralytics/yolov3. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.

youtube-dl


**youtube-dl** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.
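Besides command-line use, the program can also be embedded in Python code; a minimal sketch (the options dict and video URL are illustrative):

```python
# Download a video through youtube-dl's embedded Python interface.
import youtube_dl

ydl_opts = {}  # defaults; see the project docs for available options
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```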

youtube-upload


_Youtube-upload_ is a command line Python script that uploads videos to Youtube (it should work on any platform that runs Python: GNU/Linux, BSD, OS X, Windows, ...) using the Youtube [APIv3](https://developers.google.com/youtube/v3/).

z3


zeros-silo


This should report that it passes all the tests. If not, something might be wrong with your configuration, or there may be some incompatibility between the script and your system. If you suspect the latter, let me know the details!

zf-in-agda


It is a predicate that has an Ordinal argument.

zmeventserver


A WSS (Secure WebSockets) and/or MQTT based event notification server that broadcasts new events to any authenticated listeners. (As of 0.6, it also includes a non-secure websocket option, if that's how you want to run it.)
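A hypothetical listener sketch using the third-party `websockets` library; the endpoint, port, and the shape of the authentication message are assumptions, so consult the server's protocol documentation for the real message format:

```python
# Connect to the event server and print incoming event notifications.
import asyncio
import json
import websockets

async def listen():
    async with websockets.connect("wss://zoneminder.example.com:9000") as ws:
        # hypothetical authentication handshake
        await ws.send(json.dumps(
            {"event": "auth", "data": {"user": "admin", "password": "secret"}}
        ))
        async for message in ws:     # server pushes new events as they occur
            print(json.loads(message))

asyncio.run(listen())
```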

zone-matrix-wake-up


I can't believe it has been 20 years already since the release of The Matrix movie.