Connectionism is a technical term for a group of related techniques. These techniques include areas such as Artificial Neural Networks, Semantic Networks and a few other similar ideas. My present focus is on neural networks (though I am looking for resources on the other techniques). Neural networks are programs designed to simulate the workings of the brain. They consist of a network of small mathematical nodes, which work together to form patterns of information. They have tremendous potential and currently seem to be enjoying a great deal of success with image processing and robot control.
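As a minimal sketch of the idea (illustrative only, not tied to any package listed below), each node computes a weighted sum of its inputs and passes it through an activation function; wiring nodes together forms a network:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Two nodes wired in sequence form a tiny "network" (weights chosen arbitrarily).
hidden = neuron([0.5, 0.9], [1.2, -0.8], 0.1)
output = neuron([hidden], [2.0], -1.0)
```

Training a real network consists of adjusting those weights and biases so the outputs match desired targets.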

These are libraries of code or classes for use in programming within the Connectionist field. They are not meant as stand-alone applications, but rather as tools for building your own applications.

**Software for Flexible Bayesian Modeling**-
This software implements flexible Bayesian models for regression and classification applications that are based on multilayer perceptron neural networks or on Gaussian processes. The implementation uses Markov chain Monte Carlo methods. Software modules that support Markov chain sampling are included in the distribution, and may be useful in other applications.
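To illustrate the Markov chain Monte Carlo idea this package is built on, here is a hand-rolled random-walk Metropolis sampler for a one-dimensional standard normal (a conceptual sketch only; the package's own samplers are far more sophisticated):

```python
import math
import random

def metropolis(log_prob, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + noise, accept with
    probability min(1, p(x')/p(x)), otherwise keep the current state."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal  # accept the move
        samples.append(x)  # rejected moves repeat the current state
    return samples

# Sample from a standard normal: log p(x) = -x^2 / 2 (up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
mean = sum(samples) / len(samples)
```

The chain's empirical mean converges toward the true mean (0 here) as the number of steps grows.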

**BELIEF**-
BELIEF is a Common Lisp implementation of the Dempster and Kong fusion and propagation algorithm for Graphical Belief Function Models and the Lauritzen and Spiegelhalter algorithm for Graphical Probabilistic Models. It includes code for manipulating graphical belief models such as Bayes Nets and Relevance Diagrams (a subset of Influence Diagrams) using both belief functions and probabilities as basic representations of uncertainty. Because it uses the Shenoy and Shafer version of the algorithm, one of its unique features is that it supports both probability distributions and belief functions. It also has limited support for second-order models (probability distributions on parameters).
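A sketch of the Dempster-Shafer combination rule that this kind of software builds on (illustrative Python only; BELIEF's own Lisp API differs). Mass functions are dictionaries from focal sets to masses, and combination multiplies masses, discards conflicting pairs, and renormalizes:

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions whose keys are
    frozenset focal elements and whose values are masses summing to 1."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # disjoint focal sets: conflicting mass
    # Renormalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources of evidence about a hypothetical diagnosis, "flu" vs "cold".
m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
m = combine(m1, m2)
```

After combination the masses again sum to one, with the conflicting portion (here 0.3) redistributed.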

**bpnn.py**-
- Web site: http://arctrix.com/nas/python/bpnn.py

A simple back-propagation ANN in Python.
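In the same spirit, a minimal back-propagation sketch in pure Python: a 2-2-1 network trained on XOR (illustrative only, not bpnn.py's actual code):

```python
import math
import random

random.seed(1)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2-2-1 network stored as plain weight lists (last weight in each row is the bias).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def epoch(lr=0.5):
    """One pass over the data; returns total squared error before the updates."""
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        total += (t - y) ** 2
        # Backpropagate: delta terms are dError/dnet at each unit.
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_out[i] -= lr * d_out * h[i]
        w_out[2] -= lr * d_out
        for i in range(2):
            w_hidden[i][0] -= lr * d_hid[i] * x[0]
            w_hidden[i][1] -= lr * d_hid[i] * x[1]
            w_hidden[i][2] -= lr * d_hid[i]
    return total

first = epoch()
for _ in range(2000):
    last = epoch()
```

Repeated epochs drive the squared error down from its initial value.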

**brain**-
- Web site: http://harthur.github.com/brain/

Brain is a lightweight JavaScript library for neural networks. It implements the standard feedforward multi-layer perceptron neural network trained with backpropagation.

**brain-simulator**-
- Web site: http://www.briansimulator.org/

Brian is a clock-driven simulator for spiking neural networks. It is designed with an emphasis on flexibility and extensibility, for rapid development and refinement of neural models. Neuron models are specified by sets of user-defined differential equations, threshold conditions and reset conditions (given as strings). The focus is primarily on networks of single-compartment neuron models (e.g. leaky integrate-and-fire or Hodgkin-Huxley type neurons). It is written in Python and is easy to learn and use, highly flexible and easily extensible. Features include:

- a system for specifying quantities with physical dimensions
- exact numerical integration for linear differential equations
- Euler, Runge-Kutta and exponential Euler integration for nonlinear differential equations
- synaptic connections with delays
- short-term and long-term plasticity (spike-timing dependent plasticity)
- a library of standard model components, including integrate-and-fire equations, synapses and ionic currents
- a toolbox for automatically fitting spiking neuron models to electrophysiological recordings
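The clock-driven approach can be sketched in a few lines of plain Python: Euler integration of a leaky integrate-and-fire neuron with threshold and reset conditions. (Brian itself takes the equations as strings and handles units and integration for you; this is only the concept, with arbitrary parameter values.)

```python
# Euler integration of a leaky integrate-and-fire neuron:
#   tau * dV/dt = (V_rest - V) + R * I, spike and reset when V crosses threshold.
tau, v_rest, v_reset, v_thresh, r, dt = 10.0, -70.0, -70.0, -54.0, 1.0, 0.1

v, spikes = v_rest, []
for step in range(5000):           # 500 ms of simulated time in 0.1 ms steps
    i_inj = 20.0                   # constant injected current (arbitrary units)
    v += dt / tau * ((v_rest - v) + r * i_inj)
    if v >= v_thresh:              # threshold condition
        spikes.append(step * dt)   # record spike time (ms)
        v = v_reset                # reset condition
```

With these values the membrane potential relaxes toward -50 mV, crossing the -54 mV threshold repeatedly, so the neuron fires at a regular rate.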

**Caffe2**-
- Web site: http://caffe2.ai/

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind. Caffe2 aims to provide an easy and straightforward way for you to experiment with deep learning and leverage community contributions of new models and algorithms. You can bring your creations to scale using the power of GPUs in the cloud or to the masses on mobile with Caffe2's cross-platform libraries.

**chainer**-
- Web site: http://chainer.org/

Chainer's goal is to bridge the gap between algorithms and implementations of deep learning. Chainer is a flexible framework for neural networks that enables writing complex architectures simply and intuitively. Chainer adopts a "Define-by-Run" scheme, i.e., the network is defined on the fly via the actual forward computation. More precisely, Chainer stores the history of computation instead of programming logic. This strategy enables it to fully leverage the power of programming logic in Python. For example, Chainer does not need any magic to introduce conditionals and loops into network definitions.
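The "Define-by-Run" idea can be sketched without Chainer itself: each operation records itself on a tape as the forward pass executes, so ordinary Python loops and data-dependent conditionals shape the recorded network (a toy sketch, not Chainer's API):

```python
# A toy "define-by-run" graph: the history of computation is recorded as it runs.
tape = []

def mul(a, b):
    tape.append(("mul", a, b))
    return a * b

def add(a, b):
    tape.append(("add", a, b))
    return a + b

def forward(x, depth):
    y = x
    for _ in range(depth):       # a Python loop, not a static graph construct
        y = add(mul(y, 2.0), 1.0)
    if y > 10:                   # a data-dependent conditional
        y = mul(y, 0.5)
    return y

result = forward(1.0, depth=3)   # 1 -> 3 -> 7 -> 15, then halved to 7.5
```

The recorded tape (here seven operations) is exactly the network that was actually executed for this input, and could be replayed for backpropagation.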

**CNNs**-
- Web site: http://www.isiweb.ee.ethz.ch/haenggi/CNNsim.html
- Newer Version: http://www.isiweb.ee.ethz.ch/haenggi/CNNsim_adv_manual.html
- Old Page: http://www.ce.unipr.it/research/pardis/CNN/cnn.html

Cellular Neural Networks (CNN) is a massively parallel computing paradigm defined in discrete N-dimensional spaces. This is a visualizing CNN simulator that lets you track the way the state trajectories evolve, giving insight into the behavior of CNN dynamics. This may be useful for forming an idea of how a CNN 'works', especially for those who are not experienced in CNN theory.
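A sketch of the underlying dynamics, as usually given in the CNN literature: each cell's state follows dx/dt = -x + A*y + B*u + z, where A and B are local feedback and control templates and y = f(x) is a saturating output. Below is a 1-D Euler simulation with hypothetical template values (illustrative only, not the simulator's own input format):

```python
# 1-D cellular neural network, Euler-integrated.
A = [0.0, 2.0, 0.0]        # feedback template (applied to outputs y)
B = [0.0, 1.0, 0.0]        # control template (applied to inputs u)
z, dt = 0.0, 0.1

u = [1.0, -1.0, 1.0, 1.0, -1.0]   # fixed input pattern
x = [0.0] * len(u)                # cell states

f = lambda v: 0.5 * (abs(v + 1) - abs(v - 1))  # standard CNN output nonlinearity

def conv(template, values, i):
    """Apply a 3-tap template around cell i (zero outside the boundary)."""
    total = 0.0
    for k in (-1, 0, 1):
        j = i + k
        if 0 <= j < len(values):
            total += template[k + 1] * values[j]
    return total

for _ in range(200):  # integrate to t = 20 time constants
    y = [f(v) for v in x]
    x = [x[i] + dt * (-x[i] + conv(A, y, i) + conv(B, u, i) + z)
         for i in range(len(x))]

output = [f(v) for v in x]
```

With these templates each cell's output saturates to the sign of its input, the kind of trajectory-to-equilibrium behavior the simulator visualizes.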

**CNTK**-
- Web site: https://github.com/Microsoft/CNTK

CNTK, the Computational Network Toolkit by Microsoft Research, is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs. CNTK makes it easy to realize and combine popular model types such as feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs). It implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers.

**CONICAL**-
- Web site: http://strout.net/conical/

CONICAL is a C++ class library for building simulations common in computational neuroscience. Currently its focus is on compartmental modeling, with capabilities similar to GENESIS and NEURON. A model neuron is built out of compartments, usually with a cylindrical shape. When small enough, these open-ended cylinders can approximate nearly any geometry. Future classes may support reaction-diffusion kinetics and more. A key feature of CONICAL is its cross-platform compatibility; it has been fully co-developed and tested under Unix, DOS, and Mac OS.

**deeplearn.js**-
- Web site: https://deeplearnjs.org/

deeplearn.js is an open source hardware-accelerated JavaScript library for machine intelligence. deeplearn.js brings performant machine learning building blocks to the web, allowing you to train neural networks in a browser or run pre-trained models in inference mode.

It provides two APIs: an immediate execution model (think NumPy) and a deferred execution model mirroring the TensorFlow API. deeplearn.js was originally developed by the Google Brain PAIR team to build powerful interactive machine learning tools for the browser, but it can be used for everything from education, to model understanding, to art projects.

**Detectron**-
- Web site: https://research.fb.com/downloads/detectron/
- Web site: https://github.com/facebookresearch/Detectron

Detectron is Facebook AI Research's software system that implements state-of-the-art object detection algorithms, including Mask R-CNN. It is written in Python and powered by the Caffe2 deep learning framework.

The goal of Detectron is to provide a high-quality, high-performance codebase for object detection research. It is designed to be flexible in order to support rapid implementation and evaluation of novel research. Detectron includes implementations of the following object detection algorithms: Mask R-CNN, RetinaNet, Faster R-CNN, RPN, Fast R-CNN, and R-FCN, using the following backbone network architectures: ResNeXt{50,101,152}, ResNet{50,101,152}, Feature Pyramid Networks (with ResNet/ResNeXt), and VGG16.

**dynet**-
- Web site: https://github.com/clab/dynet

DyNet (Dynamic neural network library) is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks that have dynamic structures that change for every training instance. For example, these kinds of networks are particularly important in natural language processing tasks, and DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, morphological inflection, and many other application areas.

**Encog**-
- Web site: http://www.heatonresearch.com/

Encog is an advanced neural network and machine learning framework. Encog contains classes to create a wide variety of networks, as well as support classes to normalize and process data for these neural networks. Encog trains using multithreaded resilient propagation. Encog can also make use of a GPU to further speed processing time. A GUI based workbench is also provided to help model and train neural networks. Encog has been in active development since 2008. Encog is available for Java, .Net and Silverlight.

**FANN**-
- Web site: http://leenissen.dk/fann/

Fast Artificial Neural Network Library is a free open source neural network library, which implements multilayer artificial neural networks in C with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. PHP, C++, .NET, Ada, Python, Delphi, Octave, Ruby, Prolog, Pure Data and Mathematica bindings are available. A reference manual accompanies the library with examples and recommendations on how to use the library. A graphical user interface is also available for the library.

**FCNN**-
- Web site: http://fcnn.sourceforge.net/

FCNN (Fast Compressed Neural Networks) is a free open source C++ library for artificial neural network computations. It is easy to use and extend, written in modern C++ and is very fast (to the author's best knowledge it is the fastest freely available neural network library). All FCNN classes are templated to support both single and double precision computations. The internal representation of a network in FCNN differs from that of other libraries, allowing true code modularisation with simultaneous speed improvements.

**ffnet**-
- Web site: http://ffnet.sourceforge.net/

ffnet is a fast and easy-to-use feed-forward neural network training solution for Python. Many nice features are implemented: arbitrary network connectivity, automatic data normalization, very efficient training tools, and network export to Fortran code.

**Joone**-
- Web site: http://sourceforge.net/projects/joone/

Joone is a neural net framework to create, train and test neural nets. The aim is to create a distributed environment based on JavaSpaces, for both enthusiast and professional users, built on the newest Java technologies. Joone is composed of a central engine that is the fulcrum of all applications that already exist or will be developed. The neural engine is modular, scalable, multitasking and tensile. Everyone can write new modules to implement new algorithms or new architectures starting from the simple components distributed with the core engine. The main idea is to create the basis to promote a zillion AI applications that revolve around the core framework.

**Keras**-
- Web site: https://github.com/fchollet/keras

Keras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

Use Keras if you need a deep learning library that:

- allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
- supports both convolutional networks and recurrent networks, as well as combinations of the two.
- supports arbitrary connectivity schemes (including multi-input and multi-output training).
- runs seamlessly on CPU and GPU.

**Matrix Class**-
- FTP site: ftp://ftp.cs.ucla.edu/pub/

A simple, fast, efficient C++ Matrix class designed for scientists and engineers. The Matrix class is well suited for applications with complex math algorithms. As a demonstration of the Matrix class, it was used to implement the backward error propagation algorithm for a multi-layer feed-forward artificial neural network.

**NEAT**-
- Web site: http://nn.cs.utexas.edu/project-view.php?RECORD_KEY(Projects)=ProjID&ProjID(Projects)=14
- Web site: http://www.cs.ucf.edu/~kstanley/neat.html

Many neuroevolution methods evolve fixed-topology networks. Some methods evolve topologies in addition to weights, but these usually have a bound on the complexity of networks that can be evolved and begin evolution with random topologies. This project is based on a neuroevolution method called NeuroEvolution of Augmenting Topologies (NEAT) that can evolve networks of unbounded complexity from a minimal starting point.

The research has a broader goal of showing that evolving topologies is necessary to achieve 3 major goals of neuroevolution: (1) Continual coevolution: Successful competitive coevolution can use the evolution of topologies to continuously elaborate strategies. (2) Evolution of Adaptive Networks: The evolution of topologies allows neuroevolution to evolve adaptive networks with plastic synapses by designating which connections should be adaptive and in what ways. (3) Combining Expert Networks: Separate expert neural networks can be fused through the evolution of connecting neurons between them.
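A toy sketch of the neuroevolution idea: mutate a network's weights and keep the fitter genome. (Real NEAT also evolves topology, starting minimally and protecting innovation through speciation and historical markings; this hill-climbing sketch with a hypothetical single-neuron genome only shows the mutate-and-select loop.)

```python
import math
import random

rng = random.Random(0)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # the OR function

def fitness(w):
    """Negative squared error of a single sigmoid neuron (w[2] is the bias)."""
    err = 0.0
    for x, t in data:
        y = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))
        err += (t - y) ** 2
    return -err

best = [rng.uniform(-1, 1) for _ in range(3)]
f_init = fitness(best)
best_fit = f_init
for _ in range(500):
    child = [w + rng.gauss(0, 0.3) for w in best]  # mutate the genome's weights
    child_fit = fitness(child)
    if child_fit > best_fit:                       # selection: keep the fitter one
        best, best_fit = child, child_fit
```

Over generations the surviving genome's fitness only improves.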

**NEUGO**-
- Web site: https://github.com/wh1t3w01f/neugo

NEUGO is a simple neural network framework in Go. You can use this framework for fast prototyping that involves neural networks, simply in two steps: configure and run. This framework does NOT include implementations of training methods; I had EAGO in mind when I started this project. EAGO will train the neural nets through its NE (NeuroEvolution) package.

**NeuroLab**-
- Web site: http://packages.python.org/neurolab/

NeuroLab is a library of basic neural network algorithms with flexible network configurations and learning algorithms for Python. To simplify use of the library, its interface is similar to that of the Neural Network Toolbox (NNT) for MATLAB. The library is based on the numpy package (http://numpy.scipy.org); some learning algorithms use scipy.optimize (http://scipy.org).

**NuPIC**-
- Web site: http://www.numenta.org/
- Web site: https://github.com/numenta/nupic

The Numenta Platform for Intelligent Computing (NuPIC) is built around cortical learning algorithms, a new variation of HTM networks (Hierarchical Temporal Memory), based on Jeff Hawkins's ideas as laid out in his book On Intelligence. NuPIC consists of the Numenta Tools Framework and the Numenta Runtime Engine.

**Pulcinella**-
- Web site: http://iridia.ulb.ac.be/pulcinella/

Pulcinella is written in CommonLisp, and appears as a library of Lisp functions for creating, modifying and evaluating valuation systems. Alternatively, the user can choose to interact with Pulcinella via a graphical interface (only available in Allegro CL). Pulcinella provides primitives to build and evaluate uncertainty models according to several uncertainty calculi, including probability theory, Dempster-Shafer's theory of belief functions, and the possibility theory of Zadeh, Dubois and Prade. A User's Manual is available on request.

**PyTorch**-
- Web site: http://pytorch.org/

PyTorch is a python package that provides two high-level features:

- Tensor computation (like numpy) with strong GPU acceleration
- Deep Neural Networks built on a tape-based autograd system

It is a deep learning research platform that provides a NumPy-like ndarray (tensor) that can utilize GPUs, and is meant to be used as a framework. It provides a unique, dynamic way of building neural networks using reverse-mode auto-differentiation, which works somewhat like using and replaying a tape recorder. It is designed to be fast and flexible for a large variety of research problems.
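The tape-recorder analogy can be made concrete in plain Python: the forward pass records each operation and its local derivatives, and the recording is then walked backwards to accumulate gradients (a toy reverse-mode autodiff sketch, not PyTorch's API):

```python
class Var:
    """A value that remembers how it was computed, for reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __mul__(self, other):
        # Record each parent with its local derivative d(out)/d(parent).
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self):
        """Replay the recorded graph backwards, accumulating chain-rule terms."""
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, local in node.parents:
                parent.grad += node.grad * local
                stack.append(parent)

x = Var(3.0)
y = x * x + x        # d/dx (x^2 + x) = 2x + 1, which is 7 at x = 3
y.backward()
```

After `backward()`, `x.grad` holds the derivative of `y` with respect to `x`.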

**scnANNlib**-
SCN Artificial Neural Network Library provides a programmer with a simple object-oriented API for constructing ANNs. Currently, the library supports non-recursive networks with an arbitrary number of layers, each with an arbitrary number of nodes. Facilities exist for training with momentum, and there are plans to gracefully extend the functionality of the library in later releases.

**TensorFlow**-
- Web site: https://www.tensorflow.org/

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
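The dataflow-graph style can be sketched in plain Python: nodes are built first as inert data, then a separate step walks the graph to produce values (a toy sketch of the build-then-run idea, not TensorFlow's API, and using scalars where TensorFlow would use tensors):

```python
# Build phase: construct graph nodes without computing anything.
def const(v):
    return ("const", v)

def mul(a, b):
    return ("mul", a, b)

def add(a, b):
    return ("add", a, b)

# Run phase: recursively evaluate the graph's nodes.
def run(node):
    op = node[0]
    if op == "const":
        return node[1]
    lhs, rhs = run(node[1]), run(node[2])
    return lhs * rhs if op == "mul" else lhs + rhs

graph = add(mul(const(3.0), const(4.0)), const(5.0))  # represents 3*4 + 5
result = run(graph)
```

Because the graph exists as data before it runs, a framework is free to optimize it or place its operations on different devices.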

**theano**-
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:

- tight integration with NumPy - Use numpy.ndarray in Theano-compiled functions.
- transparent use of a GPU - Perform data-intensive calculations up to 140x faster than on a CPU (float32 only).
- efficient symbolic differentiation - Theano does your derivatives for functions with one or many inputs.
- speed and stability optimizations - Get the right answer for log(1+x) even when x is really tiny.
- dynamic C code generation - Evaluate expressions faster.
- extensive unit-testing and self-verification - Detect and diagnose many types of errors.
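The log(1+x) stability point can be demonstrated with the standard library (Theano applies such rewrites automatically during compilation; here the stable form is chosen by hand via math.log1p):

```python
import math

x = 1e-17
naive = math.log(1.0 + x)   # 1.0 + 1e-17 rounds to exactly 1.0 in float64,
                            # so the naive form loses the answer entirely
stable = math.log1p(x)      # log1p computes log(1 + x) accurately for tiny x
```

For tiny `x`, the naive expression returns 0.0 while `log1p` returns a value essentially equal to `x` itself.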

**UTCS Neural Nets Research Group Software**-
- Web site: http://nn.cs.utexas.edu/soft-list.php

A bit different from the other entries, this is a reference to a collection of software rather than one application. It was all developed by the UTCS Neural Net Research Group. Here's a summary of some of the packages available:

- Natural Language Processing
- MIR - Tcl/Tk-based rapid prototyping for sentence processing
- SPEC - Parsing complex sentences
- DISCERN - Processing script-based stories, including
- PROC - Parsing, generation, question answering
- HFM - Episodic memory organization
- DISLEX - Lexical processing
- DISCERN - The full integrated model

- FGREPNET - Learning distributed representations

- Self-Organization
- LISSOM - Maps with self-organizing lateral connections.
- FM - Generic Self-Organizing Maps

- Neuroevolution
  - Enforced Sub-Populations (ESP) for sequential decision tasks
  - Non-Markov Double Pole Balancing
  - Symbiotic, Adaptive NeuroEvolution (SANE; predecessor of ESP)
    - JavaSANE - Java software package for applying SANE to new tasks
    - SANE-C - C version, predecessor of JavaSANE
    - Pole Balancing - Neuron-level SANE on the Pole Balancing task
  - NeuroEvolution of Augmenting Topologies (NEAT) software for evolving neural networks using structure

**Various (C++) Neural Networks**-
Example neural net code from the book The Pattern Recognition Basics of AI. These are simple example implementations of various neural nets. They work well as a starting point for simple experimentation and for learning what the code behind the simulators is like. The types of networks available on this site (implemented in C++) are:

- The Backprop Package
- The Nearest Neighbor Algorithms
- The Interactive Activation Algorithm
- The Hopfield and Boltzman machine Algorithms
- The Linear Pattern Classifier
- ART I
- Bi-Directional Associative Memory
- The Feedforward Counter-Propagation Network

These are various applications, software kits, etc. meant for research in the field of Connectionism. Their ease of use will vary, as they were designed to meet particular research interests more than to be easy-to-use commercial packages.

**Aspirin - MIGRAINES**-
(am6.tar.Z on ftp site)

The software that we are releasing now is for creating and evaluating feed-forward networks such as those used with the backpropagation learning algorithm. The software is aimed both at the expert programmer/neural network researcher who may wish to tailor significant portions of the system to his/her precise needs, and at casual users who wish to use the system with an absolute minimum of effort.

**DDLab**-
- Web site: http://www.ddlab.com/

DDLab is an interactive graphics program for research into the dynamics of finite binary networks, relevant to the study of complexity, emergent phenomena, neural networks, and aspects of theoretical biology such as gene regulatory networks. A network can be set up with any architecture between regular CA (1d or 2d) and "random Boolean networks" (networks with arbitrary connections and heterogeneous rules). The network may also have heterogeneous neighborhood sizes.

**Emergent**-
Note: this is a descendant of PDP++

emergent is a comprehensive, full-featured neural network simulator that allows for the creation and analysis of complex, sophisticated models of the brain in the world. With an emphasis on qualitative analysis and teaching, it also supports the workflow of professional neural network researchers. The GUI environment allows users to quickly construct basic networks, modify the input/output patterns, automatically generate the basic programs required to train and test the network, and easily utilize several data processing and network analysis tools. In addition to the basic preset network train and test programs, emergent provides a high-level drag-and-drop programming interface, built on top of a scripting language that has full introspective access to all aspects of networks and the software itself. This allows one to write programs that seamlessly weave together the training of a network and the evolution of its environment without ever typing a line of code. Networks and all of their state variables are visually inspected in 3D, allowing for a quick "visual regression" of network dynamics and robot behavior.

**GENESIS**-
- Web site: http://genesis-sim.org/

GENESIS (short for GEneral NEural SImulation System) is a general purpose simulation platform which was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. GENESIS has provided the basis for laboratory courses in neural simulation at both Caltech and the Marine Biological Laboratory in Woods Hole, MA, as well as several other institutions. Most current GENESIS applications involve realistic simulations of biological neural systems. Although the software can also model more abstract networks, other simulators are more suitable for backpropagation and similar connectionist modeling.

**JavaBayes**-
- Web site: http://www.cs.cmu.edu/~javabayes/

The JavaBayes system is a set of tools, containing a graphical editor, a core inference engine and a parser. JavaBayes can produce:

- the marginal distribution for any variable in a network.
- the expectations for univariate functions (for example, expected value for variables).
- configurations with maximum a posteriori probability.
- configurations with maximum a posteriori expectation for univariate functions.
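The first of these, a marginal distribution, can be sketched by brute-force enumeration on a hypothetical two-node network (illustrative Python only, not JavaBayes code; the variable names and probabilities are made up):

```python
# A two-node Bayes net: Rain -> WetGrass, with hypothetical probability tables.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

# Marginalize: P(WetGrass) = sum over Rain of P(Rain) * P(WetGrass | Rain).
p_wet = {True: 0.0, False: 0.0}
for rain, pr in p_rain.items():
    for wet, pw in p_wet_given_rain[rain].items():
        p_wet[wet] += pr * pw
```

Enumeration like this is exponential in the number of variables; inference engines such as JavaBayes's use the network structure to do much better.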

**Jbpe**-
Jbpe is a back-propagation neural network editor/simulator.

Features

- Standard back-propagation network creation.
- Saving the network as a text file, which can be edited and loaded back.
- Saving/loading binary files.
- Learning from a text file (with the structure specified below); the number of learning periods or desired network energy can be specified as a criterion.
- Network recall.

**lens**-
- Web site: http://tedlab.mit.edu/~dr/Lens/

Lens is an efficient, yet flexible, neural network simulator that runs on a variety of platforms, is able to handle large, complex simulations, but is also reasonably easy for novices to operate. Lens has three main design objectives:

- Speed: Both in performance and development time.
- Flexibility: Hundreds of scriptable functions and components.
- Customizability: Modular code to allow for easy extensions.

**Nengo**-
- Web site: http://www.nengo.ca/

Nengo (Nengo Neural Simulator) is a graphical and scripting based software package for simulating large-scale neural systems.

To use it, you define groups of neurons in terms of what they represent, and then form connections between neural groups in terms of what computation should be performed on those representations. Nengo then uses the Neural Engineering Framework (NEF) to solve for the appropriate synaptic connection weights to achieve this desired computation. Nengo also supports various kinds of learning. Nengo helps make detailed spiking neuron models that implement complex high-level cognitive algorithms.
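The weight-solving step can be sketched with NumPy: given a population of neurons with (here hypothetical, randomly generated) tuning curves, decoders that reconstruct a target function are found by least squares. This is only the core idea; Nengo's actual API and the full NEF involve much more.

```python
import numpy as np

# Hypothetical population of 20 rectified-linear neurons encoding x in [-1, 1].
rng = np.random.default_rng(0)
xs = np.linspace(-1, 1, 50)

gains = rng.uniform(0.5, 2.0, size=20)
biases = rng.uniform(-1.0, 1.0, size=20)
encoders = rng.choice([-1.0, 1.0], size=20)   # preferred direction of each neuron

# Tuning curves: one column of activity per neuron, one row per x value.
activity = np.maximum(0.0, gains * (xs[:, None] * encoders) + biases)

# Solve for decoders d so that activity @ d approximates the target f(x) = x.
target = xs
decoders, *_ = np.linalg.lstsq(activity, target, rcond=None)
estimate = activity @ decoders
error = np.sqrt(np.mean((estimate - target) ** 2))
```

The decoded estimate tracks the represented value closely; connection weights between populations are then built from these decoders and the next population's encoders.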

Among other things, Nengo has been used to implement motor control, visual attention, serial recall, action selection, working memory, attractor networks, inductive reasoning, path integration, and planning with problem solving.

The Spaun neural simulator (http://models.nengo.ca/spaun) is implemented in Nengo, and its source is available as well.

**Neural Network Generator**-
- FTP site: ftp://ftp.idsia.ch/pub/rafal/

The Neural Network Generator is a genetic algorithm for the topological optimization of feedforward neural networks. It implements the Semantic Changing Genetic Algorithm and the Unit-Cluster Model. The Semantic Changing Genetic Algorithm is an extended genetic algorithm that allows fast dynamic adaptation of the genetic coding through population analysis. The Unit-Cluster Model is an approach to the construction of modular feedforward networks with a "backbone" structure.

NOTE: To compile this on Linux requires one change in the Makefiles. You will need to change '-ltermlib' to '-ltermcap'.

**NEURON**-
- Web site: http://www.neuron.yale.edu/

NEURON is an extensible nerve modeling and simulation program. It allows you to create complex nerve models by connecting multiple one-dimensional sections together to form arbitrary cell morphologies, and allows you to insert multiple membrane properties into these sections (including channels, synapses, ionic concentrations, and counters). The interface was designed to present the neural modeler with an intuitive environment and hide the details of the numerical methods used in the simulation.

**Neuroph**-
- Web site: http://neuroph.sourceforge.net/

Neuroph is a lightweight Java neural network framework for developing common neural network architectures. It contains a well-designed, open source Java library with a small number of basic classes that correspond to basic NN concepts. It also has a nice GUI neural network editor for quickly creating Java neural network components.

**PDP++**-
- Web site: http://archive.cnbc.cmu.edu/Resources/PDP++/PDP++.html
- FTP mirror (US): ftp://grey.colorado.edu/pub/oreilly/pdp++/

NOTE: Renamed to Emergent

As the field of Connectionist modeling has grown, so has the need for a comprehensive simulation environment for the development and testing of Connectionist models. Our goal in developing PDP++ has been to integrate several powerful software development and user interface tools into a general purpose simulation environment that is both user friendly and user extensible. The simulator is built in the C++ programming language, and incorporates a state of the art script interpreter with the full expressive power of C++. The graphical user interface is built with the Interviews toolkit, and allows full access to the data structures and processing modules out of which the simulator is built. We have constructed several useful graphical modules for easy interaction with the structure and the contents of neural networks, and we've made it possible to change and adapt many things. At the programming level, we have set things up in such a way as to make user extensions as painless as possible. The programmer creates new C++ objects, which might be new kinds of units or new kinds of processes; once compiled and linked into the simulator, these new objects can then be accessed and used like any other.

**RNS**-
RNS (Recurrent Network Simulator) is a simulator for recurrent neural networks. Regular neural networks are also supported. The program uses a derivative of the back-propagation algorithm, but also includes other (not that well tested) algorithms.

Features include

- freely choosable connections, no restrictions besides memory or CPU constraints
- delayed links for recurrent networks
- fixed values or thresholds can be specified for weights
- (recurrent) back-propagation, Hebb, differential Hebb, simulated annealing and more
- patterns can be specified with bits, floats, characters, or numbers, and random bit patterns with chosen Hamming distances can be generated for you
- user definable error functions
- output results can be used without modification as input

**Semantic Networks in Python**-
The semnet.py module defines several simple classes for building and using semantic networks. A semantic network is a way of representing knowledge, and it enables the program to do simple reasoning with very little effort on the part of the programmer.

The following classes are defined:

- **Entity**: This class represents a noun; it is something which can be related to other things, and about which you can store facts.
- **Relation**: A Relation is a type of relationship which may exist between two entities. One special relation, "IS_A", is predefined because it has special meaning (a sort of logical inheritance).
- **Fact**: A Fact is an assertion that a relationship exists between two entities.

With these three object types, you can very quickly define knowledge about a set of objects, and query them for logical conclusions.
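A sketch of what such classes can look like, with "IS_A" providing inheritance of facts (the names mirror the description above, but this is illustrative code, not necessarily semnet.py's exact API):

```python
class Entity:
    """A noun: something facts can be stored about."""
    def __init__(self, name):
        self.name = name

class Relation:
    """A type of relationship that may hold between two entities."""
    def __init__(self, name):
        self.name = name

class Fact:
    """An assertion that a relation holds between two entities."""
    store = []
    def __init__(self, subject, relation, obj):
        self.subject, self.relation, self.obj = subject, relation, obj
        Fact.store.append(self)

IS_A = Relation("IS_A")   # the special inheritance relation

def holds(subject, relation, obj):
    """True if the fact is stated directly, or inherited through IS_A links."""
    for f in Fact.store:
        if f.subject is subject and f.relation is relation and f.obj is obj:
            return True
    for f in Fact.store:  # inherit facts from anything the subject IS_A
        if f.subject is subject and f.relation is IS_A:
            if holds(f.obj, relation, obj):
                return True
    return False

canary, bird, wings = Entity("canary"), Entity("bird"), Entity("wings")
HAS = Relation("HAS")
Fact(canary, IS_A, bird)
Fact(bird, HAS, wings)
```

Querying `holds(canary, HAS, wings)` succeeds even though that fact was never stated directly: it is inherited through the IS_A link to `bird`.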

**Simbrain**-
- Web site: http://www.simbrain.net/

SIMBRAIN is a free tool for building, running, and analyzing neural networks (computer simulations of brain circuitry). Simbrain aims to be as visual and easy-to-use as possible. Unique features of Simbrain include its integrated "world components" and its ability to represent a network's state space. Simbrain is written in Java.

**SNNS**-
Stuttgart Neural Net Simulator. The simulator kernel is written in C and quite fast. It supports over 20 different network architectures, has 2D and 3D X-based graphical representations, the 2D GUI has an integrated network editor, and can generate a separate NN program in C. SNNS is very powerful, though a bit difficult to learn at first. To help with this it comes with example networks and tutorials for many of the architectures. ENZO, a supplementary system allows you to evolve your networks with genetic algorithms.

**SyntaxNet**-
SyntaxNet is an open-source neural network framework, implemented in TensorFlow, that provides a foundation for Natural Language Understanding (NLU) systems. SyntaxNet is a framework for what's known in academic circles as a syntactic parser, which is a key first component in many NLU systems. Given a sentence as input, it tags each word with a part-of-speech (POS) tag that describes the word's syntactic function, and it determines the syntactic relationships between words in the sentence, represented in the dependency parse tree. These syntactic relationships are directly related to the underlying meaning of the sentence in question.

**TOOLDIAG**-
- Web site: http://www.inf.ufes.br/~thomas/home/soft.html
- Alt site: http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/neural/systems/tooldiag/0.html

TOOLDIAG is a collection of methods for statistical pattern recognition. The main area of application is classification. The application area is limited to multidimensional continuous features, without any missing values. No symbolic features (attributes) are allowed. The program is implemented in the C programming language and has been tested in several computing environments.

**XNBC**-
- Web site: http://www.b3e.jussieu.fr/xnbc/

XNBC v8 is a software package for simulating biological neural networks, aimed at neuroscientists who want a user-friendly tool.

Four neuron models are available: three phenomenological models (xnbc, leaky integrator and conditional burster) and an ion-conductance based model. Inputs to the simulated neurons can be provided by experimental data stored in files, allowing the creation of "hybrid" networks.
