A Scala implementation of Micrograd, written in a pure, functional style. This project was inspired by and built while following Andrej Karpathy's YouTube video *Spelled-out intro to neural networks*.
Inside, you'll discover:
- A domain-specific language for crafting mathematical expressions.
- A pure functional backpropagation engine (sketched briefly after this list).
- The foundational elements for a small neural network library.
- Utilities for visualizing expression graphs and observing neuron training.
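To give a flavor of what "pure functional backpropagation" means here, below is a minimal, self-contained sketch. The types and names are illustrative, not Sicrograd's actual API: gradients are collected into an immutable Map by structural recursion over the expression, rather than mutated onto graph nodes as in imperative Micrograd ports.

```scala
object Autodiff {
  sealed trait Expr
  final case class Leaf(name: String, value: Double) extends Expr
  final case class Add(l: Expr, r: Expr) extends Expr
  final case class Mul(l: Expr, r: Expr) extends Expr

  def eval(e: Expr): Double = e match {
    case Leaf(_, v) => v
    case Add(l, r)  => eval(l) + eval(r)
    case Mul(l, r)  => eval(l) * eval(r)
  }

  // d(root)/d(leaf) for every named leaf, where `seed` is d(root)/d(e)
  // flowing in from above. (eval is recomputed for clarity, not speed.)
  def grad(e: Expr, seed: Double = 1.0): Map[String, Double] = e match {
    case Leaf(name, _) => Map(name -> seed)
    case Add(l, r)     => merge(grad(l, seed), grad(r, seed))
    case Mul(l, r)     => merge(grad(l, seed * eval(r)), grad(r, seed * eval(l)))
  }

  // Sum gradient contributions when the same leaf appears in both subtrees.
  private def merge(a: Map[String, Double], b: Map[String, Double]): Map[String, Double] =
    b.foldLeft(a) { case (acc, (k, v)) => acc.updated(k, acc.getOrElse(k, 0.0) + v) }
}

// d(x * x)/dx at x = 3 is 2x = 6:
// Autodiff.grad(Autodiff.Mul(Autodiff.Leaf("x", 3), Autodiff.Leaf("x", 3))) == Map("x" -> 6.0)
```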
To get started, you'll need:
- SBT: For compiling and running the Scala code.
- Graphviz: For rendering the mathematical expression visualizations.
Once you have SBT and Graphviz installed, you can dive into the demos:
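For example, from the project root (assuming the demos are ordinary sbt main classes, `sbt run` will prompt you to pick one when several exist):

```
sbt run
```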
Sicrograd's DSL allows you to intuitively define variables (like weights and biases) and combine them to construct mathematical expressions. These expressions can then be visualized.
Here’s a taste:
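Below is a sketch in the spirit of that DSL (the real Sicrograd names may differ): operators on expression nodes assemble an immutable graph, and a small helper renders it as Graphviz DOT text that `dot -Tpng` can turn into a diagram.

```scala
object DslDemo extends App {
  sealed trait Expr {
    def +(that: Expr): Expr = Add(this, that)
    def *(that: Expr): Expr = Mul(this, that)
  }
  final case class Var(name: String, value: Double) extends Expr
  final case class Add(l: Expr, r: Expr) extends Expr
  final case class Mul(l: Expr, r: Expr) extends Expr

  // Render the expression tree as Graphviz DOT text.
  def dot(root: Expr): String = {
    def id(e: Expr): String = "n" + System.identityHashCode(e)
    def node(e: Expr): String = e match {
      case Var(n, v) => s"""${id(e)} [label="$n = $v"];"""
      case _: Add    => s"""${id(e)} [label="+"];"""
      case _: Mul    => s"""${id(e)} [label="*"];"""
    }
    def walk(e: Expr): List[String] = e match {
      case _: Var => List(node(e))
      case Add(l, r) =>
        node(e) :: s"${id(e)} -> ${id(l)};" :: s"${id(e)} -> ${id(r)};" :: (walk(l) ++ walk(r))
      case Mul(l, r) =>
        node(e) :: s"${id(e)} -> ${id(l)};" :: s"${id(e)} -> ${id(r)};" :: (walk(l) ++ walk(r))
    }
    walk(root).mkString("digraph G {\n  ", "\n  ", "\n}")
  }

  // w1*x1 + w2*x2 + b: a single neuron's pre-activation
  val n = Var("w1", -3.0) * Var("x1", 2.0) + Var("w2", 1.0) * Var("x2", 0.0) + Var("b", 6.88)
  println(dot(n))
}
```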
Rendering that DOT output with Graphviz yields a diagram of the expression graph:
Sicrograd can train simple Multi-Layer Perceptrons (MLPs). The repository includes demos for tasks like function approximation and binary classification.
The SinFunctionDemo trains a single-layer MLP to approximate the sin function on the interval [0, 2π].
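Generating that training data might look roughly like this (an illustrative sketch, not the demo's actual code):

```scala
import scala.util.Random

// Sample inputs uniformly from [0, 2π]; the target is sin(x).
val rng = new Random(42)
val trainingData: Seq[(Double, Double)] =
  Seq.fill(100) {
    val x = rng.nextDouble() * 2 * math.Pi
    (x, math.sin(x))
  }
```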
The network is trained on randomly generated data points:
And here's how the network's output compares to the actual sin function after training:
The MoonDataSetDemo provides a full example of training an MLP as a binary classifier on the classic "moon" dataset.
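For reference, a two-moons generator can be sketched like this (hypothetical code; the demo may produce or load its data differently): two interleaving half-circles with Gaussian noise, labelled 0 and 1 for the binary classifier.

```scala
import scala.util.Random

def makeMoons(n: Int, noise: Double, rng: Random): Seq[(Double, Double, Int)] =
  (0 until n).map { i =>
    val t = rng.nextDouble() * math.Pi
    val (x, y, label) =
      if (i % 2 == 0) (math.cos(t), math.sin(t), 0)          // upper moon
      else (1.0 - math.cos(t), 0.5 - math.sin(t), 1)         // lower moon
    (x + rng.nextGaussian() * noise, y + rng.nextGaussian() * noise, label)
  }
```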
The data used to train the neural network are shown below:
Here is the achieved decision boundary on the moon dataset:

For a peek into the learning process, the VisualizeNeuronTraining class demonstrates how a neuron's weights are updated by gradient descent. While the GIF's aesthetics are a work in progress, it effectively illustrates the iterative nature of training.
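Conceptually, each frame of that animation is one gradient-descent update. Here is a sketch of such an update as a pure function (hypothetical signature, not necessarily Sicrograd's API):

```scala
// One pure gradient-descent step: new weights are returned, nothing is mutated.
def step(weights: Vector[Double], grads: Vector[Double], lr: Double): Vector[Double] =
  weights.zip(grads).map { case (w, g) => w - lr * g }
```

Iterating this step, with gradients supplied by the backpropagation engine, is exactly the loop the animation depicts.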