
Micrograd Autograd Engine

In this series of notebooks, I build a bare-bones neural network (a multi-layer perceptron) in Python, complete with forward pass, backpropagation, and stochastic gradient descent. To do this, I re-create Andrej Karpathy's YouTube lecture, "The Spelled-out Intro to Neural Networks and Backpropagation: Building Micrograd," in more byte-sized pieces, complete with Andrej's anecdotes and my own. I separate concepts between notebooks so you can revisit the code at multiple stages of the lecture, with a different focus in each notebook. The result is a more step-by-step, readable, workbook-like path through the concepts required to build Micrograd.

We will build Micrograd in about 100 lines of Python, then build a small library on top of it for neurons, layers, and MLPs! Agenda, concepts, and important links are below.
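First, a preview of where we're headed: a minimal sketch of the Value object the notebooks build up, stripped down to just + and * for brevity (micrograd's real class also supports pow, relu/tanh, and friends):

```python
class Value:
    """A scalar with a gradient, remembering the ops that produced it."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0                  # d(loss)/d(this value), set by backward()
        self._backward = lambda: None    # how to push out.grad into the children
        self._prev = set(_children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply the chain rule in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(-3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)   # -2.0 (= b + 1) and 2.0 (= a)
```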

Agenda & Concepts:

  • Building micrograd, an autograd engine that can evaluate the gradient of a loss function with respect to the weights of a neural network
  • Neural networks as mathematical expressions
  • Calculating derivatives to update the weights using the chain rule
  • Building a topological graph of nodes to illustrate this process
  • Elements of a neural network, from a single neuron to a multi-layer perceptron
  • The flow of training a neural network: forward pass, backward pass, gradient descent, optimization (see the training-loop sketch after this list)
  • Common issues
  • Building a binary classifier that uses micrograd at the end
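For the training-flow bullet above, here is a sketch of the loop the notebooks build toward, assuming micrograd is installed and using its nn.MLP interface, with the toy dataset from the lecture:

```python
from micrograd.nn import MLP

model = MLP(3, [4, 4, 1])        # 3 inputs -> two hidden layers of 4 -> 1 output
xs = [[2.0, 3.0, -1.0], [3.0, -1.0, 0.5], [0.5, 1.0, 1.0], [1.0, 1.0, -1.0]]
ys = [1.0, -1.0, -1.0, 1.0]      # desired targets

for step in range(20):
    # forward pass: predictions and squared error loss
    ypred = [model(x) for x in xs]
    loss = sum((yp - yt) ** 2 for yp, yt in zip(ypred, ys))

    # backward pass: zero stale gradients, then backpropagate
    for p in model.parameters():
        p.grad = 0.0
    loss.backward()

    # gradient descent: nudge each weight against its gradient
    for p in model.parameters():
        p.data += -0.05 * p.grad

    print(step, loss.data)
```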

Notebooks (Meant to be completed in order)

Notebook 1: Derivatives

Open In Colab View in nbviewer
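To preview the idea with the same function Andrej uses in the lecture: the derivative as rise over run with a tiny step h:

```python
# Numerical derivative of f at x = 3.0, using a small nudge h
def f(x):
    return 3 * x**2 - 4 * x + 5

h = 0.0001
x = 3.0
print((f(x + h) - f(x)) / h)   # ~14.0003; the exact derivative 6x - 4 gives 14 at x = 3
```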

Notebook 2: Chain Rule

Open In Colab View in nbviewer
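The notebook's core idea in a few lines: when L depends on x only through an intermediate y, the local derivatives multiply, dL/dx = (dL/dy)(dy/dx). A quick numeric check (the functions here are illustrative):

```python
def y(x):
    return 3 * x        # dy/dx = 3

def L(y_):
    return y_ ** 2      # dL/dy = 2*y

x, h = 2.0, 1e-6
analytic = 2 * y(x) * 3                   # chain rule: dL/dy * dy/dx
numeric = (L(y(x + h)) - L(y(x))) / h     # direct numerical estimate
print(analytic, numeric)                  # 36.0 and ~36.000009
```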

Notebook 3: Backpropagation

Open In Colab View in nbviewer
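Before automating it, the lecture backpropagates by hand; a sketch through d = a*b + c, with the lecture's values:

```python
a, b, c = 2.0, -3.0, 10.0
e = a * b                  # intermediate node, -6.0
d = e + c                  # output node, 4.0

# walk the graph backwards, multiplying local derivatives (chain rule)
d_grad = 1.0               # dd/dd
c_grad = 1.0 * d_grad      # '+' routes the gradient through unchanged
e_grad = 1.0 * d_grad
a_grad = b * e_grad        # '*' scales by the other operand
b_grad = a * e_grad
print(a_grad, b_grad, c_grad)   # -3.0, 2.0, 1.0
```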

Notebook 4: Activation

Open In Colab View in nbviewer
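The neuron this notebook squashes through tanh, with the lecture's numbers (the bias is chosen there so the gradients come out clean):

```python
import math

x1, x2 = 2.0, 0.0                # inputs
w1, w2 = -3.0, 1.0               # weights
b = 6.8813735870195432           # bias, as picked in the lecture

n = w1 * x1 + w2 * x2 + b        # raw pre-activation
out = math.tanh(n)
print(out)                       # ~0.7071
print(1 - out ** 2)              # local derivative of tanh: ~0.5
```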

Notebook 5: PyTorch

Open In Colab View in nbviewer
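This notebook checks micrograd's answers against PyTorch's autograd; the same neuron in torch looks like this (assuming PyTorch is installed):

```python
import torch

x1 = torch.tensor(2.0, requires_grad=True)
w1 = torch.tensor(-3.0, requires_grad=True)
x2 = torch.tensor(0.0, requires_grad=True)
w2 = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(6.8813735870195432, requires_grad=True)

out = torch.tanh(w1 * x1 + w2 * x2 + b)
out.backward()
print(out.item())                          # ~0.7071, matching micrograd
print(x1.grad.item(), w1.grad.item())      # ~-1.5 and ~1.0, matching micrograd
```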

Notebook 6: Gradient Descent

Open In Colab View in nbviewer
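The update rule itself is tiny: step each parameter a little against its gradient. A self-contained example on one scalar, minimizing (x - 3)^2, whose gradient is 2(x - 3):

```python
x = 0.0
learning_rate = 0.1
for step in range(25):
    grad = 2 * (x - 3)          # d/dx of (x - 3)**2
    x += -learning_rate * grad  # step downhill
print(x)                        # ~2.99, converging on the minimum at 3
```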

Notebook 7: Final

Open In Colab View in nbviewer
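By this point everything is assembled; using the finished library looks like this (micrograd's nn interface):

```python
from micrograd.nn import MLP

model = MLP(3, [4, 4, 1])          # 3 inputs, two hidden layers of 4, 1 output
print(model([2.0, 3.0, -1.0]))     # a single Value: the network's output
print(len(model.parameters()))     # 41 weights and biases in total
```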

Notebook 8: Binary Classifier Exercise

Open In Colab View in nbviewer
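The exercise trains a binary classifier on labels of ±1 using an SVM-style max-margin (hinge) loss, sketched here on plain floats (with micrograd Values you would use relu() in place of max):

```python
def hinge_loss(scores, labels):
    # a sample stops contributing once it is on the correct side of the
    # margin, i.e. label * score >= 1
    losses = [max(0.0, 1 - y * s) for s, y in zip(scores, labels)]
    return sum(losses) / len(losses)

print(hinge_loss([0.9, -2.0, 0.1], [1, -1, -1]))   # (0.1 + 0.0 + 1.1) / 3 = 0.4
```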

Andrej's code/GitHub:

  • Open In Colab Code Part 1 of Video
  • Open In Colab Code Part 2 of Video
  • Open In Colab Solutions

Andrej's Video: Watch on YouTube
