A small walk-through to show why ReLU is non-linear!
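As a quick illustration of the point above: a linear map must satisfy additivity and homogeneity, and ReLU fails both. The snippet below is a minimal NumPy sketch of that check (my own illustration, not the walk-through from the repository):

    import numpy as np

    def relu(x):
        # ReLU: max(0, x), applied element-wise
        return np.maximum(0, x)

    a, b = np.array([2.0]), np.array([-3.0])

    # Additivity fails: relu(a + b) != relu(a) + relu(b)
    print(relu(a + b))        # [0.]
    print(relu(a) + relu(b))  # [2.]

    # Homogeneity fails for negative scaling: relu(-a) != -relu(a)
    print(relu(-a))           # [0.]
    print(-relu(a))           # [-2.]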
Towards a regularity theory for ReLU networks (construction of approximating networks, ReLU derivative at zero, theory)
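On the "ReLU derivative at zero" point: ReLU is not differentiable at zero, so any value in its subdifferential can serve as the derivative there. A standard summary (my own, not taken from the repository) is:

    \operatorname{ReLU}(x) = \max(0, x), \qquad
    \operatorname{ReLU}'(x) =
    \begin{cases}
      0 & x < 0 \\
      1 & x > 0
    \end{cases}, \qquad
    \partial\operatorname{ReLU}(0) = [0, 1]

Most frameworks adopt the convention ReLU'(0) = 0.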
Backward pass of the ReLU activation function for a neural network.
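For reference, a ReLU backward pass typically multiplies the incoming gradient by the mask of positive forward inputs. The NumPy sketch below shows the general technique; the function names relu_forward and relu_backward are my own, not this repository's API:

    import numpy as np

    def relu_forward(x):
        # Cache the input so the backward pass can rebuild the mask
        return np.maximum(0, x), x

    def relu_backward(grad_output, cache):
        # Gradient flows only where the forward input was positive;
        # x == 0 uses the common convention of a zero derivative
        x = cache
        return grad_output * (x > 0)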
Implemented the back-propagation algorithm for a neural network from scratch using Tanh and ReLU derivatives, and performed experiments for learning purposes.
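The two derivatives such an implementation needs have simple closed forms. The sketch below (my own illustration, with hypothetical names tanh_grad and relu_grad, not code from the repository) shows them as they usually enter the hidden-layer error term of back-propagation:

    import numpy as np

    def tanh_grad(a):
        # a is the activated output tanh(z); d/dz tanh(z) = 1 - tanh(z)**2
        return 1.0 - a ** 2

    def relu_grad(z):
        # Derivative of ReLU with respect to its pre-activation z
        return (z > 0).astype(z.dtype)

    # In the backward pass, the hidden-layer error term is
    #   delta_hidden = (delta_next @ W_next.T) * tanh_grad(a_hidden)
    # with relu_grad(z_hidden) substituted when the layer uses ReLU.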