Multilayer Perceptron

A machine learning project implementing a fully connected multilayer perceptron from scratch, without relying on external machine learning frameworks.

Command-line interface showcasing available options and usage information

The network supports forward propagation, backpropagation, and training via gradient-based optimization, including an implementation of the Adam optimizer for faster and more stable convergence.
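To illustrate the kind of update rule involved, here is a minimal, self-contained sketch of a single Adam step; this is an assumption about the general technique, not the project's actual code, and the function name `adam_step` and default hyperparameters are illustrative.

```python
import math

# Hedged sketch of one Adam update step (not the project's actual implementation).
def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return the updated parameter and moment estimates after one Adam step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Usage: minimize f(x) = x^2 (gradient 2x) starting from x = 5.0
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
```

The bias-corrected moments keep early steps well-scaled, which is where Adam's faster, more stable convergence comes from compared with plain gradient descent.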

Training and evaluation metrics produced by the multilayer perceptron

Multiple activation functions are available, including ReLU and sigmoid, allowing the model to be adapted to different classification scenarios and dataset characteristics.
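For reference, ReLU and sigmoid (with the derivatives backpropagation needs) can be sketched as follows; the function names here are illustrative assumptions, not the project's API.

```python
import numpy as np

# Hedged sketch of the two activation functions and their derivatives.
def relu(x):
    return np.maximum(0.0, x)              # zero for negative inputs, identity otherwise

def relu_grad(x):
    return (x > 0).astype(float)           # derivative: 0 or 1 elementwise

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # squashes inputs into (0, 1)

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)                   # convenient closed form for backprop
```

ReLU tends to suit deeper hidden layers, while sigmoid is a natural fit for binary classification outputs.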

Loss curve showing convergence during training

The architecture and training parameters are fully configurable through a dedicated configuration file, enabling control over network depth, layer sizes, learning rates, batch sizes, and activation functions without modifying the code.
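A configuration of this kind might look like the following sketch, shown here as a Python mapping; the actual file format and key names used by the project are not specified, so every name below is a hypothetical placeholder.

```python
# Hypothetical configuration sketch: key names and values are illustrative only,
# not the project's actual configuration schema.
config = {
    "layers": [784, 128, 64, 2],                 # input size, hidden sizes, output size
    "activations": ["relu", "relu", "sigmoid"],  # one activation per weighted layer
    "learning_rate": 0.001,
    "batch_size": 32,
    "epochs": 100,
    "optimizer": "adam",
}
```

Keeping such settings in one place lets network depth, layer sizes, and training hyperparameters be changed between runs without touching the code.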

Accuracy curve showing model performance over training epochs