A machine learning project implementing a fully connected multilayer perceptron (MLP) from scratch, without relying on external machine learning frameworks.
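For illustration, a minimal sketch of a forward pass through such a network, assuming NumPy for array math (a numerical library, not an ML framework); the function and variable names here are hypothetical, not the project's actual API:

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass through a fully connected network (hypothetical API).

    weights, biases: lists of per-layer parameter arrays. ReLU is assumed
    on hidden layers; the final layer returns raw scores.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ W + b)   # affine transform + ReLU
    return a @ weights[-1] + biases[-1]  # output layer: raw scores
```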
The network supports forward propagation, backpropagation, and training via gradient-based optimization, including an implementation of the Adam optimizer for faster and more stable convergence.
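The Adam update itself fits in a few lines. The sketch below shows the standard rule from Kingma & Ba (2015); the function name, signature, and default hyperparameters are assumptions and may differ from the project's implementation:

```python
import numpy as np

def adam_update(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step. m and v are running moment estimates; t is the
    1-based step count (hypothetical helper, not the project's API)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

The bias-corrected moments are what give Adam its stability early in training, when the running averages are still dominated by their zero initialization.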
Multiple activation functions are available, including ReLU and sigmoid, so the network can be adapted to different classification tasks and data characteristics.
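A sketch of these two activations and their derivatives (the derivatives are what backpropagation consumes); names are illustrative assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_derivative(x):
    # 1 where the input is positive, 0 elsewhere
    return (x > 0).astype(x.dtype)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # Uses the identity sigma'(x) = sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)
```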
The architecture and training parameters are fully configurable through a dedicated configuration file, enabling control over network depth, layer sizes, learning rates, batch sizes, and activation functions without modifying the code.
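As a sketch of what such a configuration might look like, assuming a JSON file (the project's actual format and key names may differ):

```python
import json

# Hypothetical config; the real schema and key names are assumptions.
config = json.loads("""
{
    "layer_sizes": [784, 128, 64, 10],
    "activations": ["relu", "relu", "sigmoid"],
    "learning_rate": 0.001,
    "batch_size": 32,
    "epochs": 20
}
""")

# Network depth and layer widths come straight from the config,
# so changing the architecture requires no code changes.
depth = len(config["layer_sizes"]) - 1
print(f"{depth} layers, sizes {config['layer_sizes']}")
```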