
Flint's C++ Deep Learning Framework


The deep learning framework is header-only and installed with the library. You can include the general functionality of the framework through the <flint/dl/flint_dl.hpp> header, which simply includes the other headers. The library makes heavy use of modern C++ features, preferring templates and concepts over object orientation and inheritance, since they are more flexible, comfortable and performant than complex inheritance hierarchies. As a user of the library - thanks to C++'s auto type deduction - this should not concern you too much, but if you want to write your own layers, optimizers, etc., you should become familiar with these features. Don't worry though: their usage in the library should be fairly intuitive.
Models dl/models.hpp, Trainer dl/trainer.hpp

A model is the abstraction of a complete neural network consisting of multiple layers. It provides functions to pass inputs through the complete network and to train it.
The file dl/models.hpp describes the concept of a model and implements the most common ones. For the training process, an additional class for data loading was introduced; it is contained in dl/trainer.hpp and described in this documentation too.

Layers dl/layers.hpp, Activations dl/activations.hpp

A layer represents a function application (a forward pass) that receives an input tensor and outputs its result. There are basically two types of layers: trainable ones (with parameters like weights, filters and biases) and untrainable ones (simple transformations of the input, like activation functions or dropout). The concept of a layer and derivable classes representing layers with weights (class Layer) and untrainable layers (class UntrainableLayer) are included in dl/layers.hpp.
Implementations of common deep learning layers are included in the folder dl/layers/ and cover fully connected layers, convolution layers, dropout layers and so on. The implementations of activation functions (dl/activations.hpp) are also included in this documentation, since they are just untrainable layers.

Optimizers dl/optimizers.hpp, Losses dl/losses.hpp

Optimizers receive a weight and the gradient of an error tensor with respect to that weight to optimize it. Each weight has its own optimizer; the concept and an implementation of a common one (Adam) are included in dl/optimizers.hpp. Because of this close relation, loss functions (contained in dl/losses.hpp) are covered in this documentation too.