
cpp dlib dnn cnn


c++ dlib dnn cnn - Posted on Jun 8, 2024 - See http://dlib.net


CNN - convolutional neural networks in c++ dlib.

c++ dlib deep neural network, machine learning, convolutional neural network.

Neural Network Layers

CNN Layers

In c++ dlib, layers are composed as SUBNETs: each layer takes another layer as its subnet, and the resulting layer can itself be used as the subnet of yet another layer, as the sketch below shows.
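For example, a minimal sketch of this nesting (the layer sizes are arbitrary, chosen only for illustration):

// requires <dlib/dnn.h>
// relu's subnet is con, and con's subnet is the input layer.
using tiny_net =
	dlib::relu<
		dlib::con<8, 3, 3, 1, 1,
			dlib::input<dlib::matrix<dlib::rgb_pixel>>
		>
	>;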

Images or samples

Images or samples are stored in dlib::matrix.

using image_type = dlib::matrix<dlib::rgb_pixel>;

using sample_type = dlib::matrix<unsigned long>;
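A minimal sketch of filling such a matrix from an image file (the file name and helper are hypothetical; requires <dlib/image_io.h>):

#include <dlib/image_io.h>
#include <string>

image_type load_sample(const std::string& file_name)
{
	image_type img;
	dlib::load_image(img, file_name);	// decodes e.g. PNG or JPEG into the matrix
	return img;
}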

Input Layer

The input layer receives the image data; it is the first layer of a dlib network.

using input_layer = dlib::input<image_type>;

Convolutional Layer

Algorithm: (f * g)(a) = Sum_x [ f(x) g(a-x) ]

using con_layer = dlib::con<
	2,	// num_filters
	4,	// nr: filter rows
	4,	// nc: filter columns
	4,	// stride_y
	4,	// stride_x
	input_layer	// subnet
>;

In this example, input_layer is used as the subnet of con_layer.
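To make the formula above concrete, here is a minimal 1-D sketch in plain C++ (not part of dlib):

#include <cstddef>
#include <vector>

// Full discrete convolution: out[a] = Sum_x f[x] * g[a-x].
std::vector<float> convolve(const std::vector<float>& f, const std::vector<float>& g)
{
	std::vector<float> out(f.size() + g.size() - 1, 0.0f);
	for (std::size_t a = 0; a < out.size(); ++a)
		for (std::size_t x = 0; x < f.size(); ++x)
			if (x <= a && a - x < g.size())
				out[a] += f[x] * g[a - x];
	return out;
}

dlib::con applies the same idea in 2-D: each of the num_filters filters slides over the input with the given strides.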

ReLU Layer

Algorithm: Max (0, x)

using relu_layer = dlib::relu<con_layer>;

In this example, con_layer is used as the subnet of relu_layer.

Pooling Layer

Max Pool

Scan each window of the input and take the maximum value in that window.

AVG Pool

Scan each window of the input and take the average value in that window.

using max_pool_layer = dlib::max_pool<
	3,	// nr: window rows
	3,	// nc: window columns
	3,	// stride_y
	3,	// stride_x
	relu_layer	// subnet
>;

In this example, relu_layer is used as the subnet of max_pool_layer.
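To illustrate what max pooling does, a minimal sketch in plain C++ (not part of dlib): 2x2 max pooling with stride 2 over a 4x4 single-channel image.

#include <algorithm>

void max_pool_2x2(const float (&img)[4][4], float (&pooled)[2][2])
{
	for (int r = 0; r < 2; ++r)
		for (int c = 0; c < 2; ++c)
		{
			float m = img[2*r][2*c];
			for (int dr = 0; dr < 2; ++dr)
				for (int dc = 0; dc < 2; ++dc)
					m = std::max(m, img[2*r + dr][2*c + dc]);
			pooled[r][c] = m;	// the maximum of each 2x2 window
		}
}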

Fully Connected Layer

Produces a fixed number of outputs; used for classification or regression in the final layers.

using fc_layer = dlib::fc<
	2,	// num_outputs
	max_pool_layer	// subnet
>;

In this example, max_pool_layer is used as the subnet of fc_layer.

Loss Layer

Computes the training loss. The loss layer is the last layer of a dlib network.

dlib::loss_multiclass_log

dlib::loss_multiclass_log: multiclass logistic regression loss layer.

using loss_layer = dlib::loss_multiclass_log<
	fc_layer	// subnet
>;

In this example, fc_layer is used as the subnet of loss_layer.
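For example, a minimal sketch (layer sizes are arbitrary) of a complete classification net built on this loss layer; once trained, calling the net on a sample returns the predicted class label:

using tiny_cls_net = dlib::loss_multiclass_log<
	dlib::fc<10,		// 10 classes
	dlib::relu<
	dlib::fc<32,
	dlib::input<dlib::matrix<unsigned char>>>>>>;

// tiny_cls_net net;
// ... train with dlib::dnn_trainer ...
// unsigned long predicted = net(sample);	// sample is a dlib::matrix<unsigned char>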

dlib::loss_mmod

dlib::loss_mmod: Max-Margin Object Detection (MMOD) loss layer.

using loss_layer = dlib::loss_mmod<
	con_layer	// subnet
>;

In this example, con_layer is used as the subnet of loss_layer.

An fc layer can be used as the subnet of dlib::loss_multiclass_log, but it cannot be used as the subnet of dlib::loss_mmod: loss_mmod interprets its subnet's output as a spatial map of detection scores over image positions, which an fc layer does not produce.

cpp example

c++ dlib cnn example.

cpp code

File: dnn.cpp

#include <dlib/dnn.h>
#include <dlib/data_io.h>
#include <utxcpp/core.hpp>

namespace my_dnn
{

using image_type = dlib::matrix<dlib::rgb_pixel>;

using crap_net_type = dlib::loss_mmod<
	dlib::con<
		1,		// num_filters
		1,1,1,1,	// nr, nc, stride_y, stride_x
		dlib::max_pool<
			2,2,2,2,	// nr, nc, stride_y, stride_x
			dlib::relu<
				dlib::con<
					3,		// num_filters
					4,2,4,2,	// nr, nc, stride_y, stride_x
					dlib::input<my_dnn::image_type>
				>
			>
		>
	>
>;

using label_type = typename my_dnn::crap_net_type::training_label_type;

}	// namespace my_dnn

int main()
{
	utx::same_assert<my_dnn::label_type, std::vector<dlib::mmod_rect>>();

	std::vector<my_dnn::image_type> images;
	std::vector<my_dnn::label_type> labels;
	dlib::load_image_dataset(images, labels, "path/to/images/a.xml");

	dlib::mmod_options options{labels, 60, 50};	// detection window options: target size 60, minimum target size 50 (pixels)
	my_dnn::crap_net_type net{options};
	dlib::dnn_trainer<my_dnn::crap_net_type> trainer{net};

	trainer.set_synchronization_file("./training_data.dat");

	trainer.train(images, labels);

	trainer.get_net();	// wait for outstanding updates and sync the trained weights into net

	utx::print("trained net=>\n", net);
}
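After training, the same net object can be run on an image to get detections; a minimal sketch of what could be added at the end of main above:

// For loss_mmod, calling the net on an image returns a std::vector<dlib::mmod_rect>.
auto detections = net(images[0]);
utx::print("num detections:", detections.size());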

build with b2 build

jamfile (jamroot)

lib dlib
	:
	:
		<name>dlib
;

exe dnn
	:
		dnn.cpp
	:
		<cxxstd>23
		<library>dlib
		<linkflags>"-lpng -ljpeg"
;

See Also

c++ dlib

utx::same_assert

utx::print

B2 Build


