Deep Learning

Some datasets contain highly complex patterns that cannot be effectively captured by classical machine learning models. Deep Neural Networks (DNNs) are powerful and flexible models designed to handle such intricate data structures. In this chapter, we review three well-known types of DNNs, namely the Multi-Layer Perceptron (MLP) [1-3], the Convolutional Neural Network (CNN) [4], and the Recurrent Neural Network (RNN) [5].
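
The following is a minimal illustrative sketch of the three architectures using PyTorch's torch.nn module; all layer sizes, input dimensions, and channel counts are placeholder values chosen only for the example and are not prescribed by this chapter.

```python
import torch
import torch.nn as nn

# Multi-Layer Perceptron: stacked fully connected layers with nonlinear activations.
mlp = nn.Sequential(
    nn.Linear(20, 64),   # 20 input features -> 64 hidden units (arbitrary sizes)
    nn.ReLU(),
    nn.Linear(64, 2),    # 2 output classes
)

# Convolutional Neural Network: convolution and pooling followed by a linear classifier.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1-channel image -> 8 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                            # halve the spatial resolution
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 2),                  # assumes 28x28 input images
)

# Recurrent Neural Network: processes a sequence step by step with a hidden state.
rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)

# Quick shape check on random data.
print(mlp(torch.randn(4, 20)).shape)          # torch.Size([4, 2])
print(cnn(torch.randn(4, 1, 28, 28)).shape)   # torch.Size([4, 2])
out, h = rnn(torch.randn(4, 15, 10))          # batch of 4 sequences of length 15
print(out.shape)                              # torch.Size([4, 15, 32])
```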

References

[1] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.

[2] GeeksforGeeks, “Multi-Layer Perceptron Learning in TensorFlow,” GeeksforGeeks, accessed: September 30, 2022, https://www.geeksforgeeks.org/deep-learning/multi-layer-perceptron-learning-in-tensorflow/.

[3] Adrian Tam, “Building Multilayer Perceptron Models in PyTorch,” Machine Learning Mastery, accessed: April 8, 2023, https://machinelearningmastery.com/building-multilayer-perceptron-models-in-pytorch/.

[4] Zoumana Keita, “An introduction to convolutional neural networks (CNNs),” Datacamp, accessed: November 14, 2023, https://www.datacamp.com/tutorial/introduction-to-convolutional-neural-networks-cnns.

[5] Cole Stryker, “What is a recurrent neural network (RNN)?,” International Business Machines (IBM), accessed: 2025, https://www.ibm.com/think/topics/recurrent-neural-networks.