Deep Learning Techniques

Expert-defined terms from the Professional Certificate in Artificial Intelligence for Quality Management Pioneers course at London School of Planning and Management. Free to read, free to share, paired with a globally recognised certification pathway.

**Activation function:** A function applied to the output of a neural network layer to introduce non-linearity, enabling the model to learn complex patterns. Common activation functions include the sigmoid, tanh, and ReLU.
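The three activation functions named above can be sketched in a few lines of NumPy (the function names here are illustrative, not part of any particular library):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged, zeroes out negatives
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))   # values between 0 and 1
print(np.tanh(x))   # values between -1 and 1
print(relu(x))      # [0. 0. 2.]
```

Each of these introduces a non-linearity: composing purely linear layers would collapse to a single linear map, so without an activation function depth adds no expressive power.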

**Artificial Neural Network (ANN):** A computing system inspired by the human brain's interconnected neurons, designed to learn and solve problems through data processing and pattern recognition.

**Backpropagation:** A training algorithm for artificial neural networks that calculates the gradient of the loss function with respect to each weight by propagating the error backward through the network layers.
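A minimal sketch of the idea, for a single-weight "network" y = sigmoid(w·x) with squared-error loss: the backward pass multiplies the local derivatives of each step (the chain rule), and the result can be checked against a numerical gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy setup: input x, target t, one weight w (values chosen for illustration)
x, t, w = 1.5, 1.0, 0.4

# Forward pass: y = sigmoid(w * x), loss L = (y - t)^2
z = w * x
y = sigmoid(z)

# Backward pass: propagate the error back through each step (chain rule)
dL_dy = 2.0 * (y - t)           # derivative of the squared error
dy_dz = y * (1.0 - y)           # derivative of the sigmoid
dz_dw = x                       # derivative of the linear step
dL_dw = dL_dy * dy_dz * dz_dw   # gradient of the loss w.r.t. the weight

# Sanity check with a central-difference numerical gradient
eps = 1e-6
num = ((sigmoid((w + eps) * x) - t) ** 2
       - (sigmoid((w - eps) * x) - t) ** 2) / (2 * eps)
print(dL_dw, num)  # the two values agree closely
```

In a real network the same multiplication of local derivatives is carried out layer by layer, reusing intermediate results from the forward pass.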

**Convolutional Neural Network (CNN):** A type of deep learning model primarily used for image analysis and processing, characterized by convolutional and pooling layers that extract local features.

**Convolutional layer:** A layer in a CNN that applies a set of learnable filters to the input data, performing a convolution operation to extract local features.
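The core operation can be sketched as a "valid" 2-D cross-correlation in plain NumPy (which is what most deep learning libraries actually compute and call convolution; `conv2d` here is an illustrative name, not a library function):

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over every position where it fits entirely
    # inside the image ("valid" mode) and take the elementwise
    # product-and-sum at each position.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_filter = np.array([[1.0, -1.0]])  # responds to horizontal intensity changes
print(conv2d(image, edge_filter))
```

In a trained CNN the filter values are not hand-written like `edge_filter` above; they are learned by gradient descent, and each layer applies many filters in parallel.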

**Deep Learning (DL):** A subset of machine learning based on artificial neural networks with multiple layers, enabling the learning of complex patterns and representations from data.

**Dropout:** A regularization technique for deep learning models that randomly drops a specified percentage of neurons during training, preventing overfitting and improving generalization.

**Epoch:** A complete pass through the entire training dataset during deep learning model training.

**Fully Connected Layer (FCL):** A layer in a neural network where each neuron is connected to every neuron in the preceding layer, typically used in the final layers for classification tasks.

**Gradient Descent:** An optimization algorithm used to minimize the loss function in deep learning models by iteratively adjusting the weights in the direction of the negative gradient.
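The update rule is easiest to see on a one-dimensional toy problem, minimising f(w) = (w − 3)², whose gradient is f′(w) = 2(w − 3) (the learning rate and step count here are arbitrary illustrative choices):

```python
# Minimise f(w) = (w - 3)^2 by repeatedly stepping in the
# direction of the negative gradient.
w = 0.0
learning_rate = 0.1

for step in range(100):
    gradient = 2.0 * (w - 3.0)       # f'(w)
    w -= learning_rate * gradient    # the gradient descent update

print(w)  # converges towards the minimum at w = 3
```

Training a deep network applies the same update to every weight simultaneously, with the gradients supplied by backpropagation; in practice the gradient is usually estimated on small random batches of data (stochastic or mini-batch gradient descent).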

**Hyperparameter:** A configuration variable in a deep learning model, such as learning rate, batch size, or number of layers, that is set before training and not learned from the data.

**Long Short-Term Memory (LSTM):** A type of recurrent neural network (RNN) architecture designed to handle long-term dependencies in sequential data, utilizing specialized memory cells and gate mechanisms.

**Loss function:** A mathematical function that quantifies the difference between the predicted output and the actual output for a given input, measuring the performance of a deep learning model.
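Mean squared error, a common loss for regression, is a concrete example (the values below are illustrative):

```python
import numpy as np

def mse(predicted, actual):
    # Mean squared error: the average squared difference between
    # predictions and targets; lower is better, 0 is a perfect fit.
    return np.mean((predicted - actual) ** 2)

predicted = np.array([2.5, 0.0, 2.0])
actual    = np.array([3.0, -0.5, 2.0])
print(mse(predicted, actual))  # (0.25 + 0.25 + 0.0) / 3 ≈ 0.167
```

Different tasks use different losses: squared error for regression, cross-entropy for classification; in every case training means minimising this number with gradient descent.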

**Overfitting:** A situation in deep learning where a model learns the training data too well, capturing noise and irrelevant patterns, resulting in poor generalization to unseen data.

**Pooling layer:** A layer in a CNN that reduces the spatial dimensions of the input data, preserving the most important features and reducing computational complexity.
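Max pooling, the most common variant, can be sketched in NumPy (`max_pool` is an illustrative name; it keeps the largest value in each non-overlapping window):

```python
import numpy as np

def max_pool(x, size=2):
    # Non-overlapping max pooling: keep the largest value in each
    # size x size window, shrinking each spatial dimension by `size`.
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]       # trim so windows tile exactly
    x = x.reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 8, 3, 2],
              [7, 6, 1, 0]], dtype=float)
print(max_pool(x))  # [[4. 8.] [9. 3.]]
```

Halving each spatial dimension quarters the work for the layers that follow, and keeping only the strongest response in each window makes the features slightly translation-tolerant.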

**Recurrent Neural Network (RNN):** A type of deep learning model designed for sequential data analysis, where connections between neurons form directed cycles, enabling the model to maintain an internal state or memory of previous inputs.

**Regularization:** A technique used to prevent overfitting in deep learning models by adding a penalty term to the loss function, encouraging the model to have simpler weights and avoid learning noise in the training data.
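The most common penalty term, L2 regularization (weight decay), is a one-liner; the function name and the strength `lam` below are illustrative:

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam=0.01):
    # Total loss = fit-to-data term + lam * sum of squared weights.
    # The penalty grows with weight magnitude, so minimising the total
    # loss pulls the weights toward smaller, simpler values.
    return data_loss + lam * np.sum(weights ** 2)

weights = np.array([0.5, -1.5, 2.0])
print(l2_regularized_loss(1.0, weights, lam=0.1))  # 1.0 + 0.1 * 6.5 = 1.65
```

The hyperparameter `lam` trades off fitting the training data against keeping the weights small; dropout (above) is an alternative regularizer that acts on activations rather than on the loss.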

**ReLU (Rectified Linear Unit):** A popular activation function in deep learning that outputs the input directly if it is positive and zero otherwise, introducing non-linearity and reducing the vanishing gradient problem.

**Softmax:** A generalization of the logistic function used as an activation function in the output layer of a deep learning model for multi-class classification tasks, producing a probability distribution over the classes.
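A NumPy sketch, including the standard max-subtraction trick for numerical stability (the logit values are illustrative):

```python
import numpy as np

def softmax(logits):
    # Subtracting the max leaves the result unchanged mathematically
    # but keeps np.exp from overflowing on large logits; then normalise
    # the exponentials so the outputs sum to 1.
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

logits = np.array([2.0, 1.0, 0.1])  # raw scores for three classes
probs = softmax(logits)
print(probs)        # a probability distribution over the three classes
print(probs.sum())  # 1.0
```

The largest logit always receives the largest probability, so softmax preserves the model's ranking of the classes while making the scores directly interpretable.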

**Transfer learning:** A deep learning technique where a pre-trained model is fine-tuned for a different but related task, leveraging the knowledge and features learned from the initial task.

**Underfitting:** A situation in deep learning where a model fails to capture the underlying patterns in the training data, resulting in poor performance on both the training and test datasets.

**Vanishing gradient problem:** A difficulty in training deep neural networks where the gradients of the loss function become extremely small, making it challenging for the weights to update and learn effectively.

The terms and explanations above provide a solid foundation for understanding deep learning techniques in the context of the Professional Certificate in Artificial Intelligence for Quality Management Pioneers.
