Artificial Intelligence Fundamentals

Expert-defined terms from the Professional Certificate in Artificial Intelligence for Asset Management course at London School of Planning and Management. Free to read, free to share, paired with a globally recognised certification pathway.

Artificial Intelligence Fundamentals Glossary

1. Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines.

These processes include learning, reasoning, problem-solving, perception, and understanding natural language. AI is used to develop systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

2. Machine Learning (ML)

Machine Learning (ML) is a subset of AI that enables machines to learn from data without being explicitly programmed.

ML algorithms use statistical techniques to enable machines to improve their performance on a specific task over time. ML is widely used in various applications such as image recognition, predictive analytics, and recommendation systems.

3. Deep Learning

Deep Learning is a subset of ML that uses artificial neural networks to mimic the way the human brain processes information.

Deep Learning algorithms can automatically learn representations of data through multiple layers of abstraction. Deep Learning has achieved remarkable success in tasks such as image and speech recognition, natural language processing, and autonomous driving.

4. Neural Networks

Neural Networks are a set of algorithms modeled after the human brain's structure and function.

They consist of interconnected nodes (neurons) that process information and transmit signals to one another. Neural Networks are the building blocks of Deep Learning and are used in various AI applications to perform tasks such as pattern recognition, classification, and regression.

5. Natural Language Processing (NLP)

Natural Language Processing (NLP) is a branch of AI that enables computers to understand, interpret, and generate human language.

NLP algorithms analyze and process text and speech data to extract meaning, sentiment, and intent. NLP is used in applications such as chatbots, sentiment analysis, language translation, and speech recognition.

6. Reinforcement Learning

Reinforcement Learning is a type of ML that enables agents to learn optimal behavior through trial-and-error interactions with an environment, guided by rewards and penalties.

Reinforcement Learning is inspired by behavioral psychology and is used in applications such as game playing, robotics, and autonomous systems.

7. Supervised Learning

Supervised Learning is a type of ML where the model is trained on labeled data, with each input paired with its correct output.

The goal of Supervised Learning is to learn a mapping function from input to output that can make predictions on unseen data. Supervised Learning is used in tasks such as classification and regression.

8. Unsupervised Learning

Unsupervised Learning is a type of ML where the model is trained on unlabeled data.

The goal of Unsupervised Learning is to discover patterns, relationships, and structures in the data without explicit guidance. Unsupervised Learning is used in tasks such as clustering, dimensionality reduction, and anomaly detection.

9. Classification

Classification is a type of ML task where the goal is to predict the category or class to which input data belongs.

Classification algorithms learn to assign input data to predefined categories based on their features. Classification is used in applications such as spam detection, image recognition, and medical diagnosis.
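As an illustrative sketch of the idea, the toy nearest-centroid classifier below assigns each input to the class whose training examples have the closest average feature values. The spam-detection-style features and data are purely hypothetical; production systems use trained models from libraries such as scikit-learn.

```python
# Toy nearest-centroid classifier: one centroid (mean feature vector) per class.

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid has the smallest squared distance."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical two-feature data (e.g. link count, exclamation-mark count).
training = [([5.0, 4.0], "spam"), ([6.0, 5.0], "spam"),
            ([0.0, 1.0], "ham"),  ([1.0, 0.0], "ham")]
model = train_centroids(training)
print(classify(model, [4.5, 4.5]))  # near the spam centroid -> "spam"
```

The same pattern — learn from labeled examples, then assign new inputs to predefined categories — underlies the real classifiers used in spam detection and medical diagnosis.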

10. Regression

Regression is a type of ML task where the goal is to predict a continuous numerical value from input features.

Regression algorithms learn to model the relationship between input features and output values to make predictions. Regression is used in applications such as stock price forecasting, sales prediction, and housing price estimation.
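The relationship-modeling step can be sketched with ordinary least squares for a single feature. The housing-style numbers below are hypothetical and chosen to lie exactly on a line for clarity:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: house size (hundreds of sq ft) vs. price (thousands).
sizes  = [10.0, 15.0, 20.0, 25.0]
prices = [200.0, 250.0, 300.0, 350.0]
slope, intercept = fit_line(sizes, prices)
print(slope, intercept)            # 10.0 100.0
print(slope * 18.0 + intercept)    # prediction for an unseen size: 280.0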

11. Convolutional Neural Networks (CNN)

Convolutional Neural Networks (CNN) are a type of Deep Learning architecture designed to process grid-like data such as images.

CNNs consist of convolutional layers that apply filters to extract spatial hierarchies of features from input data. CNNs are widely used in computer vision tasks such as object detection, image classification, and facial recognition.

12. Recurrent Neural Networks (RNN)

Recurrent Neural Networks (RNN) are a type of Deep Learning architecture designed to process sequential data.

RNNs have recurrent connections that allow information to persist over time and capture dependencies in sequences. RNNs are used in applications such as language modeling, machine translation, and speech recognition.

13. Long Short-Term Memory (LSTM)

Long Short-Term Memory (LSTM) is a type of RNN architecture designed to overcome the vanishing gradient problem and capture long-range dependencies in sequential data. LSTMs have memory cells that can store and update information over time, making them effective for tasks that require modeling context and relationships over long sequences.

14. Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) are a type of Deep Learning architecture consisting of two competing neural networks: a generator and a discriminator.

The generator learns to generate realistic data samples, while the discriminator learns to distinguish between real and generated samples. GANs are used in tasks such as image generation, style transfer, and data augmentation.

15. Feedforward Neural Networks

Feedforward Neural Networks are a type of neural network where information flows in one direction, from input to output, without loops.

Feedforward Neural Networks consist of interconnected layers of neurons that perform computations and transmit signals through activation functions. Feedforward Neural Networks are used in tasks such as pattern recognition, classification, and regression.
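The forward pass described above can be sketched in a few lines. The weights below are hand-picked for illustration (no training happens here), and the tiny 2-2-1 layout is a hypothetical example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """layers: list of (weight_matrix, bias_vector); signals flow one way."""
    for weights, biases in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Hypothetical hand-picked weights: 2 inputs -> 2 hidden neurons -> 1 output.
network = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer
    ([[1.0, 1.0]],              [-1.0]),       # output layer
]
print(forward([2.0, 1.0], network))  # a single value in (0, 1)
```

Each layer is just a weighted sum plus bias passed through an activation function; stacking layers is what gives the network its capacity.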

16. Backpropagation

Backpropagation is a supervised learning algorithm used to train neural networks by propagating errors backward through the network.

Backpropagation calculates the gradient of the loss function with respect to the network's parameters and adjusts the weights using gradient descent. Backpropagation is a fundamental technique for training deep neural networks.
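The gradient-and-update cycle can be shown on the smallest possible case: a single sigmoid neuron trained on hypothetical data with squared-error loss. Real backpropagation chains this same rule through many layers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=1.0, epochs=2000):
    """Gradient descent on squared error for one sigmoid neuron."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(w * x + b)
            # Chain rule: dLoss/dw = (out - target) * out * (1 - out) * x
            delta = (out - target) * out * (1.0 - out)
            w -= lr * delta * x
            b -= lr * delta
    return w, b

# Hypothetical step-like rule: output ~1 when x > 0, ~0 when x < 0.
samples = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b = train(samples)
print(sigmoid(w * 3.0 + b))   # close to 1
print(sigmoid(w * -3.0 + b))  # close to 0
```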

17. Activation Functions

Activation Functions are mathematical functions applied to the output of neurons to introduce non-linearity into a neural network.

Common activation functions include sigmoid, tanh, ReLU, and softmax. Activation functions are crucial for the convergence and performance of neural networks.
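These common activation functions are small enough to write out directly (tanh is built into Python's math module):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))            # squashes to (0, 1)

def relu(z):
    return max(0.0, z)                            # 0 for negatives, identity otherwise

def softmax(zs):
    exps = [math.exp(z - max(zs)) for z in zs]    # shift by max for stability
    total = sum(exps)
    return [e / total for e in exps]              # probabilities summing to 1

print(sigmoid(0.0))               # 0.5
print(relu(-3.0), relu(3.0))      # 0.0 3.0
print(softmax([1.0, 2.0, 3.0]))   # larger inputs get larger probabilities
print(math.tanh(0.0))             # 0.0, squashes to (-1, 1)
```

Softmax is typically used on the output layer for multi-class classification; ReLU is the usual default for hidden layers.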

18. Hidden Layers

Hidden Layers are layers of neurons in a neural network that are neither input nor output layers.

Hidden Layers process input data through weighted connections and activation functions to learn representations of the data. The depth and width of hidden layers in a neural network affect the model's capacity to learn complex patterns and generalize to unseen data.

19. Tokenization

Tokenization is the process of breaking down text data into smaller units called tokens.

Tokenization is a preprocessing step in NLP tasks that enables computers to analyze and process text data at the word level. Tokenization is used in applications such as text classification, sentiment analysis, and language modeling.
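A minimal word-level tokenizer can be written with a regular expression. Modern NLP pipelines favour subword schemes such as byte-pair encoding, but the idea is the same: text in, list of tokens out.

```python
import re

def tokenize(text):
    """Lowercase word-level tokenizer (keeps apostrophes inside words)."""
    return re.findall(r"[a-z0-9']+", text.lower())

print(tokenize("AI isn't magic: it's statistics!"))
# ['ai', "isn't", 'magic', "it's", 'statistics']
```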

20. Named Entity Recognition (NER)

Named Entity Recognition (NER) is a subtask of NLP that aims to identify and classify named entities in text, such as people, organizations, and locations.

NER algorithms extract entities from unstructured text and assign them predefined categories. NER is used in applications such as information extraction, question answering, and entity linking.

21. Sentiment Analysis

Sentiment Analysis is a task in NLP that involves determining the sentiment or emotional tone expressed in a piece of text.

Sentiment Analysis algorithms analyze and classify text based on the underlying sentiment to extract insights from customer reviews, social media posts, and survey responses. Sentiment Analysis is used in applications such as brand monitoring, customer feedback analysis, and opinion mining.
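The simplest possible approach is a lexicon-based scorer that counts positive and negative words. The word lists below are hypothetical toy lexicons; production sentiment systems use trained models.

```python
# Toy lexicon-based sentiment scorer (hypothetical word lists).
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "sad"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("great product love it"))   # positive
print(sentiment("terrible service"))        # negative
print(sentiment("arrived on tuesday"))      # neutral
```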

22. Language Modeling

Language Modeling is a task in NLP that involves predicting the probability of the next word in a sequence given the words that precede it.

Language Models learn the statistical patterns and relationships between words in a corpus of text to generate coherent and contextually relevant text. Language Modeling is used in applications such as speech recognition, machine translation, and text generation.
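The count-based bigram model below illustrates the statistical idea on a tiny hypothetical corpus; real language models are trained on billions of tokens with neural networks, but they estimate the same kind of conditional probability.

```python
from collections import Counter, defaultdict

# Toy bigram language model: P(next | current) estimated from counts.
corpus = "the cat sat on the mat the cat ran".split()

bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def prob(current, nxt):
    counts = bigrams[current]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

print(prob("the", "cat"))  # 2 of the 3 words after "the" are "cat"
print(prob("cat", "sat"))  # "cat" is followed by "sat" once and "ran" once
```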

23. Markov Decision Process (MDP)

Markov Decision Process (MDP) is a mathematical framework used in Reinforcement Learning to model sequential decision-making under uncertainty.

An MDP consists of states, actions, transition probabilities, rewards, and a discount factor. MDPs enable agents to learn optimal policies that maximize cumulative rewards over time by making sequential decisions.

24. Q-Learning

Q-Learning is a model-free Reinforcement Learning algorithm that learns an optimal action-value function, known as the Q-function.

Q-Learning updates the Q-values of state-action pairs based on rewards received and the maximum Q-value of the next state. Q-Learning is used in applications such as game playing, robotics, and autonomous navigation.
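This update rule can be demonstrated end to end on a hypothetical five-state corridor: the agent starts on the left and is rewarded only for reaching the rightmost state. The environment, parameters, and state layout are all invented for illustration.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)          # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                         # episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:        # explore
            action = random.choice(ACTIONS)
        else:                                # exploit current estimates
            action = 0 if Q[state][0] >= Q[state][1] else 1
        nxt, reward = step(state, action)
        # Q-learning update: bootstrap on the best next-state value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # the learned policy heads right, toward the reward
```

Note how the reward propagates backward through the Q-table one state at a time as episodes accumulate.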

25. Policy Gradient Methods

Policy Gradient Methods are a class of Reinforcement Learning algorithms that directly optimize the agent's policy.

Policy Gradient Methods use gradient ascent to update the policy parameters based on the expected return from trajectories. Policy Gradient Methods are used in applications such as robotics, control systems, and game playing.

26. Exploratory vs. Exploitative Behavior

Exploratory vs. Exploitative Behavior is a key trade-off in Reinforcement Learning between exploring new actions to discover better strategies and exploiting known actions to maximize rewards. Balancing exploratory and exploitative behavior is crucial for Reinforcement Learning agents to learn optimal policies in uncertain environments. Strategies such as epsilon-greedy and softmax exploration are used to manage this trade-off.
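The epsilon-greedy strategy mentioned above fits in a few lines: with probability epsilon the agent picks a random action, otherwise it picks the action with the highest estimated value. The value estimates below are hypothetical.

```python
import random

def epsilon_greedy(values, epsilon, rng=random):
    """Pick a random action with probability epsilon, else the best one."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))                     # explore
    return max(range(len(values)), key=values.__getitem__)    # exploit

random.seed(1)
estimates = [0.2, 0.8, 0.5]   # hypothetical action-value estimates
picks = [epsilon_greedy(estimates, epsilon=0.1) for _ in range(1000)]
print(picks.count(1) / len(picks))  # mostly exploits action 1
```

With epsilon = 0.1, the best-looking action is chosen about 93% of the time, while the remaining random picks keep the agent discovering whether its estimates are wrong.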

27. Training Data

Training Data is a labeled dataset used to train ML models on examples of input features and their corresponding outputs.

Training Data is used to optimize the model's parameters and learn patterns from the data through iterative training algorithms. High-quality and representative Training Data are essential for building accurate and generalizable ML models.

28. Testing Data

Testing Data is a held-out dataset used to evaluate the performance of ML models on examples not seen during training.

Testing Data is used to assess the model's generalization and predictive ability on new data samples. Testing Data is separate from Training Data to measure the model's performance independently and detect issues such as overfitting or underfitting.

29. Overfitting

Overfitting is a common problem in ML where a model learns to memorize the Training Data, including its noise, rather than learning patterns that generalize to new data.

Overfitting occurs when a model is too complex relative to the amount of data available, capturing noise and irrelevant patterns in the data. Techniques such as regularization, dropout, and early stopping are used to prevent overfitting.

30. Underfitting

Underfitting is a common problem in ML where a model is too simple to capture the underlying patterns in the data.

Underfitting occurs when a model is unable to learn the true relationship between input features and output labels. Increasing model complexity, adding features, or using more expressive algorithms can help alleviate underfitting.

31. Cross-Validation

Cross-Validation is a technique used to assess the performance and generalization of ML models by partitioning the dataset into multiple subsets for training and testing. Cross-Validation helps evaluate a model's robustness to variations in the data and prevent overfitting or data leakage. Common cross-validation methods include k-Fold Cross-Validation and Leave-One-Out Cross-Validation.
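The partitioning step of k-Fold Cross-Validation can be sketched as an index generator (libraries such as scikit-learn provide this as `KFold`, with shuffling and stratification options):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

for train, test in k_fold_splits(6, 3):
    print(train, test)
# [2, 3, 4, 5] [0, 1]
# [0, 1, 4, 5] [2, 3]
# [0, 1, 2, 3] [4, 5]
```

Every sample appears in exactly one test fold, so each data point contributes to the performance estimate exactly once.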

32. Clustering

Clustering is a type of Unsupervised Learning task where the goal is to group similar data points together based on their features.

Clustering algorithms partition the data into clusters such that data points within the same cluster are more similar to each other than to data points in other clusters. Clustering is used in applications such as customer segmentation, anomaly detection, and recommendation systems.
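The alternating logic of k-means, the most common clustering algorithm, shows this concretely: assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster. The one-dimensional customer-spend values below are hypothetical.

```python
def k_means_1d(points, centroids, iterations=10):
    """Minimal k-means on 1-D data; returns the final centroids."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as its cluster mean (keep it if cluster is empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Hypothetical customer-spend values forming two obvious groups.
spend = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]
print(sorted(k_means_1d(spend, centroids=[0.0, 5.0])))  # near [1.0, 10.0]
```

No labels are involved: the grouping emerges purely from the structure of the data, which is what makes this Unsupervised Learning.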

33. Principal Component Analysis (PCA)

Principal Component Analysis (PCA) is a dimensionality reduction technique used to transform high-dimensional data into a lower-dimensional representation while preserving as much variance as possible.

PCA identifies the principal components that explain the variance in the data and projects the data onto these components. PCA is used for data visualization, feature selection, and noise reduction.

34. Autoencoders

Autoencoders are a type of neural network architecture used for unsupervised learning of compact data representations.

Autoencoders consist of an encoder that maps input data to a lower-dimensional latent space and a decoder that reconstructs the input data from the latent representation. Autoencoders are used for feature learning, data denoising, and anomaly detection.

35. Generative Models

Generative Models are a class of ML models that learn to generate new data samples resembling the training data.

Generative Models capture the underlying patterns and structures in the data to produce realistic and novel samples. Generative Models are used in applications such as image generation, text generation, and data synthesis.

These fundamental terms provide a solid foundation in Artificial Intelligence concepts for asset management.

Understanding these terms is crucial for building intelligent systems, analyzing financial data, and making data-driven decisions in asset management. By mastering these AI fundamentals, learners can enhance their skills, explore advanced topics, and apply AI techniques to real-world asset management challenges.
