Agent AI Glossary
A
- Agentic AI – AI systems that exhibit autonomy, decision-making capabilities, and proactive behavior rather than just responding passively to inputs.
- Autonomy – The ability of an AI agent to act independently without continuous human intervention.
- Action Model – A model that predicts the effects of an AI agent’s actions in an environment.
- Adaptive Learning – The ability of an AI system to adjust its behavior based on feedback and new data.
B
- Behavioural Policy – A set of rules or learned strategies that determine an agent’s actions in different scenarios.
- Belief State – The internal representation an agent maintains about the world, including uncertainties.
- Black-Box AI – AI models whose decision-making processes are opaque and not easily interpretable.
- Bounded Rationality – The idea that AI agents operate with limited computational resources and knowledge when making decisions.
C
- Cognitive Architecture – The underlying framework that supports reasoning, memory, and learning in agentic AI.
- Computational Agency – The level of autonomy and self-directed decision-making an AI system possesses.
- Contextual Awareness – The ability of an AI agent to understand and act based on situational and environmental cues.
- Control Loop – The process by which an agent perceives, decides, and acts in response to an environment.
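The perceive–decide–act control loop can be sketched in a few lines of Python. The thermostat agent below, and every name in it, is invented purely for illustration:

```python
# Minimal sketch of a perceive-decide-act control loop, using a toy
# thermostat agent (all names here are illustrative, not a standard API).

def perceive(environment):
    """Read the current temperature from the environment."""
    return environment["temperature"]

def decide(temperature, target=21.0):
    """Choose an action based on the perceived state."""
    if temperature < target:
        return "heat"
    if temperature > target:
        return "cool"
    return "idle"

def act(environment, action):
    """Apply the action, changing the environment for the next cycle."""
    delta = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    environment["temperature"] += delta

env = {"temperature": 18.0}
for _ in range(5):  # five iterations of the control loop
    action = decide(perceive(env))
    act(env, action)
```

Each pass through the loop changes the environment, which in turn changes what the agent perceives on the next pass, closing the loop.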
D
- Decision Theory – The mathematical framework for modelling decision-making under uncertainty.
- Deliberative AI – AI systems that explicitly plan actions before executing them, as opposed to purely reactive AI.
- Deterministic Agent – An AI agent that produces the same output given the same input and state.
- Distributed Agency – A system where multiple AI agents collaborate to achieve goals.
E
- Embodied AI – AI that interacts with the physical world through robotics or sensor-equipped systems.
- Emergent Behaviour – Unplanned or unexpected behaviour arising from an agent’s interactions with its environment.
- Epistemic Uncertainty – Uncertainty due to an agent’s lack of knowledge about the environment or task.
F
- Feedback Loop – The process in which an agent’s actions influence the environment, and the resulting changes affect future actions.
- Few-Shot Learning – The ability of an AI agent to learn from a small number of examples.
- Forward Model – A predictive model used by AI agents to anticipate the outcomes of their actions.
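A forward model lets an agent evaluate actions before executing them, which also illustrates how a feedback loop closes between predicted and actual outcomes. The one-dimensional grid below is a toy example invented for illustration:

```python
# A forward model predicts the next state given the current state and an
# action, letting the agent compare candidate actions without acting.
# The 1-D grid, action names, and goal are illustrative only.

def forward_model(position, action):
    """Predict the next position on a line for a move action."""
    return position + {"left": -1, "right": +1, "stay": 0}[action]

def best_action(position, goal):
    """Pick the action whose predicted outcome is closest to the goal."""
    actions = ["left", "right", "stay"]
    return min(actions, key=lambda a: abs(forward_model(position, a) - goal))

# The agent anticipates outcomes instead of acting blindly:
assert best_action(position=2, goal=5) == "right"
```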
G
- Goal-Oriented Agent – An AI system that operates with the objective of achieving predefined goals.
- Grounding – The process by which an AI system associates symbols or data with real-world meaning.
H
- Hierarchical Reinforcement Learning (HRL) – A reinforcement learning approach where decisions are made at multiple levels of abstraction.
- Human-in-the-Loop (HITL) – A system design where humans provide feedback or intervention in an AI’s decision-making process.
- Hybrid AI – AI systems that combine symbolic reasoning with machine learning to enhance decision-making.
I
- Inference Engine – The part of an AI system that applies logical rules to the knowledge base to derive conclusions.
- Interactive Agent – An AI that engages in dynamic exchanges with users or other agents.
K
- Knowledge Representation – The way an AI stores and organises information about the world.
- Kalman Filter – A recursive algorithm that estimates the hidden state of a dynamic system from a sequence of noisy measurements.
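A one-dimensional Kalman filter for a static hidden value shows the core predict–update idea; the variable names and noise values below are chosen for illustration:

```python
# A minimal one-dimensional Kalman filter estimating a constant hidden
# value from noisy measurements (no process noise). Parameter values
# below are illustrative.

def kalman_update(estimate, error_var, measurement, measurement_var):
    """One update step: blend prediction and measurement by their variances."""
    kalman_gain = error_var / (error_var + measurement_var)
    new_estimate = estimate + kalman_gain * (measurement - estimate)
    new_error_var = (1 - kalman_gain) * error_var
    return new_estimate, new_error_var

estimate, error_var = 0.0, 1.0          # initial guess and its variance
for z in [5.1, 4.9, 5.2, 5.0]:          # noisy readings of a true value ~5
    estimate, error_var = kalman_update(estimate, error_var, z, 0.5)
# The estimate moves toward 5 and its variance shrinks with each reading.
```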
L
- Latent Space – The abstract multi-dimensional representation of learned features in AI models.
- LLM Agent – A large language model (LLM) configured with agentic behaviour, capable of planning and autonomous execution.
M
- Markov Decision Process (MDP) – A mathematical framework for modelling decision-making in stochastic environments.
- Meta-Learning – The ability of an AI system to learn how to learn across different tasks.
- Multi-Agent System (MAS) – A system where multiple AI agents interact and collaborate to complete tasks.
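Value iteration on a tiny MDP shows how an agent can compute optimal behaviour in a stochastic environment. The two states, actions, and transition probabilities below are made up for illustration:

```python
# Value iteration on a toy two-state MDP.
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9  # discount factor

values = {s: 0.0 for s in transitions}
for _ in range(100):  # iterate the Bellman optimality update to convergence
    values = {
        s: max(
            sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }
# Staying in s1 forever is optimal, so values["s1"] converges to
# 2 / (1 - 0.9) = 20.
```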
N
- Neural-Symbolic AI – AI that combines neural networks with rule-based symbolic reasoning.
- Non-Deterministic Agent – An AI whose outputs can vary even when given the same inputs due to probabilistic decision-making.
O
- Observability – The degree to which an agent can perceive the state of its environment; environments may be fully observable or only partially observable.
- Ontologies – Structured representations of knowledge that define relationships between concepts in an AI system.
P
- Perception Module – The part of an AI agent that interprets sensor inputs to understand its surroundings.
- Planning – The process by which an agentic AI system generates a sequence of actions to achieve a goal.
- Prompt Engineering – The process of designing inputs to guide an LLM agent’s behaviour effectively.
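Planning can be sketched as search over a state graph for an action sequence that reaches a goal. The breadth-first planner and the toy "door" domain below are invented for illustration:

```python
from collections import deque

# graph[state] = {action: next_state}; a toy planning domain.
graph = {
    "start":     {"pick_key": "has_key", "walk": "hallway"},
    "hallway":   {"walk_back": "start"},
    "has_key":   {"open_door": "door_open"},
    "door_open": {"enter": "goal"},
}

def plan(start, goal):
    """Return the shortest action sequence from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in graph.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

assert plan("start", "goal") == ["pick_key", "open_door", "enter"]
```

Deliberative agents run a search like this before acting; reactive agents skip it and map perceptions directly to actions.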
R
- Reinforcement Learning (RL) – A machine learning technique where an agent learns by interacting with an environment and receiving rewards.
- Robustness – The ability of an AI system to perform reliably under varied conditions and perturbations.
- Rule-Based Agent – An AI system that follows predefined logical rules for decision-making.
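A compact Q-learning example ties several of the terms above together: an agent interacts with an environment, receives rewards, and improves its policy. The corridor world and hyperparameters below are a toy setup chosen for illustration:

```python
import random

random.seed(0)
n_states, goal = 4, 3           # cells 0..3; reward for reaching cell 3
q = {(s, a): 0.0 for s in range(n_states) for a in ("left", "right")}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: move, clip to the corridor, reward at goal."""
    s2 = max(0, min(n_states - 1, state + (1 if action == "right" else -1)))
    return s2, (1.0 if s2 == goal else 0.0)

for _ in range(500):  # training episodes
    state = 0
    while state != goal:
        if random.random() < epsilon:          # explore
            action = random.choice(("left", "right"))
        else:                                  # exploit current estimates
            action = max(("left", "right"), key=lambda a: q[(state, a)])
        s2, reward = step(state, action)
        best_next = max(q[(s2, a)] for a in ("left", "right"))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = s2

# After training, the learned values prefer "right" in every non-goal cell.
```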
S
- Self-Supervised Learning – A form of training in which the system generates its own supervisory signal from unlabelled data, for example by predicting masked or future parts of the input.
- Semi-Autonomous Agent – An AI that can operate independently but requires occasional human intervention.
- State Space – The set of all possible conditions an agent might encounter.
- Symbolic AI – AI systems that use logic and structured rules rather than data-driven neural networks.
T
- Temporal Difference Learning (TD Learning) – A reinforcement learning method in which an agent updates its value estimates from the difference between successive predictions, bootstrapping on its own future estimates rather than waiting for final outcomes.
- Transfer Learning – The ability of an AI model to apply knowledge learned from one task to another.
- Trustworthy AI – AI systems designed to be transparent, fair, and reliable.
U
- Unsupervised Reinforcement Learning – A method where agents learn to explore and discover goals without predefined rewards.
- Utility Function – A mathematical representation of an AI agent’s objectives and preferences.
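Given a utility function, an agent can rank actions by their expected utility, weighting each outcome's utility by its probability. The lottery below is invented for illustration:

```python
# Expected-utility decision making over toy actions.
# Each action maps to a list of (probability, utility) outcomes.
actions = {
    "safe":  [(1.0, 50.0)],
    "risky": [(0.5, 120.0), (0.5, 0.0)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of outcome utilities."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
# "risky" wins here: 0.5 * 120 + 0.5 * 0 = 60 > 50.
```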
V
- Value Function – A function that estimates the expected cumulative reward (return) of being in a state, or of taking an action in a state, in RL.
- Vector Embeddings – Numeric representations of data that AI agents use for reasoning and similarity comparisons.
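Similarity between embeddings is typically measured with cosine similarity. The three-dimensional vectors below are invented; real embeddings come from a trained model and usually have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: semantically close items point in similar directions.
cat = [0.9, 0.1, 0.0]
dog = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

assert cosine_similarity(cat, dog) > cosine_similarity(cat, car)
```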
W
- World Model – An internal representation of the environment that an AI agent uses for planning and decision-making.
X
- Explainable AI (XAI) – AI systems that provide transparent and interpretable decision-making processes.
Z
- Zero-Shot Learning – The ability of an AI agent to generalise to new tasks or situations without any task-specific training examples, relying on prior knowledge.