AI Glossary

Key AI concepts explained for business leaders

Agentic AI

Agentic AI refers to artificial intelligence systems that can autonomously plan, reason, and execute multi-step tasks to achieve defined goals. Unlike conversational AI that responds to individual prompts, agentic AI systems break down complex objectives into subtasks, use tools and APIs, make intermediate decisions, and adapt their approach based on results. These agents operate within governance boundaries set by humans and escalate to human oversight at critical decision points. Agentic AI represents the shift from AI as a question-answering tool to AI as an operational participant in business workflows.
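The plan-execute-adapt loop described above can be sketched in a few lines. This is a toy illustration, not a production agent: the planner, the tool set, and the escalation rule are all simplified assumptions.

```python
# Hedged sketch of an agentic loop: break a goal into subtasks, execute
# each with a tool, and escalate to human oversight at critical points.
# The planner, tool names, and escalation rule are illustrative stand-ins.

def plan(goal):
    """Toy planner: split a goal into fixed subtasks."""
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def run_agent(goal, tools):
    history = []
    for task in plan(goal):
        tool_name = task.split(":")[0]
        result = tools[tool_name](task) if tool_name in tools else None
        if result is None:  # governance boundary: no tool fits this step
            history.append((task, "escalated to human oversight"))
        else:
            history.append((task, result))
    return history

tools = {
    "research": lambda t: "3 relevant reports found",
    "draft": lambda t: "summary drafted",
}
steps = run_agent("quarterly spend summary", tools)
```

Note how the "review" step has no matching tool, so the agent hands it back to a human rather than guessing: the autonomy operates inside boundaries the human defined.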

Arabic NLU

Arabic Natural Language Understanding (NLU) is the AI capability of comprehending Arabic text beyond surface-level word matching — including intent recognition, entity extraction, sentiment analysis, and contextual meaning. Arabic NLU is significantly more challenging than English NLU due to the language's morphological complexity (a single root can produce dozens of word forms), right-to-left script, the vast difference between Modern Standard Arabic and spoken dialects (Gulf, Egyptian, Levantine, Maghrebi), and the relative scarcity of high-quality Arabic training data. Effective Arabic NLU must handle dialect variation, code-switching between Arabic and English, and cultural context specific to the region.
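The dialect and code-switching challenges above can be made concrete with a toy example. Real Arabic NLU uses trained models, not lookup tables, and the dialect phrases and intent label below are illustrative assumptions.

```python
# Illustrative sketch only: many dialectal surface forms must map to one
# intent, and mixed Arabic/English input must still be understood.
# Phrases, dialect labels, and the intent name are assumptions.

RENEW_FORMS = {
    "أريد تجديد الاشتراك",   # Modern Standard Arabic
    "أبغى أجدد الاشتراك",    # Gulf dialect
    "عايز أجدد الاشتراك",    # Egyptian dialect
}

def detect_intent(utterance):
    # Code-switching: set aside English tokens before matching the Arabic core.
    core = " ".join(w for w in utterance.split() if not w.isascii())
    return "renew_subscription" if core in RENEW_FORMS else "unknown"

intent = detect_intent("أبغى أجدد الاشتراك please")
```

Three very different surface strings, one business meaning: that many-to-one mapping, multiplied across every intent and dialect, is why Arabic NLU is hard.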

Large Language Model (LLM)

A Large Language Model (LLM) is an AI system trained on vast amounts of text data that can understand, generate, and reason about human language. LLMs power most modern AI applications — from chatbots and writing assistants to code generation and data analysis. They work by predicting the most likely next words based on context, but their capabilities extend far beyond simple text completion: they can summarize documents, translate between languages, extract structured data from unstructured text, and follow complex instructions. LLMs are the foundation layer that other AI capabilities — like RAG, agentic workflows, and fine-tuning — build upon.
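The "predicting the most likely next words" mechanism can be sketched with a toy probability table. Real LLMs learn billions of such relationships from training data rather than using a hand-built table; the words and probabilities here are purely illustrative.

```python
# Toy sketch of next-word prediction. A real LLM learns these probabilities
# across enormous contexts; this two-entry table only shows the mechanism.

NEXT_WORD_PROBS = {
    ("the", "quarterly"): {"report": 0.6, "meeting": 0.3, "budget": 0.1},
    ("quarterly", "report"): {"shows": 0.5, "is": 0.4, "covers": 0.1},
}

def predict_next(context):
    """Pick the most likely next word given the last two words."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-2:]), {})
    return max(probs, key=probs.get) if probs else None

words = ["the", "quarterly"]
words.append(predict_next(words))
words.append(predict_next(words))
```

Scaled up with vastly richer context, the same predict-extend loop is what makes summarization, translation, and instruction-following emerge.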

Model-Agnostic

Model-agnostic describes an AI architecture that is designed to work with any foundation model rather than being locked to a single provider. A model-agnostic system abstracts the model layer so that the underlying language model can be swapped — from one provider to another, or from cloud to on-premise — without rebuilding the application. This approach protects against vendor lock-in, enables compliance with data sovereignty regulations (critical in the GCC), allows switching to better models as they become available, and gives organizations leverage in commercial negotiations with model providers.
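The abstraction described above can be sketched as a thin provider interface. The class and method names are assumptions for illustration, not calls to any real SDK.

```python
# Hedged sketch of a model-agnostic layer: the application talks to one
# interface, and the underlying model is chosen by configuration.

from abc import ABC, abstractmethod

class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class CloudProvider(ModelProvider):
    def complete(self, prompt):
        return f"[cloud model] {prompt}"

class OnPremProvider(ModelProvider):
    def complete(self, prompt):
        return f"[on-prem model] {prompt}"

def build_provider(config):
    """Swap the underlying model via configuration, not a rebuild."""
    providers = {"cloud": CloudProvider, "on_prem": OnPremProvider}
    return providers[config["provider"]]()

# e.g. a data-sovereignty requirement forces on-premise deployment:
app_model = build_provider({"provider": "on_prem"})
answer = app_model.complete("Summarize this contract.")
```

Switching providers means changing one configuration value; nothing in the application code that calls `complete` needs to be touched.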

Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is an AI architecture that enhances language model responses by first retrieving relevant information from external knowledge sources — such as company documents, databases, or knowledge bases — before generating an answer. Instead of relying solely on what the model learned during training, RAG grounds responses in your actual data. This dramatically reduces hallucination (fabricated answers), keeps outputs current with your latest information, and enables the AI to cite specific sources. RAG is the technical foundation behind most enterprise AI assistants and customer support systems.
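The retrieve-then-generate flow can be sketched minimally. The keyword scorer and prompt format below are simplified assumptions; production RAG systems use vector search over embeddings and pass the grounded prompt to an LLM.

```python
# Minimal RAG sketch: retrieve the most relevant documents first, then
# build a prompt grounded in them (with citable source names).

DOCS = {
    "leave_policy.md": "Employees accrue 25 days of annual leave per year.",
    "expense_policy.md": "Expense claims must be filed within 30 days.",
}

def retrieve(question, docs, top_k=1):
    """Score each document by keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, docs):
    sources = retrieve(question, docs)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    return f"Answer using only these sources:\n{context}\n\nQ: {question}"

prompt = build_prompt("How many days of annual leave do employees get?", DOCS)
```

Because the answer is drawn from retrieved company documents rather than the model's training memory, the response stays current with your data and can cite the source file it came from.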