
AI Fundamentals — What Enterprise Professionals Actually Need to Know

You don't need a data science degree to understand AI. You need the right mental models. Here's the guide I wish someone had given me.

Six months ago, I was sitting in a meeting where someone said "we should use RAG for that" and I nodded like I knew exactly what they meant.

I didn't.

I come from SAP. I've spent years working on S/4 HANA transformations — mapping business processes, cleaning master data, convincing finance teams to trust new systems. I know enterprise technology. But the AI side of it? I was faking it more than I'd like to admit.

So I went back to fundamentals. Not to become a data scientist — but to understand enough that I'd never nod along in a meeting again without knowing what was actually being discussed.

This guide is what I wish someone had handed me at the start. No jargon for the sake of jargon. No dumbing down either. Just the core concepts, explained the way I'd explain them to a colleague over coffee.

If you work in enterprise tech — SAP, Oracle, whatever your system — this is for you.

01. AI, Machine Learning, and Deep Learning — The Nesting Dolls

Think of it as three layers:

AI is the broad field — any system that performs tasks normally requiring human intelligence. Pattern recognition, language understanding, decision-making.

Machine Learning is a subset of AI where systems learn from data instead of following hard-coded rules. Instead of programming every decision, you feed the system examples and it figures out the patterns.

Deep Learning is a subset of ML using layered neural networks to handle complex tasks like understanding language or recognizing images.

Enterprise parallel

Think of AI as the project, ML as the methodology, and Deep Learning as the specific technique. Just like in SAP — S/4 HANA is the platform, Activate is the methodology, and a specific Fiori app is the implementation.

Most AI you'll encounter at work in 2026 is "narrow AI" — systems built for specific tasks. The general AI that can do everything a human can? That doesn't exist yet.

02. Large Language Models (LLMs) — How ChatGPT Actually Works

When you type something into ChatGPT or Claude, here's what happens under the hood:

Training: The model has read billions of words of text — books, websites, code, papers. Not "memorized" them, but learned patterns from them. Grammar, facts, reasoning, coding conventions — all absorbed through one deceptively simple task: predicting the next word, trillions of times.

Generating: When you ask a question, the model doesn't look up an answer in a database. It generates a response by predicting the most likely next word, one word at a time, based on everything it learned during training.
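That generation loop is easier to picture in code. Here's a deliberately tiny sketch: the probability table is invented for illustration (a real LLM computes these probabilities with a neural network over billions of parameters), but the loop — pick the likeliest next word, append it, repeat — is the same idea.

```python
# Toy next-word generation. NEXT_WORD_PROBS is hard-coded, invented data;
# a real model would compute these probabilities from its learned weights.
NEXT_WORD_PROBS = {
    "the":      {"system": 0.6, "model": 0.4},
    "system":   {"learns": 0.7, "predicts": 0.3},
    "model":    {"predicts": 0.8, "learns": 0.2},
    "learns":   {"patterns": 1.0},
    "predicts": {"words": 1.0},
}

def generate(prompt_word: str, max_words: int = 4) -> list[str]:
    """Greedy decoding: always pick the most likely next word."""
    words = [prompt_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # no known continuation -- stop, like an end-of-text token
        words.append(max(options, key=options.get))
    return words

print(" ".join(generate("the")))  # the system learns patterns
```

Real models also sample from the probabilities rather than always taking the top word — that's what the "temperature" setting controls.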

The key technology is called the Transformer architecture (invented in 2017 at Google). Its innovation — "self-attention" — allows the model to understand how every word in a sentence relates to every other word. That's why it can follow complex instructions and maintain context across long conversations.
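The "self-attention" formula itself is compact enough to sketch. This is a minimal, assumption-laden version: the projection matrices are random stand-ins for learned weights, and real Transformers stack many of these layers with multiple attention heads.

```python
# Minimal scaled dot-product self-attention -- the core Transformer operation.
# Random matrices stand in for the learned projection weights.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model). Each output position mixes all input positions."""
    seq_len, d = x.shape
    rng = np.random.default_rng(0)
    # In a real model Wq, Wk, Wv are learned; here they are random.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)   # how strongly each word attends to each other word
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ V              # weighted mix of every position's value

x = np.random.default_rng(1).standard_normal((5, 8))  # 5 "words", 8 dimensions
print(self_attention(x).shape)  # (5, 8)
```

The key line is `Q @ K.T`: every word is scored against every other word, which is exactly why the model can relate a pronoun at the end of a paragraph back to a name at the start.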

The major models right now: the latest GPT-series models from OpenAI, Anthropic's Claude family (with Opus, Sonnet, and Haiku variants), Google's Gemini models, Meta's open-source Llama family, and other open and commercial models like DeepSeek. They differ in training data, capability, cost, and safety measures.

The critical limitation

LLMs can "hallucinate" — generate confident, plausible answers that are completely wrong. Their knowledge has a cutoff date. And they reflect biases in their training data. For enterprise professionals: an LLM can draft a policy document that reads perfectly and is factually wrong. Always verify.

03. AI Agents — Beyond Question-and-Answer

This is where things get relevant for enterprise.

A standard chatbot works like this: you ask, it answers, done.

An AI agent is different. It can break a goal into steps, decide which tools to use, execute actions, evaluate results, and adjust its approach — with minimal human input. It operates in a loop: perceive → reason → act → evaluate → repeat.
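The loop above can be sketched in a few lines. Fair warning: the "reasoning" here is a hard-coded rule on a toy goal, purely to make the control flow visible — in a real agent, an LLM decides which action or tool to use at that step.

```python
# Sketch of the perceive -> reason -> act -> evaluate loop.
# The goal and actions are invented; only the loop structure is the point.

def run_agent(goal: int, state: int = 0, max_steps: int = 10) -> int:
    """Hypothetical agent that nudges a number toward a goal value."""
    for _ in range(max_steps):
        observation = state                        # perceive the environment
        if observation == goal:                    # evaluate: goal reached?
            break
        action = 1 if observation < goal else -1   # reason: choose an action
        state = observation + action               # act and loop again
    return state

print(run_agent(goal=3))  # 3
```

Notice the `max_steps` cap: real agent frameworks need the same guardrail, because a loop that reasons, acts, and re-evaluates can otherwise run — and spend — indefinitely.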

Practical examples

Why this matters for enterprise: SAP's Joule Studio — recently introduced and being rolled out — lets you build custom AI agents that connect into your S/4 HANA environment. Oracle has introduced agent-based capabilities in Fusion Applications. This isn't theoretical anymore — agents are being embedded into the systems you already run.

My honest take

The agent is only as good as the data and processes it connects to. If your master data has been wrong since the 2017 ECC migration, your AI agent will confidently automate the wrong things. Foundational work first.

04. RAG — How AI Knows About Your Company

Here's a problem: LLMs are trained on public data. They don't know your company's refund policy, your internal processes, or last quarter's financial results.

RAG (Retrieval-Augmented Generation) solves this by connecting the LLM to your actual documents before it generates an answer.

How it works

  1. You ask a question — "What's our refund policy for enterprise clients?"
  2. The system searches your company's knowledge base for relevant documents
  3. It retrieves the most relevant sections
  4. The LLM generates an answer grounded in those actual documents — not its general training data
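Steps 2 and 3 — the retrieval half of RAG — can be shown with a toy example. I'm using crude word-count vectors and two invented policy snippets here; a real system would use learned embeddings and a vector database, but the shape of the pipeline is the same.

```python
# Toy RAG retrieval: embed documents and the question, return the closest
# document as grounding context. Documents and the embedding method are
# deliberately simplistic stand-ins.
from collections import Counter
import math

DOCS = [
    "Enterprise clients may request a refund within 30 days.",
    "Procurement requests above 10k EUR need director approval.",
]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a real system uses a neural embedding model."""
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    return dot / (math.sqrt(sum(v * v for v in a.values()))
                  * math.sqrt(sum(v * v for v in b.values())))

def retrieve(question: str) -> str:
    q = embed(question)
    return max(DOCS, key=lambda d: cosine(q, embed(d)))

context = retrieve("What's our refund policy for enterprise clients?")
print(context)  # the refund document -- this, not training data, grounds the answer
```

In step 4 that retrieved text is pasted into the LLM's prompt, which is why RAG answers can cite your actual documents instead of guessing.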

Why it matters

The enterprise application: Imagine an internal helpdesk that can answer questions about your specific SAP configuration, your company's procurement policies, or your HR guidelines — all by reading your actual documentation. That's RAG.

If you've ever struggled with institutional knowledge leaving when experienced consultants roll off a project — RAG is the technical answer to that problem.

05. Embeddings — How AI Understands Meaning

This is the concept that made everything click for me.

An embedding converts text (or images, or any data) into a list of numbers — a "vector" — that captures its meaning. Think of it as plotting words on a map: "king" and "queen" would be close together (both royalty), while "king" and "banana" would be far apart.

A famous example: if you take the vector for "king," subtract "man," and add "woman," the closest result is "queen." The math captures relationships between meanings.
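You can run the famous example yourself with toy vectors. The two dimensions here (roughly "male-ness" and "royalty") are invented axes for illustration — real embeddings have hundreds or thousands of dimensions — but the arithmetic is exactly the same idea.

```python
# king - man + woman ~= queen, with hand-made 2-D vectors.
# The axes [male-ness, royalty] are invented for illustration.
import numpy as np

VECS = {
    "king":   np.array([ 1.0,  1.0]),
    "queen":  np.array([-1.0,  1.0]),
    "man":    np.array([ 1.0,  0.0]),
    "woman":  np.array([-1.0,  0.0]),
    "banana": np.array([ 0.1, -1.0]),
}

target = VECS["king"] - VECS["man"] + VECS["woman"]  # lands at [-1, 1]
closest = min((w for w in VECS if w != "king"),
              key=lambda w: np.linalg.norm(VECS[w] - target))
print(closest)  # queen
```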

Why this matters practically

Embeddings are what make semantic search work: a RAG system can find the document that answers your question even when it shares none of the same words, because their meanings sit close together in vector space.

Enterprise parallel

Think of embeddings as the master data of AI. Just like clean master data is the foundation of a working SAP system, quality embeddings are the foundation of accurate AI retrieval. Garbage in, garbage out — same principle, different technology.

06. Vector Databases — Where Embeddings Live

Regular databases find exact matches: "give me all orders where customer_id = 123."

Vector databases find similar meanings: "give me documents that are closest in meaning to this question."

They store millions of embedding vectors and use specialised algorithms to search through them in milliseconds. The popular ones right now: Pinecone (managed, zero-ops), Weaviate (open-source, good for combining with structured data), Qdrant (high-performance), Chroma (great for prototyping), and pgvector (if you already run PostgreSQL).

The connection: Vector databases are the backbone of most RAG systems. Your documents get embedded, stored in a vector database, and when someone asks a question, their query gets embedded too, and the database finds the most relevant matches.
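Conceptually, the whole job is "find the nearest vectors to this query." Here's a brute-force sketch over random vectors — real vector databases get the same result over millions of entries in milliseconds by using approximate-nearest-neighbour indexes instead of scanning everything.

```python
# What a vector database does, minus the clever indexing: store embeddings,
# then return the k nearest to a query by cosine similarity.
import numpy as np

rng = np.random.default_rng(42)
store = rng.standard_normal((1000, 64))                 # 1000 "document" embeddings
store /= np.linalg.norm(store, axis=1, keepdims=True)   # normalise once at insert time

def top_k(query: np.ndarray, k: int = 3) -> np.ndarray:
    q = query / np.linalg.norm(query)
    sims = store @ q                     # cosine similarity against every document
    return np.argsort(sims)[::-1][:k]    # indices of the k best matches

# A slightly perturbed copy of document 7 should come back as its own best match.
query = store[7] + 0.05 * rng.standard_normal(64)
print(top_k(query))
```

Normalising vectors at insert time, as above, is a common trick: it turns cosine similarity into a single matrix multiply at query time.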

07. MCP — The Protocol Connecting AI to Everything

One last concept that's worth knowing, especially if you work in enterprise systems.

MCP (Model Context Protocol) is an emerging open standard introduced by Anthropic that aims to standardise how AI connects to external tools and data sources.

Think of it as USB for AI agents. Before standards like this, every AI tool needed a custom integration for every system it wanted to talk to. MCP creates one standard connector that works across different AI models and tools.
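To make "one standard connector" concrete: MCP messages are built on JSON-RPC 2.0. The sketch below shows roughly what a tool-call request looks like on the wire — the tool name and arguments are invented for this example, so treat the exact fields as illustrative rather than a spec reference.

```python
# Illustrative shape of an MCP tool-call request (JSON-RPC 2.0 framing).
# The tool "lookup_purchase_order" and its arguments are hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_purchase_order",           # a tool an MCP server might expose
        "arguments": {"po_number": "4500012345"},  # invented example input
    },
}
print(json.dumps(request, indent=2))
```

The point isn't the specific fields — it's that any MCP-speaking client can send this same shape of message to any MCP-speaking server, instead of each AI tool needing a bespoke integration per system.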

Some enterprise vendors are exploring or adopting similar approaches for their own AI platforms. That means AI agents — whether built by one vendor or another — can increasingly connect to enterprise environments through shared protocols rather than bespoke integrations.

For those of us who've spent years building integrations the hard way, this matters. The hard part of enterprise AI was never making the AI smart enough. It was making all your systems agree on how to share data.

The Bottom Line

You don't need to build these systems to lead AI initiatives. But you do need to understand what's happening inside them — enough to ask the right questions, spot the risks, and make good decisions.

If you come from enterprise systems — SAP, Oracle, whatever platform — you already understand data quality, process mapping, integration challenges, and change management. Those are the hard problems. The AI concepts above? Those you can learn.

I did. And I'm sharing the honest version, not the impressive version.

Between the Hype

A biweekly newsletter on where enterprise systems and AI actually intersect. Not the hype. The reality.

Subscribe on LinkedIn →

Sven Romijn

Enterprise transformation consultant specialising in SAP S/4 HANA and AI adoption. Writing about the intersection of enterprise systems and AI — the reality, not the hype.