How AI Works

AI For Procurement Series - Part 1  

Learn the basics of ChatGPT, LLMs, and Prompts

The AI For Procurement Series explains artificial intelligence topics in concise articles tailored for procurement professionals.

This first article lays the foundation by explaining the basic functions associated with artificial intelligence. We'll accomplish this by defining common terms you may have heard.

Artificial Intelligence - This is now a catch-all phrase that could refer to anything from machine learning to human-level intelligence. For our purposes, we will focus on AI as applied to ChatGPT.

ChatGPT - An interface to a large language model (LLM). Released to the public in November 2022 by the company OpenAI, ChatGPT exploded in popularity in the first quarter of 2023. Importantly, ChatGPT is not itself the LLM, or any sort of 'AI'; it is simply a user interface, a way of communicating with an LLM.

Large Language Model (LLM) - The LLM is the actual model trained to interact with human language at large scale. There are many LLMs, but the best known is GPT from OpenAI. There is also BERT (Google), LLaMA (Meta), and many others, but this article will focus on GPT. LLMs are 'trained' on large corpora of data comprising millions and even billions of data points. It is this large body of information that gives LLMs the ability to seemingly reason and converse in natural language. Training at large scale is time-consuming, costly, and, most importantly, a one-time event. This means an LLM only 'knows' information up to its training date (September 2021 for GPT-3.5, with later models extending into 2023). This is called the 'knowledge cutoff date'.

GPT - Generative Pre-trained Transformers are LLMs that generate text by predicting the next word in a sequence. The two most popular GPT models are GPT-3.5 and GPT-4 from OpenAI. Currently, GPT-3.5 powers the free version of ChatGPT, while the paid version allows access to both GPT-3.5 and GPT-4. GPT-3.5 is good for basic tasks like lookup or summarization, but complex reasoning requires the much more capable GPT-4.
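The phrase 'predicting the next word' can be made concrete with a toy sketch. Everything below (the candidate words, the probabilities, the sentence) is invented for illustration; a real GPT scores every token in a vocabulary of tens of thousands using billions of learned parameters, then appends a word and repeats.

```python
# Toy illustration of next-word prediction (all numbers are made up).
# A real GPT assigns a probability to every token in its vocabulary;
# the most likely (or a randomly sampled) token is appended to the
# text, and the process repeats one word at a time.

def predict_next_word(context):
    # Hypothetical probabilities a model might assign after this context.
    candidates = {
        "quote": 0.55,
        "order": 0.25,
        "shipment": 0.15,
        "banana": 0.05,
    }
    # Greedy decoding: pick the single most probable next word.
    return max(candidates, key=candidates.get)

sentence = "Please send me a price"
print(sentence + " " + predict_next_word(sentence))  # → Please send me a price quote
```

Chaining this step over and over is all the model does; the apparent reasoning emerges from how well those probabilities were learned during training.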

Before introducing more terms, let's visualize what we've covered so far:


So we have the ChatGPT interface communicating with the GPT-3.5 or GPT-4 LLMs. But how does it communicate? Let's introduce some new terms.

Prompt - Text passed to the LLM. The prompt is commonly thought of as just the question entered into ChatGPT, but it actually has three main components: System Instruction, Query, and Context. These components are combined to create the prompt.

System Instruction - Instructions followed by the LLM. Think of these as global instructions included with every prompt. System Instructions are created by the developer of the interface, so ChatGPT has System Instructions that are unseen by the typical user (though there are ways to discover them). OpenAI also allows paid users to create Custom Instructions, which essentially act as additional System Instructions applied to every prompt.

Query - This is the actual input composed by the user. Most queries are simple, but they can become very complex for advanced users. The structure and language of the query are crucial to the response received from the LLM. The art of crafting queries is commonly called 'prompt engineering', though that term can also carry more complex meanings.

Context - The information LLMs can answer questions about has two main constraints: time and proprietary content. Time is determined by the knowledge cutoff date discussed above. The more important constraint is proprietary content: there is no way for an LLM to know information that sits behind company firewalls. For procurement purposes, LLMs do not know current price and delivery; they know some component data, but not enough to be consistently useful; and they do not know your IPNs or ERP information. It is possible, however, to provide the LLM with current or proprietary data in the prompt. This is called 'context'.
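To see how the three components fit together, here is a minimal sketch using the "messages" structure of OpenAI's chat API, where a 'system' entry carries the System Instruction and a 'user' entry carries the query. The instruction text, part number, and pricing data are invented examples, not real data.

```python
# Sketch of how an interface like ChatGPT assembles a prompt.
# The system/user "messages" structure is OpenAI's chat API format;
# the instruction wording and the pricing data below are made up.

system_instruction = "You are a helpful assistant for procurement professionals."

# Context: current or proprietary data the LLM could not know on its own.
context = "Current quote: part ABC-123, $4.20 each, 6-week lead time."

# Query: the question the user actually types.
query = "Is the lead time on ABC-123 acceptable for a Q3 build?"

# The three components combine into a single prompt payload.
prompt = [
    {"role": "system", "content": system_instruction},
    {"role": "user", "content": context + "\n\n" + query},
]
```

A developer would send this list to the model via the API; inside ChatGPT, the same assembly happens behind the scenes every time you hit enter.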

Once again let's review what we've covered:


ChatGPT combines its System Instructions with the User Query and, optionally, any Context data to create the Prompt, which it then sends to the LLM. The answer is called the 'Response'.

Response - The return from an LLM in response to a prompt. This may sound simple, but it is actually one of the most powerful aspects of LLMs, because the type and format of the response are highly customizable. Responses can be plain text, PDFs, images, tables, data visualizations (graphs, charts, etc.), JSON, XML, and more. Plain text can be formatted with bullets and lists, tables can be sorted and columns labeled any way imaginable, data visualizations can be customized in countless ways, and images are a whole subject of their own.
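Format control matters because a structured response can flow straight into other tools. The sketch below asks for JSON; the supplier list shown is an invented example of the kind of text a model might return, not actual model output.

```python
import json

# The query itself can dictate the response format. Asking for JSON
# means the answer can be loaded directly into spreadsheets or scripts.
query = (
    "List two suppliers for part ABC-123. "
    "Respond only with JSON: a list of objects with 'name' and 'region'."
)

# A response an LLM might return for the query above (invented example).
response_text = '[{"name": "Acme Co", "region": "US"}, {"name": "Globex", "region": "EU"}]'

# The JSON text parses into ordinary structured data.
suppliers = json.loads(response_text)
print(suppliers[0]["name"])  # → Acme Co
```

The same idea applies to tables, XML, or chart specifications: the query states the format, and the response arrives ready to use.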

To summarize, AI works by using an interface (like ChatGPT) to construct a prompt (system instructions + query + context) that is sent to an LLM (GPT-3.5 or GPT-4), which returns a response (in many possible formats).



