Cheat Sheet – The Most Common AI Terms You Should Know

Want to sound super smart at your next founder’s meeting or cocktail party?

Here are the most important terms to know in the field of AI, explained in plain English.

Algorithm – A finite set of rigorous instructions that perform calculations and data processing.

Artificial intelligence – A general concept of machines acting in a way that mimics human intelligence, such as human-like communication or decision making.

Artificial general intelligence – A hypothetical advanced intelligent agent that could accomplish any intellectual task that humans or animals can perform. Also known as “Strong AI.”

Bias – A disproportionate weight in favor of or against an idea or thing, usually in a way that is close-minded, prejudicial or unfair. Biases can be innate or learned, and represent a systematic error.

Chain-of-thought prompting – A type of prompting that improves the performance of an LLM by encouraging it to explain its reasoning step by step.
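
To make this concrete, here's an illustrative sketch (the question and wording are invented, not tied to any particular model) of how a chain-of-thought cue changes a prompt:

```python
# A made-up arithmetic question to illustrate the idea.
question = ("If a pen costs $2 and a notebook costs 3 times as much, "
            "what do both cost together?")

# A plain prompt asks for the answer directly.
plain_prompt = f"Q: {question}\nA:"

# A chain-of-thought prompt nudges the model to reason out loud
# before answering, which often helps on multi-step problems.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

print(cot_prompt)
```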

Chatbot – A software application or web interface that aims to mimic human conversation through text and voice interactions.

Emergence – Capabilities that arise unpredictably in AI systems and are not observable in their individual parts. For example, GPT-3 was trained on natural language but also taught itself how to program.

Few-shot prompting – A technique that places a handful of examples in the prompt to enable in-context learning and steer a model toward better performance.
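
For instance, a few-shot prompt for sentiment classification might be assembled like this (a minimal sketch; the reviews and labels are invented):

```python
# Two labeled examples placed in the prompt so the model can
# infer the task from context, with no retraining needed.
examples = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def build_few_shot_prompt(examples, query):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # Leave the final label blank for the model to fill in.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "I loved every minute.")
print(prompt)
```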

Fine-tuning – A process by which a pre-trained AI model is further trained on additional data or feedback to give better results.

Generative AI – A type of machine learning that creates content by learning patterns in training data and synthesizing new material with the same learning characteristics. Examples include ChatGPT and Midjourney.

Graphics processing unit (GPU) – A type of processor that is suited to powering AI hardware because it can perform more simultaneous computations than a CPU.

Intelligence – The capacity or ability to acquire, apprehend and apply knowledge in a behavioral context.

Intelligence amplification – The effective use of information technology in augmenting human intelligence. Originally expressed in the 1950s and 1960s by cybernetics and computer pioneers.

Large language models (LLMs) – Deep learning models that understand, summarize, generate and predict new content. Examples include ChatGPT and Claude.

Machine learning – An umbrella term for solving problems by helping machines ‘discover’ their ‘own’ algorithms, rather than being explicitly programmed by humans.

Multimodal – AI systems that can handle input and produce output in several mediums, including any combination of images, text, video, or sound, as opposed to just text.

Natural language processing (NLP) – The ability of a computer program to understand human language as it is spoken and written. It is an interdisciplinary subfield of computer science and linguistics.

Neural network – A computer system loosely modeled on the human brain. It can perform many tasks involving speech, vision and board-game strategy.

OpenAI – An American artificial intelligence company responsible for creating ChatGPT and DALL-E.

Prompt – An input, often in plain English, that generates a response from LLMs and other generative AI.

Prompt engineering – The process of developing and refining prompts to improve the output of a generative AI model.

Prompt chain – A series of prompts that guide an AI to produce higher-quality output.

Retrieval-augmented generation (RAG) – A two-stage system in which an LLM retrieves facts from an external knowledge base to produce more accurate, up-to-date answers. For example, Google’s Bard uses Google search results as the basis of its output.

Sentiment analysis – Also known as opinion mining, it analyzes text for tone and opinion with AI. Used in Fintech to assess buy signals.

Supervised learning – A type of machine learning where structured datasets are used to train and develop an algorithm.

Technological singularity – A point in the future in which advanced AI becomes more intelligent than humans and technological growth becomes uncontrollable. Popularized by Ray Kurzweil, and often referred to simply as “The Singularity.”

Token – A basic unit of text that an LLM uses to understand and generate language. It may be a word or part of a word. Paid LLM services typically charge by the token.
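
To see why a token isn’t always a whole word, here’s a toy illustration (real LLM tokenizers use subword schemes like byte-pair encoding, so the exact splits differ):

```python
import re

def toy_tokenize(text):
    # A toy tokenizer: splits text into words and punctuation marks.
    # Real LLM tokenizers also break rare words into subword pieces.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("Tokens aren't always whole words.")
print(tokens)
# ['Tokens', 'aren', "'", 't', 'always', 'whole', 'words', '.']
print(len(tokens))  # 8 tokens from a 5-word sentence
```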

Transformer – A neural network architecture, widely used in NLP, that can process context and long-range dependencies in language.

Turing test – Named after Alan Turing, often called the father of AI, this is a test in which a machine passes if its output is indistinguishable from that of a human.

Unsupervised learning – A form of training where algorithms are asked to make inferences from datasets that don’t contain labels. Those inferences are what help it learn.

Zero-shot learning – A machine learning technique in which algorithms observe samples that were not present in training and predict what class they belong to. For example, an AI trained to recognize cats could classify a lion as a big cat.


I hope you have found this cheat sheet useful. If you did, please leave me a comment and follow me for daily posts on copywriting and how to use AI.


Subscribe For Copywriting Tips

Make more money. Be more persuasive. Build your dream life.