Adversarial Machine Learning
A technique in machine learning where models are trained to identify and mitigate adversarial attacks, which are inputs deliberately designed to confuse the model and provoke incorrect outputs.
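To make the idea concrete, here is a minimal, purely illustrative sketch of an adversarial input: a tiny perturbation (in the style of the fast gradient sign method) flips the prediction of a toy linear classifier. All weights and numbers are made up for the example.

```python
# Toy illustration of an adversarial attack (FGSM-style) on a linear classifier.
# All values are illustrative, not from any specific library or dataset.

def predict(w, b, x):
    """Linear classifier: returns +1 or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def adversarial_example(w, b, x, eps):
    """Nudge x against the gradient sign to push the score toward the wrong class."""
    y = predict(w, b, x)
    # For a linear model, the gradient of the score w.r.t. x is just w.
    return [xi - eps * y * (1 if wi > 0 else -1 if wi < 0 else 0)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x = [0.5, 0.2]                         # classified +1 (score = 0.8)
x_adv = adversarial_example(w, b, x, eps=0.5)
print(predict(w, b, x), predict(w, b, x_adv))
```

A small `eps` leaves the input visually almost unchanged for images, yet can flip the label, which is exactly what adversarial training tries to defend against.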
AI Assistant
Commonly referred to as a “Chatbot”, this is a distinct kind of AI interaction in which, typically through chat or voice, you interact with an entity that is focused on a particular topic or is shaped by a unique set of instructions.
AI Hallucination
When an AI model generates imagery or text that does not accurately reflect factual reality. Usually driven by the AI trying to create coherent outputs from unfamiliar or inconsistent inputs.
The human who interacts with the AI assistant, giving instructions, asking questions, or holding a conversation with it. Although prompt engineering is an obvious part of the skillset, this role involves more than just writing prompt instructions.
AutoML (Automated Machine Learning)
The process of automating the end-to-end process of applying machine learning to real-world problems. It includes automated data preprocessing, feature engineering, model selection, and hyperparameter tuning.
Backpropagation
A fundamental algorithm used in training neural networks, particularly deep learning models. It adjusts the weights of neural connections based on the error rate obtained in the previous epoch (iteration).
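The mechanics can be sketched in a few lines: a single sigmoid neuron, a forward pass to compute the error, then the chain rule applied backwards to update the weight and bias. The numbers are illustrative, not from any real dataset.

```python
# Minimal backpropagation sketch: one sigmoid neuron trained on one example.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.0          # initial weight and bias (arbitrary)
x, target = 1.5, 0.0     # one made-up training example
lr = 1.0                 # learning rate

losses = []
for _ in range(50):
    # Forward pass
    z = w * x + b
    out = sigmoid(z)
    losses.append((out - target) ** 2)
    # Backward pass: chain rule from the loss back to the parameters
    d_out = 2 * (out - target)          # dL/d_out
    d_z = d_out * out * (1 - out)       # sigmoid derivative applied
    w -= lr * d_z * x                   # dL/dw
    b -= lr * d_z                       # dL/db

print(f"loss fell from {losses[0]:.3f} to {losses[-1]:.3f}")
```

Real frameworks apply exactly this chain-rule bookkeeping automatically across millions of weights.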
Bias in AI
Refers to biases in machine learning models that can result from unrepresentative or incomplete training data, flawed model design, or unintended algorithmic biases. These biases can lead to unfair or discriminatory outcomes.
Capsule Networks
A type of artificial neural network designed to better encode hierarchical relationships and spatial orientation, proposed as an improvement over convolutional networks for certain tasks.
Chatbot
We prefer to call these “AI Assistants”, but a chatbot is typically an AI resource that you interact with through a familiar chat interface, much like texting or DMing with a friend. Many are tuned with a unique set of instructions to make them more accurate and focused.
A conversational AI system created by Anthropic that can answer questions, generate content (articles, code, etc.), summarize information, perform translations and more in response to natural language prompts. Understands context and learns interactively.
Claude
An AI assistant created by Anthropic focused on helpfulness, honesty and harmlessness. Can provide a range of information, writing, content generation and task support through natural language conversations. Has a friendly, knowledgeable persona.
A specific way of interacting with AI, most commonly found in AI Assistants (aka Chatbots), that mimics how you interact with friends and colleagues through chat.
DALL-E
A system created by OpenAI that can generate realistic images and art from a text description through natural language prompts. Allows both creative open-ended generation as well as precise image editing.
Deep Learning
A subset of machine learning involving neural networks with many layers (deep networks), which allows the learning of complex patterns in large amounts of data. Widely used for image and speech recognition, natural language processing, and more.
Diffusion Models
Generative AI models that create data (images, audio, text, etc.) that mimics an existing dataset. Useful for tasks like image generation, super-resolution, denoising, and inpainting. Example models include DALL-E 2 and Stable Diffusion.
Ensemble Learning
A machine learning technique that combines several base models to produce one optimal predictive model. Techniques include bagging, boosting, and stacking.
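One of those techniques, bagging (bootstrap aggregating), can be sketched in pure Python: several simple threshold classifiers ("stumps") are each trained on a bootstrap resample of noisy data, then combined by majority vote. The data and noise are made up for illustration.

```python
# Bagging sketch: train simple "stump" classifiers on bootstrap samples,
# then combine them by majority vote. Data is illustrative.
import random

random.seed(0)
# 1-D points; true label is 1 if x > 5, with two deliberately noisy labels
data = [(x, 1 if x > 5 else 0) for x in range(11)]
data[2] = (2, 1)   # noise
data[8] = (8, 0)   # noise

def train_stump(sample):
    """Pick the threshold that best separates the (noisy) bootstrap sample."""
    best_t, best_acc = 0, -1
    for t in range(11):
        acc = sum((1 if x > t else 0) == y for x, y in sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

stumps = []
for _ in range(25):
    sample = [random.choice(data) for _ in data]   # bootstrap resample
    stumps.append(train_stump(sample))

def ensemble_predict(x):
    votes = sum(1 if x > t else 0 for t in stumps)
    return 1 if votes * 2 >= len(stumps) else 0

print([ensemble_predict(x) for x in (1, 4, 6, 9)])
```

Because each stump sees a slightly different sample, their individual mistakes tend to cancel out in the vote, which is the core intuition behind ensembles.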
Explainable AI (XAI)
Refers to methods and techniques in the field of AI that make the results of the solution understandable by humans. It contrasts with the concept of the “black box” in machine learning where even its designers cannot explain why an AI arrived at a specific decision.
Federated Learning
A distributed machine learning approach that trains AI models on decentralized data from many sources while keeping the data localized: devices or entities collaborate without transferring bulk data to one central location.
Fine-tuning
The technique of further training an already trained machine learning model with a smaller dataset that is highly relevant to a specific task. This tailors the model for enhanced performance on specialized use cases.
Generative AI
Technology that creates new data outputs – such as text, code, media or other content – after training on large volumes of data. The outputs mimic patterns in training data without explicitly being programmed for the task. Enables creative applications.
Generative Text Models
AI systems focused on generating realistic, human-like text by learning linguistic patterns from vast datasets of articles, books, websites and other sources. Prominent examples include GPT-3 and Claude. Enable applications like automated content writing.
Generative Visual Media Models
AI models trained on large datasets of images, videos, or 3D scenes to generate realistic original visual media from text prompts. Can produce high-fidelity results tailored to specified descriptive inputs. Used for video/image creation and editing tools.
GPT (Generative Pre-trained Transformer)
A series of language processing AI models developed by OpenAI, notable for their ability to generate coherent and contextually relevant text based on a given prompt. GPT models are a subset of transformers, a type of neural network architecture.
Heuristic
A technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution.
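A classic example is the nearest-neighbour routing heuristic: instead of exhaustively checking every possible tour (factorial time), greedily hop to the closest unvisited point. The coordinates below are made up.

```python
# A classic heuristic: nearest-neighbour routing. Runs in O(n^2) instead of
# the factorial time an exact search needs, at the cost of a possibly
# suboptimal (but usually reasonable) tour. Coordinates are illustrative.

def nearest_neighbour_route(points, start=0):
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        here = points[route[-1]]
        # Greedy choice: always hop to the closest unvisited point.
        nxt = min(unvisited,
                  key=lambda i: (points[i][0] - here[0]) ** 2 +
                                (points[i][1] - here[1]) ** 2)
        route.append(nxt)
        unvisited.remove(nxt)
    return route

cities = [(0, 0), (10, 0), (1, 1), (9, 1), (5, 5)]
print(nearest_neighbour_route(cities))
```

The greedy route is not guaranteed to be shortest, which is exactly the trade-off a heuristic accepts.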
Image Outpainting
Using AI models to predict and generate visual information beyond the edges of an existing image, expanding its boundaries. Allows filling new blank areas with realistic-looking content inferred by the algorithm.
Knowledge Graph
A knowledge base that uses a graph-structured data model or topology to integrate data. Knowledge graphs are often used to store interlinked descriptions of entities – objects, events, situations, or abstract concepts.
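A minimal sketch of the idea: store facts as (subject, predicate, object) triples and answer simple pattern queries over them. The entities and relations are illustrative.

```python
# A toy knowledge graph stored as (subject, predicate, object) triples,
# with a tiny pattern-matching query. Entities are illustrative.

triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Eiffel Tower", "located_in", "Paris"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

print(query(p="located_in"))   # every located_in edge
print(query(s="Paris"))        # all facts about Paris
```

Production systems (e.g. RDF triple stores) use the same subject–predicate–object shape, just with indexing and a richer query language.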
Latent Space
In machine learning, especially in the context of generative models, the representation of compressed data in a lower-dimensional space, often used to capture hidden factors or features.
LLM (Large Language Model)
A broad class of natural language processing models that have been trained on extensive text datasets, enabling them to perform a range of language generation tasks. Prominent examples include GPT-3 and Claude.
Machine Learning (ML)
The study and construction of algorithms that can learn and improve at tasks independently by analyzing data and feedback without following explicitly programmed instructions. Underlies modern AI breakthroughs.
Midjourney
A platform that allows users to create original images, illustrations and animations through text prompts using a cutting-edge text-to-image generation model trained via machine learning on millions of images.
Model Interpretability
The extent to which a human can understand the cause of a decision made by a machine learning model. It’s a crucial aspect in many domains for trust and ethical decision-making.
Multimodal AI
AI models that can process multiple data types – text, images, speech, video – as both inputs and outputs. Allow richer contextual understanding and generation ability. A multimodal chatbot, for example, could interpret images and spoken questions then respond with generated imagery and text.
Natural Language Processing (NLP)
A field of artificial intelligence focused on enabling computers to understand, interpret and generate human languages, including the analysis of text for things like sentiment, intent and entities. Critical for applications like search, chatbots and more.
Neural Network
A computing system modeled on the neuronal structure of the human brain that uses interconnected nodes to recognize underlying relationships in data without task-specific programming. Foundational to modern AI, especially in fields like computer vision and NLP.
No-code AI Tools
Products, platforms and tools that allow building AI systems like assistants, chatbots, generative applications and more through graphical interfaces instead of traditional coding. Democratizes access to applied AI.
Overfitting
When a machine learning model performs very well on the data it has been trained on but is unable to generalize to new, unseen data. This reduces its real-world utility, so it should be avoided during model development.
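A small sketch of the phenomenon: a 1-nearest-neighbour model memorises its noisy training labels perfectly (100% training accuracy), yet generalises worse on unseen points than a simpler rule. The data and noise are made up.

```python
# Overfitting sketch: 1-nearest-neighbour memorises noisy training labels,
# then generalises worse than a simple threshold rule. Data is illustrative.

train = [(x, 1 if x > 5 else 0) for x in range(11)]
train[3] = (3, 1)   # label noise the model will memorise
test = [(x + 0.5, 1 if x + 0.5 > 5 else 0) for x in range(10)]

def knn_predict(x):
    # Predict the label of the single closest training point.
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def simple_predict(x):
    return 1 if x > 5 else 0

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print("train acc (1-NN):  ", accuracy(knn_predict, train))
print("test  acc (1-NN):  ", accuracy(knn_predict, test))
print("test  acc (simple):", accuracy(simple_predict, test))
```

The perfect training score is the warning sign: the model has fit the noise, not the underlying pattern.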
Prompt Engineering
The crafting of effective natural language prompts to provide guidance to generative AI systems in order to produce a desired output. Considered an art form requiring intuition and skill to perfect.
Prompt Injection
When someone intentionally inputs a prompt that includes specific instructions or code-like elements to manipulate or influence the behavior of an AI model. This can be done to achieve certain responses, bypass restrictions, or alter the way the AI interprets and responds to the prompt. It’s a sophisticated aspect of interacting with AI systems, often requiring a deep understanding of how the AI processes and generates its responses.
Quantum Machine Learning
An emerging field that combines quantum computing and machine learning, aiming to harness quantum effects to improve machine learning algorithms.
Recurrent Neural Network (RNN)
A type of neural network where connections between nodes form a directed graph along a temporal sequence, allowing it to exhibit temporal dynamic behavior. Used extensively in language modeling and other sequence tasks.
Reinforcement Learning
A subset of machine learning where models learn to make a series of decisions that maximize rewards through trial-and-error interactions with their environment. Enables systems to excel at complex tasks.
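The trial-and-error loop can be sketched with a multi-armed bandit: an epsilon-greedy agent learns, purely from rewards, which of three slot machine "arms" pays best. The payout probabilities are made up.

```python
# Reinforcement-learning sketch: an epsilon-greedy agent learns by trial and
# error which slot machine ("arm") pays best. Payout odds are illustrative.
import random

random.seed(1)
true_win_prob = [0.2, 0.8, 0.4]        # arm 1 is secretly the best
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]               # running estimate of each arm's reward

for step in range(2000):
    # Explore 10% of the time; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = values.index(max(values))
    reward = 1 if random.random() < true_win_prob[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

print("estimated values:", [round(v, 2) for v in values])
print("best arm:", values.index(max(values)))
```

No labels are ever provided; the reward signal alone shapes the agent's behaviour, which is the defining trait of reinforcement learning.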
Robotic Process Automation (RPA)
Automating repetitive, rules-based digital tasks by emulating user interactions with UI-driven enterprise software. Improves efficiency by managing tedious functions autonomously. Contrasts with full AI automation.
Semantic Analysis
In NLP, the process of understanding the meaning and interpretation of words and their combinations in a contextually appropriate manner.
Stable Diffusion
An AI system created by Stability AI that can generate realistic images and artwork from text descriptions. Built using a latent text-to-image diffusion model for high accuracy and variation with few artifacts in outputs.
Supervised Learning
Training machine learning models by providing labeled example input and output data so the algorithm can learn relationships between variables. Classification and prediction tasks often leverage this approach by supplying correct answers.
Synthetic Data
Artificially generated training data for machine learning models that mimics properties of real-world data. Used to augment limited real data samples to improve model robustness through expanded diversity.
Synthetic Media
Artificially generated media such as images or videos; deepfake videos are one example.
Tensor Processing Units (TPUs)
Specialized ASICs (application-specific integrated circuits) optimized for machine learning workloads. Can run many models far more efficiently than general-purpose CPU or GPU hardware. Utilized broadly by companies like Google for their internal ML pipelines.
Text-to-Image Generation
A field of machine learning focused on generating images from textual descriptions. Models are trained on vast datasets mapping related text and images to establish meaningful connections between concepts. Enables applications like DALL-E, Midjourney and more.
Tokenization
In natural language processing, the breaking of text down into individual words, phrases, sentences or paragraphs that algorithms can analyze for patterns. Provides structured units for ML models to interpret.
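A minimal sketch of word-level tokenization: split text with a regular expression, then map each distinct token to an integer id, since models consume numbers rather than raw text. The regex and vocabulary scheme are illustrative (real LLMs typically use subword schemes like BPE).

```python
# Minimal word-level tokenization with a regular expression, plus a toy
# vocabulary mapping tokens to integer ids (what models actually consume).
import re

def tokenize(text):
    # Match runs of word characters, or any single non-space punctuation mark
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("AI models don't read text; they read tokens.")
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
ids = [vocab[t] for t in tokens]
print(tokens)
print(ids)
```

Note how the repeated word "read" maps to the same id both times it appears; that shared representation is what lets a model learn patterns across occurrences.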
Transfer Learning
A technique where a model pre-trained on one machine learning task gets reused as a starting point for a new task. Allows leveraging generalized knowledge representations to speed up specialized optimization.
Unsupervised Learning
AI algorithms that analyze data without pre-existing labels or categorizations to uncover hidden insights and patterns. Clustering for segmentation and dimensionality reduction for feature extraction rely on unsupervised approaches.
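Clustering, the canonical unsupervised task, can be sketched with a tiny k-means loop: the algorithm discovers two groups in unlabeled 1-D points without ever being told any labels. The data and starting centroids are made up.

```python
# Unsupervised-learning sketch: k-means clustering on unlabeled 1-D points.
# The two groups are discovered from the data alone; values are illustrative.

points = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centroids = [0.0, 5.0]                 # arbitrary starting guesses

for _ in range(10):
    # Assignment step: attach each point to its nearest centroid
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update step: move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]

print([round(c, 2) for c in centroids])
```

The centroids settle near 1.0 and 9.0, the two natural group centres, even though no labels were ever supplied.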
A series of increasingly powerful generative text-to-image models developed by Anthropic designed to generate realistic images matching text prompt descriptions. Currently on VISION 5.
Whispering
Similar to a dog or horse whisperer, whispering is a form of prompt engineering that is concise, thoughtful, and informed. By mastering the art of whispering you are likely not only to get the results you’re looking for, but also to gain an opportunity to see things from different perspectives. Whispering also uses an empathetic tone to encourage a collaborative and positive experience with AI as it evolves.
Zero-shot Learning
A machine learning technique where a model correctly makes predictions for tasks it has not explicitly seen during training, relying on its understanding and generalization capabilities.