Generative AI Glossary
  • Autoencoder: A type of neural network that learns a compressed representation of data, which can be used for generation and other tasks.
  • Backpropagation: The algorithm used to train neural networks; it propagates the error between predicted and actual outputs backward through the network to compute the gradient of the loss with respect to each weight.
  • BERT: Bidirectional Encoder Representations from Transformers - a language model developed by Google that is pre-trained on large amounts of text data.
  • BigGAN: A large-scale GAN developed by DeepMind that is trained on large amounts of data to generate high-quality images.
  • Capsule Network: A type of neural network architecture that uses capsules to represent entities in data, and is able to handle variations in viewpoint and pose.
  • DeepDream: A technique developed by Google that uses neural networks to generate hallucinogenic images from existing images.
  • Dropout: A technique used in neural networks to prevent overfitting by randomly deactivating a fraction of nodes during training (see the code sketch after this glossary).
  • Encoder: A component of a neural network that transforms input data into a compressed representation.
  • Fine-tuning: A process in which a pre-trained generative model is further trained on a specific task or data set.
  • Fully Connected Layer: A type of neural network layer in which every input is connected to every output.
  • Generative Design: A design process in which a generative model is used to generate and evaluate potential designs.
  • Gradient Descent: An optimization algorithm that minimizes the loss function by repeatedly adjusting a model's weights in the direction of the negative gradient (see the code sketch after this glossary).
  • Hyperparameter: A configuration value of a machine learning model that is set before training and shapes the model's structure or training behavior (e.g., learning rate or number of layers).
  • Inception Model: A deep neural network developed by Google that is used for image classification and other tasks.
  • Interpolation: A process in which a generative model generates new data lying between existing data points, typically by blending their latent representations (see the code sketch after this glossary).
  • Latent Space: The compressed representation of data learned by a generative model.
  • Leaky ReLU: A variant of the rectified linear unit (ReLU) activation function that, rather than zeroing negative inputs, scales them by a small slope, preserving a non-zero gradient (see the code sketch after this glossary).
  • Logistic Regression: A type of machine learning algorithm used for binary classification.
  • Loss Function: A function used to measure the difference between predicted and actual outputs in a machine learning model (e.g., mean squared error or cross-entropy).
  • Multi-Layer Perceptron: A type of neural network architecture consisting of multiple layers of fully connected nodes.
  • One-Shot Learning: A type of machine learning in which a model learns to recognize new objects from a single example (few-shot learning generalizes this to a handful of examples).
  • Outlier: A data point that is significantly different from other data points in a set.
  • PixelCNN: An autoregressive generative model that generates images one pixel at a time, conditioning each pixel on those already generated.
  • Recurrent Neural Network: A type of neural network architecture that is able to handle sequential data, such as text or time series data.
  • ResNet: A type of deep neural network architecture that uses residual connections to improve training and performance.
  • Reverse Image Search: A technique for finding images similar to a given input image, typically by comparing learned image features rather than matching text queries.
  • Self-Attention: A mechanism that lets each position in a sequence weigh every other position, allowing the network to focus on the most relevant parts of its input (see the code sketch after this glossary).
  • Seq2Seq: A type of neural network architecture used for sequence-to-sequence tasks, such as machine translation.
  • Style Transfer: A technique that uses a generative model to transfer the style of one image onto another.
  • Supervised Learning: A type of machine learning in which the model is trained on labeled data, with the goal of predicting labels for new data.
  • Tensor: A mathematical object used in machine learning to represent multi-dimensional arrays of data.
  • Transfer Learning: A process in which a pre-trained machine learning model is adapted to a new task or data set.
  • Transformer: A neural network architecture built around self-attention that processes entire sequences in parallel; the basis of modern language models such as BERT and GPT.
  • Unsupervised Learning: A type of machine learning in which the model is trained on unlabeled data, with the goal of discovering patterns and structures in the data.
  • Variational Inference: A method used to approximate complex probability distributions, often used in generative models.
  • Weight Decay: A technique used in neural networks to prevent overfitting by penalizing large weights.
  • Word Embedding: A technique for representing words as dense, low-dimensional vectors so that words with similar meanings have similar vectors; widely used in natural language processing (see the code sketch after this glossary).
  • Zero-Shot Learning: A type of machine learning in which a model recognizes new objects without having seen any examples of them, typically by leveraging auxiliary information such as textual descriptions.
  • Zombie AI: An informal term for a hypothetical scenario in which an AI system becomes uncontrollable or poses a threat to humanity.
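Code Sketches for Selected Terms
A few of the entries above are easiest to grasp in code. The sketches below are minimal, hypothetical Python illustrations, not production implementations. First, dropout: during training each node is zeroed with probability p, and the survivors are rescaled so the layer's expected output is unchanged.
```python
import random

def dropout(activations, p=0.5):
    # Inverted dropout: zero each unit with probability p during training,
    # and scale survivors by 1/(1-p) so the expected output is unchanged.
    # At inference time, dropout is simply turned off.
    return [0.0 if random.random() < p else a / (1 - p)
            for a in activations]

random.seed(0)
print(dropout([0.2, 0.8, 0.5, 0.9]))  # some units zeroed, others doubled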
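Gradient descent, using mean squared error as the loss function. This toy fits a single weight to made-up data; real networks have millions of weights, with gradients computed by backpropagation.
```python
# Fit a single weight w so that w * x approximates y, by minimizing
# mean squared error with plain gradient descent. (Hypothetical data.)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

w = 0.0    # initial weight
lr = 0.01  # learning rate, a hyperparameter

for _ in range(200):
    # d/dw of mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step in the direction of the negative gradient

print(round(w, 3))  # close to 2.0, the slope of the data
```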
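Interpolation in latent space, with made-up latent vectors standing in for codes produced by a trained generator.
```python
import numpy as np

# Hypothetical latent vectors for two samples from a trained generator.
z_a = np.array([0.2, -1.1, 0.5])
z_b = np.array([1.3, 0.4, -0.7])

# Walking from z_a to z_b in latent space; decoding each intermediate
# point with the generator would yield outputs that morph from A to B.
for t in np.linspace(0.0, 1.0, 5):
    z_t = (1 - t) * z_a + t * z_b
    print(f"t={t:.2f}  z={z_t}")
```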
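Leaky ReLU, shown here with the common (but configurable) slope of 0.01.
```python
def leaky_relu(x, alpha=0.01):
    # Plain ReLU outputs 0 for negative inputs, which kills their gradient;
    # Leaky ReLU scales them by a small slope alpha instead.
    return x if x > 0 else alpha * x

print(leaky_relu(3.0))   # 3.0   - positive inputs pass through unchanged
print(leaky_relu(-2.0))  # -0.02 - negative inputs are scaled, not zeroed
```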
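Self-attention in its scaled dot-product form, simplified by using the input itself as queries, keys, and values (a real Transformer learns separate projection matrices for each).
```python
import numpy as np

def self_attention(X):
    # Scaled dot-product attention with identity projections: queries,
    # keys, and values are all the input itself. Real Transformers learn
    # separate weight matrices for Q, K, and V.
    Q = K = V = X                          # shape: (seq_len, d)
    d = X.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # pairwise token similarities
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                     # each row: weighted mix of values

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 tokens, d = 2
print(self_attention(X))
```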
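Word embeddings compared by cosine similarity, using made-up three-dimensional vectors; learned embeddings such as word2vec or GloVe typically have hundreds of dimensions.
```python
import numpy as np

# Made-up 3-dimensional embeddings for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, near 0 means unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words
```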