To train Generative AI models, practitioners conventionally show the model many positive examples of what they want it to generate. However, Generative AI models can also be trained on negative examples: examples of what the model should not generate. Negative examples are instrumental in teaching generative models constraints, which are essential for many engineering problems, particularly those with safety-critical requirements. Much as humans learn best from a mixture of positive and negative feedback, generative models can train more efficiently and effectively when negative data supplements positive data. As a bonus, negative data is often significantly cheaper to generate than positive data, despite frequently being more information-rich.
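As a minimal sketch of this idea (the loss form, the assumption that `model(x)` returns per-example log-probabilities, and the `neg_weight` parameter are illustrative, not drawn from any specific method here), one way to learn from both data types is an unlikelihood-style objective that raises the likelihood of positive examples while pushing probability mass away from negative ones:

```python
import torch

def pos_neg_loss(model, pos_batch, neg_batch, neg_weight=0.5):
    """Unlikelihood-style objective (sketch): maximize log-likelihood of
    positive examples while minimizing the likelihood of negative examples.

    Assumes `model(x)` returns per-example log-probabilities (hypothetical
    interface for illustration)."""
    # Standard maximum-likelihood term on positive (desired) examples.
    pos_loss = -model(pos_batch).mean()
    # Unlikelihood term: penalize probability assigned to negative examples.
    # log(1 - p) is computed stably from log p via log1p(-exp(log p)).
    neg_log_p = model(neg_batch)
    neg_loss = -torch.log1p(-neg_log_p.exp().clamp(max=1 - 1e-6)).mean()
    return pos_loss + neg_weight * neg_loss
```

The weighting term lets practitioners trade off how strongly constraint violations (negative data) are penalized relative to fitting the desired distribution (positive data).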
Optimization is a tried-and-true engineering problem-solving tool, whereas Generative AI has only recently been applied to engineering problems. Optimization excels at precisely finding high-quality solutions that satisfy constraints; Generative AI models excel at inferring problem requirements, bridging solution modalities, handling mixed data modalities, and rapidly generating numerous candidate solutions. In many ways, optimization and Generative AI are complementary tools, and combining them can lead to powerful problem-solving capabilities.
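One hedged illustration of this complementarity (the `sample_candidates` function stands in for any trained generative model, and the objective and constraint interfaces follow scipy.optimize conventions) is a generate-then-refine loop: the generative model rapidly proposes diverse starting designs, and a classical optimizer polishes each into a precise, constraint-satisfying solution:

```python
from scipy.optimize import minimize

def generate_then_optimize(sample_candidates, objective, constraints,
                           n_candidates=32):
    """Sketch of a generate-then-refine loop combining a generative model
    with classical constrained optimization."""
    # Fast, approximate design proposals from the generative model.
    candidates = sample_candidates(n_candidates)
    # Polish each proposal with a gradient-based constrained optimizer.
    refined = [
        minimize(objective, x0, method="SLSQP", constraints=constraints)
        for x0 in candidates
    ]
    feasible = [r for r in refined if r.success]
    if not feasible:
        raise RuntimeError("no candidate converged to a feasible solution")
    return min(feasible, key=lambda r: r.fun)  # best polished design
```

Here the generative model supplies breadth (many plausible starting points) while the optimizer supplies precision (exact constraint satisfaction), matching the division of strengths described above.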
Foundation models for natural language and image synthesis have achieved such widespread success that general-purpose models are now used extensively for domain-specific tasks. Many engineering domains, which are dominated by tabular data, lack such general-purpose models and are instead served by individual machine learning models trained for single tasks. Developing general-purpose models that can be applied, without domain-specific training, to a wide variety of engineering tasks would significantly accelerate predictive work in engineering. Since engineering data is scarce, synthetic data is the key to building powerful general-purpose models for engineering.
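As a simple sketch of this workflow (the Gaussian mixture below is a stand-in chosen for brevity; a stronger tabular generator, such as a diffusion or autoregressive model, would typically replace it), scarce real rows can seed a density model whose samples augment a pretraining corpus:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def synthesize_tabular(real_rows: np.ndarray, n_synthetic: int,
                       n_components: int = 8) -> np.ndarray:
    """Sketch: fit a simple density model to scarce real tabular data and
    sample synthetic rows to enlarge a pretraining corpus."""
    gm = GaussianMixture(n_components=n_components).fit(real_rows)
    synthetic_rows, _ = gm.sample(n_synthetic)  # (samples, component labels)
    return synthetic_rows
```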