The Agile BioFoundry uses artificial intelligence to guide our metabolic engineering efforts, make predictions, and optimize bioproduct production.
Machine learning for titer, rate, and yield improvements
High titers, rates, and yields (TRY) are critical for product commercialization. Yet even with a deep understanding of a pathway, researchers often run out of engineering targets before reaching the desired TRY levels.
A data-intensive, statistics-based approach can guide bioengineering to the desired TRY levels. We use machine learning approaches that range from deep learning to methods we developed specifically for synthetic biology, and we have used these techniques to guide metabolic engineering effectively.
These machine learning techniques are coupled with our ability to produce our own data in an automated fashion. Machine learning provides a systematic method for leveraging data stored in the Experiment Data Depot (EDD) to guide metabolic engineering without the need for a deep mechanistic understanding.
This data-intensive approach requires large amounts of high-quality data: typically, 50-100 conditions (strains) for successful predictions, plus 3-4 consecutive rounds in which predictions are tested and the new data are used to improve predictions for the next round.
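As an illustration only, the iterative predict-test-retrain loop described above can be sketched with a toy surrogate model. Everything here is hypothetical: the strain encoding (promoter strengths for three genes), the measure_titer stand-in for a wet-lab measurement, and the least-squares surrogate; the actual workflows use richer machine learning models and real EDD data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a wet-lab titer measurement: each "strain" is a
# vector of promoter strengths for three pathway genes (unknown to the model).
def measure_titer(designs):
    return designs @ np.array([2.0, 1.5, 0.5]) + rng.normal(0, 0.05, len(designs))

# Round 1: 60 randomly designed strains (within the 50-100 range noted above)
X = rng.uniform(0, 1, size=(60, 3))
y = measure_titer(X)

for _ in range(3):  # 3 follow-up rounds: fit, predict, test, fold data back in
    # fit a simple least-squares surrogate (a stand-in for richer ML models)
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    # rank many candidate designs by predicted titer; "build" only the top 10
    cand = rng.uniform(0, 1, size=(500, 3))
    pred = np.c_[cand, np.ones(len(cand))] @ coef
    top = cand[np.argsort(pred)[-10:]]
    X = np.vstack([X, top])
    y = np.concatenate([y, measure_titer(top)])

print(f"best titer, random round: {y[:60].max():.2f}  after 3 rounds: {y.max():.2f}")
```

The point of the sketch is the loop structure, not the model: each round's measurements enlarge the training set, so the surrogate's proposals improve round over round.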
Machine learning for kinetic learning
A significant challenge in engineering biological systems is the inability to accurately predict biological behavior after modifying the corresponding genotype. Kinetic models are traditionally used to predict pathway dynamics in bioengineered systems, but they take significant time to develop and rely heavily on domain expertise.
Our kinetic learning method combines machine learning with abundant multiomics data (proteomics and metabolomics) to predict pathway dynamics in an automated fashion. It outperforms kinetic models and produces predictions that can productively guide bioengineering efforts.
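A minimal sketch of the idea, not the actual implementation: learn the time derivatives of metabolite levels from trajectory data, then integrate the learned model forward to predict an unseen strain's dynamics. The two-step pathway, the hand-chosen rate features, and the enzyme abundances (standing in for proteomics measurements) are all invented for illustration.

```python
import numpy as np

# Hypothetical two-step pathway S -> I -> P whose rates scale with the
# abundances E1, E2 of the two enzymes (stand-ins for proteomics data).
def simulate(E1, E2, dt=0.05, steps=200):
    m = np.array([1.0, 0.0, 0.0])  # metabolite levels [S, I, P]
    traj = [m.copy()]
    for _ in range(steps):
        v1, v2 = E1 * m[0], E2 * m[1]
        m = m + dt * np.array([-v1, v1 - v2, v2])
        traj.append(m.copy())
    return np.array(traj)

def features(E1, E2, m):
    # candidate rate terms the model may combine; the real method lets
    # machine learning discover such relationships from multiomics data
    return np.array([E1 * m[0], E2 * m[1], 1.0])

# Training data: finite-difference derivatives from three "training strains"
dt = 0.05
X, Y = [], []
for E1, E2 in [(0.8, 0.5), (1.2, 0.9), (0.6, 1.1)]:
    traj = simulate(E1, E2, dt)
    for t in range(len(traj) - 1):
        X.append(features(E1, E2, traj[t]))
        Y.append((traj[t + 1] - traj[t]) / dt)
X, Y = np.array(X), np.array(Y)

# Least-squares fit of d[metabolite]/dt for each of S, I, P
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict the dynamics of an unseen strain by integrating the learned model
E1, E2 = 1.0, 0.7
m = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    m = m + dt * (features(E1, E2, m) @ W)

true_final = simulate(E1, E2, dt)[-1]
```

Because the derivative model is fit across strains with different enzyme levels, it captures how proteomics changes reshape the dynamics, which is exactly what a bioengineer modifying expression levels needs to predict.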
Deep learning for regulatory inferences
Microbes constantly collect information from their surroundings and process it to adapt their behavior and optimize survival.
We leverage these regulatory mechanisms in bioengineering, allowing us to optimize bioproduct production, mitigate stress responses that might lower yields, and more. By connecting microbes’ information processing and environmental sensing capabilities to the predictive computational design of regulatory networks, we can make microbes fully programmable biosystems capable of bioprocesses that are difficult to achieve through other means.
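To make the learning side of this concrete, here is a toy example that assumes nothing about the actual models used: a tiny NumPy neural network learns an AND-like regulatory logic in which a gene is expressed only when two inducers are both present. Real regulatory inference works from omics data with far richer architectures; this only illustrates learning a regulatory input-output map from examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the gene is ON only when both inducers are present (AND logic)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [0.0], [0.0], [1.0]])

# One hidden layer, trained by full-batch gradient descent on cross-entropy
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)
lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                     # gradient of cross-entropy + sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)  # backprop into the hidden layer
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

Once such a map is learned, it can be run in reverse for design: search the input space for conditions (or regulatory-network edits) predicted to produce a desired expression output.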