Experiment Data Depot

A repository for Learn-friendly formatting of multi-omics measurements

Recent advances in synthetic biology have allowed us to produce biological designs more efficiently than ever, but our ability to predict the outcome of those designs is still evolving.

The Experiment Data Depot (EDD) is an online tool, available to Agile BioFoundry collaborators, that enables machine learning and mechanistic models to learn from experimental data and guide the metabolic engineering process in a systematic fashion.

EDD acts as a repository of experimental data and metadata. It provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms.

Data enter EDD through automated input streams: each stream automatically parses the standard output of the instruments most commonly used in bioengineering. New input streams can be added to adapt EDD to local data production.
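
As a rough illustration, a new input stream is essentially a parser that converts an instrument's native output into a generic measurement table. The sketch below is a minimal, hypothetical Python example: the column names and record layout are assumptions for illustration, not EDD's actual import schema.

    import csv

    def parse_plate_reader(path):
        """Hypothetical input stream: convert a plate reader's CSV output
        into generic (line, measurement_type, time, value) records.
        Column names are assumptions, not EDD's actual schema."""
        records = []
        with open(path, newline="") as handle:
            for row in csv.DictReader(handle):
                records.append({
                    "line": row["Well"],             # sample identifier
                    "measurement_type": "OD600",     # optical density readout
                    "time": float(row["Time (h)"]),  # timepoint in hours
                    "value": float(row["OD600"]),    # measured value
                })
        return records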

EDD provides a quick visualization of imported data that serves as a quality check, showing at a glance whether the imported data fall within the expected range. Because all data are stored internally in a relational database, all data output is consistent, regardless of how the data were imported.
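
The underlying range check is simple to express; the sketch below uses pandas, with made-up bounds and column names, to flag suspicious measurements, much as the visualization lets a user spot them by eye.

    import pandas as pd

    # Hypothetical quality check: flag readings outside an expected range.
    # Bounds and column names are illustrative assumptions.
    df = pd.DataFrame({
        "time": [0.0, 4.0, 8.0, 12.0],
        "OD600": [0.05, 0.60, 1.40, 9.90],  # the last value looks suspicious
    })
    expected_min, expected_max = 0.0, 3.0
    df["out_of_range"] = ~df["OD600"].between(expected_min, expected_max)
    print(df[df["out_of_range"]])  # rows that fail the quality check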

Outputs can be provided as standardized files (Systems Biology Markup Language, SBML, or CSV) or through a representational state transfer (REST) Application Programming Interface (API). The SBML and CSV files can be used in conjunction with libraries such as COBRApy or scikit-learn to generate actionable results for metabolic engineering.
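
For example, an SBML file exported from EDD can be loaded into COBRApy for flux balance analysis, and a CSV export can feed a scikit-learn model. The sketch below assumes placeholder file names and column names; only the COBRApy and scikit-learn calls themselves are real library APIs.

    import cobra
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Load an SBML export and run flux balance analysis with COBRApy.
    # "edd_export.xml" is a placeholder file name.
    model = cobra.io.read_sbml_model("edd_export.xml")
    solution = model.optimize()
    print("Predicted growth rate:", solution.objective_value)

    # Fit a simple predictive model to a CSV export with scikit-learn.
    # Column names are illustrative assumptions.
    data = pd.read_csv("edd_export.csv")
    X = data[["protein_abundance_1", "protein_abundance_2"]]
    y = data["product_titer"]
    print("R^2:", LinearRegression().fit(X, y).score(X, y))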

In short, EDD provides a data repository for multi-omics data types that extracts data directly from instrument output, visualizes those data, and exports them in formats readily usable by modeling tools and libraries. The code is available to anyone as part of an open-source project.

Learn more about EDD and see use cases.