IJO1366-Tyr-Phe-Auxotrophs

This notebook contains the simulation for the IJO1366-Tyr-Phe-Auxotrophs environment, in which two E. coli auxotroph agents learn to interact from scratch. Because the genome-scale model iJO1366 is very large, this simulation needs a fast LP solver; here we use Gurobi.
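Below, Gurobi is attached to each model individually via model.solver = "gurobi". Alternatively (a minimal sketch, assuming gurobipy is installed and licensed), it can be made cobrapy's global default so every model loaded afterwards uses it:

import cobra

# Sketch: set Gurobi as the global default solver for cobrapy; the cells below
# instead assign model.solver = "gurobi" on each model explicitly.
cobra.Configuration().solver = "gurobi"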

NOTE: For an unknown reason, this notebook stalls at some point during training in this environment, so we run only 20 batches here to demonstrate the steps. One approach to running the full simulation with thousands of batches is to turn the code below into a .py file and run it from the terminal.
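For example, a standalone script version would wrap the cells below in a function and guard the entry point (main() here is a hypothetical wrapper; the guard matters if the toolkit spawns worker processes for parallel episodes):

# run_auxotrophs.py -- hypothetical standalone version of this notebook
def main():
    ...  # paste the setup and sim.run() cells below into this function

if __name__ == "__main__":
    main()  # entry-point guard, required for multiprocessing on some platforms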

In [ ]:
from spamdfba import toolkit as tk
from spamdfba import toymodels as tm
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import pickle
import os
import warnings
import rich
import json
import cobra
In [ ]:
NUM_CORES = 8
warnings.filterwarnings("ignore")
In [ ]:
model_base = cobra.io.read_sbml_model("iJO1366.xml")
medium = model_base.medium.copy()
test_model = model_base.copy()

knockouts_gene_names = [
    "tyrA",
    "pheA",
]

exchange_reactions = {
    "tyrA": "EX_tyr__L_e",
    "pheA": "EX_phe__L_e",
}

exchange_mets = {}
for ko_gene, rxn_id in exchange_reactions.items():
    # Map each knocked-out gene to the metabolite of its exchange reaction.
    exchange_mets[ko_gene] = list(
        test_model.reactions.get_by_id(rxn_id).metabolites
    )[0].id

gene_ids = {}
for ko_gene in knockouts_gene_names:
    for gene in test_model.genes:
        if gene.name == ko_gene:
            gene_ids[ko_gene] = gene.id
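
# Illustrative sanity check (not in the original notebook): both knockouts
# should have been resolved to iJO1366 gene ids before we proceed.
assert set(gene_ids) == set(knockouts_gene_names), "unresolved knockout gene"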

# Initial conditions: 3 mmol/L of every medium component (the metabolite id is
# the exchange reaction id without its "EX_" prefix), plus extra glucose and a
# small inoculum of each agent. Note that str.lstrip("EX_") strips a *set of
# characters*, not a prefix, so the prefix is removed explicitly instead.
ic = {key.replace("EX_", "", 1): 3 for key in model_base.medium}

ic["glc__D_e"] = 500
ic["agent1"] = 0.1
ic["agent2"] = 0.1
for ko in [("tyrA","pheA")]:
    model1 = model_base.copy()
    model2 = model_base.copy()
    model1.remove_reactions([model1.reactions.get_by_id('PPND')])   ## tyrA KO (prephenate dehydrogenase): tyrosine auxotroph
    model2.remove_reactions([model2.reactions.get_by_id('PPNDH')])  ## pheA KO (prephenate dehydratase): phenylalanine auxotroph
    model1.exchange_reactions = tuple([model1.reactions.index(i) for i in model1.exchanges])
    model2.exchange_reactions = tuple([model2.reactions.index(i) for i in model2.exchanges])
    model1.biomass_ind=model1.reactions.index("BIOMASS_Ec_iJO1366_core_53p95M")
    model2.biomass_ind=model2.reactions.index("BIOMASS_Ec_iJO1366_core_53p95M")
    model1.solver = "gurobi"
    model2.solver = "gurobi"
    if model1.optimize().objective_value != 0 or model2.optimize().objective_value != 0:
        rich.print(f"[yellow]Skipping {ko}: at least one knockout can still grow, so it is not a true auxotroph")
        continue
    else:
        rich.print(f"[green]Neither KO can grow on its own: Running {ko}")
    ko_name = ko[0] + "_" + ko[1]
    agent1 = tk.Agent(
        "agent1",
        model=model1,
        actor_network=tk.NN,
        critic_network=tk.NN,
        clip=0.1,
        lr_actor=0.0001,
        lr_critic=0.001,
        actor_var=0.05,
        grad_updates=10,
        optimizer_actor=torch.optim.Adam,
        optimizer_critic=torch.optim.Adam,
        observables=[
            "agent1",
            "agent2",
            "glc__D_e",
            *[i.replace("EX_", "") for i in exchange_reactions.values()]
        ],
        actions=list(exchange_reactions.values()),
        gamma=1,
    )
    agent2 = tk.Agent(
        "agent2",
        model=model2,
        actor_network=tk.NN,
        critic_network=tk.NN,
        clip=0.1,
        lr_actor=0.0001,
        lr_critic=0.001,
        grad_updates=10,
        actor_var=0.05,
        optimizer_actor=torch.optim.Adam,
        optimizer_critic=torch.optim.Adam,
        observables=[
            "agent1",
            "agent2",
            "glc__D_e",
            *[i.replace("EX_", "") for i in exchange_reactions.values()]
        ],
        actions=list(exchange_reactions.values()),
        gamma=1,
    )
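
    # Both agents observe the two biomass levels, glucose, and the Tyr/Phe
    # exchange metabolites; their actions set those two exchange fluxes.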
    constants=list(ic.keys())
    constants.remove("agent1")
    constants.remove("agent2")
    constants.remove("glc__D_e")

    env = tk.Environment(
        "IJO1366-Tyr-Phe-Auxotrophs" ,
        agents=[agent1, agent2],
        dilution_rate=0,
        extracellular_reactions=[],
        initial_condition=ic,
        inlet_conditions={},
        dt=0.2,
        episode_length=250,
        number_of_batches=20,  ## increase to thousands for a full run (see NOTE above)
        episodes_per_batch=4,
        constant=constants,
    )
Read LP format model from file /var/folders/jk/fr50qn391k794svhntbw333c0000gn/T/tmpguadhoys.lp
Reading time = 0.01 seconds
: 1805 rows, 5166 columns, 20366 nonzeros
Read LP format model from file /var/folders/jk/fr50qn391k794svhntbw333c0000gn/T/tmp2aexrthh.lp
Reading time = 0.01 seconds
: 1805 rows, 5166 columns, 20366 nonzeros
Read LP format model from file /var/folders/jk/fr50qn391k794svhntbw333c0000gn/T/tmp35fdp6yb.lp
Reading time = 0.01 seconds
: 1805 rows, 5166 columns, 20366 nonzeros
Neither KO can grow on its own: Running ('tyrA', 'pheA')
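As a quick sanity check (not part of the original run), each auxotroph should regain growth once its missing amino acid is supplied; with cobrapy this can be verified along the following lines:

# Illustrative check: supplement each knockout with its missing amino acid and
# confirm growth becomes positive (model changes revert on context exit).
with model1 as m:
    m.medium = {**m.medium, "EX_tyr__L_e": 10}
    print("tyrA KO + Tyr:", m.slim_optimize())
with model2 as m:
    m.medium = {**m.medium, "EX_phe__L_e": 10}
    print("pheA KO + Phe:", m.slim_optimize())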
In [ ]:
sim=tk.Simulation(name=env.name,
                  env=env,
                  save_dir="./Results/",
                  )
In [ ]:
sim.run()
Hold on, bringing the critic network to range ...
Done!
Hold on, bringing the critic network to range ...
Done!
Batch 0 finished:
agent1 return was:  6.029456564822457
agent2 return was:  -215.10356613810933
Batch 1 finished:
agent1 return was:  6.257672236343053
agent2 return was:  -219.3686632893535
Batch 2 finished:
agent1 return was:  5.611567319649102
agent2 return was:  -217.4131249860677
Batch 3 finished:
agent1 return was:  5.749545596186158
agent2 return was:  -215.9091263737684
Batch 4 finished:
agent1 return was:  5.707984978064208
agent2 return was:  -214.38061595955145
Batch 5 finished:
agent1 return was:  7.31759687740003
agent2 return was:  -208.9927747204154
Batch 6 finished:
agent1 return was:  8.375978775362817
agent2 return was:  -205.01420905537717
Batch 7 finished:
agent1 return was:  11.813492199719676
agent2 return was:  -207.67215773952944
Batch 8 finished:
agent1 return was:  9.162362946565667
agent2 return was:  -200.52741722502083
Batch 9 finished:
agent1 return was:  12.642737882151867
agent2 return was:  -196.88749335149078
Batch 10 finished:
agent1 return was:  11.788483724248422
agent2 return was:  -190.73315563238157
Batch 11 finished:
agent1 return was:  10.186328603719298
agent2 return was:  -191.91594484331827
Batch 12 finished:
agent1 return was:  14.293102492440505
agent2 return was:  -183.12161973461264
Batch 13 finished:
agent1 return was:  12.160479019734053
agent2 return was:  -183.6890090973937
Batch 14 finished:
agent1 return was:  17.421693911555913
agent2 return was:  -171.0248006487937
Batch 15 finished:
agent1 return was:  17.357460193033205
agent2 return was:  -168.7234793295208
Batch 16 finished:
agent1 return was:  22.042752665566113
agent2 return was:  -161.0160997034889
Batch 17 finished:
agent1 return was:  16.528579873965576
agent2 return was:  -164.76228629104892
Batch 18 finished:
agent1 return was:  20.490052862206653
agent2 return was:  -152.69615142958398
Batch 19 finished:
agent1 return was:  33.053216985030225
agent2 return was:  -143.2511400757652
In [ ]:
sim.plot_learning_curves()
In [ ]:
sim.print_training_times()
                       Simulation times                       
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Level        ┃ Mean(s)             ┃ STD(s)                ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ Optimization │ 0.02658179996609688 │ 0.0030197014186664397 │
│ Step         │ 0.07203596585989    │ 0.007191726345120451  │
│ Batch        │ 19.958823108673094  │ 1.8966723915975188    │
│ Simulation   │ 404.4616482257843   │ NA                    │
└──────────────┴─────────────────────┴───────────────────────┘
Out[ ]:
[{'mean': 0.07203596585989, 'std': 0.007191726345120451},
 {'mean': 0.02658179996609688, 'std': 0.0030197014186664397},
 {'mean': 19.958823108673094, 'std': 1.8966723915975188},
 {'mean': 404.4616482257843, 'std': 0.0}]
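From the batch-level mean above (about 20 s per batch), the wall time of a full run can be estimated; for instance (a back-of-the-envelope sketch, with 5000 batches as a hypothetical budget):

batch_mean_s = 19.96     # mean batch time from the table above
n_batches_full = 5000    # hypothetical full-run budget
print(f"Estimated full run: {batch_mean_s * n_batches_full / 3600:.1f} h")  # ~27.7 h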