Trainer
- class Melodie.Trainer(config: Config, scenario_cls: Type[Scenario] | None, model_cls: Type[Model] | None, data_loader_cls: Type[DataLoader] | None = None, processors: int = 1, parallel_mode: Literal['process', 'thread'] = 'process')
Bases: BaseModellingManager

The Trainer uses a genetic algorithm to evolve agent-level parameters. It is designed for models where agents can “learn” or adapt their strategies to maximize a personal objective, defined by a utility function.
- Parameters:
config – The project Config object.
scenario_cls – The Scenario subclass for the model.
model_cls – The Model subclass for the model.
data_loader_cls – The DataLoader subclass for the model.
processors – The number of processor cores to use for parallel computation of the genetic algorithm.
parallel_mode – The parallelization mode.
"process"(default) uses subprocess-based parallelism, suitable for all Python versions."thread"uses thread-based parallelism, which is recommended for Python 3.13+ (free-threaded/No-GIL builds) for better performance.
- add_agent_training_property(agent_list_name: str, training_attributes: List[str], agent_ids: Callable[[Scenario], List[int]])
Register an agent container and its properties for training.
- Parameters:
agent_list_name – The name of the agent container attribute on the Model object (e.g., ‘agents’).
training_attributes – A list of agent property names to be tuned by the genetic algorithm.
agent_ids – A callable that takes a Scenario object and returns a list of agent IDs to be trained.
- setup()
A hook for setting up the Trainer.
This method should be overridden in a subclass to define which agent properties to train, using add_agent_training_property().
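The override pattern can be sketched as follows. `TrainerStub` mimics only the registration call of `Melodie.Trainer` so the snippet runs standalone; a real subclass would inherit `Melodie.Trainer`, and the property names below are hypothetical:

```python
# Stand-in capturing what add_agent_training_property() registers.
class TrainerStub:
    def __init__(self):
        self._training_registry = []

    def add_agent_training_property(self, agent_list_name, training_attributes, agent_ids):
        self._training_registry.append((agent_list_name, training_attributes, agent_ids))

class AgentTrainer(TrainerStub):  # hypothetical subclass
    def setup(self):
        # Tune two strategy parameters on the model's `agents` container.
        self.add_agent_training_property(
            "agents",
            ["price_strategy", "risk_aversion"],  # hypothetical agent properties
            lambda scenario: list(range(scenario.agent_num)),
        )

trainer = AgentTrainer()
trainer.setup()
```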
- get_trainer_scenario_cls()
(Internal) Get the parameter class for the trainer’s algorithm.
- collect_data()
(Optional) A hook to define which agent and environment properties to record.
This is not required for training itself but is useful for saving detailed simulation data during the training process. Use add_agent_property() and add_environment_property() to register properties.
- run()
The main entry point for starting the training process.
- run_once_new(scenario: Scenario, trainer_params: GATrainerParams)
(Internal) Run a single training path.
- utility(agent: Agent) → float
The utility function to be maximized by the trainer.
This method must be overridden in a subclass. It should take an Agent object (representing the final state of an agent after a simulation run) and return a single float value representing the agent’s “utility” or “fitness.” The genetic algorithm will attempt to find the strategy parameters that maximize this value for each agent.
- Parameters:
agent – The Agent object after a simulation run.
- Returns:
A float representing the agent’s utility.
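To make the maximize/minimize relationship concrete, here is a sketch using a plain dataclass stand-in instead of a real Melodie Agent; the `revenue` and `cost` attributes are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentState:  # plain stand-in for an agent's final state after a run
    revenue: float
    cost: float

# Hypothetical utility: the profit an agent earned over the run.
def utility(agent: AgentState) -> float:
    return agent.revenue - agent.cost

# The GA machinery minimizes, so it works on the negative of utility:
# maximizing utility and minimizing -utility are equivalent.
def target_function(agent: AgentState) -> float:
    return -utility(agent)
```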
- target_function(agent: Agent) → float
(Internal) The target function to be minimized, which is the negative of utility.
- add_agent_property(agent_list_name: str, prop: str)
Register an agent property to be recorded during training.
- Parameters:
agent_list_name – The name of the agent container.
prop – The name of the agent property to record.
- add_environment_property(prop: str)
Register an environment property to be recorded during training.
- Parameters:
prop – The name of the environment property to record.
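A collect_data() override typically pairs these two registration calls. The sketch below uses a stub mimicking only those calls so it runs standalone; the property names "wealth" and "total_output" are hypothetical:

```python
# Stand-in capturing what the two recording registrations collect.
class TrainerStub:
    def __init__(self):
        self.agent_props = []
        self.env_props = []

    def add_agent_property(self, agent_list_name, prop):
        self.agent_props.append((agent_list_name, prop))

    def add_environment_property(self, prop):
        self.env_props.append(prop)

class AgentTrainer(TrainerStub):  # hypothetical subclass
    def collect_data(self):
        self.add_agent_property("agents", "wealth")
        self.add_environment_property("total_output")

trainer = AgentTrainer()
trainer.collect_data()
```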
- generate_scenarios()
Generate scenarios from the TrainerScenarios table.
- Returns:
A list of Scenario objects.
- generate_trainer_params_list(trainer_scenario_cls: Type[GATrainerParams]) → List[GATrainerParams]
(Internal) Load GA parameters from the TrainerParamsScenarios table.