
ding.agent

a2c

Please refer to ding/bonus/a2c.py for more details.

A2CAgent

class ding.bonus.a2c.A2CAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[source]
Overview:

Class of agent for training, evaluation and deployment of the reinforcement learning algorithm Advantage Actor Critic (A2C). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) → None[source]
Overview:

Initialize agent for A2C algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified, and it will be used to create the environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of A2C algorithm, which should be an instance of class ding.model.VAC. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of the A2C algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/A2C/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of a policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL agent instance can be initialized in two basic ways. For example, suppose we have an environment with id LunarLanderContinuous-v2 registered in gym, and we want to train an agent with the A2C algorithm using the default configuration. Then we can initialize the agent in the following ways:
>>> agent = A2CAgent(env_id='LunarLanderContinuous-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLanderContinuous-v2'}, 'policy': ...... }
>>> agent = A2CAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLanderContinuous-v2')
>>> agent = A2CAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = VAC(**cfg.policy.model)
>>> agent = A2CAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = A2CAgent(cfg=cfg, policy_state_dict='LunarLanderContinuous-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Evaluate the agent with A2C algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
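
A usage sketch (the step budget and evaluation settings below are illustrative assumptions, not recommended values): a trained agent can be evaluated in batch and the returned statistics inspected.
>>> from ding.bonus.a2c import A2CAgent
>>> agent = A2CAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train(step=100000)
>>> result = agent.batch_evaluate(env_num=4, n_evaluator_episode=8)
>>> print(result.eval_value, result.eval_value_std)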

property best: ding.bonus.a2c.A2CAgent
Overview:

Load the best model from the checkpoint directory, which by default is the file exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (A2CAgent): The agent with the best model.

Examples:
>>> agent = A2CAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. When this property is accessed, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) → None[source]
Overview:

Collect data with the A2C algorithm for n_episode episodes or n_sample samples, using env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.
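
A usage sketch (the checkpoint name and output directory are hypothetical): demonstration data can be collected with a previously saved policy; at least one of n_sample or n_episode must be given.
>>> from ding.bonus.a2c import A2CAgent
>>> agent = A2CAgent(env_id='LunarLanderContinuous-v2', policy_state_dict='a2c_policy.pth.tar')
>>> agent.collect_data(env_num=8, n_sample=1024, save_data_path='./a2c_demo_data')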

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Deploy the agent with A2C algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
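
A usage sketch (the seeds and step budget are illustrative): deploying with a list of seeds runs the agent once per seed and, with replay saving enabled, stores a video for each episode.
>>> from ding.bonus.a2c import A2CAgent
>>> agent = A2CAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train(step=100000)
>>> result = agent.deploy(enable_save_replay=True, seed=[0, 1, 2])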

train(step: int = 10000000, collector_env_num: int = 4, evaluator_env_num: int = 4, n_iter_log_show: int = 500, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) → ding.bonus.common.TrainingReturn[source]
Overview:

Train the agent with the A2C algorithm for a total of step environment steps, using collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The number of collector environments. Default to 4.

  • evaluator_env_num (int): The number of evaluator environments. Default to 4.

  • n_iter_save_ckpt (int): How often to save a checkpoint, measured in training iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, of which the attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
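
A usage sketch (the step budget is an illustrative assumption): train with explicit collector/evaluator environment counts and read the wandb URL from the returned TrainingReturn.
>>> from ding.bonus.a2c import A2CAgent
>>> agent = A2CAgent(env_id='LunarLanderContinuous-v2')
>>> result = agent.train(step=200000, collector_env_num=4, evaluator_env_num=4)
>>> print(result.wandb_url)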

c51

Please refer to ding/bonus/c51.py for more details.

C51Agent

class ding.bonus.c51.C51Agent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[source]
Overview:

Class of agent for training, evaluation and deployment of Reinforcement learning algorithm C51. For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) → None[source]
Overview:

Initialize agent for C51 algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified, and it will be used to create the environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of C51 algorithm, which should be an instance of class ding.model.C51DQN. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of the C51 algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/C51/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of a policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL agent instance can be initialized in two basic ways. For example, suppose we have an environment with id LunarLander-v2 registered in gym, and we want to train an agent with the C51 algorithm using the default configuration. Then we can initialize the agent in the following ways:
>>> agent = C51Agent(env_id='LunarLander-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLander-v2'}, 'policy': ...... }
>>> agent = C51Agent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLander-v2')
>>> agent = C51Agent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = C51DQN(**cfg.policy.model)
>>> agent = C51Agent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = C51Agent(cfg=cfg, policy_state_dict='LunarLander-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Evaluate the agent with C51 algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.c51.C51Agent
Overview:

Load the best model from the checkpoint directory, which by default is the file exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (C51Agent): The agent with the best model.

Examples:
>>> agent = C51Agent(env_id='LunarLander-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. When this property is accessed, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) → None[source]
Overview:

Collect data with the C51 algorithm for n_episode episodes or n_sample samples, using env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Deploy the agent with C51 algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) → ding.bonus.common.TrainingReturn[source]
Overview:

Train the agent with the C51 algorithm for a total of step environment steps, using collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): How often to save a checkpoint, measured in training iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, of which the attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
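
A usage sketch of an end-to-end workflow (the step budget is illustrative): train, reload the best checkpoint via the best property, then evaluate.
>>> from ding.bonus.c51 import C51Agent
>>> agent = C51Agent(env_id='LunarLander-v2')
>>> agent.train(step=500000)
>>> agent = agent.best
>>> result = agent.batch_evaluate(env_num=4, n_evaluator_episode=8)
>>> print(result.eval_value)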

ddpg

Please refer to ding/bonus/ddpg.py for more details.

DDPGAgent

class ding.bonus.ddpg.DDPGAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[source]
Overview:

Class of agent for training, evaluation and deployment of the reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) → None[source]
Overview:

Initialize agent for DDPG algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified, and it will be used to create the environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of DDPG algorithm, which should be an instance of class ding.model.ContinuousQAC. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of the DDPG algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/DDPG/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of a policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL agent instance can be initialized in two basic ways. For example, suppose we have an environment with id LunarLanderContinuous-v2 registered in gym, and we want to train an agent with the DDPG algorithm using the default configuration. Then we can initialize the agent in the following ways:
>>> agent = DDPGAgent(env_id='LunarLanderContinuous-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLanderContinuous-v2'}, 'policy': ...... }
>>> agent = DDPGAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLanderContinuous-v2')
>>> agent = DDPGAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = ContinuousQAC(**cfg.policy.model)
>>> agent = DDPGAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = DDPGAgent(cfg=cfg, policy_state_dict='LunarLanderContinuous-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Evaluate the agent with DDPG algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.ddpg.DDPGAgent
Overview:

Load the best model from the checkpoint directory, which by default is the file exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (DDPGAgent): The agent with the best model.

Examples:
>>> agent = DDPGAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. When this property is accessed, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) → None[source]
Overview:

Collect data with the DDPG algorithm for n_episode episodes or n_sample samples, using env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Deploy the agent with DDPG algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_log_show: int = 500, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) → ding.bonus.common.TrainingReturn[source]
Overview:

Train the agent with the DDPG algorithm for a total of step environment steps, using collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): How often to save a checkpoint, measured in training iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, of which the attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
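
A usage sketch (the checkpoint file name is hypothetical): a policy saved from an earlier run can be reloaded and training continued from those weights, provided the configuration is consistent with the saved state dict.
>>> from ding.bonus.ddpg import DDPGAgent
>>> agent = DDPGAgent(env_id='LunarLanderContinuous-v2', policy_state_dict='ddpg_lunarlander.pth.tar')
>>> agent.train(step=100000)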

dqn

Please refer to ding/bonus/dqn.py for more details.

DQNAgent

class ding.bonus.dqn.DQNAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[source]
Overview:

Class of agent for training, evaluation and deployment of the reinforcement learning algorithm Deep Q-Learning (DQN). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) → None[source]
Overview:

Initialize agent for DQN algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified, and it will be used to create the environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of DQN algorithm, which should be an instance of class ding.model.DQN. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of the DQN algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/DQN/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of a policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL agent instance can be initialized in two basic ways. For example, suppose we have an environment with id LunarLander-v2 registered in gym, and we want to train an agent with the DQN algorithm using the default configuration. Then we can initialize the agent in the following ways:
>>> agent = DQNAgent(env_id='LunarLander-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLander-v2'}, 'policy': ...... }
>>> agent = DQNAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLander-v2')
>>> agent = DQNAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = DQN(**cfg.policy.model)
>>> agent = DQNAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = DQNAgent(cfg=cfg, policy_state_dict='LunarLander-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Evaluate the agent with DQN algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.dqn.DQNAgent
Overview:

Load the best model from the checkpoint directory, which by default is the file exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (DQNAgent): The agent with the best model.

Examples:
>>> agent = DQNAgent(env_id='LunarLander-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. When this property is accessed, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) → None[source]
Overview:

Collect data with the DQN algorithm for n_episode episodes or n_sample samples, using env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Deploy the agent with DQN algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
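
A usage sketch (the output directory is hypothetical): save all episode replays and concatenate them into a single video.
>>> from ding.bonus.dqn import DQNAgent
>>> agent = DQNAgent(env_id='LunarLander-v2')
>>> agent.train(step=100000)
>>> agent.deploy(enable_save_replay=True, concatenate_all_replay=True, replay_save_path='./dqn_videos')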

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) → ding.bonus.common.TrainingReturn[source]
Overview:

Train the agent with the DQN algorithm for a total of step environment steps, using collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): How often to save a checkpoint, measured in training iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, of which the attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
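
A usage sketch for quick debugging (the small step budget and environment counts are only for illustration): debug=True switches to the base environment manager, which runs environments in the main process and makes errors inside the environment easier to trace.
>>> from ding.bonus.dqn import DQNAgent
>>> agent = DQNAgent(env_id='LunarLander-v2')
>>> agent.train(step=10000, collector_env_num=1, evaluator_env_num=1, debug=True)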

pg

Please refer to ding/bonus/pg.py for more details.

PGAgent

class ding.bonus.pg.PGAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[source]
Overview:

Class of agent for training, evaluation and deployment of the reinforcement learning algorithm Policy Gradient (PG). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) → None[source]
Overview:

Initialize agent for PG algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified, and it will be used to create the environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of PG algorithm, which should be an instance of class ding.model.PG. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of the PG algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/PG/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of a policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL agent instance can be initialized in two basic ways. For example, suppose we have an environment with id LunarLanderContinuous-v2 registered in gym, and we want to train an agent with the PG algorithm using the default configuration. Then we can initialize the agent in the following ways:
>>> agent = PGAgent(env_id='LunarLanderContinuous-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLanderContinuous-v2'}, 'policy': ...... }
>>> agent = PGAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLanderContinuous-v2')
>>> agent = PGAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = PG(**cfg.policy.model)
>>> agent = PGAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = PGAgent(cfg=cfg, policy_state_dict='LunarLanderContinuous-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Evaluate the agent with PG algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.pg.PGAgent
Overview:

Load the best model from the checkpoint directory, which by default is the file exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (PGAgent): The agent with the best model.

Examples:
>>> agent = PGAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. When this property is accessed, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) → None[source]
Overview:

Collect data with the PG algorithm for n_episode episodes or n_sample samples, using env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Deploy the agent with PG algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) → ding.bonus.common.TrainingReturn[source]
Overview:

Train the agent with the PG algorithm for a total of step environment steps, using collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): How often to save a checkpoint, measured in training iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, of which the attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
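
A usage sketch combining train, best and deploy (the step budget is illustrative): after training, the best checkpoint can be loaded and a replay video recorded.
>>> from ding.bonus.pg import PGAgent
>>> agent = PGAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train(step=200000)
>>> agent.best.deploy(enable_save_replay=True)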

ppo_offpolicy

Please refer to ding/bonus/ppo_offpolicy.py for more details.

PPOOffPolicyAgent

class ding.bonus.ppo_offpolicy.PPOOffPolicyAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[source]
Overview:

Class of agent for training, evaluation and deployment of the reinforcement learning algorithm Proximal Policy Optimization (PPO) in an off-policy style. For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) → None[source]
Overview:

Initialize agent for PPO (offpolicy) algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified, and it will be used to create the environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of PPO (offpolicy) algorithm, which should be an instance of class ding.model.VAC. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of the PPO (offpolicy) algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/PPO (offpolicy)/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of a policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL agent instance can be initialized in two basic ways. For example, suppose we have an environment with id LunarLander-v2 registered in gym, and we want to train an agent with the PPO (offpolicy) algorithm using the default configuration. Then we can initialize the agent in the following ways:
>>> agent = PPOOffPolicyAgent(env_id='LunarLander-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLander-v2'}, 'policy': ...... }
>>> agent = PPOOffPolicyAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLander-v2')
>>> agent = PPOOffPolicyAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = VAC(**cfg.policy.model)
>>> agent = PPOOffPolicyAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = PPOOffPolicyAgent(cfg=cfg, policy_state_dict='LunarLander-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) → ding.bonus.common.EvalReturn[source]
Overview:

Evaluate the agent with PPO (offpolicy) algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, of which the attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
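
A usage sketch (the checkpoint file name is hypothetical): a saved policy can be restored and evaluated without further training.
>>> from ding.bonus.ppo_offpolicy import PPOOffPolicyAgent
>>> agent = PPOOffPolicyAgent(env_id='LunarLander-v2', policy_state_dict='ppo_offpolicy_lunarlander.pth.tar')
>>> result = agent.batch_evaluate(env_num=4, n_evaluator_episode=4)
>>> print(result.eval_value, result.eval_value_std)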

property best: ding.bonus.ppo_offpolicy.PPOOffPolicyAgent
Overview:

Load the best model from the checkpoint directory, which by default is the file exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (PPOOffPolicyAgent): The agent with the best model.

Examples:
>>> agent = PPOOffPolicyAgent(env_id='LunarLander-v2')
>>> agent.train()
>>> agent.best

Note

The best model is the model with the highest evaluation return. When this property is accessed, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) → None[source]
Overview:

Collect data with the PPO (offpolicy) algorithm for n_episode episodes or n_sample samples, using env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.
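
A minimal usage sketch of data collection (exactly one of n_sample or n_episode should be given; the sample count and save path here are illustrative):
>>> agent = PPOOffPolicyAgent(env_id='LunarLander-v2')
>>> agent.collect_data(env_num=8, n_sample=1024, save_data_path='./ppo_offpolicy_demo_data')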

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Deploy the agent with PPO (offpolicy) algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
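
A minimal usage sketch of deployment (the seeds are illustrative; passing a list of seeds deploys the agent once per seed):
>>> agent = PPOOffPolicyAgent(env_id='LunarLander-v2')
>>> result = agent.deploy(enable_save_replay=True, seed=[0, 1, 2])
>>> print(result.eval_value)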

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) ding.bonus.common.TrainingReturn[源代码]
Overview:

Train the agent with PPO (offpolicy) algorithm for step environment steps with collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): The checkpoint saving interval in training iterations, i.e. a checkpoint is saved every n_iter_save_ckpt iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, whose attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
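
A minimal usage sketch of training (the step count and environment numbers are illustrative):
>>> agent = PPOOffPolicyAgent(env_id='LunarLander-v2')
>>> training_return = agent.train(step=100000, collector_env_num=8, evaluator_env_num=4)
>>> print(training_return.wandb_url)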

ppof

Please refer to ding/bonus/ppof.py for more details.

PPOF

class ding.bonus.ppof.PPOF(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[源代码]
Overview:

Class of agent for training, evaluation and deployment of Reinforcement learning algorithm Proximal Policy Optimization(PPO). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) None[源代码]
Overview:

Initialize agent for PPO algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg must be specified. If env_id is specified, env_id in cfg will be ignored. env_id should be one of the supported envs, which can be found in PPOF.supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env_id must be specified. env_id or cfg.env_id will be used to create environment instance. If env is specified, env_id and cfg.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of PPO algorithm, which should be an instance of class ding.model.PPOFModel. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of PPO algorithm, which is a dict. Default to None. If not specified, the default configuration will be used.

  • policy_state_dict (str): The path of the policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL Agent Instance can be initialized in two basic ways. For example, we have an environment with id LunarLander-v2 registered in gym, and we want to train an agent with PPO algorithm with default configuration. Then we can initialize the agent in the following ways:
>>> agent = PPOF(env_id='LunarLander-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLander-v2'}, 'policy': ...... }
>>> agent = PPOF(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLander-v2')
>>> agent = PPOF(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = VAC(**cfg.policy.model)
>>> agent = PPOF(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = PPOF(cfg=cfg, policy_state_dict='LunarLander-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Evaluate the agent with PPO algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.ppof.PPOF
Overview:

Load the best model from the checkpoint directory, which defaults to exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (PPOF): The agent with the best model.

Examples:
>>> agent = PPOF(env_id='LunarLander-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. If this method is called, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) None[源代码]
Overview:

Collect data with PPO algorithm for n_sample samples or n_episode episodes with env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.
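
A minimal usage sketch using the episode-based interface (the episode count is illustrative; the data is saved to exp_name/demo_data because save_data_path is omitted):
>>> agent = PPOF(env_id='LunarLander-v2')
>>> agent.collect_data(env_num=8, n_episode=16)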

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Deploy the agent with PPO algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

train(step: int = 10000000, collector_env_num: int = 4, evaluator_env_num: int = 4, n_iter_log_show: int = 500, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, reward_model: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) ding.bonus.common.TrainingReturn[源代码]
Overview:

Train the agent with PPO algorithm for step environment steps with collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The number of collector environments. Default to 4.

  • evaluator_env_num (int): The number of evaluator environments. Default to 4.

  • n_iter_log_show (int): The logging interval in training iterations, i.e. training information is logged every n_iter_log_show iterations. Default to 500.

  • n_iter_save_ckpt (int): The checkpoint saving interval in training iterations, i.e. a checkpoint is saved every n_iter_save_ckpt iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • reward_model (str): The reward model name. Default to None. This argument is not supported yet.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, whose attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
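
A minimal usage sketch of training (the step count and logging interval are illustrative; reward_model is left at its default because it is not supported yet):
>>> agent = PPOF(env_id='LunarLander-v2')
>>> training_return = agent.train(step=200000, collector_env_num=4, evaluator_env_num=4, n_iter_log_show=500)
>>> print(training_return.wandb_url)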

sac

Please refer to ding/bonus/sac.py for more details.

SACAgent

class ding.bonus.sac.SACAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[源代码]
Overview:

Class of agent for training, evaluation and deployment of Reinforcement learning algorithm Soft Actor-Critic(SAC). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) None[源代码]
Overview:

Initialize agent for SAC algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified. env_id or cfg.env.env_id will be used to create environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of SAC algorithm, which should be an instance of class ding.model.ContinuousQAC. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of SAC algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/SAC/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of the policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL Agent Instance can be initialized in two basic ways. For example, we have an environment with id LunarLanderContinuous-v2 registered in gym, and we want to train an agent with SAC algorithm with default configuration. Then we can initialize the agent in the following ways:
>>> agent = SACAgent(env_id='LunarLanderContinuous-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLanderContinuous-v2'}, 'policy': ...... }
>>> agent = SACAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLanderContinuous-v2')
>>> agent = SACAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = ContinuousQAC(**cfg.policy.model)
>>> agent = SACAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = SACAgent(cfg=cfg, policy_state_dict='LunarLanderContinuous-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Evaluate the agent with SAC algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
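
A minimal usage sketch of batch evaluation (the multi-process context is shown only to illustrate that option; all values are illustrative):
>>> agent = SACAgent(env_id='LunarLanderContinuous-v2')
>>> result = agent.batch_evaluate(env_num=4, n_evaluator_episode=4, context='spawn')
>>> print(result.eval_value, result.eval_value_std)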

property best: ding.bonus.sac.SACAgent
Overview:

Load the best model from the checkpoint directory, which defaults to exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (SACAgent): The agent with the best model.

Examples:
>>> agent = SACAgent(env_id='LunarLanderContinuous-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. If this method is called, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) None[源代码]
Overview:

Collect data with SAC algorithm for n_sample samples or n_episode episodes with env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Deploy the agent with SAC algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) ding.bonus.common.TrainingReturn[源代码]
Overview:

Train the agent with SAC algorithm for step environment steps with collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): The checkpoint saving interval in training iterations, i.e. a checkpoint is saved every n_iter_save_ckpt iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, whose attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
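
A minimal usage sketch of training (the step count is illustrative; the collector and evaluator environment numbers fall back to the configuration because they are omitted):
>>> agent = SACAgent(env_id='LunarLanderContinuous-v2')
>>> training_return = agent.train(step=500000, n_iter_save_ckpt=1000)
>>> print(training_return.wandb_url)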

sql

Please refer to ding/bonus/sql.py for more details.

SQLAgent

class ding.bonus.sql.SQLAgent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[源代码]
Overview:

Class of agent for training, evaluation and deployment of Reinforcement learning algorithm Soft Q-Learning(SQL). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) None[源代码]
Overview:

Initialize agent for SQL algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified. env_id or cfg.env.env_id will be used to create environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of SQL algorithm, which should be an instance of class ding.model.DQN. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of SQL algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/SQL/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of the policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL Agent Instance can be initialized in two basic ways. For example, we have an environment with id LunarLander-v2 registered in gym, and we want to train an agent with SQL algorithm with default configuration. Then we can initialize the agent in the following ways:
>>> agent = SQLAgent(env_id='LunarLander-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLander-v2'}, 'policy': ...... }
>>> agent = SQLAgent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLander-v2')
>>> agent = SQLAgent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = DQN(**cfg.policy.model)
>>> agent = SQLAgent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = SQLAgent(cfg=cfg, policy_state_dict='LunarLander-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Evaluate the agent with SQL algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.sql.SQLAgent
Overview:

Load the best model from the checkpoint directory, which defaults to exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (SQLAgent): The agent with the best model.

Examples:
>>> agent = SQLAgent(env_id='LunarLander-v2')
>>> agent.train()
>>> agent = agent.best

Note

The best model is the model with the highest evaluation return. If this method is called, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) None[源代码]
Overview:

Collect data with SQL algorithm for n_sample samples or n_episode episodes with env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.
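
A minimal usage sketch of data collection (the sample count and save path are illustrative):
>>> agent = SQLAgent(env_id='LunarLander-v2')
>>> agent.collect_data(env_num=8, n_sample=2048, save_data_path='./sql_demo_data')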

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Deploy the agent with SQL algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
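
A minimal usage sketch of deployment (the seed is illustrative; concatenate_all_replay merges the per-episode videos into a single video):
>>> agent = SQLAgent(env_id='LunarLander-v2')
>>> result = agent.deploy(enable_save_replay=True, concatenate_all_replay=True, seed=0)
>>> print(result.eval_value)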

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) ding.bonus.common.TrainingReturn[源代码]
Overview:

Train the agent with SQL algorithm for step environment steps with collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): The checkpoint saving interval in training iterations, i.e. a checkpoint is saved every n_iter_save_ckpt iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, whose attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.

td3

Please refer to ding/bonus/td3.py for more details.

TD3Agent

class ding.bonus.td3.TD3Agent(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None)[源代码]
Overview:

Class of agent for training, evaluation and deployment of Reinforcement learning algorithm Twin Delayed Deep Deterministic Policy Gradient(TD3). For more information about the system design of RL agent, please refer to <https://di-engine-docs.readthedocs.io/en/latest/03_system/agent.html>.

Interface:

__init__, train, deploy, collect_data, batch_evaluate, best

__init__(env_id: Optional[str] = None, env: Optional[ding.envs.env.base_env.BaseEnv] = None, seed: int = 0, exp_name: Optional[str] = None, model: Optional[torch.nn.modules.module.Module] = None, cfg: Optional[Union[easydict.EasyDict, dict]] = None, policy_state_dict: Optional[str] = None) None[源代码]
Overview:

Initialize agent for TD3 algorithm.

Arguments:
  • env_id (str): The environment id, which is a registered environment name in gym or gymnasium. If env_id is not specified, env_id in cfg.env must be specified. If env_id is specified, env_id in cfg.env will be ignored. env_id should be one of the supported envs, which can be found in supported_env_list.

  • env (BaseEnv): The environment instance for training and evaluation. If env is not specified, env_id or cfg.env.env_id must be specified. env_id or cfg.env.env_id will be used to create environment instance. If env is specified, env_id and cfg.env.env_id will be ignored.

  • seed (int): The random seed, which is set before running the program. Default to 0.

  • exp_name (str): The name of this experiment, which will be used to create the folder to save log data. Default to None. If not specified, the folder name will be env_id-algorithm.

  • model (torch.nn.Module): The model of TD3 algorithm, which should be an instance of class ding.model.ContinuousQAC. If not specified, a default model will be generated according to the configuration.

  • cfg (Union[EasyDict, dict]): The configuration of TD3 algorithm, which is a dict. Default to None. If not specified, the default configuration will be used. The default configuration can be found in ding/config/example/TD3/gym_lunarlander_v2.py.

  • policy_state_dict (str): The path of the policy state dict saved by PyTorch in a local file. If specified, the policy will be loaded from this file. Default to None.

Note

An RL Agent Instance can be initialized in two basic ways. For example, we have an environment with id LunarLanderContinuous-v2 registered in gym, and we want to train an agent with TD3 algorithm with default configuration. Then we can initialize the agent in the following ways:
>>> agent = TD3Agent(env_id='LunarLanderContinuous-v2')
or, if we want to specify the env_id in the configuration:
>>> cfg = {'env': {'env_id': 'LunarLanderContinuous-v2'}, 'policy': ...... }
>>> agent = TD3Agent(cfg=cfg)

There are also other arguments to specify the agent when initializing. For example, if we want to specify the environment instance:

>>> env = CustomizedEnv('LunarLanderContinuous-v2')
>>> agent = TD3Agent(cfg=cfg, env=env)
or, if we want to specify the model:
>>> model = ContinuousQAC(**cfg.policy.model)
>>> agent = TD3Agent(cfg=cfg, model=model)
or, if we want to reload the policy from a saved policy state dict:
>>> agent = TD3Agent(cfg=cfg, policy_state_dict='LunarLanderContinuous-v2.pth.tar')

Make sure that the configuration is consistent with the saved policy state dict.

batch_evaluate(env_num: int = 4, n_evaluator_episode: int = 4, context: Optional[str] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Evaluate the agent with TD3 algorithm for n_evaluator_episode episodes with env_num evaluator environments. The evaluation result will be returned. The difference between methods batch_evaluate and deploy is that batch_evaluate will create multiple evaluator environments to evaluate the agent to get an average performance, while deploy will only create one evaluator environment to evaluate the agent and save the replay video.

Arguments:
  • env_num (int): The number of evaluator environments. Default to 4.

  • n_evaluator_episode (int): The number of episodes to evaluate. Default to 4.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.

property best: ding.bonus.td3.TD3Agent
Overview:

Load the best model from the checkpoint directory, which defaults to exp_name/ckpt/eval.pth.tar. The return value is the agent with the best model.

Returns:
  • (TD3Agent): The agent with the best model.

Examples:
>>> agent = TD3Agent(env_id='LunarLanderContinuous-v2')
>>> agent.train()
>>> agent.best

Note

The best model is the model with the highest evaluation return. If this method is called, the current model will be replaced by the best model.

collect_data(env_num: int = 8, save_data_path: Optional[str] = None, n_sample: Optional[int] = None, n_episode: Optional[int] = None, context: Optional[str] = None, debug: bool = False) None[源代码]
Overview:

Collect data with TD3 algorithm for n_sample samples or n_episode episodes with env_num collector environments. The collected data will be saved in save_data_path if specified, otherwise it will be saved in exp_name/demo_data.

Arguments:
  • env_num (int): The number of collector environments. Default to 8.

  • save_data_path (str): The path to save the collected data. Default to None. If not specified, the data will be saved in exp_name/demo_data.

  • n_sample (int): The number of samples to collect. Default to None. If not specified, n_episode must be specified.

  • n_episode (int): The number of episodes to collect. Default to None. If not specified, n_sample must be specified.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

deploy(enable_save_replay: bool = False, concatenate_all_replay: bool = False, replay_save_path: Optional[str] = None, seed: Optional[Union[int, List]] = None, debug: bool = False) ding.bonus.common.EvalReturn[源代码]
Overview:

Deploy the agent with TD3 algorithm by interacting with the environment, during which the replay video can be saved if enable_save_replay is True. The evaluation result will be returned.

Arguments:
  • enable_save_replay (bool): Whether to save the replay video. Default to False.

  • concatenate_all_replay (bool): Whether to concatenate all replay videos into one video. Default to False. If enable_save_replay is False, this argument will be ignored. If enable_save_replay is True and concatenate_all_replay is False, the replay video of each episode will be saved separately.

  • replay_save_path (str): The path to save the replay video. Default to None. If not specified, the video will be saved in exp_name/videos.

  • seed (Union[int, List]): The random seed, which is set before running the program. Default to None. If not specified, self.seed will be used. If seed is an integer, the agent will be deployed once. If seed is a list of integers, the agent will be deployed once for each seed in the list.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

Returns:
  • (EvalReturn): The evaluation result, whose attributes are:
    • eval_value (np.float32): The mean of evaluation return.

    • eval_value_std (np.float32): The standard deviation of evaluation return.
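
A minimal usage sketch of deploying from a saved checkpoint (the checkpoint file name is illustrative):
>>> agent = TD3Agent(env_id='LunarLanderContinuous-v2', policy_state_dict='LunarLanderContinuous-v2.pth.tar')
>>> result = agent.deploy(enable_save_replay=True)
>>> print(result.eval_value, result.eval_value_std)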

train(step: int = 10000000, collector_env_num: Optional[int] = None, evaluator_env_num: Optional[int] = None, n_iter_save_ckpt: int = 1000, context: Optional[str] = None, debug: bool = False, wandb_sweep: bool = False) ding.bonus.common.TrainingReturn[源代码]
Overview:

Train the agent with TD3 algorithm for step environment steps with collector_env_num collector environments and evaluator_env_num evaluator environments. Information during training will be recorded and saved by wandb.

Arguments:
  • step (int): The total training environment steps of all collector environments. Default to 1e7.

  • collector_env_num (int): The collector environment number. Default to None. If not specified, it will be set according to the configuration.

  • evaluator_env_num (int): The evaluator environment number. Default to None. If not specified, it will be set according to the configuration.

  • n_iter_save_ckpt (int): The checkpoint saving interval in training iterations, i.e. a checkpoint is saved every n_iter_save_ckpt iterations. Default to 1000.

  • context (str): The multi-process context of the environment manager. Default to None. It can be specified as spawn, fork or forkserver.

  • debug (bool): Whether to use debug mode in the environment manager. Default to False. If set True, base environment manager will be used for easy debugging. Otherwise, subprocess environment manager will be used.

  • wandb_sweep (bool): Whether to use wandb sweep, which is a hyper-parameter optimization process for seeking the best configurations. Default to False. If True, the wandb sweep id will be used as the experiment name.

Returns:
  • (TrainingReturn): The training result, whose attributes are:
    • wandb_url (str): The Weights & Biases (wandb) project URL of the training experiment.
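
A minimal end-to-end sketch (the step and episode counts are illustrative):
>>> agent = TD3Agent(env_id='LunarLanderContinuous-v2')
>>> training_return = agent.train(step=500000)
>>> result = agent.best.batch_evaluate(env_num=4, n_evaluator_episode=8)
>>> print(training_return.wandb_url, result.eval_value)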
