Good morning. I want to use a custom neural network with TD3. In the documentation (RLlib Models, Preprocessors, and Action Distributions — Ray v1.10.0) I have seen how to pass a custom neural network in the "model" key of the configuration dictionary. Is it possible to pass custom models for the actor and critic networks?
Absolutely! Go ahead and check out our custom model examples:
```python
import argparse

from gym.spaces import Box, Discrete
import numpy as np

from ray.rllib.examples.models.custom_model_api import (
    DuelingQModel,
    TorchDuelingQModel,
    ContActionQModel,
    TorchContActionQModel,
)
from ray.rllib.models.catalog import ModelCatalog, MODEL_DEFAULTS
from ray.rllib.policy.sample_batch import SampleBatch
from ray.rllib.utils.framework import try_import_tf, try_import_torch

tf1, tf, tfv = try_import_tf()
torch, _ = try_import_torch()

parser = argparse.ArgumentParser()
parser.add_argument(
    "--framework",
    # ... (file truncated; see the full custom_model_api example in the RLlib repository)
```
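For a continuous-action algorithm like TD3, the `ContActionQModel` pattern above is the relevant one. A rough sketch of the wiring follows; note that `"my_cont_action_q_model"` and the `Pendulum-v1` env are my own placeholder choices, and the exact config keys should be checked against your RLlib version:

```python
# Hypothetical sketch: pointing a TD3 config at a registered custom model.
# The registration call is shown as a comment because it requires a running
# Ray/RLlib install:
#
#   ModelCatalog.register_custom_model("my_cont_action_q_model", TorchContActionQModel)

config = {
    "env": "Pendulum-v1",        # placeholder continuous-action env
    "framework": "torch",
    "model": {
        # RLlib looks up this name in the ModelCatalog registry.
        "custom_model": "my_cont_action_q_model",
    },
}
```

The key idea is that the `"model"` config only carries the registered name; the actor/critic-specific behavior lives in the custom model class itself, as in the example above.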
"""Example of using custom_loss() with an imitation learning loss.
The default input file is too small to learn a good policy, but you can
generate new experiences for IL training as follows:
To generate experiences:
$ ./train.py --run=PG --config='{"output": "/tmp/cartpole"}' --env=CartPole-v0
To train on experiences with joint PG + IL loss:
$ python custom_loss.py --input-files=/tmp/cartpole
"""
import argparse
from pathlib import Path
import os
import ray
from ray import tune
from ray.rllib.examples.models.custom_loss_model import (
CustomLossModel,
This file has been truncated. show original
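Conceptually, a custom-loss model just combines the algorithm's own policy loss with an auxiliary term. A minimal, framework-free sketch of the joint PG + IL combination (the function name and weight below are illustrative, not RLlib API):

```python
def joint_loss(policy_loss, imitation_loss, il_weight=0.1):
    """Combine the RL policy loss with a weighted imitation-learning term,
    mirroring the role of a custom_loss() override in the model."""
    return policy_loss + il_weight * imitation_loss

# Example: a PG loss of 2.0 plus an IL (e.g. cross-entropy) term of 5.0,
# weighted by 0.1, gives a total loss of 2.5.
total = joint_loss(2.0, 5.0)
```

With `il_weight=0.0` the auxiliary term vanishes and training reduces to plain PG, which is a useful sanity check when debugging a custom loss.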