How can I get the value of a state from a trained PPO model with the new API?

To get the value function output V(s) for a given observation from a PPO model trained on the new RLlib API stack, call the RLModule's compute_values method. Retrieve the module from the trained Algorithm via algo.get_module(), then pass the observation as a batch dict whose "obs" key maps to a torch tensor with a leading batch dimension. The method returns a tensor of value estimates, one per observation in the batch. For example:

import torch

obs = ...  # your observation as a NumPy array
module = algo.get_module()  # default RLModule of the trained PPO Algorithm
# compute_values expects a batch dict, so add a leading batch dimension.
values = module.compute_values({"obs": torch.from_numpy(obs).float().unsqueeze(0)})
value = values.item()  # scalar V(s) for this single observation

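For context, here is a minimal, self-contained sketch of the full flow. The CartPole environment is only a stand-in for your own setup, and on older Ray versions you may additionally need to enable the new API stack via config.api_stack(enable_rl_module_and_learner=True, enable_env_runner_and_connector_v2=True):

import gymnasium as gym
import torch
from ray.rllib.algorithms.ppo import PPOConfig

# Build and briefly train a PPO Algorithm (the new API stack is the default on recent Ray versions).
config = PPOConfig().environment("CartPole-v1")
algo = config.build()
algo.train()

# Get a fresh observation and query the trained value function V(s).
env = gym.make("CartPole-v1")
obs, _ = env.reset()

module = algo.get_module()  # default RLModule of the trained Algorithm
with torch.no_grad():
    values = module.compute_values({"obs": torch.from_numpy(obs).float().unsqueeze(0)})
print(values.item())  # scalar value estimate for this observation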
This approach is shown in the RLlib example script train_w_bc_finetune_w_ppo.py and discussed in the thread “KeyError: ‘advantages’ on MARL”.

Would you like a step-by-step breakdown or more details on handling batch/multi-agent cases?
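In the meantime, here is a rough sketch of the batched and multi-agent variants; the observations and the module ID "policy_1" are placeholders for whatever your environment and multi-agent config actually use:

import numpy as np
import torch

# Batched case: stack several observations along a leading batch dimension.
obs_batch = np.stack([obs_1, obs_2, obs_3])  # placeholder observations
values = algo.get_module().compute_values({"obs": torch.from_numpy(obs_batch).float()})  # shape (3,)

# Multi-agent case: fetch the RLModule for a specific module ID.
agent_module = algo.get_module("policy_1")  # "policy_1" is a placeholder module ID
agent_values = agent_module.compute_values({"obs": torch.from_numpy(obs_1).float().unsqueeze(0)})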

Hint: Mention @RunLLM in the post for followups.