Action space Discrete is not supported for DQN

How severely does this issue affect your experience of using Ray?

  • Medium: It contributes to significant difficulty in completing my task, but I can work around it.

Hello everyone,
I learning RLlib, and I am trying to replicate the “tutorial” contained in the book “learning ray flexible distributed python for Machine Learning” concerning a maze environment for a multi-agent application.

Before starting the training step, I get the following error message: UnsupportedSpaceException: Action space Discrete(4) is not supported for DQN.

But DQN should support discrete action spaces.
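
For context, here is a rough, stripped-down sketch of the kind of setup I am running. This is not the book's exact code: the maze logic is replaced by a dummy two-agent environment, and I am assuming a Ray 2.x-style config API with the gymnasium env interface.

```python
# Rough sketch only -- the real maze logic is replaced by a dummy environment.
# Assumes Ray 2.x (gymnasium-style MultiAgentEnv API).
import gymnasium as gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv
from ray.rllib.algorithms.dqn import DQNConfig


class DummyMultiAgentMaze(MultiAgentEnv):
    """Two agents on a 5x5 grid, positions encoded as a single Discrete obs."""

    def __init__(self, config=None):
        self._agent_ids = {"agent_1", "agent_2"}
        self.observation_space = gym.spaces.Discrete(5 * 5)
        self.action_space = gym.spaces.Discrete(4)  # up / down / left / right
        super().__init__()

    def reset(self, *, seed=None, options=None):
        self._steps = 0
        # Return per-agent observations and infos.
        return {aid: 0 for aid in self._agent_ids}, {}

    def step(self, action_dict):
        self._steps += 1
        obs = {aid: self._steps % 25 for aid in action_dict}
        rewards = {aid: 0.0 for aid in action_dict}
        done = self._steps >= 20
        terminateds = {aid: done for aid in action_dict}
        terminateds["__all__"] = done
        truncateds = {aid: False for aid in action_dict}
        truncateds["__all__"] = False
        return obs, rewards, terminateds, truncateds, {}


config = (
    DQNConfig()
    .environment(env=DummyMultiAgentMaze)
    .rollouts(num_rollout_workers=0)  # keep everything in the local process
)
algo = config.build()  # this is roughly where I hit the UnsupportedSpaceException
```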

Can anyone help me?

Thanks in advance.

This example shows CartPole-v1, which has a Discrete action space, being trained with DQN. It might be that a custom model config, or something of that kind, isn't being passed in properly. If you have a replication script, I can take a look.
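
For reference, a minimal config along these lines (a sketch, assuming a Ray 2.x-style config API) trains DQN on CartPole-v1's Discrete(2) action space without hitting that exception:

```python
# Minimal DQN-on-CartPole sketch; assumes Ray 2.x config API.
from ray.rllib.algorithms.dqn import DQNConfig

config = (
    DQNConfig()
    .environment("CartPole-v1")        # Discrete(2) action space
    .rollouts(num_rollout_workers=0)   # run everything in the local process
)
algo = config.build()
result = algo.train()
print(result.get("episode_reward_mean"))  # key name as reported in older result dicts
```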

Edit: Whoops, this is from September of the wrong year.