Offline RL with DQN, PPO, etc.

Are there any examples of using algorithms that are not standard offline RL algorithms (e.g., DQN or PPO) with offline data for both training and evaluation? Second, many of the algorithms Ray used to support have been removed as of 2.8.0. Where can we find documentation for these past algorithms, e.g., CRR: https://docs.ray.io/en/latest/_modules/ray/rllib/algorithms/crr/crr.html
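
For concreteness, here is a minimal sketch of the kind of setup I have in mind, assuming the old-stack offline interface (`AlgorithmConfig.offline_data()` plus off-policy estimation during evaluation). The dataset paths are placeholders, and I haven't verified this end to end on 2.8.0:

```python
# Sketch: train DQN from logged experience and evaluate it on a held-out
# offline dataset via off-policy estimation, with no live environment rollouts.
# The paths "/tmp/offline/train" and "/tmp/offline/eval" are placeholders for
# RLlib JSON sample batches (e.g., written via the "output" option).
from ray.rllib.algorithms.dqn import DQNConfig
from ray.rllib.offline.estimators import ImportanceSampling

config = (
    DQNConfig()
    # The env is only needed for observation/action spaces, not for sampling.
    .environment(env="CartPole-v1")
    # Train purely from logged experience instead of a live environment.
    .offline_data(input_="/tmp/offline/train")
    .evaluation(
        evaluation_interval=1,
        evaluation_duration=10,
        # Evaluate on a separate offline dataset rather than the env.
        evaluation_config={"input": "/tmp/offline/eval"},
        # Off-policy estimators score the learned policy from the logged data.
        off_policy_estimation_methods={
            "is": {"type": ImportanceSampling},
        },
    )
)

algo = config.build()
print(algo.train())
```

I realize this may make more sense for an off-policy algorithm like DQN, which can at least consume logged batches directly, than for an on-policy one like PPO, which would need importance corrections; examples covering either case would help.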

Thank you!