Offline inference with vLLM: map_batches vs build_llm_processor

The ray.data.llm integration (including build_llm_processor and the vLLM engine processor config) was introduced in Ray 2.44.0. It is not available in Ray 2.43; you will need to upgrade to at least 2.44 to use these APIs (Ray GitHub, Ray Discourse).
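Once you are on Ray 2.44+, the processor-based flow looks roughly like the sketch below. This is a minimal example assuming the vLLMEngineProcessorConfig / build_llm_processor API as documented for Ray 2.44+; the model id is a placeholder, and parameter names may shift slightly between releases, so check the docs for your exact version.

```python
# Minimal sketch, assuming Ray >= 2.44 with vLLM installed.
# model_source is a placeholder; engine_kwargs/batch_size/concurrency
# follow the Ray 2.44+ docs and may differ slightly between releases.
import ray
from ray.data.llm import vLLMEngineProcessorConfig, build_llm_processor

config = vLLMEngineProcessorConfig(
    model_source="meta-llama/Llama-3.1-8B-Instruct",  # any HF model id
    engine_kwargs={"max_model_len": 4096},
    batch_size=64,
    concurrency=1,  # number of vLLM engine replicas
)

processor = build_llm_processor(
    config,
    # Turn each input row into a chat request plus sampling params.
    preprocess=lambda row: dict(
        messages=[{"role": "user", "content": row["prompt"]}],
        sampling_params=dict(temperature=0.3, max_tokens=256),
    ),
    # Keep only the fields we care about in the output dataset.
    postprocess=lambda row: dict(
        prompt=row["prompt"],
        answer=row["generated_text"],
    ),
)

ds = ray.data.from_items([{"prompt": "What is Ray Data?"}])
ds = processor(ds)
print(ds.take_all())
```

If you need to stay on 2.43 for now, the older pattern of calling ds.map_batches with a callable class that wraps a vllm.LLM engine still works; build_llm_processor mainly packages that pattern up with batching, replica management, and chat-template handling.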

Would you like upgrade guidance or details on breaking changes?
