Logs inside methods of a class decorated with @serve.deployment do not appear in the console

1. Severity of the issue: (select one)
High: Completely blocks me.

2. Environment:
Ubuntu 22.04.4 LTS

python: 3.12.9
ray: 2.42.1
fastapi: 0.115.8

3. What happened vs. what you expected:

What happened
Since all logs are saved under /tmp/ray/session_latest/logs, it is difficult to check custom logs defined by users.
Logs emitted inside process() do not appear in the console.
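
For what it's worth, the replica logs do land on disk under the Serve subdirectory; one way to watch them (assuming the default session directory and the replica_* file prefix):

tail -f /tmp/ray/session_latest/logs/serve/replica_*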

What you expected to happen
I want custom logs printed in the terminal and also saved to a single log file, e.g. test.log.

e.g. every log emitted by Model1.__init__ and Model1.process in the reproduction script below should be printed in the terminal and saved to the specified test.log.

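One workaround sketch, not verified and based on my assumption that each replica process configures its own logging: attach an extra logging.FileHandler to the "ray.serve" logger inside the replica (e.g. in __init__). The path /tmp/test.log is hypothetical and must be writable on every node that runs a replica:

import logging

from ray import serve


@serve.deployment(num_replicas=2)
class Model1:
    def __init__(self):
        # Grab the "ray.serve" logger in this replica's process.
        replica_logger = logging.getLogger("ray.serve")
        # Tee this replica's logs into one shared file (hypothetical path).
        file_handler = logging.FileHandler("/tmp/test.log")
        file_handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        replica_logger.addHandler(file_handler)
        replica_logger.info("Model1 init, file handler attached")

Note that with num_replicas=2 both replicas append to the same file, so lines from the two processes may interleave.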

Reproduction script

ray start --head
python test.py

import sys
import logging
import ray
from ray import serve
from fastapi import FastAPI
from pydantic import BaseModel


logger = logging.getLogger("ray.serve")


ray.init(address="auto")

class GenerateRequest(BaseModel):
    prompt: str


app = FastAPI()


@serve.deployment(num_replicas=2)
class Model1:
    def __init__(self):
        logger.info("logger info Model1 init ==================")
        logger.warning("logger warning Model1 init ==================")
        print("print Model1 init ==================", flush=True)

    def process(self, inputs):
        logger.info("logger info Model1 process ==================")
        logger.warning("logger warning Model1 process ==================")
        print("print Model1 process ==================", flush=True)
        chunk = "stdout Model1 process =================="
        sys.stdout.writelines([chunk])
        result = inputs + " __Model1_process__ "
        return result


@serve.deployment(num_replicas=2)
class Model2:
    def __init__(self):
        pass

    def generate(self, inputs):
        result = inputs + "__Model2_generate__"
        return result


@serve.deployment
@serve.ingress(app)
class Service:
    def __init__(self, preprocessor, llm_actor):
        self.preprocessor = preprocessor
        self.llm_actor = llm_actor

    @app.post("/generate")
    async def generate_handler(self, request: GenerateRequest):
        processed_prompt = await self.preprocessor.process.remote(request.prompt)

        generation_result = await self.llm_actor.generate.remote(processed_prompt)

        return {"result": generation_result}


if __name__ == "__main__":

    serve.start(detached=True, http_options={"host": "0.0.0.0", "port": 9000})

    # Deploy components with dependency injection
    preprocessor = Model1.bind()
    llm_actor = Model2.bind()
    serve_obj = Service.bind(preprocessor, llm_actor)
    serve.run(serve_obj,
        name="service",
        route_prefix="/",
    )

curl -X POST -H "Content-Type: application/json" -d '{"prompt": "Introduce ray"}' http://0.0.0.0:9000/generate
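
Given the string concatenation in the reproduction script, the request should return:

{"result": "Introduce ray __Model1_process__ __Model2_generate__"}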

I believe this is similar to the thread Logging to stdout for ray serve - #2 by shrekris?
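
A related sketch, assuming the logging_config argument documented for newer Serve releases (Ray >= 2.9; I have not verified it on 2.42.1): serve.run can take a LoggingConfig that sets the replica log level and a custom log directory:

from ray.serve.schema import LoggingConfig

serve.run(
    serve_obj,
    name="service",
    route_prefix="/",
    logging_config=LoggingConfig(
        log_level="INFO",            # level for the "ray.serve" logger in replicas
        logs_dir="/tmp/serve_logs",  # hypothetical custom log directory
    ),
)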

Thanks for your help! :grinning_face: