It seems that Ray Serve is not writing my log records to the custom log file I am defining. Can anyone help?
Here is my logger initialization:
from ray import serve
import logging
logger = logging.getLogger("ray.serve")
logger.setLevel(logging.WARNING)
logger.addHandler(logging.FileHandler("/app/logs/app.log"))
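To rule out the handler itself, here is a minimal standalone reproduction of the setup above, outside of Serve (the temporary path is just a stand-in for /app/logs/app.log):

```python
import logging
import os
import tempfile

# Reproduce the handler setup from the snippet above, outside Ray Serve.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")  # stand-in path
logger = logging.getLogger("ray.serve")
logger.setLevel(logging.WARNING)
handler = logging.FileHandler(log_path)
logger.addHandler(handler)

logger.warning("warning is written")   # at or above the logger's level
logger.info("info is filtered out")    # below WARNING, never reaches the handler

handler.flush()
with open(log_path) as f:
    content = f.read()
print(content)
```

When I run this on its own, only the WARNING record ends up in the file, since records below the logger's level are dropped before any handler sees them.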
And here is my main inference function:
    async def __call__(self, request):
        image_path = await request.body()
        im = Image.open(image_path).convert("RGB")
        # prepare image for the model
        encoded_inputs = self.processor(
            im, return_tensors="pt", padding="max_length", truncation=True
        )
        # make sure all keys of encoded_inputs are on the same device as the model
        for k, v in encoded_inputs.items():
            encoded_inputs[k] = v.to(self.model.device)
        # forward pass
        outputs = self.model(**encoded_inputs)
        logits = outputs.logits
        predicted_class_idx = logits.argmax(-1).item()
        # release references and GPU memory between requests
        del logits, outputs, im
        torch.cuda.empty_cache()
        gc.collect()
        # finally, map the predicted class index to its label
        logger.info("Request completed")
        return self.idx2label[predicted_class_idx]
The goal is to log custom information during inference to a separate log file, rather than to the default location of the Ray Serve logs, /tmp/ray/session.......
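For completeness, this is the shape of what I am trying to achieve, as a runnable sketch outside of Serve. The @serve.deployment decorator is omitted so it runs standalone, the logger name and path are placeholders, and I set the level to INFO here since I suspect the level may matter for logger.info() calls:

```python
import asyncio
import logging
import os
import tempfile


class Classifier:
    """Sketch of the replica-side logging setup; in the real app this class
    is decorated with @serve.deployment (omitted so the sketch runs alone)."""

    def __init__(self, log_path):
        # As far as I understand, the handler has to be attached in the
        # replica process itself; handlers added in the driver script are
        # not inherited by the replica processes.
        self.logger = logging.getLogger("app.inference")  # placeholder name
        self.logger.setLevel(logging.INFO)  # INFO so logger.info() passes
        handler = logging.FileHandler(log_path)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        self.logger.addHandler(handler)

    async def __call__(self, request):
        # ... inference as in the snippet above ...
        self.logger.info("Request completed")
        return "label"


# Quick check outside Serve:
log_path = os.path.join(tempfile.mkdtemp(), "app.log")  # stand-in path
asyncio.run(Classifier(log_path)("dummy request"))
with open(log_path) as f:
    content = f.read()
print(content)
```

Run standalone like this, the "Request completed" record does land in the custom file, so I assume the difference inside Serve is either the logger level or where the handler gets attached.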