About uploading files to Ray Serve

```python
async def __call__(self, request):
    image_payload_bytes = request.files['file']
    # image_payload_bytes = await request.body()
    pil_image = Image.open(BytesIO(image_payload_bytes))
    pil_images = [pil_image]  # Our current batch size is one
    input_tensor = torch.cat([self.preprocessor(i).unsqueeze(0) for i in pil_images])

    with torch.no_grad():
        output_tensor = self.model(input_tensor)

    return {"class_index": int(torch.argmax(output_tensor[0]))}
```

```python
resp = requests.post(f"{parameters.u}:{parameters.p}/image_predict",
                     files={"file": open(parameters.d, 'rb')})
print("Inference done!\n", resp.text)
```


It shows `Internal Server Error`.
But if I use `requests.post(data=open(parameters.d, 'rb'))` instead, it works.
Now I want to find a way to upload a batch of images to the server and predict them.
Please help me.

Hi, not sure what's causing the Internal Server Error. Is there anything in the logs (at /tmp/ray/) that might have some information?
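One likely cause, assuming your deployment receives a Starlette `Request` (which is what Ray Serve passes to `__call__`): Starlette requests have no `.files` attribute (that is Flask's API), so the lookup raises an exception that surfaces as a 500. With `data=open(...)` the bytes arrive as the raw request body, which is why `await request.body()` works. To accept an upload sent with `files=...`, you'd parse the multipart form instead; a minimal sketch (it needs the `python-multipart` package installed):

```python
from io import BytesIO

from PIL import Image


async def __call__(self, request):
    # files={"file": ...} on the client sends multipart/form-data;
    # a Starlette Request exposes it through form(), not .files.
    form = await request.form()
    upload = form["file"]  # a Starlette UploadFile
    image_payload_bytes = await upload.read()
    pil_image = Image.open(BytesIO(image_payload_bytes))
    # ...then preprocess and run the model exactly as in your snippet.
```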

Regarding batching, you can find some more information here: Batching Tutorial — Ray v2.0.0.dev0
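To sketch what that tutorial covers: the handler collects requests that arrive close together and runs the model once over the whole batch. A rough sketch using the `@serve.batch` decorator (the decorator name and options depend on your Ray version, so treat this as a pattern rather than exact API; older Serve releases used `@serve.accept_batch`):

```python
from io import BytesIO

import torch
from PIL import Image
from ray import serve


class ImageClassifier:
    # __init__ is assumed to set self.model and self.preprocessor.

    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def predict_batch(self, images):
        # `images` is a list of PIL images gathered from concurrent requests.
        input_tensor = torch.cat([self.preprocessor(i).unsqueeze(0) for i in images])
        with torch.no_grad():
            output_tensor = self.model(input_tensor)
        # Return one result per input, in order.
        return [{"class_index": int(torch.argmax(row))} for row in output_tensor]

    async def __call__(self, request):
        form = await request.form()
        image_bytes = await form["file"].read()
        pil_image = Image.open(BytesIO(image_bytes))
        # Each caller passes one image and gets one result back;
        # Serve batches the underlying model call across requests.
        return await self.predict_batch(pil_image)
```

If you instead want to pack several images into a single HTTP request, `requests` accepts a list of tuples for `files=` (e.g. `files=[("file", open(p, "rb")) for p in paths]`), and on the server side `(await request.form()).getlist("file")` returns all of the uploads.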