Provided tensor has shape (240, 320, 1) and view requirement has shape shape (240, 320, 1).Make sure dimensions match to resolve this warning

How severely does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

Hi,

I am running into a weird issue (full log below): I am providing a correctly shaped observation, yet I still get an error. However, the client and server keep communicating, and the training loop completes successfully (not that there is much progress, as I am still testing things out). The error also only shows up once, at the top of the log.

Essentially, I just want to double-check whether this error is legitimate or a false positive.

Policy server:

from gym import spaces
import ray
from ray.rllib.agents import with_common_config
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.env import PolicyServerInput

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.examples.env.random_env import RandomEnv

import numpy as np
import argparse
from gymnasium.spaces import MultiDiscrete, Box

ppo_config = PPOConfig()

parser = argparse.ArgumentParser(description='Optional app description')
parser.add_argument('-ip', type=str, help='IP of this device')

parser.add_argument('-checkpoint', type=str, help='location of checkpoint to restore from')

args = parser.parse_args()


def _input(ioctx):
    return PolicyServerInput(
        ioctx,
        args.ip,
        55556,
    )


x = 320
y = 240
# coef = 0.5
# x = int(x * coef)
# y = int(y * coef)


# ignored:


# kl_coeff, ->
# vf_loss_coeff used to be 0.01??
# "entropy_coeff": 0.00005,
# "clip_param": 0.1,
ppo_config.gamma = 0.998  # default 0.99
ppo_config.lambda_ = 0.99  # default 1.0???
ppo_config.kl_target = 0.01  # used to use 0.02
ppo_config.rollout_fragment_length = 16
ppo_config.train_batch_size = 2560
ppo_config.sgd_minibatch_size = 128
ppo_config.num_sgd_iter = 1  # default 30???
ppo_config.lr = 3.5e-5  # 5e-5
ppo_config.model = {
    # Share layers for value function. If you set this to True, it's
    # important to tune vf_loss_coeff.
    "vf_share_layers": False,

    "use_lstm": True,
    "max_seq_len": 32,
    "lstm_cell_size": 128,
    "lstm_use_prev_action": True,

    # 'use_attention': True,
    # "max_seq_len": 128,
    # "attention_num_transformer_units": 1,
    # "attention_dim": 1024,
    # "attention_memory_inference": 128,
    # "attention_memory_training": 128,
    # "attention_num_heads": 8,
    # "attention_head_dim": 64,
    # "attention_position_wise_mlp_dim": 512,
    # "attention_use_n_prev_actions": 0,
    # "attention_use_n_prev_rewards": 0,
    # "attention_init_gru_gate_bias": 2.0,

    "conv_filters": [
        # [4, [3, 4], [1, 1]],
        # [16, [6, 8], [3, 3]],
        # [32, [6, 8], [3, 4]],
        # [64, [6, 6], 3],
        # [256, [9, 9], 1],

        # 480 x 640
        # [4, [7, 7], [3, 3]],
        # [16, [5, 5], [3, 3]],
        # [32, [5, 5], [2, 2]],
        # [64, [5, 5], [2, 2]],
        # [256, [5, 5], [3, 5]],

        # 240 X 320
        [16, [5, 5], 3],
        [32, [5, 5], 3],
        [64, [5, 5], 3],
        [128, [3, 3], 2],
        [256, [3, 3], 2],
        [512, [3, 3], 2],
    ],
    "conv_activation": "relu",
    "post_fcnet_hiddens": [512],
    "post_fcnet_activation": "relu"
}
ppo_config.batch_mode = "complete_episodes"
ppo_config.simple_optimizer = True
ppo_config.num_gpus = 0
# ppo_config.input_ = (
#         lambda ioctx: PolicyServerInput(ioctx, args.ip, 55556)
#     )


ppo_config.rollouts(num_rollout_workers=0)

ppo_config.offline_data(input_=_input)

ppo_config.env = None
ppo_config.observation_space = Box(low=0, high=1, shape=(y, x, 1), dtype=np.float32)
ppo_config.action_space = MultiDiscrete(
    [
        2,  # W
        2,  # A
        2,  # S
        2,  # D
        2,  # Space
        2,  # H
        2,  # J
        2,  # K
        2  # L
    ]
)

ppo_config.env_config = {
    "sleep": True,
}
ppo_config.framework_str = 'tf'
ppo_config.log_sys_usage = False
ppo_config.compress_observations = True
ppo_config.shuffle_sequences = False
print(ppo_config.to_dict())
tempyy = ppo_config.to_dict()

ray.init(num_cpus=2, num_gpus=0, log_to_driver=False)

trainer = PPOTrainer

from ray import tune

name = "" + args.checkpoint
print(f"Starting: {name}")

tune.run(trainer,
         resume='AUTO',
         config=ppo_config.to_dict(), name=name, keep_checkpoints_num=None, checkpoint_score_attr="episode_reward_mean",
         max_failures=1,
         # restore="C:\\Users\\denys\\ray_results\\mediumbrawl-attention-256Att-128MLP-L2\\PPOTrainer_RandomEnv_1e882_00000_0_2022-06-02_15-13-44\\checkpoint_000028\\checkpoint-28",
         checkpoint_freq=5, checkpoint_at_end=True)

Policy client:

import os

import cv2
from ray.rllib.env import PolicyClient

from pathlib import Path

from environment import BrawlEnv
import logging
import time
import argparse

logging.basicConfig(level=logging.INFO)

parser = argparse.ArgumentParser(description='Optional app description')
parser.add_argument('-ip', type=str,
                    help='IP of this device')

parser.add_argument('-speed', type=float,
                    help='gameFactor, default 1.0')

parser.add_argument('-update', type=float,
                    help='seconds how often to update from main process')

parser.add_argument('-local', type=str,
                    help='Whether to create and update a local copy of the AI (adds delay) or query server for each action.'
                         'possible values: "local" or "remote"')

args = parser.parse_args()

update = 3600.0

local = 'local'

remoteee = False

if args.update:
    update = args.update
    # remoteee = True

if args.local:
    local = args.local

if local == 'remote':
    remoteee = True

print(f"Going to update {local}-y  at {update} seconds interval")

print('trying to launch policy client')
print(f"http://{args.ip}:55556")

# Setting update_interval to false, so it doesn't update in middle of games, will be manually updating it between games
client = PolicyClient(address=f"http://{args.ip}:55556", update_interval=False, inference_mode=local)
# client = PolicyClient(address=f"http://{args.ip}:55556", update_interval=60, inference_mode=local)


forced = True
root = None

env = BrawlEnv({'sleep': True})

print('trying to get initial eid')
episode_id = client.start_episode()

# if local == 'remote':
#     env.underlord.startNewGame()

# gameObservation = env.underlord.getObservation()
reward = 0
print('starting main loop')
replayList = []

update = True

runningReward = 0

counter = 0
runningCounter = 0
numLoops = 0

startTime = time.time()
endTime = time.time()

fps = 5
actionTimeOut = 1.0 / fps
print(f"action time: {actionTimeOut}")
actionTime = time.time()

env.restartRound()

x = 320
y = 240

epochActions = 4096
actionsUntilEpoch = 4096
epochNum = 0

needReset = False

numActions = 0
old_id = None

gameTime = time.time()

while True:

    # if needReset:
    #     env.releaseAllKeys()

    if numActions % 500 == 0:
        env.refreshWindow()

    elapsed_time = time.time() - actionTime
    if elapsed_time < actionTimeOut:
        time.sleep(actionTimeOut - elapsed_time)
        # continue

    actionTime = time.time()

    # average out to ~30actions a second
    counter = counter + 1
    runningCounter = runningCounter + 1
    endTime = time.time()
    if (endTime - startTime) > 1:
        print(f"actions per second: {counter}")
        startTime = time.time()
        counter = 0
        numLoops = numLoops + 1

    # timeStart = time.time()
    gameObservation, reward, gameOver = env.getObservation()
    # print(f"Time to get obs: {time.time() - timeStart}")
    # print('got observation')
    # print(gameObservation)
    # print(env.observation_space.contains(gameObservation))
    # print(reward, gameOver)

    # if not env.observation_space.contains(gameObservation):
    #     print(gameObservation)
    #     print("Not lined up 1")
    #     print(env.underlord.heroAlliances)
    #     sys.exit()

    action = None

    # timeStart = time.time()
    action = client.get_action(episode_id=episode_id, observation=gameObservation)
    # print(f"Time to get action: {time.time() - timeStart}")

    if needReset:

        print('starting reset!')

        if local == 'local':
            print("updating weights")
            client.update_policy_weights()
            print('finished updating weights')
        time.sleep(0.25)
        env.refreshWindow()
        time.sleep(0.25)
        # env.releaseAllKeys()
        env.restartRound()
        needReset = False
        reward = 0
        numLoops = 0
        runningCounter = 0
        counter = 0
        gameOver = False

        print('resetFinished!')
    else:
        # timeStart = time.time()
        env.act(action)
        # print(f"Time to act: {time.time() - timeStart}")
        # print('took action')

    # print('got action')

    runningReward += reward
    # act_time = time.time() - act_time
    # print("--- %s seconds to get do action ---" % (time.time() - start_time))
    # print(f"running reward: {reward}")

    client.log_returns(episode_id=episode_id, reward=reward)
    # print('logged returns')
    # Updating the model after every game in case there is a new one

    numActions = numActions + 1

    if gameOver and numActions > 25:

        # if elapsed_time > 20:
        #     print("restarting due to elapsed time")

        env.releaseAllKeys()
        env.resetHP()
        numActions = 0

        if reward <= -1:
            print(f"GAME OVER! WE Lost final reward: {runningReward}! Number of actions: {runningCounter}")
            env.gameLog += f"GAME OVER! WE Lost final reward: {runningReward}! Number of actions: {runningCounter}\\n"

        else:
            print(f"GAME OVER! WE Won final reward: {runningReward}! Number of actions: {runningCounter}")
            env.gameLog += f"GAME OVER! WE Won final reward: {runningReward}! Number of actions: {runningCounter}\n"

        env.gameLog += str(env.rewards)

        if runningReward >= -0.6:

            folderString = f"reward-{round(runningReward, 4)}-{epochNum}-{runningCounter}"

            fullString = os.getcwd() + "/replays/" + folderString

            if reward >= 0.0:
                fullString = os.getcwd() + "/replays/positive/" + folderString
            elif reward >= -0.3:
                fullString = os.getcwd() + "/replays/good/" + folderString
            else:
                fullString = os.getcwd() + "/replays/meh/" + folderString

            Path(fullString).mkdir(parents=True, exist_ok=True)
            f = open(fullString + "/log.txt", "a")
            f.write(env.gameLog)

            # this would be 10 minute long game

            video_fps = ((runningCounter - counter) / numLoops) + (counter / fps)

            if len(env.images) <= 6000:
                fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
                video = cv2.VideoWriter(fullString + '/video.avi', fourcc, video_fps, (x, y), False)

                for img in env.images:
                    # img = img * 255.0
                    video.write(img.astype('uint8'))
                video.release()
            env.images = []
        env.gameLog = ""

        actionsUntilEpoch = actionsUntilEpoch - runningCounter

        if actionsUntilEpoch < 0:
            epochNum = epochNum + 1

        print(f"Actions until epoch: {actionsUntilEpoch}, current epoch: {epochNum}")
        print(env.rewards)
        if actionsUntilEpoch < 0:
            actionsUntilEpoch = epochActions

        runningReward = 0
        runningCounter = 0
        reward = 0
        numLoops = 0
        # need to call a reset of env here
        finalObs, reward, gameOver = env.getObservation()

        old_id = episode_id
        client.end_episode(episode_id=episode_id, observation=finalObs)

        episode_id = client.start_episode(episode_id=None)

        needReset = True
        time.sleep(0.25)

    # print('finished logging step')

    # print("--- %s seconds to get finish logging return ---" % (time.time() - start_time))

    # replayList.append((gameObservation, action, reward))

    # print( f"Round: {gameObservation[5]} - Time Left: {gameObservation[12]} - Obs duration: {obs_time} - Act
    # duration: {act_time} - Overall duration: {time.time() - start_time}")

Error log:

INFO:ray.rllib.evaluation.sampler:Raw obs from env: { 'c14d2a6b5fd645dbb34e18f7278d1f4d': { 'agent0': np.ndarray((240, 320, 1), dtype=float64, min=0.0, max=0.996, mean=0.666)}}
INFO:ray.rllib.evaluation.sampler:Info return from env: {'c14d2a6b5fd645dbb34e18f7278d1f4d': {'agent0': {}}}
INFO:ray.rllib.evaluation.sampler:Preprocessed obs: np.ndarray((240, 320, 1), dtype=float64, min=0.0, max=0.996, mean=0.666)
INFO:ray.rllib.evaluation.sampler:Filtered obs: np.ndarray((240, 320, 1), dtype=float64, min=0.0, max=0.996, mean=0.666)
WARNING:ray.rllib.evaluation.collectors.agent_collector:Provided tensor
[[[0.23529412]
  [0.23137255]
  [0.22745098]
  ...
  [0.21960784]
  [0.22745098]
  [0.23137255]]

 [[0.23529412]
  [0.23137255]
  [0.22352941]
  ...
  [0.21568627]
  [0.22352941]
  [0.22745098]]

 [[0.23137255]
  [0.23137255]
  [0.21960784]
  ...
  [0.21176471]
  [0.21960784]
  [0.22352941]]

 ...

 [[0.23529412]
  [0.23137255]
  [0.22745098]
  ...
  [0.14509804]
  [0.14901961]
  [0.15294118]]

 [[0.23529412]
  [0.23137255]
  [0.22745098]
  ...
  [0.14901961]
  [0.15294118]
  [0.15686275]]

 [[0.23529412]
  [0.23529412]
  [0.23137255]
  ...
  [0.15294118]
  [0.15686275]
  [0.15686275]]]
 does not match space of view requirements obs.
Provided tensor has shape (240, 320, 1) and view requirement has shape shape (240, 320, 1).Make sure dimensions match to resolve this warning.

Hi @Denys_Ashikhin,

Been a while. Hope all is well.

The data types do not match. In the server, you specified float32:

ppo_config.observation_space = Box(low=0, high=1, shape=(y, x, 1), dtype=np.float32)

In the client, you are sending float64:

INFO:ray.rllib.evaluation.sampler:Preprocessed obs: np.ndarray((240, 320, 1), dtype=float64, min=0.0, max=0.996, mean=0.666)
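
Casting the observation on the client side before calling client.get_action() should line the two up. A minimal sketch (obs_float64 here just stands in for the array your env.getObservation() returns):

import numpy as np

# Match the server's Box(low=0, high=1, shape=(240, 320, 1), dtype=np.float32).
obs_float64 = np.random.rand(240, 320, 1)           # stand-in for the client observation
obs_float32 = obs_float64.astype(np.float32)        # cast before client.get_action(...)
print(obs_float64.dtype, "->", obs_float32.dtype)   # float64 -> float32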

Hi,
I get the exact same warning, and in my case I'm only training locally with no policy client/server:

config = (
    PPOConfig()
    .resources(num_gpus=1, num_cpus_per_worker=1, num_gpus_per_worker=0.1)
    .environment("myEnv", env_config=env_config, disable_env_checking=True)
    .rollouts(
        num_rollout_workers=1,
        batch_mode="complete_episodes",
        preprocessor_pref=None,
        observation_filter="NoFilter",
        compress_observations=False,
    )
    .framework(framework="tf2", eager_tracing=False)
    .experimental(_disable_preprocessor_api=True)
)
algo = config.build()
result = algo.train()

I'm using the Ray nightly with a custom gymnasium environment; the observation space is consistent in shape and data type (float32) across all the outputs:

2023-01-10 07:03:16,552 INFO algorithm_config.py:2798 -- Executing eagerly (framework='tf2'), with eager_tracing=tf2. For production workloads, make sure to set eager_tracing=True  in order to match the speed of tf-static-graph (framework='tf'). For debugging purposes, `eager_tracing=False` is the best choice.
(RolloutWorker pid=3800) 2023-01-10 07:03:28,441        INFO eager_tf_policy_v2.py:75 -- Creating TF-eager policy running on GPU.
(RolloutWorker pid=3800) 2023-01-10 07:03:29,078        INFO policy.py:1196 -- Policy (worker=1) running on 0.1 GPUs.
(RolloutWorker pid=3800) 2023-01-10 07:03:29,078        INFO eager_tf_policy_v2.py:94 -- Found 1 visible cuda devices.
2023-01-10 07:03:29,637 INFO worker_set.py:309 -- Inferred observation/action spaces from remote worker (local worker has no env): {'default_policy': (Box(0.0, 1.0, (12, 32), float32), Box(-1.0, 1.0, (3,), float32)), '__env__': (Box(0.0, 1.0, (12, 32), float32), Box(-1.0, 1.0, (3,), float32))}
2023-01-10 07:03:29,682 INFO eager_tf_policy_v2.py:75 -- Creating TF-eager policy running on GPU.
2023-01-10 07:03:30,795 INFO policy.py:1196 -- Policy (worker=local) running on 1 GPUs.
2023-01-10 07:03:30,795 INFO eager_tf_policy_v2.py:94 -- Found 1 visible cuda devices.
2023-01-10 07:03:31,460 INFO rollout_worker.py:2040 -- Built policy map: <PolicyMap lru-caching-capacity=100 policy-IDs=['default_policy']>
2023-01-10 07:03:31,460 INFO rollout_worker.py:2041 -- Built preprocessor map: {'default_policy': None}
2023-01-10 07:03:31,460 INFO rollout_worker.py:757 -- Built filter map: defaultdict(<class 'ray.rllib.utils.filter.NoFilter'>, {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x0000017DF9ACA4A0>})
2023-01-10 07:03:31,488 INFO algorithm_config.py:2798 -- Executing eagerly (framework='tf2'), with eager_tracing=tf2. For production workloads, make sure to set eager_tracing=True  in order to match the speed of tf-static-graph (framework='tf'). For debugging purposes, `eager_tracing=False` is the best choice.
(RolloutWorker pid=6544) 2023-01-10 07:03:38,690        INFO eager_tf_policy_v2.py:75 -- Creating TF-eager policy running on GPU.
(RolloutWorker pid=6544) 2023-01-10 07:03:39,643        INFO policy.py:1196 -- Policy (worker=1) running on 0.1 GPUs.
(RolloutWorker pid=6544) 2023-01-10 07:03:39,643        INFO eager_tf_policy_v2.py:94 -- Found 1 visible cuda devices.
2023-01-10 07:03:40,168 INFO trainable.py:172 -- Trainable.setup took 23.562 seconds. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads.
(RolloutWorker pid=3800) 2023-01-10 07:03:40,179        INFO rollout_worker.py:905 -- Generating sample batch of size 4000
(RolloutWorker pid=3800) 2023-01-10 07:03:40,638        INFO sampler.py:609 -- Raw obs from env: { 0: { 'agent0': np.ndarray((12, 32), dtype=float32, min=0.0, max=1.0, mean=0.439)}}
(RolloutWorker pid=3800) 2023-01-10 07:03:40,638        INFO sampler.py:610 -- Info return from env: {0: {'agent0': {}}}
(RolloutWorker pid=3800) 2023-01-10 07:03:40,638        INFO sampler.py:857 -- Filtered obs: np.ndarray((12, 32), dtype=float32, min=0.0, max=1.0, mean=0.439)
(RolloutWorker pid=3800) 2023-01-10 07:03:40,645        WARNING agent_collector.py:176 -- Provided tensor
(RolloutWorker pid=3800)  does not match space of view requirements obs.
(RolloutWorker pid=3800) Provided tensor has shape (12, 32) and view requirement has shape shape (12, 32).Make sure dimensions match to resolve this warning.
(RolloutWorker pid=3800) 2023-01-10 07:03:40,646        INFO sampler.py:1143 -- Inputs to compute_actions():
(RolloutWorker pid=3800)
(RolloutWorker pid=3800) { 'default_policy': [ { 'data': { 'agent_id': 'agent0',
(RolloutWorker pid=3800)                                   'env_id': 0,
(RolloutWorker pid=3800)                                   'info': {},
(RolloutWorker pid=3800)                                   'obs': np.ndarray((12, 32), dtype=float32, min=0.0, max=1.0, mean=0.439),
(RolloutWorker pid=3800)                                   'prev_action': None,
(RolloutWorker pid=3800)                                   'prev_reward': 0.0,
(RolloutWorker pid=3800)                                   'rnn_state': None},
(RolloutWorker pid=3800)                         'type': '_PolicyEvalData'}]}
(RolloutWorker pid=3800)
(RolloutWorker pid=3800) 2023-01-10 07:03:40,919        INFO sampler.py:1170 -- Outputs of compute_actions():
(RolloutWorker pid=3800) 
(RolloutWorker pid=3800) { 'default_policy': ( np.ndarray((1, 3), dtype=float32, min=-0.605, max=0.521, mean=-0.005),
(RolloutWorker pid=3800)                       [],
(RolloutWorker pid=3800)                       { 'action_dist_inputs': np.ndarray((1, 6), dtype=float32, min=-0.009, max=0.005, mean=-0.001),
(RolloutWorker pid=3800)                         'action_logp': np.ndarray((1,), dtype=float32, min=-3.063, max=-3.063, mean=-3.063),
(RolloutWorker pid=3800)                         'action_prob': np.ndarray((1,), dtype=float32, min=0.047, max=0.047, mean=0.047),
(RolloutWorker pid=3800)                         'vf_preds': np.ndarray((1,), dtype=float32, min=0.004, max=0.004, mean=0.004)})}
(RolloutWorker pid=3800)
(RolloutWorker pid=3800) 2023-01-10 07:03:40,935        WARNING agent_collector.py:176 -- Provided tensor
(RolloutWorker pid=3800) 0
(RolloutWorker pid=3800)  does not match space of view requirements t.
(RolloutWorker pid=3800) Provided tensor has shape () and view requirement has shape shape ().Make sure dimensions match to resolve this warning.

@mannyv @Denys_Ashikhin I see this warning comes from agent_collector.py, line 149, where they specify that:

# We only check for the shape here, because conflicting dtypes are often
# because of float conversion

So this warning should not be caused by a dtype inconsistency, since they don't check dtypes at all.

I filed an issue on GitHub, in case you want to upvote it :)


Hey @mannyv ,

I have been; work caught up with me and I got a little burnt out on RL issues with little to show for it (which is just a lack of practical experience on my part), but now I'm back to dip my toes in.

Looks like this is a pretty simple fix if I cast the client floats to float32. I will try that later; I was too exhausted after hours of migrating my code to the new Ray API after work and missed this. I went and upvoted your GitHub post (not sure how exactly, but I left a thumbs up; let me know if there is more to it and I'll do it), and I'll reply once I get a chance to test matching dtypes.

Thanks!

P.S.
This means that for training purposes it should still be fine, though?

Hi @PREJAN,

Good catch. The bug is on line 171. It is comparing an integer to the shape tuple returned by np.shape.

np.sum(
    tree.map_structure(
        lambda x: np.product(getattr(x, "shape")),
        flatten_space(vr.space),
    )
) == np.shape(data)

The following got rid of the error for me:

np.sum(
    tree.map_structure(
        lambda x: np.product(getattr(x, "shape")),
        flatten_space(vr.space),
    )
) == np.prod(np.shape(data))
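
A quick standalone illustration of the mismatch (hypothetical values, not the actual RLlib source): the summed element count is an integer, while np.shape(data) is a tuple, so the original comparison never holds.

import numpy as np

data = np.zeros((240, 320, 1), dtype=np.float32)
expected_elems = np.prod((240, 320, 1))  # total element count of the space: 76800

print(expected_elems == np.shape(data))           # [False False False]  (int vs. shape tuple)
print(expected_elems == np.prod(np.shape(data)))  # True                 (int vs. int)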

git blame CC: @arturn


Thanks, I see @arturn has already been assigned the issue :)
Using np.prod(np.shape(data)) solves it!

It was my first GitHub issue ever; I thought they could be upvoted ;)

But a thumbs-up will do :)

Thanks!

Thanks for raising this. These warnings were an attempt of mine to catch mismatches, but I believe they have done no good. I tested this with a nested space, and np.prod() does not work for that case. My PR removes the warning entirely; we will probably rely on @kourosh's spec-checking decorators in the future.


Hi @arturn,

For the nested space you probably have to use tree.map_structure on both sides of the comparison.
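
Roughly along these lines (a hand-rolled sketch of the idea, not a tested patch against agent_collector.py): compare element counts leaf by leaf instead of collapsing them into one integer.

import numpy as np
import tree  # dm-tree, the structure utility RLlib already uses
from gymnasium.spaces import Box

# A nested space and a matching nested observation (illustrative values).
space = {"img": Box(0, 1, (4, 4, 1)), "vec": Box(-1, 1, (3,))}
data = {"img": np.zeros((4, 4, 1), np.float32), "vec": np.zeros(3, np.float32)}

space_counts = tree.map_structure(lambda s: int(np.prod(s.shape)), space)
data_counts = tree.map_structure(lambda x: int(np.prod(np.shape(x))), data)
print(space_counts == data_counts)  # True only when every leaf matches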

Thanks @mannyv, yeah, that should have resolved it, but the check is not only poorly crafted; we also have better tools and better places to do these checks now and in the near future. Thanks for providing a solution though, I very much appreciate your help!

So just for my own understanding (and anyone's in the future): this check only happens once at the start, and it is only a warning that won't affect training?

And how would we do this check ourselves if it is removed in the future? (In my case I'm not really even using an env, since I am handling the external game communication myself and just passing actions/rewards/observations.)

This check only happens the first time a batch passes through.
In the future, anywhere you want to check dimensions yourself, you can try doing so with what is currently under ray/rllib/models/specs at master · ray-project/ray · GitHub.
Be aware that this is under development and currently not a public API.
This is also what we will use in the future to catch mismatches.
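
Until then, a plain hand-rolled check works too (a simple assertion sketch, not the rllib/models/specs API):

import numpy as np
from gymnasium.spaces import Box

# Declared space and an observation to verify against it (illustrative shapes).
obs_space = Box(low=0, high=1, shape=(240, 320, 1), dtype=np.float32)
obs = np.zeros((240, 320, 1), dtype=np.float32)

assert obs.shape == obs_space.shape, f"shape {obs.shape} != {obs_space.shape}"
assert obs.dtype == obs_space.dtype, f"dtype {obs.dtype} != {obs_space.dtype}"
assert obs_space.contains(obs), "observation outside space bounds"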


@mannyv
So I did indeed fix this by casting my observation: image = image.astype("float32")

However, I get another error that I forgot to include in my original post:


Any idea what’s happening there?

@Denys_Ashikhin,

Based on @arturn's comments, I think you can safely ignore those. There is a bug in how they compare shapes, and they are going to take that check out in the next release.


Sounds good; these errors were a bit different from the one above, so I wanted to check on them as well. Thanks for your help!
Thanks for your help!

Yeah, absolutely; as long as we don't crash elsewhere with a less readable error, everything should be fine. Thanks again for raising this!
