That setting is for completely random actions. If you set it to a non-zero value, the agent will not use the policy to determine actions; instead it will sample a random value from the action space. It keeps doing that for as long as the total number of sampled steps is less than your value.
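As a rough sketch of how that looks in an RLlib config (legacy config-dict style; the exact keys and the value of 10_000 here are illustrative, not from your setup):

```python
config = {
    "env": "CartPole-v1",
    "exploration_config": {
        "type": "StochasticSampling",
        # Purely random actions until this many total env steps have been sampled,
        # after which the policy takes over.
        "random_timesteps": 10_000,
    },
}
```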
This is different behavior from your previous question. After random_timesteps have elapsed, it starts using the policy to generate actions. StochasticSampling will add noise to the "logits" produced by the policy and then use these noisy values to choose an action.
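Conceptually (this is a toy sketch, not RLlib's actual code), that "add noise to the logits, then pick" step is equivalent to sampling from the softmax of the logits, e.g. via the Gumbel-max trick:

```python
import numpy as np

def gumbel_max_sample(logits: np.ndarray, rng: np.random.Generator) -> int:
    # Add Gumbel noise to each logit, then take the action with the largest
    # noisy value. This is equivalent to sampling from softmax(logits).
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return int(np.argmax(logits + gumbel_noise))

rng = np.random.default_rng(0)
print(gumbel_max_sample(np.array([2.0, 0.5, -1.0]), rng))  # usually action 0
```

So the policy still drives the choice, but the added noise means higher-logit actions are only more likely, not guaranteed, to be selected.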