Hi, I’m trying to use TuneGridSearchCV for hyperparameter tuning, but after installation I ran into this error:
A worker died or was killed while executing task ffffffffffffffff3bf0c85601000000.
A worker died or was killed while executing task ffffffffffffffffa7357af301000000.
The actor or task with ID ffffffffffffffffe1083dbe01000000 cannot be scheduled right now. It requires {CPU: 1.000000} for placement, but this node only has remaining {memory: 4.052734 GiB}, {CPU: 4.000000}, {node:192.168.1.71: 1.000000}, {object_store_memory: 1.367188 GiB}. In total there are 0 pending tasks and 4 pending actors on this node. This is likely due to all cluster resources being claimed by actors. To resolve the issue, consider creating fewer actors or increase the resources available to this Ray cluster. You can ignore this message if this Ray cluster is expected to auto-scale.
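If it helps, my reading of that last message is that each tuning trial is an actor requesting {CPU: 1}, and Ray cannot place any more of them. Below is a minimal sketch of the two remedies the message itself suggests (fewer actors, or more resources for the cluster); the num_cpus and n_jobs values are just illustrative, not something I have tested:

import ray
from sklearn.tree import DecisionTreeClassifier
from ray.tune.sklearn import TuneGridSearchCV

# Option 1: tell Ray explicitly how many CPUs this node may use.
ray.init(num_cpus=4)  # illustrative; the log above reports 4 CPUs

# Option 2: cap the number of concurrent trials (actors) via n_jobs
# instead of n_jobs=-1 (one actor per available CPU).
grid_search = TuneGridSearchCV(
    DecisionTreeClassifier(),
    {'max_depth': [5, 10]},  # tiny grid, just for illustration
    n_jobs=2,                # fewer actors than available CPUs
)

Is one of these the right direction?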
My code is below:
import time

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold
from ray.tune.sklearn import TuneGridSearchCV

seed = 1  # placeholder; set earlier in my full script
clf = DecisionTreeClassifier()

# hyperparameter grid to search
parameter_grid = {
    'criterion': ['gini', 'entropy'],
    'splitter': ['best', 'random'],
    'max_depth': [5, 8, 10, 15, 25],
    'min_samples_split': [2, 5, 10, 15, 20, 25],
    'min_samples_leaf': [1, 2, 5, 10, 15, 20],
    'random_state': [seed],
}
print("TUNING ############################")
startgrid=time.time()
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
grid_searchdt = TuneGridSearchCV(clf, parameter_grid, cv =3, verbose = 0, n_jobs = -1,early_stopping=False,max_iters=10)
bestDT = grid_searchdt.fit(x_train, y_train)
print(bestDT.best_params_)
best_grid_dt = bestDT.best_estimator_
print(best_grid_dt)
endgrid = time.time()
print("Grid time: "+str(endgrid-startgrid))
Could you please help me with this error? Thanks