Confusion matrix comparisons for the predict function
(@verdoes and @grobler , this is the issue for questions on the confusion matrix.)
Hi @c.a.marocico ,
I have run the models that you provided through the predict function and generated my own confusion matrices.
Interestingly, while the matrices from the validation set are identical, those from the training set are not.
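For reference, this is roughly how I produced the matrices. It's an illustrative sketch using scikit-learn: the synthetic dataset, model choice, and split are placeholders standing in for the actual pipeline, not your code.

```python
# Illustrative sketch only: the dataset, model, and split below are
# placeholders, not the actual pipeline from this project.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Placeholder imbalanced dataset standing in for the real one
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, stratify=y, random_state=0)

# class_weight maps each label to its weight; setting every class to 1
# means the imbalance is effectively ignored
model = RandomForestClassifier(class_weight={0: 1, 1: 1}, random_state=0)
model.fit(X_train, y_train)

# Confusion matrices for both splits, as compared in this issue
cm_train = confusion_matrix(y_train, model.predict(X_train))
cm_val = confusion_matrix(y_val, model.predict(X_val))
print(cm_train)
print(cm_val)
```

Lowering the weight on the minority class (e.g. `{0: 1, 1: 0.0001}`) is what produces the matrices discussed below.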
I have a few questions:
- Is the training set that you used to run your models the same as the one I have?
- Do you expect the model to get better (or worse) as the class weight decreases, given that a class weight of 1 means the model essentially ignores the class imbalance?
- Your confusion matrix from the training set, with a class weight of 0.0001, is below:
Do you expect the model to classify everything as 'asteroids' at this class-weight value? (For context, here is my confusion matrix under the same settings:)