If I set a `Checkpoint` callback and a `max_epochs` parameter, `max_epochs` is not the true maximum number of epochs; instead, training runs for the checkpoint's epoch PLUS `max_epochs`.
For example, if I set `max_epochs=400` and load a checkpoint saved at epoch 125, the fit loop starts at epoch 125 and ends at epoch 525, which is not what I expected.
I noticed that `NeuralNet.fit_loop` has a parameter called `epochs` whose default value is `None`, which falls back to `max_epochs`. As a result, training always loops for `max_epochs` iterations regardless of the current epoch. It might be better if the default were instead:

```python
max_epochs - net.history[-1, 'epoch']
```
Or, modify the for-loop in the `fit_loop` function:

```python
# net.py, line 786
# for _ in range(epochs):
for _ in range(self.history[-1, 'epoch'], epochs):
```
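To illustrate the difference between the two behaviors, here is a minimal, framework-free sketch. It mocks the history as a plain list of epoch numbers; `epochs_run` and the `resume_aware` flag are hypothetical names for illustration, not skorch API:

```python
def epochs_run(history, max_epochs, resume_aware):
    """Simulate how many epochs a fit loop runs after resuming.

    history: list of epoch numbers already recorded (mock of net.history).
    resume_aware: if True, treat max_epochs as an absolute cap
    (the proposed behavior); if False, always run max_epochs more
    epochs (the current behavior described above).
    Returns the final epoch number reached.
    """
    start = history[-1] if history else 0
    if resume_aware:
        n_new = max(max_epochs - start, 0)  # remaining epochs up to the cap
    else:
        n_new = max_epochs  # always loop max_epochs more times
    for i in range(n_new):
        history.append(start + i + 1)
    return history[-1] if history else 0

# Current behavior: resuming from epoch 125 with max_epochs=400
# runs 400 additional epochs, finishing at epoch 525.
print(epochs_run(list(range(1, 126)), 400, resume_aware=False))  # -> 525

# Proposed behavior: max_epochs is an absolute cap, so only
# 400 - 125 = 275 additional epochs run, finishing at epoch 400.
print(epochs_run(list(range(1, 126)), 400, resume_aware=True))   # -> 400
```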
Version: 0.10.0