It seems like there's some sort of memory leak (TensorFlow not fully clearing the graph?) that means every subsequent order runs slower than the previous one.
The interim recommended fix is to run orders in small batches.
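One way to apply that small-batch workaround, assuming the per-order optimization can be wrapped in a function of your own (here `optimize_single_order` is just a placeholder for whatever your script does per order), is to run each batch in a fresh worker process, so that any memory TensorFlow fails to release is returned to the OS when the process exits. A minimal sketch:

```python
# Sketch of the "small batches" workaround: run each batch of orders in a
# separate worker process so memory that TensorFlow does not release is
# freed when that process exits. `optimize_single_order` is a placeholder
# for the per-order optimization you already have; adapt names as needed.
import multiprocessing as mp

def optimize_single_order(order_index):
    # Placeholder: build the model for this order, run the optimization,
    # and write results to disk so nothing needs to survive the process.
    pass

def run_batch(order_indices):
    for idx in order_indices:
        optimize_single_order(idx)

if __name__ == "__main__":
    all_orders = list(range(72))   # e.g. 72 echelle orders
    batch_size = 8                 # tune so one batch fits in memory

    for start in range(0, len(all_orders), batch_size):
        batch = all_orders[start:start + batch_size]
        # A fresh process per batch: when it exits, its memory is released.
        p = mp.Process(target=run_batch, args=(batch,))
        p.start()
        p.join()
```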
Hi,
for me the issue in this context is actually not the speed that suffers, but the memory that is used up.
Order after order, memory accumulates (and is not released), so that for typical data sets with around 100 epochs my machine uses up all of its memory and the process freezes.
Edit:
Actually, the same happens in the RV computation, but it is a more severe problem in the regularization parameter optimization.
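To confirm that it is memory growing order after order (rather than something else stalling the machine), one simple check, assuming a Linux-like system, is to log the process's peak RSS between orders; resetting the TF1-style default graph in between can also be tried, though whether that actually releases anything depends on what still holds references. A rough sketch:

```python
# Rough diagnostic for per-order memory growth: log peak RSS after each
# order and try dropping the accumulated default graph in between. The
# per-order loop body is a placeholder for your own optimization code.
import gc
import resource

import tensorflow as tf

def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS);
    # assuming Linux here.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

for order in range(72):
    # ... build the graph and run the optimization for this order ...
    tf.compat.v1.reset_default_graph()  # discard the accumulated graph
    gc.collect()
    print(f"order {order}: peak RSS ~{peak_rss_mb():.0f} MB")
```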