Flux v0.12.7

Released by github-actions on 29 Sep 11:31 · 1036 commits to master since this release.
Closed issues:
- Poor performance relative to PyTorch (#886)
- Recur struct's fields are not type annotated, causing run-time dispatch and a significant slowdown (#1092)
- Bug: lower degree polynomial substitute in gradient chain! (#1188)
- Very slow precompile (>50min) on julia 1.6.0 on Windows (#1554)
- Do not initialize CUDA during precompilation (#1597)
- GRU implementation details (#1671)
- Parallel layer doesn't need to be tied to array input (#1673)
- update! a scalar parameter (#1677)
- Support NamedTuples for Container Layers (#1680)
- Freezing layer parameters still computes all gradients (#1688)
- A demo is 1.5x faster in Flux than TensorFlow on CPU, but 3.0x slower when using CUDA (#1694)
- Problems with a mixed CPU/GPU model (#1695)
- Flux tests with master fail with signal 11 (#1697)
- [Q] How does Flux.jl work on Apple Silicon (M1)? (#1701)
- Typos in documents (#1706)
- Fresh install of Flux giving errors in precompile (#1710)
- Flux.gradient returns dict of params and nothing (#1713)
- Weight matrix not updating with a user defined initial weight matrix (#1717)
- [Documentation] No logsumexp in NNlib page (#1718)
- Flattened data vs Flux.flatten layer in MNIST MLP in the model zoo (#1722)
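
Several of the closed issues above concern parameter handling; for instance, the usual way to freeze a layer (the subject of #1688) is to remove its parameters from the `Params` collection passed to the optimiser. A minimal sketch, with an illustrative model rather than one taken from the issues:

```julia
using Flux

# Hypothetical two-layer model; suppose we want to train only the second layer.
model = Chain(Dense(10, 5, relu), Dense(5, 2))

ps = Flux.params(model)
# Freeze the first Dense layer by deleting its parameters from `ps`.
delete!(ps, model[1].weight)
delete!(ps, model[1].bias)

# `gradient` now only returns gradients for the remaining parameters.
# (#1688 notes that gradients for frozen layers may still be computed internally.)
gs = gradient(() -> sum(model(rand(Float32, 10))), ps)
```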
Merged pull requests:
- Add WIP docstrings to CPU and GPU (#1632) (@logankilpatrick)
- Add section on Checking GPU Availability (#1633) (@logankilpatrick)
- fix README (#1668) (@DhairyaLGandhi)
- Generalise Parallel forwards pass (#1674) (@DhairyaLGandhi)
- Adding GRUv3 support. (#1675) (@mkschleg)
- Support NamedTuples for Chain + Parallel (#1681) (@mcabbott)
- Adding support for folding RNNs over 3d arrays (#1686) (@mkschleg)
- Update nnlib.md (#1689) (@CarloLucibello)
- fix typo (#1691) (@foldfelis)
- Typo fix (#1693) (@lukemerrick)
- Remove out of date dead code in Conv layers (#1702) (@ToucheSir)
- Gradient definitions for cpu & gpu (#1704) (@mcabbott)
- Fix #1706 (#1707) (@rongcuid)
- Add GPU Adaptor (#1708) (@DhairyaLGandhi)
- Initialize CUDA lazily. (#1711) (@maleadt)
- Update community.md to reflect help wanted != good first issue (#1712) (@logankilpatrick)
- Fix link in README (#1716) (@nilsmartel)
- Add logsumexp to docs (#1719) (@DhairyaLGandhi)
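
As a usage sketch of the NamedTuple support merged in #1681 (layer names below are illustrative), layers in a `Chain` can now be given names via keyword arguments and looked up by name:

```julia
using Flux

# Chain built from keyword arguments, stored internally as a NamedTuple (#1681).
model = Chain(enc = Dense(10, 5, relu), dec = Dense(5, 2))

x = rand(Float32, 10)
y = model(x)          # forward pass works exactly as with a positional Chain

model.layers.enc      # the layers NamedTuple allows access by name
```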