
v0.7.0

@bwpriest released this 25 Aug 18:28

v0.7.0 introduces major changes to the API, several new model features, and a major overhaul of the library's backend. In detail, the changes include:

  • The API now takes far fewer string arguments and instead largely expects objects directly. See the univariate regression tutorial for details; an illustrative sketch follows this list.
  • Added explicit distance models, currently supporting isotropic and anisotropic distances. See the anisotropic tutorial for details of the anisotropic distance model.
  • Added several new loss functions. See the new loss function tutorial for details.
  • Added several experimental features that are not fully hardened:
    • Heteroscedastic noise model, where each training observation has a separate Gaussian noise prior.
    • Hierarchical nonstationary parameters, where a scalar hyperparameter takes several (possibly optimized) knot values throughout the domain and a lower-level GP interpolates its value at new locations (see the conceptual sketch after this list).
    • An alternative optimization workflow that reconstructs neighborhoods during optimization, meant to handle cases like anisotropy where neighborhoods change during optimization.
  • Added experimental notebooks investigating several of these experimental features.
  • Made major changes to the library's backend. Control flow is increasingly functional: member functions such as MuyGPS.posterior_mean() are now composed at object creation time according to the chosen model components (a toy sketch of the pattern follows this list). The specific composition mechanism is subject to change.
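
As an illustration of the object-based API, the sketch below builds a small model in the spirit of the univariate regression tutorial. The import paths, class names, and arguments (MuyGPS, Matern, IsotropicDistortion, ScalarHyperparameter, HomoscedasticNoise, eps) are assumptions recalled from the tutorial and may not match the released code exactly; consult the tutorial for the authoritative version.

```python
# Illustrative only: import paths, class names, and arguments are assumptions
# modeled on the univariate regression tutorial, not a verbatim v0.7.0 API.
from MuyGPyS.gp import MuyGPS
from MuyGPyS.gp.kernels import Matern
from MuyGPyS.gp.distortion import IsotropicDistortion
from MuyGPyS.gp.hyperparameter import ScalarHyperparameter
from MuyGPyS.gp.noise import HomoscedasticNoise

# Kernel, distance (distortion), and noise choices are passed as objects
# rather than selected by string arguments.
muygps = MuyGPS(
    kernel=Matern(
        nu=ScalarHyperparameter("log_sample", (0.1, 5.0)),
        metric=IsotropicDistortion("l2", length_scale=ScalarHyperparameter(1.0)),
    ),
    eps=HomoscedasticNoise(1e-5),
)
```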
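
The hierarchical nonstationary hyperparameter can be pictured as GP interpolation over knot values. The following self-contained NumPy sketch illustrates the concept only; it is not MuyGPyS code, and the names (rbf, hierarchical_value, knots) are invented for the illustration.

```python
import numpy as np

def rbf(a, b, length_scale=0.5):
    """Pairwise RBF kernel between the rows of a and b."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale**2))

# Knot locations in the domain and the (possibly optimized) scalar
# hyperparameter values attached to them.
knots = np.array([[0.0], [0.5], [1.0]])
knot_values = np.array([0.2, 1.0, 0.4])

def hierarchical_value(x, nugget=1e-8):
    """Evaluate the hyperparameter at new locations x via a GP posterior mean."""
    Kxk = rbf(x, knots)                                    # (n, k) cross-covariance
    Kkk = rbf(knots, knots) + nugget * np.eye(len(knots))  # (k, k) knot covariance
    return Kxk @ np.linalg.solve(Kkk, knot_values)

print(hierarchical_value(np.array([[0.25], [0.75]])))      # values between the knots
```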
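
The backend change is easiest to see as a pattern: instead of branching on model choices inside every method call, the relevant functions are composed once when the model object is built. The toy class below illustrates that pattern in plain NumPy; it is not MuyGPyS internals, and the library's actual composition is, as noted above, subject to change.

```python
from typing import Callable

import numpy as np

def _homoscedastic_perturb(Knn: np.ndarray, eps: float) -> np.ndarray:
    # Apply a constant noise prior to the nearest-neighbor covariance block.
    return Knn + eps * np.eye(Knn.shape[0])

def _posterior_mean(Knn: np.ndarray, Kcross: np.ndarray, y_nn: np.ndarray) -> np.ndarray:
    # Conditional mean given the (perturbed) neighborhood covariance.
    return Kcross @ np.linalg.solve(Knn, y_nn)

class ToyModel:
    """Composes its posterior mean at construction time from model choices."""

    def __init__(self, eps: float):
        # The noise model is baked into the composed callable, so calling
        # posterior_mean() later involves no branching on model options.
        self.posterior_mean: Callable = lambda Knn, Kcross, y_nn: _posterior_mean(
            _homoscedastic_perturb(Knn, eps), Kcross, y_nn
        )

# Usage with random stand-in covariance blocks:
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Knn = A @ A.T + 1e-3 * np.eye(5)   # symmetric positive definite stand-in
Kcross = rng.normal(size=(1, 5))
y_nn = rng.normal(size=(5,))
print(ToyModel(eps=1e-5).posterior_mean(Knn, Kcross, y_nn))
```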

What's Changed

  • Refactoring math to remove global numpy dependency by @bwpriest in #95
  • Housekeeping to tidy up the import interface by @bwpriest in #97
  • All optimization functions now rely on kwargs form by @bwpriest in #98
  • removed all *_from_indices functions from the library (except MuyGPyS.examples) by @bwpriest in #100
  • Removing MuyGPS.regress in favor of separate mean and variance functions by @bwpriest in #102
  • Making all member functions modular pure functions by @bwpriest in #104
  • coalesced noise model application logic to single function by @bwpriest in #107
  • Feature/heteroscedasticity by @alecmdunton in #109
  • Exposing object creation to the user by @bwpriest in #118
  • Refactored code so length scale lives in distortion models by @alecmdunton in #122
  • Fixed bug that was fixing length_scale in optimizations by @bwpriest in #128
  • Reorganize hyperparameters and init hierarchical nonstationary hyperparameter by @igoumiri in #129
  • Anisotropic feature integrated into library. Tests added to gp.py and kernel.py by @alecmdunton in #127
  • Added performance benchmarking script by @bwpriest in #135
  • Updated format throughout to match PEP by @bwpriest in #133
  • fixed tests/optimize.py to actually work by @bwpriest in #139
  • removed jax cuda extras; user must now install manually by @bwpriest in #140
  • Added pretty print overloads for MuyGPS classes by @bwpriest in #142
  • Iss/141 by @alecmdunton in #143
  • All torch tests passing by @alecmdunton in #144
  • removed hardcoded float commands in examples/muygps_torch by @alecmdunton in #146
  • Streamlined doc notebooks to obscure sampling and plotting code by @bwpriest in #148
  • Refactored distortion class to take metric Callable as argument by @alecmdunton in #147
  • Fix develop tests being skipped by @igoumiri in #151
  • Fixed test harness errors introduced by merge. Updated some documentation. by @bwpriest in #152
  • Feature: hierarchical RBF by @igoumiri in #145
  • Added opt parameter indirection in preparation for hierarchical parameters by @bwpriest in #153
  • Optimization for hierarchical GPs by @igoumiri in #154
  • Fix knot optimization by @igoumiri in #156
  • pulled boilerplate functions out of optimization pipelines by @bwpriest in #157
  • minor nonstationary notebook cleanup [skip ci]. by @bwpriest in #159
  • removed regress api tutorial [skip ci] by @bwpriest in #160
  • [skip ci] refactored nb names and added a flat optimization for sanit… by @bwpriest in #161
  • Adding new pseudo Huber loss function for outlier robustness. by @bwpriest in #164
  • Added a variance-regularized pseudo-Huber loss function similar in fo… by @bwpriest in #166
  • fixed bug to actually forward the looph function by @bwpriest in #167
  • first implementation commit by @akilandrews in #165
  • documentation nb updates/cleanup by @bwpriest in #169
  • Update .readthedocs.yaml to supported python version [skip ci] by @bwpriest in #170
  • Roll back required ipython versions in setup.py [skip ci] by @bwpriest in #171
  • fixed RTD builds by @bwpriest in #172
  • precomputing torch tutorial as it does not seem possible to run on RT… by @bwpriest in #173
  • anisotropic tutorial and nb cleanup by @bwpriest in #175
  • Added loss function tutorial. Fixed some mistakes in the documentatio… by @bwpriest in #176
  • moved UnivariateSampler* into MuyGPyS._test.sampler for convenience by @bwpriest in #177
  • moved mini batch tests to their own file by @bwpriest in #178
  • Passing loss functions instead of strings. Unhooked experimental opti… by @bwpriest in #180
  • improvements to samplers and tutorials by @bwpriest in #182
  • reduced training ratio for anisotropic tutorial by @bwpriest in #183
  • Optimization loop chassis by @akilandrews in #181
  • partial fix to torch parameter optimization by @bwpriest in #184
  • final updates for v0.7.0 by @bwpriest in #185

New Contributors

Full Changelog: v0.6.6...v0.7.0