Refactored hybrid inference #1863
Conversation
Commits:
- Revised asProductFactor methods
- collectProductFactor() method
- Move test
- better print
- Formatting
- Efficiency
- Fix several bugs
- Fix print methods
- Fix print methods
- More tests, BT tests in different file
- More product tests
- Disable printing tests
- Minimize diff
- Fix rebase issue
Would it be possible to break this into a separate PR for the
auto [factor, scalar] = pair;
// If factor is null, it has been pruned, hence return potential zero
if (!factor) return 0.0;
return exp(-scalar - factor->error(kEmpty));
This is a major bug!!!
When we don't normalize the values, if we have large factor errors (e.g. even 80), then exp(-80) ~= 0.0 and everything collapses. This was one of the major issues I fixed last December that got things working.
Unfortunately this is not revealed in the GTSAM tests because the sigmas are too simple, but it shows up in the hybrid estimation code (the same reason the pruning PR didn't show any issues in GTSAM but caused experiments to fail).
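A minimal standalone sketch of the failure mode and the shift-by-minimum fix (plain C++, no GTSAM types; the error values are made up for illustration):

#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
  // Hypothetical negative log-errors for two discrete modes; 800 is
  // chosen so the underflow is total, but even 80 gives values small
  // enough to collapse once several factors are multiplied together.
  double errors[2] = {800.0, 802.0};

  // Naive potentials: exp(-800) underflows to exactly 0.0 in double,
  // so the relative weight of the two modes is lost.
  std::printf("naive: %g %g\n", std::exp(-errors[0]), std::exp(-errors[1]));

  // Shift by the minimum error before exponentiating: ratios are
  // preserved and the best mode gets potential exactly 1.0.
  double min_log = std::min(errors[0], errors[1]);
  std::printf("shifted: %g %g\n", std::exp(-(errors[0] - min_log)),
              std::exp(-(errors[1] - min_log)));
  return 0;
}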
Confirming that I am running into a similar issue as before.
Ok, I'll fix this
I will fix it only here, when the exp is taken. I'd like the original errors preserved in the factors, not shifted before it's needed. I can explain in person if you vehemently object.
OK, for this case and the case below, I added a function:
/// Take negative log-values, shift them so that the minimum value is 0, and
/// then exponentiate to create a DecisionTreeFactor (not normalized yet!).
static DecisionTreeFactor::shared_ptr DiscreteFactorFromErrors(
    const DiscreteKeys &discreteKeys,
    const AlgebraicDecisionTree<Key> &errors) {
  double min_log = errors.min();
  AlgebraicDecisionTree<Key> potentials = DecisionTree<Key, double>(
      errors, [&min_log](const double x) { return exp(-(x - min_log)); });
  return std::make_shared<DecisionTreeFactor>(discreteKeys, potentials);
}
Please note I realized it is not necessary to normalize to probabilities: what we need is still just a factor, not a conditional. That was an error on my part before, although it only cost some extra computation, nothing else.
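To see why normalization is unnecessary for a factor: the shift multiplies every potential by the same constant exp(min_log), so ratios between assignments are unchanged. A small standalone check (plain C++, made-up error values):

#include <cmath>
#include <cstdio>

int main() {
  // Made-up negative log-errors for two assignments.
  double errors[2] = {5.0, 7.0};
  double min_log = 5.0;  // minimum of the errors

  // Unshifted vs. shifted potentials differ only by the constant
  // exp(min_log); their ratio is identical.
  double r_unshifted = std::exp(-errors[0]) / std::exp(-errors[1]);
  double r_shifted = std::exp(-(errors[0] - min_log)) /
                     std::exp(-(errors[1] - min_log));
  std::printf("%f %f\n", r_unshifted, r_shifted);  // both equal exp(2)
  return 0;
}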
Also I would like to hear the explanation for my own curiosity. I have a suspicion we may need to normalize somewhere.
Actually, it might still need to happen in the discretePosterior methods. It seems what we need is a constructor, but since factors are not normalized, for discretePosterior it should be a constructor in DiscreteConditional (which is normalized) rather than in DecisionTreeFactor.
I can do it and piggyback it off of #1867
Apparently we can't create a DiscreteConditional since we don't have the cardinality information of the Keys from an AlgebraicDecisionTree. Thoughts?
Also, discretePosterior already normalizes due to the p / p.sum().
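For reference, that normalization is just elementwise division by the total mass; a minimal standalone sketch (plain C++, hypothetical potential values):

#include <cstdio>
#include <numeric>
#include <vector>

int main() {
  // Hypothetical unnormalized potentials p over discrete assignments.
  std::vector<double> p = {1.0, 0.135, 0.018};
  const double sum = std::accumulate(p.begin(), p.end(), 0.0);

  // p / p.sum(): divide elementwise so the result sums to one,
  // turning the factor values into a proper posterior.
  for (double &v : p) v /= sum;
  for (double v : p) std::printf("%f\n", v);
  return 0;
}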
Here are my clang-format settings:
"clang-format.executable": "/usr/local/bin/clang-format",
"clang-format.language.c.style": "Google",
"clang-format.language.cpp.style": "Google",
"clang-format.fallbackStyle": "Google",
Updates to `No Hiding` PR
It's possible to split the PR, but not with this version of HybridGaussianProductFactor.
This file is checked in in:
Maybe we can just change that file to not have any modifications on top of Google style.
Pushed new changes. I have no idea how to fix Windows. Because
This reverts commit 55dc3f5.
I figured out a hack to fix the Windows CI. It is in #1866.
More Hybrid Fixes
@varunagrawal Can we now merge this? Do your experiments work?
Yes, but I wanted to fix all the pending TODOs. I'll do them in the other PR.
LGTM
This is a big PR that gets rid of "hiding" the conditional constants inside the Jacobian factors in HGF, in favor of storing a scalar alongside each factor in the HGFG, as is done in HNFG. It also adds extensive testing.
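A sketch of that representation, consistent with the auto [factor, scalar] = pair; snippet above (the struct, type alias, and function names here are hypothetical stand-ins, not the actual GTSAM types):

#include <cmath>
#include <memory>
#include <utility>

// Hypothetical stand-in for a Gaussian factor exposing an error.
struct Factor {
  double error() const { return 2.0; }  // made-up error value
};

// The PR's scheme in miniature: keep the conditional's constant as an
// explicit scalar next to each factor instead of folding ("hiding") it
// into the Jacobian.
using FactorValuePair = std::pair<std::shared_ptr<Factor>, double>;

double potential(const FactorValuePair &pair) {
  auto [factor, scalar] = pair;
  // A null factor has been pruned, hence potential zero.
  if (!factor) return 0.0;
  // The scalar simply adds to the factor error before exponentiation.
  return std::exp(-(scalar + factor->error()));
}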
Details
- augment is GONE
Testing
TODO: