`ublas::apply` documentation
NOTE: `ublas::apply` only works with tensor expressions. Even though you can mix and match uBLAS and tensor expressions, you should avoid doing so when you wish to use `ublas::apply`. See #49.
The new expression templates are flexible from the ground up. They were designed with simplicity in mind, giving you the most control over your big expressions. Among the many features of the new expression templates, `ublas::apply` is one of the best. The old expression templates did provide utilities such as `ublas::real`, `ublas::imag`, and `ublas::conj`, which lazily apply the respective operations. However, the standard library provides far more functions than these, and it is not viable for us to write lazy versions of all of them, such as `ublas::sqrt` or `ublas::abs`. We therefore decided to provide an interface with which end users can achieve their goals themselves without losing the performance of lazy evaluation. In this documentation we discuss how you can use this simple function to achieve great things; we also share the implementation and limitations of this function.
`ublas::apply` is a lazy equivalent of `std::invoke`, with the difference that instead of general types it takes an expression and returns a new expression. Everything else is the same: you pass a generic lambda or any other callable, which is applied to the expression when it is evaluated.
Consider this simple example:
```cpp
auto a = ublas::tensor<int>{shape{5,5,5}, 45};
auto b = ublas::tensor<int>{shape{5,5,5}, 55};
auto expr = ublas::apply(a + b, [](auto const& e){ return std::sqrt(e); }) + 5;
```
During evaluation, the expression `expr` expands inline to the following when optimizations are turned on:

```cpp
auto zero_index = std::sqrt( a[0] + b[0] ) + 5;
```
You can of course nest `ublas::apply` to suit your needs; all such expressions eventually expand into a simple expression during evaluation.
Consider this example:
```cpp
auto exp_int(int const& z){
    return std::exp(z);
}

auto sq_root(double const& z){
    return std::sqrt(z);
}

auto x = ublas::tensor<int>( {5, 6}, 3);
auto y = ublas::tensor<int>( {5, 6}, 2);
auto expr = ublas::apply(ublas::apply(x + y, exp_int), sq_root);
```
During evaluation, `expr` expands to the following:

```cpp
auto zero_index = sq_root( exp_int( x[0] + y[0] ) );
```
Note: we did not pass `std::sqrt` and `std::exp` directly but wrappers around them, because those are overloaded functions.
If nesting `ublas::apply` makes your code look bad, don't worry: `ublas::apply` can also take multiple callables. In that case the callables are applied in the sequence you passed them. Consider this example code, where we apply three callables one after the other.
```cpp
auto add_one(int const& z){
    return z + 1;
}

auto loose_one(int const& z){
    return z - 1;
}

auto double_it = [](auto const& z){ return z * 2; };

auto x = ublas::tensor<int>( {5, 6}, 3);
auto y = ublas::tensor<int>( {5, 6}, 2);
auto expr = ublas::apply(x + y, add_one, double_it, loose_one);
assert((bool) (expr == 2*(x+y+1) - 1));
```
During evaluation, `expr` expands to

```cpp
auto zero_index = loose_one( double_it( add_one( x[0] + y[0] ) ) );
```
The return type is deduced from the AST that is built up.
There are many things you can do with the new `ublas::apply`, as long as your callable obeys the rules mentioned in the next section.
The callable passed to `ublas::apply` must adhere to the following rules. Violating any of them causes an assertion error with the reason, though there is some scope for `runtime_error`s as well. So make sure your callable:
- returns a non-void type.
- takes exactly one argument, by constant reference or by value.
- can be a function template, provided its template arguments are deduced (or supplied) when it is passed.
- can be a generic lambda.
- if it is a non-generic lambda or another fixed-signature callable, has a formal parameter type that the argument is convertible to.
We plan to add another function, `ublas::apply_index(Expr&& e, Callable... c)`, where a `Callable c` can have a signature such as `c(value_type const& z, std::size_t index)`. This allows applying a callable to each element based on the value of its index in the linear representation of the tensor.
We would both like to thank our mentor Cem for his constant support and help in achieving our goals. We always found him helpful, and he was always easy to reach for help or discussion regarding the work. We would also like to thank Google for the Google Summer of Code programme, without which none of this would have been possible. Lastly, we express our gratitude to our parents for providing everything we needed, directly or indirectly, to carry out our work well from our homes.