Presented at AI Camp in June 2020. Please find the presentation here and recorded talk here. RAPIDS homepage: rapids.ai.
Data science demands interactive exploration of large volumes of data, combined with computationally intensive algorithms and analytics. Today, CPUs are reaching their computational limits, and a new approach is needed. In this talk, we discuss how GPUs enable data scientists to perform feature engineering and train machine learning models at scale using RAPIDS.
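To illustrate the workflow the talk describes, here is a minimal sketch of training a random forest. cuML deliberately mirrors the scikit-learn API, so the CPU version below shows the same code shape; on a GPU you would swap the imports for their RAPIDS equivalents (`import cudf`, `from cuml.ensemble import RandomForestClassifier`). This is an illustrative sketch, not code from the talk.

```python
# CPU sketch of the RAPIDS workflow; cuML's RandomForestClassifier
# mirrors this scikit-learn API, so moving to GPU is largely an
# import swap (cudf.DataFrame in place of NumPy/pandas inputs).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a feature-engineered dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, max_depth=8, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

With cuML, both `fit` and `score` run on the GPU, which is where the speedups cited in the resources below come from.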
- Accelerating Random Forests up to 45x using cuML by Vishal Mehta.
- RAPIDS Forest Inference Library: Prediction at 100 million rows per second by John Zedlewski.
- Run RAPIDS experiments at scale using Amazon SageMaker by Shashank Prasanna.
- RAPIDS HyperParameter Optimization.
- RAPIDS Overview.
- An Implementation and Explanation of the Random Forest in Python by Will Koehrsen.
- Introduction to Random Forests by Fast.ai.
- The Life of a Numba Kernel: A Compilation Pipeline Taking User Defined Functions in Python to CUDA Kernels by Graham Markall.
Website: aroraakshit.github.io / Email me: akshita@nvidia.com / Follow me on Twitter: @_AkshitArora