A Flexible and Powerful Parameter Server for large-scale machine learning
A lightweight and scalable framework that combines mainstream Click-Through-Rate prediction algorithms built on a computational DAG with the Parameter Server philosophy and Ring-AllReduce collective communication.
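As a quick illustration of the Ring-AllReduce collective mentioned above, here is a minimal single-process simulation in Python; the worker and chunk bookkeeping are illustrative only and not taken from the framework itself:

```python
def ring_allreduce(worker_data):
    """Simulate ring all-reduce over n workers, each holding a length-n vector.

    Scatter-reduce accumulates partial sums around the ring; afterwards
    worker w holds the fully reduced chunk (w + 1) % n. All-gather then
    circulates those reduced chunks so every worker ends with the full sum.
    """
    n = len(worker_data)
    data = [list(v) for v in worker_data]  # one element per chunk, for clarity

    # Scatter-reduce: n-1 steps; in step s, worker w sends chunk (w - s) % n
    # to its neighbour (w + 1) % n, which adds it to its own copy.
    for s in range(n - 1):
        sends = [(w, (w - s) % n, data[w][(w - s) % n]) for w in range(n)]
        for w, c, value in sends:
            data[(w + 1) % n][c] += value

    # All-gather: n-1 steps; worker w forwards its fully reduced chunk
    # (w + 1 - s) % n to its neighbour, overwriting the stale copy there.
    for s in range(n - 1):
        sends = [(w, (w + 1 - s) % n, data[w][(w + 1 - s) % n]) for w in range(n)]
        for w, c, value in sends:
            data[(w + 1) % n][c] = value

    return data


if __name__ == "__main__":
    workers = [[1, 2, 3, 4], [10, 20, 30, 40],
               [100, 200, 300, 400], [1000, 2000, 3000, 4000]]
    # Every worker ends up with [1111, 2222, 3333, 4444].
    print(ring_allreduce(workers))
```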
Extremely distributed machine learning.
A self-implemented deep learning training framework, written in pure Java with minimal third-party dependencies; supports distributed training.
OpenEmbedding is an open-source framework for TensorFlow distributed training acceleration.
PetPS: Supporting Huge Embedding Models with Tiered Memory
Serverless ML Framework
A fully adaptive, zero-tuning parameter manager that enables efficient distributed machine learning training
WIP. Veloce is a low-code, Ray-based parallelization library for efficient and heterogeneous machine learning computation.
Distributed training with Multi-worker & Parameter Server in TensorFlow 2
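To make the TensorFlow 2 setup above more concrete, here is a hedged sketch of parameter-server training driven from the coordinator (chief) task. It assumes a TF_CONFIG environment describing chief, worker, and ps tasks; the exact Keras entry points (e.g. DatasetCreator) vary slightly across TF 2.x releases:

```python
import tensorflow as tf

# Resolve the cluster from TF_CONFIG and create the parameter-server strategy.
cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver)

with strategy.scope():
    # Variables created here are placed on (and sharded across) the "ps" tasks.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

def dataset_fn(input_context):
    # Each worker builds its own input pipeline from this function;
    # random data here stands in for a real training set.
    x = tf.random.uniform((1024, 16))
    y = tf.random.uniform((1024, 1))
    return tf.data.Dataset.from_tensor_slices((x, y)).repeat().batch(32)

# With ParameterServerStrategy, Model.fit takes a DatasetCreator and dispatches
# training steps to the workers asynchronously.
model.fit(tf.keras.utils.experimental.DatasetCreator(dataset_fn),
          epochs=2, steps_per_epoch=100)
```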
Distributed Field-aware Factorization Machines based on a Parameter Server.
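As a reminder of what field-aware factorization machines compute, the sketch below scores one sparse example: each pair of active features interacts through latent vectors chosen according to the partner feature's field. The function and argument names are illustrative, not taken from the repository above:

```python
import numpy as np

def ffm_score(example, latent, bias=0.0, linear=None):
    """Score one sparse example with a field-aware factorization machine.

    example: list of (field, feature, value) triples for the active features.
    latent[feature][field]: length-k latent vector of `feature` toward `field`.
    linear: optional dict of per-feature linear weights.
    """
    score = bias
    if linear is not None:
        score += sum(linear.get(feat, 0.0) * val for _, feat, val in example)
    for i in range(len(example)):
        fi, ji, xi = example[i]
        for j in range(i + 1, len(example)):
            fj, jj, xj = example[j]
            # Field-aware pairing: feature ji's vector toward field fj,
            # dotted with feature jj's vector toward field fi.
            score += np.dot(latent[ji][fj], latent[jj][fi]) * xi * xj
    return score
```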
Machine Learning models for large datasets
Python library for a sparse parameter server using RocksDB, written in C++.
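The idea of backing a sparse parameter server with RocksDB can be sketched as follows; this is a toy Python illustration using the python-rocksdb bindings and a plain SGD update, not the repository's actual C++ API:

```python
import struct
import numpy as np
import rocksdb

class SparseParamStore:
    """Toy sparse parameter store: one embedding vector per feature id."""

    def __init__(self, path, dim):
        self.dim = dim
        self.db = rocksdb.DB(path, rocksdb.Options(create_if_missing=True))

    def _key(self, feature_id):
        # 8-byte big-endian key keeps ids sortable in the key-value store.
        return struct.pack(">Q", feature_id)

    def pull(self, feature_id):
        raw = self.db.get(self._key(feature_id))
        if raw is None:
            # Lazily initialize unseen embeddings to zero.
            return np.zeros(self.dim, dtype=np.float32)
        return np.frombuffer(raw, dtype=np.float32).copy()

    def push(self, feature_id, grad, lr=0.01):
        # Simple SGD update: read-modify-write the stored embedding.
        emb = self.pull(feature_id) - lr * grad
        self.db.put(self._key(feature_id), emb.astype(np.float32).tobytes())
```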
Serving layer for large machine learning models on Apache Flink
ROS utility package for build-time configuration file generation and for dumping/restoring the contents of the ROS parameter server to/from ROS bags.
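For the dump/restore part, a minimal rospy sketch (using a YAML file instead of a ROS bag; names are illustrative) looks like this:

```python
import yaml
import rospy

def dump_params(path):
    # Snapshot every parameter currently on the ROS parameter server.
    rospy.init_node("param_dump", anonymous=True)
    params = {name: rospy.get_param(name) for name in rospy.get_param_names()}
    with open(path, "w") as f:
        yaml.safe_dump(params, f)

def restore_params(path):
    # Push the saved parameters back onto the parameter server.
    with open(path) as f:
        for name, value in yaml.safe_load(f).items():
            rospy.set_param(name, value)
```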
A simple machine learning library.
A lightweight community-aware heterogeneous parameter server paradigm.