Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
An unofficial, UI-first app client for https://bgm.tv on Android and iOS, built with React Native. An ad-free, hobby-driven, non-profit ACG tracking app in the spirit of Douban, serving as a third-party client for bgm.tv. Redesigned for mobile, it includes many enhanced features that are hard to implement in the web version and offers extensive customization options. Currently supports iOS / Android / WSA, phone / basic tablet layouts, light / dark themes, and the mobile web.
Mixture-of-Experts for Large Vision-Language Models
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
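The layer described in that paper routes each token to a small top-k subset of expert feedforward networks chosen by a learned softmax gate. As a rough illustration (not code from the linked re-implementation; class and parameter names here are made up), a dense top-k gated MoE layer can be sketched in PyTorch as follows. The original paper additionally adds noise to the gate logits and an auxiliary load-balancing loss, which this sketch omits.

```python
# Minimal sketch of a sparsely-gated top-k mixture-of-experts layer
# (illustrative names only; not taken from the linked repository).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); route each token to its top-k experts.
        topk_vals, topk_idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)   # renormalize over the chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.k):
                mask = topk_idx[:, slot] == e    # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: 16 tokens of width 512, 8 experts, 2 active per token.
# y = TopKMoE(512, 2048)(torch.randn(16, 512))
```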
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
Tutel MoE: An Optimized Mixture-of-Experts Implementation
Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs)
MindSpore online courses: Step into LLM
Official LISTEN.moe Android app
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
A libGDX cross-platform API for in-app purchasing.
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
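As a very rough structural sketch of the two-expert-type idea (hypothetical names, ordinary self-attention standing in for the stick-breaking attention actually used, every expert run densely for clarity; this is not the released ModuleFormer code), a block might route tokens independently to attention-head experts and feedforward experts:

```python
# Structural sketch only: one block with two routed expert pools, attention
# heads and feedforward networks; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

def route(x, gate, experts, k=2):
    # Token-level top-k routing shared by both expert pools.
    vals, idx = gate(x).topk(k, dim=-1)              # (batch, seq, k)
    w = F.softmax(vals, dim=-1)
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        y = expert(x)                                # dense: run every expert on all tokens
        for slot in range(k):
            chosen = (idx[..., slot] == e).unsqueeze(-1)
            out = out + chosen * w[..., slot:slot + 1] * y
    return out

class TwoPoolBlock(nn.Module):
    def __init__(self, d_model=256, n_head_experts=4, n_ffn_experts=4, k=2):
        super().__init__()
        self.k = k
        self.head_gate = nn.Linear(d_model, n_head_experts, bias=False)
        self.ffn_gate = nn.Linear(d_model, n_ffn_experts, bias=False)
        # Each "attention head expert" is a single-head self-attention module.
        self.heads = nn.ModuleList([
            nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
            for _ in range(n_head_experts)
        ])
        self.ffns = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_ffn_experts)
        ])

    def forward(self, x):  # x: (batch, seq, d_model)
        attn_experts = [lambda h, a=attn: a(h, h, h, need_weights=False)[0]
                        for attn in self.heads]
        x = x + route(x, self.head_gate, attn_experts, self.k)
        x = x + route(x, self.ffn_gate, self.ffns, self.k)
        return x
```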
MoH: Multi-Head Attention as Mixture-of-Head Attention
MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts