A quick overview of high-performance convolutional neural network (CNN) inference engines on mobile devices.
A benchmark that challenges language models to write code solutions for scientific problems.
A benchmark evaluating LLMs on their ability to create and resist disinformation, with comprehensive testing across major models (Claude, GPT-4, Gemini, Llama, etc.) and standardized evaluation metrics.
LlamaEval is a rapid prototype developed during a hackathon to provide a user-friendly dashboard for evaluating and comparing Llama models using the TogetherAI API.
Performance benchmarking for ML/AI workloads