Sample code for NWPU-Crowd, a large-scale crowd counting dataset.
Collective Knowledge extension providing unified, customizable benchmarks (with extensible JSON meta information) that can be integrated into portable Collective Knowledge workflows. These benchmarks can be compiled and run with different compilers, environments, hardware, and operating systems (Linux, macOS, Windows, Android); a short usage sketch follows the project list below. More info:
Collective Knowledge crowd-tuning extension that lets users crowdsource experiments (via portable Collective Knowledge workflows) such as performance benchmarking, auto-tuning, and machine learning across diverse volunteer-provided Linux, Windows, macOS, and Android platforms. Demo of DNN crowd-benchmarking and crowd-tuning:
Crowdsourcing video experiments (such as collaborative benchmarking and optimization of DNN algorithms) using the Collective Knowledge Framework across diverse Android devices provided by volunteers. Results are continuously aggregated in the open repository:
Cross-platform Python client for the CodeReef.ai portal to manage portable workflows, reusable automation actions, software detection plugins, meta packages and dashboards for crowd-benchmarking:
News: we have moved this code to the CK framework:
Public results in the Collective Knowledge format (JSON metadata) from collaborative optimization of computer systems. See the live repository:
Development version of CodeReefied portable CK workflows for image classification and object detection. Stable "live" versions are available at the CodeReef portal:
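For orientation, here is a minimal, unofficial sketch of how CK benchmarks like those in the extension above are typically driven from Python through the unified `ck.access()` API. The repository name `ctuning-programs` and the program name `cbench-automotive-susan` are illustrative assumptions only; the actual entries depend on which CK repository you pull.

```python
# Minimal sketch (not an official example): driving a CK benchmark from Python
# via the unified ck.access() API. Assumes `pip install ck`; the repository and
# program names below are illustrative placeholders.
import sys
import ck.kernel as ck

def ck_call(request):
    """Call ck.access() and stop on a non-zero CK return code."""
    r = ck.access(request)
    if r['return'] > 0:
        print('CK error:', r.get('error', ''))
        sys.exit(1)
    return r

# Pull a CK repository with benchmark entries (CLI: ck pull repo:ctuning-programs).
ck_call({'action': 'pull', 'module_uoa': 'repo', 'data_uoa': 'ctuning-programs'})

# Compile a benchmark program (CLI: ck compile program:cbench-automotive-susan).
ck_call({'action': 'compile', 'module_uoa': 'program',
         'data_uoa': 'cbench-automotive-susan'})

# Run it; CK stores the collected results as JSON meta information.
r = ck_call({'action': 'run', 'module_uoa': 'program',
             'data_uoa': 'cbench-automotive-susan'})
print('Run finished with CK return code', r['return'])
```

The same actions map directly onto the `ck <action> <module>:<entry>` command-line form, which is why the workflows above remain portable across the listed operating systems.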