
A repository of Dockerfiles, scripts, YAML files, Helm charts, and other assets used to build and scale sample AI workflows with Python, Kubernetes, Kubeflow, cnvrg.io, and other frameworks on Intel platforms, both in the cloud and on-premises.

faaany/ai-workflows

 
 


AI Workflows Infrastructure for Intel® Architecture

Description

This page describes how to set up an environment that supports Intel's AI Pipelines container build and test infrastructure.

Dependency Requirements

Only Linux systems are currently supported. Please make sure the following are installed via your package manager of choice:

  • make
  • docker.io

A full installation of Docker Engine with the Docker CLI is required. The recommended Docker Engine version is 19.03.0 or later.

  • docker-compose

The Docker Compose CLI can be installed either manually or via a package manager. To install it manually, download the release binary into Docker's CLI plugins directory:

$ DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
$ mkdir -p $DOCKER_CONFIG/cli-plugins
$ curl -SL https://github.com/docker/compose/releases/download/v2.7.0/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
$ chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose

$ docker compose version
Docker Compose version v2.7.0
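After a manual install, it can be worth confirming that the plugin landed where the Docker CLI looks for it. A minimal sketch, mirroring the paths used in the commands above:

```shell
# Verify the manually installed Compose CLI plugin is in place and
# executable (same DOCKER_CONFIG convention as the install commands).
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
plugin="$DOCKER_CONFIG/cli-plugins/docker-compose"
if [ -x "$plugin" ]; then
    echo "compose plugin found at $plugin"
else
    echo "compose plugin missing or not executable; re-run the install steps above"
fi
```

If the check fails, `docker compose version` will also fail, since the Docker CLI discovers the plugin through that same `cli-plugins` directory.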

Build and Run Workflows

Each pipeline has its own requirements and instructions covering its dependencies and the customization options it supports. Generally, pipelines are run in the following format:

git submodule update --init --recursive

This pulls the dependent repositories containing the scripts that run each end-to-end pipeline's inference and/or training.

<KEY>=<VALUE> ... <KEY>=<VALUE> make <PIPELINE_NAME>

where each KEY=VALUE pair is an environment variable used to customize the pipeline's script options and the resulting container. For the valid KEY and VALUE pairs, see the README.md file in the folder for each workflow container:

| AI Workflow | Framework/Tool | Mode |
| --- | --- | --- |
| Chronos Time Series Forecasting | Chronos and PyTorch* | Training |
| Document-Level Sentiment Analysis | PyTorch* | Training |
| Friesian Recommendation System | Spark with TensorFlow | Training, Inference |
| Habana® Gaudi® Processor Training and Inference using OpenVINO™ Toolkit for U-Net 2D Model | OpenVINO™ | Training and Inference |
| Privacy Preservation | Spark with TensorFlow and PyTorch* | Training and Inference |
| NLP workflow for AWS Sagemaker | TensorFlow and Jupyter | Inference |
| NLP workflow for Azure ML | PyTorch* and Jupyter | Training, Inference |
| Protein Structure Prediction | PyTorch* | Inference |
| Quantization Aware Training and Inference | OpenVINO™ | Quantization Aware Training (QAT) |
| RecSys Challenge Analytics With Python | Hadoop and Spark | Training |
| Video Streamer | TensorFlow | Inference |
| Vision Based Transfer Learning | TensorFlow | Training, Inference |
| Wafer Insights | SKLearn | Inference |
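The KEY=VALUE mechanism is plain `make` behavior: environment variables set on the command line are visible inside the Makefile. A minimal sketch of that mechanism with a throwaway Makefile; the `demo` target and `OUTPUT_DIR` variable are hypothetical, not taken from any workflow in this repo:

```shell
# Write a tiny Makefile whose target reads an environment variable,
# then invoke it the same way the pipelines are invoked.
printf 'demo:\n\t@echo "building with OUTPUT_DIR=$(OUTPUT_DIR)"\n' > /tmp/demo.mk
OUTPUT_DIR=/tmp/output make -f /tmp/demo.mk demo
```

This prints the value passed on the command line, which is how each real pipeline's Makefile picks up its documented KEY=VALUE options.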

Cleanup

All resources allocated by a pipeline can be removed by executing make clean.
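The clean convention can be sketched with a throwaway Makefile; the directory and target body below are hypothetical stand-ins for the containers and build artifacts a real pipeline would free:

```shell
# Create a fake workspace with an "output" artifact, give it a clean
# target, and run "make clean" against it.
mkdir -p /tmp/wf-demo/output
printf 'clean:\n\t@rm -rf output\n\t@echo "resources removed"\n' > /tmp/wf-demo/Makefile
make -C /tmp/wf-demo clean
```

After this runs, /tmp/wf-demo/output is gone; each workflow's own Makefile defines what its clean target actually tears down.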
