
Add support for running models on NPU of RPi5 with the Hailo 8L and Orange Pi 5 Pro (RK3588S) #616

Open
1 of 2 tasks
grzegorz-roboflow opened this issue Aug 28, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@grzegorz-roboflow
Contributor

Search before asking

  • I have searched the Inference issues and found no similar feature requests.

Description

A user requested the feature described below:

Hi folks,

I've been loving the interface, but I'm having issues with the inference module required on edge devices. I am working on accessing the NPUs of two of my devices: an RPi 5 with the Hailo 8L, and an Orange Pi 5 Pro (RK3588S). For example, the Orange Pi 5 Pro has a 6 TOPS NPU that can accelerate my work quite a bit vs the CPU. YOLOv8 is supported, but the .pt file (or ONNX) needs to be converted into its own optimized format. The Hailo for the RPi 5 has a similar (albeit simpler) process, and from the documentation it seems there is already built-in support for Hailo via inference. For the life of me, I cannot figure out how to properly get a model export going so I can get things converted for the RK3588S's NPU. There have been past forum posts asking about this, but it's been a while. I really want to avoid being hardware/ecosystem locked if I can avoid it.

Any tips, tricks, thoughts?

The user asks us to add support for running models on the NPUs of the RPi 5 (with the Hailo 8L) and the Orange Pi 5 Pro (RK3588S).
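For context, the RK3588S conversion the user describes typically goes through Rockchip's rknn-toolkit2. A minimal sketch of the ONNX-to-RKNN step, assuming rknn-toolkit2 is installed and that `yolov8n.onnx` and `dataset.txt` (a calibration image list) are placeholder file names:

```python
# Sketch: convert an exported ONNX model to RKNN for the RK3588 NPU
# using Rockchip's rknn-toolkit2. File names are placeholders.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Preprocessing config; mean/std assume a YOLOv8 export that expects
# inputs scaled from 0-255 down to 0-1.
rknn.config(
    mean_values=[[0, 0, 0]],
    std_values=[[255, 255, 255]],
    target_platform="rk3588",
)

if rknn.load_onnx(model="yolov8n.onnx") != 0:
    raise RuntimeError("failed to load ONNX model")

# Build, optionally quantizing to INT8 against the calibration list.
if rknn.build(do_quantization=True, dataset="dataset.txt") != 0:
    raise RuntimeError("failed to build RKNN model")

rknn.export_rknn("yolov8n.rknn")
rknn.release()
```

The resulting `.rknn` file is what the on-device runtime (rknn-toolkit-lite2 on the board) loads for NPU inference, so any support in inference would likely need to produce and consume this format.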

Use case

As explained by the user:

Orange Pi 5 Pro has a 6TOPS NPU that can accelerate my work quite a bit vs the CPU

Providing models that can be loaded onto this NPU hardware would result in faster inference times.

Additional

User states:

The Hailo for RPi 5 has a similar (albeit simpler) process and from the documentation it seems that there is built in support already for Hailo via inference.

Maybe it would be possible to achieve what the user asks for through ONNXRUNTIME_EXECUTION_PROVIDERS? See the sketch below.
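If that route is viable, the flow might look like this. inference reads ONNXRUNTIME_EXECUTION_PROVIDERS as an ordered provider preference list; whether an NPU-backed provider exists depends entirely on the onnxruntime build shipped for the board, so the provider name here is illustrative, not confirmed:

```python
# Inspect which onnxruntime execution providers this build actually has,
# then tell inference which ones to prefer. The env var must be set
# before inference creates any onnxruntime sessions.
import os
import onnxruntime

print(onnxruntime.get_available_providers())
# e.g. ['CPUExecutionProvider'] on a stock ARM wheel

# inference parses this env var as an ordered preference list; swap in
# an NPU provider here if the platform's onnxruntime build exposes one.
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "[CPUExecutionProvider]"

from inference import get_model

model = get_model(model_id="yolov8n-640")  # public model alias
```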

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
grzegorz-roboflow added the enhancement (New feature or request) label on Aug 28, 2024
@zhaokefei

Is there a timeline for supporting this feature? Multi-hardware compatibility would provide more options.

@RossLote
Contributor

I would also like this. We've been thinking of getting the RPi Hailo kit for our project but don't want to move away from Roboflow. It's not clear to me whether the Hailo NPU would work with inference at this point.
