I merged the following two repos into this one for my Jetson Nano, running JetPack 4.2 with a self-built TensorFlow 1.12.2.
Run all steps on your target platform, e.g. Jetson Nano.

- Install `tensorflow-gpu` with TensorRT enabled on your Jetson platform
- Install pycuda (ref this for Jetson Nano)
- Patch your `graphsurgeon` converter; please refer to the next section
- Put your `frozen_inference_graph.pb` in the repo root
- Modify `config/model_ssd_mobilenet_v2_coco_2018_03_29.py` with your own `numClasses`
- Modify `utils/coco.py` to match your classes
- Run `convert.py` with one picture, e.g. `python3 convert.py 1.jpg`
  - this step generates the `uff` and `bin` files in the repo root
- Run `camera.py`, e.g. `python3 camera.py`, and enjoy!
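For the `numClasses` step above, the edit boils down to matching the class count of your own model. A hypothetical sketch of the kind of change involved (the variable name `numClasses` comes from the steps above, but the surrounding structure of the config file is an assumption, so check the actual file):

```python
# config/model_ssd_mobilenet_v2_coco_2018_03_29.py (hypothetical excerpt)
# The stock COCO model uses 90 classes plus one background class.
# Replace this with your own class count + 1 for background.
numClasses = 91
```

After changing `numClasses` here, remember the class-name list in `utils/coco.py` must be updated to the same set of classes, or the detection labels will be wrong.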
Edit `/usr/lib/python3.6/dist-packages/graphsurgeon/node_manipulation.py`:
```diff
diff --git a/node_manipulation.py b/node_manipulation.py
index d2d012a..1ef30a0 100644
--- a/node_manipulation.py
+++ b/node_manipulation.py
@@ -30,6 +30,7 @@ def create_node(name, op=None, _do_suffix=False, **kwargs):
     node = NodeDef()
     node.name = name
     node.op = op if op else name
+    node.attr["dtype"].type = 1
     for key, val in kwargs.items():
         if key == "dtype":
             node.attr["dtype"].type = val.as_datatype_enum
```
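The one-line patch gives every node created by graphsurgeon a default `dtype` of 1, which is TensorFlow's `DataType` enum value for `DT_FLOAT`; an explicit `dtype` passed by the caller still overwrites it in the loop below. A minimal self-contained sketch of that fallback behaviour (the `NodeDef` stand-in here is a mock, not the real protobuf class):

```python
# Mock of the patched create_node() fallback logic.
DT_FLOAT = 1  # TensorFlow DataType enum value for float32


class FakeAttr:
    """Stands in for a protobuf AttrValue; type defaults to 0 (unset)."""
    def __init__(self):
        self.type = 0


class FakeNodeDef:
    """Stands in for tensorflow.NodeDef."""
    def __init__(self):
        self.name = ""
        self.op = ""
        self.attr = {"dtype": FakeAttr()}


def create_node(name, op=None, dtype=None):
    node = FakeNodeDef()
    node.name = name
    node.op = op if op else name
    node.attr["dtype"].type = DT_FLOAT   # the patched-in default
    if dtype is not None:
        node.attr["dtype"].type = dtype  # an explicit dtype still wins
    return node


print(create_node("Input").attr["dtype"].type)          # 1 (defaulted)
print(create_node("Cast", dtype=3).attr["dtype"].type)  # 3 (explicit)
```

Without the default, nodes created with no explicit `dtype` keep the protobuf zero value (an invalid/unset type), which is what trips up the UFF conversion the patch works around.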