・2019/05/15
How to build and install tf-pose-estimation (formerly tf-openpose) on the NVIDIA Jetson Nano
(Trying out tf-pose-estimation on the NVIDIA Jetson Nano)
Tags: [Raspberry Pi], [Electronics], [Deep Learning]
● Convenience scripts for the Jetson Nano and Jetson Xavier NX
・2020/07/03
[2020 edition] Convenience scripts for the NVIDIA Jetson Nano and Jetson Xavier NX
The tedious initial setup and middleware installation on a Jetson can be done just by running a bash script.
● How to build and install tf-pose-estimation (formerly tf-openpose) on the NVIDIA Jetson Nano
tf-pose-estimation (formerly tf-openpose)
● Ubuntu OS version on the NVIDIA Jetson Nano used this time
user@user-desktop:~$ uname -a
Linux user-desktop 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux
● How to build and install tf-pose-estimation on the NVIDIA Jetson Nano
# Dependencies
You need the dependencies below.
python3
tensorflow 1.4.1+
opencv3, protobuf, python3-tk
slidingwindow
https://github.com/adamrehn/slidingwindow
I copied it from the git repo above and modified a few things.
#!/bin/bash
# Building wheels for collected packages: scipy, llvmlite
# building 'dfftpack' library
# error: library dfftpack has Fortran sources but no Fortran compiler found
# Failed building wheel for scipy
sudo apt-get -y install gfortran
# sudo apt-get -y install python-scipy
# Numba
# Running setup.py bdist_wheel for llvmlite ... error
# FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config': 'llvm-config'
# RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config
# Failed building wheel for llvmlite
# user@user-desktop:~$ ls -l /usr/bin/llvm-config
# lrwxrwxrwx 1 root root 31 5月 17 2018 /usr/bin/llvm-config -> ../lib/llvm-6.0/bin/llvm-config
# sudo apt-get -y install llvm
export LLVM_CONFIG=/usr/bin/llvm-config
# pip3 install llvmlite
# Install LLVM 7.0
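Before running `pip3 install llvmlite`, it is worth checking which LLVM version `LLVM_CONFIG` actually points at, since llvmlite 0.28.x refuses to build against anything but LLVM 7.0.x (see the error log further down). A small sketch; the helper name `llvm_major` is mine, not part of any tool:

```shell
#!/bin/bash
# Print the major version reported by a given llvm-config binary.
# llvmlite 0.28.x needs this to be 7 (LLVM 7.0.x).
llvm_major() {
  local version
  version=$("$1" --version) || return 1
  echo "${version%%.*}"
}

# Example: warn before attempting the llvmlite build
if [ "$(llvm_major "${LLVM_CONFIG:-/usr/bin/llvm-config}")" != "7" ]; then
  echo "LLVM 7.0.x not found; llvmlite 0.28 will fail to build" >&2
fi
```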
# Install tf-pose-estimation
# Clone the repo and install 3rd-party libraries
cd
git clone https://github.com/ildoonet/tf-pose-estimation
cd tf-pose-estimation
# ModuleNotFoundError: No module named 'Cython'
pip3 install cython
# Failed building wheel for llvmlite
# pip3 install llvmlite
pip3 install -r requirements.txt
# Install TensorFlow
# post-processing for Part-Affinity Fields Map implemented in C++ & Swig
# https://github.com/ildoonet/tf-pose-estimation/tree/master/tf_pose/pafprocess
# -bash: swig: command not found
sudo apt-get -y install swig
# Build c++ library for post processing
cd tf_pose/pafprocess
swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
# Trained Models & Performances
# https://github.com/ildoonet/tf-pose-estimation/blob/master/etcs/experiments.md
# cmu
cd
cd tf-pose-estimation
cd models/graph/cmu
bash download.sh
# mobilenet mobilenet_thin
cd
cd tf-pose-estimation
cd models/graph/mobilenet_thin
bash download.sh
# mobilenet mobilenet_thin width=0.75 refine-width=0.75
cd
cd tf-pose-estimation
cd models/pretrained/mobilenet_v1_0.75_224_2017_06_14/
bash download.sh
tf-pose-estimation/models/pretrained/mobilenet_v1_0.75_224_2017_06_14/download.sh
#!/bin/bash
extract_download_url() {
    url=$( wget -q -O - "$1" | grep -o 'http*://download[^"]*' | tail -n 1 )
    echo "$url"
}
wget --continue $( extract_download_url http://www.mediafire.com/file/kibz0x9e7h11ueb/mobilenet_v1_0.75_224.ckpt.data-00000-of-00001 ) -O mobilenet_v1_0.75_224.ckpt.data-00000-of-00001
wget --continue $( extract_download_url http://www.mediafire.com/file/t8909eaikvc6ea2/mobilenet_v1_0.75_224.ckpt.index ) -O mobilenet_v1_0.75_224.ckpt.index
wget --continue $( extract_download_url http://www.mediafire.com/file/6jjnbn1aged614x/mobilenet_v1_0.75_224.ckpt.meta ) -O mobilenet_v1_0.75_224.ckpt.meta
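What `extract_download_url` does: MediaFire serves an HTML landing page, and the `grep -o` pattern pulls the real `http://download...` link out of it. A demo of the same pattern on a hand-written sample line (the sample HTML is made up, not real MediaFire markup):

```shell
# The same grep pattern download.sh uses, applied to a fabricated sample
# of MediaFire landing-page HTML:
sample='<a href="http://download123.mediafire.com/abc/mobilenet_v1_0.75_224.ckpt.meta">Download</a>'
url=$(printf '%s' "$sample" | grep -o 'http*://download[^"]*' | tail -n 1)
echo "$url"   # -> http://download123.mediafire.com/abc/mobilenet_v1_0.75_224.ckpt.meta
```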
● Errors with tf-pose-estimation
error: library dfftpack has Fortran sources but no Fortran compiler found
Failed building wheel for scipy
building 'dfftpack' library
error: library dfftpack has Fortran sources but no Fortran compiler found
Failed building wheel for scipy
----------------------------------------
Failed building wheel for scipy
Running setup.py clean for scipy
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-i6wpkhke/scipy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" clean --all:
`setup.py clean` is not supported, use one of the following instead:
- `git clean -xdf` (cleans all files)
- `git clean -Xdf` (cleans all versioned files, doesn't touch
files that aren't checked into the git repo)
Add `--force` to your command to use it anyway if you must (unsupported).
----------------------------------------
Failed cleaning build dir for scipy
# Solution
sudo apt-get -y install gfortran
Running setup.py bdist_wheel for llvmlite ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-i6wpkhke/llvmlite/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpej3ndfcjpip-wheel- --python-tag cp36:
running bdist_wheel
/usr/bin/python3 /tmp/pip-build-i6wpkhke/llvmlite/ffi/build.py
LLVM version... Traceback (most recent call last):
File "/tmp/pip-build-i6wpkhke/llvmlite/ffi/build.py", line 105, in main_posix
out = subprocess.check_output([llvm_config, '--version'])
File "/usr/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config': 'llvm-config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/pip-build-i6wpkhke/llvmlite/ffi/build.py", line 167, in <module>
main()
File "/tmp/pip-build-i6wpkhke/llvmlite/ffi/build.py", line 157, in main
main_posix('linux', '.so')
File "/tmp/pip-build-i6wpkhke/llvmlite/ffi/build.py", line 108, in main_posix
"to the path for llvm-config" % (llvm_config,))
RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config
error: command '/usr/bin/python3' failed with exit status 1
----------------------------------------
Failed building wheel for llvmlite
# Solution
Install LLVM 7.0 and point LLVM_CONFIG at its llvm-config.
running bdist_wheel
/usr/bin/python3 /tmp/pip-build-8jlo9vkx/llvmlite/ffi/build.py
LLVM version... 6.0.0
RuntimeError: Building llvmlite requires LLVM 7.0.x. Be sure to set LLVM_CONFIG to the right executable path.
Read the documentation at http://llvmlite.pydata.org/ for more information about building llvmlite.
----------------------------------------
Failed building wheel for llvmlite
# Solution
Install LLVM 7.0 and point LLVM_CONFIG at its llvm-config.
Successfully installed PyWavelets-1.0.3 argparse-1.4.0 certifi-2019.3.9 chardet-3.0.4 cycler-0.10.0 decorator-4.4.0 dill-0.2.9 fire-0.1.3 idna-2.8 imageio-2.5.0 kiwisolver-1.1.0 llvmlite-0.28.0 matplotlib-3.1.0 msgpack-0.6.1 msgpack-numpy-0.4.4.3 networkx-2.3 numba-0.43.1 numpy-1.16.3 pillow-6.0.0 psutil-5.6.2 pycocotools-2.0.0 pyparsing-2.4.0 python-dateutil-2.8.0 pyzmq-18.0.1 requests-2.22.0 scikit-image-0.15.0 scipy-1.3.0 setuptools-41.0.1 six-1.12.0 slidingwindow-0.0.13 tabulate-0.8.3 tensorpack-0.9.4 termcolor-1.1.0 tqdm-4.32.1 urllib3-1.25.2
user@user-desktop:~/tf-openpose$ python3 run.py --help
Unable to init server: Could not connect: Connection refused
(run.py:6050): Gdk-CRITICAL **: 20:55:52.282: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
usage: run.py [-h] [--image IMAGE] [--model MODEL] [--resize RESIZE]
[--resize-out-ratio RESIZE_OUT_RATIO]
tf-pose-estimation run
optional arguments:
-h, --help show this help message and exit
--image IMAGE
--model MODEL cmu / mobilenet_thin / mobilenet_v2_large /
mobilenet_v2_small
--resize RESIZE if provided, resize images before they are processed.
default=0x0, Recommends : 432x368 or 656x368 or
1312x736
--resize-out-ratio RESIZE_OUT_RATIO
if provided, resize heatmaps before they are post-
processed. default=1.0
user@user-desktop:~/tf-openpose$ python3 run.py --model=cmu --resize=432x368 --image=./images/p1.jpg
Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused
(run.py:6125): Gdk-CRITICAL **: 21:01:57.824: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
[2019-05-20 21:01:57,847] [TfPoseEstimator] [INFO] loading graph from /home/user/tf-openpose/models/graph/cmu/graph_opt.pb(default size=432x368)
2019-05-20 21:01:57,847 INFO loading graph from /home/user/tf-openpose/models/graph/cmu/graph_opt.pb(default size=432x368)
2019-05-20 21:02:00.533440: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-05-20 21:02:00.534162: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x2aadf7a0 executing computations on platform Host. Devices:
2019-05-20 21:02:00.536244: I tensorflow/compiler/xla/service/service.cc:168] StreamExecutor device (0): <undefined>, <undefined>
2019-05-20 21:02:00.794226: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:965] ARM64 does not support NUMA - returning NUMA node zero
2019-05-20 21:02:00.794513: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x28fc1240 executing computations on platform CUDA. Devices:
2019-05-20 21:02:00.794572: I tensorflow/compiler/xla/service/service.cc:168] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-05-20 21:02:00.795827: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
totalMemory: 3.87GiB freeMemory: 1.79GiB
2019-05-20 21:02:00.795914: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-20 21:02:17.980537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-20 21:02:17.980616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-05-20 21:02:17.980691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-05-20 21:02:17.980953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1010 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-05-20 21:02:18,070 WARNING From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Killed
It doesn't work...
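A bare `Killed` with no Python traceback usually means the Linux OOM killer stopped the process: the log above shows only 1.79 GiB free out of 3.87 GiB, and the full cmu graph does not fit. Two things worth trying (neither is from the original run, and the 4 GB swap size is my assumption): the much lighter mobilenet_thin graph that was already downloaded above, and a swap file.

```shell
# 1) Try the lighter mobilenet_thin model instead of cmu:
python3 run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg

# 2) Or give the Nano some swap so large graphs are not OOM-killed
#    (4G is an assumed size):
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```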
user@user-desktop:~$ lspci | grep -i nvidia
00:01.0 PCI bridge: NVIDIA Corporation Device 0fae (rev a1)
00:02.0 PCI bridge: NVIDIA Corporation Device 0faf (rev a1)
Copyright (c) 2019 FREE WING, Y.Sakamoto
http://www.neko.ne.jp/~freewing/raspberry_pi/nvidia_jetson_nano_build_tf_pose_estimation/