
2019/05/15

How to install TensorFlow on the NVIDIA Jetson Nano

(NVIDIA provides an official TensorFlow installation package for the Jetson Nano)

Tags: [Raspberry Pi], [Electronics], [Deep Learning]





● How to install TensorFlow on the NVIDIA Jetson Nano

 NVIDIA provides an official TensorFlow installation package for the Jetson Nano.

NVIDIA - TensorFlow


● Ubuntu OS version on the NVIDIA Jetson Nano used this time

user@user-desktop:~$ uname -a
Linux user-desktop 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux


● Installing TensorFlow on the NVIDIA Jetson Nano

 Jetson Nano
 NVIDIA Jetson Nano is a small, powerful computer for embedded AI systems and IoT that delivers the power of modern AI in a low-power platform. Get started fast with the NVIDIA JetPack SDK and a full desktop Linux environment, and start exploring a new world of embedded products.

 ※ The NVIDIA build of TensorFlow is for Python 3 only.
#!/bin/bash

# Prerequisites and Dependencies
# Install HDF5 as required by TensorFlow:
sudo apt-get -y install libhdf5-serial-dev hdf5-tools

# Install pip3.
sudo apt-get -y install python3-pip

# Install the following packages:
# $ pip3 install -U pip
sudo apt-get -y install zlib1g-dev zip libjpeg8-dev libhdf5-dev
sudo pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor

# Installing TensorFlow
# Install TensorFlow using the pip3 command. This command will install the latest version of TensorFlow.
# tensorflow_gpu-1.13.1+nv19.5-cp36-cp36m-linux_aarch64.whl (204.6MB)
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu

# Installing collected packages: keras-preprocessing, protobuf, werkzeug, markdown, tensorboard, tensorflow-estimator, keras-applications, tensorflow-gpu
# Successfully installed keras-applications-1.0.7 keras-preprocessing-1.0.9 markdown-3.1 protobuf-3.8.0rc1 tensorboard-1.13.1 tensorflow-estimator-1.13.0 tensorflow-gpu-1.13.1+nv19.5 werkzeug-0.15.4

# Verifying The Installation
python3 -c "import tensorflow; print (tensorflow.__version__)"
# 1.13.1

python3 -c "import tensorflow; print (tensorflow.__file__)"
# local/lib/python3.6/dist-packages/tensorflow/__init__.py

# Does not work with Python 2 (the package is installed for Python 3 only)
python -c "import tensorflow; print (tensorflow.__file__)"
# Traceback (most recent call last):
#   File "<string>", line 1, in <module>
# ImportError: No module named tensorflow
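
 To double-check that the GPU build is actually in use (and not a CPU-only fallback), a quick check with the TensorFlow 1.x API can help. This is not part of the NVIDIA instructions, just a minimal sanity-check sketch:

# Optional: confirm that TensorFlow can see the Tegra GPU (TensorFlow 1.x API).
python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
# Expected: True (printed after the CUDA initialization messages)

# List the devices TensorFlow can use; /device:GPU:0 should appear.
python3 -c "from tensorflow.python.client import device_lib; print([d.name for d in device_lib.list_local_devices()])"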


● Running the Python version of DeepDream on TensorFlow: the hjptriplebee/deep_dream_tensorflow repository

cd
git clone https://github.com/hjptriplebee/deep_dream_tensorflow.git
cd deep_dream_tensorflow

python3 main.py --input nature_image.jpg --output nature_image_tf.jpg
python3 main.py --input paint.jpg --output paint_tf.jpg

user@user-desktop:~/deep_dream_tensorflow$ python3 main.py --input nature_image.jpg --output nature_image_tf.jpg
2019-05-21 20:55:11.265467: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-05-21 20:55:11.266324: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x196de670 executing computations on platform Host. Devices:
2019-05-21 20:55:11.266395: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): <undefined>, <undefined>
2019-05-21 20:55:11.536313: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:965] ARM64 does not support NUMA - returning NUMA node zero
2019-05-21 20:55:11.536594: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x181b9e70 executing computations on platform CUDA. Devices:
2019-05-21 20:55:11.536654: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-05-21 20:55:11.537888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
totalMemory: 3.87GiB freeMemory: 2.30GiB
2019-05-21 20:55:11.538213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-21 20:55:22.196790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-21 20:55:22.196859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-21 20:55:22.196885: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-21 20:55:22.197112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1504 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
WARNING:tensorflow:From main.py:55: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
octave num 1/4...
2019-05-21 20:55:27.698862: I tensorflow/stream_executor/dso_loader.cc:153] successfully opened CUDA library libcublas.so.10.0 locally
octave num 2/4...
octave num 3/4...
octave num 4/4...
2019-05-21 20:56:59.346721: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 799.03MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-21 20:57:00.097332: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
 Quite a few warnings are printed, but it ran successfully.
・Images: nature_image.jpg (input) / nature_image_tf.jpg (DeepDream output)



# Performance model
# In order to check the current performance mode, issue:
sudo nvpmodel -q --verbose

# To change the mode to MAX-N, issue:
sudo nvpmodel -m 0
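
 In addition to nvpmodel, JetPack images normally also ship a jetson_clocks script that locks the CPU/GPU/EMC clocks at the maximum allowed by the current power mode. It was not used in this article, but may help when consistent performance is wanted (treat this as an optional extra, not part of the original steps):

# Optional: show the current clock configuration, then pin clocks to maximum.
sudo jetson_clocks --show
sudo jetson_clocks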

user@user-desktop:~/deep_dream_tensorflow$ python3 main.py --input paint.jpg --output paint_tf.jpg
2019-05-21 21:08:08.657361: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-05-21 21:08:08.658298: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x290c9b00 executing computations on platform Host. Devices:
2019-05-21 21:08:08.658370: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): <undefined>, <undefined>
2019-05-21 21:08:08.758426: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:965] ARM64 does not support NUMA - returning NUMA node zero
2019-05-21 21:08:08.758710: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x27c86ea0 executing computations on platform CUDA. Devices:
2019-05-21 21:08:08.758791: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-05-21 21:08:08.759192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
totalMemory: 3.87GiB freeMemory: 1.93GiB
2019-05-21 21:08:08.759265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-21 21:08:09.841440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-21 21:08:09.841520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-21 21:08:09.841552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-21 21:08:09.841746: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1507 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
WARNING:tensorflow:From main.py:55: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
octave num 1/4...
2019-05-21 21:08:14.882492: I tensorflow/stream_executor/dso_loader.cc:153] successfully opened CUDA library libcublas.so.10.0 locally
octave num 2/4...
Killed
 The first run was Killed and did not complete.
 The second run (log below) worked.
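
 The Killed message is almost certainly the kernel's OOM (out-of-memory) killer: the Nano's 4 GB of RAM is shared between the CPU and the GPU, and this run exhausted it. If it keeps happening, adding a swap file may help. A minimal sketch, not part of the original steps; the 4 GB size and the /swapfile path are arbitrary examples:

# Create and enable a 4 GB swap file (size and path are just an example).
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h    # verify that the swap space is active
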
user@user-desktop:~/deep_dream_tensorflow$ python3 main.py --input paint.jpg --output paint_tf.jpg
2019-05-21 21:10:29.272559: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-05-21 21:10:29.273762: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x37e9b00 executing computations on platform Host. Devices:
2019-05-21 21:10:29.273835: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): <undefined>, <undefined>
2019-05-21 21:10:29.634529: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:965] ARM64 does not support NUMA - returning NUMA node zero
2019-05-21 21:10:29.634812: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x23a6ea0 executing computations on platform CUDA. Devices:
2019-05-21 21:10:29.634873: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-05-21 21:10:29.636113: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
totalMemory: 3.87GiB freeMemory: 2.19GiB
2019-05-21 21:10:29.636507: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-21 21:10:40.285432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-21 21:10:40.285503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-21 21:10:40.285528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-21 21:10:40.285754: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1463 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
WARNING:tensorflow:From main.py:55: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
octave num 1/4...
2019-05-21 21:10:45.981006: I tensorflow/stream_executor/dso_loader.cc:153] successfully opened CUDA library libcublas.so.10.0 locally
octave num 2/4...
2019-05-21 21:11:41.665321: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
octave num 3/4...
2019-05-21 21:12:52.709882: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
octave num 4/4...
2019-05-21 21:14:53.198886: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-21 21:14:54.771410: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-21 21:14:55.450058: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.54GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
・Images: paint.jpg (input) / paint_tf.jpg (DeepDream output)



user@user-desktop:~/deep_dream_tensorflow$ sudo nvpmodel -q --verbose

NVPM VERB: Config file: /etc/nvpmodel.conf
NVPM VERB: parsing done for /etc/nvpmodel.conf
NVPM VERB: Current mode: NV Power Mode: MAXN
0
NVPM VERB: PARAM CPU_ONLINE: ARG CORE_0: PATH /sys/devices/system/cpu/cpu0/online: REAL_VAL: 1 CONF_VAL: 1
NVPM VERB: PARAM CPU_ONLINE: ARG CORE_1: PATH /sys/devices/system/cpu/cpu1/online: REAL_VAL: 1 CONF_VAL: 1
NVPM VERB: PARAM CPU_ONLINE: ARG CORE_2: PATH /sys/devices/system/cpu/cpu2/online: REAL_VAL: 1 CONF_VAL: 1
NVPM VERB: PARAM CPU_ONLINE: ARG CORE_3: PATH /sys/devices/system/cpu/cpu3/online: REAL_VAL: 1 CONF_VAL: 1
NVPM VERB: PARAM CPU_A57: ARG MIN_FREQ: PATH /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq: REAL_VAL: 102000 CONF_VAL: 0
NVPM VERB: PARAM CPU_A57: ARG MAX_FREQ: PATH /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq: REAL_VAL: 1428000 CONF_VAL: 2147483647
NVPM VERB: PARAM GPU_POWER_CONTROL_ENABLE: ARG GPU_PWR_CNTL_EN: PATH /sys/devices/gpu.0/power/control: REAL_VAL: auto CONF_VAL: on
NVPM VERB: PARAM GPU: ARG MIN_FREQ: PATH /sys/devices/gpu.0/devfreq/57000000.gpu/min_freq: REAL_VAL: 76800000 CONF_VAL: 0
NVPM VERB: PARAM GPU: ARG MAX_FREQ: PATH /sys/devices/gpu.0/devfreq/57000000.gpu/max_freq: REAL_VAL: 921600000 CONF_VAL: 2147483647
NVPM VERB: PARAM GPU_POWER_CONTROL_DISABLE: ARG GPU_PWR_CNTL_DIS: PATH /sys/devices/gpu.0/power/control: REAL_VAL: auto CONF_VAL: auto
NVPM VERB: PARAM EMC: ARG MAX_FREQ: PATH /sys/kernel/nvpmodel_emc_cap/emc_iso_cap: REAL_VAL: 0 CONF_VAL: 0
user@user-desktop:~/deep_dream_tensorflow$ cat /etc/nvpmodel.conf
#
# Copyright (c) 2019, NVIDIA CORPORATION.  All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
# FORMAT:
# < PARAM TYPE=PARAM_TYPE NAME=PARAM_NAME >
# ARG1_NAME ARG1_PATH_VAL
# ARG2_NAME ARG2_PATH_VAL
# ...
# This starts a section of PARAM definitions, in which each line
# has the syntax below:
# ARG_NAME ARG_PATH_VAL
# ARG_NAME is a macro name for argument value ARG_PATH_VAL.
# PARAM_TYPE can be FILE, or CLOCK.
#
# < POWER_MODEL ID=id_num NAME=mode_name >
# PARAM1_NAME ARG11_NAME ARG11_VAL
# PARAM1_NAME ARG12_NAME ARG12_VAL
# PARAM2_NAME ARG21_NAME ARG21_VAL
# ...
# This starts a section of POWER_MODEL configurations, followed by
# lines with parameter settings as the format below:
# PARAM_NAME ARG_NAME ARG_VAL
# PARAM_NAME and ARG_NAME are defined in PARAM definition sections.
# ARG_VAL is an integer for PARAM_TYPE of CLOCK, and -1 is taken
# as INT_MAX. ARG_VAL is a string for PARAM_TYPE of FILE.
# This file must contain at least one POWER_MODEL section.
#
# < PM_CONFIG DEFAULT=default_mode >
# This is a mandatory section to specify one of the defined power
# model as the default.

###########################
#                         #
# PARAM DEFINITIONS       #
#                         #
###########################

< PARAM TYPE=FILE NAME=CPU_ONLINE >
CORE_0 /sys/devices/system/cpu/cpu0/online
CORE_1 /sys/devices/system/cpu/cpu1/online
CORE_2 /sys/devices/system/cpu/cpu2/online
CORE_3 /sys/devices/system/cpu/cpu3/online

< PARAM TYPE=FILE NAME=GPU_POWER_CONTROL_ENABLE >
GPU_PWR_CNTL_EN /sys/devices/gpu.0/power/control

< PARAM TYPE=FILE NAME=GPU_POWER_CONTROL_DISABLE >
GPU_PWR_CNTL_DIS /sys/devices/gpu.0/power/control

< PARAM TYPE=CLOCK NAME=CPU_A57 >
FREQ_TABLE /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
MAX_FREQ /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
MIN_FREQ /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
FREQ_TABLE_KNEXT /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
MAX_FREQ_KNEXT /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
MIN_FREQ_KNEXT /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq

< PARAM TYPE=CLOCK NAME=GPU >
FREQ_TABLE /sys/devices/gpu.0/devfreq/57000000.gpu/available_frequencies
MAX_FREQ /sys/devices/gpu.0/devfreq/57000000.gpu/max_freq
MIN_FREQ /sys/devices/gpu.0/devfreq/57000000.gpu/min_freq
FREQ_TABLE_KNEXT /sys/devices/17000000.gv11b/devfreq/devfreq0/available_frequencies
MAX_FREQ_KNEXT /sys/devices/gpu.0/devfreq/57000000.gpu/max_freq
MIN_FREQ_KNEXT /sys/devices/gpu.0/devfreq/57000000.gpu/min_freq

< PARAM TYPE=CLOCK NAME=EMC >
MAX_FREQ /sys/kernel/nvpmodel_emc_cap/emc_iso_cap
MAX_FREQ_KNEXT /sys/kernel/nvpmodel_emc_cap/emc_iso_cap

###########################
#                         #
# POWER_MODEL DEFINITIONS #
#                         #
###########################

# MAXN is the NONE power model to release all constraints
< POWER_MODEL ID=0 NAME=MAXN >
CPU_ONLINE CORE_0 1
CPU_ONLINE CORE_1 1
CPU_ONLINE CORE_2 1
CPU_ONLINE CORE_3 1
CPU_A57 MIN_FREQ  0
CPU_A57 MAX_FREQ -1
GPU_POWER_CONTROL_ENABLE GPU_PWR_CNTL_EN on
GPU MIN_FREQ  0
GPU MAX_FREQ -1
GPU_POWER_CONTROL_DISABLE GPU_PWR_CNTL_DIS auto
EMC MAX_FREQ 0

< POWER_MODEL ID=1 NAME=5W >
CPU_ONLINE CORE_0 1
CPU_ONLINE CORE_1 1
CPU_ONLINE CORE_2 0
CPU_ONLINE CORE_3 0
CPU_A57 MIN_FREQ  0
CPU_A57 MAX_FREQ 918000
GPU_POWER_CONTROL_ENABLE GPU_PWR_CNTL_EN on
GPU MIN_FREQ 0
GPU MAX_FREQ 640000000
GPU_POWER_CONTROL_DISABLE GPU_PWR_CNTL_DIS auto
EMC MAX_FREQ 1600000000

# mandatory section to configure the default mode
< PM_CONFIG DEFAULT=0 >
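
 For reference, the two power models defined above are selected by their ID with nvpmodel (a short usage sketch based on the commands shown earlier):

# Select a power model by the ID defined in /etc/nvpmodel.conf.
sudo nvpmodel -m 0    # MAXN: all 4 CPU cores online, no CPU/GPU clock caps
sudo nvpmodel -m 1    # 5W:   2 CPU cores, CPU capped at 918 MHz, GPU at 640 MHz

# Confirm which mode is active.
sudo nvpmodel -q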






