There are 2 articles on TensorFlow for 2021-10-12.

言語処理100本ノック (2020) - 74: Measuring Accuracy (TensorFlow)

This is my record of problem 74, "Measuring Accuracy", from "Chapter 8: Neural Networks" of 言語処理100本ノック 2020 (Rev 2). Building on the previous knock, we compute the accuracy on the training data and on the evaluation data. With Keras this is very easy. The article "まとめ: 言語処理100本ノックで学べることと成果" summarizes 言語処理100本ノック 2015; I will also update it with the differences for 言語処理100本ノック 2020 (Rev 2).

Reference links
- 74_正解率の計測.ipynb: GitHub link to the answer notebook
- 言語処理100本ノック 2020 第8章: ニューラルネット: reference for solving the problems (uses PyTorch, though)
- 【言語処理100本ノック 2020】第8章: ニューラルネット: reference for solving the problems (uses PyTorch, though)
- まとめ: 言語処理100本ノックで学べることと成果: my summary article for the 100 knocks
- 【Keras入門(4)】Kerasの評価関数(Metrics): how to use the Keras evaluation metrics

Environment
Since a GPU will be needed later on, I used Google Colaboratory. Newer versions of Python and its packages exist, but since I am not using any new features, I kept the preinstalled versions as they are.
- Python 3.7.12: Google Colaboratory version
- google 2.0.3: used to mount Google Drive
- tensorflow 2.6.0: main deep learning processing

Chapter 8: Neural Networks
What you learn: learn how to use a deep learning framework and implement category classification based on neural networks.
Knock description: using the news article category classification from Chapter 6 as the subject, implement a category classification model with a neural network. In this chapter, use a machine learning platform such as PyTorch, TensorFlow, or Chainer.

74. Measuring Accuracy
Using the matrix obtained in problem 73, classify the examples in the training data and in the evaluation data, and compute the accuracy for each.

Answer
Results
The accuracy on the training data is 0.8460. For a 4-class classification, that is not bad.

Accuracy on the training data:
167/167 [==============================] - 2s 8ms/step - loss: 0.4576 - acc: 0.8460
[0.4576301872730255, 0.8460314273834229]

The accuracy on the evaluation data is 0.8488.

Result on the evaluation data:
21/21 [==============================] - 0s 5ms/step - loss: 0.4579 - acc: 0.8488
[0.4579184055328369, 0.848802387714386]

Answer program 74_正解率の計測.ipynb
The GitHub version also includes verification code, but only the essential parts are shown here.

import tensorflow as tf
from google.colab import drive

drive.mount('/content/drive')

def _parse_function(example_proto):
    # Feature description
    feature_description = {
        'title': tf.io.FixedLenFeature([], tf.string),
        'category': tf.io.FixedLenFeature([], tf.string)}

    # Parse the input tf.Example using the description above
    features = tf.io.parse_single_example(example_proto, feature_description)
    X = tf.io.decode_raw(features['title'], tf.float32)
    y = tf.io.decode_raw(features['category'], tf.int32)
    return X, y

BASE_PATH = '/content/drive/MyDrive/ColabNotebooks/ML/NLP100_2020/08.NeuralNetworks/'

def get_dataset(file_name):
    ds_raw = tf.data.TFRecordDataset(BASE_PATH+file_name+'.tfrecord')
    return ds_raw.map(_parse_function).shuffle(1000).batch(64)

train_ds = get_dataset('train')
test_ds = get_dataset('test')

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(
        4, activation='softmax', use_bias=False, input_dim=300,
        kernel_initializer='random_uniform')
    ])

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['acc'])

model.summary()

model.fit(train_ds, epochs=100)

model.evaluate(train_ds)
model.evaluate(test_ds)

Explanation
Accuracy output
Pass acc to the metrics parameter of the compile function to report accuracy. I re-read my own earlier article "【Keras入門(4)】Kerasの評価関数(Metrics)" on Keras metrics.

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['acc'])

Evaluation
Use the evaluate function to compute the accuracy on the training data and on the evaluation data.

model.evaluate(train_ds)
model.evaluate(test_ds)
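As a cross-check of what evaluate reports, the accuracy can also be recomputed by hand from the model's predictions. The following is only a small sketch, not part of the original notebook: it assumes the model and test_ds defined above, with one-hot labels of length 4 (as the use of categorical_crossentropy suggests).

import numpy as np

# Recompute the accuracy that model.evaluate() reports by hand.
correct = 0
total = 0
for X_batch, y_batch in test_ds:
    probs = model.predict_on_batch(X_batch)       # predicted class probabilities
    pred = np.argmax(probs, axis=1)               # predicted class per example
    true = np.argmax(y_batch.numpy(), axis=1)     # true class per example
    correct += int(np.sum(pred == true))
    total += len(pred)

print('accuracy:', correct / total)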

Using MediaPipe + TensorFlow.js, I created a virtual tracker that runs on SteamVR with only one webcam.

Japanese version
A Japanese article with the same content is posted on note: https://note.com/kamatari_san/n/n866915eede41

Introduction
The following tweet was posted by the official TensorFlow account the other day.
https://twitter.com/TensorFlow/status/1432373876235898881
Looking at the link and the demo in the tweet, I was interested to see that you can get the Z coordinate in 3D (Y-up).

1. How do we get SteamVR to recognize it?
Valve has released a library and specifications for creating HMDs, controllers, and trackers as OpenVR (link below).
https://github.com/ValveSoftware/openvr
However, using it directly seems quite troublesome. Fortunately, there is a driver called Virtual Motion Tracker (VMT): its author created a driver that SteamVR recognizes as a virtual tracker when you send it coordinates, posture (rotation), and so on over the OSC protocol.
https://qiita.com/gpsnmeajp/items/9c41654e6c89c6b9702f
https://gpsnmeajp.github.io/VirtualMotionTrackerDocument/
I installed and configured the above. (Note: please do not contact the author of VMT about this article.)

2. Get the official demo of MediaPipe BlazePose GHUM + TensorFlow.js (TFJS)
Get the TFJS code from the repository below.
https://github.com/tensorflow/tfjs-models
The demo code lives at the link below.
https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/demos/live_video/src
If you just want to run the official demo, build the demo code obtained above with the yarn watch command in a development environment (node.js and yarn installed) and it will work.

3. Editing the demo code (1)
Unfortunately, there is no webcam selector in this demo code. First, I modified camera.js around line 81 so that an arbitrary webcam can be specified. The code is still at the validation stage and very messy, so I don't want to show too much of it; for now I simply hard-code the device ID of my webcam. (LOL)

4. Editing the demo code (2)
It would be nice if the demo could send the OSC protocol to VMT directly, but it cannot, so I send the Keypoint3D information generated by TFJS to another server over WebSocket. I added the WebSocket transmission to the drawKeypoints function around line 168. Naturally, we also need code to establish the WebSocket connection.

5. Creating a server that receives the keypoints over WebSocket and sends the OSC protocol
In this part, we receive the Keypoint3D information over WebSocket and adjust it a little so that it makes sense in VR space. An overview of the keypoints is described at the following link.
https://google.github.io/mediapipe/solutions/pose.html

5-1. Rotation around the X axis
First of all, depending on where the webcam is placed, it is often tilted downward so that the whole body fits in the frame. This shifts the Z coordinate around the X axis, so the points need to be rotated back. This can be done with trigonometric functions, so I will not go into detail.

5-2. Recalculate the Y position
The origin of the keypoints is the hips (the midpoint of P23 and P24). As a result, the Y coordinates of the lower body are negative when standing upright, while those of the upper body are positive. Since this is difficult to handle in VR, I currently find the minimum Y value among P0 to P32 and use it as an offset. (Also, since I mainly wanted to use this with VRChat, I adjusted the values somewhat to make them easier for VRChat to handle. For the trackers, I created "waist", "left leg", and "right leg", which are the ones VRChat needs.) A small sketch of the corrections in 5-1 and 5-2 follows below.
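Purely as an illustration of the two corrections described in 5-1 and 5-2 (the article does not show the server code), here is a small Python sketch. The keypoint layout (33 points of (x, y, z) with the hip midpoint as origin and Y up) follows the MediaPipe page linked above; the function name and the camera tilt angle are hypothetical.

import math

def correct_keypoints(keypoints, tilt_deg):
    # `keypoints`: 33 (x, y, z) tuples in the BlazePose convention
    # (origin at the hip midpoint, Y up).
    # `tilt_deg`: hypothetical downward tilt angle of the webcam.
    t = math.radians(tilt_deg)

    # 5-1: rotate every point around the X axis to undo the camera tilt.
    rotated = [(x,
                y * math.cos(t) - z * math.sin(t),
                y * math.sin(t) + z * math.cos(t))
               for x, y, z in keypoints]

    # 5-2: points below the hips have negative Y, so shift everything up
    # by the lowest Y value so that the floor ends up at 0.
    y_min = min(y for _, y, _ in rotated)
    return [(x, y - y_min, z) for x, y, z in rotated]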
5-3. Send the OSC protocol to VMT
Once this is done, all that is left is to send the necessary values to VMT via the OSC protocol. (I used the osc-min package this time because it was easy to use; a rough sketch of the message appears at the end of this article.) I am still in the middle of trial and error for the quaternions that describe the posture; however, I believe the direction of the feet can be obtained from P28-P30-P32 and P27-P29-P31.

6. Execution result
The following link shows the result of running it in VRChat.
https://twitter.com/kamatari_san/status/1447092158318579721?s=20
(Note: because this is a capture of a previous version of this setup, some of the movements are a bit awkward and the left and right feet are swapped.)
The following link is a capture from a game called Thief Simulator VR, in which I could not crouch until now because I had no waist tracker.
https://twitter.com/kamatari_san/status/1445765307142848525?s=20

7. System configuration diagram
The following is a (poorly drawn) diagram of the system.

Acknowledgements
MediaPipe: https://google.github.io/mediapipe/solutions/pose.html
TensorFlow.js: https://blog.tensorflow.org/2021/08/3d-pose-detection-with-mediapipe-blazepose-ghum-tfjs.html
Virtual Motion Tracker: https://gpsnmeajp.github.io/VirtualMotionTrackerDocument/
osc-min: https://www.npmjs.com/package/osc-min
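To make 5-3 a little more concrete: the server code is not shown in this article, so here is a minimal sketch of sending one tracker pose to VMT. It swaps in the python-osc package instead of the osc-min package used in the article (Python instead of Node.js), and the /VMT/Room/Unity address, argument order, and port 39570 are my reading of the VMT documentation linked above; treat all of these as assumptions and verify them against the VMT docs before use.

from pythonosc.udp_client import SimpleUDPClient

# VMT's default OSC receive port (assumed; check the VMT documentation).
client = SimpleUDPClient('127.0.0.1', 39570)

def send_tracker(index, x, y, z, qx, qy, qz, qw):
    # enable=1 registers the tracker in room space; 0.0 is the time offset.
    client.send_message('/VMT/Room/Unity',
                        [int(index), 1, 0.0,
                         float(x), float(y), float(z),
                         float(qx), float(qy), float(qz), float(qw)])

# Example: a "waist" tracker at 1 m height with an identity rotation.
send_tracker(0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0)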