Python + OpenCV Hand Gesture Detection and Recognition with Mediapipe: The Basics
Preface
This article is aimed at readers who are just getting started with OpenCV. It shows how to use Python with OpenCV for image capture, together with the powerful Mediapipe library, to implement hand detection and gesture recognition. Later posts in this series will build various projects on top of Mediapipe hand tracking, so stay tuned!
Project Results
Video capture runs at a stable 25-30 FPS.
Getting to Know Mediapipe
At the core of this project is the powerful Mediapipe library, an open-source project from Google:
| Feature | Description |
|---|---|
| Face detection (FaceMesh) | Reconstructs a 3D mesh of the face from images/video |
| Person segmentation | Separates the person from the background in images/video |
| Hand tracking | 3D coordinates of 21 hand landmarks |
| 3D pose estimation | 3D coordinates of 33 body landmarks |
| Hair segmentation (coloring) | Detects hair in the frame and applies color to it |
Mediapipe Dev: https://mediapipe.dev/
These are some of Mediapipe's most commonly used solutions, and each of them will be covered step by step in later posts in this series.
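As a quick orientation (my addition, not from the original article), the sketch below shows how these solutions are exposed in the Python `mediapipe` package under the 0.8.x solutions API; hair segmentation is omitted because, to my knowledge, it is not exposed as a Python solution module in this version.

```python
import mediapipe as mp

# Each ready-made pipeline lives under the mp.solutions namespace (0.8.x API).
face_mesh = mp.solutions.face_mesh.FaceMesh()                       # 3D face mesh
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation()   # person/background separation
hands = mp.solutions.hands.Hands()                                  # 21 hand landmarks
pose = mp.solutions.pose.Pose()                                     # 33 body landmarks
```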
Installing Mediapipe in Python
```
pip install mediapipe==0.8.9.1
```
Alternatively, you can install it from source via setup.py:
https://github.com/google/mediapipe
Project Environment
- Python 3.7
- Mediapipe 0.8.9.1
- Numpy 1.21.6
- OpenCV-Python 4.5.5.64
- OpenCV-contrib-Python 4.5.5.64
In testing, Python 3.8 and 3.9 also work.
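To double-check that your environment matches the versions above, here is a minimal sketch of my own (not from the original article), assuming `mediapipe` exposes `__version__` as recent releases do:

```python
import sys

import cv2
import mediapipe as mp
import numpy as np

# Print interpreter and library versions to compare against the list above.
print("Python:    ", sys.version.split()[0])
print("Mediapipe: ", mp.__version__)
print("NumPy:     ", np.__version__)
print("OpenCV:    ", cv2.__version__)
```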
Code
Core Code
OpenCV camera capture:
```python
import cv2

cap = cv2.VideoCapture(0)  # Camera index: 0 = built-in webcam (laptop), 1 = first USB camera, 2 = second USB camera

while True:
    success, img = cap.read()
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Convert the BGR frame to RGB (Mediapipe expects RGB)
    cv2.imshow("HandsImage", img)  # Display window
    cv2.waitKey(1)                 # Refresh the window (1 ms wait)
```
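One thing worth noting (my addition, not part of the original code): the loop above never releases the camera or closes the window. A hedged sketch of a cleaner exit, quitting when the q key is pressed:

```python
import cv2

cap = cv2.VideoCapture(0)

while True:
    success, img = cap.read()
    if not success:                            # camera unavailable or frame grab failed
        break
    cv2.imshow("HandsImage", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):      # press 'q' to quit
        break

cap.release()              # free the camera device
cv2.destroyAllWindows()    # close the OpenCV window
```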
Mediapipe hand detection and landmark drawing:
```python
import cv2
import mediapipe as mp

# Define and instantiate the mediapipe hands module
mpHands = mp.solutions.hands
hands = mpHands.Hands()
mpDraw = mp.solutions.drawing_utils

while True:
    success, img = cap.read()  # cap comes from the camera-capture snippet above
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Mediapipe expects RGB input
    results = hands.process(imgRGB)
    # print(results.multi_hand_landmarks)

    if results.multi_hand_landmarks:
        for handLms in results.multi_hand_landmarks:
            for id, lm in enumerate(handLms.landmark):
                # print(id, lm)
                h, w, c = img.shape
                cx, cy = int(lm.x * w), int(lm.y * h)  # Convert normalized landmark coordinates to pixels
                print(id, cx, cy)
                # if id == 4:
                cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
            # Draw the hand landmarks and their connections
            mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS)
```
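`Hands()` is used here with its default settings. As a hedged sketch (parameter names from the mediapipe 0.8.x solutions API; the values shown are illustrative, not from the original article), the detector can be tuned like this:

```python
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,       # False = treat input as a video stream and track between frames
    max_num_hands=2,               # detect at most two hands per frame
    min_detection_confidence=0.5,  # threshold for the initial palm detection
    min_tracking_confidence=0.5,   # threshold for keeping a tracked hand between frames
)
```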
Calculating the video frame rate:
```python
import time

# Timestamps for FPS calculation
pTime = 0
cTime = 0

while True:
    cTime = time.time()
    fps = 1 / (cTime - pTime)  # frames per second = 1 / time between frames
    pTime = cTime
    cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)  # FPS text size, color, etc.
```
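The instantaneous FPS above can fluctuate noticeably from frame to frame. If you want a steadier number, one option (my own sketch, not part of the original article) is an exponential moving average:

```python
import time

pTime = time.time()
fps_smooth = 0.0

while True:
    # ... grab and process a frame here ...
    cTime = time.time()
    dt = cTime - pTime
    pTime = cTime
    if dt > 0:
        # Keep 90% of the previous estimate and blend in 10% of the new reading.
        fps_smooth = 0.9 * fps_smooth + 0.1 * (1 / dt)
```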
Full Code
```python
# Author:   BIGBOSSyifi
# Datetime: 2022/4/24 21:41
# Filename: HandsDetector.py
# Tool:     PyCharm

import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)  # Camera index: 0 = built-in webcam (laptop), 1 = first USB camera, 2 = second USB camera

# Define and instantiate the mediapipe hands module
mpHands = mp.solutions.hands
hands = mpHands.Hands()
mpDraw = mp.solutions.drawing_utils

# Timestamps for FPS calculation
pTime = 0
cTime = 0

while True:
    success, img = cap.read()
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Mediapipe expects RGB input
    results = hands.process(imgRGB)
    # print(results.multi_hand_landmarks)

    if results.multi_hand_landmarks:
        for handLms in results.multi_hand_landmarks:
            for id, lm in enumerate(handLms.landmark):
                # print(id, lm)
                h, w, c = img.shape
                cx, cy = int(lm.x * w), int(lm.y * h)  # Convert normalized landmark coordinates to pixels
                print(id, cx, cy)
                # if id == 4:
                cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
            # Draw the hand landmarks and their connections
            mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS)

    # FPS calculation
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)  # FPS text size, color, etc.

    cv2.imshow("HandsImage", img)  # Display window
    cv2.waitKey(1)                 # Refresh the window (1 ms wait)
```
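If you plan to reuse the landmark extraction (for example in the follow-up volume/mouse-control posts), it may help to wrap it in a small helper. This is a hedged sketch of my own, not code from the original project; the function name `findPosition` is hypothetical:

```python
def findPosition(img, results):
    """Return a list of (landmark_id, x_px, y_px) for the first detected hand."""
    lmList = []
    if results.multi_hand_landmarks:
        handLms = results.multi_hand_landmarks[0]   # first detected hand only
        h, w, c = img.shape
        for id, lm in enumerate(handLms.landmark):
            lmList.append((id, int(lm.x * w), int(lm.y * h)))
    return lmList

# Usage inside the main loop, after hands.process(imgRGB):
# lmList = findPosition(img, results)
# if lmList:
#     print(lmList[4])  # landmark 4 is the thumb tip in the Mediapipe hand model
```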
Project Output
Conclusion
With the techniques in this article as a foundation, follow-up posts will build on them to implement gesture control of system volume and of the mouse.
Project download: https://github.com/BIGBOSS-dedsec/HandsDetection_Python
That wraps up this introduction to basic hand detection and recognition with Python, OpenCV, and Mediapipe.