[Pose Estimation] Practice notes: head pose estimation with Dlib and MediaPipe

Face pose estimation methods fall into two broad families; this post focuses on method 2.

Method 1: deep-learning-based. These methods use deep models such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs) to learn pose estimation directly from face images. They can learn richer feature representations and achieve excellent performance on large-scale datasets.

Method 2: estimating 3D pose from 2D calibration information (the computer-vision PnP problem). It involves three ingredients:
- Landmark localization: the first step is to detect and locate the key points of the face, such as the eyes, nose, and mouth. These key points capture the local structure of the face and feed the subsequent pose estimation.
- Rotation representation: common representations include Euler angles and rotation matrices. Euler angles describe the head's rotation with three angles (usually pitch, yaw, and roll); a rotation matrix is a 3x3 matrix that transforms the head from one coordinate system to another.
- 3D model reconstruction: given the located landmarks, a 3D face model can be used for pose estimation. By mapping the 2D face image onto the 3D model, the face's rotation and translation can be estimated.
最编程
2024-04-07 20:25:04
Euler angles describe an object's rotation with three angles. In head pose estimation these are usually the pitch, yaw, and roll of the head.
- Pitch: rotation about the X axis (the horizontal axis running through the ears). Lowering the head gives a positive pitch; raising it gives a negative pitch.
- Yaw: rotation about the Y axis (the vertical axis, perpendicular to the ground). Turning the head to the right gives a positive yaw; turning it to the left gives a negative yaw.
- Roll: rotation about the Z axis (pointing out through the face). Tilting the head to the right gives a positive roll; tilting it to the left gives a negative roll.
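To make the sign conventions concrete, here is a minimal plain-Python sketch of the three basic rotation matrices. The axis layout (X through the ears, Y vertical, Z out of the face) is an assumption matching the description above, not taken from the original code:

```python
import math

def rot_x(deg):  # pitch
    a = math.radians(deg)
    return [[1, 0, 0],
            [0, math.cos(a), -math.sin(a)],
            [0, math.sin(a),  math.cos(a)]]

def rot_y(deg):  # yaw
    a = math.radians(deg)
    return [[ math.cos(a), 0, math.sin(a)],
            [0, 1, 0],
            [-math.sin(a), 0, math.cos(a)]]

def rot_z(deg):  # roll
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a), 0],
            [math.sin(a),  math.cos(a), 0],
            [0, 0, 1]]

def apply(R, v):
    """Multiply a 3x3 matrix with a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# "Nose direction" pointing straight at the camera along +Z:
nose = [0.0, 0.0, 1.0]
# A 90-degree yaw swings the nose onto the +X axis:
print([round(c, 6) for c in apply(rot_y(90), nose)])  # → [1.0, 0.0, 0.0]
```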
3D model reconstruction

3D model reconstruction rests on two prerequisites; we take a monocular camera as the example here:
[Condition 1:] a calibrated camera (i.e., the camera intrinsics are known);
[Condition 2:] the positions of N 3D points on the object and the corresponding 2D projections of those points in the image.
For [Condition 1], see my column on monocular and binocular distance measurement, which covers calibration; I will skip the details here.
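If no calibration data is available at all, a rough intrinsic matrix can be approximated from the image size alone. This is only a sketch under an assumed horizontal field of view (60° here), not a substitute for real calibration:

```python
import math

def approx_intrinsics(width, height, fov_deg=60.0):
    """Approximate a pinhole intrinsic matrix from the image size.

    Assumes square pixels, the principal point at the image centre, and
    a guessed horizontal field of view. Use a real calibration whenever
    accuracy matters.
    """
    fx = width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    cx, cy = width / 2.0, height / 2.0
    return [[fx, 0.0, cx],
            [0.0, fx, cy],
            [0.0, 0.0, 1.0]]

K = approx_intrinsics(640, 480)
print(K[0][0])  # focal length in pixels, ≈ 554.3 for a 60° FOV
```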
import cv2
import numpy as np

# Fallback values when no calibration is at hand: approximate the focal
# length by the image width and the principal point by the image centre
img_size = (640, 480)
focal_length = img_size[0]
camera_center = (img_size[0] / 2, img_size[1] / 2)

# Camera intrinsic matrix (from an offline calibration)
cam_matrix = np.array([[2049.608299, 1.241852862, 1032.391255],
                       [0, 2066.791362, 550.6131349],
                       [0, 0, 1]], dtype="double")
# Distortion coefficients (k1, k2, p1, p2, k3)
dist_coeffs = np.array([0.108221558, -0.232697802, 0.002050653, -0.004714754, 0])
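As a sanity check of what the intrinsic matrix encodes, the pinhole projection u = fx·X/Z + cx, v = fy·Y/Z + cy can be applied by hand (distortion ignored; the sample 3D point is made up):

```python
import numpy as np

# Intrinsics as in the text (distortion ignored for this sketch)
K = np.array([[2049.608299, 1.241852862, 1032.391255],
              [0.0, 2066.791362, 550.6131349],
              [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """Pinhole projection of a camera-frame 3D point to pixel coords."""
    uvw = K @ np.asarray(point_3d, dtype=float)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# A point 2 m straight ahead of the camera lands on the principal point:
u, v = project(K, (0.0, 0.0, 2000.0))
print(round(u, 1), round(v, 1))  # → 1032.4 550.6
```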
The 3D coordinates for [Condition 2] follow the generic 3D face model provided by the Faculty of Sciences and Technology of the University of Coimbra.

# Key-point coordinates of the generic 3D head model
object_pts_6 = np.array([
    (0.0, 0.0, 0.0),          # Nose tip (landmark 34)
    (0.0, -330.0, -65.0),     # Chin (landmark 9)
    (-225.0, 170.0, -135.0),  # Left eye, left corner (landmark 46)
    (225.0, 170.0, -135.0),   # Right eye, right corner (landmark 37)
    (-150.0, -150.0, -125.0), # Left mouth corner (landmark 55)
    (150.0, -150.0, -125.0)   # Right mouth corner (landmark 49)
], dtype=float) / 4.5
# 14-point generic head model
object_pts_14 = np.float32([[6.825897, 6.760612, 4.402142],
[1.330353, 7.122144, 6.903745],
[-1.330353, 7.122144, 6.903745],
[-6.825897, 6.760612, 4.402142],
[5.311432, 5.485328, 3.987654],
[1.789930, 5.393625, 4.413414],
[-1.789930, 5.393625, 4.413414],
[-5.311432, 5.485328, 3.987654],
[2.005628, 1.409845, 6.165652],
[-2.005628, 1.409845, 6.165652],
[2.774015, -2.080775, 5.048531],
[-2.774015, -2.080775, 5.048531],
[0.000000, -3.116408, 6.097667],
[0.000000, -7.415691, 4.070434]])
# 68-point generic head model
object_pts_68 = np.array([
[-73.393523, -29.801432, -47.667532],
[-72.775014, -10.949766, -45.909403],
[-70.533638, 7.929818, -44.84258 ],
[-66.850058, 26.07428 , -43.141114],
[-59.790187, 42.56439 , -38.635298],
[-48.368973, 56.48108 , -30.750622],
[-34.121101, 67.246992, -18.456453],
[-17.875411, 75.056892, -3.609035],
[ 0.098749, 77.061286, 0.881698],
[ 17.477031, 74.758448, -5.181201],
[ 32.648966, 66.929021, -19.176563],
[ 46.372358, 56.311389, -30.77057 ],
[ 57.34348 , 42.419126, -37.628629],
[ 64.388482, 25.45588 , -40.886309],
[ 68.212038, 6.990805, -42.281449],
[ 70.486405, -11.666193, -44.142567],
[ 71.375822, -30.365191, -47.140426],
[-61.119406, -49.361602, -14.254422],
[-51.287588, -58.769795, -7.268147],
[-37.8048 , -61.996155, -0.442051],
[-24.022754, -61.033399, 6.606501],
[-11.635713, -56.686759, 11.967398],
[ 12.056636, -57.391033, 12.051204],
[ 25.106256, -61.902186, 7.315098],
[ 38.338588, -62.777713, 1.022953],
[ 51.191007, -59.302347, -5.349435],
[ 60.053851, -50.190255, -11.615746],
[ 0.65394 , -42.19379 , 13.380835],
[ 0.804809, -30.993721, 21.150853],
[ 0.992204, -19.944596, 29.284036],
[ 1.226783, -8.414541, 36.94806 ],
[-14.772472, 2.598255, 20.132003],
[ -7.180239, 4.751589, 23.536684],
[ 0.55592 , 6.5629 , 25.944448],
[ 8.272499, 4.661005, 23.695741],
[ 15.214351, 2.643046, 20.858157],
[-46.04729 , -37.471411, -7.037989],
[-37.674688, -42.73051 , -3.021217],
[-27.883856, -42.711517, -1.353629],
[-19.648268, -36.754742, 0.111088],
[-28.272965, -35.134493, 0.147273],
[-38.082418, -34.919043, -1.476612],
[ 19.265868, -37.032306, 0.665746],
[ 27.894191, -43.342445, -0.24766 ],
[ 37.437529, -43.110822, -1.696435],
[ 45.170805, -38.086515, -4.894163],
[ 38.196454, -35.532024, -0.282961],
[ 28.764989, -35.484289, 1.172675],
[-28.916267, 28.612716, 2.24031 ],
[-17.533194, 22.172187, 15.934335],
[ -6.68459 , 19.029051, 22.611355],
[ 0.381001, 20.721118, 23.748437],
[ 8.375443, 19.03546 , 22.721995],
[ 18.876618, 22.394109, 15.610679],
[ 28.794412, 28.079924, 3.217393],
[ 19.057574, 36.298248, 14.987997],
[ 8.956375, 39.634575, 22.554245],
[ 0.381549, 40.395647, 23.591626],
[ -7.428895, 39.836405, 22.406106],
[-18.160634, 36.677899, 15.121907],
[-24.37749 , 28.677771, 4.785684],
[ -6.897633, 25.475976, 20.893742],
[ 0.340663, 26.014269, 22.220479],
[ 8.444722, 25.326198, 21.02552 ],
[ 24.474473, 28.323008, 5.712776],
[ 8.449166, 30.596216, 20.671489],
[ 0.205322, 31.408738, 21.90367 ],
[ -7.198266, 30.844876, 20.328022]])
# 3D reference points used to visualise the recovered pose by
# re-projecting them into the image
reprojectsrc = np.float32([[10.0, 10.0, 10.0],
                           [10.0, -10.0, 10.0],
                           [-10.0, 10.0, 10.0],
                           [-10.0, -10.0, 10.0]])
With the two prerequisites in place, the code below performs the 3D reconstruction from the detected key points. The steps are:
- extract the point coordinates needed for pose estimation from the face_landmark_localization detection result (points in the image coordinate system);
- cv2.solvePnP takes a set of corresponding 3D and 2D coordinates plus the camera intrinsics cam_matrix and dist_coeffs, and recovers the extrinsics of the image;
- cv2.projectPoints takes 3D coordinates and a known geometric transform and computes the projected 2D coordinates.
def get_head_pose(landmarks_part, point_number):
    """Extract the points needed for pose estimation from the
    face_landmark_localization result and solve for the head pose."""
    if point_number == 14:
        image_pts = np.float32([landmarks_part[17], landmarks_part[21], landmarks_part[22], landmarks_part[26],
                                landmarks_part[36], landmarks_part[39], landmarks_part[42], landmarks_part[45],
                                landmarks_part[31], landmarks_part[35], landmarks_part[48], landmarks_part[54],
                                landmarks_part[57], landmarks_part[8]])
        # solvePnP takes matched 3D/2D points plus the intrinsics
        # cam_matrix and dist_coeffs, and recovers the extrinsics
        _, rotation_vec, translation_vec = cv2.solvePnP(object_pts_14, image_pts, cam_matrix, dist_coeffs)
    elif point_number == 6:
        # The model comments number landmarks 1-based (34, 9, 46, 37,
        # 55, 49); dlib indexing is 0-based, hence the indices below
        image_pts = np.float32([landmarks_part[30], landmarks_part[8], landmarks_part[36],
                                landmarks_part[45], landmarks_part[48], landmarks_part[54]])
        _, rotation_vec, translation_vec = cv2.solvePnP(object_pts_6, image_pts, cam_matrix, dist_coeffs)
    else:
        image_pts = np.float32(landmarks_part)
        _, rotation_vec, translation_vec = cv2.solvePnP(object_pts_68, image_pts, cam_matrix, dist_coeffs)
    # projectPoints maps the given 3D points through the recovered
    # transform back into 2D image coordinates
    reprojectdst, _ = cv2.projectPoints(reprojectsrc, rotation_vec, translation_vec, cam_matrix,
                                        dist_coeffs)
    reprojectdst = tuple(map(tuple, reprojectdst.reshape(4, 2)))
    # calc euler angle
    rotation_mat, _ = cv2.Rodrigues(rotation_vec)
    pose_mat = cv2.hconcat((rotation_mat, translation_vec))
    _, _, _, _, _, _, euler_angle = cv2.decomposeProjectionMatrix(pose_mat)
    return reprojectdst, euler_angle
Pose estimation

The key building blocks above are now in place; what remains is to wire them together. The flow is:

graph TD
    Start --> LoadIntrinsics[Load the camera intrinsic matrix]
    Start --> LoadModel[Load the generic 3D head model]
    Start --> OpenCamera[Open the camera and grab frames]
    OpenCamera --> Landmarks[Get the facial landmark coordinates]
    Landmarks --> PnP[Solve the PnP problem]
    LoadIntrinsics --> PnP
    LoadModel --> PnP
    PnP --> Euler[Convert the rotation vector to Euler angles]
def angle_xyz(point_x, point_y, point_z):
    """Map Euler angles to a coarse head status:
    1. X: head raised / lowered
    2. Y: head turned left / right
    3. Z: head tilted left / right
    """
    if point_x > 5:
        point_x_status = "down"
    elif point_x < -5:
        point_x_status = "up"
    else:
        point_x_status = " "
    if point_y > 5:
        point_y_status = "right"
    elif point_y < -5:
        point_y_status = "left"
    else:
        point_y_status = " "
    if point_z > 5:
        point_z_status = "left"
    elif point_z < -5:
        point_z_status = "right"
    else:
        point_z_status = " "
    return point_x_status, point_y_status, point_z_status
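As a quick sanity check of the ±5° dead zone, the same logic can be restated with a configurable threshold (a small design tweak for illustration, not the original code):

```python
# Self-contained restatement of the ±5° dead-zone logic above, written
# as a small helper so the threshold is configurable (the original
# hard-codes 5 degrees).
def axis_status(angle, pos_label, neg_label, threshold=5.0):
    if angle > threshold:
        return pos_label
    if angle < -threshold:
        return neg_label
    return " "

def angle_xyz(x, y, z):
    return (axis_status(x, "down", "up"),
            axis_status(y, "right", "left"),
            axis_status(z, "left", "right"))

print(angle_xyz(12.0, -8.0, 1.0))  # → ('down', 'left', ' ')
```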
if __name__ == "__main__":
cap = cv2.VideoCapture(0)
while cap.isOpened():
success, src_img = cap.read()
if not success:
print("Ignoring empty camera frame.")
continue
else:
landmarks_part = draw_landmarks(src_img)
if len(landmarks_part) == 68:
reprojectdst, euler_angle = get_head_pose(landmarks_part, 14)
point_x = euler_angle[0][0]
point_y = euler_angle[1][0]
point_z = euler_angle[2][0]
x_status, y_status, z_status = angle_xyz(point_x, point_y, point_z)
cv2.putText(src_img, str("X: %.2f Status: %s" % (point_x, x_status)), (5, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
cv2.putText(src_img, str('Y: %.2f Status: %s' % (point_y, y_status)), (5, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
cv2.putText(src_img, str('Z: %.2f Status: %s' % (point_z, z_status)), (5, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
cv2.imshow("src_img", src_img)
cv2.waitKey(1)
Closing remarks

Starting from the underlying principles, this post walked through the classical algorithm for head pose estimation: locating facial key points and extracting their coordinates, setting the camera intrinsics, defining the 3D coordinates of a generic head model, reconstructing 3D pose from 2D coordinates, and estimating the head orientation. This article was put together somewhat hastily, so corrections to any mistakes are welcome! If you need the code, send me a private message; it is free!