OpenCV Stereo Calibration (example code questions)
At the moment I am implementing the calibration method(s) for stereo vision, using the OpenCV library.
There is an example in the samples folder, but I have some questions about the implementation:
What are these arrays and CvMat variables for?
// Arrays and vector storage:
double M1[3][3], M2[3][3], D1[5], D2[5];
double R[3][3], T[3], E[3][3], F[3][3];
CvMat _M1 = cvMat(3, 3, CV_64F, M1);
CvMat _M2 = cvMat(3, 3, CV_64F, M2);
CvMat _D1 = cvMat(1, 5, CV_64F, D1);
CvMat _D2 = cvMat(1, 5, CV_64F, D2);
CvMat _R = cvMat(3, 3, CV_64F, R);
CvMat _T = cvMat(3, 1, CV_64F, T);
CvMat _E = cvMat(3, 3, CV_64F, E);
CvMat _F = cvMat(3, 3, CV_64F, F);

In other examples I see code like this:
//--------find and draw the chessboard--------
if ((frame++ % 20) == 0) {
    //----------------cam1----------------
    result1 = cvFindChessboardCorners(frame1, board_sz, &temp1[0], &count1,
                                      CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
    cvCvtColor(frame1, gray_fr1, CV_BGR2GRAY);

What does the if statement do? Why % 20?
Thanks in advance!
Update:
I have 2 questions about the implementation code: link
-1: The nx and ny variables are declared at line 18 and used in the board_sz variable at line 25. Are nx and ny the rows and columns, or the corners in the chessboard pattern? (I think they are the rows and columns, because CvSize has the parameters width and height.)
-2: What are these CvMat variables for (lines 143 - 146)?
CvMat _objectPoints = cvMat(1, n, CV_32FC3, &objectPoints[0]);
CvMat _imagePoints1 = cvMat(1, n, CV_32FC2, &points[0][0]);
CvMat _imagePoints2 = cvMat(1, n, CV_32FC2, &points[1][0]);
CvMat _npoints = cvMat(1, npoints.size(), CV_32S, &npoints[0]);
Each of these matrices has a meaning in epipolar geometry. Together they describe the relation between the two cameras in 3D space and between the images they record.
In your example, they are:
- M1 - intrinsic camera matrix of the left camera
- M2 - intrinsic camera matrix of the right camera
- D1 - distortion coefficients of the left camera
- D2 - distortion coefficients of the right camera
- R - rotation matrix from the right to the left camera
- T - translation vector from the right to the left camera
- E - essential matrix of the stereo setup
- F - fundamental matrix of the stereo setup
On the basis of these matrices, you can undistort and rectify the images, which allows you to extract the depth of a point seen in both images by way of its disparity (the difference in x, basically). Finding a point in both images is called matching, and it is the last step after rectification.
Any introduction to epipolar geometry and stereo vision will be better than anything I could type here. I recommend the Learning OpenCV book, from which the example code is taken and which goes into great detail explaining the basics.
The second part of your question has already been answered in a comment: (frame++ % 20) == 0 is true for every 20th frame recorded from the webcam, so the code in the if clause is executed once per 20 frames.
Response to the update:
nx and ny are the number of inner corners in the chessboard pattern in your calibration images. For a "normal" 8x8 chessboard, nx = ny = 7. You can see in lines 138-139 that the points of one ideal chessboard are created by offsetting nx*ny points by a distance of squareSize, the size of one square in your chessboard.
The CvMat variables objectPoints, imagePoints and npoints are passed to the cvStereoCalibrate function:
- objectPoints contains the points of the calibration object (the chessboard)
- imagePoints1/2 contain these points as seen by each of the cameras
- npoints contains the number of points in each image (as an M-by-1 matrix) - feel free to ignore it, it's not used in the OpenCV C++ API anymore anyway.
Basically, cvStereoCalibrate fits the imagePoints to the objectPoints and returns 1) the distortion coefficients, 2) the intrinsic camera matrices and 3) the spatial relation of the two cameras as the rotation matrix R and translation vector T. The first are used to undistort your images, the second relate pixel coordinates to real-world coordinates, and the third allows you to rectify the two images.
As a side note: I remember having trouble with stereo calibration because the chessboard orientation was detected differently in the left and right camera images. This shouldn't be a problem unless you have a large angle between your cameras (which isn't a great idea anyway) or you incline your chessboards a lot (which isn't necessary), but you can still keep an eye out for it.