Commit e5917a8f authored by Alexander Alekhin

Merge pull request #13580 from LaurentBerger:PythonStitch2

parents 40959fcc 49a43dfc
...
@@ -96,10 +96,63 @@ or (dataset from professional book scanner):
The examples above expect a POSIX platform; on Windows you have to provide all file names explicitly
(e.g. `boat1.jpg` `boat2.jpg`...) as the Windows command line does not support `*` expansion.
Stitching detailed (python opencv >4.0.1)
--------
If you want to study the internals of the stitching pipeline or you want to experiment with detailed
configuration, you can use the stitching_detailed source code, available in C++ or Python.
<H4>stitching_detailed</H4>
@add_toggle_cpp
[stitching_detailed.cpp](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/stitching_detailed.cpp)
@end_toggle
@add_toggle_python
[stitching_detailed.py](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/stitching_detailed.py)
@end_toggle
The stitching_detailed program reads its stitching parameters from the command line; many parameters exist. The following example shows some of the possible command-line parameters:
boat5.jpg boat2.jpg boat3.jpg boat4.jpg boat1.jpg boat6.jpg --work_megapix 0.6 --features orb --matcher homography --estimator homography --match_conf 0.3 --conf_thresh 0.3 --ba ray --ba_refine_mask xxxxx --save_graph test.txt --wave_correct no --warp fisheye --blend multiband --expos_comp no --seam gc_colorgrad
![](images/fisheye.jpg)
Pairwise images are matched using a homography (--matcher homography), and a homography is used for transformation estimation as well (--estimator homography).
The confidence for the feature matching step is 0.3 (--match_conf 0.3). You can decrease this value if you have difficulties matching images.
The threshold for the confidence that two images are from the same panorama is 0.3 (--conf_thresh 0.3). You can decrease this value too if matching fails.
The bundle adjustment cost function is ray (--ba ray).
The refinement mask for bundle adjustment is xxxxx (--ba_refine_mask xxxxx), where 'x' means refine the respective parameter and '_' means do not refine it, in the following format: fx, skew, ppx, aspect, ppy.
The matches graph, represented in DOT language, is saved to test.txt (--save_graph test.txt); it can be visualized with Graphviz (e.g. gvedit). In the labels, Nm is the number of matches, Ni is the number of inliers, and C is the confidence.
![](images/gvedit.jpg)
Wave effect correction is disabled (--wave_correct no).
The warp surface type is fisheye (--warp fisheye).
The blending method is multiband (--blend multiband).
Exposure compensation is not used (--expos_comp no).
The seam estimation method is the minimum graph cut-based seam finder (--seam gc_colorgrad).
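As a rough illustration, here is a minimal sketch of how some of these flags map to objects in OpenCV's Python `detail` API. This is an assumption-laden sketch, not the full pipeline: binding names such as `cv.detail_BundleAdjusterRay` follow the `cv.detail_*` convention used elsewhere in this commit, and the warper scale of 1.0 is purely illustrative.

```python
import cv2 as cv
import numpy as np

# Sketch: detail-API objects corresponding to some of the flags above.
finder = cv.ORB.create()                                       # --features orb
matcher = cv.detail.BestOf2NearestMatcher_create(False, 0.3)   # homography matcher, --match_conf 0.3
adjuster = cv.detail_BundleAdjusterRay()                       # --ba ray (binding name assumed)
adjuster.setConfThresh(0.3)                                    # --conf_thresh 0.3
adjuster.setRefinementMask(np.ones((3, 3), np.uint8))          # --ba_refine_mask xxxxx (refine all)
warper = cv.PyRotationWarper('fisheye', 1.0)                   # --warp fisheye (scale illustrative)
blender = cv.detail_MultiBandBlender()                         # --blend multiband
```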
You can also use these arguments on the command line:
boat5.jpg boat2.jpg boat3.jpg boat4.jpg boat1.jpg boat6.jpg --work_megapix 0.6 --features orb --matcher homography --estimator homography --match_conf 0.3 --conf_thresh 0.3 --ba ray --ba_refine_mask xxxxx --wave_correct horiz --warp compressedPlaneA2B1 --blend multiband --expos_comp channels_blocks --seam gc_colorgrad
You will get:
![](images/compressedPlaneA2B1.jpg)
For images captured using a scanner or a drone (affine motion), you can use these arguments on the command line:
newspaper1.jpg newspaper2.jpg --work_megapix 0.6 --features surf --matcher affine --estimator affine --match_conf 0.3 --conf_thresh 0.3 --ba affine --ba_refine_mask xxxxx --wave_correct no --warp affine
![](images/affinepano.jpg)
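For this affine case, the high-level Stitcher API also offers a SCANS mode; a minimal sketch, assuming the two sample newspaper images are in the current directory:

```python
import cv2 as cv

# Sketch: high-level SCANS-mode stitching for affine-motion inputs.
imgs = [cv.imread(name) for name in ('newspaper1.jpg', 'newspaper2.jpg')]
stitcher = cv.Stitcher.create(cv.Stitcher_SCANS)
status, pano = stitcher.stitch(imgs)
if status == cv.Stitcher_OK:
    cv.imwrite('affinepano.jpg', pano)
else:
    print('stitching failed, status', status)
```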
You can find all the images at https://github.com/opencv/opencv_extra/tree/master/testdata/stitching
...
@@ -73,7 +73,7 @@ public:
@param corners Source images top-left corners
@param sizes Source image sizes
*/
CV_WRAP virtual void prepare(const std::vector<Point> &corners, const std::vector<Size> &sizes);
/** @overload */
CV_WRAP virtual void prepare(Rect dst_roi);
/** @brief Processes the image.
...
@@ -120,6 +120,8 @@ final transformation for each camera.
*/
class CV_EXPORTS_W AffineBasedEstimator : public Estimator
{
public:
    CV_WRAP AffineBasedEstimator(){}
private:
    virtual bool estimate(const std::vector<ImageFeatures> &features,
                          const std::vector<MatchesInfo> &pairwise_matches,
...
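The added public CV_WRAP constructor presumably makes this estimator constructible from Python; a minimal sketch, assuming the `cv.detail_AffineBasedEstimator` name that CV_WRAP generates for wrapped detail classes:

```python
import cv2 as cv

# Sketch: with the CV_WRAP constructor above, the affine estimator
# should be creatable directly from Python (binding name assumed).
estimator = cv.detail_AffineBasedEstimator()
```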
...
@@ -133,7 +133,6 @@ void Blender::blend(InputOutputArray dst, InputOutputArray dst_mask)
dst_mask_.release();
}
void FeatherBlender::prepare(Rect dst_roi)
{
Blender::prepare(dst_roi);
...
@@ -231,7 +230,6 @@ MultiBandBlender::MultiBandBlender(int try_gpu, int num_bands, int weight_type)
weight_type_ = weight_type;
}
void MultiBandBlender::prepare(Rect dst_roi)
{
dst_roi_final_ = dst_roi;
...
...
@@ -83,8 +83,11 @@ parser.add_argument('--seam_megapix',action = 'store', default = 0.1,help=' Reso
parser.add_argument('--seam',action = 'store', default = 'no',help='Seam estimation method. The default is "gc_color".',type=str,dest = 'seam' )
parser.add_argument('--compose_megapix',action = 'store', default = -1,help='Resolution for compositing step. Use -1 for original resolution.',type=float,dest = 'compose_megapix' )
parser.add_argument('--expos_comp',action = 'store', default = 'no',help='Exposure compensation method. The default is "gain_blocks".',type=str,dest = 'expos_comp' )
parser.add_argument('--expos_comp_nr_feeds',action = 'store', default = 1,help='Number of exposure compensation feeds.',type=np.int32,dest = 'expos_comp_nr_feeds' )
parser.add_argument('--expos_comp_nr_filtering',action = 'store', default = 2,help='Number of filtering iterations of the exposure compensation gains.',type=float,dest = 'expos_comp_nr_filtering' )
parser.add_argument('--expos_comp_block_size',action = 'store', default = 32,help='Block size in pixels used by the exposure compensator.',type=np.int32,dest = 'expos_comp_block_size' )
parser.add_argument('--blend',action = 'store', default = 'multiband',help='Blending method. The default is "multiband".',type=str,dest = 'blend' )
parser.add_argument('--blend_strength',action = 'store', default = 5,help='Blending strength from [0,100] range.',type=np.int32,dest = 'blend_strength' )
parser.add_argument('--output',action = 'store', default = 'result.jpg',help='The default is "result.jpg"',type=str,dest = 'output' )
parser.add_argument('--timelapse',action = 'store', default = None,help='Output warped images separately as frames of a time lapse movie, with "fixed_" prepended to input file names.',type=str,dest = 'timelapse' )
parser.add_argument('--rangewidth',action = 'store', default = -1,help='uses range_width to limit number of images to match with.',type=int,dest = 'rangewidth' )
...
@@ -119,10 +122,16 @@ elif args.expos_comp=='gain':
    expos_comp_type = cv.detail.ExposureCompensator_GAIN
elif args.expos_comp=='gain_blocks':
    expos_comp_type = cv.detail.ExposureCompensator_GAIN_BLOCKS
elif args.expos_comp=='channel':
    expos_comp_type = cv.detail.ExposureCompensator_CHANNELS
elif args.expos_comp=='channel_blocks':
    expos_comp_type = cv.detail.ExposureCompensator_CHANNELS_BLOCKS
else:
    print("Bad exposure compensation method")
    exit()
expos_comp_nr_feeds = args.expos_comp_nr_feeds
expos_comp_nr_filtering = args.expos_comp_nr_filtering
expos_comp_block_size = args.expos_comp_block_size
match_conf = args.match_conf
seam_find_type = args.seam
blend_type = args.blend
...
@@ -180,7 +189,7 @@ for name in img_names:
    img = cv.resize(src=full_img, dsize=None, fx=seam_scale, fy=seam_scale, interpolation=cv.INTER_LINEAR_EXACT)
    images.append(img)
if matcher_type== "affine":
    matcher = cv.detail_AffineBestOf2NearestMatcher(False, try_cuda, match_conf)
elif range_width==-1:
    matcher = cv.detail.BestOf2NearestMatcher_create(try_cuda, match_conf)
else:
...
@@ -189,14 +198,14 @@ p=matcher.apply2(features)
matcher.collectGarbage()
if save_graph:
    f = open(save_graph_to,"w")
    f.write(cv.detail.matchesGraphAsString(img_names, p, conf_thresh))
    f.close()
indices=cv.detail.leaveBiggestComponent(features,p,0.3)
img_subset =[]
img_names_subset=[]
full_img_sizes_subset=[]
num_images=len(indices)
for i in range(len(indices)):
    img_names_subset.append(img_names[indices[i,0]])
    img_subset.append(images[indices[i,0]])
    full_img_sizes_subset.append(full_img_sizes[indices[i,0]])
...
@@ -273,26 +282,33 @@ for i in range(0,num_images):
    masks.append(um)
warper = cv.PyRotationWarper(warp_type,warped_image_scale*seam_work_aspect)  # can the warper be nullptr?
for idx in range(0,num_images):
    K = cameras[idx].K().astype(np.float32)
    swa = seam_work_aspect
    K[0,0] *= swa
    K[0,2] *= swa
    K[1,1] *= swa
    K[1,2] *= swa
    corner,image_wp =warper.warp(images[idx],K,cameras[idx].R,cv.INTER_LINEAR, cv.BORDER_REFLECT)
    corners.append(corner)
    sizes.append((image_wp.shape[1],image_wp.shape[0]))
    images_warped.append(image_wp)
    p,mask_wp =warper.warp(masks[idx],K,cameras[idx].R,cv.INTER_NEAREST, cv.BORDER_CONSTANT)
    masks_warped.append(mask_wp.get())
images_warped_f=[]
for img in images_warped:
    imgf=img.astype(np.float32)
    images_warped_f.append(imgf)
if cv.detail.ExposureCompensator_CHANNELS == expos_comp_type:
    compensator = cv.detail_ChannelsCompensator(expos_comp_nr_feeds)
    # compensator.setNrGainsFilteringIterations(expos_comp_nr_filtering)
elif cv.detail.ExposureCompensator_CHANNELS_BLOCKS == expos_comp_type:
    compensator=cv.detail_BlocksChannelsCompensator(expos_comp_block_size, expos_comp_block_size,expos_comp_nr_feeds)
    # compensator.setNrGainsFilteringIterations(expos_comp_nr_filtering)
else:
    compensator=cv.detail.ExposureCompensator_createDefault(expos_comp_type)
compensator.feed(corners=corners, images=images_warped, masks=masks_warped)
if seam_find_type == "no":
    seam_finder = cv.detail.SeamFinder_createDefault(cv.detail.SeamFinder_NO)
elif seam_find_type == "voronoi":
...
@@ -332,7 +348,7 @@ for idx,name in enumerate(img_names): # https://github.com/opencv/opencv/blob/ma
cameras[i].focal *= compose_work_aspect
cameras[i].ppx *= compose_work_aspect
cameras[i].ppy *= compose_work_aspect
sz = (full_img_sizes[i][0] * compose_scale,full_img_sizes[i][1]* compose_scale)
K = cameras[i].K().astype(np.float32)
roi = warper.warpRoi(sz, K, cameras[i].R);
corners.append(roi[0:2])
...
@@ -353,21 +369,20 @@ for idx,name in enumerate(img_names): # https://github.com/opencv/opencv/blob/ma
seam_mask = cv.resize(dilated_mask,(mask_warped.shape[1],mask_warped.shape[0]),0,0,cv.INTER_LINEAR_EXACT)
mask_warped = cv.bitwise_and(seam_mask,mask_warped)
if blender==None and not timelapse:
    blender = cv.detail.Blender_createDefault(cv.detail.Blender_NO)
    dst_sz = cv.detail.resultRoi(corners=corners,sizes=sizes)
    blend_width = np.sqrt(dst_sz[2]*dst_sz[3]) * blend_strength / 100
    if blend_width < 1:
        blender = cv.detail.Blender_createDefault(cv.detail.Blender_NO)
    elif blend_type == "multiband":
        blender = cv.detail_MultiBandBlender()
        blender.setNumBands((np.log(blend_width)/np.log(2.) - 1.).astype(np.int))
    elif blend_type == "feather":
        blender = cv.detail_FeatherBlender()
        blender.setSharpness(1./blend_width)
    blender.prepare(dst_sz)
elif timelapser==None and timelapse:
    timelapser = cv.detail.Timelapser_createDefault(timelapse_type)
    timelapser.initialize(corners, sizes)
if timelapse:
    matones=np.ones((image_warped_s.shape[0],image_warped_s.shape[1]), np.uint8)
...
@@ -379,9 +394,14 @@ for idx,name in enumerate(img_names): # https://github.com/opencv/opencv/blob/ma
        fixedFileName = img_names[idx][:pos_s + 1 ]+"fixed_" + img_names[idx][pos_s + 1: ]
        cv.imwrite(fixedFileName, timelapser.getDst())
    else:
        blender.feed(cv.UMat(image_warped_s), mask_warped, corners[idx])
if not timelapse:
    result=None
    result_mask=None
    result,result_mask = blender.blend(result,result_mask)
    cv.imwrite(result_name,result)
    zoomx = 600/result.shape[1]
    dst = cv.normalize(src=result,dst=None,alpha=255.,norm_type=cv.NORM_MINMAX,dtype=cv.CV_8U)
    dst = cv.resize(dst,dsize=None,fx=zoomx,fy=zoomx)
    cv.imshow(result_name,dst)
    cv.waitKey()