How to replace a contour (rectangle) in an image with a new image using Python?


Problem Description


I'm currently using the OpenCV (cv2) and Python Pillow image libraries to try to take an image of an arbitrary phone and replace the screen with a new image. I've gotten to the point where I can take an image, identify the phone's screen, and get all the coordinates of its corners, but I'm having a really hard time replacing that area of the image with a new image.

The code I have so far:

import cv2
from PIL import Image

image = cv2.imread('mockup.png')
edged_image = cv2.Canny(image, 30, 200)

(contours, _) = cv2.findContours(edged_image.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]
screenCnt = None

for contour in contours:
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)

    # if our approximated contour has four points, then
    # we can assume that we have found our screen
    if len(approx) == 4:
        screenCnt = approx
        break

cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 3)
cv2.imshow("Screen Location", image)
cv2.waitKey(0)
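A portability note on the findContours call above: the two-value unpacking matches OpenCV 2.x and 4.x, but OpenCV 3.x returns three values. A small version-agnostic sketch, if the code needs to run on any of them:

found = cv2.findContours(edged_image.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# OpenCV 2.x/4.x return (contours, hierarchy); 3.x returns (image, contours, hierarchy)
contours = found[0] if len(found) == 2 else found[1]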

Running that code gives me an image that looks like this:

I can also get the coordinates of the screen corners using this line of code:

screenCoords = [x[0].tolist() for x in screenCnt]
# [[398, 139], [245, 258], [474, 487], [628, 358]]

However, I can't figure out for the life of me how to take a new image, scale it into the shape of the coordinate space I've found, and overlay the image on top.

My guess is that I can do this using an image transform in Pillow, with this function that I adapted from this Stack Overflow question:

import numpy

def find_transform_coefficients(pa, pb):
    """Return the 8 coefficients of the perspective transform mapping pa to pb.

    args:
        pa -> list of four (x, y) start coordinates
        pb -> list of four (x, y) end coordinates
    """
    matrix = []
    for p1, p2 in zip(pa, pb):
        matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0]*p1[0], -p2[0]*p1[1]])
        matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1]*p1[0], -p2[1]*p1[1]])

    A = numpy.array(matrix, dtype=numpy.float64)
    B = numpy.array(pb).reshape(8)

    # solve the normal equations for the least-squares solution
    res = numpy.linalg.solve(A.T @ A, A.T @ B)
    return res.reshape(8)
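For reference, the eight coefficients this returns are the form Pillow's Image.transform expects for a perspective warp. A minimal usage sketch, assuming the screen-corner list is ordered to correspond with the overlay's top-left, top-right, bottom-right, bottom-left corners:

from PIL import Image

overlay = Image.open('123.png')        # the replacement screen image
background = Image.open('mockup.png')  # the phone mockup

# Pillow's PERSPECTIVE transform maps output coordinates back to input
# coordinates, so pass the destination quad first and the source corners second.
screen_corners = [(398, 139), (245, 258), (474, 487), (628, 358)]
overlay_corners = [(0, 0), (overlay.width, 0),
                   (overlay.width, overlay.height), (0, overlay.height)]

coeffs = find_transform_coefficients(screen_corners, overlay_corners)
warped = overlay.transform(background.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)

The result is a background-sized image with the overlay warped into the screen quad; pasting it onto the mockup still needs a mask so the black surround does not overwrite the phone.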

However I'm in over my head a bit, and I can't get the details right. Could someone give me some help?

EDIT

Ok now that I'm using the getPerspectiveTransform and warpPerspective functions, I've got the following additional code:

screenCoords = numpy.asarray(
    [numpy.asarray(x[0], dtype=numpy.float32) for x in screenCnt],
    dtype=numpy.float32
)

overlay_image = cv2.imread('123.png')
# the input quad should span the overlay itself, so read the overlay's dimensions
overlay_height, overlay_width = overlay_image.shape[:2]

input_coordinates = numpy.asarray(
    [
        numpy.asarray([0, 0], dtype=numpy.float32),
        numpy.asarray([overlay_width, 0], dtype=numpy.float32),
        numpy.asarray([overlay_width, overlay_height], dtype=numpy.float32),
        numpy.asarray([0, overlay_height], dtype=numpy.float32)
    ],
    dtype=numpy.float32,
)

transformation_matrix = cv2.getPerspectiveTransform(
    input_coordinates,
    screenCoords,
)

# warp onto a canvas the size of the background (mockup) image
background_height, background_width = image.shape[:2]
warped_image = cv2.warpPerspective(
    overlay_image,
    transformation_matrix,
    (background_width, background_height),
)
cv2.imshow("Overlay image", warped_image)
cv2.waitKey(0)

The image is getting rotated and skewed properly (I think), but it's not the same size as the screen. It's "shorter":

and if I use a different image that is very tall vertically, I end up with something that is too "long":

Do I need to apply an additional transformation to scale the image? I'm not sure what's going on here; I thought the perspective transform would make the image automatically scale out to the provided coordinates.
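One thing the perspective transform will not do is fix point ordering: getPerspectiveTransform maps the i-th source point to the i-th destination point, and approxPolyDP returns the quad's corners in an arbitrary order. If the detected corners do not line up with input_coordinates' top-left, top-right, bottom-right, bottom-left order, the warp comes out squashed or stretched. A common fix, sketched here under that assumption (this is not the poster's code):

def order_points(pts):
    # sort four (x, y) points into top-left, top-right, bottom-right,
    # bottom-left order using the coordinate sums and differences
    pts = numpy.asarray(pts, dtype=numpy.float32)
    s = pts.sum(axis=1)                   # smallest at top-left, largest at bottom-right
    d = numpy.diff(pts, axis=1).ravel()   # y - x: smallest at top-right, largest at bottom-left
    return numpy.float32([pts[numpy.argmin(s)], pts[numpy.argmin(d)],
                          pts[numpy.argmax(s)], pts[numpy.argmax(d)]])

transformation_matrix = cv2.getPerspectiveTransform(
    input_coordinates,
    order_points(screenCoords),
)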

Solution

I downloaded your image data and reproduced the problem on my local machine to work out a solution. I also downloaded Lenna.png to fit inside the phone screen.

import cv2
import numpy as np

# Template image of iPhone
img1 = cv2.imread("/Users/anmoluppal/Downloads/46F1U.jpg")
# Sample image to be used for fitting into white cavity
img2 = cv2.imread("/Users/anmoluppal/Downloads/Lenna.png")

rows, cols, ch = img1.shape

# Hard-coded the 3 corner points of the white cavity labelled with the green rect.
pts1 = np.float32([[201, 561], [455, 279], [742, 985]])
# Hard-coded the same points on the reference image to be fitted.
pts2 = np.float32([[0, 0], [512, 0], [0, 512]])

# Get the affine transformation from the sample image to the template.
M = cv2.getAffineTransform(pts2, pts1)

# Apply the transformation; note the (cols, rows) passed, as these define
# the final dimensions of the output after the transformation.
dst = cv2.warpAffine(img2, M, (cols, rows))

# Just for debugging the output.
final = cv2.addWeighted(dst, 0.5, img1, 0.5, 1)
cv2.imwrite("./garbage.png", final)
