Can’t reproduce the same results as on the web interface with code

#4
by gfdadfas - opened

I’m currently using the eloftr model, but I can’t reproduce the same results as on the web interface. Could you explain how the parameter settings on the web interface correspond to the parameter settings in the code?

Here is the result on the web interface:
image (27).webp
The result using code is quite different:
032_to_033_matching.png

Zhejiang University org

Hi @gfdadfas ,
I assume the web interface adds RANSAC on top of the model's output, which is not the case for the model provided here, as RANSAC is a separate post-processing step that is independent of the model. You should get similar results by running RANSAC on top of the model's matches.
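For reference, a minimal sketch of what that could look like (the helper name ransac_filter is illustrative, and kpts0 / kpts1 are assumed to be the (N, 2) NumPy arrays of matched keypoints returned by the model's post-processing):

import cv2

def ransac_filter(kpts0, kpts1, reproj_thresh=4.0):
    # Estimate a homography with RANSAC and keep only the inlier matches
    H, mask = cv2.findHomography(kpts0, kpts1, method=cv2.RANSAC,
                                 ransacReprojThreshold=reproj_thresh)
    if mask is None:
        return kpts0[:0], kpts1[:0]
    inliers = mask.ravel().astype(bool)
    return kpts0[inliers], kpts1[inliers]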

Hi @stevenbucaille ,
The model you provide is responsible for extracting keypoints, while RANSAC is only used to estimate the homography matrix between the two images. What I mean is that the keypoints extracted on the web interface are not consistent with those extracted in the code.

Zhejiang University org

Could you provide a reproducible example with the images used?

Hi, I'm the author of matchanything. I'd like to clarify that our web demo uses RANSAC to filter out bad matches, not only for estimating the homography matrix.

Hey, here is my code to filter out the bad matches with RANSAC, and the results:

# (Excerpt: section 1, which loads the model, processor, the two input images and the threshold, is not shown.)
import torch
import numpy as np
import cv2

# --- 2. Model inference ---
print("Running model inference to find matching points...")
images = [image1, image2]
inputs = processor(images, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# --- 3. Post-processing ---
image_sizes = [[(image.height, image.width) for image in images]]
processed_outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=threshold)

keypoints0 = processed_outputs[0].get("keypoints0", [])
keypoints1 = processed_outputs[0].get("keypoints1", [])

# Convert to NumPy arrays for easier processing
src_points_raw = np.array([kp.tolist() for kp in keypoints0], dtype=np.float32)
dst_points_raw = np.array([kp.tolist() for kp in keypoints1], dtype=np.float32)

print(f"\nThe model initially found {len(src_points_raw)} matched pairs (threshold > {threshold}).")

# --- 4. Filter matches with RANSAC ---
src_points_ransac = np.empty((0, 2), dtype=np.float32)
dst_points_ransac = np.empty((0, 2), dtype=np.float32)

if len(src_points_raw) >= 4:  # RANSAC needs at least 4 point pairs to compute a homography
    print("Filtering outliers with RANSAC (using MAGSAC)...")
    print("RANSAC parameters: ReprojThreshold=4.0, Confidence=0.9999, MaxIters=10000")

    # Use the more advanced RANSAC variant (MAGSAC) with the parameters above
    H, mask = cv2.findHomography(
        src_points_raw,
        dst_points_raw,
        method=cv2.USAC_MAGSAC,
        ransacReprojThreshold=4.0,
        maxIters=10000,
        confidence=0.9999
    )
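(The snippet stops right after the homography estimation; the filtering step that would typically follow, applying the returned inlier mask to the raw matches to fill src_points_ransac and dst_points_ransac, is sketched below under that assumption and is not the exact code used here.)

# Keep only the matches that the MAGSAC mask marks as inliers
if H is not None and mask is not None:
    inlier_idx = mask.ravel().astype(bool)
    src_points_ransac = src_points_raw[inlier_idx]
    dst_points_ransac = dst_points_raw[inlier_idx]
    print(f"RANSAC kept {len(src_points_ransac)} of {len(src_points_raw)} matches as inliers.")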

Result from the code:

image.png

while the result on the web interface is quite different:

image.png

image.png

Here are my input images:

010.png

011.png
What's wrong?
