
OpenCV Essential Functions: Template Matching, Image Moments, and Lookup Table Transforms


Template Matching Functions

Template matching techniques enable locating specific regions within larger images. OpenCV provides several functions for this purpose.

Function        Purpose
getRectSubPix   Extract rectangular regions with sub-pixel accuracy
matchTemplate   Search for template patterns in images
matchShapes     Compare similarity between two contours

Extracting Rectangular Regions (getRectSubPix)

This function retrieves a rectangular patch from an image at sub-pixel precision, which is valuable for high-accuracy image analysis tasks.

import cv2
import numpy as np

# Load source image
img = cv2.imread('source_image.jpg')

# Specify extraction parameters
extraction_center = (120, 150)
patch_dimensions = (80, 80)

# Extract the region
roi = cv2.getRectSubPix(img, patch_dimensions, extraction_center)

cv2.imshow('Source', img)
cv2.imshow('Extracted Region', roi)
cv2.waitKey(0)
cv2.destroyAllWindows()

Template Search and Matching (matchTemplate)

The matchTemplate function slides the template across the input image and computes similarity metrics at each position, producing a result matrix.

# Load source and template in grayscale
source = cv2.imread('photograph.jpg', 0)
template = cv2.imread('target_pattern.jpg', 0)

# Perform template matching using normalized correlation
match_result = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)

# Locate best match
_, confidence, _, best_position = cv2.minMaxLoc(match_result)

# Define bounding box dimensions
h, w = template.shape
top_left = best_position
bottom_right = (top_left[0] + w, top_left[1] + h)

# Annotate the match
cv2.rectangle(source, top_left, bottom_right, 255, 2)

cv2.imshow('Detection Result', source)
cv2.waitKey(0)
cv2.destroyAllWindows()

Shape Similarity Comparison (matchShapes)

This function quantifies how similar two contours are using Hu moments, with lower values indicating greater similarity.

# Load two images in grayscale
first_img = cv2.imread('shape_a.jpg', 0)
second_img = cv2.imread('shape_b.jpg', 0)

# Binarize images
_, binary_a = cv2.threshold(first_img, 127, 255, cv2.THRESH_BINARY)
_, binary_b = cv2.threshold(second_img, 127, 255, cv2.THRESH_BINARY)

# Find contours
contours_a, _ = cv2.findContours(binary_a, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours_b, _ = cv2.findContours(binary_b, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Calculate similarity metric
if contours_a and contours_b:
    similarity_score = cv2.matchShapes(contours_a[0], contours_b[0], cv2.CONTOURS_MATCH_I1, 0.0)
    print(f'Similarity score: {similarity_score}')

Function Summary:

  • getRectSubPix: Extracts precise rectangular patches for detailed analysis
  • matchTemplate: Generates a similarity map for locating matching regions
  • matchShapes: Returns a non-negative dissimilarity score, where 0 represents identical shapes and larger values indicate greater difference

Image Moments

Image moments mathematically describe the spatial distribution and geometric properties of shapes within an image.

Function    Description
moments     Compute spatial moments of an image or contour
HuMoments   Calculate seven invariant Hu moments

Computing Spatial Moments (moments)

import cv2
import numpy as np

# Load and binarize image
grayscale = cv2.imread('illustration.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(grayscale, 127, 255, cv2.THRESH_BINARY)

# Extract contours
contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Compute moments for the first contour
if contours:
    spatial_moments = cv2.moments(contours[0])
    print("Spatial moments:", spatial_moments)

Computing Hu Invariant Moments (HuMoments)

if contours:
    spatial_moments = cv2.moments(contours[0])
    hu_inv = cv2.HuMoments(spatial_moments).flatten()
    print("Hu invariant moments:", hu_inv)

Function Summary:

  • moments: Returns a dictionary of spatial moments, central moments, and normalized central moments up to the third order; features such as area (m00) and the centroid (m10/m00, m01/m00) are derived from these values
  • HuMoments: Produces seven moments that remain unchanged under translation, scale, and rotation (the seventh changes sign under reflection)

Complete Example

import cv2
import numpy as np

# Load and process image
grayscale = cv2.imread('diagram.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(grayscale, 127, 255, cv2.THRESH_BINARY)

# Extract contours
contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

if contours:
    target_contour = contours[0]
    
    # Calculate moments
    m = cv2.moments(target_contour)
    print("Moments:", m)
    
    # Derive centroid coordinates
    cx = int(m["m10"] / m["m00"])
    cy = int(m["m01"] / m["m00"])
    print("Centroid location:", (cx, cy))
    
    # Compute Hu invariant moments
    hu = cv2.HuMoments(m).flatten()
    print("Hu Moments:", hu)
    
    # Visualize results
    color_output = cv2.cvtColor(grayscale, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(color_output, [target_contour], -1, (0, 255, 0), 2)
    cv2.circle(color_output, (cx, cy), 5, (0, 0, 255), -1)
    
    cv2.imshow("Contour Analysis", color_output)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

Lookup Table Transformation

The Lookup Table (LUT) operation provides an efficient mechanism for pixel value remapping, enabling complex non-linear transformations without per-pixel computation.

Function   Purpose
LUT        Transform pixel values using a precomputed lookup table

Basic LUT Transformation

import cv2
import numpy as np

# Load grayscale image
img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)

# Create lookup table for square-root intensity mapping
lut = np.array([int(np.sqrt(i) * 16) for i in range(256)], dtype=np.uint8)

# Apply transformation
output = cv2.LUT(img, lut)

cv2.imshow('Original', img)
cv2.imshow('Transformed', output)
cv2.waitKey(0)
cv2.destroyAllWindows()

Practical Applications

LUT transformations are widely used for:

  1. Gamma correction: Adjust image brightness and contrast
  2. Color mapping: Convert grayscale to pseudo-color representations
  3. Non-linear enhancement: Apply logarithmic or exponential tone adjustments

Gamma Correction Implementation

# Define gamma value (values > 1 darken, values < 1 brighten)
gamma_value = 1.8

# Generate gamma correction lookup table
gamma_lut = np.array([int(((i / 255.0) ** gamma_value) * 255) for i in range(256)], dtype=np.uint8)

# Apply gamma correction
corrected = cv2.LUT(img, gamma_lut)

cv2.imshow('Gamma Adjusted', corrected)
cv2.waitKey(0)
cv2.destroyAllWindows()

The LUT approach transforms each input pixel value to its corresponding output value by indexing into the prebuilt table, making it significantly faster than computing transformations on-the-fly for each pixel.
