Structured light

Structured light is a method used in 3D scanning and computer vision that measures the shape and depth of an object by projecting a predefined pattern of light onto its surface. The pattern may consist of stripes, grids, or dots. The resulting distortions of the pattern reveal the object's three-dimensional geometry through triangulation, enabling the creation of a 3D model of the object. The scanning process relies on coding techniques for accurate, detailed measurement. The most widely used are binary, Gray, and phase-shifting coding, each offering distinct advantages and drawbacks.
Structured light technology is applied across diverse fields, including industrial quality control, where it is used for precision inspection and dimensional analysis, and cultural heritage preservation, where it assists in the documentation and restoration of archaeological artifacts. In medical imaging, it facilitates non-invasive diagnostics and detailed surface mapping, particularly in applications such as dental scanning and orthotics. Consumer electronics integrate structured light technology, with applications ranging from facial recognition systems in smartphones to motion-tracking devices like Kinect. Some implementations, especially in facial recognition, use infrared structured light to enhance accuracy under varying lighting conditions.
Process

Structured light measurement is a technique used to determine the three-dimensional coordinates of points on an object's surface. It involves a projector and a camera positioned at a fixed distance from each other—known as the baseline—and oriented at specific angles. The projector casts a structured light pattern, such as stripes, grids, or dots, onto the object's surface. The camera then captures the distortions in this pattern caused by the object's geometry, which reveal the surface shape. By analyzing these distortions, depth values can be calculated.[1][2]
The measurement process relies on triangulation, using the baseline distance and known angles to calculate depth from the pattern's displacement via trigonometric principles. When structured light hits a non-planar surface, the pattern distorts predictably, enabling a 3D reconstruction of the surface. Accurate reconstruction depends on system calibration—which establishes the precise geometric relationship between the projector and camera to prevent depth errors and geometric distortions from misalignment—and pattern analysis algorithms.[1][2][3]
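As an illustration of the triangulation step, the sketch below estimates depth from the displacement between a camera pixel and the projector column that illuminated it. It assumes a simplified rectified geometry with a shared focal length; the function name, the 0.12 m baseline, and the pixel values are illustrative assumptions, not taken from any particular system.

```python
import numpy as np

def triangulate_depth(x_cam, x_proj, baseline, focal_length):
    """Estimate depth from the disparity between a camera pixel and the
    projector column that illuminated it (simplified rectified geometry).

    x_cam        : camera pixel x-coordinate (pixels)
    x_proj       : projector column identified by the decoded pattern
    baseline     : camera-projector separation (metres)
    focal_length : shared focal length (pixels)
    """
    disparity = x_cam - x_proj                                # pattern displacement
    disparity = np.where(disparity == 0, np.nan, disparity)   # avoid divide-by-zero
    return focal_length * baseline / disparity                # triangulation formula

# Example: a pattern feature projected from column 310 appears at camera pixel 350.
print(triangulate_depth(350.0, 310.0, baseline=0.12, focal_length=800.0))  # ~2.4 m
```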
Types of coding
Structured light scanning relies on various coding techniques for 3D shape measurement. The most widely used ones are binary, Gray, and phase-shifting. Each method presents distinct advantages and drawbacks in terms of accuracy, computational complexity, sensitivity to noise, and suitability for dynamic objects. Binary and Gray coding offer reliable, fast scanning for static objects, while phase-shifting provides higher detail. Hybrid methods, such as binary defocusing and Fourier transform profilometry (FTP), balance speed and accuracy, enabling real-time scanning of moving 3D objects.[2][3][4]
Binary coding
Binary coding uses alternating black and white stripes, where each stripe represents a binary digit. This method is computationally efficient and widely employed due to its simplicity. However, it requires projecting multiple patterns in sequence to achieve high spatial resolution. While this approach is effective for scanning static objects, it is less suitable for dynamic scenes because of the need for multiple image captures. In addition, the accuracy of binary coding is constrained by the pixel resolution of the projector and camera, and it requires precise thresholding algorithms to distinguish the projected stripes reliably.[4]
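A minimal sketch of the idea, assuming an idealized 1024-column projector and perfect thresholding of the captured images; the names and constants are illustrative. Each of the ten patterns encodes one bit of every projector column, and the decoded bit sequence observed at a camera pixel identifies the column that lit it.

```python
import numpy as np

PROJ_WIDTH = 1024           # assumed projector resolution (illustrative)
N_BITS = 10                 # ceil(log2(1024)) patterns label every column

def binary_patterns(width=PROJ_WIDTH, n_bits=N_BITS):
    """Return n_bits stripe images (1 x width) encoding each column index in binary."""
    columns = np.arange(width)
    # Pattern k carries the k-th most significant bit of the column index.
    return [((columns >> (n_bits - 1 - k)) & 1).astype(np.uint8) for k in range(n_bits)]

def decode_column(bits):
    """Recover the projector column from the thresholded bit sequence."""
    value = 0
    for b in bits:
        value = (value << 1) | int(b)
    return value

patterns = binary_patterns()
# Bits observed at one camera pixel, after thresholding each captured image:
observed = [p[300] for p in patterns]
print(decode_column(observed))   # 300 -- the projector column that lit this pixel
```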
Gray coding
Gray coding, named after physicist Frank Gray, is a binary encoding scheme designed to minimize errors by ensuring that only one bit changes at a time between successive values. This reduces transition errors, making it particularly useful in applications such as analog-to-digital conversion and optical scanning.[5] In structured light scanning, where Gray codes are used for pattern projection, a drawback arises as more patterns are projected: the stripes become progressively narrower, which can make them harder for cameras to detect accurately, especially in noisy environments or with limited resolution. To mitigate this issue, advanced variations such as complementary Gray codes and phase-shifted Gray code patterns have been developed. These techniques introduce opposite or phase-aligned patterns to enhance robustness as well as to aid in error detection and correction in complex scanning environments.[2][6]
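The sketch below shows how reflected binary (Gray) code stripe patterns might be generated and decoded, under the same idealized assumptions as the binary example above. Because adjacent columns differ in a single bit, a one-bit decoding error at a stripe boundary shifts the recovered column by at most one.

```python
import numpy as np

def gray_code_patterns(width=1024, n_bits=10):
    """Stripe patterns based on the reflected binary (Gray) code."""
    columns = np.arange(width)
    gray = columns ^ (columns >> 1)              # binary -> Gray code
    return [((gray >> (n_bits - 1 - k)) & 1).astype(np.uint8) for k in range(n_bits)]

def gray_to_column(bits):
    """Convert the observed Gray-code bit sequence back to a column index."""
    value = 0
    for b in bits:
        value = (value << 1) | int(b)
    # Gray -> binary: XOR the value with all of its right shifts (prefix XOR)
    mask = value >> 1
    while mask:
        value ^= mask
        mask >>= 1
    return value

patterns = gray_code_patterns()
observed = [p[300] for p in patterns]
print(gray_to_column(observed))   # 300
```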
Phase-shifting
Phase-shifting techniques use sinusoidal wave patterns that gradually shift across multiple frames to measure depth. Unlike binary and Gray coding, which provide depth in discrete steps, phase-shifting allows smooth, continuous depth measurement, resulting in higher precision. The method has two main challenges. First, because the wave pattern repeats, depth ambiguities can occur, and extra reference data or additional processing is required to determine exact distances. Second, because multiple images are needed, the method is not ideal for moving objects, since motion between captures can distort the patterns and introduce artifacts in the measurement.[4]
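A minimal sketch of N-step phase retrieval, assuming ideal sinusoidal fringes and omitting the unwrapping step needed to resolve the depth ambiguities mentioned above; the synthetic data at the end only checks that the wrapped phase is recovered.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted sinusoidal fringe images.

    images : N frames, where frame k was captured with the fringe pattern
             shifted by 2*pi*k/N.  Returns phase in (-pi, pi]; the repeating
             fringes still have to be unwrapped to obtain absolute depth.
    """
    images = np.asarray(images, dtype=float)
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), images, axes=([0], [0]))
    den = np.tensordot(np.cos(shifts), images, axes=([0], [0]))
    return np.arctan2(num, den)

# Synthetic check: build 4 shifted fringe images from a known phase map.
true_phase = 0.9 * np.linspace(-np.pi, np.pi, 640).reshape(1, -1)
frames = [100 + 50 * np.cos(true_phase - 2 * np.pi * k / 4) for k in range(4)]
print(np.allclose(wrapped_phase(frames), true_phase))   # True
```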
Hybrid methods
To address the limitations of phase-shifting in dynamic environments, binary defocusing techniques have been developed, in which binary patterns are deliberately blurred to approximate sinusoidal waves. This approach integrates the efficiency of binary projection with the precision of phase-shifting, enabling high-speed 3D shape capture. Advances in high-speed digital light processing (DLP) projectors have further supported the adoption of these hybrid methods in applications requiring real-time scanning, including biomedical imaging and industrial inspection.[3]
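The following toy example illustrates the binary defocusing idea: a binary (square-wave) fringe, once blurred, approximates the sinusoidal fringe needed for phase-shifting. A Gaussian blur stands in for optical defocus here; the pattern size, period, and blur width are assumptions chosen for illustration.

```python
import numpy as np

def square_fringe(width=640, period=32):
    """Binary fringe: 1 where a sinusoid of the given period is non-negative, else 0."""
    x = np.arange(width)
    return (np.sin(2 * np.pi * x / period) >= 0).astype(float)

def defocus(pattern, sigma=6.0):
    """Stand-in for projector defocus: blur the binary pattern with a 1-D Gaussian."""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(pattern, kernel, mode='same')

fringe = square_fringe()      # sharp 0/1 stripes
smooth = defocus(fringe)      # close to a sinusoid, usable for phase-shifting
print(fringe[:6], np.round(smooth[:6], 2))
```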
Fourier transform profilometry (FTP) measures the shape of an object using a single image of a projected pattern. It analyzes how the pattern deforms over the surface, enabling fast, full-field 3D shape measurement, even for moving objects. The process involves applying a Fourier transform to convert the image into frequency data, filtering out unwanted components, and performing an inverse transform to extract depth information. Although FTP is often used alone, hybrid systems sometimes combine it with phase-shifting profilometry (PSP) or dual-frequency techniques to improve accuracy while maintaining high speed.[7][8]
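A rough single-row sketch of the FTP pipeline described above, assuming the fringe carrier frequency is known and using a simple rectangular band-pass window; practical systems filter in two dimensions and calibrate the carrier. The synthetic check at the end verifies that the surface-induced phase is recovered from one image row.

```python
import numpy as np

def ftp_phase(row, carrier_freq, bandwidth=5):
    """Extract the wrapped phase of a deformed fringe from a single image row.

    row          : 1-D intensity profile of the captured fringe image
    carrier_freq : index of the fringe carrier in the FFT of the row
    bandwidth    : half-width of the band-pass window around the carrier
    """
    spectrum = np.fft.fft(row - row.mean())
    # Keep only the positive-frequency lobe around the carrier (band-pass filter).
    filtered = np.zeros_like(spectrum)
    lo, hi = carrier_freq - bandwidth, carrier_freq + bandwidth + 1
    filtered[lo:hi] = spectrum[lo:hi]
    analytic = np.fft.ifft(filtered)          # complex signal whose angle is the phase
    return np.angle(analytic)

# Synthetic check: a 20-cycle carrier modulated by a smooth "surface" phase.
x = np.linspace(0, 1, 1024, endpoint=False)
surface_phase = 2.0 * np.sin(2 * np.pi * x)            # stands in for object depth
row = 120 + 60 * np.cos(2 * np.pi * 20 * x + surface_phase)
wrapped = ftp_phase(row, carrier_freq=20, bandwidth=10)
# Subtracting the known carrier phase leaves the surface-induced phase.
recovered = np.angle(np.exp(1j * (wrapped - 2 * np.pi * 20 * x)))
print(np.allclose(recovered, surface_phase, atol=0.05))  # True
```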
See also
- Depth map
- Dual photography
- Laser Dynamic Range Imager
- Lidar
- Light stage
- Range imaging
- Stereoscopy
- Structured Illumination Microscopy (SIM)
- Structured-light 3D scanner – Sensor that can create 3D scans using visible light
- Time-of-flight camera
References
1. Geng, Jason (2011). "Structured-light 3D surface imaging: a tutorial". Advances in Optics and Photonics. 3 (2): 128–160. Bibcode:2011AdOP....3..128G. doi:10.1364/AOP.3.000128.
2. Lu, Xingyu (2024). "SGE: structured light system based on Gray code with an event camera". Optics Express. 32 (26): 46044–46057. arXiv:2403.07326. Bibcode:2024OExpr..3246044L. doi:10.1364/OE.538396.
3. Zhang, Song (2018). "High-speed 3D shape measurement with structured light methods: A review". Optics and Lasers in Engineering. 106: 119–131. Bibcode:2018OptLE.106..119Z. doi:10.1016/j.optlaseng.2018.02.017.
4. Salvi, Joaquim; Pagès, Jordi; Batlle, Joan (2004). "Pattern codification strategies in structured light systems". Pattern Recognition. 37 (4): 827–849. Bibcode:2004PatRe..37..827S. doi:10.1016/j.patcog.2003.10.002.
5. Doran, Robert W. (2007). "The Gray Code". Journal of Universal Computer Science. 13 (11): 1573–1597. doi:10.3217/jucs-013-11-1573.
6. Kim, Daesik; Ryu, Moonwook; Lee, Sukhan (2008). "Antipodal gray codes for structured light". 2008 IEEE International Conference on Robotics and Automation. Pasadena, CA. pp. 3016–3021. doi:10.1109/ROBOT.2008.4543668.
7. Rosenberg, Ori Izhak; Abookasis, David (2020). "Hybrid method combining orthogonal projection Fourier transform profilometry and laser speckle imaging for 3D visualization of flow profile". Journal of Modern Optics. 67 (13): 1197–1209. Bibcode:2020JMOp...67.1197R. doi:10.1080/09500340.2020.1823503.
8. Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc (2010). "Fourier transform profilometry (FTP) using an innovative band-pass filter for accurate 3-D surface reconstruction". Optics and Lasers in Engineering. 48 (2): 218–225. Bibcode:2010OptLE..48..182C. doi:10.1016/j.optlaseng.2009.04.004.
External links
- Projector-Camera Calibration Toolbox
- Tutorial on Coded Light Projection Techniques
- Structured light using pseudorandom codes
- High-accuracy stereo depth maps using structured light
- A comparative survey on invisible structured light
- A Real-Time Laser Range Finding Vision System
- Dual-frequency Pattern Scheme for High-speed 3-D Shape Measurement
- High-Contrast Color-Stripe Pattern for Rapid Structured-Light Range Imaging