The proposed procedure consists of a closed-form solution followed by a nonlinear refinement based on the maximum-likelihood criterion. The main contribution of this paper is that it removes the need for known occluding bodies in the scene when calibrating a camera-and-projector structured light system, a problem that has not been extensively studied. Computer simulations and real-data experiments were carried out to validate the method. A structured light vision system using pattern projection is useful for robust reconstruction of three-dimensional objects. Abstract: In this book, the design of two new planar patterns for calibrating camera intrinsic parameters is addressed, and a line-based method for distortion correction is proposed. The vision system consists of a light-pattern projector and a camera.
However, some relations between the projected pattern and the reflected one must first be determined. We show that even in this case some very rich non-metric reconstructions of the environment can nonetheless be obtained. Moreover, frequent reconfiguration of the measurement system may be needed depending on the size of the measured object, making self-recalibration of the extrinsic parameters indispensable. This is because the model is implemented with fewer mathematical terms than the traditional models. Given an image with two rail-profile stripes, the first step uses a parallelism constraint to establish an auxiliary plane whose normal vector is parallel to the rail's longitudinal axis.
The distance between the robotic end-effector and a surface can be calculated from the triangular geometry formed by the camera, the laser, and the projected laser dot. The calibration method is evaluated on several sets of synthetic and real image data. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated. Testing with synthetically generated profile maps shows that if the geometry of the object is appropriate and the registration parameters and the intrinsic parameters of the system are known exactly, then a calibration accuracy of 0. Such a self-calibration process is relevant for unmanned vehicles, robots working in remote places, and so forth. In practice, automatic calibration methods are more convenient. The 3D scene can then be reconstructed via this transformation.
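The camera–laser triangulation step described above can be sketched for the simplest geometry: a laser beam fired parallel to the camera's optical axis at a known baseline. The function name and the parallel-beam setup below are illustrative assumptions, not the exact rig from the original work:

```python
def depth_from_laser_dot(baseline, focal_px, dot_offset_px):
    """Depth of a projected laser dot under a simplified parallel-beam
    triangulation geometry (a hypothetical setup for illustration).

    The laser emitter is offset from the camera center by `baseline`
    (in metres) along the image x-axis and fires parallel to the
    optical axis, so a dot at depth z appears at pixel offset
    u = f * b / z, hence z = f * b / u.
    """
    if dot_offset_px <= 0:
        raise ValueError("dot offset must be positive for this geometry")
    return focal_px * baseline / dot_offset_px

# Example: a 10 cm baseline, an 800 px focal length, and a dot observed
# 40 px off-center give a depth of 800 * 0.1 / 40 = 2.0 m.
```

Real systems typically mount the laser at an angle to the optical axis, which adds a tangent term to the geometry, but the disparity-to-depth principle is the same.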
We then show that if we choose only four arbitrary correspondences, then an affine representation of the environment can be constructed. Abstract: Color-encoded structured lighting systems are widely used for three-dimensional data acquisition based on machine vision. The results demonstrate its effectiveness and superiority for dynamic measurement of the rail profile. As in other light-striping systems, the correspondence problem is solved by projecting a light plane onto an object inside a frame. Radial lens distortion is modeled. The experiments conducted suggest that this novel calibration method is robust, economical, and applicable to many dense shape-reconstruction tasks. It mainly refers to the determination of the camera intrinsic parameters.
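The radial lens distortion mentioned above is commonly modeled with a low-order polynomial in the squared radius. The sketch below uses the standard two-coefficient form on normalized image coordinates; the coefficient values in the example are made up for illustration:

```python
def distort_radial(x, y, k1, k2):
    """Apply the standard polynomial radial distortion model to
    normalized (dimensionless) image coordinates:

        x_d = x * (1 + k1*r^2 + k2*r^4),   r^2 = x^2 + y^2

    and likewise for y. Positive k1 gives pincushion distortion,
    negative k1 gives barrel distortion.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

A line-based correction method, as proposed in the text, would estimate k1 and k2 such that images of known straight lines become straight after inverting this mapping.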
This technique generates the surface model via Bezier networks based on surface points, which are retrieved via laser line scanning. The method can be used for general calibration of camera intrinsics. We address in this paper the problem of self-calibration and metric reconstruction, up to a scale factor, from one unknown motion of an uncalibrated stereo rig. Furthermore, the computed 3D point data do not require a registration process, because the data are measured directly in unified world coordinates. The algebraic analysis made possible by such a linear formulation allows investigation not only of the well-known case of general screw motions but also of such singular motions as pure translations, pure rotations, and planar motions. Although the designation encompasses various system configurations, the structured light pattern, if any, and the sensor positioning are key elements.
The surface measurement is carried out by a Bezier network using line positions, which avoids external measurement errors. However, the system should be calibrated carefully before any vision task. We conduct a large number of experiments that validate the quality of the method by comparing it with existing ones. Chapter 7 Conclusions and Future Expectation. One of the major tasks in using such a system is the calibration of the sensing system.
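The Bezier-network surface model referred to above evaluates surface points from a grid of control points. A minimal tensor-product Bezier patch evaluator, written as a generic sketch rather than the authors' specific network, looks like:

```python
from math import comb

def bezier_surface_point(ctrl, u, v):
    """Evaluate a tensor-product Bezier patch at (u, v) in [0, 1]^2.

    `ctrl` is an (n+1) x (m+1) grid of 3D control points; the surface
    point is the Bernstein-weighted sum of all control points.
    """
    n, m = len(ctrl) - 1, len(ctrl[0]) - 1

    def bern(i, deg, t):
        # Bernstein basis polynomial B_{i,deg}(t)
        return comb(deg, i) * t**i * (1 - t) ** (deg - i)

    point = [0.0, 0.0, 0.0]
    for i in range(n + 1):
        for j in range(m + 1):
            w = bern(i, n, u) * bern(j, m, v)
            for k in range(3):
                point[k] += w * ctrl[i][j][k]
    return tuple(point)
```

For a bilinear (2x2 control point) patch, the center parameter (0.5, 0.5) returns the average of the four corners, which is a quick sanity check for an implementation like this.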
Our method can be used in a dynamic as well as static environment. In: Automatic Calibration and Reconstruction for Active Vision Systems. A critical review of the state of the art is given and it is shown that the two-stage technique has advantages in terms of accuracy, speed and versatility. It is also possible to adjust several calibrations at the same time. Also, the surface model reduces operations and memory size to generate the object surface.
If two epipolar transformations, arising from different camera displacements, are available, then the compatible camera calibrations are parameterized by an algebraic curve of genus four. Calibration of such a system is a laborious and tedious task. Then the coding and decoding strategies for the light pattern of the projector are discussed. Also, the 3D Euclidean reconstruction by using the image-to-world transformation is investigated. An application is described for active vision, where a Euclidean reconstruction is obtained during normal operation with an initially uncalibrated camera.
The phase shift between the board wavefront at each position and the reference-position wavefront is evaluated. Simulation and real experiments have been conducted using our system, and the reconstructed results turn out to be satisfactory. Real data have been used to test the proposed method, and the results obtained are quite good. Chapter 4 Homography-based Dynamic Calibration. Specifically, we show that if we choose five arbitrary correspondences, then a projective representation of the environment, unique up to an arbitrary projective transformation, can be constructed; it is relative to the five points in three-dimensional space that gave rise to the correspondences.
Results are given for images of real scenes. We will show that these approaches satisfy the above requirements. The problem then becomes one of estimating the unknowns such that the discrepancy from the epipolar constraint, measured as the sum of squared distances between points and their corresponding epipolar lines, is minimized. A method is described to recover the three-dimensional affine structure of a scene consisting of at least five points identified in two perspective views with a relative object-camera translation in between. It can be worth consulting for practitioners as well as for students studying active or catadioptric vision systems. This paper presents a novel method for self-recalibration of such a vision system.
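The sum-of-squared-distances criterion described above can be written down directly once an estimate of the fundamental matrix is available. The sketch below is generic (the example F corresponds to a pure horizontal translation with identity intrinsics, chosen for illustration) and computes the symmetric point-to-epipolar-line residual:

```python
import numpy as np

def epipolar_residual(F, pts1, pts2):
    """Sum of squared distances from each point to its corresponding
    epipolar line, symmetrized over both images.

    F is a 3x3 fundamental matrix; pts1 and pts2 are (N, 2) arrays of
    matched pixel coordinates.
    """
    n = len(pts1)
    h1 = np.hstack([pts1, np.ones((n, 1))])  # homogeneous coordinates
    h2 = np.hstack([pts2, np.ones((n, 1))])
    total = 0.0
    for x1, x2 in zip(h1, h2):
        l2 = F @ x1    # epipolar line of x1 in image 2
        l1 = F.T @ x2  # epipolar line of x2 in image 1
        # squared point-to-line distance: (x . l)^2 / (a^2 + b^2)
        total += (x2 @ l2) ** 2 / (l2[0] ** 2 + l2[1] ** 2)
        total += (x1 @ l1) ** 2 / (l1[0] ** 2 + l1[1] ** 2)
    return total
```

A self-calibration method of the kind discussed in the text would feed this residual to a nonlinear optimizer over the unknown parameters; here it is shown only as the cost being minimized.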