- Mechanical Properties and Imaging
- Design of Phase Control Devices
- Geometry Measurement and Inspection
- Fringe-Projection 3D Surface Measurement: Joint Session with Conference 6375A
- Pattern Recognition, Segmentation, and Object Modeling
- Novel Optomechatronic Applications
- Vision-Based Tracking
Mechanical Properties and Imaging
Hardness measurements of metals with the complex refractive index
In this work it is shown that the hardness of a metal can be related to its complex refractive index. For a given metal, surface hardness is a function of the molecular structure, which can be studied through the refraction of optical radiation. In general, the refractive index of a conductor is complex, so reflected light acquires elliptical polarization. The polarization ellipse is observed to rotate for different hardness values of a given metal. Different hardnesses of two types of steel were measured, together with the corresponding ellipse rotations. The measurements show a direct relation between hardness and refractive index.
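The rotation of the polarization ellipse arises because the complex Fresnel reflection coefficients for s- and p-polarized light differ in both amplitude and phase. A minimal sketch of that relationship, using an illustrative complex index rather than the steel values measured in the paper:

```python
import cmath
import math

def fresnel_coefficients(n_complex, theta_i):
    """Fresnel amplitude reflection coefficients (r_s, r_p) for a conductor
    with complex refractive index n_complex, incidence angle theta_i (rad),
    assuming the light arrives from vacuum (n = 1)."""
    cos_i = math.cos(theta_i)
    sin_i = math.sin(theta_i)
    # Snell's law with a complex index makes cos(theta_t) complex.
    cos_t = cmath.sqrt(1 - (sin_i / n_complex) ** 2)
    r_s = (cos_i - n_complex * cos_t) / (cos_i + n_complex * cos_t)
    r_p = (n_complex * cos_i - cos_t) / (n_complex * cos_i + cos_t)
    return r_s, r_p

def polarization_phase_shift(n_complex, theta_i):
    """Relative phase between reflected p and s components; a nonzero
    value turns incident linear polarization into an ellipse."""
    r_s, r_p = fresnel_coefficients(n_complex, theta_i)
    return cmath.phase(r_p / r_s)
```

Because the phase shift depends on the complex index, a change in the index that tracks hardness shows up as a measurable change in the reflected ellipse.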
An active imaging system using a deformable mirror and its application to super resolution
In this paper, we propose an active vision system with a variable PSF. The system consists of a deformable mirror, an aperture stop, and four lenses. The deformable mirror is placed at the pupil plane, and its effective size is determined by the aperture stop at the position conjugate to the mirror. We use this system to enhance image resolution: four different mirror surface shapes produce four regularly shifted images, and a super-resolution algorithm synthesizes a higher-resolution image from the low-resolution observations. It is demonstrated that our method can enhance image resolution that is otherwise limited by the CCD cell size. We compare the result with the real image, and a discussion of the algorithmic parameters follows.
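The idea of fusing four regularly shifted low-resolution frames can be sketched in its simplest, naive form as interlacing onto a twice-as-dense grid. The half-pixel shift pattern below is an assumption for illustration; the paper's actual super-resolution algorithm handles registration and deconvolution more carefully:

```python
import numpy as np

def shift_and_add(imgs):
    """Naive super-resolution by interlacing: imgs is a list of four
    low-resolution frames captured with assumed (0,0), (0,1/2), (1/2,0),
    (1/2,1/2) pixel shifts (realized here by the deformable mirror).
    Each frame fills one phase of a twice-as-dense output grid."""
    h, w = imgs[0].shape
    hi = np.zeros((2 * h, 2 * w), dtype=float)
    hi[0::2, 0::2] = imgs[0]   # (0, 0) shift
    hi[0::2, 1::2] = imgs[1]   # (0, 1/2) shift
    hi[1::2, 0::2] = imgs[2]   # (1/2, 0) shift
    hi[1::2, 1::2] = imgs[3]   # (1/2, 1/2) shift
    return hi
```

This shows why exactly regular shifts matter: each observation samples a distinct phase of the finer grid, so no pixel of the high-resolution estimate is left unconstrained.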
Design of Phase Control Devices
Design of liquid crystal Fresnel lens by uneven electric field
We propose a liquid crystal Fresnel lens that consists of a liquid crystal layer sandwiched between two substrates, on one of which a Fresnel lens-like structure is formed. Simulations of the liquid crystal orientations show that the retardation distribution of such a lens can mimic that of a spherical lens. The radius of the equivalent spherical lens can be varied by the bias voltage across the liquid crystal layer. This may open the way to a thin variable-focus lens. As preparation for building such a device, we have conducted preliminary experiments involving modulated polymerization of a polymer-network liquid crystal material by UV exposure through a phase mask. A liquid crystal phase grating has been created by exposure through a prism sheet.
Phase retardation symmetric design of a refractive and diffractive element for linearizing sinusoidal scanning
We propose a design of diffractive and refractive optical corrective elements with zooming capability for linearizing the angular scan of a resonant mirror scanner. Considering the symmetry requirements of the refractive element, a graded-index element and its binary-amplitude version are designed based on phase lag (beam retardation due to propagation through an inhomogeneous medium). The design takes the beam diameter into consideration, making it robust against beam fanning.
Geometry Measurement and Inspection
Simultaneous measurement of film surface topography and thickness variation using white-light interferometry
In vertical scanning white-light interferometry, two peaks appear in the interference waveform if a transparent film exists on the surface to be measured. We have developed an algorithm that is able to detect the position of these peaks quickly and accurately, and have put into practical use a film profiler that is able to simultaneously measure the profiles of both the front and back surfaces and the thickness distribution of a transparent film. This technique is applicable to transparent films with an optical thickness of approximately 1 μm or greater, and it is used effectively in the semiconductor and LCD manufacturing processes.
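The two-peak detection step can be sketched as follows. The rectify-and-smooth envelope estimate is an illustrative assumption; the practical profiler algorithm described in the paper is more elaborate:

```python
import numpy as np

def find_two_envelope_peaks(signal, min_separation):
    """Locate the two strongest coherence-envelope peaks in a 1-D
    vertical-scan white-light interferogram (intensity vs. scanner
    position): one peak per surface of a transparent film.  Sketch only:
    the envelope is estimated by rectifying the AC part of the signal
    and smoothing with a small moving average."""
    ac = np.abs(signal - np.mean(signal))
    kernel = np.ones(5) / 5.0
    env = np.convolve(ac, kernel, mode="same")
    first = int(np.argmax(env))
    # Suppress a neighborhood around the first peak, then find the second.
    masked = env.copy()
    lo = max(0, first - min_separation)
    hi = min(len(env), first + min_separation)
    masked[lo:hi] = 0.0
    second = int(np.argmax(masked))
    return sorted((first, second))
```

The separation between the two peak positions corresponds to the optical thickness of the film, which is why the technique needs an optical thickness of roughly 1 μm or more for the peaks to remain distinguishable.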
Development of super-resolution optical inspection system for semiconductor defects using standing wave illumination shift
Semiconductor design rules and process windows continue to shrink, so we face many challenges in developing new processes such as 300 mm wafers, copper lines, and low-k dielectrics. The challenges have become more difficult because we must solve problems on both patterned and un-patterned wafers. The problems include physical defects, electrical defects, and even macro defects, which can ruin an entire wafer rather than just a die. Optical and electron-beam methods have mainly been used for detecting critical defects, but both technologies have disadvantages: optical inspection is generally not sensitive enough for defects at geometries of 100 nm and below, while SEM inspection has low throughput because pumping down to vacuum and scanning a 300 mm wafer take a long time. To address these problems, we propose a novel optical inspection method for critical defects on semiconductor wafers. The method is expected to give the inspection system resolution exceeding the Rayleigh limit, and because it is optical, a high-throughput inspection system can be expected. In this research, we developed experimental equipment for the super-resolution optical inspection system. The system combines standing-wave illumination shifted by a piezoelectric actuator, dark-field imaging, and super-resolution post-processing of the images. As a fundamental verification of the super-resolution method, we performed basic experiments on the detection of light scattered from standard particles.
Evaluation of laser trapping probe properties for coordinate measurement
For the past few decades, Micro-System Technology (MST) has been developed, enabling the fabrication of microcomponents for micro-systems. To measure such microcomponents, which have micrometer-scale features, the concept of a nano-CMM was proposed, with specifications such as a measuring range of (10 mm)³ and an accuracy of 50 nm. We have proposed a laser trapping probe as a position-detecting probe for the nano-CMM. The laser trapping probe is well suited to the nano-CMM because of its high sensitivity, the availability of a highly spherical probe stylus, and its changeable properties. On the other hand, there are sources of uncertainty; one of them is the standing wave, whose influence is experimentally investigated here.
The results reveal the following. The standing wave clearly influences the behavior of the laser-trapped probe sphere: the positional fluctuations caused by the standing wave reach several hundred nanometers, and the phenomenon appears with high repeatability. The size of the probe sphere can be a crucial parameter for reducing the influence of the standing wave. As another possibility for reducing the influence, on an inclined substrate the probe tends not to be affected by the standing wave. On the other hand, the standing wave influences the probe sphere even beyond 100 mm from the flat silicon substrate.
Measuring shapes of three-dimensional objects by rotary focused-plane sectioning
We devised a new shape-from-focus method to model a robotic work space. In this method of focused-plane sectioning, the focused plane was rotated around an axis located at the front focal point of the imaging lens, enabling omnidirectional scanning and panoramic range data on the space where objects were located. The astigmatism of the image-taking lens was a major source of systematic errors in detecting the 3-D coordinates of the objects, and the measurement errors depended on the direction of the object contours. We devised an effective correction method: the coordinates of a contour point are corrected by detecting the edge direction of the contour image and interpolating the line-of-sight range and azimuthal angle of the contour point between the predetermined ranges and azimuthal angles of the horizontal and vertical contour points, using the direction of the measured contour. After correction, the lateral position errors were less than 0.17 mm and the range errors were less than 8.6 mm on a contour in a direction of 45° at a distance of 730 mm.
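Independent of the rotary sectioning geometry, the core shape-from-focus computation is to score focus per pixel across the image stack and take the argmax along the stack axis. The modified-Laplacian focus measure below is an assumption for illustration; the paper does not specify its focus operator:

```python
import numpy as np

def shape_from_focus(stack):
    """Per-pixel depth index from an image stack, one image per focus
    setting.  Focus is scored with a modified-Laplacian measure; the
    depth index at each pixel is the argmax of the score over the stack."""
    scores = []
    for img in stack:
        # Absolute second differences in x and y (wrap-around at borders).
        lap_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
        lap_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
        scores.append(lap_x + lap_y)
    return np.argmax(np.stack(scores), axis=0)
```

The returned index maps to a physical range once the position of the focused plane at each stack slice is known, which in the paper's setup is a rotation angle rather than a translation.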
Expansion of measuring range of stereovision system based on Mono-MoCap
This study aims to expand the measuring range of a stereovision system. In a previous paper, the authors developed a 3D motion capture system using one camera with triangular markers, named Mono-MoCap (MMC). MMC has two features: it can measure the 3D positions of subjects with one camera by solving the perspective-n-point (PnP) problem, and it does not require recalibration of the camera parameters. In this paper, the authors apply MMC to a binocular stereovision system to expand its measurement range. MMC addresses the problem that a stereovision system cannot measure the 3D position of an object when even one camera fails to capture it. In this study, the 3D positions of three points whose geometrical relation to each other is known are measured by stereovision. When part of the object is hidden from one camera of the stereo pair, 3D position measurement is still enabled by using MMC as a secondary method. Simulation and experimental results show the effectiveness of the proposed method for expanding the measuring range of the stereovision system.
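The stereovision half of the system reduces to standard two-ray triangulation when both cameras see a point. A minimal midpoint-method sketch of that step (the PnP solution used by MMC itself is not reproduced here):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation for a two-camera stereovision setup:
    each camera contributes a ray (center c, direction d) through the
    observed image point; the 3-D estimate is the midpoint of the
    shortest segment connecting the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return 0.5 * (p1 + p2)
```

When one ray is missing because a camera is occluded, this computation is no longer possible, which is exactly the gap that the MMC fallback fills.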
Fringe-Projection 3D Surface Measurement: Joint Session with Conference 6375A
Triangular phase-shifting algorithms for surface measurement
Two-step triangular phase-shifting has recently been developed for 3-D surface-shape measurement. Compared with previous phase-shifting methods, it involves less processing and fewer images to reconstruct the 3-D object. This paper presents novel extensions of the two-step triangular phase-shifting method to multiple-step algorithms to increase measurement accuracy. The phase-shifting algorithms used to generate the intensity ratio, which is essential for determining the 3-D coordinates of the measured object, are developed for different multiple-step approaches. The measurement accuracy is determined for different numbers of additional steps and values of pitch. Compared with the traditional sinusoidal phase-shifting-based method with the same number of phase-shifting steps, the processing is expected to be reduced at similar resolution. More phase steps give higher accuracy in the 3-D shape reconstruction; however, the digital fringe projection generates phase-shifting error if the pitch of the pattern cannot be evenly divided by the number of phase steps. The pitch of the projected pattern must therefore be selected according to the number of phase-shifting steps used.
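The abstract does not give the triangular intensity-ratio formulas themselves. For context, the classical N-step sinusoidal phase-shifting retrieval that the triangular methods are compared against can be sketched as:

```python
import math

def phase_from_n_steps(intensities):
    """Classical N-step sinusoidal phase-shifting retrieval.
    Model: I_k = A + B*cos(phi + 2*pi*k/N) for k = 0..N-1.
    Returns the wrapped phase phi in (-pi, pi]."""
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    # The sine sum carries -N*B/2*sin(phi), the cosine sum N*B/2*cos(phi).
    return math.atan2(-s, c)
```

The sinusoidal retrieval needs an arctangent per pixel; the triangular intensity-ratio approach replaces this with simpler arithmetic, which is the source of the reduced processing the paper refers to.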
Repeated phase-offset measurement for error compensation in two-step triangular phase-shifting profilometry
Two-step triangular phase-shifting is a recently developed method for 3-D shape measurement. In this method, two triangular gray-level-coded patterns, phase-shifted by half of the pitch, are needed to reconstruct the 3-D object. The measurement accuracy is limited by gamma non-linearity and defocus of the projector and camera. This paper presents a repeated phase-offset two-step triangular-pattern phase-shifting method used to decrease the measurement error caused by gamma non-linearity and defocus in the previously developed two-step triangular-pattern phase-shifting 3-D object measurement method. Experimental analysis indicated that a sensitivity threshold based on the gamma non-linearity curve should be used as the minimum intensity of the computer-generated pattern input to the projector to reduce measurement error. In the repeated phase-offset method, two-step triangular phase-shifting is repeated with an initial phase offset of one-eighth of the pitch, and the two obtained 3-D object-height distributions are averaged to generate the final 3-D object-height distribution. Experimental results demonstrated that the repeated phase-offset measurement method substantially decreased measurement error compared to the two-step triangular phase-shifting method.
An active vision sensor system employing adaptive digital fringe pattern generated by SLM pattern projector
Many kinds of fringe-projection methods for 3D depth measurement have been researched, such as the moiré method and the optical triangulation method. Generally, these methods use a regular vertical fringe pattern. However, with a regular vertical fringe pattern, shape-measurement results, especially for symmetric objects, are not accurate, because a vertical fringe pattern cannot represent the shape of various objects well. In this paper, to solve this problem, we introduce a new sensing methodology based on an object-adapted fringe-projection method. To generate a flexible object-adapted fringe pattern, we use a projector with a spatial light modulator (SLM). Our algorithm consists of three parts: the first generates an object-adapted fringe pattern by applying the moiré technique; the second projects the moiré image onto the projector plane; and the final part finds the absolute depth of the object using the optical triangulation method. To verify the performance of the proposed sensing system, we conducted a series of experiments on various simple objects. The results show the feasibility of successful perception for the objects treated herein.
Pattern Recognition, Segmentation, and Object Modeling
Virtual assemblage of fragmented artefacts
Recent improvements in computer graphics, three-dimensional digitization and virtual reality tools have enabled
archaeologists to capture and preserve ancient relics recovered from excavated sites by creating virtual representations of
the original artefacts. The digital copies offer an accurate and enhanced visual representation of the physical object. The
process of reconstructing an artefact from damaged pieces by virtual assemblage and clay sculpting is summarized in
this paper. Surface models of the digitized fragments are first created and then manipulated in a virtual reality (VR)
environment using simple force feedback tools. The haptic device provides tactile cues that assist the user with the
assembly process and with introducing soft virtual clay into the resultant assemblage for complete 3D reconstruction. Since
reconstruction is performed within a VR environment, the joining or "gluing" of separate damaged fragments will permit
the scientist to investigate alternative relic configurations. Results from a preliminary experiment are presented to
illustrate the virtual assemblage procedure used to reconstruct fragmented or broken objects.
Template generation by component maximization for real time face detection
Real-time face detection in video sequences is important in diverse applications such as man-machine interfaces, face recognition, security, and multimedia retrieval. In this work, we present a new method based on the maximization of local components in the directional image to optimize templates for frontal face detection. In the past, several methods for face detection have been developed using face templates based on common facial features such as the eyebrows, eyes, nose, and mouth. Such templates have been applied to a directional image containing faces, computing a line integral to detect faces with high accuracy. In this paper, the maximization of local components in the directional image is used to select new templates, optimizing their size and their response to a face in the directional image. The method selects common directional vectors in a set of frontal faces to generate the template. The method was tested on 386 images from the Caltech face database and 55 images from the Purdue database. Results were compared to those of traditional anthropometric templates containing features from the eyebrows, nose, and mouth. The new templates show significantly better performance in the estimation of face size and in the line-integral value. Face detection reached 97% on the Caltech face database and 98% on the Purdue database. The new templates also have fewer points than the traditional anthropometric templates, which leads to lower processing time.
Local retouching of degraded images by histogram-based similarity evaluation
The quality of local regions in a scene can be degraded by ill-conditioned lighting and reflection. We develop a method to improve the quality of such local regions. A local region of the image is first selected by an algorithm based on similarity evaluation of the region. The histograms of brightness and saturation of the local region are then expanded using histogram equalization, improving both brightness and saturation. In the final step, the improved data are blended with the original image according to the distance from the point selected by the user, so that the local part of the image merges naturally with the surrounding scene.
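The histogram-expansion step can be sketched as plain histogram equalization of one 8-bit channel. This is a sketch of the standard technique; the paper applies it to both the brightness and saturation channels of the selected region:

```python
import numpy as np

def equalize_channel(channel):
    """Histogram equalization of one 8-bit channel (brightness or
    saturation): map each gray level through the normalized CDF so
    the output histogram spreads over the full [0, 255] range.
    Assumes the region is not perfectly uniform."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)  # per-level lookup table
    return lut[channel]
```

Applying this only inside the selected region, and then distance-weighting the result against the original, gives the seamless local retouch the paper describes.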
Using orientation code matching for robustly sensing real velocity of agrimotors
Instead of tachometer-type velocity sensors, we propose an effective method for measuring the real-time velocity of agrimotors, and of working machines driven by them such as sprayers and harvesters, in real farm fields, based on a robust image-matching algorithm.
The method should remain precise even when the wheels slip, and stable and robust against the many ill conditions that occur in real farm fields; to this end, a robust and fast image-matching algorithm, orientation code matching, is effectively utilized.
A prototype system was designed for real-time estimation of agrimotor velocity, and its effectiveness was verified on many frames obtained from real fields under various weather and ground conditions.
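Orientation code matching compares quantized gradient orientations rather than raw intensities, which is what makes it robust to illumination changes in the field. A sketch in its spirit, where the quantization level, invalid-code penalty, and contrast threshold are illustrative assumptions:

```python
import numpy as np

N_CODES = 16          # orientation quantization levels (assumed)
INVALID = N_CODES     # code assigned to low-contrast pixels

def orientation_codes(img, low_contrast=10.0):
    """Quantize the gradient orientation at each pixel into N_CODES bins;
    pixels with too little contrast get the INVALID code."""
    gx = np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)
    gy = np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    codes = np.rint(ang / (2 * np.pi / N_CODES)).astype(int) % N_CODES
    codes[np.abs(gx) + np.abs(gy) < low_contrast] = INVALID
    return codes

def ocm_dissimilarity(codes_a, codes_b):
    """Orientation-code-matching dissimilarity: mean cyclic difference
    between code maps, with a fixed penalty where either code is invalid."""
    d = np.abs(codes_a - codes_b)
    d = np.minimum(d, N_CODES - d)           # cyclic (wrap-around) distance
    invalid = (codes_a == INVALID) | (codes_b == INVALID)
    d = np.where(invalid, N_CODES / 4.0, d)  # fixed penalty for invalid pixels
    return float(np.mean(d))
```

Velocity estimation then amounts to sliding a template's code map over the next frame and taking the displacement with minimum dissimilarity.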
Novel Optomechatronic Applications
Spherical imaging array based on bioelectronic photoreceptors
The performance of wide field-of-view (FOV) and omni-directional sensors is often limited by the complex optics used to project three-dimensional world points onto the planar surface of a charge-coupled device (CCD) or CMOS array. Recent advances in the design and development of a spherical imaging system that exploits the fast photoelectric signals generated by dried bacteriorhodopsin (bR) films are described in this paper. The bendable, lightweight, and durable bR-based photocell array is manufactured on an indium-tin-oxide (ITO) coated plastic film using an electrophoretic sedimentation (EPS) technique. The effective sensing area of each pixel in the preliminary prototype is 2x2 mm2; the pixels are separated by 1 mm and arranged in a 4x4 array. When exposed to light, the differential response characteristic is attributed to charge displacement and recombination within the bR molecule, as well as loading effects of the attached amplifier. The peak spectral response occurs at 568 nm and is linear over the tested light power range of 200 μW to 12 mW. The response remains linear at the other tested wavelengths, but at reduced signal amplitude. Excess material between the bR sensing elements can be cut from the plastic substrate to increase structural flexibility and permit the array of photodetectors to be wrapped around the exterior, or adhered to the interior, of a sphere.
Noncontact vibration analysis using innovative laser-based methodology
Vibration is a back-and-forth mechanical motion with a steady, uninterrupted rhythm about an equilibrium point. There are two types of vibration: natural (or free) and forced. Natural vibration occurs as the result of a disturbing force that is applied once and then removed. Forced vibration occurs as a result of a force applied repeatedly to a system. All machines have some amount of forced vibration, and in some cases this vibration can damage machinery. Understanding vibration is essential for any system that will be exposed to motion. Instruments such as strain gauges and piezoelectric accelerometers have been adequate for measuring vibration in the past; however, due to increased performance requirements and subsequent reductions in vibration, these methods are slowly being replaced by laser-based measurement systems. One driver of this transition is that contact methods require part of the measurement system to be mounted on the surface of the object being measured, which changes the mass and thus alters the frequency and mode shape of the vibrating object. At present, however, the high expense of precision vibration monitoring is a challenge, and there is a need for more cost-effective methods of vibration analysis. This paper outlines a lower-cost laser-based method of measuring vibration with minimal surface contact.
Vision-Based Tracking
Camera pan-tilt ego-motion tracking from point-based environment models
We propose a point-based environment model (PEM) to represent the
absolute coordinate frame in which camera motion is to be tracked.
The PEM can easily be acquired by laser scanning both indoors and
outdoors even over long distances. The approach avoids any expensive
modeling step and instead uses the raw point data for scene representation.
Also the approach requires no additional artificial markers or active
components as orientation cues. Using intensity-feature detection
techniques, key points are automatically extracted from the PEM and
tracked across the image sequence. The orientation procedure of the
imaging sensor is based solely on spatial resection.
Object tracking by block division based on radial reach filter
The development of image-processing methods that can track moving objects in time-series images taken by a fixed camera is an important subject in the field of machine vision. Here, we consider the influences of changes in brightness and of changes in the regions occupied by the moving objects. In this paper, we introduce a new tracking method that reduces the influence of these changes. First, we use the Radial Reach Filter (RRF) to detect the moving objects. The moving objects are then tracked by image processing based on the information obtained by applying RRF together with block division. Further, we propose a method for the case in which the size of a moving object changes over time. Finally, experiments demonstrate the validity of the proposed method.
A boundary tracking approach for tape substrate pattern inspection based on skeleton information
A tape substrate (TS) product is a high-density circuit pattern on a thin-film substrate, and it requires a precise, high-resolution imaging system for inspection. We introduce a TS inspection system in which the products are fed through a reel-to-reel system, together with a series of inspection algorithms based on a referential method. In this system, it is difficult to acquire consistent images of material as thin and flexible as a TS product, so the images suffer from individual, local distortion during image acquisition. Since the distortion results in relatively large discrepancies between an inspection image and the master image, direct image-to-image comparison is not suitable for inspection. To inspect the pattern more robustly in this application, we propose a graph-matching method in which the patterns are modeled as a collection of lines with link points as features. In the offline teaching process, the graph model is obtained from the skeleton of the master image and stored as a database. At run time, a boundary-tracking method is used to extract the graph model from an inspection image instead of a skeletonization process, to reduce computation time. By comparing the corresponding graph models, a line linked to undesired endpoints can be detected, indicating an open or short defect. Through the boundary-tracking approach, we can also detect boundary defects such as pattern nicks and protrusions.
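At its simplest, the open/short decision from graph comparison is a set difference over the line segments of the two graph models. The toy sketch below assumes the node correspondence between master and inspection graphs has already been established; matching distorted graphs, as the real system must, is considerably harder:

```python
def compare_pattern_graphs(master_edges, inspected_edges):
    """Compare two circuit-pattern graph models, each given as a list of
    (node_a, node_b) line segments.  Edges missing from the inspected
    graph suggest opens; extra edges suggest shorts.  Edges are stored
    with sorted endpoints so direction does not matter."""
    master = {tuple(sorted(e)) for e in master_edges}
    inspected = {tuple(sorted(e)) for e in inspected_edges}
    return {"open": master - inspected, "short": inspected - master}
```

In the described system the master edges come from the skeleton of the taught image, while the inspected edges come from run-time boundary tracking.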