Proceedings Volume 8029

Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring; and Biometric Technology for Human Identification VIII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 9 May 2011
Contents: 14 Sessions, 48 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2011
Volume Number: 8029

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Global Health and Disease Surveillance I
  • Global Health and Disease Surveillance II
  • Global Health: Ensuring Safe Water Supply
  • Military Health and Traumatic Brain Injury I
  • Military Health and Traumatic Brain Injury II
  • Disaster Response and Situational Awareness
  • Oil Spill (DHW) and Ocean Monitoring I: Joint Session with Conference 8030
  • Oil Spill (DHW) and Ocean Monitoring II: Joint Session with Conference 8030
  • Poster Session on Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring: Global Health
  • Poster Session on Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring: Environmental Monitoring
  • Face Biometrics
  • Fingerprint and Voice Biometrics
  • Iris Biometrics
  • Ocular Biometrics
Global Health and Disease Surveillance I
Instrument-free nucleic acid amplification assays for global health settings
Paul LaBarre, David Boyle, Kenneth Hawkins, et al.
Many infectious diseases that affect global health are most accurately diagnosed through nucleic acid amplification and detection. However, existing nucleic acid amplification tests are too expensive and complex for most low-resource settings. The small numbers of centralized laboratories that exist in developing countries tend to be in urban areas and primarily cater to the affluent. In contrast, rural health care facilities commonly have only basic equipment, and health workers have limited training and little ability to maintain equipment and handle reagents [1]. Reliable electric power is a common infrastructure shortfall. In this paper, we discuss a practical approach to the design and development of non-instrumented molecular diagnostic tests that exploit the benefits of isothermal amplification strategies. We identify modular instrument-free technologies for sample collection, sample preparation, amplification, heating, and detection. By appropriately selecting and integrating these instrument-free modules, we envision development of an easy-to-use, infrastructure-independent diagnostic test that will enable increased use of highly accurate molecular diagnostics at the point of care in low-resource settings.
Novel approaches in diagnosing tuberculosis
Arend H. J. Kolk, Ngoc A. Dang, Sjoukje Kuijper, et al.
The WHO declared tuberculosis (TB) a global emergency. An estimated 8-9 million new cases occur each year, with 2-3 million deaths. Currently, TB is diagnosed mostly by chest X-ray and staining of the mycobacteria in sputum, with a detection limit of 1×10^4 bacteria/ml. There is an urgent need for better diagnostic tools for TB, especially for developing countries. We have validated the electronic nose from TD Technology for the detection of Mycobacterium tuberculosis by headspace analysis of 284 sputum samples from TB patients. We used linear discriminant function analysis, resulting in a sensitivity of 75%, a specificity of 67%, and an accuracy of 69%. Further research is still required to improve the results by choosing more selective sensors and sampling techniques. We also used a fast gas chromatography-mass spectrometry (GC-MS) method. The automated procedure is based on the injection of sputum samples which are methylated inside the GC injector using thermally assisted hydrolysis and methylation (THM-GC-MS). Hexacosanoic acid in combination with tuberculostearic acid was found to be specific for the presence of M. tuberculosis. The detection limit was similar to that of microscopy. We found no false positives; all microscopy- and culture-positive samples were also found positive with the THM-GC-MS method. The detection of ribosomal RNA from the infecting organism offers great potential, since rRNA molecules outnumber chromosomal DNA by a factor of 1000. It thus may be possible to detect the organism without amplification of the nucleic acids (NA). We used a capture probe and a tagged detector probe for the direct detection of M. tuberculosis in sputum. So far the detection limit is 1×10^6 bacteria/ml. Currently we are testing a lab-on-a-chip interferometer detection system.
Massively multiplexed microbial identification using resequencing DNA microarrays for outbreak investigation
T. A. Leski, R. Ansumana, D. H. Jimmy, et al.
Multiplexed microbial diagnostic assays are a promising method for detection and identification of pathogens causing syndromes characterized by nonspecific symptoms in which traditional differential diagnosis is difficult. Also such assays can play an important role in outbreak investigations and environmental screening for intentional or accidental release of biothreat agents, which requires simultaneous testing for hundreds of potential pathogens. The resequencing pathogen microarray (RPM) is an emerging technological platform, relying on a combination of massively multiplex PCR and high-density DNA microarrays for rapid detection and high-resolution identification of hundreds of infectious agents simultaneously. The RPM diagnostic system was deployed in Sierra Leone, West Africa in collaboration with Njala University and Mercy Hospital Research Laboratory located in Bo. We used the RPM-Flu microarray designed for broad-range detection of human respiratory pathogens, to investigate a suspected outbreak of avian influenza in a number of poultry farms in which significant mortality of chickens was observed. The microarray results were additionally confirmed by influenza specific real-time PCR. The results of the study excluded the possibility that the outbreak was caused by influenza, but implicated Klebsiella pneumoniae as a possible pathogen. The outcome of this feasibility study confirms that application of broad-spectrum detection platforms for outbreak investigation in low-resource locations is possible and allows for rapid discovery of the responsible agents, even in cases when different agents are suspected. This strategy enables quick and cost effective detection of low probability events such as outbreak of a rare disease or intentional release of a biothreat agent.
Tunable wavelength interrogated sensor platform (TWIST) for point-of-care diagnostics of infectious diseases
Sonia Grego, Kristin H. Gilchrist, Brian R. Stoner
The TWIST platform is an optical evanescent wave sensor which enables a label-free immunoassay-based portable instrument. The approach is based on input grating coupler sensors serving as functionalized sensing devices. Binding of the target analyte to the receptor-coated grating is detected by wavelength interrogation in the telecom spectral range. We have demonstrated that high performance volumetric sensing can be achieved using a compact, low-cost telecom laser as light source. The system footprint including light source, detectors and digitizers is compact. The platform is amenable to multiplexed operation. We demonstrated a two-output system which enables detection of an analyte with an on-chip reference signal.
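For readers unfamiliar with input grating couplers, the textbook phase-matching condition below (a standard relation, not taken from the abstract) shows why binding can be read out as a wavelength shift: adsorbed analyte increases the waveguide's effective index, so at a fixed coupling angle the resonant wavelength moves.

\[ n_{\mathrm{eff}}(\lambda_r) = n_{\mathrm{c}} \sin\theta + m \, \frac{\lambda_r}{\Lambda} \]

Here n_c is the cover refractive index, θ the incidence angle, m the diffraction order, and Λ the grating period; tracking the shift in λ_r with the tunable telecom laser gives the label-free binding signal.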
Constructing paths through social networks for disease surveillance
Global health security needs better information on biological threats such as pandemics and bioterrorism that pose ever-increasing dangers for the health of populations worldwide. A vast amount of real-time information about infectious disease outbreaks is found in various forms of Web-based data streams. There are advantages and disadvantages of Internet-based surveillance and it has been suggested that an important research area will be to evaluate the application of technologies that will provide benefits to outbreak disease control at local, national, and international levels.
Solving stochastic epidemiological models using computer algebra
Doracelly Hincapie, Juan Ospina
Mathematical modeling in epidemiology is an important tool for understanding how diseases are transmitted and controlled. The mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, whereas stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous, and automatic analysis of both local and global stability, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in mathematical statistics. In this work we propose to use computer algebra software to solve epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in recent books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, which shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results and obtain estimates for basic parameters such as the basic reproductive rate and the stochastic epidemic threshold. We claim that our algorithms and results are important tools for disease control in a globalized world.
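For context, the deterministic counterpart referred to above is the textbook SIR system; the relations below only fix notation and are not reproduced from the paper.

\[ \frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I, \qquad R_0 = \frac{\beta S_0}{\gamma} \]

An epidemic can take off only when R_0 > 1; it is threshold expressions of this kind, and their stochastic analogues, that the authors derive symbolically with Maple.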
Global Health and Disease Surveillance II
Digital microbiology: detection and classification of unknown bacterial pathogens using a label-free laser light scatter-sensing system
Bartek Rajwa, M. Murat Dundar, Ferit Akova, et al.
The majority of tools for pathogen sensing and recognition are based on physiological or genetic properties of microorganisms. However, there is enormous interest in devising label-free and reagentless biosensors that would operate utilizing the biophysical signatures of samples without the need for labeling and reporting biochemistry. Optical biosensors are closest to realizing this goal, and vibrational spectroscopies are examples of well-established optical label-free biosensing techniques. The recently introduced forward-scatter phenotyping (FSP) also belongs to the broad class of optical sensors. However, in contrast to spectroscopies, the remarkable specificity of FSP derives from the morphological information that bacterial material encodes on a coherent optical wavefront passing through the colony. The system collects elastically scattered light patterns that, given a constant environment, are unique to each bacterial species and/or serovar. Both FSP technology and spectroscopies rely on statistical machine learning to perform recognition and classification. However, the commonly used methods utilize either simplistic unsupervised learning or traditional supervised techniques that assume completeness of training libraries. This restrictive assumption is known to be false for real-life conditions, resulting in unsatisfactory levels of accuracy, and consequently limited overall performance for biodetection and classification tasks. The presented work demonstrates preliminary studies on the use of the FSP system to classify selected serotypes of non-O157 Shiga toxin-producing E. coli in a nonexhaustive framework, that is, without full knowledge of all the possible classes that can be encountered. Our study uses a Bayesian approach to learning with a nonexhaustive training dataset to allow for the automated and distributed detection of unknown bacterial classes.
Light without substrate amendment: the bacterial luciferase gene cassette as a mammalian bioreporter
Dan M. Close, Tingting Xu, Abby E. Smartt, et al.
Bioluminescent production represents a facile method for bioreporter detection in mammalian tissues. The lack of endogenous bioluminescent reactions in these tissues allows for high signal to noise ratios even at low signal strength compared to fluorescent signal detection. While the luciferase enzymes commonly employed for bioluminescent detection are those from class Insecta (firefly and click beetle luciferases), these are handicapped in that they require concurrent administration of a luciferin compound to elicit a bioluminescent signal. The bacterial luciferase (lux) gene cassette offers the advantages common to other bioluminescent proteins, but is simultaneously capable of synthesizing its own luciferin substrates using endogenously available cellular compounds. The longstanding shortcoming of the lux cassette has been its recalcitrance to function in the mammalian cellular environment. This paper will present an overview of the work completed to date to overcome this limitation and provide examples of mammalian lux-based bioreporter technologies that could provide the framework for advanced, biomedically relevant real-time sensor development.
Characterization of a chromosomally integrated luxCDABE marker for investigation of shiga toxin-producing Escherichia coli O91:H21 shedding in cattle
Yingying Hong, Alan G. Mathew
Shiga toxin-producing Escherichia coli (STEC) O91:H21 has been recognized as a potentially life-threatening foodborne pathogen and is commonly involved in human infections in European countries. Fecal shedding of the organism by cattle is considered to be the ultimate source of contamination. Studies examining STEC shedding patterns often include inoculation of strains carrying antibiotic resistance markers for identifiable recovery. However, indigenous intestinal microflora exhibiting similar antibiotic resistance patterns can confound such studies. Such was the case in a study by our group when attempting to characterize shedding patterns of O91:H21 in calves. A chromosomally integrated bioluminescence marker using a luxCDABE cassette from Photorhabdus luminescens was developed in O91:H21 to overcome these shortcomings of antibiotic resistance markers during animal challenge experiments. The marker was validated in various aspects and was shown to have no impact on metabolic reactions, isotype virulence gene patterns, or cost to growth, and additionally demonstrated high in vitro stability. Together, the results indicated that a chromosomally integrated luxCDABE-based marker may be a superior system for the study of STEC colonization and shedding in cattle.
Global Health: Ensuring Safe Water Supply
Monitoring from source to tap: the new paradigm for ensuring water security and quality
The threat of terrorist action targeting water supplies is often overlooked in favor of the more historically obvious threats of an air attack or a dirty bomb. Studies have shown that an attack on water is simple to orchestrate, inexpensive, and can result in mass casualties. The twin motivators of the terrorist threat to water and consumer demands for safe and potable supplies have led to a sea change in the drinking water industry. From a historical perspective, most monitoring in the distribution system as well as source water has been relegated to the occasional snapshot provided by grab sampling for a few limited parameters or the infrequent regulatory testing required by mandates such as the Total Coliform Rule. New technologies are being deployed to ameliorate the threat from both intentional and accidental water contamination. The threat to water and these new technologies are described, as well as needs and requirements for new sensors to improve the monitoring structure.
Large area radiation source for water and wastewater treatment
Michael T. Mueller, Seungwoo Lee, Anthony Kloba, et al.
There is a strong desire for processes that improve the safety of water supplies and that minimize disinfection byproducts. Stellarray is developing mercury-free next-generation x-ray and UV-C radiation sources in flat-panel and pipe form factors for water and wastewater treatment applications. These new radiation sources are designed to sterilize sludge and effluent, and to enable new treatment approaches to emerging environmental concerns such as the accumulation of estrogenic compounds in water. Our UV-C source, based on cathodoluminescent technology, differs significantly from traditional disinfection approaches using mercury arc lamps or UV LEDs. Our sources accelerate electrons across a vacuum gap, converting their energy into UV-C when striking a phosphor, or x-rays when striking a metallic anode target. Stellarray's large area radiation sources for wastewater treatment allow matching of the radiation source area to the sterilization target area for maximum coverage and improved efficiency.
Early warning system for detection of microbial contamination of source waters
Claus Tilsted Mogensen, Anders Bentien, Mogens Lau, et al.
Ensuring chemical and microbial water quality is an increasingly important issue worldwide. Currently, determination of microbial water quality is a time- and money-consuming manual laboratory process. We have developed and field-tested an online, real-time sensor for measuring the microbial water quality of a wide range of source waters. The novel optical technique, in combination with advanced data analysis, yields a measure of the microbial content present in the sample. This gives fast and reliable detection of microbial contamination of the source. Sample acquisition and analysis are performed in real time, and objects in suspension are differentiated into, e.g., organic and inorganic subgroups. The detection system is a compact, low-power, reagentless device and thus ideal for applications where long service intervals and remote operations are desired. Due to the very large dynamic range in measured parameters, the system is able to monitor process water in industry and food production as well as waste water, source water, and water distribution systems. The applications envisioned for this system include early warning of source-water contamination and/or variation in water plants and water distribution networks, filtration systems (water purification), commercial buildings, swimming pools, waste water effluent, and industry in general.
A new demulsifier device for oil-water separation in oil tanks
In this paper, an innovative closed-loop, autonomous electronic device for oil-water separation in the emulsion layer is presented. The device is designed for crude-oil separation tanks and is intended to replace traditional methods such as those using chemicals. It is modular and comprises three subsystems: a sensing subsystem, an actuating subsystem, and a data communication/interfacing subsystem. The sensing subsystem is intrinsically safe and consists of a one-dimensional level array of non-intrusive ultrasonic transducers that monitor in real time the low and high levels of the emulsion layer in a tank with a vertical resolution of 15 cm. The actuating subsystem includes a microwave generator which stimulates the emulsion at a predefined position to break it up. A built-in PID-based feedback controller determines the optimal position of this generator based on the oil-water content provided by the sensor array and moves the generator accordingly. The data communication/interfacing subsystem is responsible for transferring real-time data (e.g., the actual position of the emulsion layer and the actual temperature inside the tank) to the control room using a fieldbus network protocol (RS485). This supports continuous and effective monitoring by the operator through a dedicated GUI. In addition to being safe and environmentally friendly, the device provides faster and more efficient separation than traditional techniques.
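As a rough illustration of the closed loop described above, the sketch below drives a generic PID position update from the ultrasonic level array; the midpoint setpoint rule, names, and gains are assumptions made for illustration, not the device's actual firmware.

# Illustrative only: a generic PID loop positioning the microwave generator at the
# emulsion-layer midpoint reported by the ultrasonic array (all names/values assumed).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def emulsion_midpoint(flags, resolution_cm=15.0):
    """flags: list of 0/1 readings from the transducer array (1 = emulsion detected)."""
    hits = [i for i, v in enumerate(flags) if v]
    return None if not hits else 0.5 * (hits[0] + hits[-1]) * resolution_cm

controller = PID(kp=0.8, ki=0.05, kd=0.1, dt=1.0)
generator_cm = 30.0                      # current generator height, cm
flags = [0, 0, 1, 1, 1, 0, 0, 0]         # one snapshot from the level array
setpoint = emulsion_midpoint(flags)
if setpoint is not None:
    generator_cm += controller.step(setpoint, generator_cm)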
Military Health and Traumatic Brain Injury I
Traumatic brain injury produced by exposure to blasts, a critical problem in current wars: biomarkers, clinical studies, and animal models
C. Edward Dixon
Traumatic brain injury (TBI) resulting from exposure to blast energy released by Improvised Explosive Devices (IEDs) has been recognized as the "signature injury" of Operation Iraqi Freedom and Operation Enduring Freedom. Repeated exposure to mild blasts may produce subtle deficits that are difficult to detect and quantify. Several techniques have been used to detect subtle brain dysfunction, including neuropsychological assessments, computerized function testing, and neuroimaging. Another approach is based on measurement of biologic substances (e.g., proteins) that are released into the body after a TBI. Recent studies measuring biomarkers in CSF and serum from patients with severe TBI have demonstrated their diagnostic, prognostic, and monitoring potential. Advancement of the field will require 1) biochemical mining for new biomarker candidates, 2) clinical validation of utility, 3) technical advances for more sensitive, portable detectors, 4) novel statistical approaches to evaluate multiple biomarkers, and 5) commercialization. Animal models have been developed to simulate elements of blast-relevant TBI, including gas-driven shock tubes to generate pressure waves similar to those produced by explosives. These models can reproduce hallmark clinical neuropathological responses such as neuronal degeneration and inflammation, as well as behavioral impairments. An important application of these models is to screen novel therapies and conduct proteomic, genomic, and lipidomic studies to mine for new biomarker candidates specific to blast-relevant TBI.
Diagnostic protein biomarkers for severe, moderate and mild traumatic brain injury
Jackson Streeter M.D., Ronald L Hayes, Kevin K. W. Wang
Traumatic Brain Injury (TBI) is a major problem in military and civilian medicine, yet there are no simple non-invasive diagnostics for TBI. Our goal is to develop and clinically validate blood-based biomarker assays for the diagnosis, prognosis, and management of mild, moderate, and severe TBI patients. These assays will ultimately be suitable for deployment to far-forward combat environments. Using a proteomic and systems biology approach, we identified over 20 candidate biomarkers for TBI and developed robust ELISAs for at least 6 candidate biomarkers, including ubiquitin C-terminal hydrolase-L1 (UCH-L1), glial fibrillary acidic protein (GFAP), and a 145 kDa breakdown product of αII-spectrin (SBDP145) generated by calpain proteolysis. In a multi-center feasibility study (Biomarker Assessment for Neurotrauma Diagnosis and Improved Triage System, BANDITS), we analyzed CSF and blood samples from 101 adult patients with severe TBI [Glasgow Coma Scale (GCS) ≤ 8] at 6 sites, and in a pilot study we analyzed 27 mild TBI patients and 5 moderate TBI patients [GCS 9-15] from 2 sites. We found that serum levels of UCH-L1, GFAP, and SBDP145 have strong diagnostic and prognostic properties for severe TBI relative to controls. Similarly, initial post-TBI serum levels (< 6 h) of UCH-L1 and GFAP have diagnostic value for moderate and mild TBI. We are now furthering assay production, refining assay platforms (both benchtop and point-of-care/handheld), and planning a pivotal clinical study to seek FDA approval of these TBI diagnostic assays.
Field-based multiplex and quantitative assay platforms for diagnostics
Srivatsa Venkatasubbarao, C. Edward Dixon, Russell Chipman, et al.
The U.S. military has a continued interest in the development of handheld, field-usable sensors and test kits for a variety of diagnostic applications, such as traumatic brain injury (TBI) and infectious diseases. Field-use presents unique challenges for biosensor design, both for the readout unit and for the biological assay platform. We have developed robust biosensor devices that offer ultra-high sensitivity and also meet field-use needs. The systems under development include a multiplexed quantitative lateral flow test strip for TBI diagnostics, a field test kit for the diagnosis of pathogens endemic to the Middle East, and a microfluidic assay platform with a label-free reader for performing complex biological automated assays in the field.
Accelerating the commercialization of university technologies for military healthcare applications: the role of the proof of concept process
Rosibel Ochoa, Hal DeLong, Jessica Kenyon, et al.
The von Liebig Center for Entrepreneurism and Technology Advancement at UC San Diego (vonliebig.ucsd.edu) is focused on accelerating technology transfer and commercialization through programs and education on entrepreneurism. Technology Acceleration Projects (TAPs), which offer pre-venture grants and extensive mentoring on technology commercialization, are a key component of its model, which has been developed over the past ten years with the support of a grant from the von Liebig Foundation. In 2010, the von Liebig Entrepreneurism Center partnered with the U.S. Army Telemedicine and Advanced Technology Research Center (TATRC) to develop a regional model of the Technology Acceleration Program, initially focused on military research, to be deployed across the nation to increase awareness of military medical needs and to accelerate the commercialization of novel technologies to treat the patient. Participants in these challenges are multi-disciplinary teams of graduate students and faculty in engineering, medicine, and business representing universities and research institutes in a region, selected via a competitive process, who receive commercialization assistance and funding grants to support translation of their research discoveries into products or services. To validate this model, a pilot program focused on commercialization of wireless healthcare technologies targeting campuses in Southern California has been conducted with the additional support of Qualcomm, Inc. Three projects representing three different universities in Southern California were selected out of forty-five applications from ten different universities and research institutes. Over the next twelve months, these teams will conduct proof-of-concept studies, technology development, and preliminary market research to determine the commercial feasibility of their technologies. This first regional program will help build the tools and processes needed to adapt and replicate this model across other regions of the country.
An innovative non-contact ECG sensor for monitoring heart disease
Ye Sun, Xiong (Bill) Yu, Jim Berilla
This paper describes the development of a non-contact sensing platform to monitor ECG signals. The non-contact sensing is based on capacitive coupling of the bioelectricity produced by cardiovascular activity around the heart. A high-sensitivity sensor and electronics were designed to amplify the signals. Our preliminary study has pointed to the promise of this sensing concept: a sensor prototype was able to clearly detect ECG signals from 10 cm away from the body. Ongoing research is improving the sensor design to detect the polarization in the ECG signals. The final goal is a non-contact sensing platform for ECG signals and for real-time diagnostics of mental distress and cardiovascular disease.
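Capacitively coupled electrodes form a high-pass network, so some front-end filtering is implied; the snippet below is a minimal, assumed conditioning step on synthetic data (generic 0.5-40 Hz band with scipy), not the authors' electronics.

# Sketch: zero-phase band-pass filtering of a capacitively coupled ECG-like signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                           # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # toy heartbeat + mains pickup
b, a = butter(4, [0.5 / (fs / 2), 40 / (fs / 2)], btype="band")
ecg = filtfilt(b, a, raw)                            # retains the 0.5-40 Hz ECG band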
Military Health and Traumatic Brain Injury II
Detecting gait alterations due to concussion impairment with radar using information-theoretic techniques
Jennifer Palmer, Kristin Bing, Amy Sharma, et al.
Several studies have shown that measuring changes in gait could provide an easier method of diagnosing and monitoring concussions. The purpose of this study was to measure radar signal returns to explore whether differences in gait patterns between normal and "concussed" individuals could be identified from radar spectrogram data. Access to concussed individuals was not available during this feasibility study. Instead, based on research demonstrating that concussion impairment is equivalent to a blood alcohol content (BAC) of 0.05%, BAC impairment goggles were used to visually simulate a concussion. Both "impaired" and "not impaired" individuals were asked to complete only a motor skill task (walking) and then to complete motor skill and cognitive skill (saying the months of the year in reverse order) tasks simultaneously. Results from the tests were analyzed using information-theoretic (IT) techniques. IT algorithms were selected because of their potential to identify similarities and differences without requiring a priori knowledge of an individual. To quantify results, two methods were used: decision index, D(Q), analysis with receiver operating characteristic (ROC) curves and object-feature matrix clustering. Both techniques showed acceptable percent correctness in discriminating between normal and "impaired" individuals.
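The ROC part of the analysis can be sketched as follows; the decision-index scores here are synthetic placeholders, not the study's radar-derived D(Q) values.

# Sketch: ROC curve and AUC for an impairment decision index D(Q) (synthetic scores).
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
d_normal = rng.normal(0.0, 1.0, 200)      # D(Q) for unimpaired walks (assumed distribution)
d_impaired = rng.normal(1.5, 1.0, 200)    # D(Q) for goggle-"impaired" walks
scores = np.concatenate([d_normal, d_impaired])
labels = np.concatenate([np.zeros(200), np.ones(200)])
fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC =", auc(fpr, tpr))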
A miniature pressure sensor for blast event evaluation
Traumatic brain injury (TBI) is a serious potential threat to people who deal with explosive devices, and protection from TBI has attracted more and more interest. Great effort has been devoted to studies of the propagation of blast events and their effect on TBI. However, one of the biggest challenges is that currently available pressure sensors are not fast enough to capture the blast wave, especially its transient period. This paper reports an ultrafast pressure sensor that could be very useful for analysis of the fast-changing blast signal. The sensor is based on the Fabry-Perot (FP) principle. It uses a 45° angle-polished fiber sitting in a V-groove on a silicon chip. The end face of the angle-polished fiber and a diaphragm lifted off on the side wall of the V-groove form the FP cavity. The sensor is very small and can be mounted at different locations on a helmet to measure blast pressure simultaneously. The tests were conducted at the Natick Soldier Research, Development, and Engineering Center (NSRDEC) in Natick, MA. The sensors were mounted in a shock tube, side by side with reference sensors, to measure a rapidly increasing pressure. The results demonstrated that our sensors' responses agreed well with those of the electrical reference sensors and that their response times are comparable.
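For reference, a low-finesse fiber Fabry-Perot cavity is usually modeled with the two-beam interference relation below (a standard approximation, not taken from the paper); blast pressure deflects the diaphragm, changing the cavity length L and hence the fringe phase.

\[ I(\lambda) \approx I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\!\left(\frac{4\pi n L}{\lambda} + \varphi_0\right) \]

Here n is the cavity index (≈ 1 for an air gap) and I_1, I_2 are the two reflected intensities; demodulating the phase recovers the time-resolved pressure.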
Point-of-care instrument for monitoring tissue health during skin graft repair
R. S. Gurjar, M. Seetamraju, J. Zhang, et al.
We have developed the necessary theoretical framework and the basic instrument design parameters to enable mapping of subsurface blood dynamics and tissue oxygenation in patients undergoing skin graft procedures. This analysis forms the basis for developing a simple patch geometry, which can be used to map blood flow velocity and tissue oxygenation as a function of depth in subsurface tissue by diffuse optical techniques. Keywords: skin graft, diffuse correlation analysis, oxygen saturation.
Disaster Response and Situational Awareness
Automated classification of single airborne particles from two-dimension, angle-resolved optical scattering (TAOS) patterns
Giovanni F. Crosta, Yong-Le Pan, Richard K. Chang
Two-dimension, angle-resolved optical scattering (TAOS) is an experimental technique by which patterns of LASER light intensity scattered by single (micrometer or sub-micrometer sized) airborne particles are collected. In the past 10 years TAOS instrumentation has evolved from laboratory prototypes to field-deployable equipment; patterns are collected by the thousands during indoor or outdoor sampling in short times. Although comparison between experimental and computed scattering patterns has been carried out extensively, there is no satisfactory way to relate a given pattern to the particle it comes from. This paper reports about the ongoing development and implementation of a method which is aimed at classifying patterns, rather than identifying original particles. A machine learning algorithm includes the extraction of morphological features and their multivariate statistical analysis. A classifier is trained and validated in a supervised mode, by relying on patterns from known materials. Then the tuned classifier is applied to the recognition of patterns of unknown origin.
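The train/validate/apply flow described above can be outlined with a generic supervised pipeline; the features, labels, and classifier choice below are placeholders, not the authors' implementation.

# Sketch: supervised training on patterns from known materials, then recognition of
# patterns of unknown origin (synthetic feature vectors stand in for TAOS morphology features).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 40))   # morphological features extracted from known patterns
y_train = rng.integers(0, 3, 300)      # material / particle-class labels

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
print("CV accuracy:", cross_val_score(clf, X_train, y_train, cv=5).mean())
clf.fit(X_train, y_train)

X_unknown = rng.normal(size=(5, 40))   # features from patterns of unknown origin
print(clf.predict(X_unknown))          # assigned pattern classes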
Beyond command and control: USSOUTHCOM's use of social networking to 'connect and collaborate' during Haiti relief operations
Ricardo Arias
On 12 January 2010, a magnitude 7.0 earthquake devastated Haiti killing over 230,000 unsuspecting victims, injuring tens of thousands more and displacing over 1.1 million people. The physical damage was so severe that over 50 percent of buildings in and near the affected areas were completely destroyed or damaged. After struggling for decades with adversity, and besieged by a myriad of social, economic and political challenges, Haiti, its government, and its people were by most accounts already in a state of crisis. The earthquake's devastation and its aftermath shocked the world and prompted a global response. Over 800 institutions and organizations representing the whole of society - governments and their militaries, international organizations, nongovernmental organizations, public institutions, academia, corporations, and private citizens - mobilized to provide aid and relief. However, coordinating and managing their activities seemed a daunting, if not impossible, task. How could a global response achieve "unity of effort" when "unity of command" was not feasible? To provide a solution, US Southern Command (USSOUTHCOM) looked beyond traditional Command and Control systems for collaboration with non-traditional partners and implemented the All Partners Access Network (APAN) in order to "Connect and Collaborate."
Using social media to communicate during crises: an analytic methodology
The Emerging Media Integration Team at the Department of the Navy Office of Information (CHINFO) has recently put together a Navy Command Social Media Handbook designed to provide information needed to safely and effectively use social media. While not intended to be a comprehensive guide on command use of social media or to take the place of official policy, the Handbook provides a useful guide for navigating a dynamic communications environment. Social media are changing the way information is diffused and decisions are made, especially for Humanitarian Assistance missions when there is increased emphasis on Navy commands to share critical information with other Navy command sites, government, and official NGO (nongovernmental organization) sites like the American Red Cross. In order to effectively use social media to support such missions, the Handbook suggests creating a centralized location to funnel information. This suggests that as the community of interest (COI) grows during a crisis, it will be important to ensure that information is shared with appropriate organizations for different aspects of the mission such as evacuation procedures, hospital sites, location of seaports and airports, and other topics relevant to the mission. For example, in the first 14 days of the U.S. Southern Command's Haiti HA/DR (Humanitarian Assistance/Disaster Relief) mission, the COI grew to over 1,900 users. In addition, operational conditions vary considerably among incidents, and coordination between different groups is often set up in an ad hoc manner. What is needed is a methodology that will help to find appropriate people with whom to share information for particular aspects of a mission during a wide range of events related to the mission. CNA has developed such a methodology and we would like to test it in a small scale lab experiment.
Oil Spill (DHW) and Ocean Monitoring I: Joint Session with Conference 8030
Operational mapping of the DWH deep subsurface dispersed oil
Harvey E. Seim, Richard Crout, Glen Rice
Mapping of the deep dispersed oil feature from the blowout of the MC252 wellhead was organized by the subsurface mapping unit within the Unified Area Command starting in early August, 2010. The operational process employed and the challenge presented by the response situation are reviewed. Colored dissolved organic matter fluorescence, used to establish existence of the subsurface oil prior to this time, had largely fallen below background levels for the sensors by this time. Dissolved oxygen (DO), deficits in which were assumed to be related to consumption of oil by microbes, was the only routinely observed variable in vertical profiles that displayed a persistent and obvious anomaly. The DO anomaly was therefore used to identify the presence and magnitude of the dispersed oil impact. An adaptive sampling plan employing daily review of DO profiles to provide vessel guidance was established and permitted a coarse mapping of the feature within 4 weeks. The DO anomaly extended from the wellhead to the WSW for more than 350 km, bounded to the north by the upper slope (approximately 1000 m isobath), with a cross-slope extent of 60-100 km, and was also present to the ENE of the wellhead out to 60 km.
Oil Spill (DHW) and Ocean Monitoring II: Joint Session with Conference 8030
Making sense of ocean sensing: the Gulf of Mexico Coastal Ocean Observing System links observations to applications
Christina Simoniello, Ann E. Jochens, Matthew K. Howard, et al.
The Gulf of Mexico Coastal Ocean Observing System Regional Association (GCOOS-RA) works to enhance our ability to collect, deliver and use ocean information. The GCOOS-RA Education and Outreach Council works to bring together industry, governments, academia, formal and informal educators, and the public to assess regional needs for coastal ocean information, foster cooperation, and increase utility of the data. Examples of data products in varying stages of development are described, including web pages for recreational boaters and fishermen, novel visualizations of storm surge, public exhibits focused on five Gulf of Mexico Priority Issues defined by the Gulf of Mexico Alliance, a Harmful Algae Bloom warning system, the Basic Observation Buoy project designed to engage citizen scientists in ocean monitoring activities, and the GCOOS Data Portal, instrumental in Deepwater Horizon mitigation efforts.
Building interoperable data systems in the Gulf of Mexico: a case study
Matthew K. Howard
Data collection in oceanography is undergoing a paradigm shift. We are moving from month-long shipboard surveys and mooring deployments with annual data recoveries to persistent adaptive observatories using networked, distributed and, sometimes, autonomous sensor systems returning data in near real-time. Real-time data have value that delayed-mode data do not (e.g., for rapid response to reduce impacts to ecosystems from oil spills). The challenge is automating the conversion and integration of sensor data to an analysis-ready state. The U.S. Integrated Ocean Observing System data management group identified the core elements, including data discovery, access, and transport, that are required to build interoperable systems. In practice, these require adoption of standards for vocabularies, data models, and Web Services. The Gulf of Mexico Coastal Ocean Observation System (GCOOS) began as a regional collaboration of eleven nonfederal sub-regional observatories whose data systems had evolved independently and were not interoperable. Grants allowed us to deploy a service-oriented architecture consisting of OPeNDAP transports and standards-based Open Geospatial Consortium Sensor Observation Service web interfaces with Observation and Measurement encodings in each of the observatories. We constructed and deployed a common vocabulary based on the NetCDF Climate and Forecast Metadata Conventions and Climate and Forecast standard names. The observatories host XML files, listing their active sensors, that are compiled into catalogs of available assets. Our regional data portal aggregates data from the observatories and constructs data products for stakeholder groups. These capabilities were available during and following the Deepwater Horizon oil spill. The GCOOS-built capability for interoperability made the data and products readily available to the incident command centers in hours instead of months.
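As a small illustration of the conventions mentioned above, the snippet below writes one observation variable with CF-style metadata using the netCDF4 library; the file name, variable, and values are invented for the example and do not reflect the actual GCOOS schema.

# Sketch: a CF-style NetCDF record whose standard_name supports cross-observatory discovery.
import numpy as np
from netCDF4 import Dataset

ds = Dataset("station_obs.nc", "w")
ds.Conventions = "CF-1.6"
ds.createDimension("time", None)                     # unlimited record dimension

time = ds.createVariable("time", "f8", ("time",))
time.standard_name = "time"
time.units = "seconds since 1970-01-01 00:00:00 UTC"

temp = ds.createVariable("sea_water_temperature", "f4", ("time",))
temp.standard_name = "sea_water_temperature"         # CF standard name
temp.units = "degree_Celsius"

time[0:2] = np.array([0.0, 3600.0])
temp[0:2] = np.array([28.4, 28.6])
ds.close()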
Developing technologies for regional ocean observing systems
Jan R. van Smirren, Robert I. Smith, Xiaorui Guan
The Gulf of Mexico Coastal Ocean Observing System Regional Association (GCOOS-RA) takes a continuing and proactive role in in-situ monitoring and characterization of the marine environment. In many ways the Gulf is the ideal integrated ocean observing environment, its complex and extreme meteorological and oceanic conditions make it an ideal test bed for characterization of such technologies. This paper identifies some of the more useful techniques that have been adopted in understanding the Gulf. We also identify approaches, as yet untried, that could provide vital data for operational support and the provision of data for initialization, assimilation, and verification of ocean forecast models.
Poster Session on Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring: Global Health
Amplification-free point of care immunosensor for detecting type V collagen at a concentration level of ng/ml
Pei-Yu Chung, Evelyn R. Bracho-Sanchez, Peng Jiang, et al.
Point-of-care testing (POCT) is applicable in the immediate vicinity of the patient, where timely diagnostic or prognostic information can help doctors decide on subsequent treatment. Among the types of POCT developed, gold nanoparticle-based lateral flow strip technology provides advantages such as simple operation, cost-effectiveness, and a user-friendly platform. Therefore, this type of POCT is most likely to be used in battlefields and developing countries. However, conventional lateral flow strips suffer from poor detection limits. Although enzyme-linked amplification has been demonstrated to improve the detection limit and sensitivity by producing stronger visible lines or by permitting electrochemical analytical instrumentation, the enzyme labels have the potential to interfere with other enzymes in body fluids. To eliminate this limitation, we developed an amplification-free gold nanoparticle-based immunosensor for detecting collagen type V, which is produced or released abnormally during rejection of lung transplants and after sulfur mustard exposure. By using a suitable blocking protein to stabilize the gold nanoparticles used as the reporter probe, a low detection limit at the ng/ml level was achieved. This strategy is a promising platform for clinical POCT, with potential applications in military or disaster response.
Poster Session on Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring: Environmental Monitoring
Research of soil moisture retrieval in arid region on the moistureshed scale
Soil moisture is an indispensable parameter of the water, heat, and carbon cycle processes in the earth surface system, and it plays a key role in the formation of run-off in arid areas. The retrieval of regional-scale soil moisture is significant for monitoring crop growth and drought in arid regions and for modeling global climatic and dynamic surface processes. The use of multi-source remote sensing data in soil moisture retrieval can improve the accuracy of the inversion and generally outperforms the use of a single remote sensing dataset, because the data acquired by different remote sensors provide complementary information about soil moisture. The joint inversion of multi-source remotely sensed data is a cutting-edge technique for soil moisture retrieval. This study tries to optimize and adjust existing inversion models available for different land cover types. A joint-inversion scheme will be designed and used to retrieve soil moisture for different vegetation types in the arid area using MODIS and AMSR-E remote sensing data. A downscaling strategy and field verification will be used to analyze the accuracy, uncertainty, and sensitivity of the inversion models. The generality and regional applicability of the model will also be examined to explore the possibility of using the model for large-scale, dynamic monitoring of soil moisture.
Aerosol sensing technologies in the mining industry
Samuel J. Janisko, James D. Noll, Emanuele E. Cauda
Recent health, safety and environmental regulations are causing an increased demand for monitoring of aerosols in the mining industry. Of particular concern are airborne concentrations of combustible and toxic rock dusts as well as particulate matter generated from diesel engines in underground mines. In response, the National Institute for Occupational Safety and Health (NIOSH) has been evaluating a number of real time sensing technologies for potential use in underground mines. In particular, extensive evaluation has been done on filter-based light extinction using elemental carbon (EC) as a surrogate measurement of total diesel particulate matter (DPM) mass concentration as well as mechanical tapered element oscillating microbalance (TEOM) technology for measurement of both DPM and rock dust mass concentrations. Although these technologies are promising in their ability to accurately measure mine aerosols for their respective applications, there are opportunities for design improvements or alternative technologies that may significantly enhance the monitoring of mine aerosols. Such alterations can lead to increases in sensitivity or a reduction in the size and cost of these devices. This paper provides a brief overview of current practices and presents results of NIOSH research in this area. It concludes with a short discussion of future directions in mine aerosol sensing research.
A statistical method to correct radiometric data measured by AVHRR onboard the National Oceanic and Atmospheric Administration (NOAA) Polar Orbiting Environmental Satellites (POES)
Md. Z. Rahman, Leonid Roytman, Abdel Hamid Kadik
This paper applies a statistical technique to correct radiometric data measured by the Advanced Very High Resolution Radiometer (AVHRR) onboard the National Oceanic and Atmospheric Administration (NOAA) Polar Orbiting Environmental Satellites (POES). The paper studies Normalized Difference Vegetation Index (NDVI) stability in the NOAA/NESDIS Global Vegetation Index (GVI) data for the period 1982-2003. AVHRR weekly data for the five NOAA afternoon satellites NOAA-7, NOAA-9, NOAA-11, NOAA-14, and NOAA-16 are used for the China dataset, since it includes a wide variety of different ecosystems represented globally. The GVI has found wide use for studying and monitoring the land surface and atmosphere, and recently for analyzing climate and environmental changes. Unfortunately, the POES AVHRR data, though informative, cannot be used directly in climate change studies because of the orbital drift of the NOAA satellites over their lifetimes. This orbital drift introduces errors into the AVHRR datasets for some satellites. To correct this error, this paper implements the Empirical Distribution Function (EDF), a statistical technique, to generate error-free long-term time series for the GVI datasets. The same methodology can be used globally to create a vegetation index and to improve the climatology.
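A minimal sketch of the EDF-based correction, interpreted here as quantile matching of a drift-affected record against a reference distribution, is shown below; the arrays are synthetic and the exact procedure in the paper may differ.

# Sketch: map each drift-affected NDVI value to the reference quantile of equal empirical rank.
import numpy as np

def edf_match(values, reference):
    ranks = np.searchsorted(np.sort(values), values, side="right") / len(values)
    return np.quantile(reference, ranks)

rng = np.random.default_rng(0)
reference_ndvi = rng.beta(4, 2, 1000)           # stand-in for a stable reference period
drifting_ndvi = rng.beta(3, 2, 1000) - 0.05     # stand-in for a drift-affected satellite record
corrected_ndvi = edf_match(drifting_ndvi, reference_ndvi)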
Low-power wireless trace gas sensing network
Clinton J. Smith, Stephen So, Amir Khan, et al.
A basic wireless laser-spectroscopic sensor network for monitoring of trace gases is presented. The prototype low-power sensor nodes targeting carbon dioxide are based on tunable diode laser absorption spectroscopy and operate using a 2 μm VCSEL and a 3.5 m Herriott multi-pass cell. The sensor system, which employs real-time wireless communications, is controlled by custom electronics and can be operated autonomously. The sensor core electronics performs molecular concentration measurements using wavelength modulation spectroscopy with active laser frequency locking to the target transition. The operating sensor node consumes approximately 300 mW of electrical power and can work autonomously for up to 100 hours when powered by a 10.5 Ah lithium-ion polymer battery. Environmentally controlled long-term (12 hour) stability tests show a sensor-node detection limit of ~0.286 ppm at 1 second integration time, and the ultimate minimum detectable fractional absorption of 1.5×10^-6 is obtained after 3500 seconds of averaging. The sensor node performance results and preliminary tests in a basic network configuration are discussed.
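The gain from 1 s to 3500 s of averaging is consistent with the usual white-noise behaviour, in which the detection limit improves roughly as

\[ \sigma(\tau) \approx \frac{\sigma(1\,\mathrm{s})}{\sqrt{\tau}} \]

until drift dominates and sets the optimum averaging time; this scaling is a general rule of thumb, not a figure quoted in the abstract.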
Simultaneous detection of atmospheric nitrous oxide and carbon monoxide using a quantum cascade laser
Amir Khan, Kang Sun, David J. Miller, et al.
We describe a non-intrusive, open-path, fast-response compact sensor for simultaneous measurements of nitrous oxide (N2O) and carbon monoxide (CO), primarily designed for UAV applications. N2O is the third most important anthropogenic greenhouse gas, but the spatial and temporal distributions of N2O emissions are poorly quantified. CO, on the other hand, is an important tracer to distinguish between fossil fuel and biogenic sources. We use a 4.5 micron thermoelectrically cooled, distributed-feedback, continuous-wave quantum cascade laser as a mid-infrared radiation source to scan CO and N2O transitions centered at 4538.9 nm and 4539.8 nm, respectively. Detection is achieved by a thermoelectrically (TE) cooled 5 micron indium antimonide (InSb) infrared detector. For the first time in this application, a compact cylindrical cell with a spot-pattern configuration is used to minimize the sensor size while providing a pathlength of 10 meters (2.54 cm radius mirrors, 25 cm basepath). Wavelength modulation spectroscopy is employed to achieve high-sensitivity detection. A detection limit of 10^-5 fractional absorbance is achieved at a 10 s averaging time, equivalent to less than 1 ppbv of N2O and 2 ppbv of CO out of 320 ppbv and 200 ppbv ambient levels, respectively. In summary, we report a cryogen-free, consumable-free sensor that operates with tens of watts of electrical power and is packaged in a small, shoe-box-sized enclosure, which is ideal for UAV or airborne applications.
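To relate the quoted fractional absorbance to a mixing ratio, the optically thin Beer-Lambert limit is the usual starting point (a generic relation, not the authors' calibration):

\[ \frac{\Delta I}{I_0} \approx \sigma(\nu)\, n_{\mathrm{air}}\, x\, L \quad\Longrightarrow\quad x_{\min} \approx \frac{(\Delta I/I_0)_{\min}}{\sigma_{\mathrm{peak}}\, n_{\mathrm{air}}\, L} \]

where σ(ν) is the absorption cross section, n_air the air number density, x the mixing ratio, and L = 10 m the pathlength; this is how a 10^-5 minimum fractional absorbance translates into ppbv-level detection limits.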
Novel handheld x-ray fluorescence spectrometer for routine testing for the presence of lead
Noa M. Rensing, Timothy C. Tiernan, Michael R. Squillante
RMD is developing a safe, inexpensive, and easy-to-operate lead detector for retailers and consumers that can reliably detect dangerous levels of lead in toys and other household products. Lead and its compounds are rated among the chemicals posing the greatest threat to human health. However, widespread testing for environmental lead is rarely undertaken until lead poisoning has already been diagnosed. The problem is not the accuracy or sensitivity of existing lead detection technology, but rather the high expense, safety, and licensing barriers of available test equipment. An inexpensive and easy-to-use lead detector would enable the identification of highly contaminated objects and areas and allow for timely and cost-effective remediation. The military has similar needs for testing for lead and other heavy elements such as mercury, primarily in the decontamination of former military properties prior to their return to civilian use. RMD's research and development efforts are based on advanced solid-state detectors combined with recently patented lead detection techniques to develop a consumer-oriented lead detector that will be widely available and easy and inexpensive to use. These efforts will result in an instrument that offers (1) high sensitivity, to identify objects containing dangerous amounts of lead, (2) low cost, to encourage widespread testing by consumers and other end users, and (3) convenient operation requiring no training or licensing. In contrast, current handheld x-ray fluorescence spectrometers either use a radioactive source, requiring licensing and operator training, or use an electronic x-ray source that limits their sensitivity to surface lead.
Environmental monitoring of brominated flame retardants
Brominated flame retardants (BFRs) are synthetic organobromine compounds that inhibit ignition and combustion processes. Because of their great ability to retard fire and save life and property, they have been used extensively in many products such as TVs, computers, foams, and plastics. The five major classes of BFRs are tetrabromobisphenol-A (TBBPA), hexabromocyclododecane (HBCD), pentabromodiphenyl ether, octabromodiphenyl ether, and decabromodiphenyl ether; the last three are commonly called PBDEs. BDE-85 and BDE-209 are two prominent congeners of the PBDEs, and this study reports the adverse effects of these congeners in rodents. Exposure of rat sciatic nerves to 5 μg/mL and 20 μg/mL of BDE-85 and BDE-209, respectively, led to significant, concentration-dependent reductions in nerve conduction function. Glucose absorption in rat intestinal segments exposed to 5 μg/mL of BDE-85 or BDE-209 was significantly reduced for both compounds tested. Lastly, mice exposed to 0.25 mg/kg body weight for four days showed a disruption of the oxidant-antioxidant equilibrium. The liver and brain showed increases in the levels of lipid hydroperoxides, indicating oxidative stress. Moreover, the protective enzymes superoxide dismutase (SOD), glutathione peroxidase (GPx), catalase, and glutathione S-transferase (GST) showed tissue-specific alterations, indicating the induction of damaging oxidative stress and the onset of lipid peroxidation in exposed animals. The results indicate that environmental monitoring of PBDEs is essential, because levels as low as 5 μg/mL and 0.25 mg/kg body weight were able to impair function in rodents.
Face Biometrics
Super-resolution benefit for face recognition
Vast amounts of video footage are being continuously acquired by surveillance systems on private premises, commercial properties, government compounds, and military installations. Facial recognition systems have the potential to identify suspicious individuals on law enforcement watchlists, but accuracy is severely hampered by the low resolution of typical surveillance footage and the far distance of suspects from the cameras. To improve accuracy, super-resolution can enhance suspect details by utilizing a sequence of low resolution frames from the surveillance footage to reconstruct a higher resolution image for input into the facial recognition system. This work measures the improvement of face recognition with super-resolution in a realistic surveillance scenario. Low resolution and super-resolved query sets are generated using a video database at different eye-to-eye distances corresponding to different distances of subjects from the camera. Performance of a face recognition algorithm using the super-resolved and baseline query sets was calculated by matching against galleries consisting of frontal mug shots. The results show that super-resolution improves performance significantly at the examined mid and close ranges.
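A toy shift-and-add reconstruction conveys the basic idea of multi-frame super-resolution; it is only a generic illustration (integer-pixel registration, simple averaging), not the algorithm evaluated in this work.

# Sketch: upsample low-resolution frames, register them to the first frame, and average.
import numpy as np
from scipy.ndimage import zoom, fourier_shift

def estimate_shift(ref, img):
    """Integer-pixel shift estimate via FFT cross-correlation (wrap-around corrected)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]

def super_resolve(frames, factor=2):
    upsampled = [zoom(f, factor, order=3) for f in frames]
    ref = upsampled[0]
    aligned = [ref]
    for f in upsampled[1:]:
        shift = estimate_shift(ref, f)
        aligned.append(np.fft.ifft2(fourier_shift(np.fft.fft2(f), shift)).real)
    return np.mean(aligned, axis=0)                 # fused higher-resolution estimate

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(4)]   # stand-ins for low-resolution face crops
hi_res = super_resolve(frames)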
A quantitative comparison of 3D face databases for 3D face recognition
Dirk Smeets, Jeroen Hermans, Dirk Vandermeulen, et al.
During the last decade, research in face recognition has shifted from 2D to 3D face representations. The need for 3D face data has resulted in the advent of 3D databases. In this paper, we first give an overview of publicly available 3D face databases containing expression variations, since these variations are an important challenge in today's research. The existence of many databases demands a quantitative comparison of these databases in order to compare more objectively the performances of the various methods available in the literature. The ICP algorithm is used as the baseline algorithm for this quantitative comparison in the identification and verification scenarios, allowing the databases to be ordered according to their inherent difficulty. Performance analysis using the rank-1 recognition rate for identification and the equal error rate for verification reveals that the FRGC v2 database can be considered the most challenging. Therefore, we recommend using this database as the reference database for evaluating (expression-invariant) 3D face recognition algorithms. As a second contribution, the main factors that influence the performance of the baseline technique are determined and, where possible, quantified. It appears that (1) pose variations away from frontality degrade performance, (2) expression types affect results, (3) more intense expressions degrade recognition, (4) an increasing number of expressions decreases performance, and (5) a larger number of gallery subjects degrades performance.
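The two reported metrics can be computed from a probe-by-gallery similarity matrix as sketched below; the scores are synthetic and the genuine pairs are assumed to lie on the diagonal.

# Sketch: rank-1 identification rate and verification equal error rate from a score matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 100
scores = rng.normal(0.3, 0.1, (n, n))                 # probe x gallery similarity scores
scores[np.arange(n), np.arange(n)] += 0.4             # genuine matches score higher

rank1 = np.mean(np.argmax(scores, axis=1) == np.arange(n))

genuine = scores[np.arange(n), np.arange(n)]
impostor = scores[~np.eye(n, dtype=bool)]
thresholds = np.sort(np.concatenate([genuine, impostor]))
far = np.array([np.mean(impostor >= t) for t in thresholds])   # false accept rate
frr = np.array([np.mean(genuine < t) for t in thresholds])     # false reject rate
eer = 0.5 * (far + frr)[np.argmin(np.abs(far - frr))]
print(f"rank-1 = {rank1:.2f}, EER = {eer:.3f}")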
QUEST hierarchy for hyperspectral face recognition
David M. Ryer, Trevor J. Bihl, Kenneth W. Bauer, et al.
A face recognition methodology employing an efficient fusion hierarchy for hyperspectral imagery (HSI) is presented. A Matlab-based graphical user interface (GUI) is developed to aid processing, track performance, and display results. The incorporation of adaptive feedback loops enhances performance through the reduction of candidate subjects in the gallery and the injection of additional probe images during the matching process. Algorithmic results and performance improvements are presented as spatial, spectral, and temporal effects are exploited in this Qualia Exploitation of Sensor Technology (QUEST) motivated methodology.
Fingerprint and Voice Biometrics
Adding localization information in a fingerprint binary feature vector representation
Julien Bringer, Vincent Despiegel, Mélanie Favre
At BTAS'10, a new framework was described for transforming a fingerprint minutiae template into a binary feature vector of fixed length. A fingerprint is characterized by its similarity to a fixed set of representative local minutiae vicinities. This representative-based approach yields a fixed-length binary representation and, because it is local, can cope with local distortions that may occur between two acquisitions. We extend this construction to incorporate additional information in the binary vector, in particular the localization of the vicinities. We explore the use of position and orientation information. The performance improvement is promising for use in fast identification algorithms and in privacy-protection algorithms.
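A schematic sketch of the fixed-length binarization idea is given below: each bit records whether some local vicinity of the fingerprint is sufficiently similar to one of N representative vicinities. The similarity function, threshold, and data layout are placeholders; the actual vicinity comparison and the proposed localization extension are defined in the paper itself.

```python
# Schematic sketch (not the authors' implementation) of mapping a variable
# number of local minutiae vicinities onto a fixed-length binary vector by
# comparison against N representative vicinities.
import numpy as np

def binary_feature_vector(vicinities, representatives, similarity, threshold=0.8):
    """vicinities / representatives: application-defined vicinity objects;
    similarity: placeholder function returning a score in [0, 1];
    threshold: illustrative value, not taken from the paper."""
    bits = np.zeros(len(representatives), dtype=np.uint8)
    for k, rep in enumerate(representatives):
        best = max(similarity(v, rep) for v in vicinities)  # best local match
        bits[k] = 1 if best >= threshold else 0             # one bit per representative
    return bits
```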
Speech biometric mapping for key binding cryptosystem
K. Inthavisas, D. Lopresti
We propose a new scheme to transform speech biometric measurements (feature vectors) into a binary string that can be combined with a pseudo-random key for cryptographic purposes. We utilize Dynamic Time Warping (DTW) in our scheme. The challenge of using DTW in a cryptosystem is that the template must be usable for creating a warping function, yet must not allow an attacker to derive the cryptographic key. In this work, we propose a hardened template to address this problem. We evaluate our scheme on two speech datasets and compare it with DTW, VQ, and GMM speaker verification. The experimental results show that the proposed scheme outperforms VQ and GMM, and is only slightly degraded compared with DTW speaker verification. The EERs against attackers utilizing the hardened template are 0% for both datasets.
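The alignment step underlying the scheme is standard dynamic time warping; a minimal sketch is shown below for two sequences of speech feature vectors. The key-binding and template-hardening components of the proposed cryptosystem are not represented here.

```python
# Minimal dynamic time warping (DTW) distance between two sequences of
# feature vectors (e.g., per-frame cepstral coefficients). Plain textbook
# recursion; no key binding or template hardening is shown.
import numpy as np

def dtw_distance(a, b):
    """a, b: 2-D arrays of shape (frames, dims)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])          # local frame distance
            D[i, j] = cost + min(D[i - 1, j],                   # insertion
                                 D[i, j - 1],                   # deletion
                                 D[i - 1, j - 1])               # match
    return D[n, m]
```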
C-BET evaluation of voice biometrics
Dmitry O. Gorodnichy, Michael Thieme, David Bissessar, et al.
C-BET is the Comprehensive Biometrics Evaluation Toolkit developed by CBSA to analyze the suitability of biometric systems for fully automated border/access control applications. Following the multiorder score analysis and the threshold-validated analysis defined within the C-BET framework, the paper presents the results of a C-BET evaluation of a commercial voice biometric product. In addition to the error tradeoff and ranking curves traditionally reported elsewhere, the paper presents results for the newly introduced performance metrics: threshold-validated recognition ranking and non-confident decisions due to multiple threshold-validated scores. The results are obtained from over a million voice audio clip comparisons. Good biometric evaluation practices offered within the C-BET framework are also presented.
Iris Biometrics
Impact of out-of-focus blur on iris recognition
Nadezhda Sazonova, Stephanie Schuckers, Peter Johnson, et al.
Iris recognition has expanded from controlled settings to uncontrolled settings (on the move, at a distance) where blur is more likely to be present in the images, so more research is needed to quantify the impact of blur on iris recognition. In this paper we study the effect of out-of-focus blur on iris recognition performance using images in which the blur was produced at acquisition. A key aspect of this study is that we create a range of blur levels by changing the camera focus during acquisition. We quantify the resulting out-of-focus blur with a measure based on the Laplacian of Gaussian operator and compare it to the gold standard, the modulation transfer function (MTF) of a calibrated black/white chart. The sharpness measure uses unsegmented iris images from a video sequence with changing focus and offers a good approximation of the standard MTF. We examined the effect of nine blur levels on iris recognition performance. Our results show that for moderately blurry images (sharpness of at least 50%) the drop in performance does not exceed 5% relative to the baseline (100% sharpness).
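A simple focus measure in the spirit of the one described above can be built from the energy of the Laplacian-of-Gaussian response, as sketched below; the Gaussian scale and the normalization against an in-focus reference frame are assumptions rather than the authors' exact parameters.

```python
# Illustrative Laplacian-of-Gaussian (LoG) sharpness score for an iris frame;
# sigma and the percentage normalization are assumptions, not the paper's
# calibrated settings.
import numpy as np
from scipy import ndimage

def log_sharpness(image, sigma=2.0):
    """Mean squared LoG response; higher energy indicates a sharper image."""
    log = ndimage.gaussian_laplace(np.asarray(image, dtype=float), sigma=sigma)
    return float(np.mean(log ** 2))

def relative_sharpness(image, reference):
    """Sharpness expressed as a percentage of an in-focus reference frame."""
    return 100.0 * log_sharpness(image) / log_sharpness(reference)
```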
A simple shape prior model for iris image segmentation
Daniel A. Bishop, Anthony Yezzi Jr.
In order to make biometric systems faster and more user-friendly, lower-quality images must be accepted. A major hurdle in this task is accurate segmentation of the iris boundaries in such images. Circle fitting is commonly used to approximate the inner (pupillary) and outer (limbic) boundaries of the iris, but this assumption does not hold for off-axis or otherwise non-circular boundaries. In this paper we present a novel, foundational method for elliptical segmentation of off-axis iris images. The method uses active contours with constrained flow to achieve a simplified form of shape-prior active contours: a region-based contour evolution is calculated and projected onto a properly chosen set of vectors that confines it to a class of shapes, in this case ellipses. This regularizes the contour, simplifying the curve evolution and preventing the development of irregularities that present challenges in iris segmentation. The proposed method is tested on images from the UBIRIS v.1 and CASIA-IrisV3 datasets, with both near-ideal and off-axis images. Additional testing has been performed using the WVU Off Axis/Angle Iris Dataset, Release 1. By avoiding many of the assumptions commonly used in iris segmentation methods, the proposed method is able to fit elliptical boundaries accurately to off-axis images.
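As a rough illustration of evolving a boundary that is confined to the ellipse family, the sketch below reduces a Chan-Vese-style two-region energy directly over the five ellipse parameters by crude numerical descent. This is not the authors' projection-based formulation, only an indication of how a region-based criterion can drive an ellipse-constrained boundary.

```python
# Illustrative ellipse-constrained, region-based boundary fit (not the paper's
# projection method). The ellipse is parameterized as (cy, cx, a, b, theta)
# and a two-region piecewise-constant energy is reduced by sign-based
# coordinate descent using finite-difference slopes.
import numpy as np

def ellipse_mask(shape, params):
    cy, cx, a, b, theta = params
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    x, y = xx - cx, yy - cy
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

def region_energy(image, params):
    inside = ellipse_mask(image.shape, params)
    if inside.sum() == 0 or (~inside).sum() == 0:
        return np.inf                          # degenerate ellipse
    c_in, c_out = image[inside].mean(), image[~inside].mean()
    return ((image[inside] - c_in) ** 2).sum() + ((image[~inside] - c_out) ** 2).sum()

def fit_ellipse(image, params, steps=200):
    image = np.asarray(image, dtype=float)
    params = np.asarray(params, dtype=float)
    eps = np.array([1.0, 1.0, 1.0, 1.0, 0.05])   # finite-difference step per parameter
    rate = np.array([1.0, 1.0, 1.0, 1.0, 0.02])  # update size per parameter
    for _ in range(steps):
        for k in range(5):
            d = np.zeros(5)
            d[k] = eps[k]
            slope = region_energy(image, params + d) - region_energy(image, params - d)
            params[k] -= rate[k] * np.sign(slope)  # move against the energy slope
    return params
```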
Security enhanced BioEncoding for protecting iris codes
Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings, such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. In addition, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although BioEncoding has been shown to be sufficiently secure against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. First, the resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Second, we show that although the cancelable transformation employed in BioEncoding may be non-invertible for a single protected template, the original iris code can be recovered by correlating several templates created from the same iris but used in different applications. Accordingly, we propose an important modification to the BioEncoding transformation process that hinders attackers from exploiting this type of attack. The effectiveness of the suggested modification is validated, and its impact on matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.
Ocular Biometrics
Challenging ocular image recognition
V. Paúl Pauca, Michael Forkin, Xiao Xu, et al.
Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages to extending the imaged area beyond the iris, but key issues must also be addressed, such as the size of the ocular region, the factors affecting performance, and appropriate corpora for studying these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed in which iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic gain in recognition performance when additional features are considered in the presence of poor-quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.
Segmentation-free ocular detection and recognition
Iris recognition is a well-known technique for identifying individuals. However, it requires high-resolution images so that the iris can be segmented automatically, and in some scenarios obtaining the required resolution may be difficult. In this paper, we investigate the recognition of ocular regions using correlation filters without segmenting the iris. The method uses the whole eye region and surrounding areas, i.e., the ocular region, for identification. In our experiments we use the recently developed Quadratic Correlation Filter and show that, at low resolutions, segmentation-free ocular recognition can succeed where iris segmentation fails.
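The matching principle can be illustrated with a plain frequency-domain correlation and a peak-to-sidelobe score, as below; the design of the Quadratic Correlation Filter itself is not reproduced, and the filter array is assumed to have been trained elsewhere.

```python
# Sketch of correlation-plane matching: correlate a probe ocular image with a
# pre-designed filter in the frequency domain and score the correlation peak
# by its peak-to-sidelobe ratio. The Quadratic Correlation Filter design is
# not shown; `filt` is assumed to be a trained filter of the same size.
import numpy as np

def correlation_score(probe, filt, exclude=5):
    probe = np.asarray(probe, dtype=float)
    filt = np.asarray(filt, dtype=float)
    plane = np.real(np.fft.ifft2(np.fft.fft2(probe) * np.conj(np.fft.fft2(filt))))
    peak = plane.max()
    py, px = np.unravel_index(plane.argmax(), plane.shape)
    mask = np.ones_like(plane, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False   # exclude the peak region
    sidelobe = plane[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```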
LED eye safety considerations in the design of iris capture systems
We have developed a standoff iris biometrics system for improved usability in access-control applications. The system employs an eye illuminator, which is composed of an array of encapsulated near-infrared light emitting diodes (NIRLEDs), which are triggered at the camera frame rate for reduced motion blur and ambient light effects. Neither the standards / recommendations for NIR laser and lamp safety, nor the LED-specific literature address all the specific aspects of LED eye-safety measurement. Therefore, we established exposure limit criteria based on a worst-case scenario combining the following: the CIE/ANSI standard/recommendations for exposure limits; concepts for maximum irradiance level and for strobing from the laser safety standards; and ad-hoc rules minimizing irradiance on the fovea, for handling LED arrays, and for LED mounting density. Although our system was determined as eye safe, future variants may require higher exposure levels and lower safety margins. We therefore discuss system configuration for accurate LED radiometric measurement that will ensure reliable eye-safety evaluation. The considerations and ad hoc rules described in this paper are not, and should not be treated as safety recommendations.