Moore's law has fostered the steady growth of the field of digital image processing, although computational complexity remains a significant bottleneck for many of its applications. At the same time, research in the field of optical image processing has matured, potentially bypassing the limitations of digital approaches and giving rise to new applications. Additionally, from an image acquisition perspective, the rapid convergence of digital imaging devices is driving strong industrial growth in photonics technologies. Photonics-based enablers can already be found in a myriad of imaging and visualization applications such as displays and image sensing, illumination systems, and high-performance light engines - all of which hold major volume positions in the photonics market. Along with the growing interest in emerging multimedia applications, the demand for new photonics enablers is steadily increasing, and new technologies are continuously created to meet these needs.

One example is the use of laser sources in cinema projection systems, enabling high-dynamic-range and high-quality stereoscopic cinema; another is the use of advanced optics and display technology in head-mounted displays. In miniaturizing digital cameras, new challenges emerge when striving for high performance combined with mass-volume production. This requires the design of sophisticated lens elements and new types of imaging optics, optimized image processing pipelines, compact high-performance sensors, etc. In addition, photonics has enabled fully digital media, with accompanying growth in image processing; in content storage, retrieval, and transmission techniques; and in related hardware and software. These new applications all have their specific requirements and pose new challenges for optical design. Finally, we have recently observed the vast emergence of learning-based solutions in imaging, processing, and visualization.

The aim of this conference is to create a joint forum for the research and application communities in the area of optics, photonics, and digital imaging technologies to share expertise, solve present-day application bottlenecks, and propose new application areas. Consequently, this conference has a broad scope, ranging from basic and applied research to the dissemination of existing knowledge.

The conference sessions will address (but are not limited to) the following topics:

  • image sensors (CCD, CMOS, and others like OPD arrays)
  • image acquisition and computational imaging (image reconstruction, phase image restoration, image fusion, high dynamic range imaging, light-field imaging, point-cloud imaging, holographic imaging, compressive sensing)
  • camera optics (imaging lenses, design, flashes, adaptive optics, wafer-level optics, novel lenses, extended depth of focus, etc.)
  • camera systems and characterization (system design, testing, metrics, standards, image processing chains)
  • photonics components and enabling technologies for multimedia (micro-optics, lens arrays, filters, optical interconnects, optical storage)
  • image transformations (wavelet theory, space theory, geometrical transforms, restoration)
  • image analysis (motion estimation, segmentation, object tracking, pattern recognition, classification)
  • learning-based solutions (machine learning, deep learning, explainable learning)
  • image information management (coding, cryptography, watermarking, storage and retrieval systems, resolution enhancement)
  • displays, projectors and applications (augmented and virtual reality glasses, high-dynamic range, stereoscopic, light field, and holographic visualization)
  • optical engines for displays (LED and RGB-laser based engines, holographic modulators)
  • display illumination (light guide solutions, micro-optics, design)
  • interaction between architectures, systems, or devices for optical and digital image processing (also including bioinspired imaging solutions)
  • objective and subjective quality assessment and measurement
  • applications (medical, microscopy, surveillance, security, remote sensing, industrial inspection, entertainment)
  • standardization
    Conference 12138

    Optics, Photonics and Digital Technologies for Imaging Applications VII

    In person: 6 - 7 April 2022 | Salon 1, Niveau/Level 0
    • 1: Learning-based Solutions
    • 2: Image Analysis
    • 3: Image Acquisition and Computational Imaging
    • 4: Applications
    • Posters-Wednesday
    • Hot Topics III
    • 5: Standardization of Plenoptic Coding and Media Security Frameworks
    • 6: Displays and Projections
    Session 1: Learning-based Solutions
    In person: 6 April 2022 • 09:00 - 11:10 CEST | Salon 1, Niveau/Level 0
    Session Chair: David Blinder, Vrije Univ. Brussel (Belgium)
    12138-33
    Author(s): Sepehr Elahi, Bilkent Univ. (Turkey); Can Polat, Bogaziçi Üniv. (Turkey); Omid Safarzadeh, International Bank of Azerbaijan (Azerbaijan, Republic of); Parviz Elahi, Bogaziçi Üniv. (Turkey)
    On demand | Presented live 6 April 2022
    In high-precision laser micromachining, the machining setup's performance must be actively controlled and corrected when needed. One of the most critical control mechanisms for precise machining is controlling the focus position on the workpiece. In our work, we investigate the effects of noise on focal distance detection by simulating the image of a sample at different focal lengths using Fourier optics and then designing, training, and testing a deep learning model to detect the focal distances from the simulated images with varying strengths of added noise. We simulate both input noise, such as noise due to surface roughness, and output noise, such as camera noise, by adding zero-mean Gaussian noise to the source wave and the simulated image, respectively, for different focal distances. Our model is a fusion of a convolutional neural network and a Gaussian process classifier that not only predicts the focal distance of a sample from its noisy image but also offers an uncertainty measure for its prediction. Thus, our model reports low confidence for images that are too noisy, corresponding to cases where the camera resolution is too low or the reflective surface is too rough. Therefore, our model is not only robust to both input and output noise, but it can also determine when an image is too noisy to make meaningful predictions. Lastly, we show that our model can achieve inference speeds of more than 400 Hz on a CPU and thus can be used in real-world noisy experimental setups that need real-time focus detection.
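    The two noise injections described above can be sketched compactly. The following is a minimal NumPy illustration, not the authors' code; the grid size, plane-wave source, and noise levels are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_input_noise(source_wave, sigma):
    """Surface-roughness-like noise: zero-mean Gaussian on the complex source wave."""
    shape = source_wave.shape
    return source_wave + rng.normal(0, sigma, shape) + 1j * rng.normal(0, sigma, shape)

def add_output_noise(image, sigma):
    """Camera-like noise: zero-mean Gaussian on the simulated intensity image."""
    return image + rng.normal(0, sigma, image.shape)

wave = np.ones((256, 256), dtype=complex)                        # idealized source wave
noisy_wave = add_input_noise(wave, sigma=0.05)                   # input noise
image = np.abs(np.fft.fftshift(np.fft.fft2(noisy_wave))) ** 2    # Fourier-optics image
noisy_image = add_output_noise(image / image.max(), sigma=0.01)  # output noise
```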
    12138-34
    Author(s): Can Polat, Gizem Nuran Yapici, Bogaziçi Üniv. (Turkey); Sepehr Elahi, Bilkent Univ. (Turkey); Parviz Elahi, Bogaziçi Üniv. (Turkey)
    On demand | Presented live 6 April 2022
    In high-precision laser material processing, finding the exact position of focus on the material is crucial because an unfocused beam can have at best no effect on the material and, at worst, a destructive effect. This problem is particularly challenging in ultrafast laser material processing due to the non-linear interactions between light and atoms. Many solutions have been proposed, including distance measurement, a cylindrical lens group, and the contrast-detection method. However, these existing techniques are limited by different parameters, such as the resolution of the camera, the detection algorithms, and the performance of the lenses. Furthermore, the experimental equipment needed for the mentioned techniques is costly and/or renders the technique specialized for processing one specific material. Most importantly, these approaches do not offer real-time detection, thus reducing processing speed and production and hence increasing costs. Recently, Si-Jia Xu et al. demonstrated that recording the light reflected from the sample could determine the focal point. They did so by fitting a sum of Gaussians to the intensity of the reflected light and analyzing its properties, including width and height. Although they report a low focus prediction error, their method relies on a particular high-numerical-aperture 40x objective lens, is not real-time, and its precision depends on the deviation from the focus. We present a new approach for real-time focus detection via computer vision and machine learning, specifically convolutional neural networks. In addition to working with an ordinary lens (8 mm focal length, 0.125 NA) and a USB camera, our method has better precision than the previously mentioned work. We achieve a high focus prediction accuracy of 95% when identifying focus distances in {-150,-140,...,140,150} μm (each step 7% of the Rayleigh length) and a high processing speed of 1000+ FPS on a CPU.
    12138-35
    Author(s): José Antonio Lopez Portillo, Univ. Nacional Autónoma de México (Mexico); Iván Casasola, Posgrado en ciencias e Ingeniería en Computacion, Universidad Nacional Autónoma de México (Mexico); Boris Escalante-Ramírez, Jimena Olveres Montiel, Univ. Nacional Autónoma de México (Mexico); Jaime Arriaga, Technische Univ. Delft (Netherlands); Christian Appendini, Univ. Nacional Autónoma de México (Mexico)
    On demand | Presented live 6 April 2022
    Sargassum has affected the Mexican Caribbean coasts since 2015 in atypical amounts, causing economic and ecological problems. Removal once it reaches the coast is complex, since it is not easily separated from the sand; removal damages dune vegetation, and heavy transport compacts the sand and further deteriorates the coastline. It is therefore important to detect sargassum mats and estimate their paths in order to optimize collection efforts in the water. There have been improvements in systems that rely on satellite images to determine areas and possible paths of sargassum, but these methods do not solve the problem near the coastline, where the big mats observed in the deep sea break up into little mats that often do not show up in satellite images. Moreover, the temporal scales of nearshore sargassum dynamics are characterized by finer temporal resolution. This paper focuses on cameras located near the coast of the Puerto Morelos reef lagoon that record images of both the beach and the near-coastal sea. First, we apply time-based preprocessing techniques that allow us to discriminate the moving sargassum mats from the static sea bottom; then, using classic image processing techniques and neural networks, we detect, trace, and estimate the path of a mat toward its place of arrival on the beach. We compared classic algorithms with neural networks; among the algorithms we tested are k-means and random forest for segmentation and dense optical flow for following and estimating the path. This new methodology allows real-time monitoring of the behavior of sargassum close to shore without complex technical support.
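    Dense optical flow, one of the path-estimation tools named above, can be sketched with OpenCV's Farneback method; the synthetic frames and motion threshold below are illustrative stand-ins, not the authors' pipeline:

```python
import cv2
import numpy as np

# Two consecutive frames; a synthetic blob translated by 3 px stands in for
# real frames of a drifting sargassum mat.
prev = np.zeros((240, 320), np.uint8)
cv2.circle(prev, (100, 120), 30, 255, -1)
curr = np.roll(prev, 3, axis=1)

# Farneback dense optical flow: one 2-D displacement vector per pixel
# (positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
moving = magnitude > 1.0   # pixels moving > 1 px/frame: candidate mat pixels
print(f"moving fraction: {moving.mean():.3f}")
```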
    Coffee Break 10:00 - 10:30
    12138-1
    Author(s): Fernando González, Boris Escalante-Ramírez, Jimena Olveres Montiel, José Bargas Díaz, Miguel Serrano, Univ. Nacional Autónoma de México (Mexico)
    On demand | Presented live 6 April 2022
    In this work we first introduce some relevant neuroscience concepts to explain the need for automated identification of functional neurons in epifluorescence microscopy imaging and what is expected from it. We then review related work to identify common problems and main challenges. Next, we describe our proposal, architecture, data, tests, and outcomes. A comparison between this proposal and a previous approach based on digital image processing techniques is made. Afterwards, the results and comparison are discussed, and finally we present our conclusions and plans for future work.
    PC12138-2
    Author(s): Amir Mohammad Ketabchi, Berna Morova, Nima Bavili, Alper Kiraz, Koç Univ. (Turkey)
    In person: 6 April 2022 • 10:50 - 11:10 CEST | Salon 1, Niveau/Level 0
    By eliminating the out-of-focus background, confocal microscopy provides images with a higher resolution and signal-to-noise ratio than wide-field illumination microscopy. In this work, we demonstrate the operation of a line-scanning confocal microscope developed using a digital light projector (DLP) and a CMOS camera with a rolling shutter. In this method, a series of illumination lines is projected onto a sample by the DLP through a focusing objective (50X, NA=0.55), and the reflected light is imaged with the rolling-shutter CMOS camera. Line-scanning confocal imaging is achieved by overlapping the illumination lines with the rolling shutter of the sensor. Significant improvements in image contrast (more than 50%) and minimum feature size (about 17.6%) are obtained using this technique. In addition, the results obtained with line-scanning microscopy are used to train a Convolutional Neural Network (CNN) for deep-learning-enhanced microscopy. For this, a data set containing 300 pairs of simple and line-scanning mode images, obtained by imaging the fibers of a paper tissue, is generated. Significant contrast and resolution improvements are obtained at the output of the CNN, comparable to those obtained with the ground-truth images. This work was supported by TÜBİTAK (Grant No. 118F529).
    12138-3
    Author(s): Bowen Wang, Nanjing Univ. of Science and Technology (China); Yan Zou, Minqi Wang, Nanjing Univ. of Science and Technology (China)
    On demand
    12138-4
    Author(s): Sheng Li, Bowen Wang, Minqi Wang, Xu Zhang, Nanjing Univ. of Science and Technology (China)
    On demand
    Fourier Ptychography is a phase retrieval technique that uses the synthetic aperture concept to recover high-resolution sample images. It has led to great breakthroughs in microscopic fields such as biological cell imaging. However, Fourier Ptychography remains restricted in many macroscopic remote-detection settings, such as sea, land, and air, due to its non-active imaging. In this paper, a fast Fourier Ptychography technique based on deep learning is proposed. Firstly, in contrast to previous macroscopic scanning approaches, a 3 × 3 camera array is used to quickly acquire part of the spectrum of the object under measurement. Secondly, the network is trained using large-aperture imaging results under non-laser irradiation as the ground truth. Finally, 9 low-resolution images are used to obtain high-resolution results. Compared with other advanced methods, the results obtained in this paper have satisfactory resolution and eliminate most of the speckle influence caused by laser irradiation.
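    The forward model behind this scheme can be sketched in a few lines: each camera of the 3 × 3 array records a low-resolution image whose spectrum is a shifted pupil crop of the object spectrum. A hedged NumPy illustration with assumed grid size, shifts, and pupil radius (not the authors' implementation):

```python
import numpy as np

def low_res_capture(obj, shift, pupil_radius):
    """One camera of the array: crop a shifted circular pupil from the object
    spectrum and return the corresponding low-resolution intensity image."""
    n = obj.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    yy, xx = np.mgrid[:n, :n]
    cy, cx = n // 2 + shift[0], n // 2 + shift[1]
    pupil = (yy - cy) ** 2 + (xx - cx) ** 2 <= pupil_radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))) ** 2

obj = np.random.rand(256, 256)   # stand-in for the object under measurement
shifts = [(dy, dx) for dy in (-40, 0, 40) for dx in (-40, 0, 40)]
stack = [low_res_capture(obj, s, pupil_radius=32) for s in shifts]  # 9 images
```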
    12138-5
    Author(s): Minqi Wang, Bowen Wang, Nanjing Univ. of Science and Technology (China)
    On demand
    Imaging systems with different sensors are widely used in the surveillance, military, and medical fields. Infrared imaging sensors are widely used because they are less affected by the environment and can fully capture the radiation information of objects, but they are insensitive to brightness changes in the field of view and lose color information. Visible-light imaging sensors can obtain rich texture and color information but lose scene information under bad weather conditions. Pseudo-coloring of infrared and visible images can synthesize a new image carrying the complementary information of the source images. This paper proposes a pseudo-color deep learning method for infrared and visible images based on a dual-path propagation codec structure. Firstly, a residual channel attention module is introduced to extract features at different scales, which retains more meaningful information and enhances important information. Secondly, an improved fusion strategy based on visual saliency is used to pseudo-color the feature map. Finally, the pseudo-color results are recovered by a reconstruction network. Compared with other advanced methods, our experimental results achieve satisfactory visual effects and objective evaluation performance.
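    Residual channel attention is a recurring building block in such fusion networks. Below is a hedged PyTorch sketch of a squeeze-and-excitation-style residual channel attention block, a common form of this module; the paper's exact architecture may differ, and the channel count and reduction ratio are assumptions:

```python
import torch
import torch.nn as nn

class ResidualChannelAttention(nn.Module):
    """Per-channel weights are learned from globally pooled features and used
    to rescale the feature map before the residual connection."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.body(x)
        return x + feat * self.attention(feat)   # residual connection

x = torch.randn(1, 32, 64, 64)
print(ResidualChannelAttention(32)(x).shape)      # torch.Size([1, 32, 64, 64])
```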
    Session 2: Image Analysis
    In person: 6 April 2022 • 11:10 - 12:30 CEST | Salon 1, Niveau/Level 0
    Session Chair: Juan Martínez-Carranza, Warsaw Univ. of Technology (Poland)
    12138-7
    Author(s): Sumesh Nair, Chia-Wei Hsu, National Yang Ming Chiao Tung Univ. (Taiwan); Yvonne Y. Hu, National Cheng Kung Univ. (Taiwan); Ming-Jeh Chien, Lohas Biotech Development Corp. (Taiwan); Shean-Jen Chen, National Yang Ming Chiao Tung Univ. (Taiwan)
    On demand | Presented live 6 April 2022
    Fungus gnats (Sciaridae) are among the most notorious pests in mushroom plantations. These pests, which feed on the mycelium of mushrooms, breed aggressively in the damp, cool, and dark conditions of mushroom plantations. Fungus gnats are generally dealt with using relatively inefficient physical and chemical means, such as sticky traps and pesticides, respectively. Hence, we have proposed an integrated pest control system composed of a UVA LED source at 365 nm, a galvo-mirror pair, and a 445 nm high-power laser diode. We used the 365 nm UVA LED as an innovative light trap, since fungus gnats show maximum attraction to 365 nm according to previous studies. We also modulated the UVA LED at frequencies ranging from 1 Hz to 1 kHz, and the response of the gnats to each of these frequencies was observed. Briefly, the flies were attracted to a 5 × 5 cm2 transparent platform, where they were eliminated using a precisely directed laser beam. The laser beam was scanned accurately over the target platform using the galvo-mirror pair. The entire module was controlled using an NVIDIA Nano microcontroller, with a DAC card controlling the x and y coordinates of the galvo mirrors. Our experiments show that the laser module, with the UVA LED modulated at 50 Hz, could achieve a 65% elimination rate in under a minute in a cubic container of 15 × 15 × 15 cm3. Without the light trap, random laser scanning achieved a 69% elimination rate in a single scan taking approximately 4 minutes, indicating that the UVA-modulated light trap increased the efficiency rate by almost 600%. In conclusion, we have proposed an affordable and easily replicable laser-based pest control system with a UVA light trap modulated at 50 Hz for rapidly and precisely eliminating fungus gnats in mushroom plantations.
    12138-8
    Author(s): Roxana-Mariana Beiu, Univ. "Aurel Vlaicu" din Arad (Romania); Virgil-Florin Duma, Univ. "Aurel Vlaicu" din Arad (Romania), Univ. Politehnica Timisoara (Romania); Corina Mnerie, Univ. "Aurel Vlaicu" din Arad (Romania); Andrea-Claudia Beiu, Technische Univ. Eindhoven (Netherlands); Mihaela Dochia, Lucian Copolovici, Univ. "Aurel Vlaicu" din Arad (Romania); George M. Dobre, Adrian Bradu, Adrian G. H. Podoleanu, Univ. of Kent (United Kingdom)
    On demand | Presented live 6 April 2022
    The aim of this study is to compare the advantages and disadvantages of two optical methods, namely optical coherence tomography (OCT) and microscopy, in the study of the structure of Aloe Vera leaves. Microscopy has the advantage of a higher resolution, but the disadvantage of destroying the object under investigation (as the leaf must be peeled). The advantages of OCT, in turn, include its non-invasiveness and the potential added benefit of on-site, in vivo measurements (if portable). The main disadvantage of OCT (and even more so of portable systems) is the achievable resolution, which may not be good enough to reveal the detailed structure of noteworthy parts of leaves, for example the stomata. The present study experimentally compares Aloe Vera data obtained using an optical microscope at different magnifications and an in-house Swept Source (SS) OCT system with a 1310 nm center wavelength. To gain additional information, an analysis of normalized A-scan OCT images was also performed. This reveals additional parts of the leaf structure, although it falls short of the results obtained using classical microscopy.
    12138-9
    Author(s): Haider Al-Juboori, Institute of Technology Carlow (Ireland); Tom McCormack, School of Physics, Univ. College Dublin (Ireland)
    On demand | Presented live 6 April 2022
    Augmented reality (AR) belongs to the family of environment-enhancing technologies, widely utilized in new computer vision concepts, that has only lately begun to be applied in high-resolution as well as ultrafast imaging. With current technical advances, AR is well-positioned to provide computer-guided assistance for a wide variety of emission imaging applications, such as the detection of plasma dynamics at visible wavelengths presented in this paper. The main investigations in this work are based on ultrafast time-resolved visible measurements of colliding laser-produced plasmas (CLPP), supported by digital image processing, image analysis, and augmented reality techniques to characterize and track the response and unique behaviour of the colliding plasmas, as well as the stagnation layer features that give a good indication of the degree of plume interpenetration, for all presented experiments. Concretely, the performance of CLPP studies is strongly dependent on the choice of experimental conditions. The work describes the core design and demonstrates the imaging performance and analysis for colliding plasmas and stagnation layers created from homogeneous target materials, i.e., aluminum (Al, Z = 13) or silicon (Si, Z = 14). The outcomes and design concepts of AR presented in this paper can be a milestone toward remarkable improvements in the understanding of plasma dynamics, especially with images captured on the nanosecond time scale. Additionally, the study provides a considerable amount of detailed data on the geometrical analysis of the interaction zone, which extends the understanding of the behavior of particular species within colliding laser-produced plasmas.
    12138-10
    Author(s): Melisa Mateu, Jimena Olveres Montiel, Boris Escalante-Ramírez, Univ. Nacional Autónoma de México (Mexico)
    On demand | Presented live 6 April 2022
    Early-stage detection of Coronavirus Disease 2019 (COVID-19) is crucial for patient medical attention. Since the lungs are the most affected organs, monitoring them constantly is an effective way to observe the evolution of the disease. The most common technique for lung imaging and evaluation is Computed Tomography (CT). However, its cost and effects on human health have made Lung Ultrasound (LUS) a good alternative: LUS does not expose the patient to radiation and minimizes the risk of contamination. Also, there is evidence of a relation between different artifacts in LUS and lung diseases originating from the pleura, whose abnormalities are associated with most acute respiratory disorders. However, LUS often requires expert clinical interpretation, which may increase diagnosis time or decrease diagnosis performance. This paper describes and compares machine learning classification methods, namely Naive Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Random Forest (RF), applied to several LUS images. They classify lung images from patients with COVID-19, patients with pneumonia, and healthy patients, using image features previously extracted from the Gray Level Co-occurrence Matrix (GLCM) and histogram statistics. Furthermore, this paper compares these classic methods with different Convolutional Neural Networks (CNNs) that classify the images in order to identify these lung diseases.
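    The classic pipeline described (GLCM texture features plus histogram statistics feeding a classifier) can be sketched with scikit-image and scikit-learn. Random stand-in images replace the LUS data here, and the feature choices are assumptions, not the authors' exact configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(img_u8):
    """Texture features from a gray-level co-occurrence matrix plus histogram stats."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(props + [img_u8.mean(), img_u8.std()])

# X: one feature vector per image; y: 0 = healthy, 1 = pneumonia, 2 = COVID-19
images = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(20)]
X = np.stack([glcm_features(im) for im in images])
y = np.random.randint(0, 3, len(images))
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```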
    Lunch/Exhibition Break 12:30 - 14:00
    Session 3: Image Acquisition and Computational Imaging
    In person: 6 April 2022 • 14:00 - 15:00 CEST | Salon 1, Niveau/Level 0
    Session Chair: Juan Martínez-Carranza, Warsaw Univ. of Technology (Poland)
    12138-11
    Author(s): Ali A. Darki, Aurélien R. Dantan, Jens V. Nygaard, Søren P. Madsen, Alexios Parthenopoulos, Christian Vandborg, Aarhus Univ. (Denmark)
    On demand | Presented live 6 April 2022
    We demonstrate spatial differentiation of optical beams using guided-mode resonances in suspended dielectric one-dimensional photonic crystals. The structural, optical, and mechanical properties of various nanostructured SiN gratings fabricated by electron-beam lithography and chemical etching are characterized. Polarization-dependent first- and second-order spatial differentiation of Gaussian beams impinging on the gratings, at oblique and normal incidence respectively, is demonstrated in transmission. Polarization-independent first-order spatial differentiation is also achieved with specifically designed structures. Such nanostructured thin films are promising for various optical processing, optomechanics, and sensing applications.
    12138-13
    Author(s): Shuhe Zhang, Tos T.J. M. Berendschot, Maastricht Univ. Medical Ctr. (Netherlands); Jinhua Zhou, Meng Shao, Anhui Medical Univ. (China)
    On demand | Presented live 6 April 2022
    We propose the combination of single-image blind deconvolution and illumination correction (BDIC) to enhance the image quality of a microscopy system. We evaluated the performance of this method by calculating the peak signal-to-noise ratio and structural similarity of both raw and enhanced images with respect to reference images. Both subjective and objective assessments show that BDIC improves image quality, including contrast and signal-to-noise ratio, without losing image resolution or structural information. To demonstrate its applicability, we also applied BDIC to different samples, including plant root tissue and human blood smears.
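    The two metrics named above are standard and available in scikit-image; a minimal sketch with synthetic stand-in images (not the authors' data):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)   # stand-in for the reference image
raw = np.clip(reference + 0.10 * np.random.randn(256, 256), 0, 1)
enhanced = np.clip(reference + 0.03 * np.random.randn(256, 256), 0, 1)

for name, img in (("raw", raw), ("enhanced", enhanced)):
    psnr = peak_signal_noise_ratio(reference, img, data_range=1.0)
    ssim = structural_similarity(reference, img, data_range=1.0)
    print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```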
    12138-15
    Author(s): Maroun Hjeij, Luiz Poffo, Fonctions Optiques pour les Technologies de l'information (France); Bastien Billiot, Agro Innovation International (France); Ronan Le Page, Pascal Besnard, Jean-Marc Goujon, Univ. de Rennes (France)
    On demand | Presented live 6 April 2022
    Imaging is a reliable and fast non-destructive way to evaluate the different states of organic and non-organic materials. Hyperspectral imaging allows simultaneous extraction of spectral and spatial signatures related to the structure of these materials. The result is a series of narrowband sub-images arranged over the reflectance spectrum, forming a hyperspectral cube. Recently, we presented an active mid-infrared (MIR) hyperspectral imaging system using two thermal cameras and a tunable monochromatic MIR laser operating in the spectral range from 3 to 11 µm. The targeted application is the characterization of the physiological status of plants under biotic and abiotic stress. Preliminary results on early water stress detection in growing plants under hydric stress were validated with our system: the two water statuses became distinguishable after the sixth day of water stress application, before irreversible damage. The use of a laser to illuminate a scene implies monochromatic and coherent illumination, which generates "speckle" due to the interference of waves scattered by a surface that is rough at the wavelength scale. In our case, since speckle was the main limitation in extracting the spatial information of the image, we had to introduce spatial averaging of the spectral reflectance measurements. To reduce this phenomenon, various techniques in the literature use the diffuse transparency of materials, but most of these techniques are incompatible with MIR spectral ranges. In this article, we evaluate the potential of speckle noise reduction techniques using only reflective optics over large MIR wavelength bands. Six types of illumination combinations are presented to demonstrate their efficiency. This optical preprocessing reduces laser coherence, and additional image postprocessing provides access to spatial information in the image. Results exhibit a speckle contrast reduction of 95%.
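    Speckle contrast, the figure of merit behind the reported 95% reduction, is conventionally defined as C = σ(I)/⟨I⟩. A minimal NumPy sketch in which averaging independent synthetic speckle patterns stands in for the paper's optical preprocessing:

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I); C -> 1 for fully developed speckle."""
    return intensity.std() / intensity.mean()

before = np.random.exponential(1.0, (512, 512))    # fully developed speckle, C ~ 1
# Averaging N independent patterns lowers C by ~1/sqrt(N); N = 400 gives ~95% reduction.
after = np.mean([np.random.exponential(1.0, (512, 512)) for _ in range(400)], axis=0)

c0, c1 = speckle_contrast(before), speckle_contrast(after)
print(f"speckle contrast reduction: {100 * (1 - c1 / c0):.1f} %")
```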
    12138-14
    Author(s): Boris S. Gurevich, Kirill V. Zaitchenko, Institute for Analytical Instrumentation (Russian Federation)
    On demand
    Both spatial and spectral information are important for image analysis. However, their simultaneous processing is hindered by the deficiencies of devices for spectrum analysis of high-definition images: such devices provide a limited number of selected spectral components as well as low processing rate and reliability due to the presence of mechanically moving parts. These deficiencies can be eliminated by applying acousto-optic tunable filters (AOTFs) as selective elements. The functional circuit of devices providing processing of both kinds of information by means of multispectral image processing using an AOTF is shown and described. The AOTF consecutively selects monochromatic sub-images of the object at given wavelengths. A device based on an AOTF with a wide-angular-aperture Bragg cell resolves up to several hundred spectral intervals, far more than other known devices, in which the number does not exceed several dozen. A further increase of spectral resolution in our AOTF-based device is possible if the spectral interval is narrowed, i.e., the angular aperture is decreased. However, this deteriorates the spatial resolution of the output image. Hence, it is necessary to optimize the device's information content depending on the executed tasks, in which the spectral and spatial information can be of different value. Versions of this solution are proposed and analyzed in the talk. For instance, we propose applying a cylindrical lens to provide a light beam of elliptic cross section, which allows increasing the amount of spatial information in one dimension without decreasing the spectral information.
    Coffee Break 15:00 - 15:30
    Session 4: Applications
    In person: 6 April 2022 • 15:30 - 17:10 CEST | Salon 1, Niveau/Level 0
    Session Chair: Peter Schelkens, Vrije Univ. Brussel (Belgium)
    12138-16
    Author(s): Luca Schifano, Royal Meteorological Institute of Belgium (Belgium); Fabian Duerr, Francis Berghmans, Vrije Univ. Brussel (Belgium); Steven Dewitte, Royal Meteorological Institute of Belgium (Belgium); Lien Smeesters, Vrije Univ. Brussel (Belgium)
    On demand | Presented live 6 April 2022
    Climate change monitoring is still a major challenge, currently addressed mainly with radiometers that monitor the radiative fluxes at the top of the atmosphere. To improve on the current state-of-the-art monitoring instruments, we pursue the development of novel space instrumentation combining a radiometer with two additional imagers, improving the spatial resolution to a few kilometers to allow scene identification, while enabling a spectral distinction between the reflected solar radiation (RSR), using a visible to near-infrared (400 – 1100 nm) camera, and the Earth's emitted thermal radiation, using a thermal infrared (8 – 14 μm) camera. In this paper, we present a novel camera design optimized for RSR monitoring, targeting a compact design and minimizing the number of aspheric components. More specifically, our optimized imaging design shows a wide field of view (138°) enabling observation of the Earth from limb to limb, a compact volume fitting within 1 CubeSat Unit (1U), a wide spectral range (400 – 900 nm) to retrieve the RSR with a certainty of more than 95%, a spatial resolution better than 5 km at nadir, and close to diffraction-limited performance. After optimization of the nominal design, possible design alternatives are considered and discussed, enabling a cost-efficient design choice. Subsequently, the mechanical and optical design tolerances are evaluated using a statistical Monte Carlo analysis, indicating a robust and tolerant design that can be manufactured using ultra-precision diamond tooling. Finally, a stray-light analysis was performed to evaluate ghost reflections and the necessity of an anti-reflection coating. Consequently, we conclude that our proposed imaging designs show promising performance optimized for Earth observation, paving the way to improved climate change monitoring.
    12138-17
    Author(s): Christofer Schwartz, Ingo Sander, Rodolfo Jordão, KTH Royal Institute of Technology (Sweden); Fredrik Bruhn, Mälardalen University (Sweden); Mathias Persson, Unibap AB (Sweden); Joakim Ekblad, Saab AB (Sweden); Christer Fuglesang, KTH Royal Institute of Technology (Sweden)
    On demand | Presented live 6 April 2022
    Nowadays, it is a reality to launch, operate, and utilize small satellites at an affordable cost. However, bandwidth constraints are still an important challenge. For instance, multispectral and hyperspectral sensors generate a significant amount of data subject to communication channel impairments, which is addressed mainly by source and channel coding aiming at effective transmission. This paper targets a significant further bandwidth reduction by proposing an on-the-fly, on-board analysis technique to decide which information is effectively useful for specific target applications before coding and transmission. The challenge is to detect clouds and vessels from measurements of the red, green, blue, and near-infrared bands, aiming at a sufficient probability of detection while avoiding false alarms. Furthermore, the embedded platform constraints must be satisfied. Experiments for typical scenarios of summer and winter days in Stockholm, Sweden, are conducted using data from the Mimir's Well, the Saab AI-based data fusion system. Results show that non-relevant content can be identified and discarded; for the cloudy scenarios evaluated, up to 73.1% of image content can be suppressed without compromising the useful information in the image. For the water regions in the scenarios containing vessels, results indicate that a substantial amount of data (up to 98.5%) can be discarded when transmitting only the regions of interest (ROI).
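    The relevance-filtering idea can be illustrated schematically: keep only region-of-interest pixels and count what can be dropped. The NDWI-style water index, threshold, and random stand-in bands below are assumptions for illustration, not the Mimir's Well detectors:

```python
import numpy as np

def discardable_fraction(green, nir, thresh=0.3):
    """Fraction of pixels outside the water ROI that could be dropped before
    transmission; a crude NDWI-style index stands in for real detectors."""
    ndwi = (green - nir) / (green + nir + 1e-9)
    water_roi = ndwi > thresh
    return 1.0 - water_roi.mean()

green = np.random.rand(512, 512)   # stand-in band measurements
nir = np.random.rand(512, 512)
print(f"discardable: {100 * discardable_fraction(green, nir):.1f} %")
```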
    12138-18
    Author(s): Felix Lichtenegger, Claude Leiner, Christian Sommer, Andreas Weiss, Andreas Kröpfl, Saman Zahiri-Rad, JOANNEUM RESEARCH Forschungsgesellschaft mbH (Austria)
    On demand | Presented live 6 April 2022
    Indoor positioning has received a lot of attention in recent years. Visible Light Positioning (VLP) systems, which perform positioning by means of the visible light emitted by the obligatory room lighting, have been shown to achieve the highest positioning accuracies. In this work, we investigate a novel angle diversity receiver concept for visible light positioning. The receiver concept, consisting of an ultrathin Fresnel lens embedded in an aperture and mounted on top of a CMOS sensor, has been tested and optimized by ray-tracing simulations. This angle-dependent receiver system has the advantages of compact dimensions, a high field of view, an off-the-shelf sensor, and a relatively high amount of collected light. The origination of the previously calculated Fresnel lens structure is performed by means of grayscale laser lithography. In the presented receiver system, the incoming radiant intensity distribution is converted into an irradiance distribution on the CMOS sensor, where different angles of incidence of incoming light are refracted toward different areas of the sensor. To verify the optical system experimentally, a prototype of the receiver is placed in a goniometer setup to record images under controlled angles of incidence. Irradiance distributions recorded in the experiment are compared to irradiance distributions obtained with a realistic ray-tracing model. By direct comparison between experiment and simulation, we verify the optical functionality of the developed receiver optics and investigate the effect of manufacturing imperfections.
    12138-19
    Author(s): Chun-Ting Sung, Wen-Chuan Tseng, National Yang Ming Chiao Tung Univ. (Taiwan); Meng-Hui Hsu, Kun Shan Univ. (Taiwan); Shean-Jen Chen, National Yang Ming Chiao Tung Univ. (Taiwan), National Applied Research Labs. (Taiwan)
    On demand | Presented live 6 April 2022
    Recently, agricultural unmanned ground vehicles (UGVs) have been studied for increasing farming yields and reducing labor costs. The ground in orchards and agricultural fields is rugged, so a tracked vehicle with model predictive control (MPC) is adopted. To follow the path accurately, it is important to find a suitable model of the vehicle for MPC. Traditionally, MPC is based on a kinematic model or a classic dynamic model, but both have to be linearized, which increases the computation cost. Furthermore, the parameters of a classic dynamic model, such as resistance forces and lateral forces, are difficult to measure. Therefore, using an identified state-space dynamic model, which is linear, decreases the computation cost, so the MPC cycling rate can be increased. To collect the outputs for system identification (ID), we use an Intel D435i depth camera with oriented FAST and rotated BRIEF simultaneous localization and mapping (ORB-SLAM) under the Robot Operating System (ROS) to provide a visual odometer (VO) for localization. The VO is then fused with an inertial odometer (IO) by an extended Kalman filter (EKF), so that the resulting visual-inertial odometry (VIO) offers a better pose of the vehicle. In addition, the multichannel system has two inputs and three outputs, and the state-space dynamic model is estimated using a subspace method (N4SID) in MATLAB. A sixth-order model is utilized, achieving above 50%, 30%, and 50% matching to the estimated data of the x- and y-positions and the rotation of the vehicle, respectively. The tracked vehicle can hardly move transversally, so the accuracy of the lateral position is less important. Finally, the experimental results demonstrate that MPC with the VIO and the identified state-space dynamic model can drive the tracked vehicle to correctly follow the reference path in an orchard.
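    The VO/IO fusion step follows the standard EKF predict/update cycle; since both odometers observe the pose directly, the update takes the familiar linear Kalman form. A minimal sketch with assumed noise covariances (not the authors' implementation):

```python
import numpy as np

# State: [x, y, yaw] pose of the tracked vehicle
x = np.zeros(3)
P = np.eye(3) * 1e-3                      # state covariance

def predict(x, P, u, Q):
    """Propagate the pose with an inertial-odometer increment u = [dx, dy, dyaw]."""
    return x + u, P + Q

def update(x, P, z, R):
    """Correct the prediction with a visual-odometer pose measurement z."""
    H = np.eye(3)                                  # VO observes the full state
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return x + K @ (z - H @ x), (np.eye(3) - K @ H) @ P

Q = np.diag([1e-4, 1e-4, 1e-5])           # assumed IO process noise
R = np.diag([1e-2, 1e-2, 1e-3])           # assumed VO measurement noise
x, P = predict(x, P, u=np.array([0.10, 0.00, 0.010]), Q=Q)
x, P = update(x, P, z=np.array([0.09, 0.01, 0.012]), R=R)
```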
    12138-20
    Author(s): Moncy Sajeev Idicula, Patryk Mitura, Michal Józwik, Warsaw Univ. of Technology (Poland); Hyon-Gon Choo, Electronics and Telecommunications Research Institute (Korea, Republic of); Juan Martínez-Carranza, Warsaw Univ. of Technology (Poland); Kai Wen, Xidian University (China), Warsaw Univ. of Technology (Poland); Tomasz Kozacki, Warsaw Univ. of Technology (Poland)
    On demand | Presented live 6 April 2022
    Digital holographic microscopy (DHM) is a non-contact profilometric tool that allows obtaining microscopic object topography from captured holograms. However, the use of DHM is limited when the object under observation has a high gradient or is discontinuous. Multi-angle digital holographic profilometry (MIDHP) is an alternative solution for overcoming this limitation and measuring topographies with discontinuities. The method combines digital holography and multi-angle interferometry: it requires a certain number of holograms that are processed into a longitudinal scanning function (LSF), and the topography of the object is recovered by finding the maxima of the LSF. MIDHP enlarges the measurement range and provides high axial resolution. This paper investigates MIDHP for measuring surfaces with various (low and high) surface gradients. The calculation of the LSF requires many Fourier transforms (FTs), making the computations slow. In this paper, we improve the LSF calculation by introducing two algorithms. The first reduces the number of FTs needed by applying summation in the frequency domain. The second applies 3D filtering, which improves the quality of the reconstructed shape. The introduced approaches are verified both numerically and experimentally.
    Posters-Wednesday
    In person: 6 April 2022 • 17:40 - 19:30 CEST | Hall Rhin, Poster Area
    PC12138-36
    Author(s): Corina Mnerie, Univ. "Aurel Vlaicu" din Arad (Romania); Virgil-Florin Duma, Univ. "Aurel Vlaicu" din Arad (Romania), Univ. Politehnica Timisoara (Romania)
    In person: 6 April 2022 • 17:40 - 19:30 CEST | Hall Rhin, Poster Area
    Galvanometer-based scanners (GSs) are commonly used for lateral scanning in Optical Coherence Tomography (OCT). A GS is basically a driven servomotor with a mirror fixed on the shaft, which behaves like an oscillator when a symmetrical periodic command signal is applied. Each manufacturer has its own driving system, using analog or digital technology. In this study we tested two different GSs from the same range, belonging to two important manufacturers (i.e., Cambridge Technology and Thorlabs) and driven with different types of control algorithms (with and without the integral component). The experimental results are compared with simulations based on our analytical mathematical model. The applied reference signals include common test signals (e.g., step, sine, or sawtooth) as well as different custom signals. The simulations also include a predictive control solution for the GS algorithm. The capability of the different control structures to provide a fast system response or stability against perturbations is assessed, and the trade-off between these two aspects is studied.
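    A GS of this kind is commonly modeled as a damped second-order oscillator. The SciPy sketch below is a hedged stand-in for the authors' analytical model, simulating step and sawtooth responses; the natural frequency and damping ratio are assumed values:

```python
import numpy as np
from scipy import signal

# G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2): generic second-order galvo model
wn, zeta = 2 * np.pi * 300.0, 0.7          # assumed natural frequency, damping
sys = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

t = np.linspace(0, 0.02, 2000)
_, step_resp = signal.step(sys, T=t)                                        # step command
_, saw_resp, _ = signal.lsim(sys, signal.sawtooth(2 * np.pi * 100 * t), t)  # sawtooth
print(f"step overshoot: {100 * (step_resp.max() - 1):.1f} %")
```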
    PC12138-37
    In person: 6 April 2022 • 17:40 - 19:30 CEST | Hall Rhin, Poster Area
    Hyperspectral imaging analysis is a technique for obtaining the unique reflection wavelengths of an object by subdividing a wavelength range into nm-scale units. This technology is used in research to detect specific objects by analyzing the unique spectral data of each pixel. Hyperspectral imaging applied to the marine environment has mainly been studied for monitoring the extent of marine pollution caused by oil spills and for detecting algae to confirm the occurrence of green and red tides. However, there have been few attempts to use hyperspectral imaging in maritime search to detect small objects at sea. The purpose of this study is to present a framework designed to utilize hyperspectral data for maritime search and a plan for its use. The framework is designed to analyze the input hyperspectral data with artificial intelligence techniques and to detect and identify marine objects; its overall structure consists of hyperspectral data input, data preprocessing, artificial intelligence analysis, building a spectrum/image-based training data library, and detection and identification of maritime objects. The hyperspectral data input was designed in consideration of the input conditions of time-series data, and optimized techniques for detecting small maritime objects were applied to the data preprocessing process. The machine learning analysis part uses DBSCAN, a clustering algorithm, to detect marine objects and builds the spectra of the detected pixels into a spectral library. The deep learning analysis part uses Faster R-CNN to detect and identify maritime objects, and the labeled training data used for deep learning analysis are built into a library. Whenever an artificial intelligence analysis proceeds, spectral data and labeled images are added to their respective libraries and continuously updated to improve the reliability of the analysis results. Currently, this study is producing a web platform for detecting maritime objects with hyperspectral data based on the aforementioned design. The platform, to be completed through additional research, can be utilized in maritime search research using hyperspectral data and can serve as a basic platform for various hyperspectral data studies requiring artificial intelligence. In addition, a data linkage plan for using hyperspectral data acquired through aerial photography as a public dataset is being considered.
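    The DBSCAN stage described above can be sketched with scikit-learn on per-pixel spectra; the cube dimensions, eps, and min_samples below are illustrative assumptions, not the study's tuned parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

cube = np.random.rand(64, 64, 120)             # stand-in hyperspectral cube (H, W, bands)
pixels = cube.reshape(-1, cube.shape[-1])      # one spectrum per row

labels = DBSCAN(eps=0.9, min_samples=10).fit_predict(pixels)
labels = labels.reshape(cube.shape[:2])        # back to image layout
# label -1 marks spectral outliers; dense clusters are maritime-object candidates
print(np.unique(labels, return_counts=True))
```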
    PC12138-38
    Author(s): Youngbin Na, Do-Kyeong Ko, Gwangju Institute of Science and Technology (Korea, Republic of)
    On demand | Presented live 6 April 2022
    In this research, we propose the application of a deep-learning method for recognizing fractional orbital angular momentum (OAM) beams distorted by atmospheric turbulence (AT). To acquire data sets for model training and testing, we first simulate the propagation of fractional OAM beams through a 1000-m optical channel. AT effects in the channel are modeled with five equally spaced random phase screens. Then, the effects of turbulence strength and OAM mode spacing on the recognition accuracy are analyzed. In particular, we compare the recognition accuracy for three types of OAM-encoding schemes: two single-mode sets (integer and fractional OAM) and one multiplexed fractional mode set. Despite the strong distortion, our model recognizes transmitted OAM modes with high accuracy and high resolution. In addition, we investigate the generalization ability of deep learning to accommodate unknown turbulence environments.
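    The data-generation step can be illustrated compactly: a fractional OAM beam is a Gaussian envelope times a helical phase exp(iℓφ), and turbulence is emulated by multiplicative random phase screens. In this hedged NumPy sketch, a low-pass-filtered noise screen stands in for the Kolmogorov-spectrum screens used in the paper:

```python
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r, phi = np.hypot(x, y), np.arctan2(y, x)

def oam_beam(ell, w0=0.4):
    """Gaussian beam carrying a (possibly fractional) topological charge ell."""
    return np.exp(-(r / w0) ** 2) * np.exp(1j * ell * phi)

def phase_screen(strength=1.0):
    """Crude random phase screen: low-pass-filtered white noise."""
    noise = np.fft.fft2(np.random.randn(n, n))
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    return strength * np.real(np.fft.ifft2(noise * np.exp(-(fx**2 + fy**2) / 2e-3)))

field = oam_beam(ell=1.5) * np.exp(1j * phase_screen(2.0))   # one distorted sample
intensity = np.abs(field) ** 2        # image fed to the recognition network
```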
    12138-39
    Author(s): Yue Wang, John J. Healy, Univ. College Dublin (Ireland)
    On demand | Presented live 6 April 2022
    Gibbs ringing is an artefact that occurs at edges due to the action of an aperture. Traditionally, it is analysed as a Fourier transform phenomenon. In this paper, we analyse it in terms of the Fresnel transform.
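    The Fresnel-domain view of edge ringing is easy to reproduce numerically. The sketch below propagates a hard-edged slit over a short distance with the paraxial Fresnel transfer function; all dimensions are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

n, dx = 4096, 1e-6                     # samples, grid pitch (m)
wavelength, z = 633e-9, 0.05           # wavelength (m), propagation distance (m)
x = (np.arange(n) - n // 2) * dx
aperture = (np.abs(x) < 0.25e-3).astype(complex)   # 0.5 mm slit

fx = np.fft.fftfreq(n, dx)
H = np.exp(-1j * np.pi * wavelength * z * fx**2)   # paraxial Fresnel transfer function
intensity = np.abs(np.fft.ifft(np.fft.fft(aperture) * H)) ** 2
print(f"peak intensity near the edges: {intensity.max():.2f} (unit incident)")
```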
    12138-40
    Author(s): Thibault Behaghel, Lab. d'Astrophysique de Marseille (France); Eduard R. Muslimov, ASTRON (Netherlands)
    On demand | Presented live 6 April 2022
    For historical reasons, all image capturing and projection systems work in a "flat-to-flat" configuration: the image is detected in a camera focal plane and then projected onto a flat display or a flat screen. Recently, we entered a new era with two major technical levers: curved sensors and 3D/immersive imaging. This combination allows us, on the one hand, to easily capture spherical images and, on the other hand, to view spherical images without any intermediate plane picture. Indeed, the image used in an immersive projection system can be assimilated to a sphere within which the user can move his head in different directions. Meanwhile, a camera based on a curved sensor can capture an almost perfect spherical scene. All the basic processes for editing and post-production can thus be done on a spherical data basis. In this work we consider the design of lenses for capturing and projecting images on spherical surfaces. For reasons of spherical symmetry, it is natural to use monocentric lenses for these purposes. Such a design evolves from a simple ball lens, where the pupil center coincides with the center of symmetry, to a more realistic design with 4 components in 2 groups. We consider a lens with 12 mm focal length and F/1.77 aperture, covering a field of view of up to 90 degrees. It works with an object located 3 m away from the camera, and the spatial resolution reaches 57 lines/mm. The same design can be re-scaled and modified to serve as a projection system working with a curved screen. We consider a spherical screen with a 12 m radius, which corresponds to a planetarium cupola. We analyze the image quality of such a system and show that the image distortion should be re-defined; the corrected value is lower than the conventional one by a factor of 1.4. We also perform an end-to-end image simulation to demonstrate that the projected wide-angle scene is close to one observed directly by the human eye.
    12138-41
    Author(s): Maretta Kazaryan, North Ossetian State Medical Academy (Russian Federation); Evgeny A. Semenishchev, Viacheslav V. Voronin, Moscow State Univ. of Technology "STANKIN" (Russian Federation)
    On demand
    Remote sensing of the Earth allows receiving medium- and high-spatial-resolution information from space vehicles and conducting hyperspectral measurements. This study presents a remote sensing application using time-series Landsat satellite images to monitor solid waste disposal sites (WDS). The article proposes algorithms for working with spatial information, namely the transformation (convolution) of these manifolds into a one-dimensional sample. Recursive quasi-continuous sweeps are used, for which the following conditions are satisfied: 1) preservation of the topological proximity of the original and expanded spaces, and 2) preservation of correlations between the elements of the original and transformed spaces. An automated system is proposed for detecting and investigating waste objects based on the concept of fractal sets and convolutional neural networks: the first neural network detects the WDS, and the second localizes the waste objects. This technique can become the object of further research on developing a medical-prophylactic expert system at the territorial level to detect and neutralize unauthorized waste disposal sites based on medium- and high-resolution space images. As a result, the proposed method demonstrates good accuracy in detecting solid waste disposal sites in real satellite images.
    12138-42
    Author(s): Evgeny A. Semenishchev, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Aleksandr Zelensky, Andrey Alepko, Marina Zhdanova, Moscow State Univ. of Technology (Russian Federation); Viacheslav V. Voronin, Moscow State Univ. of Technology "STANKIN" (Russian Federation)
    On demand
    The current technological cycle for the manufacture of complex products involves a large amount of specialized equipment, and expensive machine tools are used to improve product quality. During the production process, defects in the structure of the product and errors in the formation of production cycles can cause deviations to appear on the surface of the worktable or in the mechanisms that move it. Under different loading, different parts of the surfaces are subjected to different pressures. Constant point overloads in parts of the working area of the robotic complex (machine tool) introduce distortions into the operation of the tuned system. Inaccurate positioning of the object being processed, or constantly increased pressure on one of the elements of the table located far from the stiffening ribs of the structure, causes the table to bend. Another factor leading to positioning error is surface wear caused by destruction or abrasion (most often corrected by replacing the table). A third factor is the presence of several drive points for the working table of the robotic complex, as a result of which uneven forces are applied to different areas and power units. All these factors contribute to the residual error detected during high-precision processing of products. To eliminate or compensate for such distortions, this work proposes a method for constructing a map of deviations over a matrix of points plotted on the surface of the table of a robotic complex (machine tool). For the analysis of deviations, we develop a system consisting of a laser level and an electronic ruler, which measures the deviation relative to the horizontal surface with an accuracy of hundredths of microns. Measurements are made pointwise within a generated mesh; the distance between the grid nodes allows varying the analyzed accuracy. The generated measurement matrix is three-dimensional: it is obtained layer by layer with displacements along the three axes of the machine (robot). The formation of the measurement space is carried out in two stages. In the first stage, a rough construction of three-dimensional measurements is performed (the grid pitch is chosen 10-100 times larger than the step used for accurate analysis). Based on the data obtained in three-dimensional space, layer-by-layer processing is performed by a smoothing interpolation method built on a multi-criteria objective function. The method minimizes the standard deviation of the input data from the values obtained as a result of processing, while a second criterion minimizes the mean square of the difference between the values formed as a result of processing. An adjustment parameter sets the weight of each criterion. Obtaining estimates for different adjustment parameters makes it possible to determine areas with deviations above a given threshold and to exclude single (short-duration) errors. Such a filter automatically sets the degree of smoothness of the output function and determines the areas that deviate beyond the measurement-error threshold assigned by the operator. In the second stage, the marked areas and the regions next to them are measured with the minimum grid spacing. Areas that are not marked as locally damaged are analyzed with a reduced step (mismatched points) and re-processed by the multi-criteria method.
If new errors are found, the area is re-scanned; otherwise, it is accepted as suitable for use. The resulting three-dimensional matrix makes it possible to estimate areas with significant errors (deviations) at standard displacements of the working tool and the working table, in order to calculate the displacement compensation parameters. As field data, we used the deviations obtained by analyzing the working table with the minimum measurement grid step and displacements along the machine's three coordinates, together with the proposed approach. The result made it possible to identify the same deviation areas as the full measurement cycle. Under various conditions, the analysis time, which in some cases exceeded 12 hours, could be reduced by up to a factor of 10. Tables of verification tests and examples of calculating the predicted values of the final analysis cycles are given.
    12138-43
    Author(s): Aleksander A. Zelensky, Viacheslav V. Voronin, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Nikolay Gapon, Moscow State Univ. of Technology (Russian Federation); Evgeny A. Semenishchev, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Vadim Egipko, Iliy Khamidullin, Moscow State Univ. of Technology (Russian Federation)
    On demand
    In this paper, we propose a multisensor SLAM system capable of recovering a globally consistent 3-D structure. The proposed method takes two main steps. The first step is to fuse images from visible cameras and depth sensors based on the PLIP model (parameterized model of logarithmic image processing), which is close to the human visual system's perception. The second step is image reconstruction: this article presents an approach based on a modified exemplar block-based algorithm using an autoencoder-learned local image descriptor for image inpainting. For this purpose, we learn the descriptors using a convolutional autoencoder network. Then, a 3-D point cloud is generated using the reconstructed data. Our system quantitatively outperforms state-of-the-art methods in reconstruction accuracy on a benchmark for evaluating RGB-D SLAM systems.
    12138-44
    Author(s): Aleksander A. Zelensky, Viacheslav V. Voronin, Marina M. Zhdanova, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Nikolay Gapon, Olga Tokareva, Moscow State Univ. of Technology (Russian Federation); Evgeny A. Semenishchev, Moscow State Univ. of Technology "STANKIN" (Russian Federation)
    On demand
    The recognition of human actions in video sequences is one of the key problems on the path to the development and implementation of computer vision systems in various spheres of life. At the same time, additional sources of information (such as depth sensors and thermal sensors) make it possible to obtain more informative features and thus increase the reliability and stability of recognition. In this research, we focus on how to combine multi-level decomposition of depth and color information to improve state-of-the-art action recognition methods. We present an algorithm combining information from visible cameras and depth sensors based on deep learning and the PLIP model (parameterized model of logarithmic image processing), which is close to the human visual system's perception. The experimental results on the test dataset confirmed the high efficiency of the proposed action recognition method compared to state-of-the-art methods that use only one image modality (visible or depth).
    PC12138-46
    Author(s): Benjamin Lochocki, VU Amsterdam, Department of Physics and Astronomy (Netherlands), Advanced Research Center for Nanolithography (ARCNL) (Netherlands); Max V. Verweg, VU Amsterdam (Netherlands), Advanced Research Center for Nanolithography (ARCNL) (Netherlands); Johannes F. de Boer, VU Amsterdam, Department of Physics and Astronomy (Netherlands); Lyubov V. Amitonova, Advanced Research Center for Nanolithography (ARCNL) (Netherlands), VU Amsterdam, Department of Physics and Astronomy (Netherlands)
    On demand | Presented live 6 April 2022
    Endoscopes are commonly used to image objects hidden in deeper layers. However, their size and stiffness make them invasive tools. In contrast, thin and flexible multimode fibers offer a minimally invasive alternative as an imaging instrument. Here, we present initial results of fast, diffraction-limited fluorescence imaging obtained by combining quasi-random speckle patterns with compressive sensing. We compare the newly developed imaging methodology to conventional raster scanning and highlight its advantages. In conclusion, we demonstrate the use of multimode fibers as a feasible instrument for rapidly acquiring high-resolution images based on speckle patterns.
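    To illustrate the compressive-sensing principle invoked above, here is a hedged sketch in which each measurement is the detected signal under one speckle pattern and a sparse image is recovered by L1 minimization (ISTA); pattern statistics, sizes, and the solver are illustrative, not the authors' implementation.

```python
# Speckle-based compressive imaging sketch: y = A x with random speckle
# rows, recovered by iterative soft-thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(2)
n = 16 * 16                       # image pixels
m = 80                            # number of speckle patterns (m < n)
A = rng.rayleigh(1.0, (m, n))     # non-negative speckle intensity patterns
A /= np.linalg.norm(A, axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0   # sparse fluorescent scene
y = A @ x_true                    # bucket-detector measurements

# ISTA for  min_x ||Ax - y||^2 + lam * ||x||_1
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print(f"relative recovery error: "
      f"{np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```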
    12138-47
    Author(s): Andréa de Lima Ribeiro, Margret C. Fuchs, Sandra Lorenz, Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology (Germany); Christian Röder, Institute of Applied Physics, Faculty of Chemistry and Physics, Technische Universität Bergakademie (Germany); Yuleika Madriz, Erik Herrmann, Richard Gloaguen, Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology (Germany); Johannes Heitmann, Institute of Applied Physics, Faculty of Chemistry and Physics, Technische Universität Bergakademie (Germany)
    On demand | Presented live 6 April 2022
    Waste from electrical and electronic equipment (WEEE) is a fast-growing and complex waste stream, and plastics represent around 20% of its total. The proper recycling of plastics from WEEE depends on identifying the polymers before they enter the recycling chain. Technologies aiming at this identification must be compatible with conveyor-belt operations, acquiring data at high speed. The RAMSES project is developing a smart sensor network for the recycling industry by combining high-speed hyperspectral imagery (HSI) with point Raman spectroscopy, using advanced data fusion and optimization by machine learning methods. Here we present the application of the RAMSES approach to the identification of plastic constituents in WEEE. We selected 23 polymers commonly found in WEEE (including PP, PE, ABS, PC, PS, PTFE, PMMA, and PVC). Reflectance information is obtained using HSI cameras in the short-wave infrared (SWIR, identification of transparent/light-colour plastics) and mid-wave infrared (MWIR, identification of dark plastics). Raman point acquisitions are well suited for highly specific plastic identification (HORIBA ARAMIS spectrometer; excitation: 532 nm, 50 mW). Integration times varied according to the capabilities of each sensor, never exceeding 2 seconds. We recognised spectral fingerprints (unique features) for each material based on literature reports. A positive identification is assigned for fingerprints with signal-to-noise ratios > 4 (reflectance, HSI) or > 3 (Raman). Spectral fingerprint identification was possible for 60% of the samples using SWIR-HSI; however, it failed to produce positive results for dark plastics. Additional MWIR-HSI information was used to identify 2 dark samples (70% identified using SWIR + MWIR). Fingerprint assignment in short-time Raman acquisitions was successful for all samples, making the technique suitable for polymer identification in fast-paced recycling environments. Aiming towards accurate solutions for the recycling industry, we recommend a combination of at least two sensors (VNIR and Raman, or MWIR and Raman).
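    The positive-identification rule above lends itself to a short sketch: a fingerprint band counts as detected when its signal-to-noise ratio exceeds the modality threshold (> 4 for HSI reflectance, > 3 for Raman). The synthetic peak, band positions, and noise-window choice below are illustrative only.

```python
# SNR-threshold fingerprint identification sketch on a synthetic Raman
# spectrum; thresholds follow the abstract, everything else is made up.
import numpy as np

def band_snr(spectrum, band, noise_window):
    """Peak height in the band over the noise std of a flat region."""
    signal = spectrum[band].max() - np.median(spectrum[noise_window])
    return signal / spectrum[noise_window].std()

rng = np.random.default_rng(3)
shift = np.arange(400, 3200, 2.0)                 # Raman shift axis (cm^-1)
spec = rng.normal(0.0, 1.0, shift.size)
spec += 40 * np.exp(-((shift - 1001) / 6) ** 2)   # e.g. polystyrene ring mode

band = (shift > 980) & (shift < 1020)             # fingerprint window
noise = (shift > 1800) & (shift < 2200)           # spectrally flat region

snr = band_snr(spec, band, noise)
print(f"SNR = {snr:.1f} ->", "positive ID" if snr > 3 else "not identified")
```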
    PC12138-48
    Author(s): Ksenia Abrashitova, Lyubov Amitonova, ARCNL (Netherlands)
    In person: 6 April 2022 • 17:40 - 19:30 CEST | Hall Rhin, Poster Area
    Fast super-resolution deep tissue imaging has long been a challenge. In life sciences, an endoscopic probe can be put inside the region of interest to overcome scattering and absorption. However, the resolution of state-of-the-art micro-endoscopy is limited by the diffraction of light. Super-resolution microscopy techniques suffer from low speed and require specific fluorescent labelling. There is still a high demand for a fast super-resolution label-free technique that can be implemented in a compact format. Here we present fiber-based label-free imaging with video-rate speed and at a resolution more than 2 times better than the diffraction limit.
    PC12138-49
    Author(s): Max Blokker, Vrije Universiteit Amsterdam (Netherlands); Philip C. de Witt Hamer, Pieter Wesseling, Amsterdam UMC location VU University Medical Center (Netherlands); Marloes L. Groot, Vrije Universiteit Amsterdam (Netherlands); Mitko Veta, Eindhoven University of Technology (Netherlands)
    In person: 6 April 2022 • 17:40 - 19:30 CEST | Hall Rhin, Poster Area
    Gliomas represent approximately 80% of all malignant primary brain and central nervous system tumors diagnosed in the Western world. Extensive surgical resection is associated with delayed tumor progression and improved patient survival. Non-invasive and label-free intraoperative imaging with Third Harmonic Generation microscopy could enable neurosurgeons to detect the tumor margin during surgery. In this study, we demonstrate the importance and effectiveness of pairing THG microscopy and deep learning in tumor vs. non-tumor classification on ex-vivo human brain samples. In addition, we underline the significance of removing noisy image data from the training set based on descriptive statistics.
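    The data-cleaning step mentioned above admits a compact illustration: prune training tiles whose descriptive statistics fall outside a plausible signal range. The statistics chosen (mean, standard deviation) and the thresholds below are assumptions, not the authors' criteria.

```python
# Sketch of pruning noisy tiles from a training set with descriptive
# statistics; thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
tiles = rng.uniform(0, 1, (100, 64, 64))   # stand-in for THG image tiles

means = tiles.mean(axis=(1, 2))
stds = tiles.std(axis=(1, 2))

# Keep tiles whose statistics fall inside a plausible signal range.
keep = (means > 0.05) & (means < 0.95) & (stds > 0.02)
clean_tiles = tiles[keep]
print(f"kept {keep.sum()} / {len(tiles)} tiles for training")
```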
    12138-50
    Author(s): Andrey Patrakeev, Alexander Trokhimovskiy, Oleg Korablev, Space Research Institute (Russian Federation); Franck Montmessin, LATMOS/IPSL, UVSQ Université Paris-Saclay, Sorbonne Université, CNRS (France); Denis Belyaev, Anna Fedorova, Space Research Institute (Russian Federation); Sandrine Maloreau, Gabriel Guignan, LATMOS/IPSL, UVSQ Université Paris-Saclay, Sorbonne Université, CNRS (France); Yuriy Ivanov, Main Astronomical Observatory (Ukraine); Yuiy Kalinnikov, VNIIFTRI (Russian Federation)
    On demand | Presented live 6 April 2022
    In 2019, the Indian Space Research Organisation (ISRO) issued an Announcement of Opportunity to the international science community for space-based experiments to study Venus onboard ISRO's planned Venus Orbiter Mission. The joint proposal of the Space Research Institute (Russia) and Laboratoire Atmosphères, Observations Spatiales, UVSQ Université Paris-Saclay (France) was selected by the ISRO mission program committee. The VIRAL (Venus InfraRed Atmospheric gases Linker) project proposes a remote-sensing infrared (IR) spectrometer to provide first-class information on the composition and structure of the atmosphere at the top of and above the cloud layer of Venus. VIRAL leverages the legacy of a generation of instruments that have achieved very high detection performance in a compact, lightweight, and easy-to-design package. VIRAL will cover the IR range from 2.3 to 4.3 μm and achieve high vertical resolution (with a footprint of <1 km at the limb, depending on the mission orbital configuration), allowing retrieval of the layering of the Venusian upper atmosphere and its composition.
    12138-52
    Author(s): Sergei Zenevich, Space Research Institute (Russian Federation); Iskander Sh. Gazizov, Moscow Institute of Physics & Technology (Russian Federation), Space Research Institute (Russian Federation); Maxim V. Spiridonov, Space Research Institute (Russian Federation); Alexander V. Rodin, Moscow Institute of Physics & Technology (Russian Federation)
    On demand
    The unusual dynamics of the Venus atmosphere remains one of the most intriguing puzzles of our sister planet. IVOLGA (Russian for the oriole bird, and an abbreviation of “Fiber Infrared Heterodyne Analyzer”) will be the first-ever spaceborne instrument to measure transmittance spectra of the Venus atmosphere with ultra-high spectral resolution (≥ 1 000 000) in the near-infrared. The instrument's capability to observe the fully resolved shape of individual lines in the CO2 rovibrational bands at 1.6 µm opens an opportunity to retrieve information about the chemical composition, wind velocity, and temperature of the sounded atmospheric layers. Observations will provide data on wind and temperature profiles in the altitude range of 75-140 km with high vertical resolution by sounding the atmosphere of Venus in solar occultation mode. The instrument, based on commercial telecommunication optical and electronic components, is a compact, affordable, and reliable unit planned for launch onboard the Venus Orbiter Mission of the Indian Space Research Organisation (ISRO).
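    A back-of-the-envelope check of the resolution claim (an editor's illustration with illustrative numbers): at a resolving power of 10^6, one spectral element corresponds to a Doppler velocity of roughly c/R ≈ 300 m/s, and line-centroid fitting reaches well below one element, which is what makes wind retrieval plausible.

```python
# Resolving power vs. Doppler velocity at 1.6 um; purely illustrative.
c = 299_792_458.0        # speed of light, m/s
R = 1_000_000            # spectral resolving power (lambda / d_lambda)
lam = 1.6e-6             # CO2 band wavelength, m

d_lam = lam / R          # width of one resolution element
v_element = c / R        # velocity equivalent of one element
print(f"element: {d_lam * 1e12:.1f} pm ~ {v_element:.0f} m/s")

v_wind = 100.0           # example wind speed, m/s
print(f"Doppler shift for {v_wind:.0f} m/s: {lam * v_wind / c * 1e12:.3f} pm")
```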
    Hot Topics III
    In person: 7 April 2022 • 09:00 - 10:35 CEST | Schweitzer Auditorium, Niveau/Level 0
    12136-500
    Author(s): Sylvain Gigan, Lab. Kastler Brossel (France)
    In person: 7 April 2022 • 09:05 - 09:50 CEST | Schweitzer Auditorium, Niveau/Level 0
    Light propagation in complex media, such as paint, clouds, or biological tissues, is a very challenging phenomenon, encompassing fundamental aspects of mesoscopic and statistical physics. It is also of utmost applied interest, in particular for imaging. Wavefront shaping has revolutionized the ability to image through or in complex media. I will discuss how computational tools and machine learning allow wavefront shaping to be developed further for imaging applications, and conversely how the same complexity can be leveraged for optical computing tasks.
    PC12130-500
    Author(s): Isabelle Staude, Friedrich-Schiller-Univ. Jena (Germany)
    On demand | Presented live 7 April 2022
    Optical metasurfaces, two-dimensional arrangements of designed nanoresonators, offer unique opportunities for controlling light fields and for tailoring the interaction of light with nanoscale matter. Due to their flat nature, their integration with two-dimensional materials consisting of only a single molecular layer is particularly interesting. This talk reviews our recent and ongoing activities in hybridizing optical metasurfaces composed of resonant metallic or dielectric building blocks with different types of two-dimensional materials, including monolayer transition metal dichalcogenides (2D-TMDs) and carbon nanomembranes (CNMs). On the one hand, we will show that CNMs can serve as mechanically stable substrates for free-standing metasurface architectures of nanoscale thickness. On the other hand, we will demonstrate that the ability of the nanoresonators to concentrate light into nanoscale volumes can be utilized to carefully control the properties, such as pattern and polarization, of light emitted by 2D-TMDs via photoluminescence or nonlinear processes. Here, the ability of tailored nanostructures to interact selectively with exciton populations located at inequivalent conduction band minima at the corners of the 2D-TMD's Brillouin zone is of particular interest. Such a selective interaction is an important prerequisite for the realization of future miniaturized valleytronic devices.
    Break
    Coffee Break 10:35 - 11:00
    Session 5: Standardization of Plenoptic Coding and Media Security Frameworks
    In person: 7 April 2022 • 11:00 - 12:20 CEST | Salon 1, Niveau/Level 0
    Session Chair: Peter Schelkens, Vrije Univ. Brussel (Belgium)
    12138-22
    Author(s): Cristian Perra, Univ. degli Studi di Cagliari (Italy); Saeed Mahmoudpour, Vrije Univ. Brussel (Belgium); Carla Pagliari, Instituto Militar de Engenharia (Brazil)
    On demand | Presented live 7 April 2022
    The plenoptic image coding system (JPEG Pleno) aims at representing different imaging modalities such as texture-plus-depth, light field, point cloud, and holography. The standard is currently composed of four parts. JPEG Pleno Part 1 specifies a framework that supports the capture, representation, and exchange of plenoptic imaging modalities: point clouds, light fields, and holography. JPEG Pleno Part 2 specifies the codestream syntax of light field data and associated metadata. JPEG Pleno Part 3 establishes the conformance testing methodology to guarantee that an application complies with the JPEG Pleno Plenoptic image coding system standard. JPEG Pleno Part 4 provides the reference decoder software capable of decoding codestreams that conform to Part 1 and Part 2 together with a sample encoder software. Light field quality assessment strategies have been used to assess the responses to the JPEG Pleno Call for Proposals on Light Field Coding. Nevertheless, the emergence of new light field codecs, including the JPEG Pleno Part 2, as well as new light field datasets and light field display and acquisition technologies brought new challenges to light field quality assessment solutions. The main goal is to specify objective and subjective quality assessment solutions for compressed light field data, as there is (to the best of our knowledge) no standard methodology for quality assessment of light field data and/or any plenoptic content. Finally, as one of the possible directions within future JPEG Pleno standardization activities, JPEG Pleno is currently exploring state-of-the-art light field coding architectures exploiting learning-based approaches to assess the potential of these coding approaches in terms of compression efficiency. This paper reports the status of the JPEG Pleno standard for the light field imaging modality and discusses current activities towards future developments of the standard.
    12138-23
    Author(s): Antonio M. G. Pinheiro, Joao Prazeres, Univ. da Beira Interior (Portugal); Antonin Gilles, b<>com (France); Tobias Birnbaum, Raees K. Kizhakkumkara Muhamad, Peter Schelkens, Vrije Univ. Brussel (Belgium)
    On demand | Presented live 7 April 2022
    The JPEG committee started the definition of a new standard for holographic content coding under the project JPEG Pleno (ISO/IEC 21794). As a first standardization effort targeting holographic content, multiple challenges were faced, notably selecting appropriate testing content, choosing the type of anchors to be used for comparison with proponents' proposals, and deciding how to evaluate the quality of this type of content. This paper describes the development and options of the Common Test Conditions (CTC) defined to evaluate the responses to the call for proposals. Applying standard coding technology developed for image/video compression to holographic content raised several complicated issues related to appropriate anchor selection and the generation of test content. Furthermore, knowledge of how to evaluate compression methodologies for holographic data was very limited until recently. Relevant studies typically use signal fidelity for quality evaluation, computing the SNR or the PSNR between the original signal and its decoded versions in the hologram plane or the object plane, respectively. Although signal fidelity is always a measure of the ability of the compression technology to recreate a similar signal, it is usually not the best metric for perceptual evaluation. This paper describes the methodologies defined in the CTC for the perceptual and objective quality evaluation of holographic data.
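    For concreteness, a minimal sketch of the fidelity metric discussed above follows: PSNR between an original and a decoded hologram. In the CTC this is evaluated in the hologram plane or, after numerical propagation, in the object plane; the propagation step and the noise model below are placeholders.

```python
# PSNR fidelity sketch for hologram compression evaluation; the decoded
# hologram here is simulated by adding small noise.
import numpy as np

def psnr(ref, test, peak):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(5)
hologram = rng.integers(0, 256, (256, 256)).astype(np.uint8)
decoded = np.clip(hologram + rng.normal(0, 2, hologram.shape), 0, 255)

print(f"hologram-plane PSNR: {psnr(hologram, decoded, peak=255):.2f} dB")
```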
    12138-24
    Author(s): Tobias Birnbaum, David Blinder, Raees K. Kizhakkumkara Muhamad, Vrije Univ. Brussel (Belgium); Antonin Gilles, b<>com (France); Cristian Perra, Univ. degli Studi di Cagliari (Italy); Tomasz Kozacki, Warsaw University of Technology (Poland); Peter Schelkens, Vrije Univ. Brussel (Belgium)
    On demand | Presented live 7 April 2022
    There exists a multitude of methods and processing steps for the numerical reconstruction of digital holograms. Because these are not standardized, most research groups follow their own best practices, making it challenging to compare numerical results across groups. Meanwhile, JPEG Pleno holography seeks to define a new standard for the compression of digital holograms. Numerical reconstructions are an essential tool in this research because they provide access to the holographically encoded 3D scenes. In this paper, we outline the available modules of the numerical reconstruction software developed for the purpose of this standardization. A software package was defined that is able to reconstruct all holograms of the JPEG Pleno digital hologram database and has been used to evaluate several core experiments. This includes Fresnel holograms recorded or generated in Fresnel or (lensless) Fourier geometries, as well as near-field holograms, which require rigorous propagation through the angular spectrum method. Specific design choices are explained and highlighted. We believe that providing information on the current consensus on the numerical reconstruction software package will allow other research groups to replicate the results of JPEG Pleno and improve the comparability of results.
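    The angular spectrum propagation named above is compact enough to sketch; the version below is a minimal scalar implementation with placeholder sampling, wavelength, and distance, and omits the band-limiting and zero-padding that production reconstruction software applies.

```python
# Minimal angular spectrum method (ASM): H(fx,fy) = exp(i*2*pi*z*
# sqrt(1/lambda^2 - fx^2 - fy^2)), evanescent components dropped.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field over distance z."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

holo = np.ones((512, 512), dtype=complex)    # stand-in hologram field
obj = angular_spectrum(holo, wavelength=633e-9, pitch=4e-6, z=0.05)
print(f"peak object-plane amplitude: {np.abs(obj).max():.3f}")
```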
    12138-25
    Author(s): Frederik Temmermans, Vrije Univ. Brussel (Belgium); Deepayan Bhowmik, Univ. of Stirling (United Kingdom); Fernando Pereira, Instituto de Telecomunicações (Portugal); Touradj Ebrahimi, Ecole Polytechnique Fédérale de Lausanne (Switzerland)
    On demand | Presented live 7 April 2022
    Advances in deep neural networks (DNN) and distributed ledger technology (DLT) have shown major influence on media security, authenticity and privacy. Current deepfake techniques can produce near realistic media content which can be used in both good and bad intended use cases. At the same time, DLTs are finding their way in the industry as fair, transparent and reliable means for content distribution. In particular non-fungible tokens (NFTs) are emerging in the digital art market. However, such new developments also introduce new challenges, including the need for robust and reliable metadata, a mechanism to secure the media and associated metadata, means to verify authenticity and interoperability between various stakeholders. This paper identifies emerging challenges in fake media and NFT, and proposes a novel framework to effectively cope with secure media applications allowing for a structured, systematic, and interoperable solution. The framework relies on an architecture that is modular, flexible, extensible, and scalable in the sense that it can be implemented in both lighter as well as more feature-rich and more complex configurations depending on the underlying application, needed features and available resources, while enabling products and services in various ecosystems with desired trust and security capabilities. The framework is inspired by activities and developments within JPEG standardisation related to security, authenticity & privacy.
    Break
    Lunch Break 12:20 - 13:40
    Session 6: Displays and Projections
    In person: 7 April 2022 • 13:40 - 15:40 CEST | Salon 1, Niveau/Level 0
    Session Chair: David Blinder, Vrije Univ. Brussel (Belgium)
    PC12138-26
    Author(s): Michal Makowski, Joanna Starobrat, Andrzej Kolodziejczyk, Maciej Sypek, Adam Kowalczyk, Jaroslaw Suszek, Warsaw Univ. of Technology (Poland)
    On demand | Presented live 7 April 2022
    Pixelated light modulators, including LCoS spatial light modulators, inevitably suffer from the formation of spurious diffraction orders in the far field. This problem is essential in holographic image projection, where copies of the central useful image are created, distracting the viewer and taking away a solid portion of the illumination energy. Although there is no straightforward way to break the regularity of the Cartesian SLM array, its individual pixels can be engineered to change the shape of the far-field intensity envelope, which, being the Fourier transform of the single-pixel shape, can be altered by physically applying an apodization mask directly to the SLM surface. The mask effectively changes the transmittance of any given SLM pixel, opening a way to suppress selected image duplicates in the far field. In this work we present numerical simulations showing the possibility of localizing the light energy into the two lower orders while heavily suppressing the upper-order duplicates. Moreover, we present the experimental validation performed with a binary dithered electron-beam-written filter, which demonstrated a significant decrease in the number of spurious image duplicates visible to the human eye. The positioning of the mask was found to be non-critical, as opposed to the rotational precision; nevertheless, such a solution could potentially be implemented as an add-on element providing improved functionality for pixelated light modulators of any kind. As the Fourier transform of amplitude-only masks is inevitably symmetrical, the possibility of attenuating only a single diffraction order is limited. For this reason, we investigate phase-only and mixed filters that provide more degrees of freedom in the design and tailoring of the far-field envelope, leading to the demonstration of projection of a single image with all other orders highly suppressed.
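    To make the central idea concrete, the sketch below computes the far-field envelope as the Fourier transform of a single pixel, with and without a hypothetical cosine amplitude taper; the pixel geometry and mask profile are arbitrary, and the printed values only illustrate how apodization redistributes energy between diffraction orders.

```python
# Far-field envelope of one SLM pixel, plain vs. cosine-apodized; geometry
# and mask are arbitrary illustrations, not the paper's design.
import numpy as np

N, fill = 256, 0.9                        # samples per pixel pitch, fill factor
x = (np.arange(N) - N // 2) / N           # one pitch, normalized to 1
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < fill / 2) & (np.abs(Y) < fill / 2)).astype(float)

# Hypothetical amplitude apodization: cosine taper over the open area.
mask = np.cos(np.pi * X / fill) * np.cos(np.pi * Y / fill) * aperture

for name, pupil in (("plain   ", aperture), ("apodized", mask)):
    env = np.abs(np.fft.fftshift(np.fft.fft2(pupil, s=(4 * N, 4 * N)))) ** 2
    env /= env.max()
    # FFT bin spacing is 1/4 cycle per pitch, so order k sits 4*k bins off DC.
    print(name, f"order 1: {env[2 * N, 2 * N + 4]:.4f}",
          f"order 2: {env[2 * N, 2 * N + 8]:.5f}")
```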
    PC12138-27
    Author(s): Wang Fan, Tomoyoshi Shimobaba, Tomoyoshi Ito, Takashi Kakue, Chiba Univ. (Japan)
    In person: 7 April 2022 • 14:00 - 14:20 CEST | Salon 1, Niveau/Level 0
    This study proposes a comprehensive acceleration method for calculating polygon holograms. By integrating the controllable-energy angular spectrum method and the analytical spectrum calculation method, the proposed scheme can be more than 20 times faster than the previous method with the highest reconstructed image quality, while maintaining almost the same quality. Several practical refinements in evaluating the analytical expressions further improve efficiency. A flat illumination model is applied to render the shading of 3D objects, reconstructing a realistic-looking character without increasing computational effort. The computational efficiency and 3D reconstruction results of the proposed method clearly surpass those of all previous polygon-based methods.
    12138-29
    Author(s): Tomasz Kozacki, Moncy Sajeev Idicula, Maksymilian Chlipala, Juan Martínez-Carranza, Warsaw Univ. of Technology (Poland)
    On demand | Presented live 7 April 2022
    The aim of 3D display technology is to reconstruct an object that can be seen from any viewpoint. The viewer should be free to move their eyes in any direction and observe the corresponding perspectives of the object. Digital holography constitutes the best-known imaging framework for this purpose. Methods that manipulate the geometry of the 3D image by linearly transforming the hologram data are very efficient. Such techniques are also desirable for wide-angle holographic displays, especially when there is no direct access to the reconstructed image due to the lack of numerical hologram reconstruction techniques, so methods based on manipulating the holographic image directly cannot be applied. In this paper, we investigate theoretically, numerically, and experimentally image manipulations based on hologram stretching and the addition of spherical curvature for the case of a wide-angle near-eye holographic display. We investigate the cases of transverse and axial image shift and image magnification. We show that one feature of a wide-angle display is local paraxiality: when manipulating the hologram, a sharp image is always obtained, but at a different location than predicted by the paraxial approach. We prove this analytically. The effect is similar to the field curvature aberration, where a sharp image is obtained on the curved Petzval surface. Each considered case of image manipulation is investigated theoretically and verified with numerical simulations. Finally, the applicability of the investigated image manipulation methods is demonstrated experimentally with hologram reconstructions in a wide-angle holographic display.
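    The two linear hologram manipulations studied above can be sketched in a few lines: multiplying by a spherical phase factor shifts the image axially (like adding a thin lens), and stretching the hologram grid magnifies it, moving a paraxial image plane from z to s² z. All parameter values below are arbitrary, and the paper's wide-angle (locally paraxial) corrections are not reproduced.

```python
# Sketch of hologram manipulation by spherical phase and grid stretch;
# parameters are illustrative only.
import numpy as np

lam, pitch, n = 633e-9, 8e-6, 512
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x)

hologram = np.ones((n, n), dtype=complex)     # stand-in complex hologram

f = 0.2                                       # focal length of added curvature (m)
lens_phase = np.exp(-1j * np.pi / (lam * f) * (X ** 2 + Y ** 2))
shifted = hologram * lens_phase               # axially shifted reconstruction

s, z = 1.05, 0.5                              # stretch factor, original plane (m)
print(f"stretch {s} moves a plane from z = {z} m to z' = {s**2 * z:.3f} m")
```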
    PC12138-30
    CANCELED: Laser light field display
    In person: 7 April 2022 • 14:40 - 15:00 CEST | Salon 1, Niveau/Level 0
    Light field displays offer an unparalleled 3D experience, but suffer from poor image quality, low resolution, and a limited field of view (FOV). We show how a laser-lit backlight developed by VitreaLab, comprising a photonic chip that distributes light into an array of millions of tightly confined laser beams, overcomes these shortcomings. These beams illuminate, one by one, the subpixels of an LCD before hitting specially designed diffractive optical elements (DOEs) that steer the pixel beams towards precise viewing positions. This design provides a continuous set of views at a variable viewing distance, produces a 10x improvement in resolution compared to lenslet-based displays, and features a FOV > 90°. [This work has been previously submitted to SPIE Photonics West 2022]
    12138-31
    Author(s): Fabian Rainouard, CEA-LETI (France), Univ. de Haute-Alsace (France), Lab. Jean Kuntzmann (France); Matthias Colard, CEA-LETI (France), Univ. de Haute-Alsace (France); Olivier Haeberlé, Univ. de Haute-Alsace (France); Edouard Oudet, Lab. Jean Kuntzmann (France); Christophe Martinez, CEA-LETI (France)
    On demand | Presented live 7 April 2022
    This paper presents a method to optimize the design of waveguides and electrodes for a retinal projector. The objective is to increase the number of pixels in the final image without jeopardizing the quality of each pixel. Our new mathematical model represents waveguides and electrodes as a succession of segments with constant absolute angle. Thanks to the constant gap between the curves, we increase the number of pixels by a factor of 3.5 for an equivalent self-focusing effect compared to our previous model. We use B-spline curves to approximate the succession of segments and apply a dedicated method to find their intersections.
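    The two geometric steps mentioned above are easy to sketch: approximate a segment chain by a smoothing B-spline and locate crossings between two such curves by dense sampling. The segment data and tolerance are made up, and the authors' dedicated intersection method is not reproduced.

```python
# B-spline approximation of segment chains plus a brute-force crossing
# search; data and parameters are hypothetical.
import numpy as np
from scipy.interpolate import splprep, splev

# Two chains of segments with slowly varying angle (waveguide-like curves).
t = np.linspace(0, 1, 30)
chain1 = np.vstack([t, 0.10 * np.sin(4 * t)])
chain2 = np.vstack([t, 0.08 * np.cos(4 * t) - 0.04])

tck1, _ = splprep(chain1, s=1e-5)    # smoothing B-spline through chain 1
tck2, _ = splprep(chain2, s=1e-5)

u = np.linspace(0, 1, 500)
p1 = np.array(splev(u, tck1)).T      # sampled points on each curve
p2 = np.array(splev(u, tck2)).T

# Closest pair of sample points approximates the curve intersection.
d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
i, j = np.unravel_index(d.argmin(), d.shape)
print(f"closest approach {d[i, j]:.2e} near x = {p1[i, 0]:.3f}")
```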
    12138-32
    Author(s): Eduard R. Muslimov, ASTRON (Netherlands); Damir Akhmetov, Danila Kharitonov, Ilya Guskov, Nadezhda K. Pavlycheva, Kazan National Research Technical Univ. named after A. N. Tupolev - KAI (Russian Federation)
    On demand | Presented live 7 April 2022
    Waveguide-type displays with volume phase holograms are notable for their small size, large eyebox, and high transmission in both the projected-image and see-through channels. However, as the aperture, field of view, and working spectral range grow, the variation of the hologram replay conditions across its surface increases and sets a performance limitation in terms of resolution and diffraction efficiency. In order to overcome this, we propose to use a composite hologram, i.e. a volume phase grating split into sub-apertures with independently varying parameters such as the fringe tilt, the fringe pattern, the holographic layer thickness, and the modulation depth. This approach drastically increases the number of free variables in the design without introducing any additional optical components. We present the optical design and modelling algorithm for a display with a composite outcoupling hologram. The algorithm relies on simultaneous raytracing through auxiliary optical systems and diffraction efficiency computations with the coupled-wave theory equations. We demonstrate its application with an example of a polychromatic display with an extended field of view. It operates in the spectral range of 480-620 nm and covers a field of 6x8 degrees with an 8 mm exit pupil diameter. The output hologram has a uniformized diffraction efficiency over the field and the spectral range, varying from 26.0% to 65.3%, thus providing a throughput of around 50% for the projected image and its optimal overlap with the see-through scene. At the same time, the image quality is high and uniform, with the PTV angular size of the projected spot better than 1.88' for the entire image. We compare the results for the initial design using a single classical grating with those for a composite hologram comprising 4 sub-apertures and show the achieved gain in performance: the gain in diffraction efficiency is up to 13.8% and the aberration improvement is 0.4'. We also demonstrate that the computed parameters are feasible and provide a brief sensitivity analysis for them.
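    As context for the diffraction-efficiency figures quoted above, the sketch below evaluates Kogelnik's coupled-wave estimate for a lossless volume phase transmission grating at Bragg incidence, the kind of formula such a design algorithm evaluates per sub-aperture; the layer parameters are illustrative, not the paper's design values.

```python
# Kogelnik coupled-wave estimate, on-Bragg, lossless phase transmission
# grating: eta = sin^2(pi * dn * d / (lambda * cos(theta))).
import numpy as np

def kogelnik_de(delta_n, thickness, wavelength, theta_bragg):
    nu = np.pi * delta_n * thickness / (wavelength * np.cos(theta_bragg))
    return np.sin(nu) ** 2

lam = 550e-9                      # mid of the 480-620 nm display band
for d_um in (5.0, 10.0, 15.0):    # holographic layer thickness (illustrative)
    de = kogelnik_de(delta_n=0.02, thickness=d_um * 1e-6,
                     wavelength=lam, theta_bragg=np.deg2rad(20.0))
    print(f"d = {d_um:4.1f} um -> DE = {de:.1%}")
```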
    Conference Chair
    Vrije Univ. Brussel (Belgium)
    Conference Chair
    Warsaw Univ. of Technology (Poland)
    Program Committee
    Olivier Aubreton
    Univ. de Bourgogne (France)
    Program Committee
    Teledyne DALSA (Netherlands)
    Program Committee
    Daping Chu
    Univ. of Cambridge (United Kingdom)
    Program Committee
    Consejo Superior de Investigaciones Científicas (Spain)
    Program Committee
    Otto-von-Guericke-Univ. Magdeburg (Germany)
    Program Committee
    Marek Domanski
    Univ. of Poznan (Poland)
    Program Committee
    Ecole Polytechnique Fédérale de Lausanne (Switzerland)
    Program Committee
    Univ. Nacional Autónoma de México (Mexico)
    Program Committee
    Univ. de València (Spain)
    Program Committee
    Laurent Jacques
    Univ. Catholique de Louvain (Belgium)
    Program Committee
    RT-RK Institute for Computer Based Systems (Serbia)
    Program Committee
    VTT Technical Research Ctr. of Finland (Finland)
    Program Committee
    Univ. Politècnica de Catalunya (Spain)
    Program Committee
    Cristian Perra
    Univ. degli Studi di Cagliari (Italy)
    Program Committee
    Canon Information Systems Research (Australia)
    Program Committee
    Oculus VR, LLC (United States)
    Program Committee
    Nokia Research Ctr. (Finland)
    Program Committee
    Chiba Univ. (Japan)
    Program Committee
    Lea Skorin-Kapov
    Univ. of Zagreb (Croatia)
    Program Committee
    National Univ. of Singapore (Singapore)
    Program Committee
    Univ. of Patras (Greece)
    Program Committee
    AGT Associates (United States)
    Program Committee
    Univ. de Bourgogne (France)
    Program Committee
    FH OÖ Forschungs & Entwicklungs GmbH (Austria)