This conference addresses image acquisition and image exploitation topics for solving visual inspection and machine vision tasks automatically. Since well-designed approaches for acquiring images constitute the crucial foundation for successfully accomplishing inspection tasks, the conference focuses in particular on illumination, optics, sensors, and the complete acquisition setup composed of these components. Moreover, to extract the inspection-relevant information from images, signal processing and exploitation methods that account for the physical formation of the images are of great interest. Since many inspection tasks cannot be solved from a single image, it is frequently necessary to acquire sequences of images that must be fused in an adequate manner to reach a final inspection decision. The question is therefore not only how to acquire appropriate single images, but also how to acquire controlled image series that comprise sufficient information with respect to the inspection task, and how such image series can be exploited efficiently.

General items
  • automated visual inspection
  • machine vision
  • robust, high-performance inspection
  • visual quality monitoring and control
  • image acquisition and exploitation.

Methodology
  • image data based on diverse optical properties of materials (reflectance, roughness, spectrum, complex refraction index, etc.)
  • illumination techniques
  • deflectometry
  • mathematical models and methods
  • image series, image fusion and active vision
  • image processing and exploitation methods
  • detection and classification
  • physically-based image formation models
  • pattern recognition
  • light field methods
  • event-based vision
  • machine learning for automated visual inspection.

Applications
  • automated inspection of industrially produced goods
  • material recognition and verification
  • detection of surface defects
  • image-based measurement and control
  • inspection of specular surfaces
  • safety, security, and biometrics
  • medicine and biology
  • other application fields.
    Conference 12623

    Automated Visual Inspection and Machine Vision V

    28 June 2023 | ICM Room 12b
    • 1: Imaging
    • Optical Metrology Plenary Session
    • Posters-Wednesday
    • 2: Synthetic Data for Machine Learning
    • 3: Machine Learning and Classification
    • 4: Image-based Measurement Technology
    • 5: Image-based Position Measurement
    Session 1: Imaging
    28 June 2023 • 08:30 - 09:50 CEST | ICM Room 12b
    Session Chair: Michael M. Heizmann, Karlsruher Institut für Technologie (Germany)
    12623-1
    Author(s): Bernd Jähne, Sebastian Leyer, Roman Stewing, Kerstin E. Krall, Ruprecht-Karls-Univ. Heidelberg (Germany)
    28 June 2023 • 08:30 - 08:50 CEST | ICM Room 12b
    Thermographic imaging is applied to measure the shear flow at a wind-driven water surface, an essential parameter for understanding the exchange of momentum, heat, and mass between the atmosphere and the oceans. Only a thin line, less than 1 mm thick and perpendicular to the wind direction, is heated, with a penetration depth matched to the thickness of the shear layer at the water surface. With pulsed irradiation the shear can be estimated, while continuous irradiation is suitable for measuring the orbital velocities of the wind waves. Motion fields and shear are computed by a generalized optical flow approach.
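    As a rough illustration of the exploitation step, the sketch below estimates a dense motion field from two thermographic frames and differentiates it to obtain shear. OpenCV's Farnebäck optical flow stands in for the authors' generalized approach; the file names and calibration constants are hypothetical.

```python
import cv2
import numpy as np

# Two consecutive thermographic frames (grayscale); file names are placeholders.
frame0 = cv2.imread("thermo_0000.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("thermo_0001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: flow[y, x] = (u, v) displacement in pixels per frame.
flow = cv2.calcOpticalFlowFarneback(
    frame0, frame1, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

u = flow[..., 0]  # along-wind velocity component (pixels/frame)

# Shear is the vertical gradient of the along-wind velocity, du/dy.
# Pixel pitch [m/px] and frame rate [1/s] are assumed calibration values.
pixel_pitch, frame_rate = 1e-4, 100.0
du_dy = np.gradient(u * pixel_pitch * frame_rate, pixel_pitch, axis=0)
print("mean shear [1/s]:", du_dy.mean())
```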
    12623-2
    Author(s): Bernd Jähne, Dennis Hofmann, Ruprecht-Karls-Univ. Heidelberg (Germany)
    28 June 2023 • 08:50 - 09:10 CEST | ICM Room 12b
    A new approach is described to image near-surface water-side concentration fields at the air-water interface and wind waves at the Heidelberg Aeolotron wind-wave tank, in order to study the transport mechanisms of air-sea gas exchange. The concentration fields are made visible by fluorescence imaging with 1-propylamine and pyranine, stimulated by a 450 nm laser diode array. Light field imaging with seven cameras also retrieves the 3-D shape of the water surface. An additional laser line with 410 nm laser diodes is used to measure wind wave height directly and for precise camera alignment.
    12623-17
    Author(s): Gerard de Mas Giménez, Univ. Politècnica de Catalunya (Spain); Pablo García-Gómez, BEAMAGINE S.L. (Spain); Josep Ramon Casas, Santiago Royo, Univ. Politècnica de Catalunya (Spain)
    On demand | Presented live 28 June 2023
    Fog dramatically compromises the overall visibility of a scene, critically affecting features such as object illumination, contrast, and contours. The decrease in visibility degrades the performance of computer vision algorithms such as pattern recognition and segmentation, some of which are highly relevant to decision-making in autonomous vehicles. Many dehazing methods have been proposed. However, to the best of our knowledge, all currently used metrics either compare the defogged image to a ground truth, usually the same scene on a non-foggy day, or estimate physical parameters from the scene. This hinders progress in the field: obtaining proper ground-truth images is not always possible, and is costly and time-consuming because physical parameters depend greatly on scene conditions. This work tackles this issue by proposing a real-time defogging network that takes only an RGB image of the fogged scene as input, together with a contour-based metric for single-image defogging evaluation that works even when ground truth is not available, which is the most common situation. The proposed metric requires only the original hazy image and the image after defogging. We trained our network using a novel two-stage pipeline with the DENSE dataset, and compared our method and metric with currently used metrics and other defogging techniques on the NTIRE 2018 defogging challenge to prove their effectiveness.
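    The authors' exact contour-based metric is not specified in the abstract. As a hedged illustration of the idea, the sketch below scores a defogging result from the hazy input and the defogged output alone, using the gain in strong-gradient (contour) pixels as a no-reference proxy; the threshold is an assumption.

```python
import cv2
import numpy as np

def contour_gain(hazy_bgr, defogged_bgr, threshold=20.0):
    """Illustrative no-reference score: ratio of strong-edge pixels
    after vs. before defogging (higher = more recovered contours)."""
    def edge_density(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = np.sqrt(gx**2 + gy**2)
        return (mag > threshold).mean()
    return edge_density(defogged_bgr) / max(edge_density(hazy_bgr), 1e-6)
```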
    12623-4
    Author(s): Denise Tellbach, Rahul Bhattacharyya, Sanjay E. Sarma, Massachusetts Institute of Technology (United States)
    On demand | Presented live 28 June 2023
    Increasing demand for vaccines, together with the complications presented by freeze-drying, creates a need for automated quality inspection. We propose a novel use of polarization imaging to improve the efficacy of automated inspection by providing additional features for classification decisions. To test our hypothesis, we compare the gray-value distributions of defects with typical product appearance variation as a measure of distinction. We find that the differences between the gray-value means of defects and typical product variation are not statistically significant for RGB imaging, but are statistically significant for polarization imaging, using a two-sample t-test at an alpha level of 0.01.
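    A minimal sketch of the reported statistical test, using SciPy's two-sample t-test at alpha = 0.01; the gray-value samples here are synthetic placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical gray-value samples: defect pixels vs. typical product variation.
defect_vals = np.random.default_rng(0).normal(120, 8, 500)   # placeholder data
product_vals = np.random.default_rng(1).normal(118, 8, 500)  # placeholder data

t_stat, p_value = stats.ttest_ind(defect_vals, product_vals)
alpha = 0.01
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"significant at alpha={alpha}: {p_value < alpha}")
```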
    12623-3
    CANCELED: Light field inspection
    Author(s): Christian Kludt, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    28 June 2023 • 09:50 CEST | ICM Room 12b
    In automated visual inspection, illumination plays a crucial role as the very first component in the image processing chain. This is particularly important when inspecting non-diffuse, i.e. transparent or specular, objects that change the directional distribution of the incident light in various ways. Such objects are usually inspected manually by specially trained personnel using a collimated light source while varying the direction of illumination. We mimic this approach, but instead of using complicated or expensive optical components, we deploy inexpensive hardware with no moving parts in the form of a light field display. We reliably reveal deviations from the learned, defect-free object appearance with maximum contrast. In contrast to conventional approaches, the light field illumination controls the distribution of luminance not only with regard to position, but also with regard to direction, thus functioning as a highly tunable lighting device.
    Break
    Coffee Break 09:50 - 10:30
    Optical Metrology Plenary Session
    28 June 2023 • 10:30 - 11:25 CEST | ICM, Saal 1
    10:30 - 10:40 CEST
    Welcome Address and Plenary Speaker Introduction

    Marc P. Georges, Liège Univ. (Belgium)
    Jörg Seewig, Technische Univ. Kaiserslautern (Germany)
    2023 Symposium Chairs
    PC12622-500
    Remote photonic medicine (Plenary Presentation)
    Author(s): Zeev Zalevsky, Bar-Ilan Univ. (Israel)
    28 June 2023 • 10:40 - 11:25 CEST | ICM, Saal 1
    I will present a photonic sensor that can be used for remote sensing of many biomedical parameters simultaneously and continuously. The technology is based upon illuminating a surface with a laser and then using an imaging camera to perform temporal and spatial tracking of secondary speckle patterns, in order to obtain nanometric-accuracy estimation of the movement of the back-reflecting surface. The capability of sensing these movements with nanometric precision allows connecting the movement with remote bio-sensing and medical diagnosis capabilities. The proposed technology has already been applied for remote and continuous estimation of vital bio-signs (such as heart rate, respiration, blood pulse pressure, and intraocular pressure), for molecular sensing of chemicals in the bloodstream (such as estimation of alcohol, glucose, and lactate concentrations, blood coagulation, and oximetry), as well as for sensing hemodynamic characteristics such as blood flow related to brain activity. The sensor can be used for early diagnosis of diseases such as otitis, melanoma, and breast cancer; recently it was tested in large-scale clinical trials and provided highly efficient medical diagnosis capabilities for cardiopulmonary diseases. Its capability was also tested and verified in providing remote high-quality characterization of brain activity.
    Break
    Lunch Break 11:25 - 12:30
    Posters-Wednesday
    28 June 2023 • 12:30 - 13:30 CEST | ICM, Hall B0
    Poster authors, please set up posters between the morning coffee break and the end of lunch break on Wednesday. Plan to stand by your poster to discuss it with session attendees during the poster session. Remove your poster following the poster session conclusion as posters left on the boards will be discarded.
    12623-18
    Author(s): Dahyun Park, Jaehyeon Cho, So-myeong Ahn, Kyunghwan Moon, Youngmin Hwang, Hyojin Lee, HB Technology (Korea, Republic of)
    On demand | Presented live 28 June 2023
    Display and semiconductor manufacturing require inspection and repair process steps to increase the final product yield. To this end, display and semiconductor images taken with an optical camera must be classified as normal or defective. This is a simple binary classification problem, but the repair process requires a more fine-grained classification. To automate this with deep learning, sufficient training data must be collected for each class. However, for certain defect classes the deep learning model cannot obtain enough samples to train on. This greatly delays deployment of the classification algorithm in the field, which adversely affects mass production. In this paper, images of sparse defect classes are synthesized using a deep learning method, contributing to improved performance of the final classification model. In addition, experiments confirm that the artificially created images have the same shape and characteristics as real images of the same class.
    12623-20
    Author(s): Motoki Okazaki, F. C. C. Co., Ltd. (Japan); Ryohei Hanayama, The Graduate School for the Creation of New Photonics Industries (Japan)
    On demand | Presented live 28 June 2023
    CNN-based visual inspection for defect detection is discussed in this paper. Convolutional neural networks (CNNs), an AI-based image recognition technology, show good performance and are adopted to automate appearance inspections. However, it is difficult for CNN-based visual inspection to reduce both missed defects and over-detection at the same time. In human visual inspection, the detection process differs between normal and defective products so as to suppress missed defects: normal products are detected passively, only when none of the defect types applies, whereas defective products are detected actively, whenever any defect type may apply. We call this asymmetry in visual inspection, and we implement it in a CNN-based inspection system. The developed system is based on a multi-class classification problem that distinguishes normal products, defective products, and the types of defects. The asymmetry is realized by adjusting the labels of each training sample to indicate the probability that the sample falls into each class. For normal products, probability is assigned only to the genuine class; this minimizes the chance that normal products fall into defect classes. For defective products, the correct defect class receives the highest probability and the other defect classes also receive some probability, but the normal class receives none; this prevents defects from being overlooked, by not eliminating the possibility of any kind of defect, and minimizes the chance that defective products are classified as normal. The system is thus trained to respond asymmetrically to normal and defective products. We name this the asymmetric label smoothing method and apply it to the visual inspection of clutch discs. In experiments, the miss rate was 1.65% with a conventional CNN method and 0.08% with the developed method, confirming the reduction in missed defects; suppression of over-detection was also confirmed.
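    A minimal sketch of how such asymmetric label smoothing targets could be constructed, following the description above; the class layout (index 0 = normal) and smoothing weight eps are assumptions.

```python
import numpy as np

def asymmetric_labels(true_class, n_defect_classes, eps=0.1):
    """Build a target vector for classes [normal, defect_1, ..., defect_K].
    Normal samples: hard one-hot (no probability leaks to defect classes).
    Defect samples: smoothed over defect classes only; normal stays 0."""
    k = n_defect_classes
    target = np.zeros(1 + k)
    if true_class == 0:                        # normal product
        target[0] = 1.0
    else:                                      # defective product
        target[1:] = eps / (k - 1) if k > 1 else 0.0
        target[true_class] = 1.0 - eps
    return target

print(asymmetric_labels(0, 4))  # normal: [1, 0, 0, 0, 0]
print(asymmetric_labels(2, 4))  # defect class 2: smoothed over defects only
```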
    12623-21
    Author(s): Shih-Yu Chen, Fong-Ji Tsai, National Yunlin Univ. of Science and Technology (Taiwan)
    28 June 2023 • 12:30 - 13:30 CEST | ICM, Hall B0
    Session 2: Synthetic Data for Machine Learning
    28 June 2023 • 13:30 - 14:30 CEST | ICM Room 12b
    Session Chair: Jürgen Beyerer, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    12623-5
    Author(s): Ole Schmedemann, Technische Univ. Hamburg-Harburg (Germany); Simon Schlodinski, Dirk Holst, Hamburg University of Technology (Germany); Thorsten Schüppstuhl, Technische Univ. Hamburg-Harburg (Germany)
    On demand | Presented live 28 June 2023
    Learning models from synthetic image data rendered from 3D models and applying them to real-world applications can reduce costs and improve performance when using deep learning for image processing in automated visual inspection tasks. However, sufficient generalisation from synthetic to real-world data is challenging, because synthetic samples only approximate the inherent structure of real-world images and lack image properties present in real-world data, a phenomenon called the domain gap. In this work, we propose to combine synthetic generation approaches with CycleGAN, a style transfer method based on generative adversarial networks (GANs). CycleGAN learns the inherent structure from real-world samples and adapts the synthetic data accordingly. We investigate how synthetic data can be adapted for a use case of visual inspection of automotive cast iron parts, and show that supervised deep object detectors trained on the adapted data can successfully generalise to real-world data and outperform object detectors trained on synthetic data alone. This demonstrates that generative domain adaptation helps to leverage synthetic data in deep learning-assisted inspection systems for automated visual inspection tasks.
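    A hedged sketch of the adaptation step: a pre-trained CycleGAN generator translates synthetic renderings into the real-image domain before detector training. The checkpoint name and the assumption that annotations carry over unchanged (CycleGAN largely preserves image geometry) are illustrative, not the paper's exact pipeline.

```python
import torch

# Hypothetical: G_syn2real is a CycleGAN generator trained to map synthetic
# renderings to the real-image domain, saved as a full serialized module.
G_syn2real = torch.load("g_syn2real.pt")
G_syn2real.eval()

@torch.no_grad()
def adapt_batch(synthetic_batch):
    """Translate a batch of synthetic renderings into real-looking images.
    Bounding-box labels are reused unchanged, assuming geometry is preserved."""
    return G_syn2real(synthetic_batch).clamp(-1, 1)

# The adapted images plus the original annotations then train a standard
# supervised object detector in place of the raw synthetic data.
```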
    12623-6
    Author(s): Sankarsan Mohanty, Eugene Su, Chao-Ching Ho, National Taipei Univ. of Technology (Taiwan)
    On demand | Presented live 28 June 2023
    12623-7
    Author(s): Stefan Siemens, Markus Kästner, Eduard Reithmeier, Leibniz Univ. Hannover (Germany)
    On demand | Presented live 28 June 2023
    This study presents a method to generate synthetic microscopic surface images by adapting the pre-trained latent diffusion model Stable Diffusion and the pre-trained text encoder OpenCLIP-ViT/H. A confocal laser scanning microscope was used to acquire the dataset for transfer learning. The measured samples include metallic surfaces processed with different abrasive machining methods such as grinding, polishing, or honing. The network is trained to generate microtopographies for these machining methods, with different materials (for example, aluminum, PVC, and steel) and roughness values (for example, milling with Ra=0.4 to Ra=12.5). The performance of the network is evaluated through visual inspection and the objective image quality measures peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Fréchet Inception Distance (FID). The results demonstrate that the proposed method can generate realistic microtopographies, albeit with some limitations. These limitations may be due to the fact that the original training data for the Stable Diffusion network consisted mostly of images from the Internet, which often show people or landscapes. It was also found that the lack of post-processing of the synthetic images may lead to reduced perceived sharpness and less finely detailed structures. Nevertheless, the performance of the model demonstrates a promising and effective approach for surface metrology, contributing to fields such as materials science and surface engineering.
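    For the image-quality evaluation mentioned above, a minimal sketch using scikit-image's PSNR and SSIM implementations; FID is a set-level metric computed over whole image collections with a separate tool (e.g. a dedicated FID library) and is omitted here.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(real_img, synthetic_img):
    """PSNR and SSIM between a measured and a generated microtopography.
    Both inputs are float arrays scaled to [0, 1] with identical shapes."""
    psnr = peak_signal_noise_ratio(real_img, synthetic_img, data_range=1.0)
    ssim = structural_similarity(real_img, synthetic_img, data_range=1.0)
    return psnr, ssim
```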
    Session 3: Machine Learning and Classification
    28 June 2023 • 14:30 - 15:30 CEST | ICM Room 12b
    Session Chair: Jürgen Beyerer, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    12623-8
    Author(s): Kolja Hedrich, Lennart Hinz, Eduard Reithmeier, Leibniz Univ. Hannover (Germany)
    On demand | Presented live 28 June 2023
    The automation of inspection processes in aircraft engines comprises challenging computer vision tasks. In particular, the inspection of coating damages in confined spaces with hand-held endoscopes is based on image data acquired under dynamic operating conditions. In this study, large coating areas are analyzed by processing 2D RGB video data to generate high-resolution overview images of the coating area. For the quantification of coating damages, convolutional neural networks (CNNs) are utilized. To achieve high segmentation accuracy, the CNNs are applied at different scales, which raises the challenge of combining the predictions of these networks. Therefore, this study presents a novel method to efficiently interpret the network results and derive a final segmentation mask.
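    The paper's actual combination rule for the per-scale network outputs is more elaborate than the abstract states; as a simple stand-in, the sketch below upsamples each scale's probability map to the overview-image resolution, averages, and thresholds.

```python
import cv2
import numpy as np

def merge_multiscale(prob_maps, full_hw, threshold=0.5):
    """Illustrative merge of per-scale CNN segmentation outputs:
    upsample each probability map to the full resolution, average,
    and threshold into a binary damage mask."""
    h, w = full_hw
    resized = [cv2.resize(p.astype(np.float32), (w, h),
                          interpolation=cv2.INTER_LINEAR) for p in prob_maps]
    return (np.mean(resized, axis=0) > threshold).astype(np.uint8)
```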
    12623-9
    Author(s): Manuel Bihler, Karlsruher Institut für Technologie (Germany); Lukas Roming, Fraunhofer Institute of Optronics, System Technology and Image Exploitation (Germany); Yifan Jiang, Ahmed J. Afifi, Karlsruher Institut für Technologie (Germany); Jochen Aderhold, Fraunhofer Institute for Wood Research, Wilhelm-Klauditz-Institut (WKI) (Germany); Dovile Cibiraite-Lukenskiene, Fraunhofer Institute for Industrial Mathematics (ITWM) (Germany); Sandra Lorenz, Richard Gloaguen, Helmholtz-Institut Freiberg für Ressourcentechnologie (Germany); Robin Gruna, Fraunhofer Institute of Optronics (Germany); Michael Heizmann, Karlsruher Institut für Technologie (Germany)
    On demand | Presented live 28 June 2023
    Deep learning techniques are commonly applied to RGB images to solve different computer vision tasks such as classification, recognition, and segmentation. By using various sensors, more information can be gathered, and the performance of such classifiers can be increased. A data fusion strategy is necessary to combine all information in an optimal way. In this contribution, we apply different data fusion strategies to deep learning models for detection and classification tasks. We test early, intermediate, and late fusion to determine the optimal fusion strategy for multimodal and multispectral datasets.
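    As an illustration of the strategies compared, a minimal PyTorch-style sketch of late fusion: one backbone per modality with averaged logits. Early fusion would instead concatenate modalities channel-wise at the input of a single backbone, and intermediate fusion would merge feature maps mid-network. All module names and dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Illustrative late fusion: one backbone per modality, logits averaged.
    Each backbone is assumed to map its modality to a feat_dim vector."""
    def __init__(self, backbones, n_classes, feat_dim=512):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in backbones])

    def forward(self, inputs):  # inputs: list of tensors, one per modality
        logits = [head(bb(x)) for bb, head, x
                  in zip(self.backbones, self.heads, inputs)]
        return torch.stack(logits).mean(dim=0)
```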
    12623-15
    Author(s): Mohammed A. Isa, Mojtaba A. Khanesar, Richard K. Leach, David Branson, Samanta Piano, The Univ. of Nottingham (United Kingdom)
    On demand | Presented live 28 June 2023
    The majority of industrial production processes can be divided into a series of object manipulation and handling tasks that can be adapted for robots. Through significant advances in compliant grasping, sensing, and actuation technologies, robots are now capable of carrying out human-like flexible and dexterous object manipulation tasks. During operation, robots are required to position objects within tolerances specified for every operation in an industrial process. The ability of a robot to meet these tolerances is the critical factor that determines where the robot can be integrated and how proficiently it can carry out high-precision tasks. Therefore, improving the positioning accuracy of robots can open new avenues for their integration into production industries. Given that tolerances in manufacturing processes are on the order of tens of micrometres or less, robots must guarantee high positioning accuracy when manipulating objects. The direct method of ensuring high accuracy is to introduce additional measurement systems that improve on the inherent joint-angle-based robot position determination. In this paper, we present a high-accuracy robotic pose measurement (HARPM) system based on coordinate measurements from a multi-camera vision system. We also discuss the integration of measurements obtained by absolute distance interferometry and how the interferometric measurements can complement the vision system measurements. The performance of the HARPM system is evaluated using a laser interferometer to investigate robotic positions along a trajectory. The results show that the HARPM system can improve the positioning accuracy of robots from hundreds to a few tens of micrometres.
    12623-10
    CANCELED: Deep learning for detection of powder on complex surfaces using computer vision
    Author(s): Ali Ghandour, Samanta Piano, Mohammed A. Isa, The Univ. of Nottingham (United Kingdom); Konstantin Rybalcenko, AMT Ltd. (United Kingdom)
    28 June 2023 • 15:30 CEST | ICM Room 12b
    In recent years, additive manufacturing (AM) of polymer parts has been successfully integrated into the business models of major manufacturers across various industries. Despite an increased focus on process monitoring, post-processing, which can account for over half of production costs, has often been overlooked. Our goal is to use optical metrology and deep learning techniques to improve post-processing control in AM.
    Break
    Coffee Break 15:30 - 16:00
    Session 4: Image-based Measurement Technology
    28 June 2023 • 16:00 - 17:00 CEST | ICM Room 12b
    Session Chair: Jürgen Beyerer, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    12623-11
    Author(s): Tobias J. Haist, Felix Lodholz, Andreas Faulhaber, Stephan Reichelt, Institut für Technische Optik (Germany)
    On demand | Presented live 28 June 2023
    Differential perspective is a simple and cost-effective monocular distance measurement technique that works by taking two images from two different (axially separated) locations. The two images are analyzed using image processing to obtain the change in size of different objects within the scene. From this information, the distances to the objects can easily be computed. We use this principle to realize a sensor for assisted driving in which the camera takes two images separated by 0.32 seconds. Distances to objects (e.g., number plates, traffic signs) of up to 200 meters can be measured with satisfactory accuracy. In the presentation we explain the basic principle and the employed image processing, and show measurements based on real-world images.
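    The underlying geometry admits a short worked example. Under a pinhole model, an object of width W at distance z images to size s = fW/z, so moving forward by a baseline b changes the size ratio to r = s2/s1 = z1/(z1 - b), which inverts to z1 = b·r/(r - 1). The speed, frame sizes, and baseline below are illustrative numbers, not the paper's measurements.

```python
def distance_from_scaling(s1, s2, baseline_m):
    """Differential perspective under a pinhole model: the object's image
    size grows from s1 to s2 after moving forward by baseline_m, so
    r = s2/s1 = z1/(z1 - b) and hence z1 = b * r / (r - 1)."""
    r = s2 / s1
    return baseline_m * r / (r - 1.0)

# Example: ego speed 25 m/s and 0.32 s between frames give an 8 m baseline.
# A number plate growing from 40 px to 42 px would then lie at about 168 m.
print(distance_from_scaling(40.0, 42.0, 25.0 * 0.32))
```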
    12623-12
    Author(s): Anna Khatyreva, Iris Kuntz, Tobias Schmid-Schirling, Fraunhofer-Institut für Physikalische Messtechnik IPM (Germany); Thomas Brox, Univ. of Freiburg (Germany); Daniel Carl, Fraunhofer-Institut für Physikalische Messtechnik IPM (Germany)
    On demand | Presented live 28 June 2023
    We present an extension of our automatic anomaly detection approach for the quality inspection of industrially manufactured parts. The idea is to image the whole surface of a free-falling sample from different perspectives. To this end, we modify the state-of-the-art PatchCore algorithm for anomaly detection to handle multiple perspectives simultaneously. The extension includes a weighting step in the processing pipeline to distinguish image artifacts from real anomalies. Datasets were created for two different objects to evaluate the proposed approach. The results show that the developed pipeline outperforms PatchCore and the current free-fall inspection system algorithm.
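    The abstract does not give the exact weighting rule; a hedged sketch of the idea is below. Per-view anomaly maps (e.g. from a PatchCore-style scorer) are suppressed wherever an artifact mask flags imaging artifacts, and the object-level score is the maximum remaining anomaly over all views. The masks and the down-weighting factor are assumptions.

```python
import numpy as np

def object_anomaly_score(view_maps, artifact_masks, suppression=0.9):
    """Illustrative weighting step for multi-perspective anomaly detection:
    anomaly responses coinciding with known artifact regions are damped,
    then the object score is the maximum anomaly over all views."""
    scores = []
    for amap, artifact in zip(view_maps, artifact_masks):
        weighted = amap * (1.0 - suppression * artifact)
        scores.append(weighted.max())
    return max(scores)
```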
    12623-13
    Author(s): Christoph S. Werner, Simon Frey, Fraunhofer-Institut für Physikalische Messtechnik IPM (Germany); Alexander Reiterer, Fraunhofer-Institut für Physikalische Messtechnik IPM (Germany), Univ. of Freiburg (Germany)
    On demand | Presented live 28 June 2023
    Unwanted vegetation damages transport infrastructure and poses a safety risk, causing high maintenance effort and environmental impact due to the widespread use of herbicides. To address this issue, we present a camera-based inspection system that reliably detects and documents vegetation on traffic routes. This is achieved with a specialized multispectral camera system that detects the characteristic spectral fingerprint of chlorophyll and resolves vegetation at 5 mm resolution. Embedded data pre-processing and reduction allow operating speeds of up to 100 km/h. A sophisticated sequence of bright and dark frames combined with active illumination enables operation independent of ambient light.
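    The abstract does not specify the system's exact spectral processing. As a generic stand-in, the sketch below uses the standard normalized difference vegetation index (NDVI), which exploits chlorophyll's strong near-infrared reflectance and red absorption; the threshold is an assumption.

```python
import numpy as np

def vegetation_mask(nir, red, threshold=0.3):
    """Segment vegetation from co-registered near-infrared and red bands
    via NDVI = (NIR - R) / (NIR + R); vegetation scores strongly positive."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    return ndvi > threshold
```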
    Session 5: Image-based Position Measurement
    28 June 2023 • 17:00 - 17:40 CEST | ICM Room 12b
    Session Chair: Michael M. Heizmann, Karlsruher Institut für Technologie (Germany)
    12623-14
    Author(s): Yan Zhang, Gunther Notni, Technische Univ. Ilmenau (Germany)
    On demand | Presented live 28 June 2023
    In this paper, we introduce an interactive multimodal vision-based robot teaching method. A multimodal 3D image (color (RGB), thermal (T), and point cloud (3D)) captures the temperature, texture, and geometry information required to analyze human action. With our method, the user only needs to move a finger across an object's surface; the heat trace left by the finger is recorded by the multimodal 3D sensor. By analyzing the multimodal point cloud dynamically, the precise finger trace on the object is recognized, and a robot trajectory is computed from it.
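    A minimal sketch of the heat-trace extraction idea: points in the multimodal point cloud whose temperature exceeds the ambient surface temperature by a margin are kept as the finger path. The thresholds and data layout are assumptions, not the authors' algorithm.

```python
import numpy as np

def finger_trace(points_xyz, temps_c, ambient_c, delta_c=1.5):
    """Keep point-cloud points warmed by the finger: temperature above
    the ambient surface temperature by at least delta_c degrees.
    points_xyz: (N, 3) coordinates; temps_c: (N,) per-point temperatures."""
    mask = temps_c > ambient_c + delta_c
    return points_xyz[mask]  # collected over frames, this forms a trajectory
```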
    12623-16
    Author(s): Leo Miyashita, Masatoshi Ishikawa, Tokyo Univ. of Science (Japan)
    On demand | Presented live 28 June 2023
    In this paper, we propose a new optical system to simultaneously measure the pose, position, and surface normals of a target at high speed. Measuring the motion of a target is a fundamental task in computer vision, and high-speed motion measurement systems have been researched for real-time applications. In most cases the target is idealized as a rigid body and its motion described by 6-DoF pose and position; however, general objects are not entirely rigid, and the non-rigid motion component is significant in some applications. Focusing on dynamic projection mapping as an application example, we propose a system that integrates rigid and non-rigid motion sensing. To achieve high-speed, non-interfering measurement, we introduce a 3-band infrared optical system and lighting setup, and we evaluate the coupling efficiency by demonstrating the simultaneous measurement of pose, position, and surface normals.
    Conference Chairs
    • Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany), Karlsruher Institut für Technologie (Germany)
    • Karlsruher Institut für Technologie, Institute of Industrial Information Technology (Germany)
    Program Committee
    • Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
    • Hochschule Aalen (Germany)
    • Ruprecht-Karls-Univ. Heidelberg (Germany)
    • Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
    • VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH (Germany)
    • Univ. Stuttgart (Germany)
    • Institute of Applied Optics, Univ. Stuttgart (Germany)
    • Univ. Politécnica de Madrid (Spain)
    • Fraunhofer-Institut für Produktionstechnologie (Germany)
    • Vrije Univ. Brussel (Belgium)
    • Duale Hochschule Baden-Württemberg (Germany)
    • Serious Enterprises (Germany)
    • Hochschule für angewandte Wissenschaften Würzburg-Schweinfurt (Germany)
    Additional Information

    What you will need to submit

    • Title
    • Author(s) information
    • Speaker biography
    • 250-word abstract for technical review
    • 100-word summary for the program
    • Keywords used in search for your paper (optional)
    Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.