Proceedings Volume 10441

Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies

Volume Details

Date Published: 7 December 2017
Contents: 7 Sessions, 22 Papers, 10 Presentations
Conference: SPIE Security + Defence 2017
Volume Number: 10441

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10441
  • Detection and Identification of CBRNE
  • Spectroscopy and Raman/LIBS
  • Computer Vision and Video Content Analysis
  • Person and Object Detection, Tracking, and Behavior Analysis
  • Big Data Analysis and Deep Learning
  • Autonomous Sensors and Mobile Robots
Front Matter: Volume 10441
Front Matter: Volume 10441
This PDF file contains the front matter associated with SPIE Proceedings Volume 10441, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Detection and Identification of CBRNE
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
Avihai Aharon, Daniel Rozban, Amir Abramovich, et al.
An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) combined with a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and offers a wide dynamic range, a broad spectral range, room-temperature operation, immunity to high-power radiation, and more. The upconversion method demonstrated here measures the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the readout electronics and the internal electronic noise of the GDD. Furthermore, scanning-based three-dimensional imaging systems have so far prohibited real-time operation. This is easily and economically solved with a GDD array, which enables distance and magnitude information to be acquired from all GDD pixels simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation or time-of-flight (TOF) measurement.
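For reference, and not taken from the paper itself, the standard range relations behind the TOF and FMCW schemes mentioned above are:

```latex
% Time of flight: range from the round-trip delay \tau of the reflected signal
R_{\mathrm{TOF}} = \frac{c\,\tau}{2}

% FMCW: a linear chirp of bandwidth B over duration T produces a beat
% frequency f_b proportional to range
R_{\mathrm{FMCW}} = \frac{c\,f_b\,T}{2B}
```

Here τ is the round-trip delay, f_b the measured beat frequency, B the chirp bandwidth and T the chirp duration.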
Design optimization of Cassegrain telescope for remote explosive trace detection
The past three years have seen a global increase in explosive-based terror attacks. The widespread use of improvised explosives and anti-personnel landmines has caused thousands of civilian casualties across the world. The current threat to a globalized civilization drives the need to improve the performance and capabilities of standoff explosive trace detection devices, so that a threat can be assessed from a safe distance to prevent explosions and save human lives. In recent years, laser-induced breakdown spectroscopy (LIBS) has emerged as an approach for material and elemental investigations. All the principal elements on a surface are detectable in a single LIBS measurement; hence, a standoff LIBS-based method has been used to remotely detect explosive traces from distances of several to tens of metres. The most important component of a LIBS-based standoff explosive trace detection system is the telescope, which enables remote identification of the chemical constituents of the explosives. However, in a compact LIBS system, where the Cassegrain telescope serves both laser beam delivery and light collection, the telescope design must be optimized. This paper reports the design optimization of a Cassegrain telescope for remote explosive detection in a LIBS system. A design optimization of the Schmidt corrector plate was carried out for an Nd:YAG laser. The effect of different design parameters was investigated to eliminate spherical aberration in the system. The effect of different laser wavelengths on the Schmidt corrector design was also investigated for the standoff LIBS system.
Banknote authentication using chaotic elements technology
Sajan Ambadiyil, Krishnendu P.S., V.P. Mahadevan Pillai, et al.
Counterfeit banknotes are a growing threat to society, since advancements in computers, scanners and photocopiers have made the duplication of banknotes much simpler. The fake-note detection systems developed so far have many drawbacks, such as high cost, poor accuracy, unavailability, lack of user-friendliness and low effectiveness. One possible solution to this problem is a system uniquely linked to the banknote itself. In this paper, we present a unique identification and authentication process for banknotes using chaotic elements embedded in them. A chaotic element means that the physical elements are formed by a random process independent of human intervention. The chaotic elements used in this paper are the random distribution patterns of security fibres set into the paper pulp. A unique ID is generated from the fibre pattern obtained from a UV image of the note, which can be verified by any person who receives the banknote to decide whether it is authentic. A performance analysis of the system is also presented.
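A minimal sketch of how such a fibre-based fingerprint could be derived from a UV image, assuming OpenCV-style thresholding and a simple positional hash; the threshold, grid size and function names are illustrative and not taken from the paper:

```python
import hashlib
import cv2
import numpy as np

def fibre_fingerprint(uv_image_path: str, grid: int = 16) -> str:
    """Derive a reproducible ID from the fibre pattern visible in a UV image.

    The image is thresholded to isolate bright fluorescent fibres, the fibre
    distribution is summarised on a coarse grid, and the grid is hashed into
    a compact identifier. All parameter values are illustrative choices.
    """
    img = cv2.imread(uv_image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(uv_image_path)

    # Isolate bright fibres against the darker paper background.
    _, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

    # Summarise the fibre distribution as a coarse occupancy grid so the ID
    # tolerates small alignment and exposure differences.
    h, w = mask.shape
    cells = np.zeros((grid, grid), dtype=np.uint8)
    for i in range(grid):
        for j in range(grid):
            cell = mask[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            cells[i, j] = 1 if cell.mean() > 5 else 0

    # Hash the occupancy pattern into a printable identifier.
    return hashlib.sha256(cells.tobytes()).hexdigest()[:32]
```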
Spectroscopy and Raman/LIBS
Active vortex sampling system for remote contactless survey of surfaces by laser-based field asymmetrical ion mobility spectrometer
Artem E. Akmalov, Alexander A. Chistyakov, Gennadii E. Kotkovskii, et al.
Methods for increasing the distance of non-contact sampling up to 40 cm for a field asymmetric ion mobility (FAIM) spectrometer are formulated and implemented using laser desorption and an active shaper of the vortex flow. Numerical modelling of the air sampling flows was performed, and a sampling device for a laser-based FAIM spectrometer, based on a high-speed impeller rotating coaxially with the ion source, was designed. The dependence of the trinitrotoluene vapour signal on the rotational speed was obtained and the sampling flow rate was optimized. The effective sampling distance for trinitrotoluene vapour detection by a FAIM spectrometer with a rotating impeller is increased up to 28 cm, and up to 40 cm when laser irradiation of explosive traces is applied. It is shown that efficient desorption of low-volatility explosives is achieved under ambient conditions at a laser intensity of 10⁷ W/cm², a wavelength of λ = 266 nm, a pulse energy of about 1 mJ and a pulse repetition rate of at least 10 Hz. Ways of optimizing the internal gas flows of a FAIM spectrometer for operation at increased sampling distances are discussed.
Raman lidar for remote control explosives in the subway
Aleksandr Grishkanich, Dmitriy Redka, Sergey Vasiliev, et al.
Laser sensing can serve as a highly effective method of searching for and monitoring explosives in the subway. The method essentially consists in determining the explosive concentration by exciting and registering Raman shifts at wavelengths of λ = 0.261–0.532 μm during laser sounding. Preliminary results of the investigation show the real possibility of registering 2,4,6-trinitrophenylmethylnitramine at surface concentrations of 10⁸–10⁹ cm⁻³ from a safe distance of 50 m from the object.
New approach for detection and identification of substances using THz TDS
Vyacheslav A. Trofimov, Irina G. Zakharova, Dmitry Yu. Zagursky, et al.
We propose and discuss a new, effective approach for the detection and identification of substances using THz TDS. It consists in using substance emission at high frequencies, corresponding to the relaxation of high energy levels excited through a cascade mechanism under the action of a broadband THz pulse. A second approach is based on the possibility of observing the absorption frequencies of a substance under frequency up-conversion. To explain the physical mechanism behind these possibilities, we perform computer simulations using the 1D Maxwell's equations and the density matrix formalism.
Computer Vision and Video Content Analysis
Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks
Richard J. M. den Hollander, Henri Bouma, Jeroen H. C. van Rest, et al.
Video analytics is essential for managing the large quantities of raw data produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). The second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. The third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. To support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
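A minimal sketch of what one entry of such a register could look like, expressed as a simple Python data structure; the field names and the staleness rule are illustrative assumptions, not the authors' schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class CameraRegisterEntry:
    """One camera's entry in the register of optical-chain and scene metadata."""
    camera_id: str
    camera_type: str                           # e.g. "fixed", "PTZ", "body-worn"
    intrinsics: Optional[List[float]] = None   # focal length, principal point, distortion
    extrinsics: Optional[List[float]] = None   # camera pose in a common reference frame
    lighting_lux: Optional[float] = None       # estimated scene illumination
    scene_complexity: Optional[float] = None   # e.g. average number of people in view
    last_assessed: datetime = field(default_factory=datetime.utcnow)

    def needs_recalibration(self, max_age_hours: float = 24.0) -> bool:
        """Signal the administrator when the stored parameters are stale."""
        age = (datetime.utcnow() - self.last_assessed).total_seconds() / 3600.0
        return self.intrinsics is None or age > max_age_hours
```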
Face recognition in the thermal infrared domain
Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and to automatically recognize a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. Most face recognition research is based on visible light. State-of-the-art face recognition systems operating in the visible spectrum achieve very high recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images offer advantages over visible-light images and can be used to improve face recognition algorithms in several respects; mid-wavelength and long-wavelength infrared, also referred to as thermal infrared, therefore seem to be promising alternatives. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.
Three-dimensional measurement system for crime scene documentation
Three-dimensional measurement techniques (such as photogrammetry, time-of-flight, structure-from-motion or structured-light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects the current standards in crime scene documentation: it is designed to perform measurements in two stages. The first, most general stage of documentation is carried out with a scanner of relatively low spatial resolution but large measuring volume, and is used to document the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting to scanners, carrying out measurements, and automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene and many others. In this paper we present our measuring system and the developed software, and report the results of a metrological validation of the scanners performed according to the VDI/VDE standard. We also present the outcome of measurement sessions conducted at real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.
Person and Object Detection, Tracking, and Behavior Analysis
Robust visual object tracking with interleaved segmentation
Peter Abel, Hilke Kieritz, Stefan Becker, et al.
In this paper we present a new approach for tracking non-rigid, deformable objects by merging an online boosting-based tracker with a fast foreground-background segmentation. We extend an online boosting-based tracker that uses axis-aligned bounding boxes with fixed aspect ratio as tracking states. By constructing a confidence map from the online boosting-based tracker and unifying it with a confidence map obtained from a foreground-background segmentation algorithm, we build a superior confidence map. To construct a rough confidence map of a new frame based on online boosting, we employ the responses of the strong classifier as well as the responses of the individual weak classifiers built during the preceding update step. This confidence map provides a rough estimate of the object's position and dimensions. To refine it, we build a fine, pixel-wise segmented confidence map and merge both maps. Our segmentation method is colour-histogram-based and provides a fine and fast image segmentation. By means of back-projection and Bayes' rule, we obtain a confidence value for every pixel. The rough and fine confidence maps are merged by building an adaptively weighted sum of both maps, with weights derived from the variances of the two maps. Further, we apply morphological operators to the merged confidence map in order to reduce noise. In the resulting map we estimate the object's location and dimensions via continuously adaptive mean shift. Our approach provides a rotated rectangle as the tracking state, which enables a more precise description of non-rigid, deformable objects than axis-aligned bounding boxes. We evaluate our tracker on the visual object tracking (VOT) 2016 benchmark dataset.
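A condensed sketch of the segmentation-and-fusion step described above, assuming OpenCV; the weighting rule and parameter values are simplified illustrations rather than the paper's exact method:

```python
import cv2
import numpy as np

def fuse_and_localize(frame_bgr, rough_conf, roi_hist, window):
    """Fuse a rough tracker confidence map with a colour-histogram map.

    rough_conf : float32 map from the boosting-based tracker, values in [0, 1]
    roi_hist   : hue histogram of the object, normalised to [0, 255]
    window     : (x, y, w, h) search window from the previous frame
    """
    # Fine, pixel-wise confidence from histogram back-projection (a Bayes-like
    # likelihood of each pixel belonging to the object's colour model).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    fine_conf = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    fine_conf = fine_conf.astype(np.float32) / 255.0

    # Adaptive weights from the variances of the two maps (higher variance is
    # treated here as "more informative"; the paper's exact rule may differ).
    v_rough, v_fine = rough_conf.var() + 1e-6, fine_conf.var() + 1e-6
    w_rough = v_rough / (v_rough + v_fine)
    merged = w_rough * rough_conf + (1.0 - w_rough) * fine_conf

    # Suppress noise and estimate a rotated rectangle with CamShift.
    merged = cv2.morphologyEx(merged, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rotated_rect, window = cv2.CamShift((merged * 255).astype(np.uint8),
                                        window, criteria)
    return rotated_rect, window
```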
Tracking, aiming, and hitting the UAV with ordinary assault rifle
František Racek, Teodor Baláž, Jaroslav Krejčí, et al.
The use of small unmanned aerial vehicles (UAVs) is increasing significantly. They are being used as carriers of military spy and reconnaissance devices (taking photos, live video streaming and so on), or as carriers of potentially dangerous cargo intended for destruction and killing. Both ways of using a UAV create the necessity to disable it. From the military point of view, disabling a UAV means bringing it down with the weapon of an ordinary soldier, the assault rifle. This task can be challenging for the soldier, because he needs to visually detect and identify the target, track it visually and aim at it. The final success of the soldier's mission depends not only on these visual tasks, but also on the properties of the weapon and ammunition. The paper deals with possible methods for predicting the probability of hitting UAV targets.
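One simple way to formalise such a prediction (an illustrative assumption, not necessarily the authors' model) is to integrate the combined aiming and ballistic dispersion error, modelled as a bivariate normal distribution, over the projected target silhouette A:

```latex
P_{\mathrm{hit}} = \iint_{A} \frac{1}{2\pi\sigma_x\sigma_y}
  \exp\!\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2}
              -\frac{(y-\mu_y)^2}{2\sigma_y^2}\right)\,dx\,dy
```

where (μx, μy) is the systematic aiming error and σx, σy the dispersion along each axis; for a moving UAV these parameters would also depend on the lead error accumulated during tracking.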
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security purposes, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not rely on object detection by motion analysis in a video, as is conventionally done (e.g. background subtraction or block matching); consequently, the motion properties of the objects do not significantly affect the detection quality. Object detection is performed by analyzing static 3D image data obtained through computational integral imaging. Compared with previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.
Big Data Analysis and Deep Learning
Transferring x-ray based automated threat detection between scanners with different energies and resolution
M. Caldwell, M. Ransley, T. W. Rogers, et al.
A significant obstacle to developing high-performance deep learning algorithms for Automated Threat Detection (ATD) in security X-ray imagery is the difficulty of obtaining large training datasets. In our previous work, we circumvented this problem for ATD in cargo containers using Threat Image Projection and data augmentation. In this work, we investigate whether data scarcity for other modalities, such as parcels and baggage, can be ameliorated by transforming data from one domain so that it approximates the appearance of another. We present an ontology of ATD datasets to assess where transfer learning may be applied. We define frameworks for transfer at the training and testing stages, and compare the results of both methods against ATD where a common data source is used for training and testing. Our results show very poor transfer, which we attribute to the difficulty of accurately matching the blur and contrast characteristics of different scanners.
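A minimal sketch of the kind of domain transform discussed here, assuming the target scanner differs mainly in resolution (blur) and intensity statistics; the parameters and function are illustrative and would in practice be estimated from example images of each scanner:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def approximate_target_domain(src_img: np.ndarray,
                              blur_sigma: float,
                              target_mean: float,
                              target_std: float) -> np.ndarray:
    """Crudely map an X-ray image from one scanner domain towards another.

    The source image is blurred to emulate the target scanner's lower
    resolution, then its intensity statistics are rescaled to the target's
    mean and standard deviation.
    """
    blurred = gaussian_filter(src_img.astype(np.float32), sigma=blur_sigma)
    src_mean, src_std = blurred.mean(), blurred.std() + 1e-6
    matched = (blurred - src_mean) / src_std * target_std + target_mean
    return np.clip(matched, 0.0, None)
```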
On the application of semantic technologies to the domain of forensic investigations in financial crimes
Tobias Scheidat, Ronny Merkel, Volker Krummel, et al.
In daily police practice, forensic investigation of criminal cases is based mainly on manual work and the experience of individual forensic experts, using basic storage and data processing technologies. However, an individual criminal case does not consist only of the actual offence, but also of a variety of different aspects involved. For example, in order to solve a financial criminal case, an investigator has to find interrelations between different case entities as well as to other cases. The required information about these entities is often stored in various databases and mostly has to be requested and processed manually by forensic investigators. We propose the application of semantic technologies to the domain of forensic investigations, using financial crimes as an example. This combination allows specific case entities and their interrelations within and between cases to be modelled. As a result, an explorative search for connections between case entities in the scope of an investigation, as well as an automated derivation of conclusions from an established fact base, is enabled. The proposed model is presented in the form of a crime field ontology, based on different types of knowledge obtained from three individual sources: open source intelligence, forensic investigators and interviews of detained criminals. The modelled crime field ontology is illustrated with two examples, using the well-known crime type of explosive attacks on ATMs and the potentially upcoming crime type of data theft by NFC crowd skimming. Anonymized, fictional instances of these criminal modi operandi are modelled, visualized and exploratively searched. Modelled case entities include modi operandi, events, actors, resources, exploited weaknesses as well as flows of money, data and know-how. The potential exploration of interrelations between the different case entities of such examples is illustrated in the scope of a fictitious investigation, highlighting the potential of the approach.
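A small sketch of how such case entities and relations could be expressed with semantic technologies, here using Python's rdflib; the namespace IRI, class names and the "uses" relation are hypothetical placeholders, not the paper's actual ontology:

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Hypothetical namespace; the paper's actual ontology IRI is not given here.
CRIME = Namespace("http://example.org/crime-field#")

g = Graph()
g.bind("crime", CRIME)

# Classes for a few of the case entities named in the abstract.
for cls in ("ModusOperandi", "Actor", "Resource", "Event"):
    g.add((CRIME[cls], RDF.type, RDFS.Class))

# A fictional ATM-attack case instance and its interrelations.
g.add((CRIME.AtmExplosiveAttack, RDF.type, CRIME.ModusOperandi))
g.add((CRIME.Offender1, RDF.type, CRIME.Actor))
g.add((CRIME.Offender1, CRIME.uses, CRIME.AtmExplosiveAttack))
g.add((CRIME.Offender1, RDFS.label, Literal("fictional actor in case A")))

# Explorative search: which actors are linked to which modi operandi?
results = g.query("""
    SELECT ?actor ?mo WHERE {
        ?actor <http://example.org/crime-field#uses> ?mo .
    }
""")
for actor, mo in results:
    print(actor, "->", mo)
```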
Automatic analysis of online image data for law enforcement agencies by concept detection and instance search
Maaike H. T. de Boer, Henri Bouma, Maarten C. Kruithof, et al.
The information available online and offline, from open as well as private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capability to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitute a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows the database to be queried by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system, and they exploit the possibilities provided by open-source toolboxes, contributing to the technical autonomy of LEAs.
Optimizing a neural network for detection of moving vehicles in video
Noëlle M. Fischer, Maarten C. Kruithof, Henri Bouma
In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects by cameras near borders could help to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks perform well in the detection of objects in images. Most deep-learning research focuses on classification or detection in single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
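A generic sketch of combining a per-frame CNN with an LSTM over frames, in PyTorch; the backbone choice, hidden size, optimizer and batch settings are assumptions for illustration, not the authors' exact architecture or hyper-parameters:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ConvLSTMDetectorHead(nn.Module):
    """A CNN backbone extracts per-frame features; an LSTM aggregates them
    over time, and a small head scores the presence of a moving vehicle."""

    def __init__(self, hidden_size: int = 256, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep everything up to (and including) global average pooling: 512-d output.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.features(clips.reshape(b * t, c, h, w)).reshape(b, t, 512)
        seq_out, _ = self.lstm(feats)
        return self.classifier(seq_out[:, -1])  # score from the last time step

# The abstract stresses optimizer and batch-size choice; e.g. Adam with a modest rate:
model = ConvLSTMDetectorHead()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```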
Deep learning-based fine-grained car make/model classification for visual surveillance
Erhan Gundogdu, Enes Sinan Parıldı, Berkan Solmaz, et al.
Fine-grained object recognition is a challenging computer vision problem that has recently been addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, relatively little annotated data exists for a real-world application such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual surveillance scenarios with the help of two different domains. First, a large-scale dataset of approximately 900K images is constructed from a website that lists fine-grained car models. A state-of-the-art CNN model is trained on the constructed dataset according to these labels. The second domain is a set of images collected from a camera integrated into a traffic surveillance system. These images, of which there are over 260K, are gathered by a dedicated license plate detection method on top of a motion detection algorithm. An appropriately sized image region is cropped around the region of interest provided by the detected license plate location. These images and their labels for more than 30 classes are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this way, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic domain and a large-scale dataset with available annotations. Our experimental results on both the validation dataset and the real field data show that the proposed methodology performs favorably against training the CNN model from scratch.
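An illustrative fine-tuning setup in the spirit of the abstract, in PyTorch; the VGG-16 backbone, the hypothetical checkpoint path and the learning rates are assumptions, while the re-initialization of the last two fully-connected layers follows the procedure described above:

```python
import torch
import torch.nn as nn
import torchvision.models as models

num_surveillance_classes = 30   # "more than 30 classes" in the abstract

model = models.vgg16(weights=None)   # stands in for the pretrained fine-grained model
# model.load_state_dict(torch.load("pretrained_on_900k_cars.pth"))  # hypothetical checkpoint

# Randomly re-initialize the last two fully-connected layers of the classifier.
model.classifier[3] = nn.Linear(4096, 4096)
model.classifier[6] = nn.Linear(4096, num_surveillance_classes)

# Fine-tune: smaller learning rate for retained layers, larger for the new ones.
optimizer = torch.optim.SGD([
    {"params": model.features.parameters(), "lr": 1e-4},
    {"params": model.classifier.parameters(), "lr": 1e-3},
], momentum=0.9)
```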
Autonomous Sensors and Mobile Robots
Control system of the inspection robots group applying auctions and multi-criteria analysis for task allocation
The paper presents a control system for a group of mobile robots intended to carry out inspection missions. The main research problem was to define a control system that facilitates cooperation of the robots so that the committed inspection tasks are accomplished. Many well-known control systems use auctions for task allocation, where the subject of an auction is the task to be allocated. It seems that, in the case of missions characterized by a much larger number of tasks than robots, it is better if the robots (instead of the tasks) are the subjects of the auctions. The second identified problem concerns the one-sided robot-to-task fitness evaluation. Simultaneous assessment of the robot-to-task fitness and the task's attractiveness for a robot should positively affect the overall effectiveness of the multi-robot system. The elaborated system allows tasks to be assigned to robots using various methods for evaluating the fitness between robots and tasks, and using several task allocation methods. A method for multi-criteria analysis is proposed, composed of two assessments: the robot's competitive position for a task among the other robots, and the task's attractiveness for a robot among the other tasks. Furthermore, task allocation methods applying this multi-criteria analysis are proposed. Both the elaborated system and the proposed task allocation methods were verified in simulated experiments. The object under test was a group of inspection mobile robots, a virtual counterpart of a real mobile-robot group.
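A greatly reduced sketch of an allocation rule combining the two assessments mentioned above, in Python; the greedy auction loop, the score definitions and the function names are illustrative assumptions, not the paper's exact method:

```python
from itertools import product

def allocate_tasks(robots, tasks, fitness):
    """Greedy, auction-like allocation combining two viewpoints.

    fitness(robot, task) -> float in [0, 1]. Each candidate pair is scored by
    multiplying the robot's competitive position for the task (relative to the
    other free robots) with the task's attractiveness for the robot (relative
    to the other open tasks).
    """
    assignment = {}
    free_robots, open_tasks = set(robots), set(tasks)
    while free_robots and open_tasks:
        best, best_score = None, -1.0
        for r, t in product(free_robots, open_tasks):
            f = fitness(r, t)
            # Robot's standing for this task among the competing robots.
            competition = f / (sum(fitness(o, t) for o in free_robots) + 1e-9)
            # Task's attractiveness for this robot among the remaining tasks.
            attractiveness = f / (sum(fitness(r, u) for u in open_tasks) + 1e-9)
            score = competition * attractiveness
            if score > best_score:
                best, best_score = (r, t), score
        r, t = best
        assignment[r] = t
        free_robots.discard(r)
        open_tasks.discard(t)
    return assignment
```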
Autonomous mobile platform with simultaneous localisation and mapping system for patrolling purposes
This work describes an autonomous mobile platform for supervision and surveillance purposes. The system can be adapted for mounting on different types of vehicles. The platform is based on a SLAM navigation system which performs the localization task. Sensor fusion of laser scanners, an inertial measurement unit (IMU), odometry and GPS lets the system determine its position in a reliable and precise way. The platform is able to create a 3D model of a supervised area and export it as a point cloud. The system can operate both indoors and outdoors, as the navigation algorithm is resistant to typical localization errors caused by wheel slippage or temporary GPS signal loss. The system is equipped with a path-planning module which can operate in two modes. The first mode is periodic observation of points in a selected area. The second mode is activated in case of an alarm: when called, the platform moves along the fastest route to the location of the alert. Path planning is always performed online using the most recent scans, so the platform is able to adjust its trajectory to changes in the environment or to obstacles that are in motion. The control algorithms are developed under the Robot Operating System (ROS), since it comes with drivers for many devices used in robotics. Such a solution allows the system to be extended with any type of sensor in order to incorporate its data into the created area model. The proposed appliance can be ported to other existing robotic platforms or used to develop a new platform dedicated to a specific kind of surveillance. The platform's use cases are to patrol an area, such as an airport or a metro station, in search of dangerous substances or suspicious objects, and in case of detection to instantly inform security forces. A second use case is tele-operation in a hazardous area for inspection purposes.
Autonomous mobile robotic system for supporting counterterrorist and surveillance operations
Marek Adamczyk, Kazimierz Bulandra, Wojciech Moczulski
Contemporary research on mobile robots concerns applications to counterterrorist and surveillance operations. The goal is to develop systems that are capable of supporting the police and special forces in carrying out such operations. The paper deals with a dedicated robotic system for the surveillance of large facilities such as airports, factories, military bases, and many others. The goal is to trace unauthorised persons who try to enter the guarded area, document the intrusion and report it to the surveillance centre, then warn the intruder with sound messages and eventually subdue him or her by stunning through a high-power acoustic effect. The system consists of several parts. An armoured four-wheeled robot provides the required mobility of the system. The robot is equipped with a set of sensors including a 3D mapping system, IR and video cameras, and microphones. It communicates with the central control station (CCS) by means of a wideband, wireless, encrypted link. The control system of the robot can operate autonomously or under remote control. In the autonomous mode the robot follows the path planned by the CCS. Once an intruder has been detected, the robot can adapt its plan in order to track him or her. Furthermore, special procedures for the treatment of the intruder are applied, including a warning about the breach of the border of the protected area and incapacitation with an appropriately selected, very loud sound until a patrol of guards arrives. If it gets stuck, the robot can contact the operator, who can remotely solve the problem the robot is faced with.
Modular robotic system for forensic investigation support
Grzegorz Kowalski, Jakub Główka, Mateusz Maciaś, et al.
Forensic investigation at a crime scene requires not only knowledge of how to search for, collect and process evidence. In some cases the area of operation might not be properly secured and may pose a threat to human health or life. Some devices or materials may be left, intentionally or not, to injure potential investigators. Besides conventional explosives, threats can take the form of CBRN materials, which not only have an immediate effect on the exposed personnel, but can contaminate further people when transferred, for example on clothes or unsecured equipment. In such cases a risk evaluation should be performed, which can lead to the conclusion that it is too dangerous for investigators to work. In that kind of situation, remote devices that are able to examine the crime scene and secure samples can be used. In the course of its R&D activities, PIAP developed a system based on a small UGV capable of carrying out inspections of suspicious places and securing evidence when needed. The system consists of a remotely controlled mobile robot, its control console and a set of various inspection and support tools that enable the detection of CBRN threats as well as the revelation, documentation and securing of evidence. This paper presents the main features of the system, such as mission adjustment possibilities and communication aspects, as well as examples of the forensic accessories.