Proceedings Volume 9152

Software and Cyberinfrastructure for Astronomy III


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 5 August 2014
Contents: 15 Sessions, 99 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2014
Volume Number: 9152

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9152
  • Software Hack Day
  • Project Overview
  • Control Systems Using PLC Technology and Field Buses
  • Data Management and Archives
  • Control Systems: Camera and Data Acquisition
  • Data Processing and Pipelines
  • Control Systems for Spectrographs
  • Cyberinfrastructure I
  • Control Systems
  • Software Engineering
  • Innovations
  • Cyberinfrastructure II
  • Project Management
  • Poster Session
Front Matter: Volume 9152
Front Matter: Volume 9152
This PDF file contains the front matter associated with SPIE Proceedings Volume 9152 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Software Hack Day
The first SPIE software Hack Day
S. Kendrew, C. Deen, N. Radziwill, et al.
We report here on the software Hack Day organised at the 2014 SPIE conference on Astronomical Telescopes and Instrumentation in Montréal. This was the first Hack Day ever to take place at an SPIE event; its aim was to bring together developers to collaborate on innovative solutions to problems of their choice. Such events have proliferated in the technology community, providing opportunities to showcase, share and learn skills. In academic environments, these events are often also instrumental in building community beyond the limits of national borders, institutions and projects. We show examples of projects the participants worked on, and provide some lessons learned for future events.
Project Overview
Quasi-automatic software support for Gaia ground based optical tracking
S. Bouquillon, C. Barache, T. Carlucci, et al.
The ESA Gaia satellite mission will create a catalog of 1 billion stars with unprecedented astrometric precision. To achieve this precision, a ground-based optical tracking campaign (GBOT) of the satellite itself is necessary during the five years of the mission. We present an overview of the GBOT project as a whole in another contribution [1] (Altmann et al., in the SPIE category "Observatory Operations"). The present paper focuses more specifically on the software solutions developed by the GBOT group.
The ASTRI/CTA mini-array software system
ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. The main goals of the ASTRI project are the realization of an end-to-end prototype of a Small Size Telescope (SST) for the Cherenkov Telescope Array (CTA) in a dual-mirror configuration (SST-2M) and, subsequently, of a mini-array comprising seven SST-2M telescopes. The mini-array will be placed at the final CTA Southern Site, which will be part of the CTA seed array, around which the whole CTA observatory will be developed. The Mini-Array Software System (MASS) will provide a comprehensive set of tools to prepare an observing proposal, to perform the observations specified therein (monitoring and controlling all the hardware components of each telescope), to analyze the acquired data online and to store/retrieve all the data products to/from the archive. Here we present the main features of the MASS and its first version, to be tested on the ASTRI SST-2M prototype that will be installed at the INAF observing station located at Serra La Nave on Mount Etna in Sicily.
Discovery Channel Telescope software progress report: addressing early commissioning and operations challenges
Michael Lacasse, Paul J. Lotz
The Discovery Channel Telescope is a 4.3m astronomical research telescope in northern Arizona constructed through a partnership between Discovery Communications and Lowell Observatory. In the transition from the construction phase to commissioning and operations, we faced a variety of software challenges, both foreseen and unforeseen, and addressed them with a variety of solutions, including isolation of the control systems network, development of an Operations Log application, extension of the interface to instrumentation software, improvements to engineering data analysis, provisions to avoid failure modes, and an enhanced user experience. We describe these solutions and present an overview of the current project status.
Large Binocular Telescope Observatory (LBTO) software and IT group operations status update and near-term development roadmap
The LBTO software and IT group was originally responsible for development of the Telescope Control System (TCS) software, and build-out of observatory Information Technology (IT) infrastructure. With major construction phases of the observatory mostly completed, emphasis is transitioning toward instrument software handover support, IT infrastructure obsolescence upgrades, and software development in support of efficient operations. This paper discusses recent software and IT group activities, metrics, issues, some lessons learned, and a near-term development roadmap for support of efficient operations.
Control Systems Using PLC Technology and Field Buses
PC based PLCs and ethernet based fieldbus: the new standard platform for future VLT instrument control
Mario J. Kiekebusch, Christian Lucuix, Toomas M. Erm, et al.
ESO is currently in the final phase of the standardization process for PC-based Programmable Logic Controllers (PLCs) as the new platform for the development of control systems for future VLT/VLTI instruments. The standard solution used until now consists of a Local Control Unit (LCU), a VME-based system having a CPU and commercial and proprietary boards. This system includes several layers of software and many thousands of lines of code developed and maintained in house. LCUs have been used for several years as the interface to control instrument functions but are now being replaced by commercial off-the-shelf (COTS) systems based on BECKHOFF Embedded PCs and the EtherCAT fieldbus. ESO is working on the completion of the software framework that enables a seamless integration into the VLT control system in order to be ready to support upcoming instruments like ESPRESSO and ERIS, which will be the first fully VLT-compliant instruments using the new standard. The technology evaluation and standardization process has been a long, combined effort of various engineering disciplines like electronics, control and software, working together to define a solution that meets the requirements and minimizes the impact on observatory operations and maintenance. This paper presents the challenges of the standardization process and the steps involved in such a change. It provides a technical overview of how industrial standards like EtherCAT, OPC-UA, PLCopen MC and TwinCAT can be used to replace LCU features in various areas like software engineering and programming languages, motion control, time synchronization and astronomical tracking.
Developing a PLC-friendly state machine model: lessons learned
Wim Pessemier, Geert Deconinck, Gert Raskin, et al.
Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low-level control, and conventional software and platforms for higher-level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher-level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we've learned during the development process of such a "PLC-friendly" state machine model.
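To make the idea of a coarse-grained state machine concrete, the sketch below (ours, not from the paper, and in Python rather than the IEC 61131-3 languages the authors target) shows a state set and transition table of the kind that could be mirrored one-to-one onto an OPC UA address space; all state and class names are hypothetical.

    # Minimal sketch of a coarse-grained state machine; Python stands in for
    # IEC 61131-3 structured text, and all names are illustrative only.
    from enum import Enum, auto

    class State(Enum):
        OFF = auto()
        INITIALIZING = auto()
        READY = auto()
        MOVING = auto()
        ERROR = auto()

    # Allowed coarse-grained transitions; fine-grained behaviour stays in the PLC.
    TRANSITIONS = {
        State.OFF: {State.INITIALIZING},
        State.INITIALIZING: {State.READY, State.ERROR},
        State.READY: {State.MOVING, State.OFF},
        State.MOVING: {State.READY, State.ERROR},
        State.ERROR: {State.INITIALIZING},
    }

    class Function:
        """One controllable instrument function; its state would be exposed
        to higher tiers as a node in the OPC UA address space."""
        def __init__(self):
            self.state = State.OFF

        def request(self, target: State) -> bool:
            # Reject requests that are illegal in the current state, as a
            # PLC function block would.
            if target not in TRANSITIONS[self.state]:
                return False
            self.state = target
            return True

    axis = Function()
    assert axis.request(State.INITIALIZING)
    assert not axis.request(State.MOVING)  # must pass through READY first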
Motion control solution for new PLC-based standard development platform for VLT instrument control systems
D. Popovic, R. Brast, N. Di Lieto, et al.
More than a decade ago, due to obsolescence issues, ESO initiated the design and implementation of a custom-made CANbus-based motion controller (CAN-RMC) to provide, together with a tailor-made software library (motor library), the motion control capabilities for the VME platform needed for the second-generation VLT/VLTI instruments. The CAN-RMC controller has been successfully used in a number of VLT instruments, but it has high production costs compared to the commercial off-the-shelf (COTS) industrial solutions available on the market today. In the scope of the selection of a new PLC-based platform for the VLT instrument control systems, ESO has evaluated motion control solutions from the company Beckhoff. This paper presents the investigation, implementation and testing of the PLC/TwinCAT/EtherCAT motion controllers for DC and stepper motors and their adaptation and integration into the VLT instrumentation framework. It reports functional and performance test results for the most typical use cases of astronomical instruments, like initialization sequences, tracking, switch position detection, backlash compensation, brake handling, etc. In addition, it gives an overview of the main features of TwinCAT NC/PTP, PLCopen MC, EtherCAT motion control terminals and engineering tools like TwinCAT Scope that are integrated into the development environment and simplify software development, testing and commissioning of motorized instrument functions.
Data Management and Archives
The design and operation of the Keck Observatory archive
G. Bruce Berriman, Christopher R. Gelino, Robert W. Goodrich, et al.
The Infrared Processing and Analysis Center (IPAC) and the W. M. Keck Observatory (WMKO) operate the Keck Observatory Archive (KOA). At the end of 2013, KOA completed the ingestion of data from all eight active observatory instruments. KOA will continue to ingest all newly obtained observations, at an anticipated volume of 4 TB per year. The data are transmitted electronically from WMKO to IPAC for storage and curation. Access to data is governed by a data use policy, and approximately two-thirds of the data in the archive are public.
GRBSpec: a multi-observatory database for gamma-ray burst spectroscopy
Antonio de Ugarte Postigo, Martin Blazek, Petr Janout, et al.
Gamma-ray bursts (GRBs) are the most luminous explosions in the Universe. They are produced during the collapse of massive stellar-sized objects, which create a black hole and eject material at ultra-relativistic speeds. They are unique tools to study the evolution of our Universe, as they are the only objects that, thanks to their extraordinary luminosity, can be observed throughout the complete history of star formation, from the era of reionisation to our days. One of the main tools to obtain information from GRBs and their environment is optical and near-infrared spectroscopy. After 17 years of studies, spectroscopic data for around 300 events have been collected. However, the spectra were obtained by many groups, at different observatories, and using instruments of very different types, making the data difficult to access, process and compare. Here we present GRBspec: a collaborative database that includes processed GRB spectra from multiple observatories and makes them available to the community. The website provides access to the datasets, allowing queries based not only on the observation characteristics but also on the properties of the GRB that was observed. Furthermore, the website provides visualisation and analysis tools that allow the user to assess the quality of the data before downloading, and even to perform data analysis online.
Modular VO oriented Java EE service deployer
Marco Molinaro, Francesco Cepparo, Marco De Marco, et al.
The International Virtual Observatory Alliance (IVOA) has produced many standards and recommendations whose aim is to create an architecture that starts from astrophysical resources, in a general sense, and ends up in deployed consumable services (which are themselves astrophysical resources). Focusing on the Data Access Layer (DAL) system architecture that these standards define, in recent years a web-based application has been developed and maintained at INAF-OATs IA2 (Italian National Institute for Astrophysics - Astronomical Observatory of Trieste, Italian center of Astronomical Archives) to deploy and manage multiple VO (Virtual Observatory) services in a uniform way: VO-Dance. However, a number of criticalities have arisen since the VO-Dance idea was conceived, and some major changes have taken place, and are still taking place, at the IVOA DAL layer (and related standards): this urged IA2 to identify a new solution for its own service layer. Keeping the basic ideas from VO-Dance (simple service configuration, service instantiation at call time and modularity) while switching to different software technologies (e.g. dismissing Java Reflection in favour of an Enterprise Java Bean, EJB, based solution), the new solution has been sketched out and tested for feasibility. Here we present the results of this feasibility study. The main constraints for this new project come from various fields: a better homogenized solution arising from the IVOA DAL standards, for example the new DALI (Data Access Layer Interface) specification that acts as a common interface system for previous and upcoming access protocols; the need for a modular system where each component is based upon a single VO specification, allowing services to rely on common capabilities instead of homogenizing them inside service components directly; and the search for a scalable system that takes advantage of distributed systems. These constraints find answers in the adopted solutions sketched hereafter. The development of the new system using Java Enterprise technologies can better benefit from existing libraries to build up the single tokens implementing the IVOA standards. Each component can be built from single standards, and each deployed service (i.e. an instantiation of service components) can consume the other components' exposed methods and services without the need to homogenize them in dedicated libraries. Scalability can be achieved more easily by deploying components or sets of services in a distributed environment, using JNDI (Java Naming and Directory Interface) and RMI (Remote Method Invocation) technologies. Single service configuration will not be significantly different from the VO-Dance solution, given that the Java class instantiation that relied on Java Reflection will simply be moved to Java EJB pooling (and not, e.g., embedded in bundles for subsequent deployment).
Practical experience with test-driven development during commissioning of the multi-star AO system ARGOS
M. Kulas, Jose Luis Borelli, Wolfgang Gässler, et al.
Commissioning time for an instrument at an observatory is precious, especially the night time. Whenever astronomers come up with a software feature request or point out a software defect, the software engineers have the task of finding a solution and implementing it as fast as possible. In this project phase, the software engineers work under time pressure and stress to deliver a functional instrument control software (ICS). The shortness of development time during commissioning is a constraint for software engineering teams and applies to the ARGOS project as well. The goal of the ARGOS (Advanced Rayleigh guided Ground layer adaptive Optics System) project is the upgrade of the Large Binocular Telescope (LBT) with an adaptive optics (AO) system consisting of six Rayleigh laser guide stars and wavefront sensors. For developing the ICS, we used the technique of Test-Driven Development (TDD), whose main rule demands that the programmer writes test code before production code. Thereby, TDD can yield a software system that grows without defects and eases maintenance. Having applied TDD in a calm and relaxed environment like the office and the laboratory, the ARGOS team has profited from the benefits of TDD. Before the commissioning, we were worried that the time pressure in that tough project phase would force us to drop TDD because we would spend more time writing test code than it would be worth. Despite this initial concern, we were able to keep TDD most of the time in this project phase as well. This report describes the practical application and performance of TDD, including its benefits, limitations and problems, during the ARGOS commissioning. Furthermore, it covers our experience with pair programming and continuous integration at the telescope.
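As a reminder of what the TDD rule ("test code before production code") looks like in practice, here is a generic, hypothetical example in Python; it is not ARGOS code, and the function and test names are invented for illustration.

    # Step 1: write the failing test first.
    import unittest

    class TestCentroid(unittest.TestCase):
        def test_centroid_of_symmetric_spot(self):
            # A symmetric spot must centroid onto its central pixel.
            image = [[0, 0, 0],
                     [0, 9, 0],
                     [0, 0, 0]]
            self.assertEqual(centroid(image), (1.0, 1.0))

    # Step 2: only then write the minimal production code that makes it pass.
    def centroid(image):
        total = sx = sy = 0.0
        for y, row in enumerate(image):
            for x, value in enumerate(row):
                total += value
                sx += x * value
                sy += y * value
        return (sx / total, sy / total)

    if __name__ == "__main__":
        unittest.main()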
ODI - Portal, Pipeline, and Archive (ODI-PPA): a web-based astronomical compute archive, visualization, and analysis service
Arvind Gopu, Soichi Hayashi, Michael D. Young, et al.
The One Degree Imager-Portal, Pipeline, and Archive (ODI-PPA) is a web science gateway that provides astronomers a modern web interface acting as a single point of access to their data, along with rich computational and visualization capabilities. Its goal is to support scientists in handling complex data sets, and to enhance WIYN Observatory's scientific productivity beyond data acquisition on its 3.5m telescope. ODI-PPA is designed, with periodic user feedback, to be a compute archive with built-in frameworks including: (1) Collections, which allow an astronomer to create logical collations of data products intended for publication, further research, instructional purposes, or to execute data processing tasks; (2) Image Explorer and Source Explorer, which together enable real-time interactive visual analysis of massive astronomical data products within an HTML5-capable web browser, with overlaid standard catalog and Source Extractor-generated source markers; (3) a Workflow framework, which enables rapid integration of data processing pipelines on an associated compute cluster and lets users request such pipelines to be executed on their data via custom user interfaces. ODI-PPA is made up of several lightweight services connected by a message bus; the web portal is built using the Twitter/Bootstrap, AngularJS and jQuery JavaScript libraries, with backend services written in PHP (using the Zend framework) and Python; it leverages supercomputing and storage resources at Indiana University. ODI-PPA is designed to be reconfigurable for use in other science domains with large and complex datasets, including an ongoing offshoot project for electron microscopy data.
Exploring No-SQL alternatives for ALMA monitoring system
Tzu-Chiang Shen, Ruben Soto, Patricio Merino, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located at the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. This paper describes the experience gained after several years of working with the monitoring system, which has a strong requirement of collecting and storing up to 150K variables with a highest sampling rate of 20.8 kHz. The original design was built on top of a cluster of relational database servers and network-attached storage with a fiber channel interface. As the number of monitoring points grows with the number of antennas included in the array, the current monitoring system proved able to handle the increased data rate in the collection and storage area (holding only one month of data), but the data query interface showed serious performance degradation. A solution based on a NoSQL platform was explored as an alternative to the current long-term storage system. Among several alternatives, MongoDB was selected. In the data flow, intermediate cache servers based on Redis were introduced to allow faster streaming of the most recently acquired data to web-based charts and applications for online data analysis.
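A minimal sketch of the data flow described above, with MongoDB as the long-term store and Redis as the cache for the most recent samples; it assumes the standard pymongo and redis client libraries, and the collection names and key layout are hypothetical, not ALMA's.

    import json, time
    import redis
    from pymongo import MongoClient

    cache = redis.Redis(host="localhost", port=6379)
    store = MongoClient("mongodb://localhost:27017")["monitoring"]["samples"]

    def record(monitor_point: str, value: float):
        sample = {"mp": monitor_point, "value": value, "t": time.time()}
        store.insert_one(sample)          # long-term, query-able storage
        sample.pop("_id", None)           # insert_one adds a non-JSON ObjectId
        cache.lpush(monitor_point, json.dumps(sample))  # newest first
        cache.ltrim(monitor_point, 0, 999)  # keep only the most recent samples

    def recent(monitor_point: str, n: int = 100):
        # Served from Redis so that online charts never hit the long-term store.
        return [json.loads(s) for s in cache.lrange(monitor_point, 0, n - 1)]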
Control Systems: Camera and Data Acquisition
The DECam DAQ System: lessons learned after one year of operations
K. Honscheid, A. Elliott, M. Bonati, et al.
The Dark Energy Camera (DECam) is a new 520-megapixel CCD camera with a 3 square degree field of view built for the Dark Energy Survey (DES). DECam is mounted at the prime focus of the Blanco 4-m telescope at the Cerro Tololo Inter-American Observatory (CTIO). DES is a 5-year, high-precision, multi-bandpass, photometric survey of 5000 square degrees of the southern sky that started in August 2013. In this paper we briefly review SISPI, the data acquisition and control system of the Dark Energy Camera, and follow with a discussion of our experience with the system and the lessons learned after one year of survey operations.
Wendelstein Observatory control software
Claus Gössl, Jan Snigula, Mihael Kodric, et al.
LMU München operates an astrophysical observatory on Mt. Wendelstein [1], which has recently been equipped with a modern 2m-class telescope [2,3]. The new Fraunhofer telescope started science operations in autumn 2013 with a 64 Mpixel, 0.5 x 0.5 square degree FoV wide-field camera [4], and will successively be equipped with a 3-channel optical/NIR camera [5] and two fibre-coupled spectrographs (the IFU spectrograph VIRUS-W [6], already in operation at the 2.7m telescope at McDonald Observatory, Texas, and an upgraded Echelle spectrograph FOCES [7,8], formerly operated at Calar Alto observatory, Spain). All instruments will be mounted simultaneously and can be activated within a minute. The observatory also operates a small 40cm telescope with a CCD camera and a simple fibre-coupled spectrograph for student labs and photometric monitoring, as well as a large amount of support equipment such as a meteo station, all-sky cameras, a multitude of webcams, and a complex building control system environment. Here we describe the ongoing effort to build a centralised control interface for all hardware. This includes remote/robotic operation, visualisation via web browser technologies, and data processing and archiving.
VLT instruments: industrial solutions for non-scientific detector systems
P. Duhoux, J. Knudstrup, P. Lilley, et al.
Recent improvements in industrial vision technology and products, together with the increasing need for high-performance, cost-efficient technical detectors for astronomical instrumentation, have led ESO, with the contribution of INAF, to evaluate this trend and elaborate ad-hoc solutions which are interoperable and compatible with the evolution of VLT standards. The ESPRESSO spectrograph shall be the first instrument deploying this technology. ESO's Technical CCD (hereafter TCCD) requirements are extensive and demanding. A lightweight, low-maintenance, rugged and high-performance TCCD camera product, or family of products, is required which can operate in the extreme environmental conditions present at ESO's observatories with minimum maintenance and minimal downtime. In addition, the camera solution needs to be interchangeable between different technical roles, e.g. slit viewing, pupil and field stabilization, with excellent performance characteristics under a wide range of observing conditions together with ease of use for the end user. Interoperability is enhanced by conformance to recognized electrical, mechanical and software standards. Technical requirements and evaluation criteria for the TCCD solution are discussed in more detail. A software architecture has been adopted which facilitates easy integration with TCCDs from different vendors. The communication with the devices is implemented by means of dedicated adapters allowing usage of the same core framework (business logic). Preference has been given to cameras with an Ethernet interface, using standard TCP/IP-based communication. While the preferred protocol is the industrial standard GigE Vision, not all vendors supply cameras with this interface, hence proprietary socket-based protocols are also acceptable with the provision of a validated Linux-compliant API. A fundamental requirement of the TCCD software is that it shall allow for a seamless integration with the existing VLT software framework. ESPRESSO is a fiber-fed, cross-dispersed echelle spectrograph that will be located in the Combined-Coudé Laboratory of the VLT at the Paranal Observatory in Chile. It will be able to operate either using the light of any one of the UTs or using the incoherently combined light of up to four UTs. The stabilization of the incoming beam is achieved by dedicated piezo systems controlled via active loops closed on 4 + 4 dedicated TCCDs for the stabilization of the pupil image and of the field, with a frequency goal of 3 Hz on a 2nd to 3rd magnitude star. An additional, ninth TCCD system shall be used as an exposure meter. In this paper we will present the technical CCD solution for future VLT instruments.
Data Processing and Pipelines
ALMA service data analysis and level 2 quality assurance with CASA
Dirk Petry, Baltasar Vila-Vilaro, Eric Villard, et al.
The Atacama Large mm and sub-mm Array (ALMA) radio observatory is one of the world's largest astronomical projects. After the very successful conclusion of the first observing cycles, Early Science Cycles 0 and 1, the ALMA project can report many successes and lessons learned. The science data, taken interleaved with commissioning tests for the still continuing addition of new capabilities, have already resulted in numerous publications in high-profile journals. The increasing data volume and complexity are challenging but under control. The radio-astronomical data analysis package "Common Astronomy Software Applications" (CASA) has played a crucial role in this effort. This article describes the implementation of the ALMA data quality assurance system, in particular level 2, which is based on CASA, and the lessons learned.
On-board CME detection algorithm for the Solar Orbiter-METIS coronagraph
A. Bemporad, V. Andretta, M. Pancrazzi, et al.
The METIS coronagraph is one of the instruments in the payload of the ESA Solar Orbiter mission, to be launched in 2017. The spacecraft will operate much like a planetary encounter mission, with the main scientific activity taking place with the remote-sensing instruments during three 10-day intervals per orbit: optimization of the different instrument observing modes will be crucial. One of the key scientific targets of METIS will be the study of transient ejections of mass through the solar corona (Coronal Mass Ejections - CMEs) and their heliospheric evolution. METIS will provide for the first time imaging of CMEs in two different wavelength bands: VL (visible light, 580-640 nm) and UV (Lyman-α line of HI at 121.6 nm). The detection of transient phenomena shall be managed directly by the METIS Processing and Power Unit (MPPU) by means of both external triggers ("flags") coming from other Solar Orbiter instruments, and internal "flags" produced directly by the METIS on-board software. The METIS on-board algorithm for the automatic detection of CMEs will be based on running differences between consecutive images, re-binned to very low resolution and thresholded for significant changes over a minimum value. Given the small relative variation of white-light intensity during CMEs, the algorithm will take advantage of VL images acquired with different polarization angles to maximize the detection capability; possible false detections should be automatically managed by the algorithm. The algorithm will be able to provide the CME first detection time, the latitudinal direction of propagation on the plane of the sky (within 45 degrees), and a binary flag indicating whether a "halo CME" has been detected.
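A sketch of the running-difference scheme described above, in Python/numpy for illustration: consecutive images are re-binned to very low resolution, differenced, and thresholded. The bin factor and threshold are placeholder values, not METIS parameters.

    import numpy as np

    def rebin(image: np.ndarray, factor: int) -> np.ndarray:
        """Re-bin to very low resolution by averaging factor x factor blocks."""
        h, w = image.shape
        return image[:h - h % factor, :w - w % factor] \
            .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def cme_flag(previous: np.ndarray, current: np.ndarray,
                 factor: int = 32, threshold: float = 0.01) -> bool:
        """True if any low-resolution block changed by more than `threshold`
        relative to the previous image, mimicking an on-board trigger."""
        prev = rebin(previous, factor)
        diff = np.abs(rebin(current, factor) - prev)
        return bool((diff / np.maximum(prev, 1e-12) > threshold).any())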
MASCARA: data handling, processing, and calibration
R. Stuik, A.-L. Lesage, A. Jakobs, et al.
MASCARA, the Multi-site All-Sky CAmeRA, consists of several fully-automated stations distributed across the globe. Its goal is to find exoplanets transiting the brightest stars, in the V = 4 to 8 magnitude range, currently probed neither by space- nor by ground-based surveys. The nearby transiting planet systems that MASCARA is expected to discover will be key targets for future detailed planet atmosphere observations. Each station contains five wide-angle cameras monitoring the near-entire sky at each location. Once fully deployed, MASCARA will provide nearly continuous coverage of the dark sky, down to magnitude 8, at sub-minute cadence. Effectively taking an image of the full sky every 6.4 seconds, MASCARA will produce approximately 500 GB of raw data per night, per station. These data need to be processed in order to produce calibrated light curves for up to ~40,000 stars down to magnitude 8, with a signal-to-noise ratio of better than 100. The aim of the data reduction pipeline is to process the data locally and in real time, both to have immediate quality control and to prevent a data backlog. Although the cameras are fixed and the stars therefore drift over the CCDs, MASCARA is a targeted mission. Data processing consists of three main steps: (1) compute a complete astrometric solution to sub-pixel level for each exposure and extract postage stamps for each of the stars in the field of view; (2) perform accurate photometry on each of the postage stamps, including background subtraction and identification of errors in the photometry due to bad pixels, satellites, airplanes or laser guide stars; (3) remove fluctuations on time scales typical for transits, i.e., several hours, caused by, for example, the camera and atmospheric transmission, color variations in stars and pixel-to-pixel gain fluctuations. Photometry on short time scales already shows noise levels close to the photon noise limit, and using a combination of calibration and relative photometry the red-noise component can be reduced to close to this photon noise limit, allowing for semi-automated identification of exoplanet transits. This paper discusses the data handling, processing and calibration, and shows the first results of the pipeline.
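As an illustration of step (2), a background-subtracted aperture sum on a single postage stamp might look like the following numpy sketch; the radii and the median sky estimate are our own placeholder choices, not MASCARA's calibrated procedure.

    import numpy as np

    def stamp_photometry(stamp: np.ndarray, r_aperture: float = 3.0,
                         r_sky: float = 6.0) -> float:
        """Background-subtracted flux of the star centred on the stamp."""
        cy, cx = (np.asarray(stamp.shape) - 1) / 2.0
        y, x = np.indices(stamp.shape)
        r = np.hypot(y - cy, x - cx)
        sky = np.median(stamp[r >= r_sky])   # per-pixel background level
        aperture = r <= r_aperture
        return float((stamp[aperture] - sky).sum())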
Data management pipeline and hardware facilities for J-PAS and J-PLUS surveys archiving and processing
D. Cristóbal-Hornillos, J. Varela, A. Ederoclite, et al.
The Observatorio Astrofísico de Javalambre has two main telescopes: the JST/T250, a 2.5m telescope with a 3deg field of view, and the JAST/T80, with a 2deg field of view. From the OAJ, two surveys of 8500 square degrees will be carried out: J-PAS, using 54 narrow-band and several broad-band filters, and J-PLUS, using 12 filters. Both surveys will produce ~2.5 PB of data. This contribution presents the software and hardware architecture to store, process and publish the data. Results on pipeline and hardware performance with data collected during the first months of JAST/T80 operation will be presented.
Control Systems for Spectrographs
Fibre positioning algorithms for the WEAVE spectrograph
David L. Terrett, Ian J. Lewis, Gavin Dalton, et al.
WEAVE is the next-generation wide-field optical spectroscopy facility for the William Herschel Telescope (WHT) on La Palma, Canary Islands, Spain. It is a multi-object "pick and place" fibre-fed spectrograph with more than one thousand fibres, similar in concept to the Australian Astronomical Observatory's 2dF [1] instrument, with two observing plates, one of which is observing the sky while the other is being reconfigured by a robotic fibre positioner. It will be capable of acquiring more than 10000 star or galaxy spectra a night. The WEAVE positioner concept uses two robots working in tandem in order to reconfigure a fully populated field within the expected 1-hour dwell time for the instrument (a good match between the required exposure times and the limit of validity of a given configuration due to the effects of differential refraction). This presents additional constraints and complications for the software that determines the optimal path from one configuration to the next, particularly given the large number of fibre crossings implied by the 1000-fibre multiplex. This paper describes the algorithms and programming techniques used in the prototype implementations of the field configuration tool and the fibre positioner robot controller developed to support the detailed design of WEAVE.
Collision-free motion planning for fiber positioner robots: discretization of velocity profiles
Laleh Makarem, Jean-Paul Kneib, Denis Gillet, et al.
The next generation of large-scale spectroscopic survey experiments, such as DESI, will use thousands of fiber positioner robots packed on a focal plate. In order to maximize the observing time with this robotic system, we need to move the fiber-ends of all positioners in parallel from the previous to the next target coordinates. Direct trajectories are not feasible due to collision risks that could damage the robots and impact the survey operation and performance. We have previously developed a motion planning method based on a novel decentralized navigation function for collision-free coordination of fiber positioners. The navigation function takes into account the configuration of positioners as well as their envelope constraints. The motion planning scheme has linear complexity and short motion duration (2.5 seconds with a maximum speed of 30 rpm for the positioner), which is independent of the number of positioners. These two key advantages of the decentralization make the method a promising solution for the collision-free motion-planning problem in the next generation of fiber-fed spectrographs. In a framework where a centralized computer communicates with the positioner robots, communication overhead can be reduced significantly by using velocity profiles consisting of a few bits only. We present here the discretization of velocity profiles that ensures the feasibility of real-time coordination for a large number of positioners. The modified motion planning method, which generates piecewise-linearized position profiles, guarantees collision-free trajectories for all the robots. The velocity profiles fit in a few bits at the expense of higher computational cost.
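The discretization idea can be illustrated with a short sketch: quantize a smooth speed profile onto a small set of levels so that each time step of the trajectory is transmitted in a few bits. The bit width is a placeholder, and the peak speed and duration are taken loosely from the abstract (30 rpm, 2.5 s); this is not the actual coding the authors use.

    import numpy as np

    BITS = 3                   # bits transmitted per time step
    LEVELS = 2 ** BITS         # 8 admissible speed levels
    V_MAX = 30.0               # rpm, maximum positioner speed

    def discretize(profile: np.ndarray) -> np.ndarray:
        """Map each speed sample (rpm) to an integer code in [0, LEVELS)."""
        codes = np.rint(profile / V_MAX * (LEVELS - 1))
        return codes.clip(0, LEVELS - 1).astype(np.uint8)

    def reconstruct(codes: np.ndarray) -> np.ndarray:
        """Positioner-side decoding into a piecewise-constant speed profile."""
        return codes.astype(float) / (LEVELS - 1) * V_MAX

    t = np.linspace(0.0, 2.5, 50)             # 2.5 s motion
    smooth = V_MAX * np.sin(np.pi * t / 2.5)  # smooth collision-free profile
    sent = discretize(smooth)                 # only 3 bits per step on the wire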
WEAVE core processing system
Nicholas A. Walton, Mike Irwin, James R. Lewis, et al.
WEAVE is an approved massive wide-field multi-object optical spectrograph (MOS) currently entering its build phase, destined for use on the 4.2-m William Herschel Telescope (WHT). It will be commissioned and begin survey operations in 2017. This paper describes the core processing system (CPS) being developed to process the bulk data flow from WEAVE. We describe the processes and techniques to be used in producing the scientifically validated 'Level 1' data products from the WEAVE data. CPS outputs will include calibrated one-dimensional spectra and initial estimates of basic parameters such as radial velocities (for stars) and redshifts (for galaxies).
Field target allocation and routing algorithms for Starbugs
Michael Goodwin, Nuria P. F. Lorente, Christophe Satorre, et al.
Starbugs are miniaturised robotic devices that position optical fibres over a telescope's focal plane, operating in parallel, for high-multiplex spectroscopic surveys. The key advantage of the Starbug positioning system is its potential to configure fields of hundreds of targets in a few minutes, consistent with typical detector readout times. Starbugs have been selected as the positioning technology for the TAIPAN (Transforming Astronomical Imaging surveys through Polychromatic Analysis of Nebulae) instrument, a prototype for MANIFEST (Many Instrument Fiber System) on the GMT (Giant Magellan Telescope). TAIPAN consists of a 150-fibre Starbug positioner accessing the 6-degree field of view of the AAO's UK Schmidt Telescope at Siding Spring Observatory. For TAIPAN, it is important to optimise the target allocation and routing algorithms to provide the fastest configuration times. We present details of the algorithms and results of the simulated performance.
Commissioning MOS and Fabry-Perot modes for the Robert Stobie Spectrograph on the Southern African Large Telescope
A. R. Koeslag, T. B. Williams, K. H. Nordsieck, et al.
The Southern African Large Telescope (SALT) currently has three instruments: the imaging SALTICAM, the new High Resolution Spectrograph (HRS), which is in the process of being commissioned, and the Robert Stobie Spectrograph (RSS). RSS has multiple science modes, of which long-slit spectroscopy was originally commissioned. We have commissioned two new science modes: Multi Object Spectroscopy (MOS) and Fabry-Perot (FP). Due to the short track times available on SALT, it is vital that acquisition is as efficient as possible. This paper discusses how we implemented these modes in software and some of the challenges we had to overcome. MOS requires a slit-mask to be aligned with a number of stars. This is done in two phases: in MOS calibration, the positions of the slits are detected using a through-slit image and RA/Dec database information; in MOS acquisition, the detector sends commands to the telescope control system (TCS) in an iterative and interactive fashion for fine mask/detector alignment, to get the desired targets on the slits. There were several challenges involved with this system, and the user interface evolved to make the process as efficient as possible. We also had to overcome problems with the manufacturing process of the slit-masks. FP requires the precise alignment of each of the two etalons installed on RSS. The software makes use of calibration tables to get the etalons into roughly aligned starting positions. An exposure is then taken using a calibration arc lamp, producing a ring pattern. Measurement of the rings allows the determination of the adjustments needed to properly align the etalons. The software has been developed to optimize this process, along with software tools that allow us to fine-tune the calibration tables. The software architecture allows the complexity of automating the FP calibration and procedures to be easily managed.
Cyberinfrastructure I
Recommissioning Cassegrain instruments at the telescope currently still known as UKIRT
Maren Hauschildt-Purves, Craig A. Walther, Timothy C. Chuter, et al.
Apart from a brief Cassegrain run in the summer of 2011, UKIRT has been operated in WFCAM-only mode since January 2009, and remotely from Hilo since December 2010. UKIRT operations are now in the process of being handed over to the University of Arizona, which is interested in recommissioning at least some of the Cassegrain instruments. While at the time of this writing the work is mostly still in the planning stage, it is actively being thought through, and some of the infrastructure is being put (back) into place.
ALMA communication backbone in Chile goes optical
G. Filippi, J. Ibsen, Sandra Jaque, et al.
High-bandwidth communication has become a key factor for scientific installations such as observatories. This paper describes the technical, organizational, and operational goals, and the level of completion, of the ALMA Optical Link Project. The project's focus is the creation and operation of an effective and sustainable communication infrastructure connecting the ALMA Observatory, located in the Atacama Desert in the northern region of Chile, with the point of presence of the EVALSO infrastructure in Antofagasta, about 400 km away, and from there to the Central Office in the Chilean capital, Santiago. This new infrastructure, which will be operated on behalf of ALMA by REUNA, the Chilean National Research and Education Network, will use state-of-the-art technologies, like dark fiber from newly built cables and DWDM transmission, extending the reach of high-capacity communication to the remote region where the Observatory is located. When completed, the end-to-end Gigabit-per-second (Gbps) capable link will provide ALMA with a modern, effective, robust communication infrastructure capable of coping with present and future demands, ranging from fast-growing data transfer to rapid response mode, and from remote monitoring and engineering to virtual presence.
Back to the future: virtualization of the computing environment at the W. M. Keck Observatory
Kevin L. McCann, Denny A. Birch, Jennifer M. Holt, et al.
Over its two decades of science operations, the W. M. Keck Observatory computing environment has evolved into a distributed hybrid mix of hundreds of servers, desktops and laptops of multiple hardware platforms, O/S versions and vintages. Supporting the growing computing capabilities to meet the observatory's diverse, evolving computing demands within fixed budget constraints presents many challenges. This paper describes the significant role that virtualization is playing in addressing these challenges while improving the level and quality of service, as well as realizing significant savings across many cost areas. Starting in December 2012, the observatory embarked on an ambitious plan to incrementally test and deploy a migration to virtualized platforms to address a broad range of specific opportunities. Implementation to date has been surprisingly glitch-free, progressing well and yielding tangible benefits much faster than many expected. We describe here the general approach, starting with the initial identification of some low-hanging fruit, which also provided an opportunity to gain experience and build confidence among both the implementation team and the user community. We describe the range of challenges, opportunities and cost-savings potential. Very significant among these were the substantial power savings, which resulted in strong broad support for moving forward. We go on to describe the phasing plan, the evolving scalable architecture, some of the specific technical choices, as well as some of the individual technical issues encountered along the way. The phased implementation spans Windows and Unix servers for scientific, engineering and business operations, and virtualized desktops for typical office users as well as the more demanding graphics-intensive CAD users. Other areas discussed in this paper include staff training, load balancing, redundancy, scalability, remote access, and disaster readiness and recovery.
Refactoring GBT software to support high data rate instruments using data streaming technology
Ramón Creager, Mark Whitehead
This paper describes the motivation, design and implementation of the refactoring of the Robert C. Byrd Green Bank Telescope (GBT) monitor and control software to utilize the ZeroMQ messaging library.
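Since the abstract names ZeroMQ explicitly, a minimal publish/subscribe sketch (pyzmq) shows the streaming pattern such a refactoring typically adopts; the endpoint and topic names are hypothetical, not the GBT's.

    import zmq

    context = zmq.Context()

    # Producer side: publish samples under a topic.
    pub = context.socket(zmq.PUB)
    pub.bind("tcp://*:5556")

    # Consumer side (e.g. a monitor display): subscribe only to needed topics.
    sub = context.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt(zmq.SUBSCRIBE, b"antenna.")

    # In a real system the subscriber connects before data flows; PUB sockets
    # drop messages that have no subscriber yet.
    pub.send_multipart([b"antenna.az", b"123.456"])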
Software design for the VIS instrument onboard the Euclid mission: a multilayer approach
E. Galli, A. M. Di Giorgio, S. Pezzuto, et al.
The Euclid mission scientific payload is composed of two instruments: a VISible Imaging Instrument (VIS) and a Near Infrared Spectrometer and Photometer instrument (NISP). Each instrument has its own control unit. The Instrument Command and Data Processing Unit (VI-CDPU) is the control unit of the VIS instrument. The VI-CDPU is connected directly to the spacecraft by means of a MIL-STD-1553B bus and to the satellite Mass Memory Unit via a SpaceWire link. All the internal interfaces are implemented via SpaceWire links and include 12 high-speed lines for the data provided by the 36 focal plane CCD readout electronics (ROEs) and one link to the Power and Mechanisms Control Unit (VI-PMCU). The VI-CDPU is in charge of distributing commands to the instrument sub-systems, collecting their housekeeping parameters and monitoring their health status. Moreover, the unit has the task of acquiring, reordering, compressing and transferring the science data to the satellite Mass Memory. This last feature is probably the most challenging one for the VI-CDPU, since stringent constraints on the minimum lossless compression ratio, the maximum time for the compression execution and the maximum power consumption have to be satisfied. Therefore, an accurate performance analysis at the hardware layer is necessary, which could delay the design and development of the software too much. In order to mitigate this risk, in the multilayered design of the software we decided to introduce a middleware layer that provides a set of APIs with the aim of hiding the implementation of the hardware-connected layer from the application one. The middleware is built on top of the Operating System layer (which includes the Real-Time OS that will be adopted) and the onboard Computer Hardware. The middleware itself has a multi-layer architecture composed of four layers: the Abstract RTOS Adapter Layer (AOSAL), the Specific RTOS Adapter Layer (SOSAL), the Common Patterns Layer (CPL), and the Service Layer (SL), composed of two subgroups, the Common Service Layer (CSL) and the Specific Service Layer (SSL). The middleware is designed using the UML 2.0 standard. The AOSAL includes the abstraction of services provided by a generic RTOS (e.g. Thread/Task, Time Management, Mutex and Semaphores) as well as an abstraction of the SpaceWire and MIL-STD-1553B bus interfaces. The SOSAL is the implementation of the AOSAL for the adopted RTOS. The CPL provides a set of patterns that are general solutions for common problems related to embedded hard real-time systems. This set includes patterns for memory management, homogeneous redundancy channels, pipes and filters for data exchange, proxies for slow memories, watchdogs and reactive objects. The CPL is designed using a soft-metamodeling approach, so as to be as general as possible. Finally, the SL provides a set of services that are common to space applications. The testing of this middleware can be done both during the design, using appropriate analysis tools, and in the implementation phase, by means of unit testing tools.
Control Systems
DKIST controls model for synchronization of instrument cameras, polarization modulators, and mechanisms
The Daniel K. Inouye Solar Telescope (DKIST) will include facility instruments that perform polarimetric observations of the sun. In order for an instrument to successfully perform these observations, its Instrument Controller (IC) software must be able to tightly synchronize the activities of its sub-systems, including polarization modulators, cameras, and mechanisms. In this paper we discuss the DKIST control model for synchronizing these sub-systems without the use of hardware trigger lines, by using the DKIST Time Reference And Distribution System (TRADS) as a common time base and through sub-system control interfaces that support configuring the timing and cadence of their behavior. The DKIST Polarization Modulator Controller System (PMCS) provides an interface that allows the IC to characterize the rotation of the modulator in terms of a reference time (t0), rate, and start state. The DKIST Virtual Camera (VC) provides a complementary interface that allows data acquisitions and accumulation sequences to be specified using a reference time (t0), rate, and execution block time slices, which are cumulative offsets from t0. Reconfiguration of other instrument mechanisms such as filters, slits, or steering mirrors during the observation is the responsibility of the IC and must be carefully scheduled at known and pre-determined gaps in the VC data acquisition sequence. The DKIST TRADS provides an IEEE 1588-2008 Precision Time Protocol (PTP) service that is used to synchronize the activities of instrument sub-systems. The modulator, camera, and mechanism sub-systems subscribe to this service and can therefore perform their tasks according to a common time base. In this paper we discuss the design of the PMCS, VC, and mechanism control interfaces, and how the IC can use them to configure the behavior of these sub-systems during an observation. We also discuss the interface to TRADS and how it is used as a common time base in each of these sub-systems. We present preliminary results of the system performance against known instrument use cases.
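The t0/rate interfaces described above can be illustrated with a small sketch: given a shared reference time from a PTP-disciplined clock, each sub-system derives its own trigger instants locally, so no hardware trigger lines are needed. The function names and numbers below are ours, chosen only so that the camera frame rate is commensurate with the modulator state rate.

    def modulator_state(t: float, t0: float, rate_hz: float,
                        n_states: int = 8, start_state: int = 0) -> int:
        """Modulation state at time t for a modulator rotating at rate_hz."""
        cycles = (t - t0) * rate_hz
        return (start_state + int(cycles * n_states)) % n_states

    def camera_triggers(t0: float, frame_rate_hz: float, n_frames: int):
        """Frame start times expressed as cumulative offsets from the same t0."""
        return [t0 + i / frame_rate_hz for i in range(n_frames)]

    # Both schedules derive from the same PTP time base, so every camera frame
    # can be tagged with the modulator state it observed (sampled mid-exposure).
    t0 = 1000.0
    frames = camera_triggers(t0, frame_rate_hz=30.0, n_frames=4)
    mid = 0.5 / 30.0
    states = [modulator_state(t + mid, t0, rate_hz=3.75) for t in frames]
    assert states == [0, 1, 2, 3]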
ACS (Alma Common Software) operating a set of robotic telescopes
C. Westhues, M. Ramolla, R. Lemke, et al.
We use the ALMA Common Software (ACS) to establish a unified middleware for robotic observations with the 40cm optical, 80cm infrared and 1.5m Hexapod telescopes located at OCA (Observatorio Cerro Armazones) and the ESO 1-m located at La Silla. ACS allows us to hide the technical specifics, such as mount type or camera model, from the observer. Furthermore, ACS provides a uniform interface to the different telescopes, allowing us to run the same planning program for each telescope. Observations are carried out for long-term monitoring campaigns to study the variability of stars and AGN. We present here the specific implementation for the different telescopes.
Achieving autonomous data flow of the Automated Planet Finder (APF)
Jennifer Burt, Russell Hanson, Eugenio Rivera, et al.
The Automated Planet Finder (APF) is a dedicated, ground-based precision radial velocity facility located at Lick Observatory, operated by the University of California Observatories (UCO), atop Mt. Hamilton in California. The 2.4-m telescope and accompanying high-resolution echelle spectrograph were specifically designed for the purpose of detecting planets in the liquid-water habitable zone of low-mass stars. The telescope is operated every night (weather permitting) to achieve meaningful signal-to-noise gains from high-cadence observing and to avoid the aliasing problems inherent to planets whose periods are close to the lunar month. To take full advantage of the consistent influx of data, it is necessary to analyze each night's results before designing the next evening's target list. To address this requirement, we are in the process of developing a fully automated reduction pipeline that will take each night's data from raw FITS files to final radial velocity values and integrate those values into a master database. The database is then accessed by the publicly available Systemic console, a general-purpose software package for the analysis and combined multiparameter fitting of Doppler radial velocity observations. As each stellar system is updated, Systemic evaluates the probability that a planetary signal is present in the data, and uses this value, along with other considerations such as the star's brightness and chromospheric activity level, to assign it a priority rating for future observations. When the telescope is once again on sky, it determines the optimal targets to observe in real time using an in-house dynamic scheduler.
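The abstract does not give the actual weighting Systemic uses, but the kind of priority rating it describes (signal probability combined with brightness and chromospheric activity) might be sketched as follows; the functional form and cut-off values are entirely hypothetical.

    def priority(p_signal: float, v_mag: float, log_rhk: float) -> float:
        """Higher is better: likely signals around bright, chromospherically
        quiet stars; all weights here are invented for illustration."""
        brightness = max(0.0, (10.0 - v_mag) / 6.0)   # favour bright stars
        quietness = max(0.0, (-4.5 - log_rhk) / 0.5)  # favour low log(R'HK)
        return p_signal * (1.0 + brightness) * (1.0 + min(quietness, 1.0))

    targets = {"star_a": (0.8, 5.2, -4.9), "star_b": (0.6, 7.9, -4.6)}
    queue = sorted(targets, key=lambda s: priority(*targets[s]), reverse=True)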
STARS: a software application for the EBEX autonomous daytime star cameras
Daniel Chapman, Joy Didier, Shaul Hanany, et al.
The E and B Experiment (EBEX) is a balloon-borne telescope designed to probe polarization signals in the CMB resulting from primordial gravitational waves, gravitational lensing, and Galactic dust emission. EBEX completed an 11-day flight over Antarctica in January 2013, and data analysis is underway. EBEX employs two star cameras to achieve its real-time and post-flight pointing requirements. We wrote a software application called STARS to operate, command, and collect data from each of the star cameras, and to interface them with the main flight computer. We paid special attention to making the software robust against potential in-flight failures. We report on the implementation, testing, and successful in-flight performance of STARS.
Upgrade and standardization of real-time software for telescope systems at the Gemini telescopes
William N. Rambold, Pedro Gigoux, Cristian Urrutia, et al.
The real-time control systems for the Gemini Telescopes were designed and built in the 1990s using state-of-the-art software tools and operating systems of that time. Since these systems are in use every night, they have not been kept up-to-date and are now obsolete and very labor-intensive to support. Gemini is currently engaged in a major upgrade of its telescope control systems. This paper reviews the studies performed to select and develop a new standard operating environment for Gemini real-time systems, and the work performed so far in implementing it.
Software Engineering
Software and cyber-infrastructure development to control the Observatorio Astrofísico de Javalambre (OAJ)
A. Yanes-Díaz, J. L. Antón, S. Rueda-Teruel, et al.
The Observatorio Astrofísico de Javalambre (OAJ) is a new astronomical facility located at the Sierra de Javalambre (Teruel, Spain) whose primary role will be to conduct all-sky astronomical surveys with two unprecedented telescopes of unusually large fields of view: the JST/T250, a 2.55m telescope with a 3deg field of view, and the JAST/T80, an 83cm telescope with a 2deg field of view. The CEFCA engineering team has designed the OAJ control system as a global concept to manage, monitor, control and maintain all the observatory systems, including not only astronomical subsystems but also infrastructure and other facilities. In order to provide quality, reliability and efficiency, the OAJ control system (OCS) design is based on CIA (Control Integrated Architecture) and OEE (Overall Equipment Effectiveness) as keys to improving day and night operation processes. The OCS spans from the low-level hardware layer, including I/Os connected directly to sensors and actuators deployed across all the observatory systems (telescopes and astronomical instrumentation included), up to the high-level software layer as a tool to perform observatory operations efficiently. We give an overview of the OAJ control system design and implementation from an engineering point of view, giving details of the design criteria, technology, architecture, standards, functional blocks, model structure, development and deployment, and goals, and reporting on the current status and next steps.
Evolution of the SOFIA tracking control system
Norbert Fiebig, Holger Jakob, Enrico Pfüller, et al.
The airborne observatory SOFIA (Stratospheric Observatory for Infrared Astronomy) is undergoing a modernization of its tracking system. This includes new, highly sensitive tracking cameras, control computers, filter wheels and other equipment, as well as a major redesign of the control software. We present the experiences gathered along the migration path from an aged 19-inch VMEbus-based control system to modern industrial PCs, from the VxWorks real-time operating system to embedded Linux, and to a state-of-the-art software architecture. Further, we present the concept of operating the new camera also as a scientific instrument, in parallel with tracking.
Towards a global software architecture for operating and controlling the Cherenkov Telescope Array
Matthias Füssling, Igor Oya, Ullrich Schwanke, et al.
The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based gamma-ray instrument. CTA will allow studying the Universe in the very-high-energy gamma-ray domain, with energies ranging from a few tens of GeV to more than a hundred TeV. It will extend the currently accessible energy band while increasing the sensitivity by a factor of 10 with respect to existing Cherenkov facilities. Furthermore, CTA will enhance other important aspects like angular and energy resolution. CTA will comprise two arrays, one in the Northern hemisphere and one in the Southern hemisphere, of in total more than one hundred telescopes of three different sizes. The CTA performance requirements and the increased complexity in the operation, control and monitoring of such a large distributed multi-telescope array lead to new challenges in designing and developing the CTA control software system. Indeed, the control software system must be flexible enough to allow the simultaneous operation of multiple sub-arrays of different types of telescopes, be ready to react on short timescales to changing weather conditions or to automatic alarms for transient phenomena, be able to operate the observatory with minimal personnel effort on site, cope with the malfunctioning of single telescopes or sub-arrays of telescopes, and read out and control a large and heterogeneous set of devices. This report describes the preliminary architectural design concept for the CTA control software system that will be responsible for managing all the functionality of the CTA array, thereby enabling CTA to reach its scientific goals.
Experiences with the design and construction of wideband spectral line and pulsar instrumentation with CASPER hardware and software: the digital backend system
NRAO recently built the Digital Backend System (DIBAS) for the Shanghai Astronomical Observatory's (SHAO) 65 meter radio telescope. The machine was created from the design of the VErsatile GBT Astronomical Spectrometer (VEGAS) by adding pulsar search and timing modes to complement the VEGAS spectral line modes. Together, the pulsar and spectral line modes cover all anticipated science requirements for the 65 meter, except VLBI. This paper introduces the radio telescope backend and explores the project management challenges it posed, including managing the high level of reuse of existing FPGA designs, an aggressive schedule, and the software design constraints imposed on the project.
Phasing up ALMA
Matias Mora, Geoffrey Crew, Helge Rottmann, et al.
With the completion of the ALMA array, Development Projects are being initiated to expand the observatory's technical capabilities. The ALMA Phasing Project is one of the early ones, with the main goal of adding Very Long Baseline Interferometry (VLBI) observation capabilities. This will enable ALMA to join observations with other VLBI-capable millimeter observatories around the globe, making ALMA the most powerful millimeter VLBI station yet. A minimal-impact approach has been taken to cause as little overall work overhead at the observatory as possible and to integrate seamlessly with existing infrastructure. New hardware elements and software features are being delivered to the observatory in incremental cycles, adhering to existing workflows. This paper addresses one of the main software challenges of this project and its implementation: the continuous phasing corrections of the ALMA antenna signals. As antenna signals are summed during online processing for correlation after the observation, a phased array is a key requirement for successful VLBI observations. A new observing mode that inherits all of the existing interferometry functionality is the cornerstone of this development. Further additions include new correlator protocols to modify the data flow, new VLBI-specific device controllers, online phase solvers and observation metadata adaptations. All of these are being added to existing ALMA software subsystems, taking advantage of the modular design and reusing as much code as possible. The design has included a strong focus on simulation capabilities, to verify as much of the functionality as possible without the need for scarce telescope time. The first on-site tests of the phasing loop using the ALMA baseline correlator and antennas were performed in early 2014, and the hardware is expected to be completely installed by the middle of the same year.
Reflective memory recorder upgrade: an opportunity to benchmark PowerPC and Intel architectures for real time
Roberto Abuter, Helmut Tischer, Robert Frahm
Several high-frequency loops are required to run the VLTI (Very Large Telescope Interferometer), e.g. for fringe tracking, angle tracking, vibration cancellation and data capture. All these loops rely on low-latency real-time computers based on the VMEbus, Motorola PowerPC hardware architecture. In this context, one highly demanding application in terms of cycle time, latency and data transfer volume is the VLTI centralized recording facility, the so-called RMN recorder (Reflective Memory Recorder). This application captures and transfers data flowing through the distributed memory of the system in real time. Some of the VLTI data producers run at frequencies up to 8 kHz. With the evolution from first-generation instruments like MIDI, PRIMA, and AMBER, which use one or two baselines, to second-generation instruments like MATISSE and GRAVITY, which will use all six baselines simultaneously, the quantity of signals has increased by at least a factor of six. This has led to a significant overload of the RMN recorder, which has reached the natural limits imposed by the underlying hardware. At the same time, new, more powerful computers based on the Intel multicore families of CPUs and PCI buses have become available. To improve the performance of the RMN recorder application and make it capable of coping with the demands of the new-generation instruments, a slightly modified implementation has been developed and integrated into an Intel-based multicore computer running the VxWorks real-time operating system. The core of the application is based on the standard VLT software framework for instruments. The real-time task reads from the reflective memory using the onboard DMA access, and captured data is transferred to the outside world via a TCP socket on a dedicated Ethernet connection. The diversity of the software and hardware involved makes this application suitable as a benchmarking platform. A quantitative comparison between the two implementations (PowerPC and Intel multicore) under different workloads is presented; in particular, the interrupt handling, reflective memory access, DMA readout, and TCP stack performances are compared. To test the limits of the new hardware, the basic tasks, such as the readout and the network transfer, were separated onto different cores and the throughput reevaluated. The results show that the RMN recorder can extend its operational range from 10 kHz to above 16 kHz by moving from PowerPC to Intel multicore. In general, a reduction of latencies and computational delays can be expected by upgrading applications to Intel multicore architectures.
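The capture-and-stream pattern the paper benchmarks decouples a fast, fixed-rate readout from network transfer. The following is a minimal Python sketch of that pattern only; block size, rates, host and port are invented placeholders, and a plain memory read stands in for the reflective-memory DMA access (this is not the VLT software framework implementation).

```python
# Hedged sketch: a producer thread stands in for the real-time reflective
# memory readout; a TCP sender forwards captured blocks. All names/sizes
# are illustrative.
import queue
import socket
import threading
import time

BLOCK_SIZE = 4096          # hypothetical size of one reflective-memory snapshot
CAPTURE_RATE_HZ = 1000     # stand-in for the multi-kHz producer loops

def capture_loop(out_queue: queue.Queue) -> None:
    """Simulate the real-time task: read one block per cycle and enqueue it."""
    period = 1.0 / CAPTURE_RATE_HZ
    while True:
        block = bytes(BLOCK_SIZE)      # placeholder for a DMA read
        out_queue.put(block)
        time.sleep(period)

def sender_loop(in_queue: queue.Queue, host: str, port: int) -> None:
    """Forward captured blocks to a consumer over a dedicated TCP socket."""
    with socket.create_connection((host, port)) as sock:
        while True:
            sock.sendall(in_queue.get())

if __name__ == "__main__":
    q: queue.Queue = queue.Queue(maxsize=1024)  # buffer decouples capture from network jitter
    threading.Thread(target=capture_loop, args=(q,), daemon=True).start()
    sender_loop(q, "127.0.0.1", 9000)  # assumes a listener on this port
```

The bounded queue between the two loops is the key design point: the capture side never blocks on the TCP stack, which is what makes the achievable capture rate a property of the CPU and memory architecture under test.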
Innovations
A web-based dashboard for the high-level monitoring of ALMA
Emmanuel Pietriga, Giorgio Filippi, Luis Véliz, et al.
The ALMA radio-telescope’s operations depend on the availability of high-level, easy-to-understand status information about all of its components. The ALMA Dashboard aims at providing an all-in-one-place near-real-time overview of the observatory’s key elements and figures to both line and senior management. The Dashboard covers a wide range of elements beyond antennas, such as pads, correlator and central local oscillator. Data can be displayed in multiple ways, including: a table view, a compact view fitting on a single screen, a timeline showing detailed information over time, a logbook, a geographical map.
Software for autonomous astronomical observatories: challenges and opportunities in the age of big data
Piotr W. Sybilski, Rafał Pawłaszek, Stanisław K. Kozłowski, et al.
We present the software solution developed for a network of autonomous telescopes, deployed and tested in the Solaris Project. The software aims to fulfil the contemporary needs of distributed autonomous observatories housing medium-sized telescopes: ergonomics, availability, security and reusability. The datafication of such facilities seems inevitable, and we give a preliminary study of the challenges and opportunities awaiting software developers. Project Solaris is a global network of four 0.5 m autonomous telescopes conducting a survey of eclipsing binaries in the Southern Hemisphere. The Project's goal is to detect and characterise circumbinary planets using the eclipse timing method. The observatories are located on three continents, and the headquarters coordinating and monitoring the network is in Poland. All four are operational as of December 2013.
DKIST visible tunable filter control software: connecting the DKIST framework to OPC UA
Alexander Bell, Clemens Halbgewachs, Thomas J. Kentischer, et al.
The Visible Tunable Filter (VTF) is a narrowband tunable filter system for imaging spectroscopy and spectropolarimetry, based on large-format Fabry-Pérot interferometers, currently being built by the Kiepenheuer Institut fuer Sonnenphysik for the Daniel K. Inouye Solar Telescope (DKIST). The control software must handle around 30 motorised drives, 3 etalons, a polarizing modulator, a helium-neon laser for system calibration, temperature controllers and a multitude of sensors. The VTF is foreseen as one of DKIST's first-light instruments and should become operational in 2019. In the design of the control software we strictly separate the high-level part, interfacing to the DKIST Common Services Framework (CSF), from the low-level control system software, which guarantees real-time performance and synchronization to precision time protocol (PTP) based observatory time. For the latter we chose a programmable logic controller (PLC) from Beckhoff Automation GmbH, which supports a wide set of input and output devices as well as distributed clocks for synchronizing signals down to the sub-microsecond level. In this paper we present the design of the required control system software, as well as our work on extending the DKIST CSF to use the OPC Unified Architecture (OPC UA) standard, which provides a cross-platform communication standard for process control and automation, as the interface between the high-level software and the real-time control system.
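To make the high-level/low-level split concrete, here is a minimal sketch of talking to a PLC over OPC UA using the open-source python-opcua package (pip install opcua). The endpoint URL and node identifiers are hypothetical, not the actual VTF address space.

```python
# Hedged sketch of OPC UA client access to PLC process variables.
from opcua import Client

client = Client("opc.tcp://vtf-plc.example.org:4840")  # hypothetical endpoint
client.connect()
try:
    # Read a temperature sensor exposed by the PLC (hypothetical node ID).
    temperature = client.get_node("ns=2;s=VTF.Etalon1.Temperature").get_value()
    print(f"etalon 1 temperature: {temperature:.3f}")

    # Command a motorised drive to a new position (hypothetical node ID).
    target = client.get_node("ns=2;s=VTF.FilterWheel.TargetPosition")
    target.set_value(3)
finally:
    client.disconnect()
```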
The Robo-AO automated intelligent queue system
Reed L. Riddle, Kristina Hogstrom, Athanasios Papadopoulos, et al.
Robo-AO is the first automated laser adaptive optics instrument. In just its second year of scientific operations, it has completed the largest adaptive optics surveys to date, each comprising thousands of targets. Robo-AO uses a fully automated queue scheduling system that selects targets based on criteria entered on a per observing program or per target basis, and includes the ability to coordinate with US Strategic Command automatically to avoid lasing space assets. This enables Robo-AO to select among thousands of targets at a time, and achieve an average observation rate of approximately 20 targets per hour.
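The criteria-driven selection this describes can be illustrated with a toy scorer; the fields, weights and elevation cut below are invented for illustration and are not Robo-AO's actual algorithm.

```python
# Toy sketch of criteria-driven queue selection.
import math
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    priority: float        # per-program or per-target priority
    elevation_deg: float   # current elevation
    laser_clear: bool      # e.g. cleared against laser-avoidance windows

def score(t: Target) -> float:
    """Higher is better: weight priority by airmass (lower airmass preferred)."""
    airmass = 1.0 / math.cos(math.radians(90.0 - t.elevation_deg))
    return t.priority / airmass

def select_next(targets: list[Target]) -> Target | None:
    observable = [t for t in targets if t.laser_clear and t.elevation_deg > 30.0]
    return max(observable, key=score, default=None)

targets = [
    Target("A", priority=2.0, elevation_deg=65.0, laser_clear=True),
    Target("B", priority=3.0, elevation_deg=35.0, laser_clear=True),
    Target("C", priority=5.0, elevation_deg=80.0, laser_clear=False),
]
print(select_next(targets).name)   # prints "A"
```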
High-performance quantitative robust switching control for optical telescopes
William P. Lounsbury, Mario Garcia-Sanz
This paper introduces an innovative robust and nonlinear control design methodology for high-performance servo systems in optical telescopes. The dynamics of optical telescopes typically vary with azimuth and altitude angle, temperature, friction, speed and acceleration, leading to nonlinearities and plant parameter uncertainty. The methodology proposed in this paper combines robust Quantitative Feedback Theory (QFT) techniques with nonlinear switching strategies that simultaneously achieve the best characteristics of a set of very active (fast) robust QFT controllers and very stable (slow) robust QFT controllers. A general dynamic model and a variety of specifications from several commercially available amateur Newtonian telescopes are used for the controller design as well as the simulation and validation. It is also proven that the nonlinear/switching controller is stable for any switching strategy and switching velocity, according to frequency-domain conditions based on common quadratic Lyapunov functions (CQLF) and the circle criterion.
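For reference, the standard CQLF condition underlying arbitrary-switching stability claims of this kind is sketched below (this is the textbook statement, not the paper's full proof): if the closed-loop state matrices of all switched controllers share a single quadratic Lyapunov function, stability holds for every switching signal.

```latex
% Common quadratic Lyapunov function (CQLF) condition for the switched
% closed-loop systems \dot{x} = A_i x, i = 1, ..., N:
\exists\, P = P^{\mathsf{T}} \succ 0 \;:\quad
A_i^{\mathsf{T}} P + P A_i \prec 0 \quad \text{for all } i \in \{1,\dots,N\},
\qquad V(x) = x^{\mathsf{T}} P x .
```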
Cyberinfrastructure II
Unveiling ALMA software behavior using a decoupled log analysis framework
Juan Pablo Gil, Alexis Tejeda, Tzu-Chiang Shen, et al.
ALMA software is a complex distributed system installed on more than one hundred computers and interacting with more than one thousand hardware devices. A normal observation follows a flow that involves almost that entire infrastructure in a coordinated way. The Software Operation Support team (SOFTOPS) comprises specialized engineers who analyze the generated software log messages on a daily basis to detect bugs and failures and to predict eventual failures. These log messages can reach up to 30 GB per day. We describe a decoupled and non-intrusive log analysis framework and the tools implemented on top of it to identify well-known problems, measure the times taken by specific tasks, and detect abnormal behavior in the system, alerting the engineers to take corrective actions. The main advantage of this approach is that the analysis itself does not interfere with the performance of the production system, allowing multiple analyzers to run in parallel. In this paper we describe the selected framework and show the results of some of the implemented tools.
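The decoupling described above comes from working on a copy of the log stream, outside the production system. A minimal sketch of such an analyzer, with invented patterns and file name (not the actual SOFTOPS tooling):

```python
# Decoupled log-analysis sketch: patterns for "well known problems" are
# matched against a copy of the logs, so production is never touched.
import re
from collections import Counter

KNOWN_PROBLEMS = {
    "antenna_timeout": re.compile(r"TIMEOUT .* Antenna"),     # illustrative
    "corr_overflow":   re.compile(r"Correlator .* overflow"), # illustrative
}

def analyze(log_lines, alert=print):
    counts = Counter()
    for line in log_lines:
        for name, pattern in KNOWN_PROBLEMS.items():
            if pattern.search(line):
                counts[name] += 1
                alert(f"[{name}] {line.rstrip()}")
    return counts

if __name__ == "__main__":
    with open("acs.log") as f:          # a copy of the production logs
        print(analyze(f))
```

Because each analyzer only reads the copied stream, adding a new check is just adding a pattern, and several analyzers can run side by side without coordination.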
Performance testing open source products for the TMT event service
K. Gillies, Yogesh Bhate
The software system for TMT is a distributed system with many components running on many computers. Each component integrates with the overall system using a set of software services. The Event Service is a publish-subscribe message system that allows the distribution of demands and other events. The performance requirements for the Event Service are demanding, with a goal of over 60,000 events/second. This service is critical to the success of the TMT software architecture; therefore, a project was started to survey the open-source and commercial market for viable software products. A trade study led to the selection of five products for thorough testing using a specially constructed computer/network configuration and test suite. The best-performing product was chosen as the basis of a prototype Event Service implementation. This paper describes the process and performance tests conducted by Persistent Systems that led to the selection of the product for the prototype Event Service.
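The shape of such a throughput test is simple: publish a large number of small events and measure the sustained events/second at the subscriber. A sketch of that measurement, with an in-process queue standing in for the broker under test (the real tests ran against the five candidate products over a dedicated network):

```python
# Throughput microbenchmark sketch for a publish-subscribe path.
import queue
import threading
import time

N_EVENTS = 100_000

def run_benchmark() -> float:
    q: queue.Queue = queue.Queue()

    def subscriber():
        for _ in range(N_EVENTS):
            q.get()

    t = threading.Thread(target=subscriber)
    t.start()
    start = time.perf_counter()
    for i in range(N_EVENTS):
        q.put({"seq": i, "demand": 0.0})   # a small telescope-demand-style event
    t.join()
    return N_EVENTS / (time.perf_counter() - start)

print(f"{run_benchmark():,.0f} events/s")
```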
EMIR: a configurable hierarchical system for event monitoring and incident response
The Event Monitor and Incident Response system (Emir) is a flexible, general-purpose system for monitoring and responding to all aspects of instrument, telescope, and general facility operations, and has been in use at the Automated Planet Finder telescope for two years. Responses to problems can include both passive actions (e.g. generating alerts) and active actions (e.g. modifying system settings). Emir includes a monitor-and-response daemon, plus graphical user interfaces and text-based clients that automatically configure themselves from data supplied at runtime by the daemon. The daemon is driven by a configuration file that describes each condition to be monitored, the actions to take when the condition is triggered, and how the conditions are aggregated into hierarchical groups of conditions. Emir has been implemented for the Keck Task Library (KTL) keyword-based systems used at Keck and Lick Observatories, but can be readily adapted to many event-driven architectures. This paper discusses the design and implementation of Emir, and the challenges in balancing the competing demands for simplicity, flexibility, power, and extensibility. Emir's design lends itself well to multiple purposes, and in addition to its core monitor and response functions, it provides an effective framework for computing running statistics, aggregate values, and summary state values from the primitive state data generated by other subsystems, and even for creating quick-and-dirty control loops for simple systems.
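The configuration-driven monitor pattern, a condition, a group, a passive alert and an active response, can be sketched as follows; the keyword names, threshold and actions are invented for illustration and do not reproduce Emir's actual configuration syntax.

```python
# Sketch of a condition-driven monitor: each entry names a test on incoming
# keyword-style values plus passive (alert) and active (setting-change) actions.
CONDITIONS = [
    {
        "name": "dome_humidity_high",
        "group": "weather",
        "test": lambda kw: kw.get("DOMEHUM", 0.0) > 85.0,
        "passive": "alert operator: humidity above close limit",
        "active": lambda kw: kw.update(DOMESHUT="closed"),   # modify settings
    },
]

def poll(keywords: dict) -> None:
    for cond in CONDITIONS:
        if cond["test"](keywords):
            print(f"[{cond['group']}/{cond['name']}] {cond['passive']}")
            cond["active"](keywords)

state = {"DOMEHUM": 91.0, "DOMESHUT": "open"}
poll(state)
print(state)   # DOMESHUT is now 'closed'
```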
DKIST visible broadband imager data processing pipeline
The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.
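The frame-selection step mentioned above can be illustrated with a simple sharpness ranking; this toy version (RMS image gradient as the metric, a 10% keep fraction) only shows the idea, while the actual VBI nodes run equivalent selection on GPUs within the DHS.

```python
# Frame-selection sketch: rank frames by a sharpness metric, keep the best.
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    gy, gx = np.gradient(frame.astype(float))
    return float(np.sqrt(np.mean(gx**2 + gy**2)))

def select_frames(frames: list[np.ndarray], keep_fraction: float = 0.1):
    ranked = sorted(frames, key=sharpness, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

burst = [np.random.poisson(100.0, (64, 64)) for _ in range(50)]
best = select_frames(burst)
print(len(best), "frames kept")
```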
Software framework for the upcoming MMT Observatory primary mirror re-aluminization
Details of the software framework for the upcoming in-situ re-aluminization of the 6.5 m MMT Observatory (MMTO) primary mirror are presented. This framework includes: 1) a centralized key-value store and data structure server for data exchange between software modules, 2) a newly developed hardware-software interface for faster data sampling and better hardware control, 3) automated control algorithms based upon empirical testing, modeling, and simulation of the aluminization process, 4) re-engineered graphical user interfaces (GUIs) that use state-of-the-art web technologies, and 5) redundant relational databases for data logging. The redesign of the software framework has several objectives: 1) automated process control to provide more consistent and uniform mirror coatings, 2) optional manual control of the aluminization process, 3) modular design to allow flexibility in process control and software implementation, 4) faster data sampling and logging rates to better characterize the approximately 100-second aluminization event, and 5) synchronized "real-time" web application GUIs to provide all users with exactly the same data. The framework has been implemented as four modules interconnected by the data store/server; the four modules are integrated into two Linux system services that start automatically at boot time and remain running at all times. Performance of the software framework is assessed through extensive testing within 2.0 meter and smaller coating chambers at the Sunnyside Test Facility. The redesigned software framework helps ensure that a better performing and longer lasting coating will be achieved during the re-aluminization of the MMTO primary mirror.
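The "key-value store and data structure server" pattern maps naturally onto a server such as Redis; the abstract does not name the product, so the following is a hedged sketch assuming Redis (pip install redis), with invented key and channel names, of how modules could exchange process data and push synchronized updates to web GUIs.

```python
# Hedged sketch: data exchange between modules via a key-value/pub-sub server.
import json
import time

import redis

r = redis.Redis(host="localhost", port=6379)

# A data-acquisition module publishes the latest chamber reading ...
sample = {"t": time.time(), "pressure_torr": 2.1e-6, "filament_amps": 48.0}
r.set("alum:latest", json.dumps(sample))        # current value for late joiners
r.publish("alum:updates", json.dumps(sample))   # push to live web GUIs

# ... and a GUI/backend module reads it back.
print(json.loads(r.get("alum:latest")))
```

Keeping both a stored "latest" key and a pub/sub channel is what lets every GUI show exactly the same data: new clients read the key, live clients follow the channel.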
Project Management
Ten things we would do differently today: reflections on a decade of ALMA software development
Brian Glendenning, Erich Schmid, George Kosugi, et al.
The software for the Atacama Large Millimeter/submillimeter Array (ALMA) that has been developed in a collaboration of ESO, NRAO, NAOJ and the Joint ALMA Observatory for well over a decade is an integrated end-to-end software system of about six million lines of source code. As we enter the third cycle of science observations, we reflect on some of the decisions taken and call out ten topics where we could have taken a different approach at the time, or would take a different approach in today’s environment. We believe that these lessons learned should be helpful as the next generation of large telescope projects move into their construction phases.
Implementing Kanban for agile process management within the ALMA Software Operations Group
After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives on: (1) providing software support for tasks related to system integration, scientific commissioning and verification, as well as Early Science observations; (2) testing the remaining software features still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Because of their different stakeholders, these tasks vary widely in priority, lifespan and complexity. Aiming to give every task the proper priority and traceability without overloading our engineers, we introduced the Kanban methodology into our processes in order to balance the demand on the team against the throughput of the delivered work. The aim of this paper is to share the experience gained during the implementation of Kanban in our processes, describing the difficulties we found and the solutions and adaptations that led to our current, still evolving implementation, which has greatly improved our throughput, prioritization and problem traceability.
End-to-end observatory software modeling using domain specific languages
José M. Filgueira, Matthieu Bec, Ning Liu, et al.
The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSL) that supports a model-driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, facilitate the construction of technical specifications in a uniform way, ease communication between developers and domain experts, and provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.
The cost of developing and maintaining the monitoring and control software of large ground-based telescopes
Several large ground-based telescopes are currently under development, such as the SKA and CCAT in radio, as well as several 30m-class telescopes in the optical. A common challenge all of these telescopes face is estimating the cost of designing, constructing and maintaining their required software. This paper presents a cost breakdown of the monitoring and control software packages implemented for ASKAP, including the effort spent to develop and maintain the in-house code and the effort saved by using third-party software such as EPICS. The costing for ASKAP is compared to that for the monitoring and control software of other large ground-based telescopes such as ALMA and the VLT. This comparison highlights trends and commonalities in the costing and provides a useful guide for costing future telescope control software builds or upgrades and the ongoing maintenance cost.
Poster Session
INO340 telescope control system: software architecture and development
The Iranian National Observatory telescope (INO340) is a 3.4 m Alt-Az reflecting optical telescope under design and development. It is an f/11 Ritchey-Chrétien with a 0.3° field of view. The INO340 telescope control system follows a distributed control system paradigm that includes four major systems: the Telescope Control System (TCS), Observation System Supervisor (OSS), Interlock System (ILS) and Observatory Monitoring System (OMS). The control system software also employs a three-tiered hierarchical architecture. In this paper, after presenting the fundamental concepts and operations of the INO340 control system, we propose the distributed control system software architecture, including the technical and functional architecture, the middleware and infrastructure design, and finally the software development process.
Similarities between GCS and human motor cortex: complex movement coordination
The "Gran Telescopio de Canarias" (GTC) is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). The GTC control system (GCS), the brain of the telescope, is a distributed object- and component-oriented system based on RT-CORBA, responsible for the management and operation of the telescope, including its instrumentation. The human motor cortex (HMC), on the other hand, is a region of the cerebrum responsible for the coordination of planning, controlling, and executing voluntary movements. If we analyze both systems with respect to the movement control of their mechanisms and body parts, we find extraordinary similarities in their architectures. Both are structured in layers, and their functionalities are comparable from the conception of a movement to the movement action itself. In the GCS we can enumerate the Sequencer high-level components, the Coordination libraries, the Control Kit library and the Device Driver library as the subsystems involved in telescope movement control. In the motor cortex, we can likewise enumerate the primary motor cortex; the secondary motor cortices, which include the posterior parietal cortex, the premotor cortex, and the supplementary motor area (SMA); the motor units; the sensory organs; and the basal ganglia. Of all these components and areas, we analyze in depth the subcortical regions of the motor cortex that are involved in organizing motor programs for complex movements, and the GCS coordination framework, which is composed of a set of classes that allow the high-level components to transparently control a group of mechanisms simultaneously.
OCS: towards a more efficient telescope
Jose Guerra Sr., Jose San Juan, Marcello Lodi, et al.
The OCS (Observation Control System) is a software architecture that allows automatic observations at the TNG (Telescopio Nazionale Galileo). It plays a critical role, as it is responsible for orchestrating several devices and working across several heterogeneous software interfaces throughout the telescope. One key to the OCS's success is the decoupling of the interfaces to the telescope devices, which makes a high degree of automation possible. The OCS architecture was successfully developed and installed at the TNG just before the arrival of HARPS-N (more than two years ago) and has now reached a good level of maturity and flexibility. It uses standard protocols such as HTTP, which allows quick integration of new instruments and devices. At the beginning only two systems were connected to the OCS, the HARPS-N scheduler and the TNG tracking system; since then, other systems have been added to provide more features, for example the service offered by the Active Optics system.
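The benefit of an HTTP-style decoupling is that any client can command a device through a plain web endpoint instead of a device-specific protocol. A minimal sketch of that idea using the Python requests library; the URL and payload are hypothetical, not the TNG's actual interface.

```python
# Hedged sketch of commanding a device through an HTTP endpoint.
import requests

BASE = "http://ocs.example.org/api"   # hypothetical OCS endpoint

def point_telescope(ra_deg: float, dec_deg: float) -> dict:
    resp = requests.post(f"{BASE}/tracking/point",
                         json={"ra": ra_deg, "dec": dec_deg},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

print(point_telescope(83.822, -5.391))   # e.g. a field near the Orion Nebula
```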
Development of aberration measurement program using curvature sensing technique
Hyun-Il Sung, Tae-Ho Ha, Yoon-Ho Park, et al.
An aberration measurement program has been developed to improve the optical quality of ground-based telescopes. The program is based on the Curvature Sensing Technique (CST), and aberrations are estimated from two defocused stellar images. We used an iterative algorithm that simulates closed-loop wavefront compensation in optics. The program has been applied to the telescope of Bohyunsan Optical Astronomy Observatory (BOAO) in Korea. The aberration most frequently introduced by misalignment was coma, and the image quality of the telescope was improved after adjusting the alignment of the secondary mirror.
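The input to curvature sensing is the normalized difference of the two defocused images, the standard Roddier-style sensor signal; the paper's iterative solver, which inverts this signal into wavefront aberrations, is not reproduced here. A sketch of computing the sensor signal:

```python
# Curvature-sensing signal sketch: S = (I1 - I2) / (I1 + I2), which is
# related to the wavefront Laplacian plus an edge term. The iterative
# reconstruction step is deliberately omitted.
import numpy as np

def curvature_signal(intra: np.ndarray, extra: np.ndarray) -> np.ndarray:
    """Normalized intra/extra-focal difference, masked where there is no light."""
    i1 = intra.astype(float)
    i2 = extra.astype(float)
    total = i1 + i2
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(total > 0, (i1 - i2) / total, 0.0)

# Placeholder defocused star images (real inputs come from the telescope).
intra = np.random.poisson(200.0, (128, 128)).astype(float)
extra = np.random.poisson(200.0, (128, 128)).astype(float)
print(curvature_signal(intra, extra).std())
```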
The control, monitor, and alarm system for the ICT equipment of the ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array
ASTRI is an Italian flagship project whose first goal is the realization of an end-to-end telescope prototype, named ASTRI SST-2M, for the Cherenkov Telescope Array (CTA). The prototype will be installed in Italy during Fall 2014. A second goal is the realization of the ASTRI/CTA mini-array, which will be composed of seven SST-2M telescopes placed at the CTA Southern Site. The Information and Communication Technology (ICT) equipment necessary to drive the infrastructure for the ASTRI SST-2M prototype is being designed as a complete and stand-alone computer center. The design goal is to obtain basic ICT equipment that can be scaled, with a low level of redundancy, to the ASTRI/CTA mini-array, taking into account the necessary control, monitor and alarm system requirements. The ICT equipment envisaged at the Serra La Nave observing station in Italy, where the ASTRI SST-2M telescope prototype will operate, includes computers, servers and workstations, network devices, an uninterruptible power supply system, and air conditioning systems. Suitable hardware and software tools will allow the parameters related to the behavior and health of each item of equipment to be controlled and monitored. This paper presents the proposed architecture and the technical solutions that integrate the ICT equipment into the framework of the Observatory Control System package of the ASTRI/CTA Mini-Array Software System (MASS), to allow local and remote control and monitoring. An end-to-end test case using an Internet Protocol thermometer is reported in detail.
Application of combined controller based on CMAC and nonlinear PID in dual redundant telescope tracking system
Heng Li, Changzhi Ren, Libin Song, et al.
The direct-drive tracking system of a telescope is a multivariable, nonlinear, strongly coupled mechanical control system that is subject to nonlinear disturbances such as torque ripple and wind during tracking. Traditional PID control cannot fundamentally resolve the contradictions between static and dynamic performance or between tracking accuracy and disturbance rejection. This paper explores a parallel composite control method combining a CMAC (Cerebellar Model Articulation Controller) with nonlinear PID for a dual-redundant telescope tracking servo system. Simulation results show that the combined CMAC/PID algorithm eliminates overshoot and accelerates the response of the system; moreover, the CMAC feedforward control improves the disturbance rejection and control precision of the servo system.
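A minimal sketch of the parallel CMAC + PID idea: the PID provides feedback while a CMAC table learns a feedforward term, trained so the PID contribution shrinks as the CMAC absorbs the repeatable dynamics. The gains, quantization and toy plant below are illustrative only, not the paper's model.

```python
# Parallel CMAC + PID composite control sketch (toy plant, invented gains).
import numpy as np

class CMAC:
    def __init__(self, n_cells=512, n_active=8, lr=0.1):
        self.w = np.zeros(n_cells)
        self.n_cells, self.n_active, self.lr = n_cells, n_active, lr

    def _cells(self, x: float) -> np.ndarray:
        q = int(x * 100)                      # coarse input quantization
        return np.array([(q + k) % self.n_cells for k in range(self.n_active)])

    def output(self, x: float) -> float:
        return float(self.w[self._cells(x)].sum())

    def train(self, x: float, target: float) -> None:
        idx = self._cells(x)
        err = target - self.w[idx].sum()      # classic scheme: CMAC learns the
        self.w[idx] += self.lr * err / self.n_active  # total control action

cmac, Kp, Kd = CMAC(), 5.0, 0.5
ref, pos, vel = 1.0, 0.0, 0.0
for _ in range(200):
    e = ref - pos
    u_pid = Kp * e - Kd * vel                 # PD feedback term
    u = u_pid + cmac.output(ref)              # plus learned feedforward
    cmac.train(ref, u)                        # drives u_pid toward zero over time
    vel += 0.01 * (u - 0.2 * vel)             # toy first-order plant
    pos += 0.01 * vel
print(f"final error: {ref - pos:.4f}")
```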
The sliding mode control algorithm used in the SONG tracking servo system
The Chinese SONG telescope is designed for long-term, continuous, uninterrupted, fully automatic observation at the diffraction limit, with high tracking precision and reliability, which poses a serious challenge to the tracking control system. This paper explores a sliding mode control algorithm to improve the performance of the Chinese SONG telescope tracking system. The results show that the algorithm achieves higher precision and is of significant interest for telescope tracking systems.
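For readers unfamiliar with the technique, here is a generic sliding-mode control sketch (not the paper's algorithm): the state is driven onto a sliding surface and held there by a switching term, with a tanh boundary layer as one common way to soften chattering. Gains and the plant model are invented.

```python
# Generic sliding-mode control sketch on a toy plant.
import numpy as np

lam, K, dt = 5.0, 2.0, 0.001
pos, vel, ref = 0.0, 0.0, 1.0

for _ in range(5000):
    e = ref - pos
    de = -vel                      # ref is constant here, so de/dt = -vel
    s = de + lam * e               # sliding surface s = de + lam*e
    u = K * np.tanh(s / 0.05)      # smoothed sign(s) to reduce chattering
    vel += dt * (u - 0.1 * vel)    # toy plant: integrator with friction
    pos += dt * vel

print(f"tracking error after 5 s: {ref - pos:.5f}")
```

Once on the surface (s = 0), the error decays as de/dt = -lam*e, so the closed-loop behavior is set by lam rather than by the plant parameters, which is the source of sliding mode's robustness.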
Using DARC in a multi-object AO bench and in a dome seeing instrument
The Durham Adaptive Optics Real-Time Controller (DARC) is a real-time system for astronomical adaptive optics systems, originally developed at Durham University and in use for the CANARY instrument. One of its main strengths is that it is a generic, high-performance real-time controller running on an off-the-shelf Linux computer. We are using DARC in two different implementations: BEAGLE, a Multi-Object AO (MOAO) bench system for experimenting with novel tomographic reconstructors, and LOTUCE2, an in-dome turbulence instrument. We present the software architecture for each application, current benchmarks, and lessons learned for current and future DARC developers.
A CCD experimental platform for large telescope in Antarctica based on FPGA
Yuhua Zhu, Yongjun Qi
The CCD detector is one of the most important components of an astronomical telescope. For a large telescope in Antarctica, a CCD detector system with large size, high sensitivity and low noise is indispensable; however, because of the extremely low temperatures and unattended operation, system maintenance and software and hardware upgrades become hard problems. This paper introduces a general CCD controller experimental platform based on a field-programmable gate array (FPGA), in effect a large-scale, field-reconfigurable array. Taking advantage of how easily such a system can be modified, the driving circuits, digital signal processing modules, network communication interface, control algorithm validation, and remote reconfiguration module can all be realized on it. With the concept of integrated hardware and software, the paper discusses the key technologies for building a scientific CCD system suited to the special working environment in Antarctica, focusing on the method of remotely reconfiguring the controller via the network, and offers a feasible hardware and software solution.
Improving the WIYN Telescope's pointing and tracking performance with a star tracker camera
Jayadev K. Rajagopal, Daniel R. Harbeck, Charles Corson, et al.
We report on the implementation of a star tracker camera to improve telescope pointing and tracking at the WIYN 3.5 m telescope on Kitt Peak, Arizona. We base the overall concept on a star tracker system developed at the University of Wisconsin and now routinely in use for rocket and high-altitude balloon navigation. This fairly simple system provides pointing and station-keeping information accurate to a few arcseconds, typically within a second.
INO340 telescope control system: hardware design and development
In order to meet the high image quality requirements of the INO340 telescope, one of the significant issues is the design and development of the Telescope Control System (TCS) architecture. The TCS architecture is designed as a distributed control system consisting of four major subsystems: the Telescope Control System Supervisor (TCSS), Dome Control System (DCS), Mount Control System (MCS), and Active Optics System (AOS). Another system that plays an important role in the hardware architecture is the Interlock System (ILS), which is responsible for the safety of staff, telescope and data. The ILS architecture is likewise designed as a distributed system based on fail-safe PLCs. Each TCS subsystem is designed with an adequate safety subsystem, responsible for the safety of that subsystem, which communicates through reliable lines with the main controller located in the control room. In this paper, we explain the innovative architecture of the Telescope Control System together with the Interlock System and briefly discuss the interface control issues between the different subsystems.
CARMENES instrument control system and operational scheduler
Alvaro Garcia-Piquer, Josep Guàrdia, Josep Colomé, et al.
The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1 m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope of the Calar Alto Observatory (Spain) and will be equipped with two spectrographs covering the visible and the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose spectral energy distributions peak in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which carries out the operations of the different subsystems and provides a tool to operate the instrument in an integrated manner from low to high levels of user interaction. The ICS interacts with the following subsystems: the near-IR and visible channels, composed of the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with the telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES Operational Scheduler is the second key element of the control layer described in this contribution. It is the main actor in translating the survey strategy into a detailed schedule that achieves the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target visibility, sky background, required time sampling coverage) with the dynamically changing conditions of the system (i.e., weather and system status). Off-line and on-line strategies are integrated into a single tool for a suitable transfer of the target prioritization made by the science team to the real-time schedule used by the instrument operators. A suitable solution is expected to increase the efficiency of telescope operations, which will represent an important benefit in terms of scientific return and operational costs. We present the operational scheduling tool designed for CARMENES, which is based on two algorithms combining a global and a local search: Genetic Algorithms and Hill Climbing astronomy-based heuristics, respectively. The algorithm explores a large number of potential solutions from the vast search space and is able to identify the most efficient ones. A planning solution is considered efficient when it optimizes the defined objectives, which, in our case, are related to reducing the time the telescope is not in use and maximizing the scientific return, measured in terms of the time coverage of each target in the survey. We present the results obtained using different test cases.
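The global+local hybrid can be illustrated with a toy memetic loop: a genetic algorithm explores orderings of the night's targets while a hill-climbing pass refines the best individuals by neighbour swaps. The targets, time windows and fitness below are invented; the real scheduler optimizes idle time and target coverage as described above.

```python
# Toy GA + hill-climbing scheduler sketch (invented windows and fitness).
import random

TARGETS = {"t1": (0.0, 2.0), "t2": (1.5, 4.0), "t3": (3.5, 6.0), "t4": (5.0, 8.0)}

def fitness(order):                      # count observations completed in-window
    t, done = 0.0, 0
    for name in order:
        lo, hi = TARGETS[name]
        t = max(t, lo) + 1.0             # 1 h per observation
        if t <= hi + 1.0:
            done += 1
    return done

def mutate(order):                       # GA mutation: swap two positions
    i, j = random.sample(range(len(order)), 2)
    order = order[:]
    order[i], order[j] = order[j], order[i]
    return order

def hill_climb(order):                   # local search: adjacent swaps
    best = list(order)
    for i in range(len(best) - 1):
        trial = best[:]
        trial[i], trial[i + 1] = trial[i + 1], trial[i]
        if fitness(trial) > fitness(best):
            best = trial
    return best

pop = [random.sample(list(TARGETS), len(TARGETS)) for _ in range(30)]
for _ in range(50):                      # GA generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    children = [hill_climb(mutate(random.choice(elite))) for _ in range(20)]
    pop = elite + children
pop.sort(key=fitness, reverse=True)
print(pop[0], fitness(pop[0]))
```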
A complete solar eruption activity processing tool with robotization and real time (II)
Intense solar activity has a significant impact on modern high-technology systems and on the human living environment; solar activity and space weather forecasting are therefore receiving more and more attention. Meanwhile, the data volume acquired by solar monitoring facilities keeps growing due to the requirements of multi-dimensional observation at high temporal and spatial resolution. As a producer of solar monitoring data, we are encouraged to adopt new techniques and methods to provide valuable information to solar activity forecasting organizations and other related users, and to provide them with convenient products and tools. In the previous paper, "A complete solar eruption activities processing tool with robotization and real time (I)", we presented a fully automatic, real-time detection architecture for different solar eruptive activities. In this paper, we present new components handling new data sets in the architecture design, the latest progress on the automatic recognition of solar flares, filaments and magnetic fields, and a newly introduced method by which solar photospheric magnetic non-potentiality parameters are processed in real time, so that the results can be used directly in solar activity forecasting.
The software for the AAT's HERMES instrument
Tony J. Farrell, Michael N. Birchall, Ron W. Heald, et al.
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) was an approximately $12 million project to provide a new facility-class instrument for the Anglo-Australian Telescope (AAT). It was commissioned in Q4 2013. This paper examines how the software challenges presented by HERMES were handled, including: minimizing cost by reusing the existing AAT 2dF/AAOmega facility software as far as possible; using instrument and data simulators to ensure the new software was almost ready before any hardware had been seen; extensively upgrading our fiber data reduction software; and dealing with the tighter calibration and alignment tolerances of a high-resolution spectrograph.
MUSE instrument software
Gérard Zins, Arlette Pécontal, Marie Larrieu, et al.
The MUSE instrumentation software is devoted to the control of the Multi Unit Spectroscopic Explorer (MUSE), a second-generation VLT panoramic integral-field spectrograph installed at Paranal in January 2014. It includes an advanced and user-friendly GUI to display the raw data of the 24 detectors, as well as on-line reconstructed images of the field of view, allowing users to assess the quality of the data in quasi-real time. Furthermore, it implements the slow guiding system used to remove the effects of possible differential drifts between the telescope guide probe and the instrument and to reach high image stability (<0.03 arcsec RMS). In this paper we report on the software design and describe the tools developed to efficiently support astronomers operating this complex instrument at the telescope.
Developments in simulations and software for a near-infrared precision radial velocity spectrograph
Ryan C. Terrien, Chad F. Bender, Suvrath Mahadevan, et al.
We present developments in simulations and software for the Habitable Zone Planet Finder (HPF), an R~50,000 near-infrared cross-dispersed radial velocity spectrograph that will be used to search for planets around M dwarfs. HPF is fiber-fed, operates in the zYJ bands, and uses a 1.7μm cutoff HAWAII-2RG (H2RG) NIR detector. We have constructed an end-to-end simulator that accepts as input a range of stellar models contaminated with telluric features and processes these through a simulated detector. This simulator accounts for the characteristics of the H2RG, including interpixel capacitance, persistence, nonlinearities, read noise, and other detector characteristics, as measured from our engineering-grade H2RG. It also implements realistic order curvature. We describe applications of this simulator including optimization of the fiber configuration at the spectrograph slit and selection of properties for a laser frequency comb calibration source. The simulator has also provided test images for development of the HPF survey extraction and RV analysis pipeline and we describe progress on this pipeline itself, which will implement optimal extraction, laser frequency comb and emission lamp wavelength calibration, and cross-correlation based RV measurement.
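The detector-effects stage of such an end-to-end simulator can be sketched as a short chain of array operations: interpixel capacitance as a small convolution kernel, a saturating nonlinearity, and Gaussian read noise. The coefficients below are placeholders, not the measured values from the engineering-grade H2RG.

```python
# Detector-effects sketch for an ideal illumination pattern (placeholder values).
import numpy as np
from scipy.signal import convolve2d

IPC_KERNEL = np.array([[0.00, 0.01, 0.00],
                       [0.01, 0.96, 0.01],
                       [0.00, 0.01, 0.00]])   # couples ~1% of charge to neighbours

def simulate_detector(ideal_e: np.ndarray, full_well=8e4, read_noise_e=12.0,
                      rng=np.random.default_rng(0)) -> np.ndarray:
    img = convolve2d(ideal_e, IPC_KERNEL, mode="same", boundary="symm")
    img = full_well * (1.0 - np.exp(-img / full_well))   # soft saturation / nonlinearity
    img += rng.normal(0.0, read_noise_e, img.shape)      # Gaussian read noise
    return img

ideal = np.zeros((128, 128))
ideal[64, 10:118] = 5e4          # one illustrative spectral trace, in electrons
print(simulate_detector(ideal).max())
```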
The telescope control of the ASTRI SST-2M prototype for the Cherenkov Telescope Array: hardware and software design architecture
ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a flagship project of the Italian Ministry of Research, led by the Italian National Institute of Astrophysics (INAF). One of its aims is to develop, within the Cherenkov Telescope Array (CTA) framework, an end-to-end small-sized telescope prototype in a dual-mirror configuration (SST-2M) in order to investigate the energy range E ~ 1-100 TeV. A long-term goal of the ASTRI program is the production of an ASTRI/CTA mini-array composed of seven SST-2M telescopes. The prototype, named ASTRI SST-2M, is conceived as a standalone system that needs only network and power connections to work. The software system being developed to control the prototype is the base for the Mini-Array Software System (MASS), whose task is to make possible the operation of both the ASTRI SST-2M prototype and the ASTRI/CTA mini-array. The scope of this contribution is to give an overview of the hardware and software architecture adopted for the ASTRI SST-2M prototype, showing how state-of-the-art industrial technologies can be applied to telescope control and monitoring systems.
ESPRESSO instrument control electronics: a PLC based distributed layout for a second generation instrument at ESO VLT
ESPRESSO is an ultra-stable fiber-fed spectrograph designed to combine incoherently the light coming from up to 4 Unit Telescopes of the ESO VLT. From the Nasmyth focus of each telescope, the light is fed through an optical path by the Coudé Train subsystems to the Front End Unit placed in the Combined Coudé Laboratory. The Front End is composed of one arm for each telescope, and its task is to convey the incoming light, after a calibration process, into the spectrograph fibers. To perform these operations a large number of functions are foreseen, such as motorized stages, lamps, and digital and analog sensors that, coupled with dedicated technical CCDs (two per arm), allow the incoming beam to be stabilized to the level needed to meet the ESPRESSO scientific requirements. The goal of the Instrument Control Electronics is to properly control all the functions in the Combined Coudé Laboratory and the spectrograph itself. It is fully based on a distributed PLC architecture, thereby abandoning the VME-based technology previously adopted for ESO VLT instruments. In this paper we describe the ESPRESSO Instrument Control Electronics architecture, focusing on the distributed layout and its interfaces with the other ESPRESSO subsystems.
The upgrade of an educational observatory control system with a PLC-based architecture
A Celestron C14 telescope equipped with a robotic Paramount ME equatorial mount is used for public outreach at the Basovizza site of the INAF-Astronomical Observatory of Trieste. Although the telescope itself could be fully remotely controlled, controlling the instrumentation and moving the main motor of the dome required the physical presence of an operator. To overcome this limitation, the existing control system has been upgraded with a Beckhoff PLC to allow remote control of the whole installation, including the management of the newly installed weather sensor and access to the telescope area. Exploiting the decentralization features typical of a PLC-based solution, the PLC modules are placed in two different racks according to the function to be controlled. A web interface handles the communication between the user and the instrumentation. The architecture of this control system is presented in detail in this paper.
HERMES travels by CAN bus
Lewis G. Waller, Keith Shortridge, Tony J. Farrell, et al.
The new HERMES spectrograph represents the first foray by AAO into the use of commercial off-the-shelf industrial field bus technology for instrument control, and we regard the final system, with its relatively simple wiring requirements, as a great success. However, both software and hardware teams had to work together to solve a number of problems integrating the chosen CANopen/CAN bus system into our normal observing systems. A Linux system running in an industrial PC chassis ran the HERMES control software, using a PCI CAN bus interface connected to a number of distributed CANopen/CAN bus I/O devices and servo amplifiers. In the main, the servo amplifiers performed impressively, although some experimentation with homing algorithms was required, and we hit a significant hurdle when we discovered that we needed to disable some of the encoders used during observations; we learned a lot about how servo amplifiers respond when their encoders are turned off, and about how encoders react to losing power. The software was based around a commercial CANopen library from Copley Controls. Early worries about how this heavily multithreaded library would work with our standard data acquisition system led to the development of a very low-level CANopen software simulator to verify the design. This also enabled the software group to develop and test almost all the control software well in advance of the construction of the hardware. In the end, the instrument went from initial installation at the telescope to successful commissioning remarkably smoothly.
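The Copley library the software was built on is commercial, but the wire format a low-level simulator must reproduce is standard CANopen. As a hedged illustration of what "low-level CANopen" means here, this sketch encodes an expedited SDO download (write) request per the standard framing; the node ID, object index and value are illustrative, and this is not the AAO simulator's code.

```python
# Standard CANopen expedited SDO download (write) request framing.
import struct

def sdo_expedited_write(node_id: int, index: int, subindex: int, value: int,
                        size: int) -> tuple[int, bytes]:
    """Return (COB-ID, 8-byte payload) for an expedited SDO download."""
    # Command specifier encodes how many of the 4 data bytes are used.
    command = {1: 0x2F, 2: 0x2B, 3: 0x27, 4: 0x23}[size]
    payload = struct.pack("<BHB4s", command, index, subindex,
                          value.to_bytes(4, "little"))
    return 0x600 + node_id, payload   # SDO request COB-ID = 0x600 + node ID

# e.g. write one byte to object 0x6060:00 on node 3 (illustrative values).
cob_id, data = sdo_expedited_write(node_id=3, index=0x6060, subindex=0,
                                   value=1, size=1)
print(hex(cob_id), data.hex())
```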
MathWorks Simulink and C++ integration with the new VLT PLC-based standard development platform for instrument control systems
Mario J. Kiekebusch, Nicola Di Lieto, Stefan Sandrock, et al.
ESO is in the process of implementing a new development platform, based on PLCs, for upcoming VLT control systems (new instruments and the refurbishment of existing systems to manage obsolescence issues). In this context, we have evaluated the integration and reuse of existing C++ libraries and Simulink models in the real-time environment of Beckhoff Embedded PCs, using the capabilities of the latest version of the TwinCAT software and MathWorks Embedded Coder. The aim was to minimize the impact of the new platform by adopting fully tested solutions implemented in C++, allowing us to reuse in-house expertise as well as extending the normal capabilities of traditional PLC programming environments. We present the progress of this work and its application in two concrete cases: 1) field rotation compensation for instrument tracking devices such as derotators, and 2) the ESO standard axis controller (ESTAC), a generic model-based controller implemented in Simulink and used for the control of telescope main axes.
Advances in the development of FRIDA's mechanisms control system and house-keeping
R. Flores-Meza, J. Garcés, G. Lara, et al.
FRIDA will be a near-infrared imager and integral field spectrograph covering the wavelength range from 0.9 to 2.5 microns. Its primary observing modes are direct imaging and integral field spectroscopy. This paper describes the main advances in the development of the electronics and control system for both the mechanisms and housekeeping of FRIDA. A set of programs has been developed to perform several tests of the mechanisms in both room-temperature and cryogenic environments. All variables of the vacuum control system have been determined, and the main control structure, based on one Programmable Logic Controller (PLC), has been established. A key function of FRIDA's control system is maintaining the integrity of the cryostat during all processes, so we have designed a redundant heating control system that will be in charge of preventing overheating inside the cryostat. In addition, some improvements to the cryogenic and room-temperature cabling structure are described.
The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture
ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase, the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ software handles camera data acquisition, storage and display during camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype, which will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the camera data packets in near real time. We are developing the DAQ software following an iterative and incremental model in order to maximize software reuse and to implement a system that is easily adaptable to change. This contribution presents the Camera DAQ software architecture, with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.
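The one-way socket acquisition plus FITS storage described above can be sketched in a few lines; the host, port, packet layout and frame geometry are invented, and astropy is assumed here as one common FITS-writing choice (the abstract does not name the actual implementation).

```python
# Sketch: read fixed-size camera packets from a socket, write FITS frames.
import socket

import numpy as np
from astropy.io import fits

PACKET_BYTES = 64 * 64 * 2          # hypothetical 64x64 16-bit camera frame

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the one-way stream."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("back-end electronics closed the connection")
        buf += chunk
    return buf

with socket.create_connection(("camera-bee.local", 5000)) as sock:  # hypothetical
    for i in range(10):
        frame = np.frombuffer(recv_exact(sock, PACKET_BYTES),
                              dtype=">u2").reshape(64, 64)
        fits.PrimaryHDU(frame).writeto(f"frame_{i:04d}.fits", overwrite=True)
```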
LBT prime focus camera (LBC) control software upgrades
Kellee R. Summers, Andrea Di Paola, Mauro Centrone, et al.
The control software of the Large Binocular Telescope's (LBT) double prime focus cameras (LBC) has been in use for a decade: the software passed acceptance testing in April 2004 and is currently in routine use for science. LBC was the first-light instrument of the telescope, and over the last decade of use the control software has changed as operations with the telescope have evolved. The major updates to the LBC control software since 2004 are described, including details of the upgrade from the current five-computer architecture to a single control computer.
Recent developments for the Large Binocular Telescope Guiding Control Subsystem
T. Golota, M. D. De La Peña, C. Biddick, et al.
The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. Recent additions of focal stations for PEPSI and MODS instruments doubled the number of focal stations in use including respective motion, camera controller server computers, and software infrastructure communicating with Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and contributed to increased reliability. This paper also discusses the current GCS status and reviews potential upgrades to further improve its performance.
The control system of the 12-m medium-size telescope prototype: a test-ground for the CTA array control
I. Oya, E. A. Anguner, B. Behera, et al.
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different sizes and types, plus numerous auxiliary devices. In order to provide a test-ground for CTA array control, the steering software of the 12 m medium-size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration for the control of the CTA array. The prototype control system is based on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a common way, a library has been developed that ties OPC UA server nodes, methods and events to their equivalents in ACS components. The front-end of the archive system is able to identify the deployed components and to sample the monitoring points of each component following time and value-change triggers according to the selected configurations. The back-end of the prototype's archive system is composed of two different databases: MySQL and MongoDB. MySQL has been selected as the storage for system configurations, while MongoDB is used for efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details of and conclusions on the implementation of the control software of the MST prototype are presented.
A traffic analyzer for multiple SpaceWire links
Modern space missions are becoming increasingly complex: the interconnection of the units in a satellite is now a network of terminals linked together through routers, where devices with different levels of automation and intelligence share the same data network. The traceability of network transactions is performed mostly at the terminal level through log analysis, and it is therefore difficult to verify in real time the reliability of the interconnections and the interchange protocols. To improve and ease traffic analysis in a SpaceWire network, we implemented a low-level link analyzer with the specific goal of simplifying the integration and test phases in the development of space instrumentation. The traffic analyzer collects signals coming from pod probes connected in series on the links of interest between two SpaceWire terminals. With respect to standard traffic analyzers, the design of this new tool includes the possibility to internally reshape the LVDS signal. This improvement increases the robustness of the analyzer against environmental noise and guarantees a deterministic delay on all analyzed signals. The analyzer core is implemented on a Xilinx FPGA, programmed to decode the bidirectional LVDS signals at the link and network levels. The core then packetizes protocol characters into homogeneous sets of time-ordered events. The analyzer provides time-tagging functionality for each character set, with a precision down to the FPGA clock, i.e. about 20 ns in the adopted hardware environment. The use of a common time reference for each character stream allows synchronous performance measurements. The collected information is then routed to an external computer for quick analysis via a high-speed USB 2.0 connection. With this analyzer it is possible to verify link performance in terms of induced delays in the transmitted signals. A case study focused on the analysis of Time-Code synchronization in the presence of a SpaceWire router is also presented.
Metadata and data management for the Keck Observatory Archive
H. D. Tran, J. Holt, R. W. Goodrich, et al.
A collaboration between the W. M. Keck Observatory (WMKO) in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, the Keck Observatory Archive (KOA) was commissioned in 2004 to archive observing data from WMKO, which operates two classically scheduled 10 m ground-based telescopes. The observing data from Keck are not suitable for direct ingestion into the archive, since the metadata contained in the original FITS headers lack the information necessary for proper archiving. Coupled with differing standards among instrument builders and the heterogeneous nature of data from classical observing, in which observers have complete control of the instruments and their observations, the data pose a number of technical challenges for KOA. For example, it is often difficult to determine whether an observation is a science target, a sky frame, or a sky flat. It is also necessary to assign the data to the correct owners and observing programs, which can be a challenge for time-domain and target-of-opportunity observations, or on split nights, during which two or more principal investigators share a given night. In addition, having uniform and adequate calibrations is important for the proper reduction of data; KOA therefore needs to distinguish science files from calibration files, identify the types of calibrations available, and associate the appropriate calibration files with each science frame. We describe the methodologies and tools we have developed to successfully address these difficulties, adding content to the FITS headers and "retrofitting" the metadata in order to support archiving Keck data, especially data obtained before the archive was designed. With the expertise gained from having successfully archived observations taken with all eight currently active instruments at WMKO, we have developed lessons learned from handling this complex array of heterogeneous metadata that help ensure a smooth ingestion of data for both current and future instruments, as well as a better experience for the archive user.
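Header-driven frame classification of the kind discussed above can be sketched as a small rule chain over FITS keywords; the keyword names and rules here are invented for illustration (the real logic is instrument-specific and far more involved), with astropy assumed as the FITS library.

```python
# Toy sketch of classifying a frame from its FITS header keywords.
from astropy.io import fits

def classify(header: fits.Header) -> str:
    if str(header.get("SHUTTER", "open")).lower() == "closed":
        return "bias/dark"
    if "flat" in str(header.get("OBJECT", "")).lower():
        return "flat"
    if header.get("EXPTIME", 0.0) < 1.0:
        return "calibration (short exposure)"
    return "science candidate"

hdr = fits.Header([("OBJECT", "NGC 1068"),
                   ("EXPTIME", 300.0),
                   ("SHUTTER", "open")])
print(classify(hdr))   # "science candidate"
```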
Advanced data products for the JCMT Science Archive
Graham S. Bell, Sarah F. Graves, Malcolm J. Currie, et al.
The JCMT Science Archive is a collaboration between the James Clerk Maxwell Telescope and the Canadian Astronomy Data Centre to provide access to raw and reduced data from SCUBA-2 and the telescope's heterodyne instruments. It was designed to include a range of advanced data products, created either by external groups, such as the JCMT Legacy Survey teams, or by the JCMT staff at the Joint Astronomy Centre. We are currently developing the archive to include a set of advanced data products that combine all of the publicly available data. We have developed a sky tiling scheme based on HEALPix tiles that allows us to construct co-added maps and data cubes on a well-defined grid. There will also be source catalogs of both the regions of extended emission and the compact sources detected within them.
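A sketch of how observations can be assigned to tiles of such a scheme, using the healpy package; the resolution and the nested ordering chosen here are illustrative assumptions, not necessarily the archive's actual grid parameters.

```python
# Sketch of binning observations onto a well-defined HEALPix grid for
# co-addition (nside and ordering are assumptions for the example).
import healpy as hp

NSIDE = 64  # assumed tile resolution; the real scheme may differ

def tile_for(ra_deg, dec_deg):
    """Return the HEALPix tile index covering the given sky position."""
    return hp.ang2pix(NSIDE, ra_deg, dec_deg, nest=True, lonlat=True)

# Usage: group raw observations by tile_for(ra, dec), then co-add all
# public data falling on each tile to build the advanced products.
```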
The ASTRI project within Cherenkov Telescope Array: data analysis and archiving
Lucio Angelo Antonelli, Denis Bastieri, Milvia Capalbi, et al.
ASTRI is the flagship project of INAF (Italian National Institute for Astrophysics), mainly devoted to the development of Cherenkov small-size dual-mirror telescopes (SST-2M) in the framework of the international Cherenkov Telescope Array (CTA) project. ASTRI SST-2M is an end-to-end prototype covering scientific and technical operations as well as the related data analysis and archiving activities. We present here the ASTRI data handling and archiving system, which is responsible for both the on-site and off-site data processing and archiving. All scientific, calibration, and engineering ASTRI data will be stored and organized in dedicated archives that provide access to both the monitoring and data analysis systems.
An experiment in big data: storage, querying and visualisation of data taken from the Liverpool Telescope's wide field cameras
The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide-field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline are discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and a change of the catalogue used for source cross-matching from USNO-B1 to APASS. In addition, details are given of the development of a preliminary front-end to the source-extracted database, which will allow a user to perform common queries such as cone searches and light-curve comparisons of catalogue and non-catalogue matched objects. Next steps and future ideas for the project are also presented.
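For illustration, a cone search of the kind mentioned above can be sketched as follows; the table and column names are assumptions for the example, not the STILT schema.

```python
# Illustrative cone search against a hypothetical "sources" table in
# PostgreSQL (not STILT's actual schema or query plan).
import math
import psycopg2

def cone_search(conn, ra0, dec0, radius_deg):
    """Return sources within radius_deg of (ra0, dec0), all in degrees."""
    # Bounding-box prefilter so the database can use plain B-tree indexes.
    dra = radius_deg / max(math.cos(math.radians(dec0)), 1e-6)
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, ra, dec, mag FROM sources "
            "WHERE ra BETWEEN %s AND %s AND dec BETWEEN %s AND %s",
            (ra0 - dra, ra0 + dra, dec0 - radius_deg, dec0 + radius_deg))
        rows = cur.fetchall()
    # Exact angular-distance cut on the prefiltered rows.
    def sep(ra, dec):
        d0, d1 = math.radians(dec0), math.radians(dec)
        dr = math.radians(ra - ra0)
        c = (math.sin(d0) * math.sin(d1) +
             math.cos(d0) * math.cos(d1) * math.cos(dr))
        return math.degrees(math.acos(min(1.0, c)))
    return [r for r in rows if sep(r[1], r[2]) <= radius_deg]
```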
EMIR data factory system
Josefina Rosich Minguell, M. Barreto, N. Castro, et al.
EMIR (Espectrógrafo Multiobjeto Infrarrojo) is a wide-field, near-infrared, multi-object spectrograph with imaging capabilities, which will be located at the Nasmyth focus of the GTC (Gran Telescopio Canarias). It will allow observers to obtain many intermediate-resolution spectra simultaneously in the near-infrared Z, J, H and K bands. A multi-slit mask unit will be used for target acquisition. This paper gives an overview of the EMIR Data Factory System, whose main functions are to receive raw images from the DAS (Data Acquisition System), collect FITS header keywords, store images in a database, and propagate images to other GCS (GTC Control System) components to produce astronomical data. The system follows the standards defined by the telescope to permit the integration of this software into the GCS. The Data Factory System needs the DAS, the Sequencer, the GUI and the Monitor Manager subsystems to operate: the DAS generates images and sends them to the Data Factory; the Sequencer and GUI (Graphical User Interface) provide information about the instrument and observing program; and the Monitor Manager supplies information about the telescope and instrument state.
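That data flow can be summarized in a highly simplified sketch; the class and method names below are illustrative inventions and do not correspond to the actual GCS interfaces.

```python
# Highly simplified sketch of the Data Factory flow described above
# (all names are hypothetical, not the GCS component interfaces).
class DataFactory:
    def __init__(self, db, monitor, downstream):
        self.db, self.monitor, self.downstream = db, monitor, downstream

    def on_new_image(self, image, sequencer_info):
        # Collect keywords from the subsystems the Data Factory depends on.
        image.header.update(sequencer_info)         # program / instrument setup
        image.header.update(self.monitor.status())  # telescope / instrument state
        self.db.store(image)                        # archive the annotated frame
        for consumer in self.downstream:            # propagate to other components
            consumer.deliver(image)
```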
Chilean Virtual Observatory services implementation for the ALMA public data
Jonathan Antognini, Mauricio Solar, Jorge Ibsen, et al.
The success of an observatory is usually measured by its impact on the scientific community, so a common objective is to provide transparent ways of accessing the generated data. The Chilean Virtual Observatory (ChiVO) began implementing a prototype in collaboration with ALMA, taking into account the current needs of the Chilean astronomical community, the protocols and standards of the IVOA, and a comparison of existing data-access toolkit services. Based on these efforts, a VO prototype was designed and implemented for ALMA's large-scale data.
On-board detection and removal of cosmic ray and solar energetic particle signatures for the Solar Orbiter-METIS coronagraph
V. Andretta, A. Bemporad, M. Focardi, et al.
METIS is part of the science payload of Solar Orbiter. It is a coronagraph designed to obtain images of the outer solar corona both in the visible 580-640 nm band and in the UV, in a narrow band centered on the hydrogen Lyman-α line. We describe the main features of the procedures to remove signatures due to cosmic rays (CRs) and solar energetic particles (SEPs), comparing them with alternatives used in other contexts and in other solar coronagraphic missions. Our analysis starts from a realistic assessment of the radiation environment in which the instrument is expected to operate, which is characteristic of the interplanetary space of the inner solar system but quite unusual for most solar missions.
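One widely used rejection scheme, sketched here purely for illustration (the paper's actual on-board procedure may differ), compares repeated exposures pixel by pixel and rejects temporal outliers:

```python
# Generic temporal despiking sketch: flag pixels that deviate from the
# per-pixel median of a stack of co-pointed exposures (illustrative only).
import numpy as np

def despike(frames, nsigma=5.0):
    """frames: stack of co-pointed exposures, shape (n, ny, nx)."""
    stack = np.asarray(frames, dtype=float)
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0) + 1e-9
    # 1.4826 * MAD approximates the standard deviation for Gaussian noise.
    spikes = np.abs(stack - med) > nsigma * 1.4826 * mad
    clean = np.where(spikes, med, stack)  # replace hits by the temporal median
    return clean.mean(axis=0), spikes
```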
Automatic detection and automatic classification of structures in astronomical images
Rodrigo Gregorio, Mauricio Solar, Diego Mardones, et al.
The study of astronomical structures is important to the astronomical community because it can help identify objects, which can be classified based on their internal structure or their relation to other objects. For this reason, we are developing an automated tool to decompose astronomical images into their components. First, a 2D image is decomposed into different spatial scales using a wavelet transform. Then, detection algorithms such as Clumpfind, Gaussclumps or dendrogram techniques are applied to each spatial scale. The goal is to build a new algorithm and tool that is available to the community and satisfies the requirements of the upcoming Chilean Virtual Observatory (ChiVO).
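The decomposition step can be sketched as follows, using Gaussian smoothing as a stand-in for an à trous wavelet transform; the actual tool may use a different wavelet.

```python
# Sketch of multi-scale decomposition: split an image into planes of
# increasing spatial scale (Gaussian approximation, illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_planes(image, n_scales=4):
    """Return detail planes per scale plus a large-scale residual."""
    planes, smooth = [], np.asarray(image, dtype=float)
    for j in range(n_scales):
        smoother = gaussian_filter(smooth, sigma=2.0 ** j)
        planes.append(smooth - smoother)  # detail at this scale
        smooth = smoother
    planes.append(smooth)  # residual large-scale emission
    return planes  # a detection algorithm is then run on each plane
```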
BASKET on-board software library
Armin Luntzer, Roland Ottensamer, Franz Kerschbaum
The University of Vienna is a provider of on-board data processing software with a focus on data compression, as used on board the highly successful Herschel/PACS instrument as well as in the small BRITE-Constellation fleet of CubeSats. Current contributions are being made to CHEOPS, SAFARI and PLATO. An effort was made to review the various functions developed for Herschel and to provide a consolidated software library that facilitates the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). BASKET has all the functionality needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
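As an example of the lossless class, a Golomb-Rice encoder can be sketched in a few lines; this illustrates the algorithm only and is not BASKET's C implementation.

```python
# Illustrative Golomb-Rice encoder for non-negative integer samples
# (sketch of the algorithm, not the library's code).
def rice_encode(samples, k):
    """Return a string of '0'/'1' bits; k is the Rice parameter."""
    bits = []
    for n in samples:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")                      # quotient in unary
        bits.append(format(r, f"0{k}b") if k else "")   # remainder in k bits
    return "".join(bits)

# Example: rice_encode([3, 7, 0], k=2) -> "011" + "1011" + "000"
```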
Improving Herschel imaging datasets
Marko Mečina, Andreas Mayer, Roland Ottensamer, et al.
The Herschel Space Observatory enabled a deep and detailed look into the far-infrared universe and even though the mission ended in May 2013, the rich archive is full of data to be analysed. The Herschel data reduction pipelines have progressed to give good results for most types of observations, but for other, more specialized programmes, improved image reconstruction tools and methods need to be applied to fully exploit the telescope's unique sensitivity and spatial resolution. This is the case for post-main-sequence objects observed with Herschel/PACS as part of the MESS sample. Here, the standard pipeline as well as most map-makers cannot handle the bright central source surrounded by faint dust emission well. We compare the six standard map-making tools for Herschel observations and show how an improved astrometry can produce better maps. Our data processing steps can be applied to the standard pipeline as well as to external mappers based on inverse techniques.
Integrating the ODI-PPA scientific gateway with the QuickReduce pipeline for on-demand processing
As imaging systems improve, the size of astronomical data continues to grow, making the transfer and processing of data a significant burden. To solve this problem for the WIYN Observatory One Degree Imager (ODI), we developed the ODI-Portal, Pipeline, and Archive (ODI-PPA) science gateway, integrating the data archive, data reduction pipelines, and a user portal. In this paper, we discuss the integration of the QuickReduce (QR) pipeline into PPA's Tier 2 processing framework. QR is a set of parallelized, stand-alone Python routines accessible to all users and to operators, who can create master calibration products and produce standardized calibrated data with a short turn-around time. Upon completion, the data are ingested into the archive and portal and made available to authorized users. Quality metrics and diagnostic plots are generated and presented via the portal for operator approval and user perusal. Additionally, users can tailor the calibration process to their specific science objectives by selecting custom datasets, applying preferred master calibrations or generating their own, and selecting pipeline options. Submission of a QuickReduce job initiates data staging, pipeline execution, and ingestion of output data products, all while allowing the user to monitor the process status and to download or further process and analyze the output within the portal. User-generated data products are placed into a private user space within the portal. ODI-PPA leverages cyberinfrastructure at Indiana University, including the Big Red II supercomputer, the Scholarly Data Archive tape system and the Data Capacitor shared file system.
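The submission-and-poll pattern of such a gateway can be illustrated as below; the endpoint, request fields and job states are invented for the sketch and are not ODI-PPA's actual API.

```python
# Hypothetical illustration of a gateway job lifecycle: submit a
# QuickReduce-style job, then poll until staging, execution and
# ingestion have finished (invented API, for illustration only).
import time
import requests

def run_job(portal, dataset_ids, options, token):
    auth = {"Authorization": f"Bearer {token}"}
    job = requests.post(f"{portal}/api/qr/jobs",
                        json={"datasets": dataset_ids, "options": options},
                        headers=auth).json()
    while True:  # hypothetical states: staging -> running -> ingesting -> done
        state = requests.get(f"{portal}/api/qr/jobs/{job['id']}",
                             headers=auth).json()
        if state["status"] in ("done", "failed"):
            return state
        time.sleep(30)
```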
Cherenkov Telescope Array science data analysis using the ctools
Jürgen Knödlseder, Sylvie Brau-Nogué, Christoph Deil, et al.
The ctools are a set of analysis executables that are being developed as a framework for analysing Cherenkov Telescope Array (CTA) high-level data products. The ctools are inspired by science analysis software available for existing high-energy astronomy instruments, and they follow the modular ftools model developed by the High Energy Astrophysics Science Archive Research Center (HEASARC). The ctools are based on GammaLib, a C++ library interfaced to Python that provides a framework for an instrument-independent analysis of gamma-ray data. We present the status of the software development, and describe possible workflows that can be implemented for the analysis of gamma-ray data.
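A minimal simulate-and-fit workflow of the kind described might look as follows using the ctools Python bindings; the parameter values are placeholders, and exact parameter names may differ between ctools versions.

```python
# Sketch of a ctools workflow: simulate a CTA event list, then fit a
# model by maximum likelihood (placeholder values; check the ctools
# documentation for the exact parameters of your version).
import ctools

sim = ctools.ctobssim()              # event-list simulation tool
sim["inmodel"] = "crab.xml"          # source + background model definition
sim["outevents"] = "events.fits"
sim["caldb"], sim["irf"] = "prod2", "South_50h"   # instrument response
sim["ra"], sim["dec"], sim["rad"] = 83.63, 22.01, 5.0
sim["tmin"], sim["tmax"] = 0.0, 1800.0
sim["emin"], sim["emax"] = 0.1, 100.0             # TeV
sim.execute()

fit = ctools.ctlike()                # maximum-likelihood model fitting
fit["inobs"] = "events.fits"
fit["inmodel"] = "crab.xml"
fit["caldb"], fit["irf"] = "prod2", "South_50h"
fit["outmodel"] = "crab_fit.xml"
fit.execute()
```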
An overview of the planned CCAT software system
Tim Jenness, Martin C. Shepherd, Reinhold Schaaf, et al.
CCAT will be a 25 m diameter sub-millimeter telescope capable of operating in the 0.2 to 2.1 mm wavelength range. It will be located at an altitude of 5600 m on Cerro Chajnantor in northern Chile near the ALMA site. The anticipated first-generation instruments include large-format (60,000) kinetic inductance detector (KID) cameras, a large-format heterodyne array and a direct-detection multi-object spectrometer. The paper describes the architecture of the CCAT software and the development strategy.
Generic control software connecting astronomical instruments to the reflective memory data recording system of VLTI - bossvlti
Eszter Pozna, A. Ramirez, A. Mérand, et al.
The quality of data obtained by VLTI instruments may be refined by analyzing the continuous data supplied by the Reflective Memory Network (RMN). Based on five years of experience providing VLTI instruments (PACMAN, AMBER, MIDI) with RMN data, the procedure has been generalized to make synchronization with observations trouble-free. The present software interface not only saves months of effort for each instrument but also provides the benefits of a software framework. Recent applications (GRAVITY, MATISSE) supply feedback that allows the software to evolve. The paper highlights how common features were identified so that reusable code can be offered in due course.
A multi-threaded approach to using asynchronous C libraries with Java
It is very common to write device drivers and code that access low-level operating system functions in C or C++. There are also many powerful C and C++ libraries available for a variety of tasks. Java is a programming language that is meant to be system independent and is arguably much simpler to code in than C/C++. However, Java has minimal support for talking to native libraries, which results in interesting challenges when using C/C++ libraries with Java code. Part of the problem is that Java's standard mechanism for communicating with C libraries, the Java Native Interface, requires a significant amount of effort to do fairly simple things, such as copying structure data from C to a class in Java. This is largely solved by using the Java Native Access library, which provides a reasonable way of transferring data between C structures and Java classes and of calling C functions from Java. A more serious issue is that there is no straightforward mechanism for a C/C++ library loaded by a Java program to call a Java function in that program, which is a major issue with any library that uses callback functions. A solution to this problem was found using a moderate amount of C code and multiple threads in Java. The Keck Task Language (KTL) API is used as a primary means of inter-process communication at the Keck and Lick Observatories. KTL is implemented in a series of C libraries and uses callback functions for asynchronous communication. It is a good demonstration of how to use a C library within a Java program.
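The dispatch pattern at the heart of that solution can be transposed into Python for illustration: the native library's callback does no real work itself, it only enqueues an event, and a dedicated consumer thread delivers events to the application's handlers.

```python
# Illustration of the callback-to-thread handoff pattern (transposed
# to Python; the paper's solution is implemented in C and Java).
import queue
import threading

events = queue.Queue()

def native_callback(keyword, value):
    # Invoked from the native library's own thread: just hand the
    # event over to the consumer thread and return immediately.
    events.put((keyword, value))

def dispatcher(handlers):
    # Runs on an application-controlled thread and performs the
    # actual work for each event, safely away from native code.
    while True:
        keyword, value = events.get()
        for handler in handlers.get(keyword, []):
            handler(value)

threading.Thread(target=dispatcher, args=({},), daemon=True).start()
```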