Proceedings Volume 7740

Software and Cyberinfrastructure for Astronomy


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 19 July 2010
Contents: 14 Sessions, 128 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2010
Volume Number: 7740

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7740
  • Control Systems I
  • Control Systems II
  • Real-Time/Events
  • Data Processing Operations
  • VO/Archive
  • Cyberinfrastructure I
  • Common Services/Reuse
  • Web 2.0/User Interfaces
  • Pipelines/Kepler
  • Kepler Session
  • Cyberinfrastructure II
  • Current Project Overviews
  • Poster Session
Front Matter: Volume 7740
This PDF file contains the front matter associated with SPIE Proceedings Volume 7740, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Control Systems I
Control software and electronics architecture design in the framework of the E-ELT instrumentation
In recent years the European Southern Observatory (ESO), in collaboration with other European astronomical institutes, has started several feasibility studies for the E-ELT (European Extremely Large Telescope) instrumentation and post-focal adaptive optics. The goal is to create a flexible suite of instruments to deal with the wide variety of scientific questions astronomers would like to see solved in the coming decades. In this framework the INAF-Astronomical Observatory of Trieste (INAF-AOTs) is currently responsible for carrying out the analysis and preliminary study of the electronics and control software architecture of three instruments: CODEX (control software and electronics) and OPTIMOS-EVE/OPTIMOS-DIORAMAS (control software). To cope with the increased complexity and the new requirements for stability, precision, real-time latency and communication among sub-systems imposed by these instruments, new solutions have been investigated by our group. In this paper we present the proposed software and electronics architecture based on a distributed common framework centered on the Component/Container model that uses OPC Unified Architecture as a standard layer to communicate with COTS components from three different vendors. We describe three working prototypes that have been set up in our laboratory and discuss their performance, integration complexity and ease of deployment.
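To make the Component/Container idea concrete, the following is a minimal Python sketch: a container owns the shared communication layer and the lifecycle of the components it hosts, while each component exposes device-level operations. All class names, node identifiers and the endpoint are hypothetical, and an in-memory stub stands in for the OPC UA session to the vendor controllers described in the abstract.

```python
# Minimal sketch of a Component/Container lifecycle, loosely following the
# architecture described above. All names are hypothetical; a real system
# would talk OPC UA to vendor controllers instead of this in-memory stub.

class OpcUaStub:
    """Stands in for an OPC UA client session to a COTS controller."""
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self._nodes = {}                     # node id -> value, purely in memory

    def write(self, node_id, value):
        self._nodes[node_id] = value

    def read(self, node_id):
        return self._nodes.get(node_id)


class Component:
    """A controllable function (e.g. a motorized slit) hosted by a container."""
    def __init__(self, name, link, node_id):
        self.name, self.link, self.node_id = name, link, node_id

    def init(self):
        self.link.write(self.node_id, 0.0)   # drive to a known reference

    def move(self, position):
        self.link.write(self.node_id, position)

    def position(self):
        return self.link.read(self.node_id)


class Container:
    """Owns component lifecycles and the shared communication layer."""
    def __init__(self, endpoint):
        self.link = OpcUaStub(endpoint)
        self.components = {}

    def deploy(self, name, node_id):
        comp = Component(name, self.link, node_id)
        comp.init()
        self.components[name] = comp
        return comp


if __name__ == "__main__":
    container = Container("opc.tcp://plc.example:4840")     # hypothetical endpoint
    slit = container.deploy("slit", "ns=2;s=Slit.Position")  # hypothetical node
    slit.move(12.5)
    print(slit.name, "at", slit.position())
```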
Flight control software for the wave-front sensor of SUNRISE 1m balloon telescope
Alexander Bell, Peter Barthol, Thomas Berkefeld, et al.
This paper describes the flight control software of the wave-front correction system that flew on the 2009 science flight of the Sunrise balloon telescope. The software discussed here allowed fully automated operation of the wave-front sensor and communication with the adaptive optics sub-system, the pointing system, the instrument control unit and the main telescope controller. The software was developed using modern object-oriented analysis and design techniques and consists of roughly 13,000 lines of C++ code, not counting code written for the on-board communication layer. The software operated error-free during the 5.5-day flight.
The LUCIFER control software
The successful roll-out of the control software for a complex NIR imager/spectrograph with MOS calls for flexible development strategies, owing to changing requirements during different phases of the project. A waterfall strategy used in the beginning has to change to a more iterative and agile process in the later stages. The choice of an appropriate programming language as well as a suitable software layout is crucial. For example, the software has to meet the demands of different user groups and retain a high level of flexibility for later changes and extensions. Different access levels to the instrument are mandatory, providing direct control mechanisms for lab operations and inspections of the instrument as well as tools for efficient science observations. Our hierarchical software structure, with four layers of increasing abstraction and the use of an object-oriented language, ideally supports these requirements. Here we describe our software architecture, the software development process, the different access levels and our commissioning experiences with LUCIFER 1.
The LBT real-time based control software to mitigate and compensate vibrations
J. Borelli, J. Trowitzsch, M. Brix, et al.
The Large Binocular Telescope (LBT) uses two 8.4-meter active primary mirrors and two adaptive secondary mirrors on the same mounting to take advantage of its interferometric capabilities. Both applications, interferometry and AO, are sensitive to vibrations. Several measurement campaigns have been carried out at the LBT and their results strongly indicate that a vibration monitoring system is required to improve the performance of LINC-NIRVANA, LBTI, and ARGOS, the laser-guided ground-layer adaptive optics system. Control software for mitigation and compensation of the vibrations is currently being designed. A complex set of algorithms collects real-time vibration data, archives it for further analysis and, in parallel, generates the tip-tilt and optical path difference (OPD) data for the control loops of the instruments. A real-time data acquisition device equipped with embedded real-time Linux is used in our systems. A set of quick-look tools is currently under development to verify whether the conditions at the telescope are suitable for interferometric/adaptive observations.
Control Systems II
Control software architecture for the SALT Robert Stobie Spectrograph
Anthony Koeslag, Janus Brink, Peter Menzies, et al.
Of the two first-light instruments commissioned for the Southern African Large Telescope (SALT), the Robert Stobie Spectrograph (RSS) represents the most intricate instrument to control on SALT. As such, the RSS control software (RCON) called for a design that could handle the elaborate configuration commands required to coordinate the various hardware mechanisms of RSS into the desired states. In this paper we describe in detail the software architecture, developed in LabVIEW, used to control the RSS hardware mechanisms. A command-ID-based array system was developed that executes the multi-faceted commands so as to achieve the shortest configuration times while managing all of the hardware interdependencies.
Software systems for operation, control, and monitoring of the EBEX instrument
Michael Milligan, Peter Ade, François Aubin, et al.
We present the hardware and software systems implementing autonomous operation, distributed real-time monitoring, and control for the EBEX instrument. EBEX is a NASA-funded balloon-borne microwave polarimeter designed for a 14 day Antarctic flight that circumnavigates the pole. To meet its science goals the EBEX instrument autonomously executes several tasks in parallel: it collects attitude data and maintains pointing control in order to adhere to an observing schedule; tunes and operates up to 1920 TES bolometers and 120 SQUID amplifiers controlled by as many as 30 embedded computers; coordinates and dispatches jobs across an onboard computer network to manage this detector readout system; logs over 3 GiB/hour of science and housekeeping data to an onboard disk storage array; responds to a variety of commands and exogenous events; and downlinks multiple heterogeneous data streams representing a selected subset of the total logged data. Most of the systems implementing these functions have been tested during a recent engineering flight of the payload, and have proven to meet the target requirements. The EBEX ground segment couples uplink and downlink hardware to a client-server software stack, enabling real-time monitoring and command responsibility to be distributed across the public internet or other standard computer networks. Using the emerging dirfile standard as a uniform intermediate data format, a variety of front end programs provide access to different components and views of the downlinked data products. This distributed architecture was demonstrated operating across multiple widely dispersed sites prior to and during the EBEX engineering flight.
Faking it for pleasure and profit: the use of hardware simulation at AAO
K. Shortridge, M. Vuong
Traditionally, AAO tasks controlling hardware were able to operate in a simulation mode, simply ignoring the actual hardware and responding as if the hardware were working properly. However, this did not allow rigorous testing of the low-level details of the hardware control software. For recent projects, particularly the replacement of the control system for the 3.9m AAT, we have introduced detailed software simulators that mimic the hardware and its interactions down to the individual bit level in the interfaces. By having one single simulator task representing the whole of the hardware, we get a realistic simulation of the whole system. Communications with the simulator task are introduced just above the driver calls that would normally communicate with the real hardware, allowing all of the hardware control software to be tested. Simulation can be partial, simulating only those parts of the hardware not yet available. This allows incremental software releases that demonstrate full functioning of complete aspects of the system before any hardware is available, and supports a rigorous 'value-added' approach for tracking the software development process. This was particularly successful for the telescope control system, and has been used since for other projects including the new HERMES spectrograph.
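As an illustration of where such a simulator hooks in, here is a small Python sketch under the assumption of a register-style driver interface: the simulation switch sits just above the driver call, and simulation can be enabled per subsystem so that partially available hardware can still be exercised. The register map, subsystem names and driver layer are all hypothetical.

```python
# Sketch of a partial-simulation switch placed just above the driver calls,
# in the spirit of the approach described above. The register map and the
# "real" driver are hypothetical placeholders.

SIMULATED = {"dome": True, "focus": False}       # per-subsystem simulation flags

class FakeHardware:
    """Single simulator holding bit-level state for all simulated subsystems."""
    def __init__(self):
        self.registers = {}

    def write(self, subsystem, register, value):
        self.registers[(subsystem, register)] = value

    def read(self, subsystem, register):
        return self.registers.get((subsystem, register), 0)

_sim = FakeHardware()

def write_register(subsystem, register, value):
    # The branch sits where the driver call would normally be made, so all
    # code above this point is exercised identically in both modes.
    if SIMULATED.get(subsystem, False):
        _sim.write(subsystem, register, value)
    else:
        raise NotImplementedError("real driver not available in this sketch")

def read_register(subsystem, register):
    if SIMULATED.get(subsystem, False):
        return _sim.read(subsystem, register)
    raise NotImplementedError("real driver not available in this sketch")

if __name__ == "__main__":
    write_register("dome", 0x10, 0b0000_0001)    # e.g. command: open shutter
    print("dome status bits:", bin(read_register("dome", 0x10)))
```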
The TJO-OAdM robotic observatory: OpenROCS and dome control
Josep Colomé, Xavier Francisco, Ignasi Ribas, et al.
The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory operating under completely unattended control. There are key problems to solve, in both hardware and software, when robotic control is envisaged. We present OpenROCS (Robotic Observatory Control System), an open source platform developed for the robotic control of the TJO - OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, the image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides object-oriented remote procedure calls, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of dome control: several hardware and software elements developed specifically to protect the system at this identified single point of failure. It integrates redundant control and a rain-detector signal for alarm triggering, and it responds autonomously if communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and dome control system have proven to be highly reliable.
Real-Time/Events
Heterogeneous real-time computing in radio astronomy
John M. Ford, Paul Demorest, Scott Ransom
Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous x86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphics Processing Units (GPUs). We show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing in the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data are then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point-intensive computation. Finally, the data are handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
Transiting planet search in the Kepler pipeline
Jon M. Jenkins, Hema Chandrasekaran, Sean D. McCauliff, et al.
The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year mission to search for transiting planets. Detecting transits is a signal-detection problem in which the signal of interest is a periodic pulse train and the predominant noise source is a non-white, non-stationary (1/f)-type process of stellar variability. Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong oscillations and then applies an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection statistics and the effectiveness of the algorithm for Kepler flight data.
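A toy numpy illustration of the matched-filter idea underlying such a search is given below: a whitened light curve is correlated against a periodic box-shaped pulse train and the result is normalized into a detection statistic. This single-scale example is only a stand-in for the adaptive, wavelet-based filter described in the paper, and all numbers are invented.

```python
# Toy matched-filter detection statistic for a transit-like pulse train.
# Synthetic, single-scale stand-in for the adaptive wavelet-based filter.
import numpy as np

rng = np.random.default_rng(0)
n, period, duration, depth = 4000, 300, 10, 5e-4    # cadences, not physical units

flux = 1.0 + 1e-4 * rng.standard_normal(n)           # white noise after whitening
template = np.zeros(n)
for start in range(100, n - duration, period):        # model / inject the transits
    template[start:start + duration] = -1.0
flux += depth * template

data = flux - flux.mean()
stat = (data @ template) / np.sqrt(template @ template)   # matched-filter statistic
noise = 1e-4                                               # per-cadence noise level
print(f"detection statistic: {stat / noise:.1f} sigma")
```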
Adapting a publish-subscribe middleware to a RPC response pattern
Douglas A. Morrison, James M. Johnson
At W. M. Keck Observatory we recently examined and prototyped what it would take to adapt publish-subscribe middleware, namely Data Distribution Service (DDS), to a remote procedure call (RPC) style peer-to-peer architecture. The design and prototype was based on the middleware neutral Common Services Framework (CSF) in use by NSO for the Advanced Technology Solar Telescope (ATST). The paper describes the process used to adapt DDS for RPC style commands. It highlights the differences encountered between the RTI and PrismTech implementations, and contrasts the ICE based connection service in CSF to one based on DDS.
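The generic pattern behind such an adaptation is a command topic and a response topic joined by a correlation identifier. The sketch below shows that pattern in Python with in-memory queues standing in for DDS topics; it does not reproduce the CSF interfaces or either vendor's DDS API.

```python
# Generic sketch of request/response layered on publish-subscribe: a command
# topic and a response topic joined by a correlation id. Python queues stand
# in for DDS topics; the CSF/DDS specifics differ.
import queue
import threading
import uuid

command_topic = queue.Queue()     # would be a DDS "command" topic
response_topic = queue.Queue()    # would be a DDS "response" topic

def server():
    """Subscriber on the command topic; publishes a matching response."""
    while True:
        msg = command_topic.get()
        if msg is None:
            break
        response_topic.put({"correlation_id": msg["correlation_id"],
                            "result": f"executed {msg['action']}"})

def call(action, timeout=2.0):
    """RPC-style call: publish a command, block until the correlated reply."""
    cid = str(uuid.uuid4())
    command_topic.put({"correlation_id": cid, "action": action})
    while True:
        reply = response_topic.get(timeout=timeout)
        if reply["correlation_id"] == cid:
            return reply["result"]
        response_topic.put(reply)         # not ours; put it back for other callers

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    print(call("datum.azimuth"))          # hypothetical command name
    command_topic.put(None)               # shut the server down
```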
Data Processing Operations
Lessons learned deploying a second generation Observation Control System for Subaru Telescope
Eric Jeschke, Takeshi Inagaki
Subaru Telescope is deploying and commissioning a second-generation Observation Control System (OCS), building upon a 10-year history of using the first-generation OCS and seeking to improve several key aspects of managing and using it. Replacing an extensive, functional, mission-critical software system at the core of the telescope is an ambitious undertaking. In this paper we present some important and sometimes surprising lessons learned during the buildout and commissioning phase of the Generation 2 OCS at Subaru Telescope. We present our experience with the rewrite vs. refactor decision, aspects of testing including unit and functional tests, compatibility decisions regarding legacy systems, and managing telescope priorities vs. developer priorities.
Data handling and control for the European Solar Telescope
Ilaria Ermolli, Felix Bettonvil, Gianna Cauzzi, et al.
We introduce the concepts for the control and data handling systems of the European Solar Telescope (EST), the main functional and technical requirements for the definition of these systems, and the outcomes from the trade-off analysis to date. Concerning the telescope control, EST will have performance requirements similar to those of current medium-sized night-time telescopes. On the other hand, the science goals of EST require the simultaneous operation of three instruments and of a large number of detectors. This leads to a projected data flux that will be technologically challenging and exceeds that of most other astronomical projects. We give an overview of the reference design of the control and data handling systems for the EST to date, focusing on the more critical and innovative aspects resulting from the overall design of the telescope.
Science data quality assessment for the Large Synoptic Survey Telescope
Richard A. Shaw, Deborah Levine, Timothy Axelrod, et al.
LSST will have a Science Data Quality Assessment (SDQA) subsystem for the assessment of the data products that will be produced during the course of a 10 yr survey. The LSST will produce unprecedented volumes of astronomical data as it surveys the accessible sky every few nights. The SDQA subsystem will enable comparisons of the science data with expectations from prior experience and models, and with established requirements for the survey. While analogous systems have been built for previous large astronomical surveys, SDQA for LSST must meet a unique combination of challenges. Chief among them will be the extraordinary data rate and volume, which restricts the bulk of the quality computations to the automated processing stages, as revisiting the pixels for a post-facto evaluation is prohibitively expensive. The identification of appropriate scientific metrics is driven by the breadth of the expected science, the scope of the time-domain survey, the need to tap the widest possible pool of scientific expertise, and the historical tendency of new quality metrics to be crafted and refined as experience grows. Prior experience suggests that contemplative, off-line quality analyses are essential to distilling new automated quality metrics, so the SDQA architecture must support integrability with a variety of custom and community-based tools, and be flexible to embrace evolving QA demands. Finally, the time-domain nature of LSST means every exposure may be useful for some scientific purpose, so the model of quality thresholds must be sufficiently rich to reflect the quality demands of diverse science aims.
LBT data mining leads to increased open shutter time
Norman Cushing, Chris Biddick, Dave Thompson, et al.
The software group at the Large Binocular Telescope Observatory (LBTO) used logs and telemetry related to telescope control system behavior to investigate improving the operational efficiency of the telescope. Our investigation unearthed several surprises of unknown, unexpected, and undesired system behavior. What had been implemented was not always the same as what we thought had been implemented. A bit of rework using minimal resources would provide an inexpensive and immediate benefit leading directly to a more efficient operation. Also noted were software resource usage anomalies that had gone unnoticed and areas where logging and telemetry data were inadequate to answer fundamental questions. We considered trade-offs regarding what and when to modify configuration parameters, hardware, and software that, when changed, would increase performance. In this paper we statistically examine the raw data and model system improvements for different implementations when viewed as a system. We also compare the overall system performance before and after the modifications we have implemented.
An algorithm for the fitting of planet models to Kepler light curves
Peter Tenenbaum, Stephen T. Bryson, Hema Chandrasekaran, et al.
We describe an algorithm which fits model planetary system parameters to light curves from Kepler Mission target stars. The algorithm begins by producing an initial model of the system which is used to seed the fit, with particular emphasis on obtaining good transit timing parameters. An attempt is then made to determine whether the observed transits are more likely due to a planet or an eclipsing binary. In the event that the transits are consistent with a transiting planet, an iterative fitting process is initiated: a wavelet-based whitening filter is used to eliminate stellar variations on timescales long compared to a transit; a robust nonlinear fitter operating on the whitened light curve produces a new model of the system; and the procedure iterates until convergence upon a self-consistent whitening filter and planet model. The fitted transits are removed from the light curve and a search for additional planet candidates is performed upon the residual light curve. The fitted models are used in additional tests which identify false positive planet detections: multiple planet candidates with near-identical fitted periods are far more likely to be an eclipsing binary, for example, while target stars in which the model light curve is correlated with the star centroid position may indicate a background eclipsing binary, and subtraction of all model planet candidates yields a light curve of pure noise and stellar variability, which can be used to study the probability that the planet candidates result from statistical fluctuations in the data.
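The structure of that iterate-until-self-consistent loop can be sketched as follows, with a deliberately crude running-median whitener and a smooth dip standing in for the transit model; the function names and parameterization are illustrative only, not the pipeline's actual algorithms.

```python
# Structural sketch of the iterative loop described above: whiten, fit,
# re-whiten with the fitted model removed, repeat. The whitener and transit
# model are deliberately simplistic placeholders, not the pipeline algorithms.
import numpy as np
from scipy.optimize import least_squares

def whiten(flux, model, window=101):
    """Crude stand-in for the wavelet whitener: subtract a running median
    computed on the light curve with the current transit model removed."""
    resid = flux - model
    trend = np.array([np.median(resid[max(0, i - window): i + window])
                      for i in range(len(resid))])
    return flux - trend

def transit_model(params, t):
    """Smooth periodic dip (a stand-in for a physical transit model)."""
    epoch, period, duration, depth = params
    phase = ((t - epoch + period / 2) % period) - period / 2
    return -depth * np.exp(-0.5 * (phase / duration) ** 2)

def fit_planet(t, flux, p0, n_iter=5):
    params = np.asarray(p0, dtype=float)
    for _ in range(n_iter):                        # iterate to self-consistency
        white = whiten(flux, transit_model(params, t))
        res = least_squares(lambda p: white - transit_model(p, t), params)
        if np.allclose(res.x, params, rtol=1e-6):
            break
        params = res.x
    return params

if __name__ == "__main__":
    t = np.arange(2000.0)
    truth = (50.0, 300.0, 4.0, 4e-4)
    rng = np.random.default_rng(1)
    flux = transit_model(truth, t) + 1e-4 * rng.standard_normal(t.size) \
           + 2e-4 * np.sin(t / 400.0)              # slow stellar trend
    print("fitted parameters:", fit_planet(t, flux, (48.0, 299.0, 3.0, 2e-4)))
```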
VO/Archive
Building archives in the virtual observatory era
Raymond L. Plante, Gretchen Greene, Robert J. Hanisch, et al.
Broad support for Virtual Observatory (VO) standards by astronomical archives is critical for the success of the VO as a research platform. Indeed, a number of effective data discovery, visualization, and integration tools have been created which rely on this broad support. Thus, to an archive, the motivation for supporting VO standards is strong. However, we are now seeing a growing trend among archive developers towards leveraging VO standards and technologies not just to provide interoperability with the VO, but also to support an archive's internal needs and the needs of the archive's primary user base. We examine the motivation for choosing VO technologies for implementing an archive's functionality and list several current examples, including from the Hubble Legacy Archive, NASA HEASARC, NOAO, and NRAO. We will also speculate on the effect that VO will have on some of the ambitious observatory projects planned for the near future.
Shannon sampling and nonlinear dynamics on graphs for representation, regularization and visualization of complex data
M. Pesenson, I. Pesenson, B. McCollum, et al.
Data are now produced faster than they can be meaningfully analyzed. Many modern data sets present unprecedented analytical challenges, not merely because of their size but because of their inherent complexity and information richness. Large numbers of astronomical objects now have dozens or hundreds of useful parameters describing each one. Traditional color-color plots using a limited number of symbols and some color-coding are clearly inadequate for finding all useful correlations given such large numbers of parameters. To capitalize on the opportunities provided by these data sets one needs to be able to organize, analyze and visualize them in fundamentally new ways. The identification and extraction of useful information in multiparametric, high-dimensional data sets - data mining - is greatly facilitated by finding simpler, that is, lower-dimensional, abstract mathematical representations of the data sets that are more amenable to analysis. Dimensionality reduction consists of finding a lower-dimensional representation of high-dimensional data by constructing a set of basis functions that capture patterns intrinsic to a particular state space. Traditional methods of dimension reduction and pattern recognition often fail to work well when applied to data sets as complex as those that now confront astronomy. We present here our developments in data compression, sampling, nonlinear dimensionality reduction, and clustering, which are important steps in the analysis of large-scale, complex datasets.
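As a baseline illustration of what a lower-dimensional representation means in practice, the snippet below projects a synthetic multiparametric catalogue onto its two leading principal components with scikit-learn. The paper itself concerns nonlinear, graph-based methods; linear PCA is shown only as the simplest example of the concept.

```python
# Baseline dimensionality-reduction example on a synthetic catalogue.
# PCA is the simplest (linear) case of the representations discussed above.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_objects, n_params = 5000, 40
latent = rng.standard_normal((n_objects, 3))         # 3 hidden degrees of freedom
mixing = rng.standard_normal((3, n_params))
catalogue = latent @ mixing + 0.05 * rng.standard_normal((n_objects, n_params))

pca = PCA(n_components=2)
embedding = pca.fit_transform(catalogue)              # n_objects x 2
print("explained variance ratio:", pca.explained_variance_ratio_)
print("embedding shape:", embedding.shape)
```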
The Kepler DB: a database management system for arrays, sparse arrays, and binary data
Sean McCauliff, Miles T. Cote, Forrest R. Girouard, et al.
The Kepler Science Operations Center stores pixel values for approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database management system (Kepler DB) was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database in which data are represented as one-dimensional arrays, sparse arrays or binary large objects. We discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.
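The style of API such a time-series store exposes can be sketched as follows: one-dimensional arrays are appended and read back per target and data type over a cadence range, with gaps preserved. The class and method names are hypothetical, and the in-memory dictionary is only a stand-in for the real Kepler DB.

```python
# Toy, in-memory sketch of a per-target time-series store. Method names are
# hypothetical and do not reflect the real Kepler DB API.
import numpy as np
from collections import defaultdict

class ArrayStore:
    def __init__(self):
        # (target_id, series_name) -> {cadence: value}; a sparse representation
        self._data = defaultdict(dict)

    def append(self, target_id, series_name, start_cadence, values):
        series = self._data[(target_id, series_name)]
        for i, v in enumerate(values):
            series[start_cadence + i] = v

    def read(self, target_id, series_name, start_cadence, end_cadence):
        series = self._data[(target_id, series_name)]
        out = np.full(end_cadence - start_cadence + 1, np.nan)   # gaps stay NaN
        for cadence, value in series.items():
            if start_cadence <= cadence <= end_cadence:
                out[cadence - start_cadence] = value
        return out

if __name__ == "__main__":
    store = ArrayStore()
    store.append(8462852, "raw_flux", 1000, [1.01, 1.02, 1.00])   # made-up values
    store.append(8462852, "raw_flux", 1005, [0.99])
    print(store.read(8462852, "raw_flux", 1000, 1006))
```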
The WIYN ODI instrument software configuration and scripting
The WIYN One Degree Imager (ODI) exposure system must drive multiple complex subsystems requiring large amounts of configuration information that may change substantially between exposures. The mosaic OTA (Orthogonal Transfer Array) focal plane provides up to 64 streams of image data during readout or 512 ROI video streams for guidance or real-time photometry. The workflows and instrument operation sequences are numerous, and are evolving to adapt to the new capabilities the mosaic OTA camera presents. By making scripting data-driven, the ODI exposure system provides a flexible, yet structured and powerful, paradigm for instrumentation process control and operation.
Discovery Channel Telescope software key technologies
The Discovery Channel Telescope (DCT) is a 4.3-meter astronomical research telescope being built in northern Arizona as a partnership between Discovery Communications and Lowell Observatory. The project software team has designed and partially implemented a component-based system. We describe here the key features of that design (state-based components that respond to signals) and detail specific implementation technologies we expect to be of most interest: examples of the Command Pattern, State Pattern, and XML-based configuration file handling using LabVIEW classes and shared variables with logging and alarming features.
Future management needs of a "software-driven" science community
Kim K. Nilsson, Ole Möller-Nilsson
The work of astronomers is becoming more complex and advanced as computing technology progresses. With improved computing capabilities and increased data flow, more sophisticated software is required in order to interpret, and fully exploit, astronomical data. However, it is not possible for every astronomer to also be a software specialist. As history has shown, the work of scientists always becomes increasingly specialised, and we here argue in favour of another, at least partial, split between "programmers" and "interpreters". In this presentation we outline our vision for a new approach and symbiosis between software specialists and scientists, and present its advantages along with a simple test case.
Cyberinfrastructure I
An observation execution system for next-generation large telescopes
The telescope development projects of the 1990s produced a set of capable 8-10 m telescopes that are now in operation across the northern and southern hemispheres. This was the first generation of telescopes to benefit from carefully engineered software systems, yet several years of 8 m operations have revealed weaknesses in a common architecture employed by many of them. Today engineers are working on the next generation of telescopes, the extremely large telescopes (ELTs), along with their software systems. It is our view that many of the fundamental assumptions about how software systems for 8-m-class large telescopes should be constructed are not optimal for the next generation of extremely large telescopes. In fact, these ideas may constrain the solution space and result in overly complex software and increased development costs. This paper points out issues with current architectural solutions and how they impact the software needed for extremely large telescopes. It then outlines a new approach to the design of the software running at the telescope that is targeted at the development issues of ELTs and large-telescope operations.
Software architecture of the Magdalena Ridge Observatory Interferometer
Allen Farris, Dan Klinglesmith, John Seamons, et al.
Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple high-level descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with the TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.
Designing a high-availability cluster for the Subaru Telescope second generation observation control system
Eric Jeschke, Takeshi Inagaki
Subaru Telescope is commissioning a second-generation Observation Control System (OCS), building upon a 10-year history of using the first-generation OCS. One of the primary lessons learned about maintaining a distributed OCS is that the idea of individual computer nodes specialized for specific functions greatly complicates troubleshooting and failover, even with a dedicated "hot spare" for each specialized node. In contrast, the Generation 2 (Gen2) system was designed from the ground up around the principle of a High-Availability (HA) cluster, commonly used for high-traffic, mission-critical web sites. In such a cluster, nodes are not specialized, and any node can perform any function of the OCS. We describe the problems encountered in trying to troubleshoot and manage failure on the legacy OCS and describe the architectural design of the HA cluster for the new system, including special characteristics designed for the high-altitude, remote environment of the summit of Mauna Kea, where there is a greatly increased probability of such failures. Although the focus is primarily on the hardware, we touch upon the software architecture written to take advantage of the features of the HA cluster design. Finally, we outline the advantages of the new system and show how the design greatly facilitates troubleshooting, robustness and ease of failure management. The results may be of interest to anyone designing a distributed system using COTS hardware and open-source software to withstand failure and improve manageability in a remote environment.
Evolution of the VLT instrument control system toward industry standards
The VLT control system is a large distributed system consisting of Linux workstations, which provide the high-level coordination and the interfaces to the users, and VME-based Local Control Units (LCUs) running the VxWorks real-time operating system, with commercial and proprietary boards acting as the interface to the instrument functions. After more than 10 years of VLT operations, some of the technologies used by the astronomical instruments are being discontinued, making it difficult to find adequate hardware for future projects. In order to deal with this obsolescence, the VLT Instrumentation Framework is being extended to adopt well-established Commercial Off The Shelf (COTS) components connected through industry-standard fieldbuses. This ensures a flexible, state-of-the-art hardware configuration for the next generation of VLT instruments, allowing access to instrument devices via more compact and simpler control units such as PC-based Programmable Logic Controllers (PLCs). It also makes it possible to control devices directly from the Instrument Workstation through a normal Ethernet connection. This paper outlines the requirements that motivated this work, as well as the architecture and design of the framework extension. In addition, it describes preliminary results on a use case, a VLTI visitor instrument used as a pilot project to validate the concepts and the suitability of COTS products such as PC-based PLCs, EtherCAT and OPC UA as solutions for instrument control.
Operating a global network of autonomous observatories
Petr Kubánek, Alberto J. Castro-Tirado, Antonio de Ugarte Postigo, et al.
We discuss our experiences operating a heterogeneous global network of autonomous observatories. The observatories are presently situated on four continents, with a fifth expected during the summer of 2010. The network nodes are small to intermediate diameter telescopes (<= 150 cm) owned by different institutions but running the same observatory control software. We report on the experience gained during construction, commissioning and operation of the observatories, as well as future plans. Problems encountered in the construction and operation of the nodes are summarised. Operational statistics as well as scientific results from the observatories are also presented.
Common Services/Reuse
Achieving reusability in KMOS instrument software through design patterns
KMOS is a near-infrared multi-object spectrometer, which is currently being built by a British-German consortium for the ESO VLT. As for any other VLT instrument, the KMOS instrument software is based on the application framework given by the VLT Common Software, but faces particular design challenges in addition. As separate parts of the software require a similar functionality with respect to mechanical and optical permissibility checks, user interface, and configuration control, a number of tasks have to be implemented twice and slightly differently. It turns out that most of these issues can be tackled successfully by means of well-known object-oriented design patterns, providing for reusability and improving the overall software design. We present a set of sample problems along with their particular pattern solution.
Reusing the VLT control system on the VISTA Telescope
D. L. Terrett, Malcolm Stewart
Once it was decided that the VISTA infra-red survey telescope would be built on Paranal and operated by ESO, it was clear that there would be many long-term advantages in basing the control system on that of the VLT. Benefits over developing a new system, such as lower development costs, or disadvantages, such as constraints on the design, were not the most important factors in deciding how to implement the TCS, but now that the telescope is complete the pros and cons of re-using an existing system can be evaluated. This paper reviews the lessons learned during construction and commissioning and attempts to show where reusing an existing system was a help and where it was a hindrance. It highlights those things that could have been done differently to better exploit the fact that we were using a system that was already proven to work, and where, with hindsight, we would have been better off re-implementing components from scratch rather than modifying existing ones.
Evaluating and evolving common services framework for use at W. M. Keck Observatory
Jimmy Johnson, Steve Wampler, Kevin McCann
The Common Services Framework (CSF) is a software architecture developed at the National Solar Observatory for control of the Advanced Technology Solar Telescope (ATST). The framework was designed with the intent to make it independent of the ATST application and freely available to other projects. As part of the System Design phase for the Telescope Control System upgrade and Next Generation Adaptive Optics projects at the W. M. Keck Observatory a number of software frameworks and middleware were evaluated. Of those evaluated, CSF was selected as one of the primary choices for all or part of the software architecture and will be pursued further in the next design phases. This paper discusses the evaluation of CSF at Keck and some possible evolutions of the framework.
Integration of SCUBA-2 within the JCMT Observatory Control System
Craig A. Walther, Xiaofeng Gao, Dennis Kelly, et al.
The high data rates and unique operating modes of the SCUBA-2 instrument made for an especially challenging effort to get it working with the existing JCMT Observatory Control System (OCS). Thanks to some forethought by the original designers of the OCS, who had envisioned a SCUBA-2-like instrument years before it was a reality, the JCMT was already being coordinated by a versatile Real Time Sequencer (RTS). The timing pulses from the RTS are fanned out to all of the SCUBA-2 Multi Channel Electronics (MCE) boxes, allowing for precision timing of each data sample. The SCUBA-2 data handling and OCS communications are broken into two tasks: one does the actual data acquisition and file writing, the other communicates with the OCS through DRAMA. These two tasks talk to each other via shared memory and semaphores. It is possible to swap back and forth between heterodyne and SCUBA-2 observing simply by selecting an observation for a particular instrument. This paper also covers the changes made to the existing OCS in order to integrate it with the new SCUBA-2-specific software.
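A minimal Python sketch of that two-task arrangement is shown below: one process writes frames into shared memory and a pair of semaphores coordinates the hand-off to the process that would forward them to the OCS. The frame contents and sizes are invented; multiprocessing primitives stand in for the real tasks.

```python
# Minimal producer/consumer sketch of the two-task arrangement described above:
# an acquisition task fills a shared buffer, semaphores signal the
# communication task to pick frames up. Frame contents are invented.
import multiprocessing as mp

FRAME_SIZE = 8

def acquisition(buffer, space, ready):
    for frame_number in range(3):
        space.acquire()                            # wait until the buffer is free
        for i in range(FRAME_SIZE):
            buffer[i] = frame_number * 100 + i     # fake detector samples
        ready.release()                            # signal: a frame is ready

def communication(buffer, space, ready):
    for _ in range(3):
        ready.acquire()                            # block until a frame is available
        print("forwarding frame:", list(buffer))
        space.release()                            # buffer may be reused

if __name__ == "__main__":
    buffer = mp.Array("i", FRAME_SIZE)             # shared memory between processes
    space = mp.Semaphore(1)                        # one frame slot, initially free
    ready = mp.Semaphore(0)
    producer = mp.Process(target=acquisition, args=(buffer, space, ready))
    consumer = mp.Process(target=communication, args=(buffer, space, ready))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```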
Commensal observing with the Allen Telescope array: software command and control
Colby Gutierrez-Kraybill, Garrett K. Keating, David MacMahon, et al.
The Allen Telescope Array (ATA) is a Large-Number-Small-Diameter radio telescope array currently with 42 individual antennas and 5 independent back-end science systems (2 imaging FX correlators and 3 time domain beam formers) located at the Hat Creek Radio Observatory (HCRO). The goal of the ATA is to run multiple back-ends simultaneously, supporting multiple science projects commensally. The primary software control systems are based on a combination of Java, JRuby and Ruby on Rails. The primary control API is simplified to provide easy integration with new back-end systems while the lower layers of the software stack are handled by a master observing system. Scheduling observations for the ATA is based on finding a union between the science needs of multiple projects and automatically determining an efficient path to operating the various sub-components to meet those needs. When completed, the ATA is expected to be a world-class radio telescope, combining dedicated SETI projects with numerous radio astronomy science projects.
Web 2.0/User Interfaces
Writing Web 2.0 applications for science archives
William Roby
Writing science archive web applications of this sort is now possible because of significant breakthroughs in web technology over the last four years. The web browser is no longer a glorified batch-processing terminal, but an interactive environment that gives the user an experience similar to that of an installed desktop application. Taking advantage of this technology requires a significant amount of UI design and advanced interactions with the web server. There are new levels of sophistication required to develop this sort of web application effectively. The IRSA group (NASA/IPAC Infrared Science Archive) is developing web-based software that takes full advantage of modern technology and is designed to be reused easily. This way we can add new missions and data sets without a large programming effort while keeping the advanced interface. We can now provide true web-based FITS viewing, data overlays, and interaction without any plugins. Our tabular display allows us to filter, sort, and interact with large amounts of data in ways that take advantage of the browser's power. This talk shows how we use AJAX technology, the Google Web Toolkit (GWT), and Java to develop a data archive that is both well designed and truly interactive.
Build great web search applications quickly with Solr and Blacklight
Ron DuPlain, Dana S. Balser, Nicole M. Radziwill
The NRAO faced performance and usability issues after releasing a single-search-box ("Google-like") web application to query data across all NRAO telescope archives. Running queries with several relations across multiple databases proved to be very expensive in compute resources. An investigation for a better platform led to Solr and Blacklight, a solution stack which allows in-house development to focus on in-house problems. Solr is an Apache project built on Lucene to provide a modern search server with a rich set of features and impressive performance. Blacklight is a web user interface (UI) for Solr primarily developed by libraries at the University of Virginia and Stanford University. Though Blacklight targets libraries, it is highly adaptable for many types of search applications which benefit from the faceted searching and browsing, minimal configuration, and flexible query parsing of Solr and Lucene. The result: one highly reused codebase provides for millisecond response times and a flexible UI. Not just for observational data, NRAO is rolling out Solr and Blacklight across domains of library databases, telescope proposals, and more -- in addition to telescope data products, where integration with the Virtual Observatory is on-going.
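For readers unfamiliar with Solr, the kind of faceted query Blacklight issues on the user's behalf looks roughly like the following HTTP request. The host, core and field names are hypothetical; only the query parameters (q, rows, wt, facet, fq) are standard Solr select syntax.

```python
# Sketch of a faceted query against a Solr select handler, the kind of request
# Blacklight makes for the user. Host, core, and field names are hypothetical.
import requests

SOLR = "http://localhost:8983/solr/archive/select"    # hypothetical core

params = {
    "q": "crab nebula",                      # free-text query across indexed fields
    "rows": 10,
    "wt": "json",
    "facet": "true",
    "facet.field": ["telescope", "receiver"],          # hypothetical facet fields
    "fq": "obs_date:[2009-01-01T00:00:00Z TO *]",      # hypothetical filter query
}

response = requests.get(SOLR, params=params, timeout=10)
response.raise_for_status()
result = response.json()
print("hits:", result["response"]["numFound"])
for doc in result["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```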
Graphical user interfaces of the dark energy survey
Jacob Eiting, Ann Elliott, Klaus Honscheid, et al.
The Dark Energy Survey (DES) is a 5000 square degree survey of the southern galactic cap set to take place on the Blanco 4-m telescope at Cerra Tololo Inter-American Observatory. A new 500 MP camera and control system are being developed for this survey. To facilitate the data acquisition and control, a new user interface is being designed that utilizes the massive improvements in web based technologies in the past year. The work being done on DES shows that these new technologies provide the functionality and performance required to provide a productive and enjoyable user experience in the browser.
User interface software development for the WIYN One Degree Imager (ODI)
John Ivens, Andrey Yeatts, Daniel Harbeck, et al.
User interfaces (UIs) are a necessity for almost any data acquisition system. The development team for the WIYN One Degree Imager (ODI) chose to develop a user interface that allows access to most of the instrument control for both scientists and engineers through the World Wide Web, because of the web's ease of use and accessibility around the world. Having a web-based UI allows ODI to grow from a visitor-mode instrument to a queue-managed instrument and also facilitates remote servicing and troubleshooting. The challenges of developing such a system involve the difficulties of browser interoperability, speed, and presentation, and the choices involved in integrating browser and server technologies. To this end, the team has chosen a combination of Java, JBoss, AJAX technologies, XML data descriptions, Oracle XML databases, and an emerging technology called the Google Web Toolkit (GWT) that compiles Java into JavaScript for presentation in a browser. Advantages of using GWT include developing the front-end browser code in Java, GWT's native support for AJAX, the use of XML to describe the user interface, the ability to profile code speed and discover bottlenecks, the ability to communicate efficiently with application servers such as JBoss, and the ability to optimize and test code for multiple browsers. We discuss the interoperation of all of these technologies to create fast, flexible, and robust user interfaces that are scalable, manageable, separable, and, as much as possible, allow maintenance of all code in Java.
The use of Flex as a viable toolkit for astronomy software applications
Kim Gillies, Alberto Conti, Anthony Rogers
The challenges facing the developers of user interfaces for astronomy applications have never been greater. Astronomers and engineers often use well-designed commercial and web applications outside their work environment and have come to expect a similar user experience from applications developed for their work tasks. The connectivity provided by the Internet and the ability to work from anywhere can improve user productivity, but it is a challenge to provide the kind of interactivity and responsiveness needed for astronomical applications in web-based projects. It is fair to say that browser-based applications have not been adequate for many kinds of workhorse astronomy applications. The Flex/ActionScript framework from Adobe has been used successfully at the Space Telescope Science Institute in a variety of situations that were not possible with other technologies. In this paper, the Flex framework and technology is briefly introduced, followed by a discussion of its advantages and disadvantages and how it addresses user expectations. Three astronomy applications are presented demonstrating the technology's capabilities, with useful performance data. Flex/ActionScript is not well known within the astronomy development community, and our goal is to demonstrate that it can be the right choice for many astronomy applications.
Pipelines/Kepler
An open source application framework for astronomical imaging pipelines
The LSST Data Management System is built on an open source software framework that has middleware and application layers. The middleware layer provides capabilities to construct, configure, and manage pipelines on clusters of processing nodes, and to manage the data the pipelines consume and produce. It is not in any way specific to astronomical applications. The complementary application layer provides the building blocks for constructing pipelines that process astronomical data, both in image and catalog forms. The application layer does not directly depend upon the LSST middleware, and can readily be used with other middleware implementations. Both layers have object oriented designs that make the creation of more specialized capabilities relatively easy through class inheritance. This paper outlines the structure of the LSST application framework and explores its usefulness for constructing pipelines outside of the LSST context, two examples of which are discussed. The classes that the framework provides are related within a domain model that is applicable to any astronomical pipeline that processes imaging data. Specifically modeled are mosaic imaging sensors; the images from these sensors and the transformations that result as they are processed from raw sensor readouts to final calibrated science products; and the wide variety of catalogs that are produced by detecting and measuring astronomical objects in a stream of such images. The classes are implemented in C++ with Python bindings provided so that pipelines can be constructed in any desired mixture of C++ and Python.
Automated calibration and imaging with the Allen Telescope Array
Garrett K. Keating, William C. Barott, Melvyn Wright
Planned instruments such as the Atacama Large Millimeter Array (ALMA), the Large Synoptic Survey Telescope (LSST) and the Square Kilometer Array (SKA) will measure their data in petabytes. Innovative approaches in signal processing, computing hardware, algorithms, and data handling are necessary. The Allen Telescope Array (ATA) is a 42-antenna aperture synthesis array equipped with broadband, dual polarization receivers from 0.5 to 11 GHz. Four independent IF bands feed 4 spectral cross correlators and 3 beamformers. In this paper we describe the automated data processing to handle the high data rate and RFI in close to real time at the ATA.
Kepler Science Operations Center pipeline framework
Todd C. Klaus, Sean McCauliff, Miles T. Cote, et al.
The Kepler Mission is designed to continuously monitor up to 170,000 stars at a 30-minute cadence for 3.5 years searching for Earth-size planets. The data are processed at the Science Operations Center at NASA Ames Research Center. Because of the large volume of data and the memory needed, as well as the CPU-intensive nature of the analyses, significant computing hardware is required. We have developed generic pipeline framework software that is used to distribute and synchronize processing across a cluster of CPUs and provide data accountability for the resulting products. The framework is written in Java and is, therefore, platform-independent. The framework scales from a single, standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized, dynamic control of the unit of work without the need to modify the framework. Distributed transaction services provide for atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic parameter management and data accountability services record parameter values, software versions, and other metadata used for each pipeline execution. A graphical user interface allows for configuration, execution, and monitoring of pipelines. The framework was developed for the Kepler Mission based on Kepler requirements, but the framework itself is generic and could be used for a variety of applications where these features are needed.
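The shape of the plug-in unit-of-work idea can be reduced to a short sketch (in Python here, although the real framework is Java): a generator plug-in decides how the work is partitioned, and the framework dispatches the resulting units without knowing what they contain. Class and function names are illustrative, not the framework's actual API.

```python
# Sketch of a plug-in unit-of-work generator and a dispatcher that farms the
# resulting units out to worker processes. Names are illustrative only; the
# real framework is written in Java.
from concurrent.futures import ProcessPoolExecutor

class CadenceRangeUowGenerator:
    """Plug-in: split a cadence interval into fixed-size units of work."""
    def __init__(self, chunk=1000):
        self.chunk = chunk

    def generate(self, start, end):
        return [(c, min(c + self.chunk - 1, end))
                for c in range(start, end + 1, self.chunk)]

def pipeline_module(unit):
    """Stand-in for a science module operating on one unit of work."""
    start, end = unit
    return f"processed cadences {start}-{end}"

def run_pipeline(generator, start, end):
    units = generator.generate(start, end)
    with ProcessPoolExecutor() as pool:            # stands in for the cluster
        for result in pool.map(pipeline_module, units):
            print(result)

if __name__ == "__main__":
    run_pipeline(CadenceRangeUowGenerator(chunk=1500), 0, 5000)
```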
The Kepler Science Operations Center pipeline framework extensions
Todd C. Klaus, Miles T. Cote, Sean McCauliff, et al.
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating onboard data compression tables, monitoring photometer health and status, processing science data, and exporting Kepler Science Processing Pipeline products to the Multi-mission Archive at Space Telescope Science Institute (MAST). We describe how the pipeline framework software developed for the Kepler Mission is used to achieve these goals, including the development of pipeline configurations for processing science data and performing other support roles, and the development of custom unit-of-work generators for controlling how Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages data retrieval and storage for a given unit of work and the MATLAB algorithms that process the data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing the files to be used to debug and evolve the algorithms offline.
Data validation in the Kepler Science Operations Center pipeline
Hayley Wu, Joseph D. Twicken, Peter Tenenbaum, et al.
We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests are performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed.
Kepler Session
Kepler Science Operations Center architecture
Christopher Middour, Todd C. Klaus, Jon Jenkins, et al.
We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization. We show how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Processing Pipeline.
Semi-weekly monitoring of the performance and attitude of Kepler using a sparse set of targets
Hema Chandrasekaran, Jon M. Jenkins, Jie Li, et al.
The Kepler spacecraft is in a heliocentric Earth-trailing orbit, continuously observing ~160,000 select stars over ~115 square degrees of sky using its photometer, which contains 42 highly sensitive CCDs. The science data from these stars, consisting of ~6 million pixels at 29.4-minute intervals, are downlinked only every ~30 days. Additional low-rate X-band communications contacts are conducted with the spacecraft twice a week to downlink a small subset of the science data. This paper describes how we assess and monitor the performance of the photometer and the pointing stability of the spacecraft using such a sparse data set.
Focal plane geometry characterization of the Kepler Mission
Peter Tenenbaum, Jon M. Jenkins
The Kepler Mission focal plane contains 42 charge-coupled device (CCD) photodetectors. Each CCD is composed of 2.2 million square pixels, 27 micrometers on a side, arranged in a grid of 2,200 columns by 1,044 rows. The science goals of the Kepler Mission require that the position of each CCD be determined with an accuracy of 0.1 pixels, corresponding to 2.7 micrometers or 0.4 seconds of arc, a level which is not achievable through pre-flight metrology. We describe a technique for determining the CCD positioning using images of the Kepler field of view (FOV) obtained in flight. The technique uses the fitted centroid row and column positions of 400 pre-selected stars on each CCD to obtain empirical polynomials which relate sky coordinates (right ascension and declination) to chip coordinates (row and column). The polynomials are in turn evaluated to produce constraints for a nonlinear model fit which directly determines the model parameters describing the location and orientation of each CCD. The focal plane geometry characterization algorithm is itself embedded in an iterative process which determines the focal plane geometry and the Pixel Response Function for each CCD in a self-consistent manner. In addition to the fully automated calculation, a person-in-the-loop implementation was developed to allow an initial determination of the geometry in the event of large misalignments, achieving a much looser capture tolerance for more modest accuracy and reduced automation.
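The first step of that procedure, fitting low-order two-dimensional polynomials that map sky coordinates to chip row and column from measured star centroids, can be illustrated with ordinary linear least squares as below. The data are synthetic, and the subsequent nonlinear per-CCD geometry fit built on top of these polynomials is not reproduced here.

```python
# Illustration of the polynomial step described above: fit quadratic 2-D
# polynomials mapping (ra, dec) to chip (row, col) from star centroids.
# Synthetic data; the nonlinear per-CCD geometry fit is not reproduced.
import numpy as np

rng = np.random.default_rng(2)
n_stars = 400
ra = rng.uniform(290.0, 291.0, n_stars)        # degrees, synthetic field
dec = rng.uniform(44.0, 45.0, n_stars)

def design_matrix(ra, dec, ra0=290.5, dec0=44.5):
    """Quadratic polynomial terms in sky coordinates, centred for conditioning."""
    x, y = ra - ra0, dec - dec0
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

# Synthetic "true" mapping plus 0.05-pixel centroiding noise.
true_row = 10.0 + 900.0 * (dec - 44.0) + 30.0 * (ra - 290.0)
true_col = 20.0 + 1800.0 * (ra - 290.0) - 25.0 * (dec - 44.0)
row = true_row + 0.05 * rng.standard_normal(n_stars)
col = true_col + 0.05 * rng.standard_normal(n_stars)

A = design_matrix(ra, dec)
row_coeffs, *_ = np.linalg.lstsq(A, row, rcond=None)
col_coeffs, *_ = np.linalg.lstsq(A, col, rcond=None)

predicted_row = A @ row_coeffs
print("rms row residual [pixels]:", np.sqrt(np.mean((predicted_row - row) ** 2)))
```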
Selecting pixels for Kepler downlink
Stephen T. Bryson, Jon M. Jenkins, Todd C. Klaus, et al.
The Kepler mission monitors ~165,000 stellar targets using 42 2200 × 1024-pixel CCDs. Onboard storage and bandwidth constraints prevent the storage and downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, the background and the signal-to-noise in each pixel, and maximizing the signal-to-noise ratio of the target. This paper describes pixel selection, the creation of spacecraft apertures that efficiently capture the selected pixels, and aperture assignment to a target. Engineering apertures, short-cadence targets and custom-specified shapes are discussed.
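A simplified stand-in for that selection logic is sketched below: pixels are ranked by their individual signal-to-noise ratio and the aperture is grown while the combined SNR of the summed pixels keeps improving. The star image, background and noise figures are synthetic.

```python
# Simplified stand-in for SNR-driven pixel selection: rank pixels by individual
# signal-to-noise and grow the aperture while the combined SNR improves.
# All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(3)
size = 11
y, x = np.mgrid[:size, :size]
signal = 5e4 * np.exp(-(((x - 5) ** 2 + (y - 5) ** 2) / 4.0))   # star image, e-
background = 200.0                                              # sky + dark, e-
read_noise = 25.0                                               # e- RMS

noise = np.sqrt(signal + background + read_noise ** 2)          # per-pixel noise
order = np.argsort((signal / noise).ravel())[::-1]              # best pixels first

best_snr, best_n = -np.inf, 0
cum_signal, cum_var = 0.0, 0.0
for n, idx in enumerate(order, start=1):
    cum_signal += signal.ravel()[idx]
    cum_var += noise.ravel()[idx] ** 2
    snr = cum_signal / np.sqrt(cum_var)
    if snr > best_snr:
        best_snr, best_n = snr, n

print(f"optimal aperture: {best_n} pixels, combined SNR {best_snr:.1f}")
```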
Kepler Mission's focal plane characterization models implementation
Christopher Allen, Todd Klaus, Jon Jenkins
The Kepler Mission photometer is an unusually complex array of CCDs. A large number of time-varying instrumental and systematic effects must be modeled and removed from the Kepler pixel data to produce light curves of sufficiently high quality for the mission to be successful in its planet-finding objective. After the launch of the spacecraft, many of these effects are difficult to remeasure frequently, and various interpolations over a small number of sample measurements must be used to determine the correct value of a given effect at different points in time. A library of software modules, called Focal Plane Characterization (FC) Models, is the element of the Kepler Science Data Pipeline (hereafter "pipeline") that handles this. FC, or products generated by FC, are used by nearly every element of the SOC processing chain. FC includes Java components: database persistence classes, operations classes, model classes, and data importers; and MATLAB code: model classes, interpolation methods, and wrapper functions. These classes, their interactions, and the database tables they represent, are discussed. This paper describes how these data and the FC software work together to provide the pipeline with the correct values to remove non-photometric effects caused by the photometer and its electronics from the Kepler light curves. The interpolation mathematics is reviewed, as well as the special case of the sky-to-pixel/pixel-to-sky coordinate transformation code, which incorporates a compound model that is unique in the SOC software.
Cyberinfrastructure II
The application of cloud computing to the creation of image mosaics and management of their provenance
G. Bruce Berriman, Ewa Deelman, Paul Groth, et al.
We have used the Montage image mosaic engine to investigate the cost and performance of processing images on the Amazon EC2 cloud, and to inform the requirements that higher-level products impose on provenance management technologies. We will present a detailed comparison of the performance of Montage on the cloud and on the Abe high performance cluster at the National Center for Supercomputing Applications (NCSA). Because Montage generates many intermediate products, we have used it to understand the science requirements that higher-level products impose on provenance management technologies. We describe experiments with provenance management technologies such as the "Provenance Aware Service Oriented Architecture" (PASOA).
EVALSO: a high-bandwidth communication infrastructure to efficiently connect the ESO Paranal and the Cerro Armazones Observatories to Europe
G. Filippi, S. Jaque, F. Liello, et al.
This paper describes the technical choices and the solutions adopted to create high-bandwidth (>1 Gbps) communication links to both the ESO Paranal and the Cerro Armazones Observatories located in the Atacama Desert, in the northern region of Chile. The complete system is planned to be in place by mid-2010. This infrastructure is part of the EVALSO[1] (Enabling Virtual Access to Latin-America Southern Observatories) project, carried out by a consortium of 9 members and co-funded by the EC (European Commission) within the frame of FP7-INFRASTRUCTURES-2007-1.2-02. More on the project is available at www.evalso.eu.
File-storage cyberinfrastructure for large-scale projects: years before first-light
Arun S. Jagatheesan, Jeff Kantor, Raymond Plante, et al.
Large ground-based and space-based telescopes are expected to make exciting discoveries in the upcoming decade. These large projects start their construction phase many years before first-light, continue to operate for many years after first-light, and usually span multiple countries. The file-storage cyberinfrastructure ("file-storage CI") of these large-scale projects has to evolve over several years from a conceptual prototype to a highly flexible data distribution network. During this long period the file-storage CI has to transition through multiple stages, starting with a conceptual prototype before first-light, to a large-scale distributed network in production, and finally to a persistent archive once the project is decommissioned. While the project makes these transitions, the file-storage CI has to incorporate several requirements, including but not limited to: Technology Evolution, due to changes in cyberinfrastructure (CI) software or hardware during the lifetime of the project; International Partnerships that are updated during the various phases of the project; and the Data Lifecycle that exists in the project. The architecture of the file-storage and management software has to be designed with significant consideration of these requirements. In this paper, we provide the generic requirements for file-storage and management cyberinfrastructure in a large project similar to LSST before first-light.
CANFAR: the Canadian Advanced Network for Astronomical Research
Séverin Gaudet, Norman Hill, Patrick Armstrong, et al.
The Canadian Advanced Network For Astronomical Research (CANFAR) is a two-and-a-half-year project that is delivering a network-enabled platform for the access, processing, storage, analysis, and distribution of very large astronomical datasets. The CANFAR infrastructure is being implemented as an International Virtual Observatory Alliance (IVOA) compliant web service infrastructure. A challenging feature of the project is to channel all survey data through Canadian research cyberinfrastructure. Sitting behind the portal service, the internal architecture makes use of high-speed networking, cloud computing, cloud storage, meta-scheduling, provisioning and virtualisation. This paper describes the high-level architecture and the current state of the project.
Current Project Overviews
The Australian SKA Pathfinder (ASKAP) software architecture
Juan C. Guzman, Ben Humphreys
The Australian SKA Pathfinder (ASKAP) is a 1% Square Kilometre Array (SKA) pathfinder radio telescope comprising 36 12-metre diameter reflector antennas, each with a Focal Plane Array consisting of approximately 100 dual-polarised elements operating at centimetre wavelengths and yielding a wide field-of-view (FOV) on the sky of about 30 square degrees. ASKAP is currently under construction and will be located in the remote, radio-quiet desert Midwest region of Western Australia. It is expected to be fully operational in 2013. Key challenges include near real-time processing of large amounts of data (~4 GB/s), control and monitoring of widely distributed devices (approx. 150,000 monitoring I/O points) and remote semi-automated operations. After evaluating several software technologies we have decided to use the EPICS framework for the Telescope Operating System and the Internet Communications Engine (ICE) middleware for the high-level service bus. This paper presents a summary of the overall ASKAP software architecture, as well as describing how the EPICS and ICE technologies fit into the control software design.
The DECam data acquisition and control system
K. Honscheid, J. Eiting, A. Elliott, et al.
In this paper we describe the data acquisition and control system of the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey (DES). DES is a high-precision, multi-bandpass, wide-area survey of 5000 square degrees of the southern sky. DECam, currently under construction at Fermilab, will be a 3 square degree mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO). The DECam data acquisition system (SISPI) is implemented as a distributed multi-processor system with a software architecture built on the Client-Server and Publish-Subscribe design patterns. The underlying message passing protocol is based on PYRO, a powerful distributed object technology system written entirely in Python. A distributed shared variable system was added to support exchange of telemetry data and other information between different components of the system. In this paper we discuss the SISPI infrastructure software, the image pipeline, the observer interface and quality monitoring system, and the instrument control system.
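The sketch below illustrates the Publish-Subscribe pattern together with a shared-variable store in a few lines of Python; it is an in-process illustration of the pattern only and does not use or represent the PYRO API or the SISPI code.

```python
# Minimal in-process sketch of the Publish-Subscribe pattern plus a shared
# variable store, of the general kind SISPI builds on top of PYRO. This
# illustrates the pattern only; it is not the DECam code or the Pyro API.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)    # topic -> callbacks
        self._shared = {}                        # shared telemetry variables

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

    def set_shared(self, name, value):
        self._shared[name] = value
        self.publish("shared/" + name, value)    # notify interested clients

    def get_shared(self, name):
        return self._shared.get(name)

bus = MessageBus()
bus.subscribe("shared/telescope.airmass",
              lambda value: print("GUI update: airmass =", value))
bus.set_shared("telescope.airmass", 1.23)
```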
ALMA software management and deployment
B. E. Glendenning, J. Ibsen, G. Kosugi, et al.
The ALMA Software (~80% completed) is in daily use at the ALMA Observatory and has been developed as an end-to-end system including: proposal preparation, dynamic scheduling, instrument control, data handling and formatting, data archiving and retrieval, automatic and manual data processing, and support for observatory operations. This presentation will expand on some software management aspects, procedures for releases, integrated system testing and deployment in Chile. The need for a realistic validation environment, now achieved with a two-antenna interferometer at the observatory, and the balance between incremental development and stability of the software (a challenge at the moment) will be explained.
Discovery Channel Telescope software development overview
Paul J. Lotz, Daniel Greenspan, Ryan Godwin, et al.
The Discovery Channel Telescope (DCT) is a 4.3-meter astronomical research telescope being built in northern Arizona as a partnership between Discovery Communications and Lowell Observatory. We present an overview of the current status of the project software effort, including the iterative development process (including planning, requirements management and traceability, design, code, test, issue tracking, and version control), our experience with management and design techniques and tools the team uses that support the effort, key features of the component-based architectural design, and implementation examples that leverage new LabVIEW-based technologies.
The Large Synoptic Survey Telescope data management overview
The LSST Data Management System (DMS) processes the incoming stream of images that the camera system generates to produce transient alerts and to archive the raw images, periodically creates new calibration data products that other processing functions will use, creates and archives an annual Data Release (a static, self-consistent collection of data products generated from all survey data taken from the date of survey initiation to the cutoff date for the Data Release), and makes all LSST data available through an interface that uses community-based standards and facilitates user data analysis and production of user-defined data products with supercomputing-scale resources. This paper discusses DMS distributed processing and data, and DMS architecture and design, with an emphasis on the particular technical challenges that must be met. The DMS publishes transient alerts in community-standard formats (e.g. VOEvent) within 60 seconds of detection. The DMS processes and archives over 50 petabytes of exposures over the 10-year survey. Data Releases include catalogs of tens of trillions of detected sources and tens of billions of astronomical objects, 2000-deep co-added exposures, and calibration products accurate to standards not achieved in wide-field survey instruments to date. These Data Releases grow in size to tens of petabytes over the survey period. The expected data access patterns drive the design of the database and data access services. Finally, the DMS permits interactive analysis and provides nightly summary statistics describing DMS output quality and performance.
The Large Synoptic Survey Telescope data challenges
The Data Management system for the LSST will have to perform near-real-time calibration and analysis of acquired images, particularly for transient detection and alert generation; annual processing of the entire dataset for precision calibration, object detection and characterization, and catalog generation; and support of user data access and analysis. Images will be acquired at roughly a 17-second cadence, with alerts generated within one minute. The ten-year survey will result in tens of petabytes of image and catalog data and will require ~250 teraflops of processing to reduce. The LSST project is carrying out a series of Data Challenges (DC) to refine the design, evaluate the scientific and computational performance of candidate algorithms, and address the challenging scaling issues that the LSST dataset will present. This paper discusses the progress of the DCs to date and plans for future DCs. Algorithm development must address dual requirements for the efficient use of computational resources and the accurate, reliable processing of the deep and broad survey data. The DCs incorporate both existing astronomical images and image data resulting from detailed photon-level simulations. The data is used to ensure that the system can scale to the LSST field of view and 3.2 gigapixel camera scale and meet the scientific data quality requirements. Future DCs, carried out in conjunction with the LSST Science Collaborations, are planned to deliver data products verified by computer-aided analysis and actual applications as suitable for high-quality science.
Poster Session
Development of an analysis framework for HSC and Belle II
Sogo Mineo, Hiroaki Aihara, Ryosuke Itoh, et al.
We report an analysis framework developed for the Hyper Suprime-Cam. The framework features distributed parallel execution and a Python interface. Through the Python interface it interoperates with the LSST application framework. We have developed a test pipeline using both frameworks and tested its parallelization performance.
Experience with a new approach for instrument software at Gemini
Gemini Observatory is using a new approach with instrument software that takes advantage of the strengths of our instrument builders and at the same time better supports our own operational needs. A lightweight software library in conjunction with modern agile software development methodologies is being used to ameliorate the problems encountered with the development of the first and second-generation Gemini instruments. Over the last two years, Gemini and the team constructing the software for the Gemini Planet Imager (GPI) have been using an agile development process to implement the Gemini Instrument Application Interface (GIAPI) and the high-level control software for the GPI instrument. The GPI is being tested and exercised with the GIAPI, and this has allowed us to perform early end-to-end testing of the instrument software. Early in 2009, for the first time in our development history, we were able to move instrument mechanisms with Gemini software during early instrument construction. As a result of this approach, we discovered and fixed software interface issues between Gemini and GPI. Resolving these problems at this stage is simpler and less expensive than when the full instrument is completed. GPI is currently approaching its integration and testing phase, which will occur in 2010. We expect that utilizing this new approach will yield a more robust software implementation resulting in smoother instrument integration, testing, and commissioning phases. In this paper we describe the key points of our approach and the results of applying the new instrument API together with agile development methodologies. The paper concludes with lessons learned and suggestions for adapting agile approaches in other astronomy development projects.
New architectures support for ALMA common software: lessons learned
Camilo E. Menay, Gabriel A. Zamora, Rodrigo J. Tobar, et al.
ALMA Common Software (ACS) is a distributed control framework based on CORBA that provides communication between distributed pieces of software. Because of its size and complexity it provides its own compilation system, a mix of several technologies. The current ACS compilation process depends on specific tools, compilers, code generation, and a strict dependency model induced by the large number of software components. This document presents a summary of several porting and compatibility attempts at using ACS on platforms other than the officially supported one. Ports of ACS to the Microsoft Windows platform and to the ARM processor architecture were attempted, with varying degrees of success. In addition, support for LINUX-PREEMPT (a set of real-time patches for the Linux kernel) using a new design for real-time services was implemented. Some of these efforts were integrated with the ACS building and compilation system, while others were incorporated into its design. Lessons learned in this process are presented, and a general approach is extracted from them.
Photometer performance assessment in Kepler science data processing
Jie Li, Christopher Allen, Stephen T. Bryson, et al.
This paper describes the algorithms of the Photometer Performance Assessment (PPA) software component in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The PPA performs two tasks: one is to analyze the health and performance of the Kepler photometer based on the long cadence science data downlinked via Ka band approximately every 30 days; the other is to determine the attitude of the Kepler spacecraft with high precision at each long cadence. The PPA component has demonstrated the capability to work effectively with the Kepler flight data.
Presearch data conditioning in the Kepler Science Operations Center pipeline
Joseph D. Twicken, Hema Chandrasekaran, Jon M. Jenkins, et al.
We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this component are to correct systematic and other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets. Long cadence corrected flux light curves are subjected to a transiting planet search in a subsequent pipeline module. We discuss science algorithms for long and short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities; systematic error correction; and removal of excess flux due to aperture crowding. We discuss the propagation of uncertainties from raw to corrected flux. Finally, we present examples from Kepler flight data to illustrate PDC performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
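As a toy illustration of discontinuity identification and correction, the sketch below flags a step in a simulated flux series by comparing medians of short windows and removes the offset; the window length, threshold and noise estimate are arbitrary choices, and this is not the PDC algorithm.

```python
# Illustrative sketch only (not the PDC algorithm): detect an unexplained
# discontinuity in a flux time series as a jump between the medians of short
# windows before and after each cadence, then remove the step.
import numpy as np

def find_and_correct_step(flux, window=20, n_sigma=5.0):
    flux = flux.astype(float).copy()
    sigma = 1.4826 * np.median(np.abs(np.diff(flux)))   # robust noise estimate
    jumps = []
    for i in range(window, flux.size - window):
        left = np.median(flux[i - window:i])
        right = np.median(flux[i:i + window])
        if abs(right - left) > n_sigma * sigma:
            jumps.append((i, right - left))
    # Correct the largest detected jump (if any) by subtracting its offset
    if jumps:
        i, step = max(jumps, key=lambda j: abs(j[1]))
        flux[i:] -= step
    return flux, jumps

rng = np.random.default_rng(1)
flux = 1000.0 + rng.normal(0, 1.0, 500)
flux[300:] += 25.0                       # inject an artificial discontinuity
corrected, jumps = find_and_correct_step(flux)
print("detected jumps near cadence:", [j[0] for j in jumps][:3])
```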
Design of modular C++ observatory control system: from observatories to laboratories and back
Petr Kubánek, Michael Prouza, Ronan Cunniffe, et al.
For almost a decade we have been developing an open-source control system for autonomous observatories called the Remote Telescope System, 2nd version (RTS2). The system is currently used to operate about a dozen observatories. It was designed from the beginning as the ultimate tool for autonomously performing any possible observing plan on any hardware. Its modular design allows exactly this and enables even more: currently it is used to control not only observatories but also CCD testing laboratories. We present the internal design of this open-source observatory and laboratory control package, and discuss its overall structure. We emphasise new developments and our experiences building a community of users and developers of the package. The design of the system's modularity is explained in detail, and various approaches to software reuse are discussed, with a demonstration of how the best solution emerged. We describe problems that were encountered as mirror sizes and associated operational complexity grew. We also describe how the system is being used at a CCD testing laboratory, and detail the quick transition from previously unsupported hardware to fully automated operation. We discuss how the system's evolution has affected code design, and present the unexpected benefits it has brought. Our experience with the use of open-source code and libraries is discussed.
Pixel-level calibration in the Kepler Science Operations Center pipeline
Elisa V. Quintana, Jon M. Jenkins, Bruce D. Clarke, et al.
We present an overview of the pixel-level calibration of flight data from the Kepler Mission performed within the Kepler Science Operations Center Science Processing Pipeline. This article describes the calibration (CAL) module, which operates on original spacecraft data to remove instrument effects and other artifacts that pollute the data. Traditional CCD data reduction is performed (removal of instrument/detector effects such as bias and dark current), in addition to pixel-level calibration (correcting for cosmic rays and variations in pixel sensitivity), Kepler-specific corrections (removing smear signals which result from the lack of a shutter on the photometer and correcting for distortions induced by the readout electronics), and additional operations that are needed due to the complexity and large volume of flight data. CAL operates on long (~30 min) and short (~1 min) sampled data, as well as full-frame images, and produces calibrated pixel flux time series, uncertainties, and other metrics that are used in subsequent Pipeline modules. The raw and calibrated data are also archived in the Multi-mission Archive at Space Telescope at the Space Telescope Science Institute for use by the astronomical community.
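The sketch below walks through a highly simplified calibration chain (bias, dark, smear and flat corrections followed by gain conversion) to illustrate the order of operations described above; all shapes and constants are invented, and this is not the CAL module.

```python
# Highly simplified sketch of a pixel-calibration chain (bias, dark, smear,
# flat, gain). It only illustrates the order of operations described above and
# is not the Kepler CAL module; all array shapes and constants are invented.
import numpy as np

def calibrate(raw, bias, dark, smear_rows, flat, gain):
    """raw: 2-D counts in DN; returns calibrated flux in electrons."""
    img = raw.astype(float) - bias                 # remove bias (scalar level here)
    img -= dark                                    # remove dark current
    # Shutterless smear: every column accumulates the same smear signal,
    # estimated from dedicated smear rows and subtracted column by column.
    smear_per_column = smear_rows.mean(axis=0)
    img -= smear_per_column[np.newaxis, :]
    img /= flat                                    # pixel-to-pixel sensitivity
    return img * gain                              # convert DN to electrons

rng = np.random.default_rng(2)
raw = rng.poisson(500, size=(1024, 1100)).astype(float)
cal = calibrate(raw,
                bias=100.0,
                dark=2.0,
                smear_rows=np.full((12, 1100), 5.0),
                flat=np.ones((1024, 1100)),
                gain=110.0)
print("calibrated mean [e-]:", round(cal.mean(), 1))
```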
New direction in the development of the observation software framework (BOSS)
Eszter Pozna, Alain Smette, Ricardo Schmutzer, et al.
The Observation Software (OS) of an astronomical instrument, which sits directly beneath the instructions of the astronomers and carries out exposures and calibrations, is the supervisor of the multi-process, multi-layer instrument software package. The main responsibility of the OS is the synchronization of the subsystems (detectors and groups of mechanical devices) and the telescope during exposures. At ESO, a software framework, the Base Observation Software Stub (BOSS), takes care of the functionality common to all OS of the various instruments at the various sites (VLT, VLTI, La Silla and VISTA). This paper discusses the latest applications and how their new generic requirements contribute to the BOSS framework. The paper discusses the resolution of problems of event queues, interdependent functionalities, parallel commands and asynchronous messages in the OS using OO technologies.
JCMT Telescope Control System upgrades for SCUBA-2
Russell Kackley, Douglas Scott, Edward Chapin, et al.
The James Clerk Maxwell Telescope (JCMT) Telescope Control System (TCS) received significant upgrades to provide new observing capabilities to support the requirements of the SCUBA-2 instrument. The core of the TCS is the Portable Telescope Control System (PTCS), which was developed through collaboration between the Joint Astronomy Centre and the Anglo-Australian Observatory. The PTCS provides a well-designed virtual telescope function library that simplifies these sorts of upgrades. The TCS was previously upgraded to provide the required scanning modes for the JCMT heterodyne instruments. The heterodyne instruments required only relatively simple raster or boustrophedon patterns, which are basically composed of multiple straight-line scans to cover a rectangular area. The most recent upgrades built upon those heterodyne scanning modes to satisfy the SCUBA-2 requirements. With these upgrades, the TCS can scan the telescope in any pattern that can be described as a continuous function of time. This new capability has been utilized during the current SCUBA-2 on-sky commissioning phase to scan the telescope in a variety of patterns (Lissajous, pong, ellipse, and daisy) on the sky. This paper will give a brief description of the PTCS, provide information on the selection of the SCUBA-2 scanning modes, describe the changes to the TCS that were necessary to implement the new scanning modes, and show the performance of the telescope during SCUBA-2 commissioning.
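A scan pattern expressed as a continuous function of time can be as simple as the Lissajous sketch below; the amplitudes and frequencies are arbitrary and do not correspond to the SCUBA-2 observing modes.

```python
# Illustrative sketch: a scan pattern expressed as a continuous function of
# time, here a Lissajous figure in tangent-plane offsets. The amplitudes and
# frequencies are arbitrary and are not the SCUBA-2 values.
import numpy as np

def lissajous_offsets(t, ax=300.0, ay=300.0, fx=0.05, fy=0.07, phase=np.pi / 2):
    """t in seconds; returns (dx, dy) offsets in arcsec."""
    dx = ax * np.sin(2 * np.pi * fx * t + phase)
    dy = ay * np.sin(2 * np.pi * fy * t)
    return dx, dy

t = np.arange(0.0, 600.0, 0.1)          # 10 minutes sampled at 10 Hz
dx, dy = lissajous_offsets(t)
print("peak offsets [arcsec]:", dx.max(), dy.max())
```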
A framework for propagation of uncertainties in the Kepler data analysis pipeline
Bruce D. Clarke, Christopher Allen, Stephen T. Bryson, et al.
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing more than 100,000 stellar targets nearly continuously over a three-and-a-half year period. The 96.4-megapixel focal plane consists of 42 Charge-Coupled Devices (CCD), each containing two 1024 x 1100 pixel arrays. Since cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, downstream data processing requires access to the calibrated pixel covariance matrix to properly estimate uncertainties. However, the prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard Propagation of Uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly at any step in the calibration process. Singular Value Decomposition (SVD) is used to compress and filter the raw uncertainty data as well as any data-dependent kernels. This combination of POU framework and SVD compression allows the downstream consumer access to the full covariance matrix of any subset of the calibrated pixels which is traceable to the pixel-level measurement uncertainties, all without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
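The sketch below shows the general idea of propagating a covariance matrix through a linear calibration kernel stored as a truncated SVD; the matrix sizes, the kernel and the chosen rank are illustrative only, and this is not the SOC POU framework.

```python
# Minimal sketch of linear propagation of uncertainties with SVD compression
# of the transformation kernel. It follows the general idea described above
# but is not the SOC POU framework; sizes, kernel and rank are illustrative,
# and the truncation is deliberately lossy (compression and filtering).
import numpy as np

rng = np.random.default_rng(3)
n_pix = 200
raw_var = rng.uniform(0.5, 2.0, n_pix)            # variance of raw pixels
cov = np.diag(raw_var)                             # raw pixels uncorrelated

# A calibration step that mixes pixels (e.g. a smear-like correction)
A = np.eye(n_pix) + 0.01 * rng.normal(size=(n_pix, n_pix))

# Store a rank-k SVD of the kernel instead of the full matrix
k = 20
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_compressed = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Covariance of the calibrated pixels, recalled on the fly for a subset
cov_cal = A_compressed @ cov @ A_compressed.T
subset = [5, 17, 42]
print(cov_cal[np.ix_(subset, subset)].round(3))
```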
SPHERE data reduction software: first insights into data reduction software development for next-generation instruments
The Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE) instrument for the VLT is designed for discovering new giant planets orbiting nearby stars by direct imaging. The accuracy demands on this complex instrument and its data reduction are high. Here, we outline the design of the data reduction software for SPHERE and argue that SPHERE can be seen as one of the first of a new generation of instruments. We discuss what can be learned from SPHERE about new challenges in reduction software design, management and development. Along with the key issues, we formulate some general principles to help overcome these challenges.
Photometric analysis in the Kepler Science Operations Center pipeline
Joseph D. Twicken, Bruce D. Clarke, Stephen T. Bryson, et al.
We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
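The sketch below computes a simple aperture flux and a flux-weighted centroid on a toy calibrated stamp, illustrating the quantities PA produces; it is not the pipeline implementation.

```python
# Minimal sketch of simple aperture photometry and flux-weighted centroiding
# on a calibrated pixel stamp; illustrative only, not the PA module.
import numpy as np

def aperture_flux_and_centroid(stamp, mask):
    """stamp: calibrated pixel values; mask: boolean optimal aperture."""
    flux = stamp[mask].sum()
    rows, cols = np.indices(stamp.shape)
    row_centroid = (rows[mask] * stamp[mask]).sum() / flux
    col_centroid = (cols[mask] * stamp[mask]).sum() / flux
    return flux, row_centroid, col_centroid

y, x = np.mgrid[-5:6, -5:6]
stamp = 1.0e4 * np.exp(-((x - 0.3)**2 + (y + 0.2)**2) / (2 * 1.4**2))
mask = stamp > 0.05 * stamp.max()
flux, r0, c0 = aperture_flux_and_centroid(stamp, mask)
print("flux:", round(flux, 1), "centroid (row, col):", round(r0, 2), round(c0, 2))
```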
High performance graphical data trending in a distributed system
Cristián Maureira, Arturo Hoffstadt, Joao López, et al.
Trending near real-time data is a complex task, especially in distributed environments. This problem was typically tackled in financial and transaction systems, but it is now highly relevant in other contexts, such as hardware monitoring in large-scale projects. Data handling requires subscription to specific data feeds, which must be implemented without replication, and the rate of transmission has to be assured. On the side of the graphical client, rendering needs to be fast enough that it may be perceived as real-time processing and display. ALMA Common Software (ACS) provides a software infrastructure for distributed projects which may require trending large volumes of data. For these requirements ACS offers a Sampling System, which allows sampling selected data feeds at different frequencies. Along with this, it provides a graphical tool to plot the collected information, which needs to perform as well as possible. Currently there are many graphical libraries available for data trending. This poses a problem when trying to choose one: it is necessary to know which has the best performance, and which combination of programming language and library is the best decision. This document analyzes the performance of different graphical libraries and languages in order to identify the optimal environment when writing or re-factoring an application using trending technologies in distributed systems. To properly address the complexity of the problem, a specific set of alternatives was pre-selected, including libraries in Java and Python, languages which are part of ACS. A stress benchmark will be developed in a simulated distributed environment using ACS in order to test the trending libraries.
A simple way to build an ANSI-C like compiler from scratch and embed it on the instrument's software
Alicia Rodríguez Trinidad, Rafael Morales Muñoz, Miguel Abril Martí, et al.
This paper examines the reasons for building a compiled language embedded in instrument software. Starting from scratch and step by step, all the compiler stages of an ANSI-C-like language are analyzed, simplified and implemented. The result is a compiler and a runner with a small footprint that can easily be transferred and embedded into instrument software. Both have a size of about 75 KBytes, whereas similar solutions require hundreds. Finally, the possibilities that arise from embedding the runner inside instrument software are explored.
A methodological proposal for the development of an HPC-based antenna array scheduler
Roberto Bonvallet, Arturo Hoffstadt, Diego Herrera, et al.
As new astronomy projects choose interferometry to improve angular resolution and to minimize costs, preparing and optimizing schedules for an antenna array becomes an increasingly critical task. This problem shares similarities with the job-shop problem, which is known to be NP-hard, making a complete approach infeasible. In the case of ALMA, 18000 projects per season are expected, and the best schedule must be found in the order of minutes. The problem imposes severe difficulties: the large domain of observation projects to be taken into account; a complex objective function composed of several abstract, environmental, and hardware constraints; the number of restrictions imposed; and the dynamic nature of the problem, as weather is an ever-changing variable. A solution can benefit from the use of High-Performance Computing, not only for the final implementation to be deployed but also for the development process. Our research group proposes the use of both metaheuristic search and statistical learning algorithms in order to create schedules in a reasonable time. How these techniques will be applied is yet to be determined as part of the ongoing research. Several algorithms need to be implemented, tested and evaluated by the team. This work presents the methodology proposed to lead the development of the scheduler. The basic functionality is encapsulated into software components implemented on parallel architectures. These components expose a domain-level interface to the researchers, enabling them to develop early prototypes for evaluating and comparing their proposed techniques.
Choosing a control system for CCAT
D. L. Terrett, Patrick Wallace, Alan Bridger, et al.
The Cornell Caltech Atacama Telescope (CCAT) is a 25m aperture sub-millimeter wavelength telescope to be built in northern Chile at an altitude of 5600m. Like any modern telescope, CCAT will require a powerful and comprehensive control system; writing one from scratch is not affordable, so the CCAT TCS must be based, at least in part, on existing software. This paper describes how the search for a suitable system (or systems) was carried out, looks at the criteria used to judge the feasibility of various approaches to developing the new system, and suggests the further studies needed to validate the choices. Although the purpose of the study was to find a control system for a specific telescope with its own particular technical requirements, many of the factors considered, such as maintainability, the ability to adapt to new requirements in the future and so on, are of concern to all telescopes. Consequently, the processes used to select the system for CCAT are relevant to other projects faced with the same decision, even if the conclusions turn out to be different.
Progress in cancellable multi-threaded control software
K. Shortridge, T. J. Farrell
The AAO's DRAMA data acquisition environment provides a very successful flexible model for instrument control tasks based on the concept of named 'actions'. A task can execute a number of these actions simultaneously, and - something we have found to be of paramount importance in control systems - they can be cancelled cleanly if necessary. However, this flexibility has been achieved by use of what is essentially a collaborative multi-threading system, each action running in short 'stages' in a single-threaded task. The original DRAMA design pre-dated the general availability of multi-threading systems, but until now we have been reluctant to move to a multi-threading model because of the difficulties associated with attempting to cleanly cancel a thread stuck in a blocking operation. We now believe we have an acceptable solution to this problem, and are modifying the internals of DRAMA to produce an approach - compatible with the existing system - that will allow individual actions to execute in separate threads. It will be able to carry out dialogues with hardware in a much simpler manner than has been allowed so far, and this should simplify the coding of DRAMA tasks enormously.
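One common way to make a threaded action cleanly cancellable is to replace hard blocking calls with waits on a cancellation event, as in the sketch below; this illustrates the general technique only and is unrelated to the DRAMA internals.

```python
# Sketch of one way to run a control 'action' in its own thread while keeping
# it cleanly cancellable: blocking waits are replaced by waits on an Event so
# the action can be abandoned at any stage. General technique only; this is
# not DRAMA code.
import threading
import time

class Action(threading.Thread):
    def __init__(self, name):
        super().__init__(name=name)
        self._cancel = threading.Event()

    def cancel(self):
        self._cancel.set()

    def run(self):
        for stage in range(10):
            # Instead of a hard blocking call, wait on the cancel event with a
            # timeout so the hardware dialogue can be abandoned promptly.
            if self._cancel.wait(timeout=0.5):
                print(self.name, "cancelled cleanly at stage", stage)
                return
            print(self.name, "completed stage", stage)
        print(self.name, "finished")

action = Action("MOVE_FILTER")
action.start()
time.sleep(1.2)
action.cancel()
action.join()
```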
A solution for remote-upgrading field controllers based on FPGA Cyclone 2C35
Dan Zhu, Yuhua Zhu, Jianing Wang
Modern telescopes usually have more controlled nodes than classical ones. These nodes are distributed at various locations on the instrument and are not easy to access. During adjustment, it is often necessary to modify the control software, and sometimes to change the hardware structure and upgrade the related programs. To solve the problem of renewing the field controllers, we introduce an FPGA-based telescope controller system and a scheme for remote-upgrading it via Ethernet. This paper mainly describes the structure of the field controller, the requirements for remote upgrading and the system structure. Also discussed are the protocol applications and extensions, the processing methods, and the ideas behind the software design. The scheme has been in trial operation on a large telescope with a subsystem of 16 field controllers, and excellent results were obtained. It can effectively solve the remote-upgrading problem for the multiple field controllers of large telescopes. The scheme can also be used in other multi-node industrial control systems, which gives it high application value.
A virtual reality environment for telescope operation
Luis A. Martínez, José L. Villarreal, Fernando Ángeles, et al.
Astronomical observatories and telescopes are becoming increasingly large and complex systems, requiring any potential user to acquire a great deal of information before accessing them. At present, the most common way to cope with that information is to implement larger graphical user interfaces and computer monitors to increase the display area. Tonantzintla Observatory has a 1-m telescope with a remote observing system. As a step forward in the improvement of the telescope software, we have designed a Virtual Reality (VR) environment that works as an extension of the remote system and allows us to operate the telescope. In this work we explore this alternative technology, which is suggested here as a software platform for the operation of the 1-m telescope.
Middleware design and implementation for LSST
The LSST middleware design is based on a set of software abstractions that provide standard interfaces for common communications services. The observatory requires communication between many subsystems, and comprehensive archiving of subsystem status data. Control commands as well as health and status data from across the observatory must be stored to support both the science data analysis and trending analysis for the early detection of hardware anomalies. The Service Abstraction Layer (SAL) is implemented using open source packages that implement open standards: DDS (Data Distribution Service) for data communication and SQL for storage. Designs for the automatic generation of code, documentation, and subsystem simulation are being developed. Abstractions for the Telemetry datastreams (each with customized data structures), Command/Response, and the Logging and Alert messages are described.
The research on direct drives control system in the large aperture telescope
Xiaoyan Li, Zhenchao Zhang, Daxing Wang
A 30m giant telescope project, the Chinese Future Giant Telescope (CFGT), has been proposed by Chinese astronomers. At present, a series of key techniques are being developed. This paper explores a method to control a direct-drive servo motor in a giant-telescope application, based on a segmented Surface-mounted Permanent Magnet Synchronous Motor (SMPMSM). The losses of the SMPMSM and methods of reducing them are discussed in this paper. A phase-controlled rectification circuit is chosen to regulate the rectified voltage according to the telescope status; this design can decrease the losses of the motor to some extent. In the control system, a space-vector PWM (SVPWM) algorithm acts as the control algorithm and a three-phase voltage-source inverter circuit acts as the drive circuit. This project is subsidized by the Chinese National Natural Science Funds (10833004).
The PANIC software system
José M. Ibáñez Mengual, Matilde Fernández, Julio F. Rodríguez Gómez, et al.
PANIC is the Panoramic Near Infrared Camera for the 2.2m and 3.5m telescopes at the Calar Alto observatory. The aim of the project is to build a wide-field general purpose NIR camera. In this paper we describe the software system of the instrument, which comprises four main packages: GEIRS, for instrument control and data acquisition; the Observation Tool (OT), the software used for the detailed definition and pre-planning of observations, developed in Java; the Quick Look tool (PQL), for easy inspection of the data in real time; and a scientific pipeline (PAPI), the latter two based on the Python programming language.
Practical considerations for pointing a binocular telescope
Michele D. De La Peña, David L. Terrett, David Thompson, et al.
The Large Binocular Telescope (LBT) consists of two 8.4-meter primary mirrors on a common mount. When the telescope is complete, to complement the two primaries there will be two 0.9-meter adaptive secondaries and two tertiary mirror flats that all work to support a variety of Gregorian focal stations, as well as prime focus. A fundamental goal of the telescope is to perform interferometric observations, and therefore, there is a critical need for the ability to co-point the individual telescopes to high precision. Further, a unique aspect of the LBT is the comparatively large range over which the optics can be adjusted which provides flexibility for the acquisition of targets. In the most general case, an observer could be performing an observation using different targets, within constraints, with different instruments on each of the two telescope sides, with different observing duty cycles. As a consequence of the binocular nature of the telescope and the number of possible observing combinations, there are unique requirements imposed on the Telescope Control System (TCS), and in particular, on the Pointing Control Subsystem (PCS). It is the responsibility of the PCS to arbitrate the pointing requests made on the two sides of the telescope by the observers, incorporate guide updates, and generate tracking trajectories for the mount and the rotators, in conjunction with providing tip/tilt demands on the subsystem controlling the optical elements, and ensure each target remains on the specified location (i.e., pointing origin) in the focal plane during an active observation. This paper describes the current design and implementation of the LBT PCS.
A high efficient and fast kNN algorithm based on CUDA
The k-Nearest Neighbor (kNN) algorithm is an effective classification approach among the statistical methods of pattern recognition, but it can be rather time-consuming when applied to massive data, especially for large survey projects in astronomy. NVIDIA CUDA is a general-purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. In this paper, we implement a CUDA-based kNN algorithm and compare its performance with a CPU-only kNN algorithm, using single-precision and double-precision data types, on the classification of celestial objects. The results demonstrate that CUDA can speed up the kNN algorithm effectively and could be useful in astronomical applications.
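The sketch below shows, in NumPy, the brute-force distance computation and majority vote that a CUDA kernel would parallelize; the data are random stand-ins rather than SDSS photometry.

```python
# A small NumPy sketch of the brute-force kNN classification that a CUDA
# kernel parallelizes: every query-to-training distance is computed, the k
# smallest are found, and the majority class label is returned. Data here are
# random stand-ins, not real survey photometry.
import numpy as np

def knn_classify(train_x, train_y, query_x, k=5):
    # Pairwise squared Euclidean distances, queries x training points
    d2 = ((query_x[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]
    votes = train_y[nearest]
    # Majority vote per query (labels assumed to be small integers)
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(4)
train_x = rng.normal(size=(1000, 5)).astype(np.float32)   # 5 "colors"
train_y = (train_x[:, 0] > 0).astype(int)                  # toy labels
query_x = rng.normal(size=(10, 5)).astype(np.float32)
print(knn_classify(train_x, train_y, query_x))
```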
The Blanco Telescope TCS upgrade
German Schumacher, Eduardo Mondaca, Michael Warner, et al.
The Blanco 4-meter telescope has been in operation for over 30 years and is now subject to an extensive upgrade of its control system, in both its hardware and software aspects. The motivation for the upgrade, besides the normal replacement of obsolete components, is the preparation of the telescope for the installation of the DECam instrument, which makes operational demands that cannot be met by the current system. The architecture of the new system is in line with the designs proposed for modern telescopes like the Large Synoptic Survey Telescope (LSST), and its implementation utilizes technologies similar to those proposed for that project. In this paper we present a detailed description of the upgraded system, including tape encoders, control algorithms, the use of trajectories to optimize motions, communications middleware, and its performance as a whole.
A prototype of Hyper Suprime-Cam data analysis system
Hisanori Furusawa, Naoki Yasuda, Yuki Okura, et al.
We have developed a prototype of the data analysis system for the wide-field camera Hyper Suprime-Cam (HSC) at the Subaru Telescope. The current prototype is optimized for data from the current Subaru prime-focus camera Suprime-Cam, a precursor instrument of HSC, in order to study on-site data evaluation for wide-field imaging. The system conducts real-time data evaluation for every data frame, deriving statistical information including seeing, sky-background level, astrometric solution, and photometric zeropoint when available. Variations in time of the derived values are shown on a web-based status monitor. On-demand analysis, such as mosaicking, is performed using the data evaluation results. This system consists of analysis pipelines responsible for data processing, the organizing software that controls analysis tasks and data flow, and the database. The XML-based database maintains all the analysis results and analysis histories. Improvement of the analysis speed by parallel data processing is achieved with the aid of the organizing software. The system has been in operation for general observations since March 2010, and will be extended to process the 104 CCDs of HSC. The system may be used for observing support and could also be applied to other imaging instruments in the future.
Instrument-specific features within the observation preparation software for LINC-NIRVANA
Alexey Pavlov, Jan Trowitzsch
The LINC-NIRVANA Observation Preparation Software (LOPS) supports an observer during the complex process of preparing observations for LINC-NIRVANA (LN), a German-Italian beam combiner for the Large Binocular Telescope. The instrument exploits its full capability by means of Multi-Conjugated Adaptive Optics and an IR Fringe and Flexure Tracker. These sub-systems of the LN instrument and the fixed geometry of the telescope put specific constraints on the observation and scheduling process. LOPS follows a generic approach that makes it easy to include new features at the so-called procedure-plug-in (low) level. For specific aspects of the LN instrument, however, implementation at the generic procedure level is not adequate, because a user/observer needs to deal with many instrument-specific parameters when preparing an observation program (OP). For this reason, LOPS provides a high-level application plug-in system which allows features of an OP to be maintained as a separate application in order to benefit from a more advanced GUI. In this paper we present the Guide Star Buffer concept as an exemplary feature-specific application in the framework of LOPS. It is dedicated to searching, selecting and organizing guide stars into the corresponding groups needed for LN observations.
Research of remote control for Chinese Antarctica Telescope based on iridium satellite communication
Astronomers have long sought sites on the Earth's surface with the best seeing for celestial observation, and Antarctica is one of the few such sites left, owing to global air pollution. However, the Antarctic region is largely inaccessible to humans due to the lack of basic living conditions, travel facilities and effective means of communication. Worst of all, the internet, as a general means of communication, scarcely exists there. As a solution to this dilemma, remote control and data transmission for telescopes through Iridium satellite communication has been put forward for the Chinese Antarctic Schmidt Telescopes 3 (AST3) network, which is currently under all-round research and development. This paper presents an Iridium-satellite-based remote control application adapted to telescope control. This pioneering work in China involves hardware and software configuration utilizing techniques for reliable and secure communication, which is also outlined in the paper.
Comparison of several algorithms for celestial object classification
We present a comparative study of the implementation of supervised classification algorithms for the classification of celestial objects. Three different algorithms, Linear Discriminant Analysis (LDA), K-Dimensional Tree (KD-tree) and Support Vector Machines (SVMs), are used for the classification of point sources from the Sloan Digital Sky Survey (SDSS) Data Release Seven. All of them have been applied and tested on SDSS photometric data that were filtered by stringent conditions to obtain the best performance. Each of the six performance metrics for SVMs achieves very high performance (99.00%). The performance of the KD-tree is also very good, with all six metrics over 97.00%. Although five metrics exceed 90.00%, the performance of LDA is relatively poor because the accuracy of positive prediction only reaches 85.98%. Moreover, we discuss which combination of input parameters is the most effective for each of these methods.
Design and realization of the IP control core in field controllers for LAMOST spectroscopes
Jianing Wang, Zhongyi Han, Yizhong Zeng, et al.
The Chinese-made telescope LAMOST consists of 16 spectroscopes that record stellar spectra via 4000 optical fibers. In each spectroscope, many movable parts work in phase. These parts are controlled and managed in real time by FPGA-based field controllers. This paper mainly introduces how the DSP Builder module library in MATLAB/Simulink is used to construct the IP control core on an FPGA chip. This method can also be used to design the control core of the PID arithmetic, to carry out simulation and generate VHDL files, and to integrate the core into the SOPC development environment for reuse. In this way, the design period of the control system may be shortened and the design process simplified. Finally, owing to the reconfigurability and programmability of the IP control core, a system-on-a-chip for the spectroscope field controllers is realized which meets astronomical control requirements, providing an effective scheme for embedded systems in astronomical instrument applications.
Approaches for photometric redshift estimation of quasars from SDSS and UKIDSS
We investigate two methods, kernel regression and the nearest neighbor algorithm, for photometric redshift estimation with quasar samples from the SDSS (Sloan Digital Sky Survey) and UKIDSS (UKIRT Infrared Deep Sky Survey) databases. Both kernel regression and the nearest neighbor algorithm belong to the family of instance-based learning algorithms, which store all the training examples and "delay learning" until prediction time. The major difference between the two algorithms is that kernel regression takes a weighted average of the spectroscopic redshifts of the neighbors of a query point, while the nearest neighbor algorithm adopts the spectroscopic redshift of the single nearest neighbor of the query point. Each algorithm has its own advantages and disadvantages. Our experimental results show that kernel regression obtains more accurate predictions, while the nearest neighbor algorithm shows its superiority especially for more thinly spread data, e.g. high-redshift quasars.
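The sketch below contrasts the two estimators on synthetic data: a Gaussian-kernel weighted average of neighbour redshifts versus the redshift of the single nearest neighbour. The colors, redshifts and bandwidth are toy assumptions, not the SDSS/UKIDSS samples.

```python
# Sketch contrasting the two estimators described above on toy data: kernel
# regression returns a distance-weighted average of neighbour redshifts,
# while the nearest-neighbour estimate copies the closest training redshift.
import numpy as np

def kernel_regression_z(train_colors, train_z, query, bandwidth=0.2):
    d2 = ((train_colors - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))          # Gaussian kernel weights
    return (w * train_z).sum() / w.sum()

def nearest_neighbor_z(train_colors, train_z, query):
    d2 = ((train_colors - query) ** 2).sum(axis=1)
    return train_z[np.argmin(d2)]

rng = np.random.default_rng(5)
train_colors = rng.normal(size=(5000, 4))
train_z = 2.0 + train_colors[:, 0] + 0.1 * rng.normal(size=5000)  # toy relation
query = rng.normal(size=4)
print("kernel regression z:", round(kernel_regression_z(train_colors, train_z, query), 3))
print("nearest neighbour  z:", round(nearest_neighbor_z(train_colors, train_z, query), 3))
```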
Synchronization of motor controller and PC system clocks
The power of the Large Binocular Telescope (LBT), with its two 8.4m primary mirrors sharing a common mount, will unfold its full potential with the LINC-NIRVANA (LN) instrument. LINC-NIRVANA is a German-Italian beam combiner for the LBT and will interfere the light from the two 8.4m mirrors of the LBT in Fizeau mode. More than 140 motors have to be handled by custom-developed Motor Controllers (MoCons). One important feature of the MoCon is the support of externally computed trajectories. Motion profiles provide information on the movement of the motor along a defined path over a certain period of time. Such profiles can be uploaded to the MoCon over Ethernet and can be started at a specific time. For field derotation it is critical that the derotation trajectories are executed with very precise relative and absolute timing. This raises the problem of synchronizing the MoCon internal clock with the system time of the servers that host LINC-NIRVANA's Instrument Control Software. The MoCon time should be known by the servers with an uncertainty of a few milliseconds in order to match the start time of the motion profile and the field rotation trajectory. In this paper we discuss how to synchronize the MoCon internal time and the PC system time.
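One common way to estimate such a clock offset is a request/response round trip, as in the sketch below; the query_mocon_time function is a hypothetical stand-in for reading the controller clock over Ethernet and is not part of the real MoCon interface.

```python
# Sketch of one common way to estimate the offset between a controller clock
# and a host clock using a request/response round trip (NTP-style). The
# query_mocon_time function is a hypothetical placeholder, not the real
# MoCon interface.
import time

def query_mocon_time():
    # Placeholder: pretend the controller clock runs 12.5 ms ahead of the host
    return time.monotonic() + 0.0125

def estimate_offset(n_samples=10):
    best = None
    for _ in range(n_samples):
        t_send = time.monotonic()
        t_remote = query_mocon_time()
        t_recv = time.monotonic()
        round_trip = t_recv - t_send
        # Assume the reply was generated halfway through the round trip
        offset = t_remote - (t_send + round_trip / 2.0)
        if best is None or round_trip < best[1]:
            best = (offset, round_trip)        # keep the lowest-latency sample
    return best

offset, rtt = estimate_offset()
print(f"estimated offset: {offset * 1e3:.2f} ms (round trip {rtt * 1e3:.2f} ms)")
```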
The ATST base: command-action-response in action
The Advanced Technology Solar Telescope (ATST) Common Services Framework (CSF) provides the technical framework necessary to quickly and easily develop applications implementing the command-action-response model. The ATST Base builds on top of CSF and provides applications that, with a few modifications, can be dropped into a telescope control system or an instrument control system. This is done by extending the CSF Controller and writing applications that perform some of the common tasks needed by telescope and instrument control systems. This paper includes a general look at the Hardware Controller and an in-depth look at the Management Controller and Motion Controller classes. Telescope and instrument control systems typically have multiple axes of motion that need to be coordinated. Management Controllers allow a simple command to be given to a single Controller and then passed on to multiple worker Controllers that can perform multiple actions. Management Controllers aggregate the state and status of their workers. The workers may be of the same type (e.g., multiple servo control systems) or of different types (e.g., two different servo controllers, a hexapod controller, a digital I/O controller and a camera controller). Most users of turnkey motion control solutions use only a few of the commands that the motion control system provides. The ATST Base Motion Controller abstracts the hardware, and provides a simple interface (focusing on a few common instructions) to use in controlling different types of motion stages.
Automated classification of pointed sources
Yanxia Zhang, Yongheng Zhao, Hongwen Zheng
Facing the very large and frequently high-dimensional data in astronomy, the effectiveness and efficiency of algorithms are always key issues. Excellent algorithms must avoid the curse of dimensionality and simultaneously be computationally efficient. Adopting survey data from the optical bands (SDSS, USNO-B1.0) and the radio band (FIRST), we investigate feature weighting and feature selection by means of the random forest algorithm. We then employ a kd-tree-based k-nearest neighbor method (KD-KNN) to discriminate quasars from stars, and compare the performance of this approach based on all features, weighted features and selected features. The experimental results show that the accuracy improves when using weighted features or selected features. KD-KNN is an easy and efficient approach to nonparametric classification, and combined with random forests it is clearly more effective at separating quasars from stars with multi-wavelength data.
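The sketch below illustrates the combination described above using scikit-learn (a library the paper does not claim to use): random-forest importances provide feature weights, and a kd-tree-backed kNN classifies the weighted features. The data are random placeholders for the multi-wavelength colors.

```python
# Illustrative sketch only (using scikit-learn, not the authors' code):
# derive feature weights from random-forest importances, scale the features
# by those weights, and classify with a kd-tree backed kNN.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 8))                 # 8 toy multi-wavelength features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy quasar/star labels

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
weights = rf.feature_importances_              # feature weighting

knn = KNeighborsClassifier(n_neighbors=10, algorithm="kd_tree")
knn.fit(X * weights, y)                        # KD-KNN on weighted features
print("training accuracy:", round(knn.score(X * weights, y), 3))
```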
Support vector machines for quasar selection
We introduce an automated method, Support Vector Machines (SVMs), for quasar selection, in order to compile an input catalogue for the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) and improve the efficiency of its 4000 fibers. The data are adopted from the Sloan Digital Sky Survey (SDSS) Data Release Seven (DR7), the latest release at the time of writing. We carefully study the discrimination of quasars from stars by finding the separating hyperplane in the high-dimensional space of colors for different combinations of model parameters in the SVMs, and give a clear way to find the optimal combination (C-+ = 2, C+- = 2, kernel = RBF, gamma = 3.2). Furthermore, we investigate the performance of SVMs for predicting the photometric redshifts of quasar candidates and obtain the optimal model parameters (w = 0.001, C-+ = 1, C+- = 2, kernel = RBF, gamma = 7.5). Finally, the experimental results show that the precision and the recall of SVMs for separating quasars from stars can both exceed 95%. Using the optimal model parameters, we estimate the photometric redshifts of 39353 identified quasars, and find that 72.99% of them are consistent with the spectroscopic redshifts within |Δz| < 0.2. This approach is effective and applicable for our problem.
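As an illustration only, the sketch below trains an RBF-kernel SVM on synthetic colors with scikit-learn; the asymmetric C-+ / C+- penalties quoted above are only approximated here by a per-class class_weight, and the data are placeholders for the SDSS DR7 sample.

```python
# Minimal sketch of RBF-kernel SVM selection of quasar candidates using
# scikit-learn (illustration only; the paper's C-+ / C+- parameters are
# approximated here with class_weight). Colors and labels are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
colors = rng.normal(size=(3000, 4))                    # toy u-g, g-r, r-i, i-z
is_quasar = (colors[:, 0] - 0.8 * colors[:, 1] > 0.2).astype(int)

svm = SVC(kernel="rbf", gamma=3.2, C=2.0, class_weight={0: 1.0, 1: 1.0})
svm.fit(colors, is_quasar)
print("training accuracy:", round(svm.score(colors, is_quasar), 3))
```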
The design of LBT's telemetry source registration
Tony Edgin, Norman Cushing
In the current report, we describe the structure of the telemetry logging system of the Large Binocular Telescope (LBT) and its approach to telemeter registration. The telemetry logging system, called Telemetry, has three functions. It will provide system data to LBT Observatory (LBTO) personnel in order to facilitate engineering activities such as commissioning, failure diagnosis and system repair. In order to detect failures as soon as possible after they occur, Telemetry will allow live monitoring of the functional status of key telescope systems. Finally, in order to help personnel understand how the LBT operating characteristics evolve with time, Telemetry will provide access to historical telescope system data. Given this range of functions, a key requirement of Telemetry is that it must easily adapt to new sources of data. To minimize the changes required to Telemetry, it has no pre-existing knowledge about the structure of the data it will collect. Instead, it engages in a telemeter registration process, in which the data source must describe its own structure. This registration process requires no external data files to be maintained, since the description is built up by a sequence of function calls to a C++ library. So far, this strategy has proven successful, as only minor modifications have been made to accommodate the nearly 400 sources of data introduced to the system in the past year. The current report describes the design of the LBT telemetry system and its source registration process.
Design considerations for LBTI observer interface
V. Vaitheeswaran, P. Hinz, C. O'Connell, et al.
We outline the design considerations and principles for developing a graphical user interface for configuring and operating Large Binocular Telescope Interferometer (LBTI) on sky, and examine the "weblication" methodology to deliver this astronomical software over the web. LBTI is an instrument to be installed at the Large Binocular Telescope to search for exo-planets. The instrument consists of a universal beam combiner to combine the light from both arms of the LBT, an L and M band science camera, a K band nulling channel along with wave front sensor units for adaptive optics correction. Additionally, the application will have an interface to the telescope control system and XML based telescope telemetry data flow.
A multistrategy control system for field controllers of astronomical instruments
Dan Zhu, Yuhua Zhu
As is well known, systems on a programmable chip (SOPC) are widely used in a variety of field control systems, due to their flexible configuration and intelligent stand-alone characteristics. They are also increasingly used in astronomical instrument control. For complex and diverse systems, a number of different control strategies are stored in flash memory, and the on-chip controller determines which one to load. The strategy can be switched intelligently and remotely to form a multi-strategy control system, so as to extend the control functions and achieve quick on-line reconfiguration of the system. In this paper we describe a design concept and realization method for a multi-strategy control system on the basis of an FPGA-based system on a chip. Its hardware core is Altera's Cyclone-series EP3C25 chip. In the SOPC Builder development environment, a control system is constructed which consists of a NIOS II soft core as CPU, a REMOTE_UPDATE IP core and the control algorithms. The concept and design have been verified in field controllers for various astronomical applications, and satisfactory results have been obtained.
A simple and effective algorithm for quasar candidate selection
Nanbo Peng, Yanxia Zhang, Tong Pei, et al.
The k-Nearest Neighbor (kNN) algorithm is one of the simplest, most flexible and most effective classification algorithms and has been widely used in many fields. Using multi-band samples extracted from the large surveys SDSS DR7 and UKIDSS DR3, we investigate the performance of kNN with different combinations of colors to select quasar candidates. The color histograms of quasars and stars are helpful for selecting the optimal input pattern for the kNN classifier. The best input pattern is (u-g, g-r, r-i, i-z, z-Y, Y-J, J-H, H-K, Y-K, g-z). In our case, the performance of kNN is assessed by different performance metrics, which indicate that kNN has rather high performance for discriminating quasars from stars. As a result, kNN is an applicable and effective method to select quasar candidates for large sky survey projects.
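As a concrete illustration of this kind of colour-based kNN selection, the sketch below trains a k-nearest-neighbour classifier on the ten colours quoted above. It is a minimal sketch assuming the scikit-learn library; the Gaussian "star" and "quasar" clouds are toy stand-ins for the real SDSS DR7/UKIDSS DR3 samples.

    # Minimal sketch of colour-based kNN quasar/star separation (assumes scikit-learn).
    # The two Gaussian clouds below are toy stand-ins for real SDSS/UKIDSS samples.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import classification_report

    colors = ["u-g", "g-r", "r-i", "i-z", "z-Y", "Y-J", "J-H", "H-K", "Y-K", "g-z"]

    rng = np.random.default_rng(0)
    n = 2000
    X_star = rng.normal(0.0, 1.0, size=(n, len(colors)))   # toy star colours
    X_qso = rng.normal(0.8, 1.0, size=(n, len(colors)))    # toy quasar colours
    X = np.vstack([X_star, X_qso])
    y = np.concatenate([np.zeros(n), np.ones(n)])           # 0 = star, 1 = quasar

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=10)               # k is a tunable parameter
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))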
An automated algorithm for determining photometric redshifts of quasars
We employ the k-nearest neighbor algorithm (kNN) for photometric redshift measurement of quasars with the Fifth Data Release (DR5) of the Sloan Digital Sky Survey (SDSS). kNN is an instance-based learning algorithm in which the result for a new query is predicted from the closest training samples; the regressor fits no parametric model and relies only on the stored instances. Given a query quasar, we find the known quasars (training points) closest to the query point, and its redshift is simply assigned to be the average of the values of its k nearest neighbors. Three different kinds of colors (PSF, Model or Fiber) together with spectral redshifts are used as input parameters, separately; the combination of the three kinds of colors is also taken as input. The experimental results indicate that the best input pattern is PSF + Model + Fiber colors in all experiments. With this pattern, 59.24%, 77.34% and 84.68% of photometric redshifts are obtained within Δz < 0.1, 0.2 and 0.3, respectively. When only one kind of colors is used as input, the Model colors achieve the best performance; when two kinds of colors are used, the best result is achieved by PSF + Fiber colors. In addition, the nearest neighbor method (k = 1) shows its superiority over kNN with k ≠ 1 for the given sample.
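The averaging step described above maps directly onto a k-nearest-neighbour regressor. The sketch below is a minimal illustration assuming scikit-learn; the toy colour-redshift relation merely stands in for real DR5 quasar photometry and spectroscopic redshifts.

    # Minimal sketch of kNN photometric redshifts: the redshift of a query quasar is
    # the average spectroscopic redshift of its k nearest neighbours in colour space.
    # Assumes scikit-learn; the toy data below stand in for real SDSS DR5 quasars.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(1)
    n = 5000
    z_true = rng.uniform(0.1, 4.0, size=n)                    # toy "spectroscopic" redshifts
    colors = np.column_stack([z_true + rng.normal(0, 0.3, n)  # toy colours loosely tracking z
                              for _ in range(8)])

    X_train, X_test, z_train, z_test = train_test_split(colors, z_true,
                                                        test_size=0.3, random_state=0)

    knn = KNeighborsRegressor(n_neighbors=10, weights="uniform")  # plain average of neighbours
    knn.fit(X_train, z_train)
    z_photo = knn.predict(X_test)

    for limit in (0.1, 0.2, 0.3):
        frac = np.mean(np.abs(z_photo - z_test) < limit)
        print("fraction with |dz| < %.1f : %5.1f%%" % (limit, 100 * frac))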
Separating quasars from stars by support vector machines
Yanxia Zhang, Hongwen Zheng, Yongheng Zhao
Based on survey databases from different bands, we first employed a random forest approach for feature selection and feature weighting, and then investigated support vector machines (SVMs) to separate quasars from stars. Two data sets were used: one from SDSS, USNO-B1.0 and FIRST (the FIRST sample), and another from SDSS, USNO-B1.0 and ROSAT (the ROSAT sample). The classification results with the different data sets were compared, and the SVM performance with different features was presented. The experiments showed that the accuracy with the FIRST sample was superior to that with the ROSAT sample; in addition, compared to the result with the original features, the performance using the selected features improved while that using the weighted features decreased. We therefore consider that when SVMs are applied for classification, feature selection is necessary, since it not only improves the performance but also reduces the dimensionality. The good performance of SVMs indicates that they are an effective method to preselect quasar candidates from multiwavelength data.
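A minimal sketch of this two-step scheme (random-forest feature ranking, then an SVM on the selected features) is shown below, assuming scikit-learn. The toy feature table stands in for the SDSS/USNO-B1.0/FIRST and ROSAT samples, and the top-8 cut is arbitrary.

    # Minimal sketch: rank features with a random forest, then classify with an SVM
    # on the highest-ranked features.  Assumes scikit-learn; toy data only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n = 2000
    X = rng.normal(size=(2 * n, 15))                 # 15 toy multiwavelength features
    y = np.concatenate([np.zeros(n), np.ones(n)])    # 0 = star, 1 = quasar
    X[y == 1, :5] += 1.0                             # only the first 5 features carry signal

    # Feature selection: keep the features the random forest ranks as most important.
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[::-1][:8]     # top 8, an arbitrary cut

    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    scores = cross_val_score(svm, X[:, keep], y, cv=5)
    print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))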
Calibration of LAMOST spectral analysis
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) started its test observations in the past year (2009). The spectroscopic reduction and analysis software is available for the determination of spectral classifications, redshifts and the fundamental stellar atmospheric parameters (effective temperature, surface gravity, and metallicity). The analysis results show some systematic errors that need to be calibrated, and we present the results of these calibrations in this paper. By comparing with known objects observed by the Sloan Digital Sky Survey (SDSS), we can calibrate the redshifts of LAMOST galaxy spectra. Results from external spectral analysis software, the Sloan Extension for Galactic Understanding and Exploration (SEGUE) Stellar Parameter Pipeline (SSPP), will be applied to check the accuracy of the radial velocity (RV) measurement. Meanwhile, the atmospheric parameters of LAMOST stellar spectra are compared with those of known objects to calibrate our spectral analysis.
Control, acquisition and quick-look software for infrared spectrometers
Emanuel Rossetti, Ernesto Oliva, Livia Origlia
Modern IR spectrometers must be equipped with software suited to a wide range of purposes, from the low-level controls through data acquisition and handling to fast quick-look. Such a complex software structure can be conveniently managed by means of dedicated GUIs that allow one to access the system at different levels: from the lowest (cryogenic controls) to the highest (data acquisition and quick-look). We briefly describe the structure of this software, whose original characteristic is the use of SQLite. We also show some results from our quick-look procedure based on customized DS9 and IRAF. The software architecture presented here is general and can be adapted to any astronomical instrument. We have specifically implemented and tested it on the GIANO spectrometer.
Robustness of LAMOST networked control system
A-Li Luo, Ke-Fei Wu, Jian Dong
LAMOST operation is managed by a networked control system (NCS) named OCS. The robustness of OCS depends on the reliability of the network, the bandwidth allocation, the communication protocol, etc., which are discussed in turn in this paper. The network simulator NS2 was used to analyze the complexity of the system, and experiments were designed to verify the effects of time delay and packet loss, which could degrade the performance of the system.
Research of Large Telescope Control System
Xiaoying Shuai, Zhenchao Zhang
With the development of active optics, control technology and computing, telescope control systems (TCS) are becoming more and more complicated. A large telescope control system contains thousands of controlled objects, and large telescopes are usually built at remote sites; both are challenges for the control system. This paper investigates advanced control techniques for the TCS, presents the topology of wireless local area network control, satellite-based remote control and wireless portable control, and discusses the real-time design of telescope control units.
Position measurement of the direct drive motor of Large Aperture Telescope
Ying Li, Daxing Wang
With the development of space science and astronomy, the production of large and very large aperture telescopes will become the trend. Direct drive technology, with a unified electromagnetic and mechanical design, is one method to achieve the precise drive of a large aperture telescope. A direct-drive precision rotary table with a diameter of 2.5 meters, researched and produced by us, is a typical example of such mechanical and electrical integration. This paper mainly introduces the position measurement control system of the direct drive motor. In the design of this motor, the position measurement system must have high resolution, precisely align and measure the position of the rotor shaft, and convert the position information into the commutation information corresponding to the required number of motor poles. The system uses a high-precision metal band encoder and an absolute encoder; their outputs are processed in software by a 32-bit RISC CPU to obtain a high-resolution composite encoder. Laboratory test results are given at the end, indicating that the position measurement can be applied to a large aperture telescope control system. This project is supported by the Chinese National Natural Science Funds (10833004).
A control system for LAMOST CCD cameras
The 32 scientific CCD cameras within the 16 low-dispersion spectrographs of LAMOST are used to record object spectra. This paper introduces the CCD Master system designed for camera management and control, based on the UCAM controller. The layers of the Master, UDP and CCD-end daemons are described in detail, and the commands, statuses, user interface and spectra viewer are discussed.
The primary mirror system control software for the VST
Pietro Schipani, Laurent Marty, Francesco Perrotta, et al.
The most important element of the VST active optics is the primary mirror, with its active support system located within the primary mirror cell structure. The primary mirror support system is composed of independent axial and lateral systems and includes an earthquake safety system. The primary mirror system software has been designed with a systems engineering approach. The software has to change the mirror shape during observations, but it must also allow the user to perform a number of other activities. It has to support periodic maintenance operations such as alignment and mirror removal and installation for recoating, as well as functional tests, engineering operations and the recalibration of several parameters. This paper describes how the primary mirror system software has been developed to support both observation and engineering activities.
Telescope information service system of LAMOST
Shi Wei Sun, A-Li Luo
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) has been built. A telescope information service system (TISS) for LAMOST is planned, which will be used as a maintenance tool to improve the efficiency of LAMOST. In this paper, the scope of software and hardware maintenance is presented and the interface standard is introduced. The driving model of the system and the communication between the TISS and the different subsystems are also analyzed. In addition, the methods of information collection, storage and display of data are described.
A code generation framework for the ALMA common software
Nicolás Troncoso, Horst H. von Brand, Jorge Ibsen, et al.
Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of lines of code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of model-driven code generation to transform a UML model directly into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework; moreover, a lower handcrafted LOC count reduces the error rate. Additional benefits achieved by model-driven code generation are software reuse, implicit application of design patterns and automatic test generation. A model-driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets. The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator. openArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes; generated and manually crafted code are therefore kept apart, simplifying the code generation process and aiding the developers by keeping a clean logical separation between the two. Our first results show that code generation dramatically improves code productivity.
The interaction between pointing and active optics on the VISTA telescope
D. L. Terrett, William J. Sutherland
The VISTA telescope has a field of view of 45 arc minutes radius and an f/1 primary mirror, so, in order to meet the image quality requirements at the edge of the field, the position of M2 has to be actively controlled in all 5 axes (focus, centring and tilt). Tilting M2 not only affects the image quality, it also shifts the image in the focal plane, which introduces an interaction between the active optics and the telescope pointing. VISTA uses the VLT control system, and the M2 hexapod does not allow movements of M2 and the telescope to be coordinated well enough for M2 to be tilted while a science exposure is in progress without introducing unacceptable image motion. Therefore, the application of tilts requested by the active optics system has to be coordinated with the activity of the infra-red camera. This paper describes how the active optics system measures M2 tilt corrections, and how the application of these tilts to the mirror, together with the compensating adjustments to the telescope pointing, is integrated with the operation of the telescope and camera in order to deliver the best possible image quality without reducing the survey efficiency.
Towards a new Mercator Observatory Control System
W. Pessemier, G. Raskin, S. Prins, et al.
A new control system is currently being developed for the 1.2-meter Mercator Telescope at the Roque de Los Muchachos Observatory (La Palma, Spain). The control system, formerly based on transputers, is being replaced by the new Mercator Observatory Control System (MOCS): a small network of Linux computers complemented by a central industrial controller and an industrial real-time data communication network. Python is chosen as the high-level language to develop flexible yet powerful supervisory control and data acquisition (SCADA) software for the Linux computers. Specialized applications such as detector control, auto-guiding and middleware management are also integrated in the same Python software package. The industrial controller, on the other hand, is connected to the majority of the field devices and is targeted to run various control loops, some of which are real-time critical. Independently of the Linux distributed control system (DCS), this controller makes sure that high-priority tasks such as the telescope motion, mirror support and hydrostatic bearing control are carried out in a reliable and safe way. A comparison is made between different controller technologies including a LabVIEW embedded system, a PROFINET Programmable Logic Controller (PLC) and motion controller, and an EtherCAT embedded PC (soft-PLC). As the latter has been chosen as the primary platform for the lower-level control, a substantial part of the software is being ported to the IEC 61131-3 standard programming languages. Additionally, obsolete hardware is gradually being replaced by standard industrial alternatives with fast EtherCAT communication. The use of Python as a scripting language allows a smooth migration to the final MOCS: finished parts of the new control system can readily be commissioned to replace the corresponding transputer units of the old control system with minimal downtime. In this contribution, we give an overview of the system design, implementation details and the current status of the project.
A high-availability distributed hardware control system using Java
Albert F. Niessner
Two independent coronagraph experiments that require 24/7 availability, with different optical layouts and different motion control requirements, are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message format) a natural choice; XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third-party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments, with XML files defining the differences.
Zigbee networking technology and its application in Lamost optical fiber positioning and control system
4,000 fiber positioning units need to be positioned precisely in the LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) optical fiber positioning and control system, and every fiber positioning unit is driven by two stepper motors, so 8,000 stepper motors need to be controlled in the entire system. A wireless communication mode is adopted to save installation space on the back of the focal panel and can save more than 95% of the external wires compared to the traditional cable control mode. This paper studies how to use ZigBee technology to group these 8,000 nodes, and explores the pros and cons of star and tree networks in order to carry out the star search quickly and efficiently. ZigBee is a short-distance, low-complexity, low-power, low-data-rate, low-cost two-way wireless communication technology based on the IEEE 802.15.4 protocol. It follows the standard Open Systems Interconnection (OSI) model: the 802.15.4 standard specifies the lower protocol layers, the physical layer (PHY) and the media access control (MAC), while the ZigBee Alliance defines the remaining layers, such as the network and application layers, and is responsible for high-level applications, testing and marketing. The network layer used here, based on ad hoc network protocols, provides construction and maintenance of the topological structure, naming and associated services including addressing, routing and security, and self-organizing, self-maintaining functions that minimize operating and maintenance costs. In this work, Freescale's 802.15.4 protocol stack was used to configure the network layer. Star and tree network topologies were realized, which can build and maintain the network and create routing automatically. A concise tree-network address allocation algorithm is presented to assign network IDs automatically.
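Tree-network address allocation of the kind mentioned above typically follows the distributed (Cskip) address-assignment scheme of the ZigBee specification, in which each parent computes an address block size from the maximum number of children Cm, router children Rm and tree depth Lm. The sketch below implements that standard scheme as an illustration; it is not taken from the paper itself, and the example parameters are arbitrary.

    # Sketch of the standard ZigBee distributed (Cskip) address-assignment scheme.
    # Cm = max children per parent, Rm = max router children, Lm = max tree depth.

    def cskip(depth, Cm, Rm, Lm):
        """Size of the address block a router at this depth gives each router child."""
        if Rm == 1:
            return 1 + Cm * (Lm - depth - 1)
        # Geometric-series form of the specification formula; always an integer.
        return 1 + Cm * (Rm ** (Lm - depth - 1) - 1) // (Rm - 1)

    def router_child_address(parent_addr, depth, n, Cm, Rm, Lm):
        """Network address of the n-th (1-based) router child of a parent."""
        return parent_addr + (n - 1) * cskip(depth, Cm, Rm, Lm) + 1

    def end_device_address(parent_addr, depth, n, Cm, Rm, Lm):
        """Network address of the n-th (1-based) end-device child of a parent."""
        return parent_addr + Rm * cskip(depth, Cm, Rm, Lm) + n

    # Example: a shallow tree with 4 children per node, all routers, 3 levels deep.
    Cm, Rm, Lm = 4, 4, 3
    print(cskip(0, Cm, Rm, Lm))                        # block size at the coordinator
    print(router_child_address(0, 0, 1, Cm, Rm, Lm))   # first router under the coordinator
    print(end_device_address(0, 0, 1, Cm, Rm, Lm))     # first end device under the coordinator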
Realizing software longevity over a system's lifetime
Kyle Lanclos, William T. S. Deich, Robert I. Kibrick, et al.
A successful instrument or telescope will measure its productive lifetime in decades; over that period, the technology behind the control hardware and software will evolve, and be replaced on a per-component basis. These new components must successfully integrate with the old, and the difficulty of that integration depends strongly on the design decisions made over the course of the facility's history. The same decisions impact the ultimate success of each upgrade, as measured in terms of observing efficiency and maintenance cost. We offer a case study of these critical design decisions, analyzing the layers of software deployed for instruments under the care of UCO/Lick Observatory, including recent upgrades to the Low Resolution Imaging Spectrometer (LRIS) at Keck Observatory in Hawaii, as well as the Kast spectrograph, Lick Adaptive Optics system, and Hamilton spectrograph, all at Lick Observatory's Shane 3-meter Telescope at Mt. Hamilton. These issues play directly into design considerations for the software intended for use at the next generation of telescopes, such as the Thirty Meter Telescope. We conduct our analysis with the future of observational astronomy infrastructure firmly in mind.
Instrument control software requirement specification for Extremely Large Telescopes
Engineers in several observatories are now designing the next generation of optical telescopes, the Extremely Large Telescopes (ELT). These are very complex machines that will host sophisticated astronomical instruments to be used for a wide range of scientific studies. In order to carry out scientific observations, a software infrastructure is required to orchestrate the control of the multiple subsystems and functions. This paper will focus on describing the considerations, strategies and main issues related to the definition and analysis of the software requirements for the ELT's Instrument Control System using modern development processes and modelling tools like SysML.
Introducing high performance distributed logging service for ACS
Jorge A. Avarias, Joao S. López, Cristián Maureira, et al.
The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single-computer scenario can be as easy as using fprintf statements. However, a distributed system must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, services and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that get distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. This paper describes how the new high-performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed, and a benchmark is presented comparing the two implementations.
A new control system hardware architecture for the Hobby-Eberly Telescope prime focus instrument package
Chuck Ramiller, Trey Taylor, Tom H. Rafferty, et al.
The Hobby-Eberly Telescope (HET) will be undergoing a major upgrade as a precursor to the HET Dark Energy Experiment (HETDEX). As part of this upgrade, the Prime Focus Instrument Package (PFIP) will be replaced with a new design that supports the HETDEX requirements along with the existing suite of instruments and anticipated future additions. This paper describes the new PFIP control system hardware plus the physical constraints and other considerations driving its design. Because of its location at the top end of the telescope, the new PFIP is essentially a stand-alone remote automation island containing over a dozen subsystems. Within the PFIP, motion controllers and modular IO systems are interconnected using a local Controller Area Network (CAN) bus and the CANOpen messaging protocol. CCD cameras that are equipped only with USB 2.0 interfaces are connected to a local Ethernet network via small microcontroller boards running embedded Linux. Links to ground-level systems pass through a 100 m cable bundle and use Ethernet over fiber optic cable exclusively; communications are either direct or through Ethernet/CAN gateways that pass CANOpen messages transparently. All of the control system hardware components are commercially available, designed for rugged industrial applications, and rated for extended temperature operation down to -10 °C.
Integrating a university team in the ALMA software development process: a successful model for distributed collaborations
Matias Mora, Jorge Ibsen, Gianluca Chiozzi, et al.
Observatories are not all about exciting new technologies and scientific progress; some time has to be dedicated to the generations of future engineers who will be on the front line a few years from now. Over the past six years, ALMA Computing has been helping to build up, and collaborating with, a well-organized engineering students' group at Universidad Técnica Federico Santa María in Chile. The Computer Systems Research Group (CSRG) currently has wide collaborations with national and international organizations, mainly in the field of astronomical observation. The overall coordination and technical work is done primarily by students, working side-by-side with professional engineers. This implies not only using high engineering standards, but also advanced organization techniques. This paper presents how this collaboration has built up an identity of its own, independent of individuals, starting from its origins: summer internships at international observatories, the open-source community, and the short and busy life of a student. The organizational model and collaboration approaches are presented, which have evolved over the years with the growth of the group. This model is being adopted by other university groups and is also catching the attention of other areas inside the ALMA project, as it has produced an interesting training process for astronomical facilities. Many lessons have been learned by all participants in this initiative. The results achieved at this point include a large number of projects, funding sources, publications, collaboration agreements, and a growing history of new engineers educated under this model.
SPHERE instrumentation software in the construction and integration phases
A. Baruffolo, P. Bruno, D. Fantinel, et al.
SPHERE is a second generation instrument for the VLT whose prime objective is the discovery and study of new extrasolar giant planets orbiting nearby stars. It is a complex instrument, consisting of an extreme Adaptive Optics System (SAXO), various coronagraphs, an infrared differential imaging camera (IRDIS), an infrared integral field spectrograph (IFS) and a visible differential polarimeter (ZIMPOL). SPHERE INS is the software devoted to the control of all instrument functions; it implements all the observing, calibration and maintenance procedures, the interactive GUIs and manages the software interfaces with the observation handling system and the data flow management system. Development of the SPHERE INS has been conducted by a team distributed over four nations. The SPHERE subsystems are nearing completion and the integration of the whole instrument will start soon. In this paper we report on the current status of the software and on the activities concerning its construction and integration with the SPHERE subsystems. In particular, we will discuss how we managed development and integration within our distributed team, including the tools that we employed to support our work.
The TJO-OAdM Robotic Observatory: the scheduler
Josep Colomé, Kevin Casteels, Ignasi Ribas, et al.
The Joan Oró Telescope at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory working under completely unattended control, due to the isolation of the site. Robotic operation is mandatory for its routine use. The level of robotization of an observatory is given by its reliability in responding to environment changes and by the human interaction required when alarms occur; these two points establish the level of human attendance needed to ensure low risk at any time. But there is another key point when deciding how the system performs as a robot: the capability to adapt the scheduled observation to actual conditions. The scheduler is therefore a fundamental element in achieving a fully intelligent response at any time. Its main task is mid- and short-term time optimization, and it has a direct effect on the scientific return achieved by the observatory. We present a description of the scheduler developed for the TJO - OAdM, which is separated into two parts. First, a pre-scheduler makes a temporary selection of objects from the available projects according to their possibility of observation; this process is carried out before the beginning of the night, following different selection criteria. Second, a dynamic scheduler is executed any time a target observation is complete and a new one must be scheduled. The latter enables the selection of the best target in real time according to actual environment conditions and the set of priorities.
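The dynamic-scheduling step can be illustrated with a minimal sketch: each time an observation finishes, the remaining targets are scored against current conditions and the best one is selected. The scoring terms, weights and thresholds below are purely illustrative assumptions, not the TJO - OAdM implementation.

    # Illustrative sketch of one dynamic-scheduler step: rank the remaining targets
    # by priority and current conditions, then pick the best.  All scoring terms,
    # weights and thresholds are hypothetical.

    def score(target, conditions):
        # Reject targets that are not observable right now.
        if target["altitude_deg"] < 30.0:
            return None
        if target["needs_photometric"] and not conditions["photometric"]:
            return None
        # Higher science priority, higher altitude and better seeing margin win.
        return (3.0 * target["priority"]
                + 1.0 * target["altitude_deg"] / 90.0
                + 1.0 * max(0.0, target["max_seeing_arcsec"] - conditions["seeing_arcsec"]))

    def pick_next(targets, conditions):
        scored = [(score(t, conditions), t) for t in targets]
        scored = [(s, t) for s, t in scored if s is not None]
        return max(scored, key=lambda pair: pair[0])[1] if scored else None

    conditions = {"photometric": False, "seeing_arcsec": 1.4}
    targets = [
        {"name": "A", "priority": 2, "altitude_deg": 65, "max_seeing_arcsec": 2.0, "needs_photometric": False},
        {"name": "B", "priority": 3, "altitude_deg": 25, "max_seeing_arcsec": 1.5, "needs_photometric": False},
        {"name": "C", "priority": 3, "altitude_deg": 50, "max_seeing_arcsec": 1.0, "needs_photometric": True},
    ]
    print(pick_next(targets, conditions)["name"])   # -> "A" under these conditions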
UCam: universal camera controller and data acquisition system
S. A. McLay, N. N. Bezawada, D. C. Atkinson, et al.
This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam), which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC) of San Diego, USA. UCam provides an alternative software solution, programmed in C/C++ and Python, that runs on a real-time Linux operating system to achieve the critical speed performance needed for high time resolution instrumentation. UCam is a server-based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user-friendly client application interface that has several features, including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in Python or as text files. UCam has an application-centric design in which custom applications for different types of detectors and readout modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can easily be reconfigured to read out any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, FOWLER, NDR and threshold-limited NDR. UCam has been developed over several years and is used on many instruments, such as the Wide Field Infra Red Camera (WFCAM) at UKIRT in Hawaii and the mid-IR imager/spectrometer UIST, as well as on instruments at SUBARU, Gemini and Palomar.
A software framework for telemetry and data logging, MMT Observatory, Arizona, USA
J. D. Gibson, T. Trebisky, S. Schaller, et al.
An object-oriented software approach to acquisition and logging of telemetry data has been implemented at the MMT Observatory (MMTO). This approach includes: 1) a uniform interface to RS-232 serial and TCP/UDP network-enabled hardware devices, 2) a multiplexed socket server able to handle multiple simultaneous connections, 3) a simple ASCII network protocol, 4) standardized relational and round-robin database logging, 5) consistent parameter naming conventions, 6) automatic data validation, 7) centralized configuration files, and 8) unified process control. Over 25 miniservers, each of which corresponds to a single hardware device, implement the hardware-specific protocol for communication with that hardware device. The miniserver collects data from the device and allows network access to the dataset for that device via a uniform ASCII protocol. Each miniserver also periodically logs data to relational and, optionally, round-robin databases. Over 29 gigabytes of logged telemetry data, representing over 1500 distinct parameters and 120,000,000 MySQL records, are currently available for the past 4-5 years through this software framework. Essentially any scripting language can be used to access the ASCII-based network interface and MySQL relational databases. This object-oriented approach to telemetry provides a framework into which new hardware devices can easily be added and leverages existing data acquisition, analysis, and visualization tools.
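The miniserver concept described above can be illustrated with a minimal sketch: one process owns one device, answers simple ASCII queries over a socket, and polls the hardware in the background. The parameter names, port number and protocol details below are hypothetical illustrations; the real MMTO framework (multiplexed server, parameter validation, MySQL and round-robin logging) is considerably richer.

    # Minimal sketch of a telemetry "miniserver": poll one device and answer
    # ASCII "get <parameter>" queries over TCP.  Names and port are hypothetical.
    import socket
    import threading
    import time

    telemetry = {"mmt_dome_temp": 12.3, "mmt_dome_humidity": 41.0}   # latest values

    def poll_device():
        while True:
            # A real miniserver would read its RS-232 or network device here
            # and periodically log the values to a relational database.
            time.sleep(5)

    def serve(port=5401):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            request = conn.recv(1024).decode().strip()      # e.g. "get mmt_dome_temp"
            if request.startswith("get "):
                name = request.split(None, 1)[1]
                value = telemetry.get(name)
                reply = "%s %s\n" % (name, value if value is not None else "undefined")
            else:
                reply = "error unknown_command\n"
            conn.sendall(reply.encode())
            conn.close()

    threading.Thread(target=poll_device, daemon=True).start()
    serve()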
Software for automated testing and characterization of CCDs for Large Synoptic Survey Telescope (LSST)
Michael Prouza, Petr Kubánek, Paul O'Connor, et al.
We present the latest modifications of the open source observatory control software package RTS2. New features were developed specifically for the automated testing of CCD chips for the mosaic camera of the Large Synoptic Survey Telescope. Currently, the system is in operation at Brookhaven National Laboratory in Upton, USA, and at the Laboratoire de Physique Nucléaire et des Hautes Énergies in Paris, France. The RTS2 software is currently used to characterize sensors from various vendors and will be used first for selection and then for testing of production CCD sensors. With our system we are able to automatically obtain a series of images for analysis. The data are used to study many aspects of sensor characteristics, including the wavelength dependence of the quantum efficiency, the dark current, and the linearity of the CCD response as a function of back-bias voltage and temperature. We can also measure the point spread function over the whole surface of the CCD sensors.
Software for automated run-time determination of calibration values and hardware capabilities in torrent detector control systems
The MONSOON Torrent image acquisition system is being designed partly to reduce the complexity of configuring a detector controller system. This paper discusses how we have achieved this goal by creating a system of automation for the configuration task. We also discuss how the automated systems work to ensure proper focal plane operation in the face of potential network, communications and controller hardware failures during observing sessions. The Torrent hardware design is discussed in Section 2. In Sections 4 and 5 we discuss the automated processes used to develop the description of the Torrent hardware used by the rest of the automation system. In Sections 6 through 8 we discuss the semi-automated system configuration/integration/design software. In Section 9 we present the automated run-time configuration tools and discuss how they operate in the face of various failures. In Section 10 we discuss how Torrent and the automated systems will achieve the goal of reducing observing down time in the face of hardware failures.
World coordinate system keywords for FITS files from Lick Observatory
Steven L. Allen, John Gates, Robert I. Kibrick
Every bit of metadata added at the time of acquisition increases the value of image data, facilitates automated processing of those data, and decreases the effort required during subsequent data curation activities. In 2002 the FITS community completed a standard for World Coordinate System (WCS) information which describes the celestial coordinates of pixels in astronomical image data. Most of the instruments in use at Lick Observatory and Keck Observatory predate this standard. None of them was designed to produce FITS files with celestial WCS information. We report on the status of WCS keywords in the FITS files of various astronomical detectors at Lick and Keck. These keywords combine the information from sources which include the telescope pointing system, the optics of the telescope and instrument, a description of the pixel layout of the detector focal plane, and the hardware and software mappings between the silicon pixels of the detector and the pixels in the data array of the FITS file. The existing WCS keywords include coordinates which refer to the detector structure itself (for locating defects and artifacts), but not celestial coordinates. We also present proof-of-concept from the first data acquisition system at Lick Observatory which inserts the WCS keywords for a celestial coordinate system.
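A minimal sketch of what such acquisition-time insertion of celestial WCS keywords looks like is given below, assuming the Python astropy library. The pointing, pixel scale, reference pixel and file name are illustrative values, not those of any particular Lick or Keck instrument or of the system described in the paper.

    # Sketch of writing celestial WCS keywords into a FITS header at acquisition time.
    # Assumes astropy; all numeric values and the file name are illustrative.
    import numpy as np
    from astropy.io import fits
    from astropy import wcs

    w = wcs.WCS(naxis=2)
    w.wcs.ctype = ["RA---TAN", "DEC--TAN"]        # gnomonic (tangent-plane) projection
    w.wcs.crpix = [512.5, 512.5]                  # reference pixel (detector centre)
    w.wcs.crval = [150.1163, 2.2058]              # telescope pointing, degrees
    w.wcs.cdelt = np.array([-0.0001, 0.0001])     # pixel scale, degrees per pixel

    header = w.to_header()                        # CTYPEn, CRPIXn, CRVALn, CDELTn, ...
    data = np.zeros((1024, 1024), dtype=np.float32)
    fits.PrimaryHDU(data=data, header=header).writeto("example_with_wcs.fits",
                                                      overwrite=True)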
Re-using the NOCS as a common instrument interface
The NEWFIRM Observation Control System (NOCS) was developed to support an IR mosaic camera. New projects at NOAO include an OUV CCD mosaic imager upgrade and an OUV CCD (cloned) spectrograph to be designed, developed and implemented on a relatively rapid timescale. Rather than re-invent the wheel, we report on adapting the NOCS to support these new additions to the 4m instrumentation suite.
Upgrading the Gemini secondary mirror micro-controller
Mathew J. Rippa, Jose Soto, Mike Sheehan, et al.
The Gemini Observatory is continuing the preliminary design stages of upgrading the micro-controller and related data acquisition components for the Secondary Mirror Tip/Tilt System (M2TS). The Gemini North M2TS has surpassed a decade of service in the scientific community, yet the designs at both sites are nearly twenty years old and maintenance costs continue to increase. The next-generation M2TS acquisition system takes a look at today's more common practices, such as alternatives to VME and the use of Industry Pack modules and high-rate data logging. An overview of the refactored software design is given, including the use of the Real-Time Executive for Multiprocessor Systems (RTEMS) as the operating system of choice to meet the real-time performance requirements.
Effect of noise in image restoration of multi-aperture telescope
Zhiwei Zhou, Dayong Wang, Yunxin Wang, et al.
The multi-aperture telescope is proposed to achieve high angular resolution without fabricating a large-diameter monolithic primary mirror. Due to its array structure, a multi-aperture telescope has almost the same cut-off frequency as a telescope of equivalent diameter, but a smaller light-collecting area, which is why its direct output image is blurred and of low contrast. The additive noise level is another reason for low image quality. The Wiener filter is sensitive to noise because the optical transfer function is zero beyond the cut-off frequency. An alternative image deblurring method is total variation (TV) blind deconvolution. The TV method is an iterative algorithm and preserves edge information well; its most important characteristic is that it still works at high noise levels and produces reasonable results.
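For reference, TV blind deconvolution in its commonly used form jointly estimates the image u and the blur kernel k by minimizing a data-fidelity term plus total-variation penalties on both unknowns; the functional below is that standard formulation, not necessarily the exact variant used by the authors:

    \min_{u,\,k}\; \frac{1}{2}\,\lVert k * u - f \rVert_2^2
        \;+\; \lambda_1 \int_{\Omega} \lvert \nabla u \rvert \, dx
        \;+\; \lambda_2 \int_{\Omega} \lvert \nabla k \rvert \, dx

Here f is the observed blurred, noisy image, * denotes convolution, and the regularization weights lambda_1 and lambda_2 balance fidelity against the smoothness of the restored image and of the kernel; the TV terms penalize oscillations (noise) while allowing sharp edges.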
Programmable workflow control with rule check on LAMOST
A programmable workflow environment grants more flexibility in conducting an observation. This paper provides an easy way to develop visual workflow control: users can drag and drop command elements to make up a workflow, and the workflow supports sequence and parallel patterns. When a workflow starts to run automatically, a full set of manual interventions is supplied, which enables users to cope with unpredictable online situations. Besides this, rule checking is applied to workflows, which ensures there is no incorrect operation sequence in a user-defined workflow. Rules are not permanent; they can be modified or added if necessary.
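Rule checking of an operation sequence can be illustrated with a minimal sketch in which the rules are editable data (here, required predecessors for each command) and a workflow is rejected before execution if any rule is violated. The command names and rules below are hypothetical illustrations, not taken from the LAMOST system.

    # Minimal sketch of rule checking on a user-defined sequential workflow.
    # Rules are plain data and can be modified or extended; names are hypothetical.

    RULES = {
        # command: set of commands that must appear earlier in the workflow
        "open_shutter": {"dome_open"},
        "expose":       {"open_shutter", "track_on"},
        "close_dome":   {"close_shutter"},
    }

    def check_workflow(commands):
        """Return a list of rule violations for a sequential workflow."""
        violations = []
        seen = set()
        for step, cmd in enumerate(commands):
            for required in RULES.get(cmd, set()):
                if required not in seen:
                    violations.append("step %d: '%s' requires '%s' earlier" % (step, cmd, required))
            seen.add(cmd)
        return violations

    workflow = ["dome_open", "track_on", "expose", "close_shutter", "close_dome"]
    print(check_workflow(workflow))   # -> ["step 2: 'expose' requires 'open_shutter' earlier"]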