Proceedings Volume 9910

Observatory Operations: Strategies, Processes, and Systems VI



Volume Details

Date Published: 23 August 2016
Contents: 14 Sessions, 99 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2016
Volume Number: 9910

Table of Contents

  • Front Matter: Volume 9910
  • Operations Benchmarking and Metrics
  • Archive Operations, Surveys and Legacy Datasets
  • Virtual Observatory
  • Data Flow and Data Management Operations Processes and Workflows
  • Time Domain and Transient Surveys
  • Site and Facility Operations I
  • Site and Facility Operations II
  • Program and Observation Scheduling I
  • Program and Observation Scheduling II
  • Operations and Data Quality Control
  • Science Operations Processes and Workflows I
  • Science Operations Processes and Workflows II
  • Poster Session
Front Matter: Volume 9910
This PDF file contains the front matter associated with SPIE Proceedings Volume 9910, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Operations Benchmarking and Metrics
Operational metrics for the ESO Very Large Telescope: lessons learned and future steps
F. Primas, S. Marteau, L. E. Tacconi-Garman, et al.
When ESO’s Very Large Telescope opened its first dome in April 1999, it was the first ground-based facility to offer the scientific community access to an 8-10m class telescope with both classical and queue observing. The latter was considered the most promising way to ensure the observing flexibility necessary to execute the most demanding scientific programmes under the required, usually very well defined, conditions.

Since then, new instruments have become operational and first-generation ones have been replaced, filling the 12 VLT foci and feeding the VLT Interferometer and its four Auxiliary Telescopes. Efficiently operating such a broad range of instruments, installed and available every night of the year on four 8-metre telescopes, poses many challenges. Although it may appear that little has changed since 1999, the underlying VLT operational model has evolved to accommodate different requirements from the user community and the features of new instruments.

Did it fulfil its original goal and, if so, how well? How did it evolve? What are the lessons learned after more than 15 years of operations? A careful analysis and monitoring of statistics and trends in Phase 1 and Phase 2 has been deployed under the DOME (Dashboard for Operational Metrics at ESO) project. The main goal of DOME is to provide robust metrics that can be followed with time in a user-friendly manner. Here, we summarize the main findings on the handling of service mode observations and present the most recent developments.
The impact of science operations on science return at the Very Large Telescope
M. Sterzik, U. Grothkopf, A. Kaufer, et al.
The operational implementation of observing programs influences the scientific return of an Observatory. More than 15 years of observations with the VLT/Paranal Observatory allow us to assess the impact of science operations and program implementation. Bibliometric parameters are used to derive program productivities and citation rates and their relation to scheduling realizations (such as service and visitor mode), program types and service mode rank classes. In this contribution we present a set of performance indicators comparing specific program execution parameters. Results of this analysis help us to identify strengths and weaknesses of the adopted operational model, as well as possible improvements for an integrated VLT and ELT operations scheme in the next decade.
Improving SALT productivity by using the theory of constraints
Johannes C. Coetzee, Petri Väisänen, Darragh E. O'Donoghue, et al.
SALT, the Southern African Large Telescope, is a very cost-effective 10 m class telescope: the operations cost per refereed science paper is currently approximately $70,000. To achieve this competitive advantage, specific design tradeoffs had to be made, leading to technical constraints. On the other hand, the telescope has many advantages, such as the ability to rapidly switch between different instruments and observing modes during the night. We provide details of the technical and operational constraints and how they were dealt with, by applying the theory of constraints, to substantially improve observation throughput during the last semester.
A bibliometric analysis of observatory publications for the period 2010-2014
This paper examines the primary scientific output from a number of telescopes, which is the collection of papers published in refereed journals based on data from each telescope. A telescope's productivity is measured by the number of papers published, while its scientific impact is the sum of each individual paper’s impact as measured quantitatively by the number of citations that the paper receives. In this paper I will examine the productivity and impact of 27 telescopes, mainly optical/IR, for the years between 2010 and 2014.
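The two measures defined above reduce to a very simple computation: productivity is a paper count, impact a citation sum. A minimal sketch, using invented paper records rather than real bibliographic data:

```python
# Productivity = number of refereed papers based on a telescope's data;
# impact = sum of the citations those papers receive.
# The bibcodes and citation counts below are made up for illustration.

def productivity(papers):
    """Number of refereed papers attributed to the telescope."""
    return len(papers)

def impact(papers):
    """Total citations accumulated by those papers."""
    return sum(p["citations"] for p in papers)

papers = [
    {"bibcode": "2012ApJ...000..001X", "citations": 40},
    {"bibcode": "2013MNRAS.000..002Y", "citations": 15},
    {"bibcode": "2014A&A....000..003Z", "citations": 5},
]

print(productivity(papers))  # 3
print(impact(papers))        # 60
```

In practice the hard part is not this arithmetic but the telescope-to-paper attribution that produces the `papers` list in the first place.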
Observatory bibliographies: a vital resource in operating an observatory
Sherry Winkelman, Arnold Rots
The Chandra Data Archive (CDA) maintains an extensive observatory bibliography. By linking the published articles with the individual datasets analyzed in the paper, we have the opportunity to join the bibliographic metadata (including keywords, subjects, objects, data references from other observatories, etc.) with the metadata associated with the observational datasets. This rich body of information is ripe for far more sophisticated data mining than the two repositories (publications and data) would afford individually. Throughout the course of the mission the CDA has investigated numerous questions regarding the impact of specific types of Chandra programs such as the relative science impact of GTO, GO, and DDT programs or observing, archive, and theory programs. Most recently the Chandra bibliography was used to assess the impact of programs based on the size of the program to examine whether the dividing line between standard and large projects should be changed and whether another round of X-ray Visionary Programs should be offered. Traditionally we have grouped observations by proposal when assessing the impact of programs. For this investigation we aggregated observations by pointing and instrument configuration such that objects observed multiple times in the mission were considered single observing programs. This change in perspective has given us new ideas for assessing the science impact of Chandra and for presenting data to our users. In this paper we present the methodologies used in the recent study, some of its results, and most importantly some unexpected insights into assessing the science impact of an observatory.
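The change of aggregation described above, from grouping observations by proposal to grouping by pointing and instrument configuration, can be sketched in a few lines. The observation records and field names below are hypothetical placeholders, not the CDA schema:

```python
from collections import defaultdict

# Hypothetical observation records: the same target observed under several
# proposals counts as several programs when grouped by proposal, but as a
# single observing program when aggregated by (target, configuration).
observations = [
    {"obsid": 1, "target": "Cas A",  "config": "ACIS-S", "proposal": "P1"},
    {"obsid": 2, "target": "Cas A",  "config": "ACIS-S", "proposal": "P2"},
    {"obsid": 3, "target": "Cas A",  "config": "ACIS-S", "proposal": "P3"},
    {"obsid": 4, "target": "Sgr A*", "config": "HRC-I",  "proposal": "P3"},
]

by_proposal = defaultdict(list)
by_pointing = defaultdict(list)
for obs in observations:
    by_proposal[obs["proposal"]].append(obs["obsid"])
    by_pointing[(obs["target"], obs["config"])].append(obs["obsid"])

print(len(by_proposal))  # 3 programs under the traditional grouping
print(len(by_pointing))  # 2 programs when repeat pointings are merged
```

Any impact metric (citations, papers per program) can then be summed over either grouping to compare the two perspectives.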
Archive Operations, Surveys and Legacy Datasets
Data products of the ALMA and NRAO archives
Radio astronomy archives present particular challenges due to the complexity of the data processing. New radio telescopes such as the Jansky-VLA and ALMA also have much larger data volumes than the previous generation of instruments, requiring large amounts of storage and processing. Here we describe the approach taken by NRAO towards making the data products of the VLA and ALMA available to our users. This includes traditional approaches of pipelining and imaging, and also on-demand server side processing, visualization and analysis. We discuss how the size of the image products is related to that of the visibility data, and how this places variable demands on the data flow from the telescope and its data center as configurations are changed throughout the year. Finally, we look ahead to the next generation of radio telescopes such as the SKA and ngVLA.
Validation of ESO Phase 3 data submissions
N. Delmotte, M. Arnaboldi, L. Mascetti, et al.
The data validation phase is an essential step of the Phase 3 process at ESO, which defines and provides an infrastructure to handle interactions between data producers and the archive. We use a controlled process to systematically review all Phase 3 data submissions to ensure a homogeneous and consistent science archive with well traceable and characterised data products, to the benefit of archive users. We describe how the Phase 3 data validation plan is defined and how its results are subsequently managed. For a description of its technical implementation, please refer to the contribution by L. Mascetti.
Publication of science data products through the ESO archive: lessons learned and future evolution
Jörg Retzlaff, Magda Arnaboldi, Nausicaa A. R. Delmotte, et al.
Phase 3 denotes the process of preparation, submission, validation and ingestion of science data products for storage in the ESO Science Archive Facility and subsequent publication to the scientific community. In this paper we will review more than four years of Phase 3 operations at ESO and we will discuss the future evolution of the Phase 3 system.
Virtual Observatory
Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model
Thomas McGlynn, Giuseppina Fabbiano, Alberto Accomazzi, et al.
Since the turn of the millennium, astronomical archives have increasingly provided data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating which optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed.

We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.
Data Flow and Data Management Operations Processes and Workflows
Public surveys at ESO
Magda Arnaboldi, Nausicaa Delmotte, Michael Hilker, et al.
ESO has a strong mandate to survey the Southern Sky. In this article, we describe the ESO telescopes and instruments that are currently used for ESO Public Surveys, and the future plans of the community with the new wide-field-spectroscopic instruments. We summarize the ESO policies governing the management of these projects on behalf of the community. The on-going ESO Public Surveys and their science goals, their status of completion, and the new projects selected during the second ESO VISTA call in 2015/2016 are discussed. We then present the impact of these projects in terms of current numbers of refereed publications and the scientific data products published through the ESO Science Archive Facility by the survey teams, including the independent access and scientific use of the published survey data products by the astronomical community.
Not letting the perfect be the enemy of the good: steps toward science-ready ALMA images
Amanda A. Kepley, Jennifer Donovan Meyer, Crystal Brogan, et al.
Historically, radio observatories have placed the onus of calibrating and imaging data on the observer, thus restricting their user base to those already initiated into the mysteries of radio data or those willing to develop these skills. To expand its user base, the Atacama Large Millimeter/submillimeter Array (ALMA) has a high-level directive to calibrate users' data and, ultimately, to deliver scientifically usable images or cubes to principal investigators (PIs). Although an ALMA calibration pipeline is in place, all delivered images continue to be produced for the PI by hand. In this talk, I will describe on-going efforts at the North American ALMA Science Center (NAASC) to produce more uniform imaging products that more closely meet the PI science goals and provide better archival value. As a first step, the NAASC imaging group produced a simple imaging template designed to help scientific staff produce uniform imaging products. By providing a step-by-step guide to best practices for ALMA imaging, this script allowed the NAASC to maximize the productivity of data analysts with relatively little guidance from the scientific staff. Finally, I will describe the role of the manually produced images in verifying the imaging pipeline and the on-going development of said pipeline. The development of the imaging template, while technically simple, shows how small steps toward unifying processes and sharing knowledge can lead to large gains for science data products.
Science data management for ESO's La Silla Paranal Observatory
Martino Romaniello
Providing the best science data is at the core of ESO’s mission to enable major science discoveries from its science community. We describe here the steps that ESO undertakes to fulfill this, namely ensuring that instruments are working properly, that the science content can be extracted from the data and, finally, delivering the science data to our users, PIs and archive researchers alike. Metrics and statistics that gauge the results and impact of these efforts are discussed.
Time Domain and Transient Surveys
CARMENES: data flow
J. A. Caballero, J. Guàrdia, M. López del Fresno, et al.
CARMENES, the new Calar Alto spectrograph built especially for radial-velocity surveys of exo-earths around M dwarfs, is a very complicated system. To reach the goal of 1 m/s radial-velocity accuracy, it is necessary not only to observe stars with the best observing procedure, but also to monitor the parameters of the CARMENES subsystems and safely store all the engineering and science data. Here we describe the CARMENES data flow from the different subsystems, through the instrument control system and pipeline, to the virtual-observatory data server and the astronomers.
ANTARES: progress towards building a 'broker' of time-domain alerts
Abhijit Saha, Zhe Wang, Thomas Matheson, et al.
The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint effort of NOAO and the Department of Computer Science at the University of Arizona to build prototype software to process alerts from time-domain surveys, especially LSST, to identify those alerts that must be followed up immediately. Value is added by annotating incoming alerts with existing information from previous surveys and compilations across the electromagnetic spectrum and from the history of past alerts. Comparison against a knowledge repository of properties and features of known or predicted kinds of variable phenomena is used for categorization. The architecture and algorithms being employed are described.
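A toy version of the annotate-then-categorize flow described above might look like the following. The catalogue contents, match radius, and follow-up rule are placeholders invented for the sketch, not ANTARES's actual algorithms:

```python
# Each incoming alert is annotated with matches from an existing catalogue
# and with its own alert history, then flagged for follow-up. A real broker
# would use proper spherical cross-matching and a trained classifier.

def crossmatch(alert, catalog, radius_deg=0.001):
    """Catalogue entries near the alert position (naive flat-sky distance,
    adequate only for this sketch)."""
    ra, dec = alert["ra"], alert["dec"]
    return [src for src in catalog
            if (src["ra"] - ra) ** 2 + (src["dec"] - dec) ** 2 <= radius_deg ** 2]

def annotate_and_rank(alert, catalog, history):
    alert = dict(alert)
    alert["matches"] = crossmatch(alert, catalog)
    alert["n_past_alerts"] = history.get(alert["object_id"], 0)
    # Toy rule: a brand-new source with no catalogue counterpart is the
    # most interesting case and gets flagged for immediate follow-up.
    alert["follow_up"] = not alert["matches"] and alert["n_past_alerts"] == 0
    return alert

catalog = [{"ra": 150.0, "dec": 2.2, "class": "RR Lyrae"}]  # hypothetical entry
history = {"obj-42": 3}                                     # past alert counts

fresh = annotate_and_rank({"object_id": "obj-99", "ra": 10.0, "dec": -5.0},
                          catalog, history)
seen = annotate_and_rank({"object_id": "obj-42", "ra": 150.0, "dec": 2.2},
                         catalog, history)
print(fresh["follow_up"], seen["follow_up"])  # True False
```

The value added by the broker lives in the annotation step: the richer the catalogue and feature set, the more discriminating the categorization rule can be.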
DDOTI: the deca-degree optical transient imager
Alan M. Watson, William H. Lee, Eleonora Troja, et al.
DDOTI will be a wide-field robotic imager consisting of six 28-cm telescopes with prime focus CCDs mounted on a common equatorial mount. Each telescope will have a field of view of 12 deg², will have 2 arcsec pixels, and will reach a 10σ limiting magnitude in 60 seconds of r ≈ 18.7 in dark time and r ≈ 18.0 in bright time. The set of six will provide an instantaneous field of view of about 72 deg². DDOTI uses commercial components almost entirely. The first DDOTI will be installed at the Observatorio Astronómico Nacional in Sierra San Pedro Mártir, Baja California, México in early 2017. The main science goals of DDOTI are the localization of the optical transients associated with GRBs detected by the GBM instrument on the Fermi satellite and with gravitational-wave transients. DDOTI will also be used for studies of AGN and YSO variability and to determine the occurrence of hot Jupiters. The principal advantage of DDOTI compared to other similar projects is cost: a single DDOTI installation costs only about US$500,000. This makes it possible to contemplate a global network of DDOTI installations. Such geographic diversity would give earlier access and a higher localization rate. We are actively exploring this option.
Site and Facility Operations I
Operations concept for the Square Kilometre Array
The Square Kilometre Array (SKA) is an ambitious project to build the world’s largest radio telescope, eventually reaching one square kilometre in collecting area. The first phase of the project, SKA1, will consist of two telescopes: SKA1-LOW, comprising ~131,000 dipole antennas at the Murchison Radio Observatory in Western Australia covering the range 50–350 MHz, and SKA1-MID, comprising ~200 x 15-m dishes in the Karoo desert in South Africa covering the range 0.35–13.8 GHz. SKA1 is scheduled to commence operations in 2023 and, in order to appropriately influence the design of the system, operational planning has commenced. This paper presents an overview of the operational concept for SKA1.
The Australian SKA Pathfinder: operations management and user engagement
Lisa Harvey-Smith
This paper describes the science operations model for the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. ASKAP is a radio interferometer currently being commissioned in Western Australia. It will be operated by a dedicated team of observatory staff with the support of telescope monitoring, control and scheduling software. These tools, as well as the proposal tools and data archive will enable the telescope to operate with little direct input from the astronomy user. The paper also discusses how close engagement with the telescope user community has been maintained throughout the ASKAP construction and commissioning phase, leading to positive outcomes including early input into the design of telescope systems and a vibrant early science program.
Power monitoring and control for large scale projects: SKA, a case study
Domingos Barbosa, João Paulo Barraca, Dalmiro Maia, et al.
Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage, management and, above all, power requirements. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing electricity generation at the system level are paramount to avoiding future inefficiencies and higher costs, and to enabling fulfilment of the Key Science Cases. Here we outline the major characteristics of, and innovative approaches to, power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.
A preliminary operations concept for the ngVLA
Mark McKinnon, Claire Chandler, John Hibbard, et al.
A future large area radio array optimized to perform imaging of thermal emission down to milliarcsecond scales is currently under consideration in North America. This 'Next Generation Very Large Array' (ngVLA) will have ten times the effective collecting area and ten times longer baselines (300 km) than the JVLA. The large number of antennas and their large geographical distribution pose significant challenges to ngVLA operations and maintenance. We draw on experience from operating the JVLA, VLBA, and ALMA to highlight notable operational issues and outline a preliminary operations concept for the ngVLA.
Centralized operations and maintenance planning at the Atacama Large Millimeter/submillimeter Array (ALMA)
Bernhard Lopez, Nicholas D. Whyborn, Serge Guniat, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA consists of 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. Since the inauguration of the observatory in March 2013 there has been a continuous effort to establish solid operations processes for effective and efficient management of technical and administrative tasks on site. A key aspect has been centralized maintenance and operations planning: input is collected from science stakeholders, the computerized maintenance management system (CMMS) and the technical teams spread around the world; this information is then analyzed and consolidated based on the established maintenance strategy, the observatory long-term plan and the short-term priorities. This paper presents the high-level process that has been developed for the planning and scheduling of planned and unplanned maintenance tasks, and for site operations like the telescope array reconfiguration campaigns. We focus on the centralized planning approach by presenting its genesis and its current implementation for observatory operations, including related planning products, and we explore the next steps necessary to fully achieve a comprehensive centralized planning approach for ALMA in steady-state operations.
STELLA: 10 years of robotic observations on Tenerife
STELLA is a robotic observatory on Tenerife housing two 1.2m robotic telescopes. One telescope fibre-feeds a high-resolution (R=55,000) échelle spectrograph (SES), while the other is equipped with a visible wide-field (FOV=22' x 22') imaging instrument (WiFSIP). Robotic observations started in mid-2006, and the primary scientific driver is the monitoring of stellar-activity related phenomena. The STELLA Control System (SCS) software package was originally tailored to the STELLA roll-off style building and high-resolution spectroscopy, but was extended over the years to support the wide-field imager, an off-axis guider for the imager, separate acquisition telescopes, classical domes, and targets-of-opportunity. The SCS allows for unattended, off-line operation of the observatory; targets can be uploaded at any time and are selected in real time based on merit functions (dispatch scheduling). We report on the current status of the observatory and the current capabilities of the SCS.
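Dispatch scheduling with merit functions, as used by the SCS, can be illustrated with a toy scorer: at each decision point every target is scored in real time and the highest-scoring observable one is taken next. The observability cut and merit terms below are assumptions made for the sketch, not STELLA's real merit functions:

```python
# Toy dispatch scheduler: merit = priority weighted by proximity to the
# meridian, with a crude 4-hour hour-angle observability cut. A real system
# would wrap hour angle over 24 h and fold in altitude, moon, and weather.

def merit(target, now_lst_h):
    """Higher is better; zero means unobservable right now."""
    hour_angle = abs(now_lst_h - target["ra_h"])  # no 24 h wrap in this sketch
    if hour_angle > 4.0:
        return 0.0
    altitude_term = 1.0 - hour_angle / 4.0  # prefer targets near the meridian
    return target["priority"] * altitude_term

def next_target(targets, now_lst_h):
    """Pick the target with the highest merit, or None if nothing is up."""
    best = max(targets, key=lambda t: merit(t, now_lst_h))
    return best if merit(best, now_lst_h) > 0 else None

targets = [
    {"name": "HD 1", "ra_h": 3.0,  "priority": 1.0},   # already set
    {"name": "HD 2", "ra_h": 9.5,  "priority": 2.0},   # high priority, near meridian
    {"name": "HD 3", "ra_h": 10.0, "priority": 1.5},   # on the meridian, lower priority
]
print(next_target(targets, now_lst_h=10.0)["name"])  # HD 2
```

Because the decision is re-evaluated after every exposure, newly uploaded targets and changing conditions are folded in automatically, which is the core appeal of dispatch scheduling over a fixed nightly plan.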
Site and Facility Operations II
The JCMT as operated by the East Asian Observatory: a brief (but thrilling) history
Jessica T. Dempsey, Paul T. P. Ho, Craig Walther, et al.
The newly formed East Asian Observatory assumed operations of the James Clerk Maxwell Telescope in March 2015. In just three weeks, the facility needed to restart completely mothballed observatory operations, introduce the telescope to a vast new scientist base with no familiarity with the facility, and create a science program from scratch. The handover to the EAO has since been a succession of challenging timelines and nearly unique problems requiring novel solutions. The results, however, have been spectacular, with subscription rates at unprecedented levels, a new series of Large Programs underway, and an exciting Future Instrumentation Project that together promise to keep the JCMT at the forefront of wide-field submillimeter astronomy for the next decade.
Precipitable Water Vapour at the Canarian Observatories (Teide and Roque de los Muchachos) from routine GPS
Julio A. Castro-Almazán, Casiana Muñoz-Tuñón, Begoña García-Lorenzo, et al.
We present two years (2012 and 2013) of preliminary statistical results of calibrated PWV values from the GPS geodesic antennas (LPAL and IZAN) at the Teide and Roque de los Muchachos Observatories (OT and ORM), Canary Islands. To calibrate the PWV from both GPS antennas we selected a set of simultaneous high-vertical-resolution radio-sounding profiles from the closest operational balloon station, Güímar (GUI-WMO 60018; ≈15 km from the OT and ≈150 km from the ORM). The calibrations showed correlations of 0.994 and 0.970 for the OT and ORM, respectively, with rms errors of 0.44 and 0.70 mm. The calibrated PWV series yielded median values of 3.5 mm at the OT and 4.0 mm at the ORM. The difference is explained by the ~200 m difference in height of the antennas (the LPAL antenna is below the telescopes' altitude). Twenty-five percent of the time, the PWV is less than 1.7 mm.
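The calibration statistics quoted above boil down to a linear fit of GPS-derived PWV against coincident radiosonde PWV, assessed via the correlation coefficient and rms error. A minimal sketch on invented sample values (not the LPAL/IZAN data):

```python
import math

def linear_fit(x, y):
    """Least-squares slope and intercept of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def correlation(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def rmse(obs, pred):
    """Root-mean-square error of the fit residuals."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Made-up coincident samples in mm: raw GPS PWV vs. radiosonde PWV
gps   = [1.2, 2.5, 3.1, 4.8, 6.0]
sonde = [1.4, 2.6, 3.3, 4.9, 6.3]

a, b = linear_fit(gps, sonde)
calibrated = [a * g + b for g in gps]
print(round(correlation(gps, sonde), 3))  # 0.999
print(round(rmse(sonde, calibrated), 3), "mm")
```

The fitted slope and intercept are then applied to the full GPS time series to produce the calibrated PWV statistics.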
CARMENES-NIR channel spectrograph: how to achieve the full AIV at system level of a cryo-instrument in nine months
S. Becerril, C. Cárdenas, P. Amado, et al.
CARMENES is the new high-resolution, high-stability spectrograph built for the 3.5m telescope at the Calar Alto Observatory (CAHA, Almería, Spain) by a consortium of German and Spanish institutions. The instrument comprises two separate spectrographs: the VIS channel (550-1050 nm) and the NIR channel (950-1700 nm). The institution responsible for the NIR-channel spectrograph is the Instituto de Astrofísica de Andalucía (IAA-CSIC).

These boundary conditions have made CARMENES-NIR a schedule-driven project with an extremely tight plan: operational start-up had to happen before the end of 2015. This deadline conflicts with the very complex, care-intensive tasks and development phases faced during the AIV, which was fully designed and implemented at the IAA through a very ambitious, zero-contingency plan. As a large cryogenic instrument, the plan necessarily includes a number of cryo-vacuum cycles, the single most important factor in the overall AIV duration: each cryo-vacuum cycle of the NIR channel takes three weeks. The plan was therefore designed to minimize the number of cryo-vacuum cycles.

This huge effort allowed the AIV at system level to be executed at the IAA lab in nine months from start to finish, an astonishingly short duration for a large, complex cryogenic instrument like CARMENES-NIR, and fully compliant with the final deadline for installation of the NIR channel at the CAHA 3.5m telescope. A detailed description of this planning, as well as of how it was actually carried out, is the main aim of the present paper.
Response to major earthquakes affecting Gemini twins
Michiel van der Hoeven, Rolando Rogers, Mathew Rippa, et al.
Both Gemini telescopes, in Hawaii and Chile, are located in highly seismically active areas. Seismic protection is therefore included in the structural design of the telescope, instruments and auxiliary structures. We describe the specific design features intended to reduce permanent damage in case of major earthquakes. Both telescopes have now been hit by large earthquakes, in 2006 and 2015 respectively, giving us the opportunity to compare the original design against the effects caused by these earthquakes and to analyze its effectiveness.

The paper describes the way the telescopes responded to these events, the damage that was caused, how we recovered from it, the modifications we have made to avoid some of this damage on future occasions, and the lessons learned in facing this type of event. Finally, we cover how we intend to upgrade the limited monitoring tools we currently have in place to measure the impact of earthquakes.
LBTO's long march to full operation: step 2
Step 1 (Veillet et al.1), after a review of the development of the Large Binocular Telescope Observatory (LBTO) from the early concepts of the early 1980s to mid-2014, outlined a six-year plan (LBT2020) aimed at optimizing LBTO's scientific production while mitigating the consequences of the inevitable setbacks brought on by the considerable complexity of the telescope and the very diverse nature of the LBTO partnership. Step 2 now focuses on the first two years of implementation of this plan, presenting the obstacles encountered, technical, cultural and political, and how they were overcome. Weather and another incident with one of the Adaptive Secondaries slowed down commissioning activities: all the facility instruments should have been commissioned and offered in binocular mode in early or mid-2016; this will instead happen by the end of 2016. On a brighter side, the first scientific publications using the LBT as a 23-m telescope through interferometry appeared in 2015, and the overall number of publications has been rising at a good pace. Three second-generation instruments were selected, scheduled to come on the telescope in the next three to five years. They will all use the excellent performance of the LBT Adaptive Optics (AO), which will be even better thanks to an upgrade of the AO to be completed in 2018. Less progress than hoped was made in moving the current observing mode of the telescope to an LBT-wide queue. Two years from now, we should have a fully operational telescope, including a laser-based Ground Layer AO (GLAO) system, hopefully fully running in queue, with new instruments in development, new services offered to the users, and a stronger scientific production.
Worth its SALT: four years of full science operations with the Southern African Large Telescope
Petri Väisänen, Chris Coetzee, Anja Schroeder, et al.
SALT is a 10-m class optical telescope located in Sutherland, South Africa. We present an update on all observatory performance metrics since the start of full science operations in late 2011, as well as key statistics describing the science efficiency and output of SALT, including the completion fractions of observations per priority class, and analysis of the more than 140 refereed papers to date. After addressing technical challenges and streamlining operations, these first years of full operations at SALT have seen good and consistently increasing rates of completion of high priority observations and, in particular, very cost-effective production of science publications.
A collimated beam projector for precise telescope calibration
Michael Coughlin, T. M. C. Abbott, Kairn Brannon, et al.
The precise determination of the instrumental response function versus wavelength is a central ingredient in contemporary photometric calibration strategies. This typically entails propagating narrowband illumination through the system pupil, and comparing the detected photon rate across the focal plane to the amount of incident light as measured by a calibrated photodiode. However, stray light effects and reflections/ghosting (especially on the edges of filter passbands) in the optical train constitute a major source of systematic uncertainty when using a flat-field screen as the illumination source. A collimated beam projector that projects a mask onto the focal plane of the instrument can distinguish focusing light paths from stray and scattered light, allowing for a precise determination of instrumental throughput. This paper describes the conceptual design of such a system, outlines its merits, and presents results from a prototype system used with the Dark Energy Camera wide field imager on the 4-meter Blanco telescope. A calibration scheme that blends results from flat-field images with collimated beam projector data to obtain the equivalent of an illumination correction at high spectral and angular resolution is also presented. In addition to providing a precise system throughput calibration, by monitoring the evolution of the intensity and behaviour of the ghosts in the optical system, the collimated beam projector can be used to track the evolution of the filter transmission properties and various anti-reflective coatings in the optical system.
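At its core, the throughput measurement described above is a per-wavelength ratio of the detected photon rate to the incident rate inferred from the calibrated photodiode. The sketch below shows that bookkeeping with invented rates, not Blanco/DECam data:

```python
# Instrumental throughput at each narrowband wavelength setting:
# throughput(wl) = detected photon rate / incident photon rate.
# The rates below are illustrative numbers only.

def throughput_curve(detected_rate, incident_rate):
    """Per-wavelength ratio of detected to incident photon rate."""
    return {wl: detected_rate[wl] / incident_rate[wl] for wl in detected_rate}

detected = {500: 8.2e5, 600: 9.1e5, 700: 7.4e5}  # photons/s on the focal plane
incident = {500: 1.0e6, 600: 1.0e6, 700: 1.0e6}  # photons/s from the photodiode

curve = throughput_curve(detected, incident)
print(curve[600])  # 0.91
```

The collimated beam projector's contribution is to ensure the numerator counts only focused light, so the ratio is not biased by stray and scattered photons.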
SOAR telescope operation in the LSST era: real-time follow-up on large scales
Jonathan H. Elias, Cesar Briceño
The SOAR telescope will be well situated, both in terms of location and aperture, to follow up on the stream of brighter transient events generated by the Large Synoptic Survey Telescope (LSST). A critical aspect is that the operation is less likely to be responding to occasional targets of opportunity, and more likely to be responding to a continuing flow of events that must be efficiently prioritized and observed. We discuss the implications for observatory operations, including potential modifications to the telescope itself or to the instrument suite. Representative “use cases” are described to assist in putting potential operational modes into context.
Organizational transformation to improve operational efficiency at Gemini South
M. van der Hoeven, Diego Maltes, Rolando Rogers
In this paper we will describe how the Gemini South Engineering team has been reorganized from separate functional units into a cross-disciplinary team while executing a transition plan that imposes staff reductions driven by budget cuts. Several factors are of critical importance to the success of any change in organization. Budgetary processes, staff diversity, leadership style, skill sets and planning are all important factors to take into account to achieve a successful outcome. We will analyze the organizational alignment by using some proven management models and concepts.
Program and Observation Scheduling I
icon_mobile_dropdown
Sharing the skies: the Gemini Observatory international time allocation process
Steven J. Margheim
Gemini Observatory serves a diverse community of four partner countries (United States, Canada, Brazil, and Argentina), two hosts (Chile and University of Hawaii), and limited-term partnerships (currently Australia and the Republic of Korea). Observing time is available via multiple opportunities including Large and Long Programs, Fast-turnaround programs, and regular semester queue programs. The slate of programs for observation each semester must be created by merging programs from these multiple, conflicting sources. This paper describes the time allocation process used to schedule the overall science program for the semester, with emphasis on the International Time Allocation Committee and the software applications used.
Getting NuSTAR on target: predicting mast motion
The Nuclear Spectroscopic Telescope Array (NuSTAR) is the first focusing high-energy (3-79 keV) X-ray observatory, and has been operating for four years in low Earth orbit. The X-ray detector arrays are located on the spacecraft bus, with the optics modules mounted on a flexible mast 10.14 m in length. The motion of the telescope optical axis on the detectors during each observation is measured by a laser metrology system and matches the pre-launch predictions of the thermal flexing of the mast as the spacecraft enters and exits the Earth's shadow each orbit. However, an additional motion of the telescope field of view was discovered during observatory commissioning that is associated with the spacecraft attitude control system and an additional flexing of the mast correlated with the Solar aspect angle of the observation. We present the methodology developed to predict where any particular target coordinate will fall on the NuSTAR detectors based on the Solar aspect angle at the scheduled time of an observation. This may be applicable to future observatories that employ optics deployed on extendable masts. The automation of the prediction system has greatly improved observatory operations efficiency and the reliability of observation planning.
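The kind of empirical prediction described above — calibrating a pointing offset against Solar aspect angle and evaluating the fit at a scheduled observation time — can be sketched as follows. This is an illustrative model only: the angles, offsets, and the choice of a linear fit are hypothetical, not NuSTAR's actual metrology solution.

```python
import numpy as np

# Hypothetical calibration data: measured field-of-view offsets (arcsec)
# at a range of Solar aspect angles (deg), e.g. from commissioning.
angles = np.array([60.0, 80.0, 100.0, 120.0, 140.0])
offsets = np.array([10.2, 8.9, 7.6, 6.4, 5.1])

# Fit a low-order polynomial as the empirical prediction model.
coeffs = np.polyfit(angles, offsets, deg=1)

def predicted_offset(angle):
    """Predict the mast-flexure pointing offset (arcsec) for the Solar
    aspect angle expected at the scheduled time of an observation."""
    return np.polyval(coeffs, angle)
```

A scheduler could then shift the commanded pointing by `predicted_offset(angle)` so that the target lands at the intended detector position.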
Feature-based telescope scheduler
The feature-based scheduler offers a sequencing strategy for ground-based telescopes. It is designed in the framework of a Markov Decision Process (MDP) and consists of a sub-linear online controller and an offline supervisory control-optimizer. The online control law is computed at the moment of decision for the next visit, and the supervisory optimizer trains the controller on simulation data. The choice of a Differential Evolution (DE) optimizer, together with a reduced state space of the telescope system, yields an efficient and parallelizable optimization algorithm. In this study, we applied the proposed scheduler to the Large Synoptic Survey Telescope (LSST). Preliminary results for a simplified model of LSST are promising in terms of both optimality and computational cost.
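The split between an offline-trained parameter set and a cheap online decision rule can be illustrated with a minimal sketch. The features, weights, and targets below are entirely hypothetical — the point is only the structure: a linear cost over per-target features, minimized at decision time.

```python
import numpy as np

# Hypothetical per-target features for the next-visit decision:
# [slew time (s), airmass, hours since the field was last visited].
features = np.array([
    [12.0, 1.1, 30.0],   # target A
    [45.0, 1.4, 60.0],   # target B
    [ 8.0, 1.8, 10.0],   # target C
])

# Weights trained offline by a supervisory optimizer (e.g. Differential
# Evolution over simulated surveys); signs chosen so lower cost is better
# (a negative weight rewards fields that have waited longer).
weights = np.array([0.5, 10.0, -0.2])

def next_visit(features, weights):
    """Online control law: pick the candidate with the lowest linear cost."""
    costs = features @ weights
    return int(np.argmin(costs))

best = next_visit(features, weights)  # → 0 (target A: short slew, low airmass)
```

Because the online step is just a dot product and an argmin, it stays fast no matter how expensive the offline training was.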
Ongoing evolution of proposal reviews in the Spitzer warm mission
Lisa J. Storrie-Lombardi, Suzanne R. Dodd, Nancy A. Silbermann, et al.
The Spitzer Space Telescope is executing the seventh year of extended warm mission science. The cryogenic mission operated from 2003 to 2009. The observing proposal review process has evolved from large, week-long, in-person meetings during the cryogenic mission to the introduction of panel telecon reviews in the warm mission. Further compression of the schedule and budget for the proposal solicitation and selection process led to additional changes in 2014. Large proposals are still reviewed at an in-person meeting, but smaller proposals are no longer discussed by a topical science panel. This hybrid process, involving an in-person committee for the larger proposals and strictly external reviewers for the smaller proposals, has been successfully implemented through two observing cycles. While people like the idea of not having to travel to a review, it is still the consensus opinion, in our discussions with the community, that the in-person review panel discussions provide the most satisfying result. We continue to use in-person reviews for awarding greater than 90% of the observing time.
Program and Observation Scheduling II
icon_mobile_dropdown
The LSST Scheduler from design to construction
Francisco Delgado, Michael A. Reuter
The Large Synoptic Survey Telescope (LSST) will be a highly robotic facility, demanding a very high efficiency during its operation. To achieve this, the LSST Scheduler has been envisioned as an autonomous software component of the Observatory Control System (OCS), that selects the sequence of targets in real time. The Scheduler will drive the survey using optimization of a dynamic cost function of more than 200 parameters. Multiple science programs produce thousands of candidate targets for each observation, and multiple telemetry measurements are received to evaluate the external and the internal conditions of the observatory. The design of the LSST Scheduler started early in the project supported by Model Based Systems Engineering, detailed prototyping and scientific validation of the survey capabilities required. In order to build such a critical component, an agile development path in incremental releases is presented, integrated to the development plan of the Operations Simulator (OpSim) to allow constant testing, integration and validation in a simulated OCS environment. The final product is a Scheduler that is also capable of running 2000 times faster than real time in simulation mode for survey studies and scientific validation during commissioning and operations.
Survey strategy optimization for the Atacama Cosmology Telescope
F. De Bernardis, J. R. Stevens, M. Hasselfield, et al.
In recent years there have been significant improvements in the sensitivity and the angular resolution of the instruments dedicated to the observation of the Cosmic Microwave Background (CMB). ACTPol is the first polarization receiver for the Atacama Cosmology Telescope (ACT) and is observing the CMB sky with arcmin resolution over 2000 sq. deg. Its upgrade, Advanced ACTPol (AdvACT), will observe the CMB in five frequency bands and over a larger area of the sky. We describe the optimization and implementation of the ACTPol and AdvACT surveys. The selection of the observed fields is driven mainly by the science goals, that is, small angular scale CMB measurements, B-mode measurements and cross-correlation studies. For the ACTPol survey we have observed patches of the southern galactic sky with low galactic foreground emissions which were also chosen to maximize the overlap with several galaxy surveys to allow unique cross-correlation studies. A wider field in the northern galactic cap ensured significant additional overlap with the BOSS spectroscopic survey. The exact shapes and footprints of the fields were optimized to achieve uniform coverage and to obtain cross-linked maps by observing the fields with different scan directions. We have maximized the efficiency of the survey by implementing a close to 24 hour observing strategy, switching between daytime and nighttime observing plans and minimizing the telescope idle time. We describe the challenges represented by the survey optimization for the significantly wider area observed by AdvACT, which will observe roughly half of the low-foreground sky. The survey strategies described here may prove useful for planning future ground-based CMB surveys, such as the Simons Observatory and CMB Stage IV surveys.
SNR-based queue observations at CFHT
Daniel Devost, Claire Moutou, Nadine Manset, et al.
To make better use of night time and the exquisite weather on Maunakea, CFHT has equipped its dome with vents and is now moving its Queued Scheduled Observing (QSO) operations toward Signal-to-Noise Ratio (SNR) observing. In this new mode, individual exposure times for a science program are estimated using a model that takes measurements of the weather conditions as input, and the science program is considered complete when the depth required by the scientific requirements is reached. These changes allow CFHT to take advantage of the excellent seeing conditions provided by Maunakea and to complete programs in a shorter time than was allocated to them.
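The core idea — scaling exposure time from measured conditions so that a target SNR, rather than a fixed clock time, defines completion — can be sketched with a standard background-limited model. This is an illustrative scaling (SNR ∝ √t, point-source flux penalized roughly linearly by seeing and transparency), not CFHT's actual QSO model; all parameter names are hypothetical.

```python
def scaled_exposure_time(t_nominal, seeing, transparency,
                         seeing_nominal=0.7):
    """Estimate the exposure time (s) needed to reach the same SNR as
    t_nominal under nominal conditions.  Because SNR grows as sqrt(t)
    in the background-limited regime, time scales as the square of the
    flux penalty.  transparency is a fraction in (0, 1]."""
    penalty = (seeing / seeing_nominal) / transparency
    return t_nominal * penalty ** 2
```

For example, doubling the seeing quadruples the required time, while excellent sub-nominal seeing lets the queue finish the program early and move on.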
Pandeia: a multi-mission exposure time calculator for JWST and WFIRST
Klaus M. Pontoppidan, Timothy E. Pickering, Victoria G. Laidler, et al.
Pandeia is the exposure time calculator (ETC) system developed for the James Webb Space Telescope (JWST) that will be used for creating JWST proposals. It includes a simulation-hybrid Python engine that calculates the two-dimensional pixel-by-pixel signal and noise properties of the JWST instruments. This allows for appropriate handling of realistic point spread functions, MULTIACCUM detector readouts, correlated detector readnoise, and multiple photometric and spectral extraction strategies. Pandeia includes support for all the JWST observing modes, including imaging, slitted/slitless spectroscopy, integral field spectroscopy, and coronagraphy. Its highly modular, data-driven design makes it easily adaptable to other observatories. An implementation for use with WFIRST is also available.
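The per-pixel signal and noise accounting an ETC performs can be illustrated with the classic CCD equation. This is a generic sketch of the technique, not Pandeia's actual engine (which works on full two-dimensional pixel grids with correlated read noise); all rates and names are hypothetical.

```python
import numpy as np

def pixel_snr(source_rate, sky_rate, dark_rate, read_noise, t, npix=1):
    """SNR of a source summed over npix pixels after integration time t.

    source_rate, sky_rate, dark_rate are in e-/s (sky and dark per pixel);
    read_noise is e- RMS per pixel per read.
    """
    signal = source_rate * t
    noise = np.sqrt(signal
                    + npix * (sky_rate + dark_rate) * t
                    + npix * read_noise ** 2)
    return signal / noise
```

Inverting this relation for `t` at a requested SNR is what turns an ETC into an exposure-time calculator rather than just an SNR calculator.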
Hinode/EIS science planning and operations tools
We present the design, implementation and maintenance of the suite of software enabling scientists to design and schedule Hinode/EIS operations. Collectively this software forms the EIS Science Planning Tools (EISPT), predominantly written in IDL (Interactive Data Language) and coupled with SolarSoft (SSW), an IDL library developed for solar missions.

Hinode is a multi-instrument, multi-wavelength mission designed to observe the Sun. It is a joint Japan/UK/US consortium (with ESA and Norwegian involvement). Launched in September 2006, its principal scientific goals are to study the Sun's variability and the causes of solar activity. Hinode operations are coordinated at ISAS (Tokyo, Japan). A daily Science Operations meeting is attended by the instrument teams and the spacecraft team. Nominally, science plan uploads cover periods of two or three days. When the forthcoming operations have been agreed, the necessary spacecraft operations parameters are created. These include scheduling for spacecraft pointing and ground stations.

The Extreme UV Imaging Spectrometer (EIS) instrument, led by the UK (the PI institute is MSSL), is designed to observe the emission spectral lines of the solar atmosphere. Observations are composed of reusable, hierarchical components, including lines lists (wavelengths of spectral lines), rasters (exposure times, line list, etc.) and studies (defines one or more rasters). Studies are the basic unit of "timeline" scheduling. They are a useful construct for generating more complex sequences of observations, reducing the planning burden. Instrument observations must first be validated.

An initial requirement was that operations be shared equally by the three main EIS teams (Japan, UK and US). Hence, a major design focus of the software was "Remote Operations", whereby any scientist in any location can run the software, schedule a science plan and send it to the spacecraft commanding team, where it is validated, combined with the science plans of the other instruments, and uploaded to the spacecraft.

As for any space mission, telemetry size and rate are important constraints. For each planning cycle the instruments are issued a maximum data allocation. EISPT interactively calculates the telemetry requirements of each observation and plan.

Autonomous operation was a challenging concept, designed to catch the early onset of various dynamic events, including solar flares. The planning cycle precludes observers responding to such short-term events. Hence, the instrument can be run in a (low-telemetry) "hunter" mode at a suitable target. Upon detecting an event, the current observation ceases and another automatically begins at the event location. This "response" observation uses a smaller field of view and a higher cadence. It is impossible to predict whether this mechanism will be activated and, if so, how much telemetry will be acquired.

The EISPT has operated successfully since it was deployed in November 2006. Nominally it is used six days a week. It has been maintained and updated as required to take account of changing mission operations. A large update was made in 2013/14 to develop the facility to coordinate observations with other solar missions (SDO/AIA and IRIS).
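The telemetry budgeting step described above — summing the data volume of each planned observation and comparing it to the allocation issued for the planning cycle — can be sketched as follows. Everything here is hypothetical (function names, the volume model, the numbers); it only illustrates the bookkeeping, not the actual EISPT implementation.

```python
def raster_volume_bits(n_exposures, window_pixels, bits_per_pixel=16):
    """Data volume of one raster: exposures x readout-window pixels x depth."""
    return n_exposures * window_pixels * bits_per_pixel

def plan_fits_allocation(rasters, allocation_bits):
    """Sum the volumes of all rasters in a plan and check the allocation.

    rasters is a list of (n_exposures, window_pixels) tuples.
    Returns (fits, total_bits).
    """
    total = sum(raster_volume_bits(n, w) for n, w in rasters)
    return total <= allocation_bits, total

# A two-raster plan against a 500 kbit allocation:
fits, total = plan_fits_allocation([(10, 1000), (5, 2000)], 500_000)
```

An interactive planner can re-run this check after every edit, so the scientist sees immediately when a plan exceeds its share of the downlink.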
An optical to IR sky brightness model for the LSST
Peter Yoachim, Michael Coughlin, George Z. Angeli, et al.
To optimize the observing strategy of a large survey such as the LSST, one needs an accurate model of the night sky emission spectrum across a range of atmospheric conditions and from the near-UV to the near-IR. We have used the ESO SkyCalc Sky Model Calculator to construct a library of template spectra for the Chilean night sky. The ESO model includes emission from the upper and lower atmosphere, scattered starlight, scattered moonlight, and zodiacal light. We have then extended the ESO templates with an empirical fit to the twilight sky emission as measured by a Canon all-sky camera installed at the LSST site. With the ESO templates and our twilight model we can quickly interpolate to any arbitrary sky position and date and return the full sky spectrum or surface brightness magnitudes in the LSST filter system. Comparing our model to all-sky observations, we find typical residual RMS values of ±0.2-0.3 magnitudes per square arcsecond.
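Interpolating a library of pre-computed template spectra to arbitrary conditions can be sketched in a few lines. The grid below is hypothetical (a single airmass axis and made-up surface brightnesses); the real model interpolates over several parameters, but the mechanism is the same.

```python
import numpy as np

# Hypothetical template sky spectra (mag/arcsec^2 vs wavelength in nm),
# tabulated at a few discrete airmasses.
wavelengths = np.linspace(350.0, 1050.0, 8)
airmass_grid = np.array([1.0, 1.5, 2.0])
templates = np.array([
    21.8 - 0.001 * (wavelengths - 350.0),   # zenith: darkest sky
    21.6 - 0.001 * (wavelengths - 350.0),
    21.3 - 0.001 * (wavelengths - 350.0),   # airmass 2: brightest
])

def sky_spectrum(airmass):
    """Linearly interpolate the template spectra to an arbitrary airmass."""
    return np.array([np.interp(airmass, airmass_grid, templates[:, i])
                     for i in range(templates.shape[1])])

spec = sky_spectrum(1.25)   # halfway between the first two templates
```

Because the expensive radiative-transfer work lives in the templates, the interpolation itself is fast enough to call inside a scheduler's inner loop.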
Measurements of airglow on Maunakea at Gemini Observatory
Katherine C. Roth, Adam Smith, Andrew Stephens, et al.
Gemini Observatory on Maunakea has been collecting optical and infrared science data for almost 15 years. We have begun a program to analyze imaging data from two of the original facility instruments, GMOS and NIRI, in order to measure sky brightness levels in multiple infrared and optical broad-band filters. The present work includes data from mid-2016 back through late-2008. We present measured background levels as a function of several operational quantities (e.g. moon phase, hours from twilight, season). We find that airglow is a significant contributor to background levels in several filters. Gemini is primarily a queue scheduled telescope, with observations being optimally executed in order to provide the most efficient use of telescope time. We find that while most parameters are well-understood, the atmospheric airglow remains challenging to predict. This makes it difficult to schedule observations which require dark skies in these filters, and we suggest improvements to ensure data quality.
Operations and Data Quality Control
icon_mobile_dropdown
Two years of LCOGT operations: the challenges of a global observatory
Nikolaus Volgenau, Todd Boroson
With 18 telescopes distributed over 6 sites, and more telescopes being added in 2016, the Las Cumbres Observatory Global Telescope (LCOGT) Network is a unique resource for time-domain astronomy. The Network's continuous coverage of the night sky, and the optimization of the observing schedule over all sites simultaneously, have enabled LCOGT users to produce significant science results. However, practical challenges to maximizing the Network's science output remain. The Network began providing observations for members of its Science Collaboration and other partners in May 2014. In the two years since then, LCOGT has made a number of improvements to increase the Network's science yield. We also now have two years' experience monitoring observatory performance; effective monitoring of an observatory that spans the globe is a complex enterprise. Here, we describe some of LCOGT's efforts to monitor the Network, assess the quality of science data, and improve communication with our users.
The dark energy survey and operations: years 1 to 3
H. T. Diehl, E. Neilsen, R. Gruendl, et al.
The Dark Energy Survey (DES) is an operating optical survey aimed at understanding the accelerating expansion of the universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. To perform the 5000 sq-degree wide field and 30 sq-degree supernova surveys, the DES Collaboration built the Dark Energy Camera (DECam), a 3 square-degree, 570-Megapixel CCD camera that was installed at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory (CTIO). DES has completed its third observing season out of a nominal five. This paper describes DES "Year 1" (Y1) to "Year 3" (Y3), the strategy, an outline of the survey operations procedures, the efficiency of operations and the causes of lost observing time. It provides details about the quality of the first three seasons' data, and describes how we are adjusting the survey strategy in the face of the El Niño Southern Oscillation.
Lessons from and methods for surveying large areas with the Hubble Space Telescope
John W. MacKenty, Ivelina Momcheva
Although the imagers on the Hubble Space Telescope only provide fields of view of a few square arc minutes, the telescope has been extensively used to conduct large surveys. These range from relatively shallow single-filter mappings, through multi-filter and multi-epoch surveys, to series of increasingly deep exposures in several carefully selected fields. HST has also conducted extensive "parallel" surveys either coordinated with a prime instrument (typically using two cameras together) or as "pure" parallel observations to capture images of areas on the sky selected by other science programs (typically spectroscopic observations). Recently, we have tested an approach permitting much faster mapping with the WFC3/IR detector under GYRO pointing control and avoiding the overhead associated with multiple target observations. This results in a four- to eight-fold increase in mapping speed (at the expense of shallower exposures). This approach enables 250-300 second exposures (reaching H~25th magnitude) covering one square degree in 100 orbits.
Spectral calibration for the Maunakea Spectroscopic Explorer: challenges and solutions
Nicolas Flagey, Alan McConnachie, Kei Szeto, et al.
The Maunakea Spectroscopic Explorer (MSE) will each year obtain millions of spectra in the optical to near-infrared, at low (R ≃ 2,500) to high (R ≃ 40,000) spectral resolution by observing >3000 spectra per pointing via a highly multiplexed fiber-fed system. Key science programs for MSE include black hole reverberation mapping, stellar population analysis of faint galaxies at high redshift, and sub-km/s velocity accuracy for stellar astrophysics. This requires highly precise, repeatable and stable spectral calibration over long timescales. To meet these demanding science goals and to allow MSE to deliver data of very high quality to the broad community of astronomers involved in the project, a comprehensive and efficient calibration strategy is being developed. In this paper, we present the different challenges we face to properly calibrate the MSE spectra and the solutions we are considering to address these challenges.
Calibration development strategies for the Daniel K. Inouye Solar Telescope (DKIST) data center
Fraser T. Watson, Steven J. Berukoff, Tony Hays, et al.
The Daniel K. Inouye Solar Telescope (DKIST), currently under construction on Haleakalā, in Maui, Hawai'i, will be the largest solar telescope in the world and will use adaptive optics to provide the highest resolution view of the Sun to date. It is expected that DKIST data will enable significant and transformative discoveries that will dramatically increase our understanding of the Sun and its effects on the Sun-Earth environment. As a result, it is a priority of the DKIST Data Center team at the National Solar Observatory (NSO) to deliver timely and accurately calibrated data to the astronomical community for further analysis. This will require a process which allows the Data Center to develop calibration pipelines for all of the facility instruments, taking advantage of similarities between them, as well as similarities to current generation instruments. There are also challenges, addressed in this article, such as the large volume of data expected and the importance of supporting both manual and automated calibrations. This paper details the calibration development strategies being used by the Data Center team at the National Solar Observatory to manage this calibration effort, so as to ensure routine delivery of high quality scientific data to users.
ALMA quality assurance: concepts, procedures, and tools
A. M. Chavan, S. L. Tanne, E. Akiyama, et al.
Data produced by ALMA for the community undergoes a rigorous quality assurance (QA) process, from the initial observation ("QA0") to the final science-ready data products ("QA2"), to the QA feedback given by the Principal Investigators (PIs) when they receive the data products (“QA3”). Calibration data is analyzed to measure the performance of the observatory and predict the trend of its evolution ("QA1").

The procedure develops over different steps and involves several actors across all ALMA locations; it is made possible by the support given by dedicated software tools and a complex database of science data, meta-data and operational parameters. The life-cycle of each involved entity is well-defined, ensuring, for instance, that "bad" data (that is, data not meeting the minimum quality standards) is never processed by the ALMA pipeline. This paper describes ALMA's quality assurance concepts and procedures, including the main enabling software components.
Data mining spacecraft telemetry: towards generic solutions to automatic health monitoring and status characterisation
P. Royer, J. De Ridder, B. Vandenbussche, et al.
We present the first results of a study aimed at finding new and efficient ways to automatically process spacecraft telemetry for automatic health monitoring. The goal is to reduce the load on the flight control team while extending the "checkability" to the entire telemetry database, and provide efficient, robust and more accurate detection of anomalies in near real time. We present a set of effective methods to (a) detect outliers in the telemetry or in its statistical properties, (b) uncover and visualise special properties of the telemetry and (c) detect new behavior. Our results are structured around two main families of solutions. For parameters visiting a restricted set of signal values, i.e. all status parameters and about one third of all the others, we focus on a transition analysis, exploiting properties of Poincaré plots. For parameters with an arbitrarily high number of possible signal values, we describe the statistical properties of the signal via its Kernel Density Estimate. We demonstrate that this allows for a generic and dynamic approach to soft-limit definition. Thanks to a much more accurate description of the signal and of its time evolution, we are more sensitive and more responsive to outliers than the traditional checks against hard limits. Our methods were validated on two years of Venus Express telemetry. They are generic for assisting in health monitoring of any complex system with large amounts of diagnostic sensor data. Not only spacecraft systems but also present-day astronomical observatories can benefit from them.
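The KDE-based dynamic soft limit described above can be sketched in a few lines: fit a density to nominal telemetry, set the limit at a low quantile of the densities the nominal samples themselves achieve, and flag anything below it. This is a minimal illustration of the technique, not the authors' implementation; the channel, numbers, and quantile are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
nominal = rng.normal(20.0, 0.5, size=2000)   # e.g. a temperature channel (degC)

# Fit a KDE to nominal telemetry.  The "soft limit" is the density value
# below which only a tiny fraction (here 0.1%) of nominal samples fall.
kde = gaussian_kde(nominal)
threshold = np.quantile(kde(nominal), 0.001)

def is_outlier(samples, kde=kde, threshold=threshold):
    """Flag values landing in regions the nominal KDE deems improbable."""
    return kde(np.atleast_1d(samples)) < threshold
```

Unlike a hard limit pair, this adapts automatically to multimodal channels (e.g. a heater that cycles between two set points), because low-density valleys *between* modes are flagged too.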
Science Operations Processes and Workflows I
icon_mobile_dropdown
The evolution of observing modes at ESO telescopes
S. Marteau, O. Hainaut, G. Hau, et al.
From the implementation of Service Mode at the NTT in 1997 to the recent "designated" Visitor Mode observations, we review how the palette of observing modes and the types of programs have evolved at ESO. In more detail, we present how the evolution of tools at the disposal of external astronomers and the Observatory staff has enabled ESO to implement new types of observations. We also take a look at the new challenges posed by upcoming instruments at the VLT and present the first steps towards enabling remote observing for ESO telescopes.
New facilities, new challenges: the telescope and instrument operators evolution at ESO
Andres Pino Pavez, Stéphane Brillant, Susana Cerda, et al.
Observatories and operational strategies are evolving in connection with the facilities that will be built. For new facilities, the strategy for dealing with the telescopes, instrumentation, data flow, reduction process and relationship with the community can be defined from the outset. However, for observatories already in place, the challenge is to adapt the processes and prepare the existing people for these changes. This talk presents detailed information about current activities, the implemented training plan, the definition of the current operational model, the involvement of the group in projects towards improving operational processes and efficiency, and the new challenges that will arise during the definition of the strategies for the new generation of instruments and facilities to be installed.
Science Operations Processes and Workflows II
icon_mobile_dropdown
Science operations at Gemini Observatory
René Rutten, Andy Adamson, Sandy Leggett
Gemini Observatory operates two 8m telescopes, one on Cerro Pachón in Chile and one on Maunakea, Hawai'i, on behalf of an international partnership. The telescopes, their software and supporting infrastructure (and some of the instrumentation) are identical at the two sites. We describe the operation of the observatory, present some key performance indicators, and discuss the outcomes in terms of publications and program completion rates. We describe how recent initiatives have been introduced into the operation in parallel with accommodating a significant budget reduction and changes in the partnership.
4MOST: science operations for a large spectroscopic survey program with multiple science cases executed in parallel
C. Jakob Walcher, Roelof S. de Jong, Tom Dwelly, et al.
The 4MOST instrument is a multi-object spectrograph to be mounted on the VISTA telescope at ESO's La Silla Paranal Observatory. 4MOST will deliver several tens of millions of spectra from surveys typically lasting 5 years. 4MOST will address Galactic and extra-galactic science cases simultaneously, i.e. by observing targets from a large number of different surveys within one science exposure. This parallel mode of operations as well as the survey nature of 4MOST require some 4MOST-specific operations features within the overall operations model of ESO. These features are necessary to minimize any changes to the ESO operations model at the La Silla Paranal Observatory on the one hand, and to enable parallel science observing and thus the most efficient use of the instrument on the other hand. The main feature is that the 4MOST consortium will not only deliver the instrument, but also contractual services to the user community, which is why 4MOST is also described as a 'facility'. We describe the operations model for 4MOST as seen by the consortium building the instrument. Among others this encompasses: 1) A joint science team for all participating surveys (i.e. including community surveys as well as those from the instrument-building consortium). 2) Common centralized tasks in observing preparation and data management provided as a service by the consortium. 3) Transparency of all decisions to all stakeholders. 4) Close interaction between science and facility operations. Here we describe our efforts to make the parallel observing mode efficient, flexible, and manageable.
Planning JWST NIRSpec MSA spectroscopy using NIRCam pre-images
Tracy L. Beck, Leonardo Ubeda, Susan A. Kassin, et al.
The Near-Infrared Spectrograph (NIRSpec) is the work-horse spectrograph at 1-5 microns for the James Webb Space Telescope (JWST). A showcase observing mode of NIRSpec is the multi-object spectroscopy with the Micro-Shutter Arrays (MSAs), which consist of a quarter million tiny configurable shutters that are 0.20″ × 0.46″ in size. The NIRSpec MSA shutters can be opened in adjacent rows to create flexible and positionable spectroscopy slits on prime science targets of interest. Because of the very small shutter width, the NIRSpec MSA spectral data quality will benefit significantly from accurate astrometric knowledge of the positions of planned science sources. Images acquired with the Hubble Space Telescope (HST) have the optimal relative astrometric accuracy for planning NIRSpec observations of 5-10 milli-arcseconds (mas). However, some science fields of interest might have no HST images, galactic fields can have moderate proper motions at the 5 mas level or greater, and extragalactic images with HST may have inadequate source information at NIRSpec wavelengths beyond 2 microns. Thus, optimal NIRSpec spectroscopy planning may require pre-imaging observations with the Near-Infrared Camera (NIRCam) on JWST to accurately establish source positions for alignment with the NIRSpec MSAs. We describe operational philosophies and programmatic considerations for acquiring JWST NIRCam pre-image observations for NIRSpec MSA spectroscopic planning within the same JWST observing Cycle.
Moving toward queue operations at the Large Binocular Telescope Observatory
The Large Binocular Telescope Observatory (LBTO), a joint scientific venture between the Istituto Nazionale di Astrofisica (INAF), LBT Beteiligungsgesellschaft (LBTB), University of Arizona, Ohio State University (OSU), and the Research Corporation, is one of the newest additions to the world's collection of large optical/infrared ground-based telescopes. With its unique, twin 8.4m mirror design providing a 22.8 meter interferometric baseline and the collecting area of an 11.8m telescope, LBT has a window of opportunity to exploit its singular status as the "first" of the next generation of Extremely Large Telescopes (ELTs). Prompted by urgency to maximize scientific output during this favorable interval, LBTO recently re-evaluated its operations model and developed a new strategy that augments classical observing with queue. Aided by trained observatory staff, queue mode will allow for flexible, multi-instrument observing responsive to site conditions. Our plan is to implement a staged rollout that will provide many of the benefits of queue observing sooner rather than later -- with more bells and whistles coming in future stages. In this paper, we outline LBTO's new scientific model, focusing specifically on our "lean" resourcing and development, reuse and adaptation of existing software, challenges presented by our one-of-a-kind binocular operations, and lessons learned. We also outline further stages of development and our ultimate goals for queue.
The 4MOST Operations System
Tom Dwelly, Andrea Merloni, Jakob C. Walcher, et al.
The 4MOST multi-object spectroscopic instrument (to be mounted on the ESO/VISTA telescope) will be used to conduct an ambitious multi-year wide area sky survey. A disparate set of science goals, requiring observation of tens of millions of galactic and extragalactic targets, must be satisfied by a unified program of observations. The 4MOST Operations System is designed to facilitate this complex task by i) providing sophisticated simulation tools that allow the science team to plan and optimise the 4MOST survey, ii) carrying out optimised medium-term scheduling using survey forecasting tools and feedback from previous observations, and iii) producing sets of observation blocks ready for execution at the telescope. We present an overview of the Operations System, highlighting the advanced facility simulator tool and the novel strategies that will enable 4MOST to achieve its challenging science goals.
Optimizing parallel observations for the JWST/MIRI instrument
Macarena García-Marín, Christopher N. A. Willmer, Alvaro Labiano, et al.
With unprecedented sensitivity and angular resolution, the Mid-Infrared Instrument (MIRI) for the James Webb Space Telescope (JWST) will provide imaging, coronagraphy, and single-slit and integral-field spectroscopy for many science observations. The operational design of the JWST is expected to include parallel modes. We have been studying the intricacies of combining MIRI (5-28.5 μm) and Near-IR Camera (NIRCam) (0.6-5 μm) imaging observations for a deep-imaging use case. Such programs present particular challenges from an operations point of view. This contribution presents the overall design of MIRI observations in parallel with NIRCam, opening the path to collaborative science opportunities between MIRI and the other JWST instruments.
Through thick and thin: quantitative classification of photometric observing conditions on Paranal
Florian Kerber, Richard R. Querel, Bianca Neureiter, et al.
A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer is used to monitor sky conditions over ESO’s Paranal observatory. It provides measurements of precipitable water vapour (PWV) at 183 GHz, which are being used in Service Mode for scheduling observations that can take advantage of favourable conditions for infrared (IR) observations. The instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. It is capable of detecting cold and thin, even sub-visual, cirrus clouds. We present a diagnostic diagram that, based on a sophisticated time series analysis of these IR sky brightness data, allows for the automatic and quantitative classification of photometric observing conditions over Paranal. The method is highly sensitive to the presence of even very thin clouds but robust against other causes of sky brightness variations. The diagram has been validated across the complete range of conditions that occur over Paranal and we find that the automated process provides correct classification at the 95% level. We plan to develop our method into an operational tool for routine use in support of ESO Science Operations.
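The paper's actual diagnostic diagram is not reproduced in the abstract; purely as an illustration of the underlying idea — detecting even thin cirrus through the short-term variability of the IR sky brightness temperature — a toy classifier might look like the following (function names, window length, and threshold are hypothetical, not taken from the paper):

```python
import numpy as np

def classify_conditions(sky_temp, window=30, var_threshold=0.2):
    """Toy classifier: flag conditions as non-photometric when the
    rolling standard deviation of the IR sky brightness temperature
    exceeds a threshold (passing thin clouds raise the short-term
    variability well above the clear-sky noise floor).
    sky_temp: 1-D array of sky brightness temperatures [K]."""
    if len(sky_temp) < window:
        raise ValueError("time series shorter than analysis window")
    # rolling standard deviation over the analysis window
    stds = [np.std(sky_temp[i:i + window])
            for i in range(len(sky_temp) - window + 1)]
    return "photometric" if max(stds) < var_threshold else "non-photometric"

# stable, clear sky: tiny variability around 200 K
clear = 200.0 + 0.02 * np.random.default_rng(0).standard_normal(200)
# passing thin cirrus: a brief warm excursion in the same series
cloudy = clear.copy()
cloudy[80:110] += 1.5
```

A real implementation would of course need the robustness against non-cloud brightness variations that the paper emphasises; this sketch only shows the variance-based core of such a scheme.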
A daily task manager for Paranal Science Operations
Cristian Marcelo Romero, Steffen Mieske, Stephane Brillant, et al.
Paranal Science Operations (SciOps) is the department in charge of operating the instruments within the global operations scheme established for the Very Large Telescope. This scheme was recently improved under the name SciOps 2.0. The main operational goals of the new scheme were to strengthen the coordination of science operations activities within, and between, the department's groups by increasing the time allocated to “high-level” activities; to improve the efficiency of the core science operations support to service mode (SM) and visitor mode (VM) observations; and to improve the quality of the astronomical data delivered to the community of Paranal users.

In this context of improving the efficiency and quality of operations within the SciOps department, we identified a strong need to optimize the management of daily operational tasks through the development of an integrated daily activity monitoring tool. This paper details the findings of the Daily Activity Monitoring Integrated Tool (DAMIT) project: the proof-of-concept phase and the first delivered phase. The technical proof of concept was the first phase in the development of a daily operation-monitoring tool for the science operations department; its primary objective was to evaluate the viability and impact of such a tool for improving the quality and efficiency of SciOps at Paranal.

The tool is now running, having completed its first development phase, which followed an on-site technical analysis of SciOps daily operations (day and night), of the current procedures used to certify the completeness and quality of those operations, and of the requirements for the new daily operation monitoring tool.
Delivering data reduction pipelines to science users
Wolfram Freudling, Martino Romaniello
The European Southern Observatory has a long history of providing specialized data processing algorithms, called recipes, for most of its instruments. These recipes are used both for operational purposes at the observatory sites and for data reduction by scientists at their home institutions. The two applications require substantially different environments for running and controlling the recipes. In this paper, we describe the ESOReflex environment that is used for running recipes on the users’ desktops. ESOReflex is a workflow-driven data reduction environment. It allows intuitive representation, execution and modification of the data reduction workflow, and has facilities for inspection of and interaction with the data. It includes fully automatic data organization and visualization, interaction with recipes, and the exploration of the provenance tree of intermediate and final data products. ESOReflex uses a number of innovative concepts that have been described in Ref. 1. In October 2015, the complete system was released to the public. ESOReflex allows highly efficient data reduction, using its internal bookkeeping database to recognize and skip previously completed steps during repeated processing of the same or similar data sets. It has been widely adopted by the science community for the reduction of VLT data.
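ESOReflex itself is a Kepler-based workflow system; as a language-neutral illustration of the bookkeeping idea — skipping a reduction step when the same step has already run on identical inputs — here is a hypothetical Python sketch (the names and cache layout are invented, not ESOReflex internals):

```python
import hashlib
import json
import pathlib

def run_step(name, func, inputs, cache_dir="reflex_cache"):
    """Minimal sketch of bookkeeping-based step skipping: a step is
    re-executed only if no cached result exists for the same
    (step name, inputs) pair. Returns (result, was_cached)."""
    key = hashlib.sha1(
        json.dumps([name, inputs], sort_keys=True).encode()).hexdigest()
    cache = pathlib.Path(cache_dir)
    cache.mkdir(exist_ok=True)
    out = cache / f"{key}.json"
    if out.exists():  # step already completed with identical inputs
        return json.loads(out.read_text()), True
    result = func(inputs)           # run the (possibly expensive) step
    out.write_text(json.dumps(result))  # record it in the bookkeeping store
    return result, False
```

The real system additionally tracks provenance of intermediate products, but the skip-on-repeat behaviour reduces to exactly this kind of keyed lookup.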
Poster Session
ASTRI SST-2M archive system: a prototype for the Cherenkov Telescope Array
Alessandro Carosi, Stefano Gallozzi, Fabrizio Lucarelli, et al.
The ASTRI project of the Italian National Institute for Astrophysics (INAF) is developing, in the framework of the Cherenkov Telescope Array (CTA), an end-to-end prototype system based on a dual-mirror small-sized Cherenkov telescope. Data preservation and accessibility are guaranteed by means of the ASTRI Archive System (AAS), which is responsible for both the on-site and off-site archiving of all data produced by the different subsystems of the so-called ASTRI SST-2M prototype. Science, calibration, and Monte Carlo data, together with the dedicated Instrument Response Functions (IRFs) (and corresponding metadata), will be properly stored and organized in different branches of the archive. A dedicated technical data archive (TECH archive) will store the engineering and auxiliary data and will be organized under a parallel database system. Through the use of a physical system archive and a few logical user archives that reflect the different archive use cases, the AAS has been designed to be independent of any specific data model and storage technology. A dedicated framework to access, browse and download the telescope data has been identified within the proposal handling utility that stores and arranges the information of the observational proposals. The development of the whole archive system follows the requirements of the CTA data archive and is currently carried out by the INAF-OAR & ASI-Science Data Center (ASDC) team. The AAS is fully adaptable and ready for the ASTRI mini-array that, composed of at least nine ASTRI SST-2M telescopes, is proposed to be installed at the CTA southern site.
Usefulness and dangers of relying on grant acknowledgments in an observatory bibliography
Sherry Winkelman, Arnold Rots
The purpose of this paper is to present a quantitative assessment of how well grant and/or program acknowledgments reflect the science impact of Chandra observing, archive, and theory programs, and to assess whether observatory acknowledgments alone are a good indicator for inclusion in an observatory bibliography. For grant citations we find that curators will often need to determine the correct grant being cited, and will need to assess the relationship between the content of a paper and the grant proposal being cited for the statistics to be meaningful. We also find that a significant number of papers can be attributed to observing programs through grant links only, and that performing full-text searches of the ADS for grant numbers can lead to additional articles for inclusion in the bibliography. Looking at acknowledgment sections as a whole, we find that using an observatory acknowledgment as the sole criterion for inclusion in a bibliography will greatly underestimate the number of science papers attributable to the observatory.
Gemini base facility operations environmental monitoring: key systems and tools for the remote operator
Martin Cordova, Andrew Serio, Francisco Meza, et al.
In 2014 Gemini Observatory started the base facility operations (BFO) project. The project’s goal was to provide the ability to operate the two Gemini telescopes from their base facilities (respectively Hilo, HI at Gemini North, and La Serena, Chile at Gemini South). BFO was identified as a key project for Gemini’s transition program, as it created an opportunity to reduce operational costs. In November 2015, the Gemini North telescope started operating from the base facility in Hilo, Hawaii. In order to provide the remote operator the tools to work from the base, many of the activities that were normally performed by the night staff at the summit were replaced with new systems and tools. This paper describes some of the key systems and tools implemented for environmental monitoring, and the design used in the implementation at the Gemini North telescope.
Approximations of the synoptic spectra of atmospheric turbulence by sums of spectra of coherent structures
Viktor V. Nosov, Vladimir P. Lukin, Eugene V. Nosov, et al.
We show that known experimental synoptic spectra of atmospheric turbulence (the Van der Hoven spectrum, 1957; the Kolesnikov-Monin spectrum, 1965) can be represented as the sum of the solitary spectra of coherent structures of various sizes (i.e., with a variety of outer scales). We also present spectra that we recorded near the mirror of the Baikal Solar Vacuum Telescope (BSVT) at the Baikal astrophysical observatory and in the dome room of the Big Alt-Azimuth Telescope (BTA) at the Special astrophysical observatory. Good agreement between the theoretical spectra and the experimental data is observed on both sides of the micrometeorological maximum. The summed spectrum approximates the experimental spectra well not only within the micrometeorological frequency interval, but also at frequencies below the micrometeorological maximum, where the observed spectra decrease.
ALMA Array Operations Group process overview
Emilio Barrios, Hector Alarcon
ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG), and the Data Management Group (DMG). The AOG includes the array operators, whose mission is to support science observations by operating the array safely and efficiently. This poster describes the AOG's processes and its management and operational tools.
System-dependent earthquake inspection procedures at Paranal Observatory
J. Osorio, A. Ramirez
Paranal Observatory is located near the city of Antofagasta in northern Chile, one of the most seismically active regions in the world. Telescopes and scientific instruments are permanently exposed to the risk of earthquake damage, ranging from optical misalignment to a complete halt of operations. A seismic monitor installed on site provides real-time data for rapid post-earthquake assessment of expected damage, determining the areas to inspect and the type and level of inspections to be carried out before regular operations resume. Using more than ten years of seismic data and its correlation with reported issues, we show that the inspection and recovery strategy can be defined according to the characteristics of the seismic event and to system-dependent criteria.
SystMon: a data visualization tool for the analysis of telemetry data
The Paranal Very Large Telescope (VLT) Observatory is a complex, multifunctional observatory in which many different systems generate telemetry parameters. As systems become more and more complex, the amount of telemetry data also increases. These telemetry data are usually saved in various data repositories. To obtain a full system overview, it is necessary to link all of these data in a meaningful and easy-to-interpret way. A step beyond simple telemetry data visualisation has been taken by developing a new tool that can combine different data sources and has powerful graphing capabilities. This new tool, called SystMon, is developed in IPython, an interactive web-browser environment built around the notebook philosophy, which combines the code with the final product. The application can be shared with colleagues, and having the code side by side makes it possible to inspect and review the process, improving the application and adding new capabilities. SystMon allows users to manipulate, generate and visualise data in different types of graphs, and to create statistical reports directly. SystMon helps the user to model, visualise and interpret telemetry data in a web-based platform for monitoring the health of systems, understanding short- and long-term behaviour, and anticipating corrective interventions.
Obsolescence of electronics at the VLT
Gerhard Hüdepohl, Juan-Pablo Haddad, Christian Lucuix
The ESO Very Large Telescope Observatory (VLT) at Cerro Paranal in Chile had its first light in 1998. Most of the telescopes’ electronic components were chosen and designed in the mid-1990s and are now around 20 years old. As a consequence, we are confronted with increasing failure rates due to aging and a lack of spare parts, since many of the components are no longer available on the market. The lifetime of large telescopes generally extends well beyond 25 years, so the obsolescence of electronic components and modules becomes an issue sooner or later, forcing the operations teams to upgrade systems to new technology to prevent the telescope from becoming inoperable. A technology upgrade is a time- and money-consuming process which in many cases is not straightforward and brings various complications. This paper presents the strategy, analysis, approach, timeline, complications, and progress of obsolescence-driven electronics upgrades at the ESO Very Large Telescope (VLT) at the Paranal Observatory.
Operation of AST3 telescope and site testing at Dome A, Antarctica
Zhaohui Shang, Yi Hu, Bin Ma, et al.
We have successfully operated the AST3 telescope remotely and robotically for a time-domain sky survey in 2015 and 2016. We set up a real-time system to support the operation of the unattended telescope, monitoring the status of all instruments as well as the weather conditions. The weather tower also provides valuable information about the site, on the highest plateau in Antarctica, demonstrating the extremely stable atmosphere above the ground and implying excellent seeing at Dome A.
Operations of the laser traffic control system in Paranal
The Laser Traffic Control System (LTCS) of the Paranal Observatory is the first component of the Adaptive Optics Facility (AOF, [8]) to enter routine operations: a laser beam avoidance tool supporting operations of an observatory equipped with five lasers and several laser-sensitive instruments, providing real-time information about ongoing and future collisions. LTCS-Paranal interfaces with ESO’s observing tools, OT and vOT. Altogether, this system allows the night operators to plan and execute their observations without worrying about possible collisions between the laser beam(s) and other laser-sensitive equipment, aiming at more efficient planning of the night and preventing time losses and laser-contaminated observations.
Planning your JWST/NIRSpec observation: pre-imaging and source catalogue
Leonardo Úbeda, Tracy Beck
Most observations with NIRSpec will require high spatial resolution images of the science field prior to performing the spectroscopy, because the standard NIRSpec target acquisition (TA) needs reference stars to deliver a positional RMS of less than 20 mas. NIRSpec TA uses 8-20 reference stars with accurate astrometry (< 5 mas), calculates the pixel centroids of the stars, transforms their pixel coordinates to the ideal sky frame, and calculates the slew needed to accurately place the science targets in the Micro-Shutter Array. For some planned observations, very high spatial resolution Hubble Space Telescope images may already be available; in other cases NIRCam observations will be performed. We describe in detail the proposed method to generate a high resolution image mosaic for planning NIRSpec spectroscopy. We show some of the data products that have been developed using actual HST observations, and describe the proposed procedure for deriving source catalogs from simulated NIRCam images. Rapid availability of these two data products will be crucial for the success of many NIRSpec observations.
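The TA steps described above — centroiding reference stars and solving for the transformation from pixel coordinates to the ideal sky frame — can be illustrated with a simplified least-squares sketch (a plain affine fit stands in for the real TA algorithm; all function names are hypothetical):

```python
import numpy as np

def centroid(cutout):
    """Intensity-weighted pixel centroid (x, y) of a small star cutout."""
    total = cutout.sum()
    ys, xs = np.indices(cutout.shape)
    return (xs * cutout).sum() / total, (ys * cutout).sum() / total

def fit_linear_transform(pix, sky):
    """Least-squares affine fit mapping pixel positions (N, 2) of the
    reference stars to their catalogued sky-frame positions (N, 2)."""
    A = np.hstack([pix, np.ones((len(pix), 1))])  # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, sky, rcond=None)
    return coef  # shape (3, 2): x-term, y-term, constant offset

def pix_to_sky(coef, pix):
    """Apply the fitted transform to one or more pixel positions."""
    pix = np.atleast_2d(pix)
    A = np.hstack([pix, np.ones((len(pix), 1))])
    return A @ coef
```

With the transform in hand, the slew is simply the difference between where the science targets fall on the sky frame and where the MSA slits require them to be; the flight algorithm adds distortion terms and outlier rejection that this sketch omits.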
Sun avoidance strategies at the Large Millimeter Telescope
The Large Millimeter Telescope observatory is extending its night-time operation into the daytime. A real-time sun avoidance strategy was therefore implemented in the control system to avoid excessive heating of, and damage to, the secondary mirror and the prime focus.

The LMT uses an "on-the-fly" trajectory generator that receives as input the target location of the telescope and in turn outputs a commanded position to the servo system. The sun avoidance strategy is also implemented "on-the-fly": it intercepts the input to the trajectory generator and alters that input to avoid the sun. Two sun avoidance strategies were explored. The first uses a potential field approach, where the sun is represented as a high-potential obstacle in the telescope's workspace and the target location as a low-potential goal; the potential field is repeatedly recalculated as the sun and the telescope move, and the telescope follows the force induced by this field. The second is based on path planning using visibility graphs, where the sun is represented as a polygonal obstacle and the telescope follows the shortest path from its actual position to the target location via the vertices of the sun's polygon.

The visibility graph approach was chosen as the favorable strategy due to the efficiency of its algorithm and the simplicity of its computation.
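As a much-simplified illustration of the visibility-graph idea, the sketch below treats the sun exclusion zone as a circle and detours via the cheapest vertex of a polygon circumscribing it whenever the direct path is blocked (the LMT implementation uses a true polygonal visibility graph; all names and parameters here are hypothetical):

```python
import math

def segment_hits_circle(p, q, c, r):
    """True if the segment p->q passes within radius r of centre c."""
    px, py = p; qx, qy = q; cx, cy = c
    dx, dy = qx - px, qy - py
    seg2 = dx * dx + dy * dy
    if seg2 == 0:
        return math.hypot(px - cx, py - cy) < r
    # projection parameter of c onto the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg2))
    nx, ny = px + t * dx, py + t * dy   # nearest point on the segment
    return math.hypot(nx - cx, ny - cy) < r

def plan_path(start, goal, sun, radius, n_vertices=8, margin=1.1):
    """Route from start to goal; if the direct segment crosses the sun
    exclusion circle, detour via the cheapest clear vertex of a polygon
    circumscribing the circle (a one-hop visibility-graph shortcut)."""
    if not segment_hits_circle(start, goal, sun, radius):
        return [start, goal]
    best = None
    for k in range(n_vertices):
        a = 2 * math.pi * k / n_vertices
        v = (sun[0] + margin * radius * math.cos(a),
             sun[1] + margin * radius * math.sin(a))
        if segment_hits_circle(start, v, sun, radius):
            continue
        if segment_hits_circle(v, goal, sun, radius):
            continue
        cost = math.dist(start, v) + math.dist(v, goal)
        if best is None or cost < best[0]:
            best = (cost, v)
    return [start, best[1], goal]
```

A full visibility graph would allow multi-hop routes around the obstacle; for a single convex exclusion zone the one-hop detour above already captures why the method is cheap to compute.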
Observing with FIFI-LS on SOFIA: time estimates and strategies to use a field imaging spectrometer on an airborne observatory
Christian Fischer, Aaron Bryant, Siman Beckmann, et al.
Observing on the Stratospheric Observatory for Infrared Astronomy (SOFIA) requires a strategy that takes the specific circumstances of an airborne platform into account. Observations of a source cannot be extended or shortened on the spot due to flight path constraints. Still, no exact prediction of the time on source is available, since wind and weather conditions, and sometimes technical issues, intervene. Observations have to be planned to maximize observing efficiency while maintaining full flexibility for changes during the observation. The complex nature of observations with FIFI-LS - such as the interlocking cycles of the mechanical gratings, telescope nodding and dithering - is also considered in the observing strategy. Since SOFIA Cycle 3, FIFI-LS has been available to general investigators. General investigators must therefore be able to define the necessary parameters simply, without being familiar with the instrument, while still obtaining efficient and flexible observations. We describe the observing process with FIFI-LS, including the integration time estimate, the mapping and dithering setup, and aspects of the scripting for the actual observations performed in flight. We also give an overview of the observing scenarios that have proven useful for FIFI-LS.
High precision tracking method for solar telescopes
Jingjing Guo, Yunfei Yang, Song Feng, et al.
This paper introduces a high-precision real-time tracking method for solar telescopes based on the barycenter of full-disk solar images. To make the calculation accurate and reliable, a series of strict logical checks was applied, such as setting a gray threshold, judging the displacement of the barycenter, and measuring the deviation from a perfect disk. A closed-loop control system was designed around the method. We located the barycenter of the full-disk images, recorded in real time by a large-array CCD image sensor, and eliminated noise caused by bad weather, such as clouds and fog. The displacement of the barycenter was analyzed and converted into a control signal that drove the motor to adjust the telescope axis. An Ethernet interface was also provided for remote control. In observations, the precision of this new method was better than 1″ per 30 minutes.
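A minimal sketch of the barycenter computation with the sanity checks mentioned above (gray threshold, disk-roundness test) might look like this — thresholds and names are illustrative, not taken from the paper:

```python
import numpy as np

def solar_barycenter(frame, threshold=50, roundness_tol=0.2):
    """Barycenter of the solar disk with two of the sanity checks the
    method describes: a gray-level threshold to reject sky background,
    and a roundness test (measured area vs. area implied by the disk
    extent) to reject frames where clouds bite into the disk.
    Returns (x, y) in pixels, or None if no valid disk is found."""
    mask = frame > threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no disk found (e.g. heavy cloud)
    cx, cy = xs.mean(), ys.mean()
    # a perfect disk of radius r covers pi*r^2 pixels; compare the
    # measured area with the one implied by the maximum extent
    r = 0.5 * max(xs.max() - xs.min(), ys.max() - ys.min())
    if abs(mask.sum() - np.pi * r**2) > roundness_tol * np.pi * r**2:
        return None  # not disk-like: reject this frame
    return cx, cy
```

In the closed loop, the difference between successive barycenters would be scaled into a motor correction; the sketch only covers the measurement side.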
ESO Phase 3 automatic data validation: groovy-based tool to assure the compliance of the reduced data with the Science Data Product Standard
L. Mascetti, V. Forchì, M. Arnaboldi, et al.
The ESO Phase 3 infrastructure provides a channel to submit reduced data products for publication to the astronomical community and for long-term data preservation in the ESO Science Archive Facility. To be integrated into Phase 3, data must comply with the ESO Science Data Product Standard regarding format (one unique standard data format is associated with each type of product, such as image, spectrum, or IFU cube) and required metadata. ESO has developed a Groovy-based tool that carries out an automatic validation of the submitted reduced products, triggered when data are uploaded and then submitted. Here we present how the tool is structured and which checks are implemented.
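The actual tool is Groovy-based; as a toy illustration of checking required metadata against a per-product-type requirement list, a Python sketch might be (the keyword subset is illustrative only and far smaller than the real standard):

```python
# Hypothetical required-keyword lists per product type;
# the real Science Data Product Standard defines many more.
REQUIRED = {
    "image": ["ORIGIN", "TELESCOP", "INSTRUME", "MJD-OBS", "PRODCATG"],
    "spectrum": ["ORIGIN", "INSTRUME", "SPEC_RES", "WAVELMIN", "WAVELMAX"],
}

def validate_header(header, product_type):
    """Return a list of validation errors for one product header
    (an empty list means the header passed this check)."""
    if product_type not in REQUIRED:
        return [f"unknown product type: {product_type}"]
    return [f"missing mandatory keyword: {key}"
            for key in REQUIRED[product_type] if key not in header]
```

The real validator also checks the data format itself (HDU structure, units, value ranges); keyword presence is just the first and simplest layer of such a pipeline.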
Automated scheduler improvements and generalizations for the Automated Planet Finder
The Automated Planet Finder (APF) was originally designed as a single-purpose facility to search for exoplanets. The APF, however, has become a general-use observatory serving astronomers the world over. We describe the improvements to our operations software that both optimize finding planets with known periods and support a much broader community of astronomers with a variety of interests and requirements. These include a variety of observing modes beyond the originally envisioned fixed target lists, such as time-dependent priorities to meet the needs of rapidly varying targets, and improved tools for simulating observing cadence for the planet-hunting teams. We discuss the underlying software for the APF, illustrating why its simplicity of use allows users to write software that focuses on scientific productivity. Because of this simplicity, we can develop scheduling software that is easily integrated into the APF operations suite. We test these new scheduling modes using a nightly simulator based on historical weather and seeing data. After discussing this new simulation tool, we measure how well the methods work over a 36-month simulated campaign to follow up transiting targets. We find that the data yield of each of the tested schemes is similar. Therefore, we can focus on the best potential scientific return with little concern about the impact on the number or duration of observations.
Trends and developments in VLT data papers as seen through telbib
Dominic Bordelon, Uta Grothkopf, Silvia Meakins, et al.
The ESO Telescope Bibliography (telbib; http://telbib.eso.org) is a database of refereed papers published by the ESO users community. It links data in the ESO Science Archive with the published literature, and vice versa. Developed and maintained by the ESO library, telbib also provides insights into the organization’s research output and impact as measured through bibliometric studies. Numerous reports, statistics, and visualizations derived from telbib help to understand the way in which the user community uses ESO/VLT data in publications. Based on selected use cases, we will showcase recent trends and developments.
Large collaboration in observational astronomy: the Gemini Planet Imager exoplanet survey case
Franck Marchis, Paul G. Kalas, Marshall D. Perrin, et al.
The Gemini Planet Imager (GPI) is a next-generation high-contrast imager built for the Gemini Observatory. The GPI exoplanet survey (GPIES) consortium is made up of 102 researchers from ~28 institutions in North and South America and Europe. In November 2014, we launched a search for young Jovian planets and debris disks. In this paper, we discuss how we have coordinated the work done by this large team to improve the technical and scientific productivity of the campaign, and describe lessons we have learned that could be useful for future instrumentation-based astronomical surveys. The success of GPIES rests mostly on its decentralized structure, a clear definition of policies signed by each member, and the heavy use of modern tools for communicating, exchanging information, and processing data.
Data reduction pipelines for the Keck Observatory Archive
H. D. Tran, R. Cohen, A. Colson, et al.
The Keck Observatory Archive (KOA) currently serves ~42 TB of data spanning over 20 years from all ten past and current facility instruments at Keck. Although most of the available data are in raw form, for four instruments (HIRES, NIRC2, OSIRIS, LWS) quick-look, browse products generated by automated pipelines are also offered to facilitate assessment of the scientific content and quality of the data. KOA underwrote the update of the MAKEE package to support reduction of the CCD upgrade to HIRES, developed scripts for reduction of NIRC2 data, and automated the existing OSIRIS and LWS data reduction packages. We describe in some detail the recently completed automated pipeline for NIRSPEC, which will be used to create browse products in KOA and made available for quick-look inspection of the data by observers at the telescope. We review the currently available data reduction tools for Keck data, and present our plans and anticipated priorities for the development of automated pipelines and the release of reduced data products for the rest of the current and future instruments. We also anticipate that Keck's newest instrument, NIRES, which will be delivered with a fully automated pipeline, will be the first to have both raw and level-1 data ingested at commissioning.
E-ELT HIRES the high resolution spectrograph for the E-ELT: integrated data flow system
Guido Cupani, Stefano Cristiani, Valentina D'Odorico, et al.
The current E-ELT instrumentation plan foresees a High Resolution Spectrograph, conventionally designated HIRES, whose Phase A study started in 2016. An international consortium (stemming from the existing "HIRES initiative") is conducting a preliminary study of a modular E-ELT instrument able to provide high-resolution spectroscopy (R ~ 100,000) over a wide wavelength range (0.37-2.5 μm). For data treatment (which encompasses both the reduction and the analysis procedures), an end-to-end approach has been adopted, to directly extract scientific information from the observations with a coherent set of interactive, properly validated software modules. This approach is favoured by the specific science objectives of the instrument, which pose unprecedented requirements in terms of measurement precision and accuracy. In this paper we present the architecture envisioned for the HIRES science software, building on the lessons learned in the development of the data analysis software for the ESPRESSO ultra-stable spectrograph for the VLT.
Aircraft avoidance for laser propagation at the Large Binocular Telescope Observatory: life under a busy airspace
A key aspect of LGS operations is the implementation of measures to prevent the illumination of airplanes flying overhead. The most basic one is the use of “aircraft spotters” in permanent communication with the laser operator. Although this is the default method accepted by the FAA to authorize laser propagation, it relies on the inherent subjectivity of human perception, and requires keeping a small army of spotters to cover all the nights scheduled for propagation. Following the successful experience of other observatories (Keck and APO), we have installed an automatic aircraft detection system developed at UCSD known as TBAD (Transponder-Based Aircraft Detection). The system has been in continuous operation since April 2015, collecting detection data every night the telescope is open. We present a description of our system implementation and operational procedures. We also describe and discuss the analysis of the TBAD detection data, which shows how busy our airspace is, and the expected impact on the operational efficiency of the observatory.
Genetically optimizing weather predictions
humidity, air pressure, wind speed and wind direction) into a database. Built upon this database, we have developed a remarkably simple approach to deriving a functional weather predictor. The aim is to provide up-to-the-minute local weather predictions in order to, for example, prepare dome environment conditions for night-time operations, or to plan, prioritize and update weather-dependent observing queues.

In order to predict the weather for the next 24 hours, we take the current live weather readings and search the entire archive for similar conditions. Predictions are made against an averaged, subsequent 24 hours of the closest matches for the current readings. We use an Evolutionary Algorithm to optimize our formula through weighted parameters.

The accuracy of the predictor is routinely tested and tuned against the full, updated archive to account for seasonal trends and overall climate shifts. The live (updated every 5 minutes) SALT weather predictor can be viewed at: http://www.saao.ac.za/~sbp/suthweather_predict.html
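The analog-search scheme described above can be sketched in a few lines (array shapes and names are assumptions; in the real predictor, the parameter weights are the quantities tuned by the evolutionary algorithm):

```python
import numpy as np

def predict_next_24h(archive, current, weights, n_matches=10):
    """Analog forecasting: find the archived readings closest (in
    weighted Euclidean distance) to the current conditions, then
    average what followed each of them over the next 24 hours.
    archive: (N, P) array of 5-minute readings, chronological order.
    current: (P,) current reading.
    weights: (P,) per-parameter weights.
    Returns the (288, P) averaged subsequent 24 h (288 = 24 h / 5 min)."""
    horizon = 288
    usable = len(archive) - horizon  # matches must have a full 24 h after them
    d = np.sqrt((((archive[:usable] - current) * weights) ** 2).sum(axis=1))
    best = np.argsort(d)[:n_matches]  # indices of the closest analogs
    return np.mean([archive[i + 1:i + 1 + horizon] for i in best], axis=0)
```

An evolutionary tuner would then score candidate weight vectors by replaying this predictor over held-out stretches of the archive and keeping the weights with the lowest forecast error.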
Detailed design of a deployable tertiary mirror for the Keck I telescope
J. Xavier Prochaska, Chris Ratliff, Jerry Cabak, et al.
Motivated by the ever increasing pursuit of science with the transient sky (dubbed Time Domain Astronomy or TDA), we are fabricating and will commission a new deployable tertiary mirror for the Keck I telescope (K1DM3) at the W.M. Keck Observatory. This paper presents the detailed design of K1DM3 with emphasis on the opto-mechanics. This project has presented several design challenges. Foremost are the competing requirements to avoid vignetting the light path when retracted against a sufficiently rigid system for high-precision and repeatable pointing. The design utilizes an actuated swing arm to retract the mirror or deploy it into a kinematic coupling. The K1DM3 project has also required the design and development of custom connections to provide power, communications, and compressed air to the system. This NSF-MRI funded project is planned to be commissioned in Spring 2017.
DAG telescope site studies and infrastructure for possible international co-operations
The selected site for the 4 m DAG (Eastern Anatolian Observatory in Turkish) telescope is “Karakaya Ridge”, at 3170 m altitude (3150 m after summit management). The telescope’s optical design was performed by the DAG technical team to allow infrared observation at high angular resolution, with its adaptive optics system to be built in Turkey. This paper presents a brief introduction to the DAG telescope design; the planned instrumentation; the meteorological data collected since 2008, clear-night counts and short-term DIMM observations; the current infrastructure to host auxiliary telescopes; the auxiliary buildings that assist operations; the observatory design; and the coating unit plans, along with possible collaborations in terms of instrumentation and science programs.
Education and public engagement in observatory operations
Pavel Gabor, Louis Mayo, Dennis Zaritsky
Education and public engagement (EPE) is an essential part of astronomy’s mission. New technologies, remote observing and robotic facilities are opening new possibilities for EPE. A number of projects (e.g., Telescopes In Education, MicroObservatory, Goldstone Apple Valley Radio Telescope and UNC’s Skynet) have developed new infrastructure, and a number of observatories (e.g., the University of Arizona’s “full-engagement initiative” towards its astronomy majors, and the Vatican Observatory’s collaboration with high schools) have dedicated their resources to practical instruction and EPE. Some of the facilities are purpose-built; others are legacy telescopes upgraded for remote or automated observing. Networking among institutions is most beneficial for EPE, and its implementation ranges from informal agreements between colleagues to advanced software packages with web interfaces. The deliverables range from reduced data to telescope time and hands-on instruction in operating a telescope. EPE represents a set of tasks and challenges distinct from research applications of the new astronomical facilities and operation modes. In this paper we examine the experience of several EPE projects and draw some lessons and challenges for observatory operation.
Monitoring the performance of the Southern African Large Telescope
Christian Hettlage, Chris Coetzee, Petri Väisänen, et al.
The efficient operation of a telescope requires awareness of its performance on a daily and long-term basis. This paper outlines the Fault Tracker, WebSAMMI and the Dashboard used by the Southern African Large Telescope (SALT) to achieve this aim. Faults are mostly logged automatically, but the Fault Tracker allows users to add and edit faults. The SALT Astronomer and SALT Operator record weather conditions and telescope usage with WebSAMMI. Various efficiency metrics are shown for different time periods on the Dashboard. A kiosk mode for displaying on a public screen is included. Possible applications for other telescopes are discussed.
Quality control and data flow operations of SPHERE
Wolfgang Hummel, Julien H. V. Girard, Julien Milli, et al.
Since April 2015, ESO has operated the new planet-finder instrument SPHERE, whose three arms are fed by a common path with extreme AO and a coronagraph. Observing modes include dual-band imaging, long-slit spectroscopy, IFS and high-contrast polarimetry. We report on the implementation of the SPHERE data flow and quality control system and on operational highlights from the first year of operations. This includes some unconventional parts of the SPHERE calibration plan, such as special rules for the selection of filters and the measures taken for an optimized calibration of the two polarimetric channels of the ZIMPOL arm. Finally, we report on the significance of the SPHERE quality control system, its relation to the data reduction pipeline, and the previously undocumented instrumental features that have been revealed so far.
Development of an automated data acquisition and processing pipeline using multiple telescopes for observing transient phenomena
Vaibhav Savant, Niall Smith
We report on the current status in the development of a pilot automated data acquisition and reduction pipeline based around the operation of two nodes of remotely operated robotic telescopes in California, USA and Cork, Ireland. The observatories are primarily used as a testbed for automation and instrumentation and as a tool to facilitate STEM (Science, Technology, Engineering, Mathematics) promotion. The Irish node is situated at Blackrock Castle Observatory (operated by Cork Institute of Technology) and consists of two optical telescopes – 6” and 16” OTAs housed in two separate domes – while the node in California is its 6” replica. Together they form a pilot Telescope ARrAy known as TARA. QuickPhot is an automated data reduction pipeline designed primarily to shed more light on the microvariability of blazars, employing precision optical photometry and using data from the TARA telescopes as they constantly monitor predefined targets whenever observing conditions are favourable. After carrying out aperture photometry, if any variability above a given threshold is observed, the reporting telescope will communicate the source concerned and the other nodes will follow up with multi-band observations, taking advantage of the fact that they are located in strategically separated time zones. Ultimately we wish to investigate the applicability of Shock-in-Jet and Geometric models. These try to explain the processes at work in AGNs which result in the formation of jets, by looking for temporal and spectral variability in TARA multi-band observations. We are also experimenting with a Two-channel Optical PHotometric Imaging CAMera (TOΦCAM) that we have developed and optimised for simultaneous two-band photometry on our 16” OTA.
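The follow-up trigger described above — flag a target when its latest photometric point deviates from its recent history by more than a set threshold — could be sketched as follows. This is an illustrative toy, not the actual QuickPhot code:

```python
import statistics

def variability_trigger(magnitudes, threshold_sigma=3.0):
    """Toy sketch of a variability trigger: return True when the latest
    magnitude deviates from the mean of the earlier history by more than
    threshold_sigma standard deviations of that history."""
    history, latest = magnitudes[:-1], magnitudes[-1]
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(latest - mean) > threshold_sigma * sigma
```

In the TARA scenario, a True result would prompt the reporting node to notify the other node for multi-band follow-up.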
Model-based fault detection and diagnosis in ALMA subsystems
The Atacama Large Millimeter/submillimeter Array (ALMA) observatory, with its 66 individual telescopes and other central equipment, generates a massive set of monitoring data every day, collecting information on the performance of a variety of critical and complex electrical, electronic and mechanical components. This data is crucial for most troubleshooting efforts performed by engineering teams. More than 5 years of accumulated data and expertise allow for a more systematic approach to fault detection and diagnosis. This paper presents model-based fault detection and diagnosis techniques to support corrective and predictive maintenance in a 24/7 minimum-downtime observatory.
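As a minimal illustration of the model-based idea — not the actual ALMA implementation, whose details are in the paper — one can compare each sensor reading against a nominal model of the component and flag readings whose residual exceeds a threshold:

```python
from typing import Callable, List, Tuple

def detect_faults(
    readings: List[Tuple[float, float]],   # (input, measured output) pairs
    model: Callable[[float], float],       # nominal model of the healthy component
    threshold: float,
) -> List[int]:
    """Residual-based fault detection sketch: return the indices of readings
    whose |measured - predicted| residual exceeds the threshold."""
    faults = []
    for i, (x, y_measured) in enumerate(readings):
        residual = abs(y_measured - model(x))
        if residual > threshold:
            faults.append(i)
    return faults
```

Diagnosis then consists of mapping which residuals fire to a candidate faulty component; predictive maintenance watches for residuals that grow over time.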
Investigating the effect of atmospheric turbulence on mid-IR data quality with VISIR
Mario E. van den Ancker, Daniel Asmus, Christian Hummel, et al.
A comparison of the FWHM of standard stars observed with VISIR, the mid-IR imager and spectrometer at ESO's VLT, with expectations for the achieved mid-IR image quality based on the optical seeing and the wavelength dependence of atmospheric turbulence shows that for N-band data (7–12 μm), VISIR realizes an image quality about 0.1" worse than expected from the optical seeing. This difference is large compared to the median N-band image quality of 0.3–0.4" achieved by VISIR. We also note that other ground-based mid-IR imagers show similar image quality in the N-band. We attribute this difference to an underestimate of the effect of the atmosphere in the mid-IR in the parameters adopted so far for the extrapolation of optical to mid-IR seeing. Adopting an average outer length scale of the atmospheric turbulence above Paranal of L0 = 46 m (instead of the previously used L0 = 23 m) improves the agreement between predicted and achieved image quality in the mid-IR while having only a modest effect on the predicted image quality at shorter wavelengths (although a significant amount of scatter remains, suggesting that L0 may not be constant in time). We therefore advocate adopting L0 = 46 m for the average outer length scale of atmospheric turbulence above Cerro Paranal for real-time scheduling of observations on VLT UT3 (Melipal).
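The optical-to-mid-IR extrapolation at issue can be sketched with the standard Kolmogorov wavelength scaling (r0 ∝ λ^(6/5)) and the von Kármán finite-outer-scale correction of Tokovinin (2002). The function below is an illustrative sketch under those assumptions, not the authors' actual code:

```python
import math

def predicted_fwhm(seeing_500nm, wavelength_um, L0=46.0):
    """Predict seeing-limited FWHM (arcsec) at a given wavelength from the
    zenith optical seeing at 500 nm (arcsec), for a finite outer scale L0
    in metres (46 m is the value advocated in the abstract above)."""
    lam = wavelength_um * 1e-6                            # wavelength in metres
    # Fried parameter at 500 nm implied by the optical seeing.
    r0_500 = 0.98 * 500e-9 / math.radians(seeing_500nm / 3600.0)
    r0 = r0_500 * (lam / 500e-9) ** 1.2                   # r0 scales as lambda^(6/5)
    fwhm_kolm = math.degrees(0.98 * lam / r0) * 3600.0    # Kolmogorov FWHM, arcsec
    # von Karman correction for a finite outer scale (Tokovinin 2002),
    # clamped at zero where the approximation breaks down.
    correction = max(0.0, 1.0 - 2.183 * (r0 / L0) ** 0.356)
    return fwhm_kolm * math.sqrt(correction)
```

Because r0 grows rapidly with wavelength, the outer-scale correction is far stronger in the N-band than in the optical, which is why the choice of L0 matters so much for mid-IR image-quality predictions while barely affecting the optical.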
DAG: a new observatory and a prospective observing site for other potential telescopes
DAG (Eastern Anatolia Observatory, read as “Doğu Anadolu Gözlemevi” in Turkish) is the newest and largest observatory of Turkey, constructed at an altitude of 3150 m in Konaklı/Erzurum province, with an optical and near-infrared telescope (4 m in diameter) and a robust observing site infrastructure. This national project consists of three main phases: DAG (Telescope, Enclosure, Buildings and Infrastructures), FPI (Focal Plane Instruments and Adaptive Optics) and MCP (Mirror Coating Plant). All three phases are supported by the Ministry of Development of Turkey, with funding awarded to Atatürk University. The telescope, enclosure and building tenders were completed in 2014, 2015 and 2016, respectively. The final designs of the telescope, enclosure and building, and almost all main infrastructure components of the DAG site, have been completed: mainly road work, geological and atmospheric surveys, electric and fiber cabling, the water line, the generator system, and the cable car to the summit. This poster describes recent developments in the DAG project and discusses possible future collaborations for various telescopes that could be constructed at the site.
The MIRI Medium Resolution Spectrometer calibration pipeline
A. Labiano, R. Azzollini, J. Bailey, et al.
The Mid-Infrared Instrument (MIRI) Medium Resolution Spectrometer (MRS) is the only mid-IR Integral Field Spectrometer on board the James Webb Space Telescope. The complexity of the MRS requires a very specialized pipeline, with some steps not present in the pipelines of other JWST instruments, such as fringe corrections and wavelength offsets, and with different algorithms for point-source and extended-source data. The MRS pipeline also has two variants: the baseline pipeline, optimized for most foreseen science cases, and the optimal pipeline, in which extra steps are needed for specific science cases. This paper provides a comprehensive description of the MRS Calibration Pipeline from uncalibrated slope images to final scientific products, with brief descriptions of its algorithms, input and output data, and the accessory data and calibration data products necessary to run the pipeline.
Linear and Angular Momentum detection of a general spherical TEM and DEM radio beam wave with a quadratic-order system processor in a state-of-the-art technology implementation: a three-axis sensor-array quadratic-order correlator for detection of the 21 cm radiation from the Early Universe
The paper focuses on an innovative spherical-wave-beam quadratic-order processor, HSCS-1. It is a Coulomb-gauge-based IP to directly measure, ∀t and at any single point P, ∀P (including along the propagation axis), the quadratic-order Poynting vector, with both the complex Linear Momentum (LiM) and Angular Momentum (AnM) contributions, as well as the mutual quadratic-order coherence function of any totally or pseudo-monochromatic observed beam wave.

The proposed spherical quadratic-order method directly measures the spherical complex OAM (invariant in time and along the propagation axis) composed by the observed beam wave modes. Such solenoidal energy modes become relevant for measuring the radiation of far-distant sources (for example, at distances greater than billions of light years).

Furthermore, HSCS-1 simultaneously and directly measures the mutual (spatial as well as temporal) complex coherence of any general complex divergent or not strictly TEM (for example, TEM+DEM) observed radiation. Typically, TEM+DEM radiations are characterized by N = LPM + 1 complex wave beam modes, where N is the number of considered EM field modes, as large as requested; N and L are integers with values in the closed interval [0; ∞]; P and M are integers with values in the closed interval [1; ∞]; n = 0, 1, …, N is the mode or beam-channel index, with l = 0, 1, …, L; p = 1, …, P; and m = 1, …, M; and n = l = 0 is the fundamental mode index.

Only the wave beam modes satisfying the related monochromatic Helmholtz wave-equation solutions are considered here. As is well known in physics, only by adopting a quadratic-order energy processor is it possible, ∀t, to simultaneously and directly measure at P, ∀P(θ; Φ; z) and ∀(P − P0), both the proper position P0 and the quantity of motion (proper space-time variations) of the observed general radiation source, or, by a Fourier transformation, to simultaneously and directly measure its proper phase and frequency spectrum variations.
Evolution of operations for the Survey Telescope at Paranal
Cristian Marcelo Romero, Steffen Mieske, Stéphane Brillant, et al.
Operations of the Survey Telescopes at Paranal Observatory began in 2009. The surveys aim to observe with a large field of view, targeting much fainter sources and covering wide areas of sky quickly. The first to enter operations was VISTA (Visible and Infrared Survey Telescope for Astronomy), followed by the VST (VLT Survey Telescope). The Survey Telescopes introduced a change to the operational model of the time: observations were conducted entirely by the telescope and instrument operator, without the aid of a support astronomer. This prompted the gradual and steady improvement of tools for the operation of the observatory, both generally and for the Survey Telescopes in particular. Examples of these enhancements include control systems for image quality, selection of OBs, and logging of evening activities, among others. However, the new-generation instruments at the Very Large Telescope (VLT) pose a new challenge to the observatory from a scientific and operational point of view: as these new systems are more demanding and complex, they are more complicated to operate and require additional support. Hence, the focus of this study is to explore the possible development and optimization of the operations of the Survey Telescopes, which would give greater operational flexibility with regard to the new-generation instruments. Moreover, we aim to evaluate the feasibility of redistributing telescope operators during periods of increased demand from other VLT systems.
PHOENIX: the production line for science data products at ESO
Reinhard Hanuschik, Lodovico Coccato
In the past three years the QC group at ESO has installed a production line for science-grade data products. With the focus on spectroscopic observations, these in-house generated data products are complementary to the externally provided data products from the surveys. The production line combines efficient mass production (more than one million spectra have been generated so far), previews, and quality control. All data products are available to the community on the ESO archive interface.
ESO science data product standard for 1D spectral products
Alberto Micol, Magda Arnaboldi, Nausicaa A. R. Delmotte, et al.
The ESO Phase 3 process allows the upload, validation, storage, and publication of reduced data through the ESO Science Archive Facility. Since its introduction, ~2 million data products have been archived and published; 80% of them are one-dimensional extracted and calibrated spectra. Central to Phase 3 is the ESO science data product standard, which defines the metadata and data format of any product. This contribution describes the ESO data standard for 1D spectra, its adoption by the reduction pipelines of selected instrument modes for in-house generation of reduced spectra, and the enhanced archive legacy value. Archive usage statistics are provided.
Building a pipeline of talent for operating radio observatories
The National Radio Astronomy Observatory’s (NRAO) National and International Non-Traditional Exchange (NINE) Program teaches concepts of project management and systems engineering in a focused, nine-week, continuous effort that includes a hands-on build project with the objective of constructing and verifying the performance of a student-level basic radio instrument. Completing this build with a project management (PM)/systems engineering (SE) approach based on internationally recognized standards demonstrates clearly to the learner the positive net effects of following methodical approaches to achieve optimal results. It also exposes the learner to basic radio science theory. An additional simple research project is used both to impress the methodical approach upon the learner and to provide a basic understanding of the learner's functional area of interest.

This program is designed to teach sustainable skills throughout the full spectrum of activities associated with constructing, operating and maintaining radio astronomy observatories. NINE Program learners thereby return to their host sites and implement the program in their own location as a NINE Hub. This requires forming a committed relationship (through a formal Letter of Agreement), establishing a site location, and developing a program that takes into consideration the needs of the community they represent. The anticipated outcome of this program is worldwide partnerships with fast-growing radio astronomy communities, designed to facilitate the exchange of staff and the mentoring of under-represented groups of learners, thereby developing a strong pipeline of global talent to construct, operate and maintain radio astronomy observatories.
Training telescope operators and support astronomers at Paranal
Henri M. J. Boffin, Dimitri A. Gadotti, Joe Anderson, et al.
The operations model of the Paranal Observatory relies on the work of efficient staff to carry out all the daytime and nighttime tasks. This is highly dependent on adequate training. The Paranal Science Operations department (PSO) has a training group that devises a well-defined and continuously evolving training plan for new staff, in addition to broadening and reinforcing courses for the whole department. This paper presents the training activities for and by PSO, including recent astronomical and quality control training for operators, as well as adaptive optics and interferometry training of all staff. We also present some future plans.
SPOT: an optimization software for dynamic observation programming
Anne-Marie Lagrange, Pascal Rubini, Nadia Brauner-Vettier, et al.
The surveys dedicated to the search for extrasolar planets with the recently installed extreme-AO, high-contrast Planet Imagers generally include hundreds of targets, to be observed sometimes repeatedly, generally in Angular Differential Imaging mode. Each observation has to fulfill several time-dependent constraints, which makes the manual elaboration of an optimized schedule impossible. We have developed SPOT, an easy-to-use software tool with a graphical interface that allows both long-term (months, years) and dynamic (nightly) optimized scheduling of such surveys, taking into account all relevant constraints.

Tests show that excellent schedules and high filling efficiencies can be obtained with execution times compatible with real-time scheduling, making it possible to take into account complex constraints and to dynamically adapt the schedule to unexpected circumstances even during execution. Moreover, such a tool is very valuable during survey preparation for building target lists and calendars.

SPOT could easily be adapted for scheduling observations with other instruments or telescopes.
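As a toy illustration of the window-constrained scheduling problem SPOT addresses — a simple greedy heuristic, not SPOT's actual optimization algorithm — consider placing each observation earliest-deadline-first within its visibility window:

```python
from typing import List, Tuple

def greedy_schedule(targets: List[Tuple[str, float, float, float]],
                    night_start: float, night_end: float) -> List[Tuple[str, float]]:
    """Toy window-constrained scheduler: each target is
    (name, window_open, window_close, duration) in hours since night start.
    Targets are placed greedily in order of earliest window close, i.e. the
    most time-constrained observations are scheduled first."""
    schedule, t = [], night_start
    for name, open_, close, dur in sorted(targets, key=lambda x: x[2]):
        start = max(t, open_)                      # wait for the window to open
        if start + dur <= min(close, night_end):   # must finish inside the window
            schedule.append((name, start))
            t = start + dur
    return schedule
```

A real scheduler such as SPOT must additionally handle repeated visits, priorities and changing conditions, which is why an optimization framework rather than a one-pass greedy rule is needed; this sketch only conveys the shape of the constraint problem.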