Proceedings Volume 9911

Modeling, Systems Engineering, and Project Management for Astronomy VII



Volume Details

Date Published: 4 October 2016
Contents: 12 Sessions, 86 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2016
Volume Number: 9911

Table of Contents


  • Front Matter: Volume 9911
  • Systems Engineering I: Integration and Test
  • Systems Engineering II: Integration and Test
  • Systems Engineering III: Model-based Systems Engineering
  • Systems Engineering IV: Architectures and Budgets
  • Project Management I
  • Project Management II
  • Systems Engineering V: Tools and Processes
  • Modeling I: Optical and Dynamic Modeling
  • Modeling II: Aero-thermal and Thermal Modeling
  • Modeling III: End-to-end and Science Modeling
  • Poster Session
Front Matter: Volume 9911
Front Matter: Volume 9911
This PDF file contains the front matter associated with SPIE Proceedings Volume 9911, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Systems Engineering I: Integration and Test
Daniel K. Inouye Solar Telescope: integration, testing, and commissioning planning
The Daniel K. Inouye Solar Telescope (DKIST) has been in its construction phase since 2010, anticipating the onset of the integration, test, and commissioning (IT&C) phase in early 2017 and the commencement of science verification in 2019. Work on Haleakala is progressing at a phenomenal rate, and many of the subsystems are either through or about to enter their factory (or laboratory) acceptance. The delays in obtaining site planning permissions, while a serious issue for project management, have allowed the subsystems to develop well ahead of their required delivery to site. We have benefited from the knowledge that many subsystems will be on site and ready for integration well before they affect the critical path. This has presented opportunities for additional laboratory/factory testing which, while not free, significantly reduces the risks of potential delays and rework on site. From the perspective of IT&C this has provided an opportunity to develop the IT&C plans and schedules free from the pressures of imminent deployment.

In this paper we describe the ongoing planning of the Integration, Testing and Commissioning (IT&C) phase of the project, in particular the detailed planning that we are currently developing.
More than two years of nominal mission: experiences from the Gaia science ground segment
S. G. Els, A. G. A. Brown, N. Cheek, et al.
With its launch at the very end of 2013, ESA's astrometry satellite Gaia began its endeavor to compile astrometric and photometric measurements of at least one billion objects, as well as high-resolution optical spectra of a hundred million objects. The Gaia catalog will therefore provide a wealth of coherently determined astrophysical parameters for these objects. After its extensive commissioning phase, Gaia entered the nominal mission phase in July 2014. The science ground segment, which is formed by the Gaia Data Processing and Analysis Consortium (DPAC), has been operating since then. DPAC is a large, multi-national science consortium which has to handle and process the dense and complex Gaia data stream. With its decentralized management and its distributed infrastructure, the Gaia DPAC is a remarkable undertaking. In this paper we summarize some of the experiences of the DPAC in facing the real Gaia data, compare them to the pre-launch expectations, and critically review the development phase.
Mission-level performance verification approach for the Euclid space mission
Roland D. Vavrek, René J. Laureijs, Jose Lorenzo Alvarez, et al.
ESA's Dark Energy Mission Euclid will map the 3D matter distribution in our Universe using two Dark Energy probes: Weak Lensing (WL) and Galaxy Clustering (GC). The extreme accuracy required for both probes can only be achieved by observing from space in order to limit all observational biases in the measurements of the tracer galaxies. Weak Lensing requires an extremely high precision measurement of galaxy shapes realised with the Visual Imager (VIS) as well as photometric redshift measurements using near-infrared photometry provided by the Near Infrared Spectrometer Photometer (NISP). Galaxy Clustering requires accurate redshifts (Δz/(z+1)<0.1%) of galaxies to be obtained by the NISP Spectrometer.

Performance requirements on spacecraft, telescope assembly, scientific instruments and the ground data-processing have been carefully budgeted to meet the demanding top level science requirements. As part of the mission development, the verification of scientific performances needs mission-level end-to-end analyses in which the Euclid systems are modeled from as-designed to final as-built flight configurations. We present the plan to carry out end-to-end analysis coordinated by the ESA project team with the collaboration of the Euclid Consortium. The plan includes the definition of key performance parameters and their process of verification, the input and output identification and the management of applicable mission configurations in the parameter database.
Bottom-up laboratory testing of the DKIST Visible Broadband Imager (VBI)
Andrew Ferayorni, Andrew Beard, Wes Cole, et al.
The Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory under construction at Haleakala, Hawaii [1]. The Visible Broadband Imager (VBI) is a first-light instrument that will record images at the highest possible spatial and temporal resolution of the DKIST at a number of scientifically important wavelengths [2]. The VBI is a pathfinder for DKIST instrumentation and a test bed for developing processes and procedures in the areas of unit, systems integration, and user acceptance testing. These test procedures have been developed and repeatedly executed during VBI construction in the lab as part of a "test early and test often" philosophy aimed at identifying and resolving issues early, thus saving cost during integration, test, and commissioning on the summit.

The VBI team recently completed a bottom-up end-to-end system test of the instrument in the lab that allowed the instrument's functionality, performance, and usability to be validated against documented system requirements. The bottom-up testing approach includes four levels of testing, each introducing another layer in the control hierarchy that is tested before moving to the next level. First, the instrument mechanisms are tested for positioning accuracy and repeatability using a laboratory position-sensing detector (PSD). Second, the real-time motion controls are used to drive the mechanisms to verify that speed and timing synchronization requirements are being met. Next, the high-level software is introduced and the instrument is driven through a series of end-to-end tests that exercise the mechanisms, cameras, and simulated data processing. Finally, user acceptance testing is performed on operational and engineering use cases through the instrument engineering graphical user interface (GUI).

In this paper we present the VBI bottom-up test plan, procedures, example test cases and tools used, as well as results from test execution in the laboratory. We also discuss the benefits realized through completion of this testing and share lessons learned from the bottom-up testing process.
Systems Engineering II: Integration and Test
Gaia challenging performances verification: combination of spacecraft models and test results
Eric Ecale, Frédéric Faye, François Chassat
To achieve the ambitious scientific objectives of the Gaia mission, extremely stringent performance requirements have been given to the spacecraft contractor (Airbus Defence and Space). For a set of those key performance requirements (e.g. end-of-mission parallax, maximum detectable magnitude, maximum sky density or attitude control system stability), this paper describes how they are engineered during the whole spacecraft development process, with a focus on end-to-end performance verification. As far as possible, performances are usually verified by end-to-end tests on ground (i.e. before launch). However, the challenging Gaia requirements are not verifiable by such a strategy, principally because no test facility exists to reproduce the expected flight conditions. The Gaia performance verification strategy is therefore based on a mix of analyses (based on spacecraft models) and tests (used to directly feed the models or to correlate them). Emphasis is placed on how to maximize the test contribution to performance verification while keeping the tests feasible within an affordable effort. In particular, the paper highlights the contribution of the Gaia Payload Module Thermal Vacuum test to the performance verification before launch. Finally, an overview of the in-flight payload calibration and in-flight performance verification is provided.
Integration and verification testing of the Large Synoptic Survey Telescope camera
Travis Lange, Tim Bond, James Chiang, et al.
We present an overview of the Integration and Verification Testing activities for the Large Synoptic Survey Telescope (LSST) Camera at the SLAC National Accelerator Laboratory (SLAC). The LSST Camera, the sole instrument for LSST and now under construction, comprises a 3.2-gigapixel imager and a three-element corrector with a 3.5-degree-diameter field of view. LSST Camera Integration and Test will take place over the next four years, with final delivery to the LSST observatory anticipated in early 2020. We outline the planning for Integration and Test, describe some of the key verification hardware systems being developed, and identify some of the more complicated assembly/integration activities. Specific details of integration and verification hardware systems will be discussed, highlighting some of the anticipated technical challenges.
OAJ 2.6m survey telescope: optical alignment and on-sky evaluation of IQ performances
AMOS has recently completed the alignment campaign of the 2.6m telescope for the Observatorio Astrofisico de Javalambre (OAJ). AMOS developed an innovative alignment technique for wide-field-of-view telescopes that has been successfully implemented on the OAJ 2.6m telescope with the active support of the team of CEFCA (Centro de Estudios de Física del Cosmos de Aragón). The alignment relies on two fundamental techniques: (1) wavefront-curvature sensing (WCS) for the evaluation of the telescope aberrations at arbitrary locations in the focal plane, and (2) the coma-free point method for the adjustment of the position of the secondary mirror (M2) and of the focal plane (FP). The alignment campaign unfolded in three steps: (a) analysis of the repeatability of the WCS measurements, (b) assessment of the sensitivity of the telescope wavefront error to M2 and FP position adjustments, and (c) optical alignment of the telescope. At the end of the campaign, seeing-limited performance was demonstrated over the complete focal plane. With the help of the CEFCA team, the image quality of the telescope was investigated with a lucky-imaging method. Image sizes of less than 0.3 arcsec FWHM were obtained, and this excellent image quality was observed over the complete focal plane.
Systems Engineering III: Model-based Systems Engineering
Creating system engineering products with executable models in a model-based engineering environment
Applying systems engineering across the life-cycle results in a number of products built from interdependent sources of information using different kinds of system level analysis. This paper focuses on leveraging the Executable System Engineering Method (ESEM) [1] [2], which automates requirements verification (e.g. power and mass budget margins and duration analysis of operational modes) using executable SysML [3] models. The particular value proposition is to integrate requirements, and executable behavior and performance models for certain types of system level analysis. The models are created with modeling patterns that involve structural, behavioral and parametric diagrams, and are managed by an open source Model Based Engineering Environment (named OpenMBEE [4]). This paper demonstrates how the ESEM is applied in conjunction with OpenMBEE to create key engineering products (e.g. operational concept document) for the Alignment and Phasing System (APS) within the Thirty Meter Telescope (TMT) project [5], which is under development by the TMT International Observatory (TIO) [5].
Model-based system engineering approach for the Euclid mission to manage scientific and technical complexity
Jose Lorenzo Alvarez, Harold Metselaar, Jerome Amiaux, et al.
In recent years, the systems engineering field has been coming to terms with a paradigm change in the approach to complexity management. Different strategies have been proposed to cope with highly interrelated systems and systems of systems, collaborative systems engineering has emerged, and significant effort is being invested into standardization and ontology definition. In particular, Model Based System Engineering (MBSE) intends to introduce methodologies for systematic system definition, development, validation, deployment, operation and decommissioning, based on logical and visual relationship mapping rather than traditional 'document based' information management.

The practical implementation in real large-scale projects is not uniform across fields. In space science missions, usage has been limited to subsystems or sample projects, with modeling performed 'a posteriori' in many instances. The main hurdle for the introduction of MBSE practices in new projects is still the difficulty of demonstrating their added value to a project and whether their benefit is commensurate with the level of effort required to put them in place.

In this paper we present the implemented Euclid system modeling activities, and an analysis of the benefits and limitations identified to support in particular requirement break-down and allocation, and verification planning at mission level.
Using model based systems engineering for the development of the Large Synoptic Survey Telescope's operational plan
Brian M. Selvy, Charles Claver, Beth Willman, et al.
We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST's Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders' needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they yield self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organizational structure required to successfully execute the operational survey. Our approach allows for continual refinement utilizing the systems engineering spiral method to expose finer levels of detail as necessary.
For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.
Operational modes, health, and status monitoring
Corrie Taljaard
System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system.

The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.
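The availability arithmetic behind this kind of modelling can be illustrated with a toy calculation (all MTBF/MTTR figures and subsystem availabilities below are invented for illustration; they are not SKA values):

```python
# Sketch of steady-state availability calculations of the kind used in
# reliability engineering and support analysis.
def availability(mtbf_h, mttr_h):
    """Inherent availability: fraction of time a unit is operable,
    given mean time between failures and mean time to repair (hours)."""
    return mtbf_h / (mtbf_h + mttr_h)

# A remote site increases effective repair time (travel, logistics),
# which directly reduces availability for the same hardware.
a_local = availability(mtbf_h=2000.0, mttr_h=4.0)
a_remote = availability(mtbf_h=2000.0, mttr_h=48.0)

# For subsystems in series (all must work), availabilities multiply.
subsystems = [0.999, 0.995, 0.990]
a_system = 1.0
for a in subsystems:
    a_system *= a

print(a_local, a_remote, a_system)
```

The series product shows why fault isolation matters at a remote site: the system availability is dominated by whichever subsystem is hardest to repair.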
Systems Engineering IV: Architectures and Budgets
Optical error budgeting using linearized ray-trace models
The Root-Sum-Squared, or "RSS" wavefront error model is a simple, scalar tool, commonly used for space telescope error budgeting. At the same time, much more detailed models, combining ray-trace and Fourier optics with optical alignments and wavefront controls, can provide accurate, high-resolution simulations for detailed system and subsystem design. This paper makes a connection between the two modeling approaches by deriving RSS model coefficients from ray-trace models, including the effects of wavefront controls, for computing system performance from component error statistics. It is shown that, properly constructed, the simple RSS error budget is a covariance analysis, and can be as accurate as high-resolution wavefront models for statistical wavefront error prediction. A notional segmented-aperture space telescope is used to illustrate this error modeling process.
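The equivalence between an RSS budget and a diagonal covariance analysis can be sketched in a few lines (the component errors and sensitivity coefficients below are illustrative placeholders, not values from the paper):

```python
import numpy as np

# Hypothetical component-level wavefront error contributions (nm RMS).
# The sensitivity coefficients stand in for the linearized ray-trace
# derivatives relating component errors to system wavefront error.
component_errors = np.array([12.0, 8.0, 5.0, 15.0])  # nm RMS per component
sensitivities = np.array([1.0, 0.8, 1.2, 0.5])       # unitless coefficients

# RSS combination: assuming uncorrelated component errors, the system
# variance is the sum of sensitivity-weighted component variances,
# i.e. a covariance analysis with a diagonal covariance matrix.
system_variance = np.sum((sensitivities * component_errors) ** 2)
system_rms = np.sqrt(system_variance)

print(f"System wavefront error: {system_rms:.1f} nm RMS")
```

Correlated components would add off-diagonal covariance terms, which is where the full linearized ray-trace model goes beyond the scalar RSS sum.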
Systems budgets architecture and development for the Maunakea Spectroscopic Explorer
Shan Mignot, Nicolas Flagey, Kei Szeto, et al.
The Maunakea Spectroscopic Explorer (MSE) project is an enterprise to upgrade the existing Canada-France-Hawaii observatory into a spectroscopic facility based on a 10 meter-class telescope. As such, the project relies on engineering requirements not limited to its instruments (the low-, medium- and high-resolution spectrographs) but covering the whole observatory. The science requirements, the operations concept, the project management and the applicable regulations are the basis from which these requirements are initially derived, yet they do not form hierarchies, as each may serve several purposes, that is, pertain to several budgets. Completeness and consistency are hence the main systems engineering challenges for a project as large as MSE. Special attention is devoted to ensuring the traceability of requirements via parametric models, derivation documents, and simulations, and finally by maintaining KAOS diagrams and a database under IBM Rational DOORS linking them together. This paper presents the architecture of the main budgets under development and the associated processes, highlights those that are interrelated, and shows how the system, as a whole, is then optimized by modelling and analysis of the pertinent system parameters.
An automated performance budget estimator: a process for use in instrumentation
Present-day astronomy projects continue to increase in size and complexity, regardless of the wavelength domain, while risks in terms of safety, cost and operability have to be reduced to ensure an affordable total cost of ownership. All of these drivers have to be considered carefully during the development of an astronomy project, at the same time as there is a strong push to shorten the development life-cycle. From the systems engineering point of view, this evolution is a significant challenge. Big instruments imply management of interfaces within large consortia and tight design-phase schedules, which necessitate efficient and rapid interactions between all the stakeholders, firstly to ensure that the system is defined correctly and secondly that the designs will meet all the requirements. It is essential that team members respond quickly so that the time available for the design team is maximised.

In this context, performance prediction tools can be very helpful during the concept phase of a project to help select the best design solution. In the first section of this paper we present the development of such a prediction tool that can be used by the systems engineer to determine the overall performance of the system and to evaluate the impact on the science of the proposed design. This tool can also be used in "what-if" design analysis to assess the impact on the overall performance of the system based on the numbers calculated by the automated system performance prediction tool. Having such a tool available from the beginning of a project allows for faster turn-around, firstly between the design engineers and the systems engineer, and secondly between the systems engineer and the instrument scientist. We then describe the process for constructing a performance estimator tool and present three astronomy projects in which such a tool has been used. The three use cases are: EAGLE, one of the European Extremely Large Telescope (E-ELT) Multi-Object Spectrograph (MOS) instruments, studied from 2007 to 2009; the Multi-Object Optical and Near-Infrared Spectrograph (MOONS) for the European Southern Observatory's Very Large Telescope (VLT), currently under development; and SST-GATE.
E-ELT requirements flow down
J. C. Gonzalez, H. Kurlandczyk, C. Schmid, et al.
One of the critical activities in the systems engineering scope of work is managing requirements. In line with this, E-ELT devotes a significant effort to this activity, which follows a well-established process. This involves optimally deriving requirements from the user (Top-Level Requirements) through the system Level 1 Requirements and from there down to subsystem procurement specifications.

This paper describes the process, which is illustrated with some practical examples, including in particular the role of technical budgets to derive requirements on subsystems. Also, the provisions taken for the requirements verification are discussed.
DESI systems engineering: throughput and signal-to-noise
The Dark Energy Spectroscopic Instrument (DESI) is a fiber-fed multi-object spectroscopic instrument under construction to measure the expansion history of the Universe using the Baryon Acoustic Oscillation technique.

Management of light throughput and noise in all elements of the instrument is key to achieving the high-level DESI science requirements over the planned survey area and depth within the planned survey duration. The DESI high-level science requirements flow down to instrument performance requirements on system throughput and operational efficiency. Signal-to-noise requirements directly affect minimum required exposure time per field, which dictates the pace and duration of the entire survey. The need to maximize signal (light throughput) and to minimize noise contributions and time overhead due to reconfigurations between exposures drives the instrument subsystem requirements and technical implementation.

Throughput losses, noise contributors, and interexposure reconfiguration time are budgeted, tracked, and managed as DESI Systems Engineering resources. Current best estimates of throughput losses and noise contributions from each individual element of the instrument are tracked together in a master budget to calculate overall margin on completing the survey within the allotted time. That budget is a spreadsheet accessible to the entire DESI project.
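The budgeting logic described above, with throughputs multiplying, noise terms adding in quadrature, and required exposure time scaling with the square of the target signal-to-noise, can be sketched as follows (all numbers are invented placeholders, not DESI budget values):

```python
import math

# Illustrative throughput budget: per-element transmissions multiply.
throughputs = {"atmosphere": 0.85, "corrector": 0.92, "fiber": 0.90,
               "spectrograph": 0.75, "ccd_qe": 0.85}
total_throughput = math.prod(throughputs.values())

# Independent noise contributors add in quadrature (electrons per pixel).
read_noise, dark_noise, sky_noise = 3.0, 1.0, 10.0
total_noise = math.sqrt(read_noise**2 + dark_noise**2 + sky_noise**2)

# In the background-limited regime SNR grows as sqrt(t), so the exposure
# time needed to reach a target SNR scales as (SNR_target / SNR_ref)**2.
snr_ref, t_ref = 5.0, 900.0          # reference SNR and exposure time (s)
t_needed = t_ref * (7.0 / snr_ref) ** 2

print(total_throughput, total_noise, t_needed)
```

The quadratic scaling is what makes every throughput loss expensive: a budget tracked this way converts each percent of lost signal directly into extra survey time.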
Project Management I
Project management and control of the Daniel K. Inouye Solar Telescope
Joseph P. McMullin, William McVeigh, Mark Warner, et al.
We provide a brief update on the construction status of the Daniel K. Inouye Solar Telescope, a $344M, 10-year construction project to design and build the world's largest solar physics observatory. We review the science drivers along with the challenges in meeting the evolving scientific needs over the course of the construction period without jeopardizing the systems engineering and management realization. We review the tools, processes and performance measures in use in guiding the development as well as the risks and challenges as the project transitions through various developmental phases. We elaborate on environmental and cultural compliance obligations in building in Hawai'i. We discuss the broad "lessons learned". Finally, we discuss the project in the context of the evolving management oversight within the US (in particular under the NSF).
Multivariable parametric cost model for space and ground telescopes
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes:

OTA Cost ~ X · D^(1.75 ± 0.05) · λ^(-0.5 ± 0.25) · T^(-0.25) · e^(-0.04 Y)

where D is the aperture diameter, λ the wavelength, T the operating temperature, and Y the year of development.

Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advances and process improvements); and, for space telescopes, the cost associated with wavelength performance is balanced by the cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple-aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than that for a monolithic mirror).
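For relative comparisons the multiplicative constant X cancels, so the hypothesized model can be applied as a cost ratio. This sketch uses only the central values of the published exponents, and the example inputs are arbitrary:

```python
import math

def relative_ota_cost(D, lam_um, T_K, year, D0, lam0_um, T0_K, year0):
    """Ratio of OTA costs under the parametric model
    Cost ~ X * D**1.75 * lambda**-0.5 * T**-0.25 * exp(-0.04 * Y),
    taking the central exponent values; X cancels in the ratio, so
    only relative comparisons are meaningful."""
    return ((D / D0) ** 1.75
            * (lam_um / lam0_um) ** -0.5
            * (T_K / T0_K) ** -0.25
            * math.exp(-0.04 * (year - year0)))

# Doubling the aperture diameter, all else equal, raises cost
# by a factor of 2**1.75, roughly 3.4x.
print(relative_ota_cost(8.0, 0.5, 280.0, 2020, 4.0, 0.5, 280.0, 2020))
```

Note how the exponential term reproduces the stated learning effect: advancing Y by about 17 years halves the predicted cost, consistent with "approximately 50% every 20 years."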
The seven habits of highly effective project managers
Mark Warner, Richard Summers
Why do some astronomy projects succeed, while others fail? There are obviously many different factors that can and do influence the outcome of any given project, but one of the most prevalent characteristics among successful projects is the combined skills and qualifications of the project manager (PM) at the helm. But this raises an obvious question: what exactly makes a project manager "skilled and qualified"? Asked another way, are there common traits, philosophies, and/or techniques that the most successful PMs share, and if so, what are they? The short answer is yes: the majority of successful engineering project managers have significant skills, habits, and character traits in common. The longer answer is that there are at least seven of these key traits, or "habits", that many successful PMs share and, more importantly, implement within their respective projects. This paper presents these key factors, including thoughts on scope and quality management, cost and schedule control, project team structures, risk management strategies, stakeholder management, and general project execution.
Agile software development in an earned value world: a survival guide
Jeffrey Kantor, Kevin Long, Jacek Becla, et al.
Agile methodologies are current best practice in software development. They are favored for, among other reasons, preventing premature optimization by taking a somewhat short-term focus, and allowing frequent replans and reprioritizations of upcoming development work based on recent results and the current backlog. At the same time, funding agencies prescribe earned value management accounting for large projects which, these days, inevitably include substantial software components. Earned value approaches emphasize a more comprehensive and typically longer-range plan, and tend to characterize frequent replans and reprioritizations as indicative of problems. Here we describe the planning, execution and reporting framework used by the LSST Data Management team, which navigates these opposing tensions.
Project Management II
CARMENES: management of a schedule-driven project
CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs) is an instrument consisting of two ultra-stable high-resolution (R~82,000) spectrographs covering simultaneously the visible (0.5–1.0 μm) and near-IR (1.0–1.7 μm) ranges to provide high-accuracy radial-velocity measurements (∼1 m/s) thanks to their long-term stability. CARMENES was the initiative of a consortium of eleven German and Spanish institutions. CARMENES has been built for the 3.5m telescope at the Centro Astronómico Hispano-Alemán (CAHA), Calar Alto Observatory (Almería, Spain), and is currently in operation. CAHA is jointly operated by the Max-Planck-Society (MPG) and the Spanish National Research Council (CSIC).

The project received the green light in October 2010 and passed its Final Design Review in February 2013. Six months later, the MPG and CSIC, the observatory's owners, made an independent evaluation concluding that CARMENES had to be ready for operations at the end of 2015. From then on, meeting the calendar was the driver of all project decisions. Moreover, the observatory's survival was linked to the instrument's success: should the instrument fail, the observatory would be closed. Conversely, the instrument's success would give the observatory unique capabilities for big science. The challenge became our own private Olympic Games: we had to be on time. This decision decisively shaped the project dynamics; there was no room for delay. The deadline, December 31st, 2015, was controlled by strict tracking of the critical path; calendar deviations were corrected with risky decisions, and fast-tracking or even crashing methods were applied.

The management scenario was far from optimal: most key people in the project shared their time with other duties; the observatory suffered funding cuts; the budget was tight and distributed among the 11 partner centers, each with its own rules; and so on. Despite these difficulties, the close coordination among the project manager, the systems engineer and the work package managers, the hard work of the whole team, and the support from the observatory were our best assets.

Two frenetic years after the calendar decision, we had manufactured, integrated and tested the two spectrographs and were commissioning the instrument. First light took place on November 9th, 2015, and CARMENES entered operation at the end of December 2015. This paper describes the keys to success.
Management of the camera electronics programme for the World Space Observatory ultraviolet WUVS instrument
Gayatri Patel, Matthew Clapp, Mike Salter, et al.
World Space Observatory Ultraviolet (WSO-UV) is a major international collaboration led by Russia and will study the universe at ultraviolet wavelengths between 115 nm and 320 nm. The WSO Ultraviolet Spectrograph (WUVS) subsystem is led by a consortium of Russian institutes and consists of three spectrographs.

RAL Space is contracted by e2v technologies Ltd to provide the CCD readout electronics for each of the three WUVS channels. The programme involves the design, manufacturing, assembly and testing of each Camera Electronics Box (CEB), its associated Interconnection Module (ICM), Electrical Ground Support Equipment (EGSE) and harness.

An overview of the programme will be presented, from the initial design phase culminating in an Engineering Model (EM), through qualification, in which an Engineering Qualification Model (EQM) will undergo environmental testing to characterize the performance of the CEB against the space environment, to the delivery of the Flight Models (FMs). The paper will discuss the challenges of managing a large, dynamic project. These include significant changes in fundamental requirements mid-programme: external political issues forced a complete re-design, from an existing CEB with extensive space heritage but many ITAR-controlled electronic components to a new, more efficient solution free of ITAR-controlled parts. The methodology and processes used to maintain the demanding schedule through each stage of the project will be presented, including an insight into planning, decision-making, communication, risk management, and resource management, all essential to the continued success of the programme.
ALMA software releases versus quality management: and the winner is...
After its inauguration and the formal completion of the construction phase, the software development effort at the Atacama Large Millimeter/submillimeter Array (ALMA) continues at roughly the same level as during construction, gradually adding capabilities as required by and offered to the scientific community. In the run-up to each new yearly Observing Cycle, several software releases have to be prepared, incorporating this new functionality. However, the ALMA observatory is used on a daily basis to produce scientific data for the approved projects within the current Observing Cycle, and also by engineering teams to extend existing capabilities or to diagnose and fix problems, so the preparation of new software releases, up to their deployment, competes for resources with all other activities. Testing a new release and ensuring its quality is of course fundamental, but on the other hand it cannot monopolize the observatory's resources or jeopardize its commitments to the scientific community.
Project management of DAG: Eastern Anatolia Observatory
The four meter DAG (Eastern Anatolia Observatory in Turkish) telescope is not only the largest telescope in Turkey but also a most promising telescope in the northern hemisphere, with a large potential to offer scientific observations with its cutting-edge technology. DAG is designed to be an AO telescope that will allow both infrared and visible observations with its two Nasmyth platforms dedicated to next-generation focal-plane instruments. In this paper, status updates from the DAG telescope will be presented regarding: (i) the in-house optical design of DAG, (ii) the tender process for the telescope, (iii) the tender process for the enclosure, and (iv) the tender process for the observatory building. Status updates from the focal-plane instruments project and possible collaboration activities will also be presented.
Systems Engineering V: Tools and Processes
System engineering and science projects: lessons from MeerKAT
The Square Kilometre Array (SKA) is a large science project planning to commence construction of the world's largest Radio Telescope after 2018. MeerKAT is one of the precursor projects to the SKA, based on the same site that will host the SKA Mid array in the central Karoo area of South Africa. From the perspective of signal processing hardware development, we analyse the challenges that MeerKAT encountered and extrapolate them to SKA in order to prepare the System Engineering and Project Management methods that could contribute to a successful completion of SKA.

Using the MeerKAT Digitiser, Correlator/Beamformer and Time and Frequency Reference Systems as an example, we will trace the risk profile and subtle differences in engineering approaches of these systems over time and show the effects of varying levels of System Engineering rigour on the evolution of their risk profiles. It will be shown that the most rigorous application of System Engineering discipline resulted in the most substantial reduction in risk over time.

Since the challenges faced by SKA are not limited to those of MeerKAT, we also look into how this translates to a system development where there is substantial complexity in both the created system and the creating system. Since the SKA will be designed and constructed by consortia made up from the ten member countries, there are many additional complexities in the organisation creating the system, a challenge the MeerKAT project did not encounter. Factors outside of engineering, for instance procurement models and political interests, also play a more significant role, and add to the project risks of SKA when compared to MeerKAT.
Gemini Observatory base facility operations: systems engineering process and lessons learned
Andrew Serio, Martin Cordova, Gustavo Arriagada, et al.
Gemini North Observatory successfully began nighttime remote operations from the Hilo Base Facility control room in November 2015. The implementation of the Gemini North Base Facility Operations (BFO) products was a great learning experience for many of our employees, including the author of this paper, the BFO Systems Engineer.

In this paper we focus on the tailored Systems Engineering processes used for the project, the various software tools used in project support, and finally discuss the lessons learned from the Gemini North implementation. This experience and the lessons learned will be used both to aid our implementation of the Gemini South BFO in 2016, and in future technical projects at Gemini Observatory.
European Extremely Large Telescope (E-ELT) availability stochastic model: integrating failure mode and effect analysis (FMEA), influence diagram, and Bayesian network together
An Availability Stochastic Model for the E-ELT has been developed in GeNIE, a Graphical User Interface (GUI) for the Structural Modeling, Inference, and Learning Engine (SMILE), originally distributed by the Decision Systems Laboratory of the University of Pittsburgh and now a product of Bayes Fusion, LLC.

The E-ELT will be the largest optical/near-infrared telescope in the world. Its design comprises an Alt-Azimuth mount reflecting telescope with a 39-metre-diameter segmented primary mirror, a 4-metre-diameter secondary mirror, a 3.75-metre-diameter tertiary mirror, adaptive optics and multiple instruments.

This paper highlights how a model has been developed for an early assessment of the Telescope Availability. It also describes the modular structure and the underlying assumptions adopted for developing the model, and demonstrates the integration of FMEA, Influence Diagram, and Bayesian Network elements. These have been considered for a better characterization of the model inputs and outputs and to take Degraded-based Reliability (DBR) into account.

Lastly, it provides an overview of how the information and knowledge captured in the model may be used for an early definition of the Failure, Detection, Isolation and Recovery (FDIR) Control Strategy and the Telescope Minimum Master Equipment List (T-MMEL).
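The roll-up behind such an availability model can be illustrated with a far simpler calculation. The sketch below assumes generic series subsystems with invented MTBF/MTTR figures and deliberately omits the Bayesian-network and degraded-state machinery of the actual model.

```python
# Hypothetical illustration of a steady-state availability roll-up for a
# telescope composed of series subsystems, each described by MTBF/MTTR
# figures such as an FMEA would provide.  Names and numbers are invented.

def availability(mtbf_h, mttr_h):
    """Steady-state availability of one repairable item."""
    return mtbf_h / (mtbf_h + mttr_h)

# (subsystem, MTBF [h], MTTR [h]) -- illustrative values only
subsystems = [
    ("M1 segment support", 20000.0, 8.0),
    ("Main axes drives",    15000.0, 12.0),
    ("Adaptive optics",      8000.0, 6.0),
]

# Series logic: the telescope is available only if every subsystem is.
a_total = 1.0
for name, mtbf, mttr in subsystems:
    a = availability(mtbf, mttr)
    print(f"{name}: A = {a:.5f}")
    a_total *= a

print(f"Telescope availability (series): {a_total:.5f}")
```

A Bayesian-network model generalizes this by conditioning availability on degraded states and shared causes rather than multiplying independent terms.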
Considerations regarding system engineering in large scale projects with heterogeneous contexts
A. Cremonini, M. Caiazzo, D. Hayden, et al.
In this paper we share some considerations and lessons learned from our direct experience as systems engineers on the SKA project, with emphasis on the personal experience of the first author. This is a very wide and ambitious program, involving several stakeholders with an unprecedented heterogeneity of cultural backgrounds, technological heritages, multidisciplinary interplays, motivations, and competences. The role of the lead author is to amalgamate efforts in order to deliver the "MID telescope", and in that role he has often discovered that Systems Engineering means far more than a disciplined set of processes.
Modeling I: Optical and Dynamic Modeling
An extensive coronagraphic simulation applied to LBT
In this article we report the results of a comprehensive simulation program aimed at investigating the coronagraphic capabilities of SHARK-NIR, a camera selected to proceed to the final design phase at the Large Binocular Telescope. For this purpose, we developed a dedicated simulation tool based on physical optics propagation. The code propagates wavefronts through the SHARK optical train in an end-to-end fashion and can implement any kind of coronagraph. Detection limits can finally be computed, exploring a wide range of Strehl values and observing conditions.
Using frequency response functions to manage image degradation from equipment vibration in the Daniel K. Inouye Solar Telescope
William R. McBride II, Daniel R. McBride
The Daniel K. Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, providing a significant increase in the resolution of solar data available to the scientific community.

Vibration mitigation is critical in long focal-length telescopes such as the Inouye Solar Telescope, especially when adaptive optics are employed to correct for atmospheric seeing. For this reason, a vibration error budget has been implemented.

Initially, the FRFs for the various mounting points of ancillary equipment were estimated using finite element analysis (FEA) of the telescope structures. FEA is well documented and understood; the focus of this paper is on the methods involved in estimating a set of experimental (measured) transfer functions of the as-built telescope structure for the purpose of vibration management.

Techniques to measure low-frequency single-input-single-output (SISO) frequency response functions (FRF) between vibration source locations and image motion on the focal plane are described. The measurement equipment includes an instrumented inertial-mass shaker capable of operation down to 4 Hz along with seismic accelerometers. The measurement of vibration at frequencies below 10 Hz with good signal-to-noise ratio (SNR) requires several noise reduction techniques including high-performance windows, noise-averaging, tracking filters, and spectral estimation. These signal-processing techniques are described in detail.
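The noise-averaged FRF estimation at the heart of such measurements can be sketched in a few lines. The sample rate, resonance frequency, and synthetic "structure" below are invented for illustration; this is not DKIST data or the authors' code.

```python
# A minimal sketch of the H1 frequency-response estimate used in vibration
# testing: the FRF between a shaker force x(t) and a response y(t) is
# Sxy/Sxx, with Hann windows and segment averaging suppressing noise, and
# coherence indicating measurement quality.  All values are illustrative.
import numpy as np
from scipy.signal import csd, welch, coherence, lfilter

np.random.seed(0)
fs = 1000.0                        # sample rate [Hz], illustrative
t = np.arange(0, 60, 1 / fs)
x = np.random.randn(t.size)        # broadband shaker force (stand-in)

# Synthetic "structure": a lightly damped 12 Hz resonator plus sensor noise.
r, theta = 0.995, 2 * np.pi * 12 / fs
b, a = [1 - r], [1, -2 * r * np.cos(theta), r**2]
y = lfilter(b, a, x) + 0.01 * np.random.randn(t.size)

nper = 4096                        # long segments resolve low frequencies
f, Sxy = csd(x, y, fs=fs, window="hann", nperseg=nper)
_, Sxx = welch(x, fs=fs, window="hann", nperseg=nper)
H1 = Sxy / Sxx                     # H1 estimator: robust to output noise
_, gamma2 = coherence(x, y, fs=fs, nperseg=nper)

print(f"estimated resonance: {f[np.argmax(np.abs(H1))]:.2f} Hz")
```

Low coherence at a given frequency flags exactly the SNR problem the abstract describes below 10 Hz, which motivates the tracking filters and high-performance windows.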
Analyzing the impact of vibrations on E-ELT primary segmented mirror
B. Sedghi, M. Müller, M. Dimmler
The E-ELT primary mirror, 39 m in diameter, is composed of 798 segments. It is exposed to large but slow external perturbations, mostly gravity, thermal effects, and wind. These perturbations are efficiently rejected by a combination of the edge-sensor loop and adaptive optics (AO), leaving a small residual wavefront error (WFE). Vibrations induced by various equipment in the observatory are typically smaller-amplitude but higher-frequency perturbations that exceed the rejection capabilities of these control loops. They generate both low-spatial-frequency and high-spatial-frequency WFE. Segment phasing errors in particular, i.e. high-spatial-frequency errors, cannot be compensated by AO. The effect of vibrations is characterized by the excitation sources and the transmission through the telescope structure and segment supports; together these define the WFE caused by M1 due to vibrations. It is important to build a proper vibration error budget and specification requirements from an early stage of the project. This paper presents the vibration analysis and budgeting approach developed for the E-ELT M1 and addresses the impact of vibrations on the WFE.
Modeling II: Aero-thermal and Thermal Modeling
On the precision of aero-thermal simulations for TMT
Konstantinos Vogiatzis, Hugh Thompson
Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.
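A mesh-independence study of the kind described above typically rests on Richardson extrapolation across systematically refined grids. The sketch below applies the standard ASME grid-convergence-index formulas to invented metric values; these are not TMT results.

```python
# Generic sketch of the Richardson-extrapolation step behind a
# mesh-independence study: from a scalar quality metric phi computed on
# three refined grids, estimate the observed order of convergence and the
# discretization error (ASME GCI method).  Numbers are illustrative.
import math

# phi on coarse/medium/fine grids, e.g. a seeing-limited FWHM [arcsec]
phi3, phi2, phi1 = 0.420, 0.404, 0.400
r = 2.0                                   # grid-spacing ratio between levels

p = math.log(abs(phi3 - phi2) / abs(phi2 - phi1)) / math.log(r)  # observed order
phi_exact = phi1 + (phi1 - phi2) / (r**p - 1)      # Richardson extrapolation
gci_fine = 1.25 * abs((phi2 - phi1) / phi1) / (r**p - 1)  # GCI with Fs = 1.25

print(f"observed order p = {p:.2f}")
print(f"extrapolated phi = {phi_exact:.4f}")
print(f"fine-grid GCI    = {100 * gci_fine:.2f}%")
```

A small GCI on the fine grid is what justifies stopping refinement and spending the saved cycles on more points of the environmental solution matrix.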
Initial computational fluid dynamics modeling of the Giant Magellan Telescope site and enclosure
Ryan Danks, William Smeaton, Bruce Bigelow, et al.
In the era of extremely large telescopes (ELTs), with telescope apertures growing in size and tighter image quality requirements, maintaining a controlled observation environment is critical. Image quality is directly influenced by thermal gradients, the level of turbulence in the incoming air flow, and the wind forces acting on the telescope. Thus any ELT enclosure must be able to modulate the speed and direction of the incoming air and limit the inflow of disturbed ground-layer air. However, gaining an a priori understanding of the wind environment’s impacts on a proposed telescope is complicated by the fact that telescopes are usually located in remote, mountainous areas, which often do not have high-quality historic records of the wind conditions, and can be subjected to highly complex flow patterns that may not be well represented by the traditional analytic approaches used in typical building design. As part of the design process for the Giant Magellan Telescope at Cerro Las Campanas, Chile, the authors conducted a parametric design study using computational fluid dynamics which assessed how the telescope’s position on the mesa, its ventilation configuration, and the design of the enclosure and windscreens could be optimized to minimize the infiltration of ground-layer air. These simulations yielded an understanding of how the enclosure and the natural wind flows at the site could best work together to provide a consistent, well-controlled observation environment. Future work will seek to quantify the aerothermal environment in terms of image quality.
Computational fluid dynamics modeling and analysis for the Giant Magellan Telescope (GMT)
John Ladd, Jeffrey Slotnick, William Norby, et al.
The Giant Magellan Telescope (GMT) is planned for construction at the summit of Cerro Las Campanas at Las Campanas Observatory (LCO) in Chile. GMT will be the most powerful ground-based telescope in operation in the world. Aero-thermal interactions between the site topography, enclosure, internal systems, and optics are complex. A key parameter for optical quality is the thermal gradient between the terrain and the air entering the enclosure, and how quickly that gradient can be dissipated to equilibrium. To ensure the highest quality optical performance, careful design of the telescope enclosure building, location of the enclosure on the summit, and proper venting of the airflow within the enclosure are essential to minimize the impact of velocity and temperature gradients in the air entering the enclosure.

High-fidelity Reynolds-Averaged Navier-Stokes (RANS) Computational Fluid Dynamics (CFD) analysis of the GMT, enclosure, and LCO terrain is performed to study (a) the impact of an open versus closed enclosure base soffit external shape design, (b) the effect of telescope/enclosure location on the mountain summit, and (c) the effect of enclosure venting patterns. Details on the geometry modeling, grid discretization, and flow solution are first described. Selected computational results are then shown to quantify the quality of the airflow entering the GMT enclosure based on soffit, site location, and venting considerations. Based on the results, conclusions are provided on GMT soffit design, site location, and enclosure venting. The current work does not estimate image quality; that will be addressed in future analyses as described in the conclusions.
Thermal control modeling approach for GRAPE (GRAntecan PolarimEter)
I. Di Varano, M. Woche, K. G. Strassmeier
GRAPE is the polarimeter planned for installation at the main Cassegrain focus of the GTC (Gran Telescopio Canarias), which has an equivalent entrance pupil of 10.4 m and is located at the Observatorio del Roque de los Muchachos (ORM) on La Palma, Canary Islands. It is meant to deliver full-Stokes (IQUV) polarimetry covering the spectral range 0.42–1.6 μm in order to feed the HORS instrument (High Optical Resolution Spectrograph), mounted on the Nasmyth platform, which has a FWHM resolving power of about 25,000 (5 pixels) and is designed for the wavelength range 380–800 nm. Two calcite blocks and a BK7 prism arranged in a Foster configuration split the Ø12.5 mm collimated beam into its ordinary and extraordinary components. The entire subunit, from the Foster prisms down to the input fibers, is rotated in steps of 45 degrees in order to retrieve the Q and U components. By inserting a quarter-wave retarder plate before the entrance to the Foster unit, circular polarization is measured too.

The current paper consists of two main parts. First, CFD simulations are introduced; these have been run in compliance with the specifications derived from the environmental conditions and the transient thermal gradients, taking into account the presence of the installed electronic cabinets, which drive the boundary conditions for the outer structure of the instrument. Then a thermal control model based on heat exchangers is proposed to stabilize the inner temperature when compensation via passive insulation is not enough. The tools adopted to reach this goal are Ansys Multiphysics, in particular the CFX package, and Python scripts.
Improvements in analysis techniques for segmented mirror arrays
The employment of actively controlled segmented-mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here reviews current capabilities and improvements in the methodology for analyzing mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, allowing surface-deformation analysis at the individual-segment level, offers insight into the mechanical behavior of the segments that is unavailable from analysis at the parent-array level alone. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows analysis of predicted mechanical disturbances of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking has the advantage of being comparable to the measurements used in assembly of the hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
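One ingredient of this kind of analysis, separating the best-fit rigid-body motion of a collection of points from their elastic deformation, can be sketched as a small least-squares problem. This is a generic formulation with synthetic data, not SigFit's implementation.

```python
# Illustrative sketch: fit and remove the best-fit rigid-body motion
# (3 translations + 3 small rotations) from a displacement field sampled
# at discrete points, leaving the elastic residual.
import numpy as np

def remove_rigid_body(points, disp):
    """points: (N,3) node coordinates; disp: (N,3) displacements.
    Returns (rigid_params [tx,ty,tz,rx,ry,rz], elastic_residual)."""
    n = points.shape[0]
    A = np.zeros((3 * n, 6))
    for i, (x, y, z) in enumerate(points):
        # small-rotation rigid-body model: u = t + theta x r
        A[3*i:3*i+3, 0:3] = np.eye(3)
        A[3*i:3*i+3, 3:6] = np.array([[0.0,  z, -y],
                                      [-z, 0.0,  x],
                                      [y,  -x, 0.0]])
    q, *_ = np.linalg.lstsq(A, disp.ravel(), rcond=None)
    elastic = disp.ravel() - A @ q
    return q, elastic.reshape(n, 3)

# Sanity check: a purely rigid-body input leaves no elastic residual.
np.random.seed(1)
pts = np.random.randn(50, 3)
t, th = np.array([1e-3, 0.0, 2e-3]), np.array([0.0, 5e-5, 0.0])
d = t + np.cross(th, pts)
q, res = remove_rigid_body(pts, d)
```

Comparing the fitted rigid-body parameters of one interface against another is the essence of a racking-style relative-motion check; the elastic residual is what feeds surface-figure analysis.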
Modeling III: End-to-end and Science Modeling
End-to-end simulations and planning of a small space telescope: Galaxy Evolution Spectroscopic Explorer: a case study
Sara Heap, David Folta, Qian Gong, et al.
Large astronomical missions are usually general-purpose telescopes with a suite of instruments optimized for different wavelength regions, spectral resolutions, etc. Their end-to-end (E2E) simulations are typically photons-in to flux-out calculations made to verify that each instrument meets its performance specifications. In contrast, smaller space missions are usually single-purpose telescopes, and their E2E simulations start with the scientific question to be answered and end with an assessment of the effectiveness of the mission in answering the scientific question. Thus, E2E simulations for small missions consist of a longer string of calculations than for large missions, as they include not only the telescope and instrumentation, but also the spacecraft, orbit, and external factors such as coordination with other telescopes. Here, we illustrate the strategy and organization of small-mission E2E simulations using the Galaxy Evolution Spectroscopic Explorer (GESE) as a case study. GESE is an Explorer/Probe-class space mission concept with the primary aim of understanding galaxy evolution.

Operation of a small survey telescope in space like GESE is usually simpler than operations of large telescopes driven by the varied scientific programs of the observers or by transient events. Nevertheless, both types of telescopes share two common challenges: maximizing the integration time on target, while minimizing operation costs including communication costs and staffing on the ground. We show in the case of GESE how these challenges can be met through a custom orbit and a system design emphasizing simplification and leveraging information from ground-based telescopes.
An integrated modeling framework for the Large Synoptic Survey Telescope (LSST)
All of the components of the LSST subsystems (Telescope and Site, Camera, and Data Management) are in production. The major systems engineering challenges in this early construction phase are establishing the final technical details of the observatory, and properly evaluating potential deviations from requirements due to financial or technical constraints emerging from the detailed design and manufacturing process. To meet these challenges, the LSST Project Systems Engineering team established an Integrated Modeling (IM) framework including (i) a high-fidelity optical model of the observatory, (ii) an atmospheric aberration model, and (iii) perturbation interfaces capable of accounting for quasi-static and dynamic variations of the optical train. The model supports the evaluation of three key LSST Measures of Performance: image quality, ellipticity, and their impact on image depth. The various feedback loops improving image quality are also included. The paper shows application examples, such as an update to the estimated performance of the Active Optics System, the determination of deployment parameters for the wavefront sensors, the optical evaluation of the final M1M3 surface quality, and the feasibility of satisfying the settling-time requirement for the telescope structure.
Science yield modeling with the Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS)
Christian Delacroix, Dmitry Savransky, Daniel Garrett, et al.
We report on our ongoing development of EXOSIMS and mission simulation results for WFIRST. We present the interface control and the modular structure of the software, along with corresponding prototypes and class definitions for some of the software modules. More specifically, we describe the main steps of our high-fidelity mission simulator EXOSIMS, i.e., the definition of the completeness, optical system, and zodiacal-light modules, the filtering of the target list module, and the creation of a planet population within our simulated-universe module. For the latter, we introduce the integration of a recent mass-radius model from the FORECASTER software. We also provide custom modules dedicated to WFIRST using both the Hybrid Lyot Coronagraph (HLC) and the Shaped Pupil Coronagraph (SPC) for detection and characterization, respectively. In that context, we show and discuss the results of some preliminary WFIRST simulations, focusing on comparing different methods of integration-time calculation through ensembles (large numbers) of survey simulations.
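The kind of integration-time calculation such survey simulations compare can be illustrated by inverting the basic photon-counting SNR equation. This is not EXOSIMS code, and the count rates below are invented for the example.

```python
# Back-of-the-envelope sketch: invert SNR = C*t / sqrt((C + B)*t) for the
# time needed to reach a target SNR, given a planet signal rate C and a
# total background rate B (zodi, speckles, detector).  Rates are invented.
def integration_time(snr_target, c_planet, c_background):
    """Time [s] to reach snr_target for signal rate c_planet [ph/s]
    against total background rate c_background [ph/s]."""
    return snr_target**2 * (c_planet + c_background) / c_planet**2

# Illustrative rates for a faint companion under a coronagraph
t_int = integration_time(snr_target=5.0, c_planet=0.02, c_background=0.5)
print(f"required integration time: {t_int / 3600:.1f} h")
```

Full simulators differ mainly in how C and B are built up from throughput, contrast, and astrophysical models, which is exactly where the compared methods diverge.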
Structural, thermal, and optical performance (STOP) modeling and results for the James Webb Space Telescope integrated science instrument module
Renee Gracey, Andrew Bartoszyk, Emmanuel Cofie, et al.
The James Webb Space Telescope includes the Integrated Science Instrument Module (ISIM) element that contains four science instruments (SI), including a Guider. We performed extensive structural, thermal, and optical performance (STOP) modeling in support of all phases of ISIM development. In this paper, we focus on modeling and results associated with test and verification. ISIM’s test program is bound by ground environments, most notably the 1g and test-chamber thermal environments. This paper describes STOP modeling used to predict ISIM system performance in 0g and at various on-orbit temperature environments. The predictions are used to project results obtained during testing to on-orbit performance.
Poster Session
Using integrated multi-body systems for dynamical-optical simulations
Johannes Störkle, Peter Eberhard
In this work, the influence of model-order-reduction methods on optical aberrations is analysed within a dynamical-optical simulation of a high-precision optomechanical system. To this end, an integrated modeling process and new methods are introduced for the computation and investigation of the dynamical-optical behaviour. Such an optical system may be, for instance, a telescope optic or a lithographic objective. In order to derive a simplified mechanical model for transient time simulations at low computational cost, the method of elastic multibody systems can be used in combination with model-order-reduction methods. For this, software tools and interfaces are utilized. Furthermore, mechanical and optical simulation models are derived and implemented. With these, the mechanical sensitivity to arbitrary external excitations can be investigated on the one hand, and the related optical behaviour can be predicted on the other. To illustrate these methods, academic examples are chosen and the influences of the model-order-reduction methods and simulation strategies are analysed. Finally, the systems are investigated with respect to their mechanical-optical frequency responses.
A feasibility study for conducting unattended night-time operations at WMKO
Paul J. Stomski Jr., Sarah Gajadhar, Scott Dahm, et al.
In 2015, W. M. Keck Observatory conducted a study of the feasibility of conducting nighttime operations on Maunakea without any staff on the mountain. The study was motivated by the possibility of long-term operational cost savings as well as other expected benefits. The goals of the study were to understand the technical feasibility and risk and to provide labor and cost estimates for implementation. The results of the study would be used to inform a decision about whether or not to fund and initiate a formal project aimed at the development of this new unattended nighttime operating capability. In this paper we describe the study process and briefly summarize the results, including the identified viable design alternative, the risk analysis, and the scope of work. We also share the decisions made as a result of the study and the current status of related follow-on activity.
Requirements management for Gemini Observatory: a small organization with big development projects
Madeline Close, Andrew Serio, Martin Cordova, et al.
Gemini Observatory is an astronomical observatory operating two premier 8m-class telescopes, one in each hemisphere. As an operational facility, a majority of Gemini’s resources are spent on operations; however, the observatory undertakes major development projects as well. Current projects include new facility science instruments, an operational paradigm shift to full remote operations, and new operations tools for planning, configuration, and change control. Three years ago, Gemini determined that a specialized requirements management tool was needed. Over the next year, the Gemini Systems Engineering Group investigated several tools, selected one for a trial period, and configured it for use. Configuration activities included definition of systems engineering processes, development of a requirements framework, and assignment of project roles to tool roles. Test projects were implemented in the tool. At the conclusion of the trial, the group determined that Gemini could meet its requirements management needs without a specialized requirements management tool, and it identified a number of lessons learned, which are described in the last major section of this paper. These lessons learned include how to conduct an organizational needs analysis prior to pursuing a tool; caveats concerning tool criteria and the selection process; the prerequisites and sequence of activities necessary to achieve an optimum configuration of the tool; the need for adequate staff resources and staff training; and a special note regarding organizations in transition and the archiving of requirements.
Antarctic Surveying Telescope (AST3-3) NIR camera for the Kunlun Infrared Sky Survey (KISS): thermal optimization and system performance
Jessica R. Zheng, Jon Lawrence, Robert Content, et al.
The Antarctic Survey Telescope (AST3-3) near-infrared (NIR) camera is designed to conduct the Kunlun Infrared Sky Survey, which will provide a comprehensive exploration of the time-varying Universe in the near infrared. It will be located at Dome A, on the Antarctic plateau, one of the lowest-background sites in the Kdark band (2.4 μm). Careful control of thermal emission from the telescope and the Kdark camera is essential to realize background-limited operation. We set up a scattering and thermal-emission model of the whole system to optimize the camera performance. An exposure-time calculator was also built to predict system performance.
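The steep temperature dependence that makes a cold site attractive at Kdark follows directly from the Planck law. The sketch below compares the photon radiance of a blackbody at two illustrative temperatures; it is a generic calculation, not the authors' thermal-emission model.

```python
# Rough sketch of why cold optics matter at Kdark (2.4 um): thermal photon
# emission follows the Planck law, so the background drops steeply with
# temperature.  Temperatures below are illustrative only.
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_photon_radiance(wl_m, t_k):
    """Blackbody spectral photon radiance [photons / s / m^2 / sr / m]."""
    x = H * C / (wl_m * KB * t_k)
    return 2 * C / wl_m**4 / (math.exp(x) - 1)

wl = 2.4e-6
for t in (210.0, 270.0):          # e.g. Antarctic winter vs. temperate site
    print(f"T = {t:5.1f} K -> {planck_photon_radiance(wl, t):.3e} ph/s/m^2/sr/m")
```

The emission ratio between the two temperatures is several hundred, which is why emissivity and temperature of every warm surface enter the background budget.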
Using tailored methodical approaches to achieve optimal science outcomes
The science community is actively engaged in research, development, and construction of instrumentation projects that they anticipate will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations [2]. The combined use of internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects lead to unpredictable, suboptimal outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management is accepted worldwide as an effective methodology for controlling project cost, schedule, and scope. Systems engineering is an accepted method for ensuring that the outcomes of a project match the intent of the stakeholders, or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which success in projects that support science is optimized.
SALT tracker upgrade utilizing aerospace processes and procedures
Raoul van den Berg, Chris Coetzee, Ockert Strydom, et al.
The SALT Tracker was originally designed to carry a payload of approximately 1000 kg. The current loading exceeds 1300 kg and more instrumentation, for example, the Near-Infrared (NIR) arm of the Robert Stobie Spectrograph (RSS), is being designed for the telescope. In general, provision also had to be made to expand the envelope of the tracker payload carrying capacity for future growth as some of the systems on SALT are currently running with small safety margins. It was therefore decided to upgrade the SALT Tracker to be able to carry a payload of 1875 kg.

Before the project "Kick-Off" it became evident that neither SALT nor SAAO had the required standard of formal processes and procedures to execute a project of this nature. The Project Management, Mechanical Design and Review processes and procedures were adopted from the Aerospace Industry and tailored for our application. After training the project team in the application of these processes/procedures and gaining their commitment, the Tracker Upgrade Project was "Kicked-Off" in early May 2013.

The application of these aerospace-derived processes and procedures during the Tracker Upgrade Project was very successful, as shown in this paper. The authors also highlight some details of the implemented processes and procedures, as well as specific challenges that had to be met while executing a project of this nature and technical complexity.
Problem reporting and tracking system: a systems engineering challenge
Vasco Cortez, Bernhard Lopez, Nicholas Whyborn, et al.
The problem reporting and tracking system (PRTS) is the ALMA system used to register operational problems, track unplanned corrective operational maintenance activities, and follow the investigation of all problems or possible issues arising in operations activities. After the PRTS implementation, several issues appeared that ultimately led to poor management of investigations, difficulties in producing KPIs, and loss of information, among other problems. In order to improve PRTS, we carried out a process to review the status of the system, define a set of modifications, and implement a solution, all according to the stakeholder requirements. In this work, we present the methodology applied to define a set of concrete actions, based on an understanding of the complexity of the problem, which ultimately improved the interactions between different subsystems and enhanced communication at different levels.
Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model
Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai’i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis’ method to calculate the error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with the seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces.
With ambient temperature set to 0°C and conditioned temperature set to 20°C, representing the worst-case temperature gradient, the spatial rms wavefront error in units of wavelength is 0.178 (88.69 nm at λ = 500 nm).
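The conversion between the two error units quoted above is a simple scaling by the reference wavelength (0.178 waves is the rounded value; 88.69 nm / 500 nm = 0.17738 waves):

```python
def rms_waves_to_nm(rms_waves, wavelength_nm):
    """Convert an rms wavefront error from waves to nanometres
    at the given reference wavelength."""
    return rms_waves * wavelength_nm
```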
Development of the space telescope mirror construction
D. V. Butova, N. D. Tolstoba, A. G. Fleysher, et al.
In this article we study the properties of silicon carbide by simulating them in software, and their influence on deflections of the mirror surface. The article also shows what small changes in the model lead to. We found some interesting results that can help developers of mirrors.
Meeting the challenges of bringing a new base facility operation model to Gemini Observatory
The aim of the Gemini Observatory’s Base Facilities Project is to provide the capabilities to perform routine night time operations with both telescopes and their instruments from their respective base facilities without anyone present at the summit. Tightening budget constraints prompted this project as both a means to save money and an opportunity to move toward increasing remote operations in the future.

We successfully moved Gemini North nighttime operations to our base facility in Hawaii in November 2015. This is the first 8m-class telescope to move nighttime operations entirely to a base facility. We are currently working on implementing Base Facility Operations (BFO) at Gemini South.

Key challenges for this project include: (1) It is a schedule-driven project: we had to implement the new capabilities by the end of 2015 for Gemini North and the end of 2016 for Gemini South. (2) Resources are limited and shared with operations, which has higher priority than the project. (3) Managing parallel work within the project. (4) Testing, commissioning, and introducing new tools to operational systems without adding significant disruptions to nightly operations. (5) Staff buy-in to the new operational model. (6) The staff involved in the project are spread across two locations separated by 10,000 km and seven time zones. To overcome these challenges, we applied two principles: "Bare Minimum" and "Gradual Descent". As a result, we successfully completed the project ahead of schedule at the Gemini North telescope. I will discuss how we managed the cultural and human aspects of the project through these concepts. The other management aspects will be presented by Gustavo Arriagada [2], the Project Manager of this project. For technical details, please see presentations from Andrew Serio [3] and Martin Cordova [4].
Management aspects of Gemini's base facility operations project
Gemini’s Base Facilities Operations (BFO) Project provided the capabilities to perform routine nighttime operations without anyone on the summit. The expected benefits were to achieve money savings and to become an enabler of the future development of remote operations.

The project was executed using a tailored version of Prince2 project management methodology.

It was schedule driven, and managing it demanded flexibility and creativity to produce what was needed while taking into consideration all the constraints present at the time:

- The time available to implement BFO at Gemini North (GN) was two years.
- The project had to be done in a matrix-resources environment.
- Only three resources were assigned exclusively to BFO.
- The implementation of new capabilities had to be done without disrupting operations.
- We needed to succeed in introducing the new operational model, which implied telescope and instrumentation operators (Science Operations Specialists, SOS) relying on technology to assess summit conditions.

To meet the schedule, we created a large number of concurrent smaller projects called Work Packages (WP).

To ensure that we would successfully implement BFO, we initially spent a good portion of time and effort collecting and learning about users' needs. This was done through close interaction with SOSs, Observers, Engineers and Technicians.

Once we had a clear understanding of the requirements, we took the approach of implementing the "bare minimum" necessary technology that would meet them and that would be maintainable in the long term.

Another key element was the introduction of the "gradual descent" concept. Under this concept, we incrementally provided tools to the SOSs and Observers that removed the need to go outside the control room during nighttime operations, giving them the opportunity to familiarize themselves with the new tools over a time span of several months. Also, by using these tools at an early stage, Engineers and Technicians had more time for debugging, problem fixing, and training in system usage and servicing.
Diffraction modeling of finite subband EFC probing on dark hole contrast with WFIRST-CGI shaped pupil coronagraph
The current coronagraph instrument (CGI) design, part of the proposed NASA WFIRST (Wide-Field InfraRed Survey Telescope) mission, allocates two subband filters per full science band in order to contain system complexity and cost. We present detailed investigation results on the adequacy of such a limited number of finite subband filters in achieving full-band dark hole contrast with a shaped pupil coronagraph. The study is based on diffraction propagation modeling with realistic WFIRST optics, where each subband’s complex field estimate is obtained, using the Electric Field Conjugation (EFC) wavefront sensing and control algorithm, from pairwise pupil-plane deformable mirror (DM) probing and image-plane intensity averaging of the resulting fields at multiple (subband) wavelengths. Multiple subband choices and probing and control strategies are explored, including standard subband probing; mixed-wavelength and/or weighted Jacobian matrices; subband probing with intensity subtraction; and extended subband probing with intensity subtraction. Overall, the investigation shows that the contrast achievable with a limited number of finite subband EFC probes is about 2–2.5x worse than the designed post-EFC contrast for the current SPC design. The result suggests that future shaped pupil designs should consider a slightly larger bandwidth than the intended full bandwidth if they will be used with a limited number of subbands for probing.
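The pairwise probing step underlying EFC estimation can be sketched as a small linear inversion. This is a single-pixel, textbook illustration under the assumption that the field changes dE_k injected by each DM probe are known from the diffraction model; it is not the WFIRST/CGI implementation.

```python
import numpy as np

def estimate_field(dI, probe_fields):
    """Estimate the complex focal-plane field E at one pixel from
    pairwise probe difference images.

    For probe k applied with both signs, the difference image obeys
    dI[k] = I_plus[k] - I_minus[k] = 4 * Re(conj(dE[k]) * E),
    where dE[k] is the model-known field change from probe k.  With
    two or more probes this is a linear least-squares problem for
    (Re E, Im E).
    """
    H = np.array([[4.0 * p.real, 4.0 * p.imag] for p in probe_fields])
    x, *_ = np.linalg.lstsq(H, np.asarray(dI, dtype=float), rcond=None)
    return complex(x[0], x[1])
```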
Radiometric model for the stereo camera STC onboard the BepiColombo ESA mission
Vania Da Deppo, Elena Martellato, Emanuele Simioni, et al.
The STereoscopic imaging Channel (STC) is one of the instruments on board the BepiColombo mission, an ESA/JAXA Cornerstone mission dedicated to the investigation of the planet Mercury. STC is part of the Spectrometers and Imagers for MPO BepiColombo Integrated Observatory SYStem (SIMBIO-SYS) suite. STC's main scientific objective is the 3D global mapping of the entire surface of Mercury with a mean scale factor of 55 m per pixel at periherm.

To determine the design requirements and to model the on-ground and in-flight performance of STC, a radiometric model has been developed. In particular, STC optical characteristics have been used to define the instrument response function. Depending on the application, i.e. simulating in-flight or on-ground performance, different input sources can be considered: the expected radiance of Mercury, the measured Optical Ground Support Equipment (OGSE) integrating sphere radiance, or calibrated stellar fluxes.

Primary outputs of the model are the expected signal per pixel, expressed as a function of the integration time, and its signal-to-noise ratio (SNR). These outputs then allow calculation of the most appropriate integration times to be used during the different phases of the mission, in particular for the images taken during the on-ground calibration campaign and for the in-flight ones, i.e. surface imaging along the orbit around Mercury and stellar calibration acquisitions.

This paper describes the radiometric model structure philosophy and the input and output parameters, and presents the radiometric model derived for STC. The predictions of the model are compared with measurements obtained during the Flight Model (FM) ground calibration campaign. The results show that the model is valid: the simulated values are in good agreement with the measured ones.
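The core of a radiometric model of this kind — radiance in, photoelectrons and SNR out — can be sketched as follows. This is a generic extended-source sketch, not the STC model; all parameter names and values are illustrative.

```python
import math

PLANCK = 6.62607015e-34   # Planck constant [J s]
C_LIGHT = 2.99792458e8    # speed of light [m/s]

def signal_electrons(radiance, wavelength, bandwidth, aperture_area,
                     pixel_solid_angle, transmission, qe, t):
    """Photoelectrons per pixel for an extended source.

    radiance in W m^-2 sr^-1 m^-1, wavelength and bandwidth in m,
    aperture_area in m^2, pixel_solid_angle in sr, t in s.
    """
    power = radiance * bandwidth * aperture_area * pixel_solid_angle * transmission
    photon_energy = PLANCK * C_LIGHT / wavelength
    return power / photon_energy * qe * t

def snr(signal, dark, read_noise):
    """Photon-noise-limited SNR with dark charge (e-) and read noise (e- rms)."""
    return signal / math.sqrt(signal + dark + read_noise ** 2)
```

The signal is linear in integration time, so the calculator can be inverted to find the exposure time that reaches a target SNR.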
Cooling a solar telescope enclosure: plate coil thermal analysis
Michael Gorman, Chriselle Galapon, Guillermo Montijo Jr., et al.
The climate of Haleakalā requires the observatories to actively adapt to changing conditions in order to produce the best possible images. Observatories need to be maintained at a temperature closely matching ambient, or the images become blurred and unusable. The Daniel K. Inouye Solar Telescope is unique in that it will be active during the day, as opposed to the other, night-time stellar observatories. This means that it must constantly match the ever-changing temperature not only during the day, but also during the night, so as not to sub-cool and affect the seeing of other telescopes while they are in use.

To accomplish this task, plate coil heat exchanger panels will be installed on the DKIST enclosure that are designed to keep the temperature at ambient +0°C/-4°C. To verify the feasibility of this and to validate the design models, a test rig has been installed at the summit of Haleakalā. The project’s purpose is to confirm that the plate coil panels are capable of maintaining this temperature throughout all seasons; it involved collecting data sets of variables including pressures, temperatures, coolant flows, solar radiation and wind velocities during typical operating hours. Using MATLAB, a script was written to analyze the plate coil’s thermal performance. The plate coil did not perform as expected, achieving a surface temperature that was generally 2°C above ambient. This is not to say that the plate coil does not work: the small chiller used for the experiment was undersized, so the coolant pumped through the plate coil was not supplied at a low enough temperature. Calculated heat depositions were about 23% lower than those used as the basis of the design for the chillers to be used on the full system — a reasonable agreement given that many simplifying assumptions were used in the models. These were not carried over into the testing.

The test rig performance, showing a 23% margin, provides a high degree of confidence in the performance of the full system when it is installed. If time allows, additional testing could be done to include additional incident angles and times of day, allowing a more complete analysis. If additional testing were to be performed, it is recommended to use a larger chiller capable of reaching lower temperatures. The test rig design could also be optimized in order to bring the plate coil up to its maximum efficiency. In the future, the script could be rewritten in a different computer language so that the data could be processed more quickly. Further analysis could also include different types of coolants.
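The heat actually removed by the coolant loop — the quantity compared against the calculated heat deposition above — follows from a simple energy balance. This is a generic sketch with illustrative numbers, not the project's MATLAB script.

```python
def coolant_heat_removal(mass_flow, cp, t_out, t_in):
    """Heat absorbed by the coolant [W]: Q = m_dot * cp * (T_out - T_in).

    mass_flow in kg/s, specific heat cp in J/(kg K),
    coolant outlet/inlet temperatures in degrees C or K.
    """
    return mass_flow * cp * (t_out - t_in)

# Example: 0.5 kg/s of a water/glycol mix (cp ~ 3600 J/kg/K)
# warming by 3 K removes 0.5 * 3600 * 3 = 5400 W from the panel.
```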
The Atacama Large Millimeter/sub-millimeter Array band-1 receiver
Yau De (Ted) Huang, Oscar Morata, Patrick Michel Koch, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) Band 1 receiver covers the 35-50 GHz frequency band. Development of prototype receivers, including the key components and subsystems, has been completed, and two sets of prototype receivers were fully tested. We provide an overview of the ALMA Band 1 science goals and the receiver's requirements and design. The receiver development status is also discussed, and the infrastructure, integration, and evaluation of the fully assembled Band 1 receiver system are covered. Finally, the technical and management challenges encountered are presented.
Point spread function computation in normal incidence for rough optical surfaces
The Point Spread Function (PSF) specifies the angular resolution of optical systems, a key parameter used to define the performance of most optics. A prediction of the system's PSF is therefore a powerful tool to assess the design and manufacturing requirements of complex optical systems. Currently, well-established ray-tracing routines based on geometrical optics are used for this purpose. However, those ray-tracing routines either lack real surface defect considerations (figure errors or micro-roughness) in their computation, or they include a separately modeled scattering effect that requires assumptions difficult to verify. Given the increasing demand for tighter angular resolution, surface-finish errors can drastically degrade the optical performance of a system, including optical telescope systems. A purely physical-optics approach is more effective, as it remains valid regardless of the shape and size of the defects on the optical surface. However, the computation, when performed in two-dimensional space, is time consuming, since it requires processing a surface map with a resolution of a few microns and sometimes extends the propagation to multiple reflections. The computation is significantly simplified in the far-field configuration, as it involves only a sequence of Fourier transforms. We show how to account for measured surface defects and roughness in order to predict the performance of the optics in single reflection, which can be applied and validated for real case studies.
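The far-field simplification described above — PSF from a Fourier transform of the complex pupil function — can be sketched as follows. This is a minimal normal-incidence illustration, assuming a measured height-error map `height_map` on a regular grid; sampling, windowing, and multi-reflection handling from the paper are omitted.

```python
import numpy as np

def far_field_psf(height_map, wavelength, pupil=None):
    """Far-field PSF of a reflective surface with height errors.

    In normal-incidence reflection a height error h adds a phase
    4*pi*h/lambda (the factor 2 from reflection times 2*pi/lambda).
    The far-field amplitude is the Fourier transform of the complex
    pupil function; the PSF is its squared modulus, normalised to
    unit total energy.
    """
    if pupil is None:
        pupil = np.ones_like(height_map)
    field = pupil * np.exp(1j * 4.0 * np.pi * height_map / wavelength)
    amplitude = np.fft.fftshift(np.fft.fft2(field))
    psf = np.abs(amplitude) ** 2
    return psf / psf.sum()
```

For a perfect surface all the energy lands in the central pixel; micro-roughness redistributes it into the scattering wings.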
4MOST systems engineering: from conceptual design to preliminary design review
Olga Bellido-Tirado, Steffen Frey, Samuel C. Barden, et al.
The 4MOST Facility is a high-multiplex, wide-field, fibre-fed spectrograph system for the ESO VISTA telescope. It aims to create a world-class spectroscopic survey facility unique in its combination of wide-field multiplex, spectral resolution, spectral coverage, and sensitivity. At the end of 2014, after a successful concept optimization design phase, 4MOST entered its Preliminary Design Phase. Here we present the process and tools adopted during the Preliminary Design Phase to define the subsystem specifications, coordinate the interface control documents, and draft the system verification procedures.
Preliminary design of the HARMONI science software
Laure Piqueras, Aurelien Jarno, Arlette Pécontal-Rousset, et al.
This paper introduces the science software of HARMONI. The Instrument Numerical Model simulates the instrument from the optical point of view and provides synthetic exposures simulating detector readouts from data-cubes containing astrophysical scenes. The Data Reduction Software converts raw-data frames into a fully calibrated, scientifically usable data cube. We present the functionalities and the preliminary design of this software, describe some of the methods and algorithms used and highlight the challenges that we will have to face.
CARMENES system engineering
A. Pérez-Calpena, W. Seifert, P. Amado, et al.
CARMENES is a high-resolution spectrograph built for the 3.5m telescope at the Calar Alto Observatory by a consortium of 11 German and Spanish institutions. CARMENES is composed of two separate, highly stabilized spectrographs covering the VIS and NIR wavelength ranges to provide high-accuracy radial-velocity measurements with long-term stability. The technical and managerial complexity of the instrument, with a fixed project deadline, demanded strong systems engineering control to preserve the high-level requirements during the development, manufacturing, assembly, integration and verification phases.
ALMA release management: a practical approach
The ALMA software is a large collection of modules for implementing all the functionality needed by the observatory's day-to-day operations from proposal preparation to the scientific data delivery. ALMA software subsystems include among many others: array/antenna control, correlator, telescope calibration, submission and processing of science proposals and data archiving.

The implementation of new features and improvements for each software subsystem must be closely coordinated with observatory milestones, the need to rapidly respond to operational issues, regular maintenance activities, and the testing resources available to verify and validate new and improved software capabilities. This paper describes the main issues detected in managing all these factors together and the different approaches used by the observatory in the search for an optimal solution.

In this paper, we describe the software delivery process adopted by ALMA during the construction phase and its further evolution in early operations. We also present the acceptance process implemented by the observatory for the validation of the software before it can be used for science observations. We provide details of the main roles and responsibilities during software verification and validation as well as their participation in the process for reviewing and approving changes into the accepted software versions.

Finally, we present ideas on how these processes should evolve in the near future, considering the operational reality of the ALMA observatory as it moves into full operations, and summarize the progress implementing some of these ideas and lessons learnt.
NELIOTA: ESA's new NEO lunar impact monitoring project with the 1.2m telescope at the National Observatory of Athens
Alceste Bonanos, Alexios Liakos, Manolis Xilouris, et al.
NELIOTA is a new ESA activity launched at the National Observatory of Athens in February 2015, aiming to determine the distribution and frequency of small near-Earth objects (NEOs) via lunar monitoring. The objective of this 3.5-year activity is to design, develop and implement a highly automated lunar monitoring system, which will conduct an observing campaign for 2 years, starting in the summer of 2016, in search of NEO impact flashes on the Moon. The project involves: (i) a complete refurbishment of the 40-year-old 1.2m Kryoneri telescope of the National Observatory of Athens, (ii) development of a lunar imager for the prime focus with two fast-frame sCMOS cameras, and (iii) procurement of servers for data processing and storage. Furthermore, we have developed a software system that controls the telescope and the cameras, processes the images and automatically detects lunar flashes. NELIOTA provides a web-based user interface, where the impact events, after their verification and characterization, will be reported and made available to the scientific community and the general public. The novelty of this project is the dedication of a large, 1.2m telescope to lunar monitoring, which is expected to characterize the frequency and distribution of NEOs weighing as little as a few grams.
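The abstract does not specify the detection algorithm, but the core of automated flash detection in a stack of co-registered fast frames can be illustrated with a robust frame-differencing sketch. This is an assumption-laden illustration, not the NELIOTA pipeline; the MAD-based noise estimate is used so a single bright transient does not inflate its own detection threshold.

```python
import numpy as np

def detect_flashes(frames, nsigma=8.0):
    """Flag candidate impact-flash pixels in a stack of co-registered frames.

    Each frame is compared against the per-pixel temporal median; the
    per-pixel noise scale is a robust MAD (median absolute deviation)
    estimate.  Returns (frame_index, y, x) tuples of candidate pixels.
    """
    frames = np.asarray(frames, dtype=float)
    med = np.median(frames, axis=0)
    mad = np.median(np.abs(frames - med), axis=0)
    sigma = 1.4826 * mad + 1e-6          # MAD -> Gaussian-equivalent sigma
    hits = np.argwhere(frames - med > nsigma * sigma)
    return [tuple(int(i) for i in h) for h in hits]
```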
Design and test of a tip-tilt driver for an image stabilization system
Albert Casas, José María Gómez, David Roma, et al.
The tip/tilt driver is part of the Polarimetric and Helioseismic Imager (PHI) instrument for the ESA Solar Orbiter (SO) mission, which is scheduled to launch in 2017. PHI captures polarimetric images of the Sun to better understand our nearest star. The paper covers an analog amplifier design to drive capacitive solid-state actuators such as piezoelectric actuators. Due to their static and continuous operation, the actuators need to be supplied with high-quality, low-frequency, high-voltage sinusoidal signals. The described circuit is an efficiency-improved Class-AB amplifier capable of recovering up to 60% of the charge stored in the actuator. The results obtained from the qualification model tests demonstrate the feasibility of the circuit and its compliance with the requirements fixed by the scientific team.
SimCADO: an instrument data simulator package for MICADO at the E-ELT
K. Leschinski, O. Czoske, R. Köhler, et al.
MICADO will be the first-light wide-field imager for the European Extremely Large Telescope (E-ELT) and will provide diffraction-limited imaging (7 mas at 1.2 μm) over a ~53 arcsecond field of view. In order to support various consortium activities we have developed a first version of SimCADO: an instrument simulator for MICADO. SimCADO uses the results of the detailed simulation efforts conducted for each of the separate consortium-internal work packages in order to generate a model of the optical path from source to detector readout. SimCADO is thus a tool to provide scientific context to both the science and instrument development teams, who are ultimately responsible for the final design and future capabilities of the MICADO instrument. Here we present an overview of the inner workings of SimCADO and outline our plan for its further development.
Simulating the LSST OCS for conducting survey simulations using the LSST scheduler
Michael A. Reuter, Kem H. Cook, Francisco Delgado, et al.
The Operations Simulator was used to prototype the Large Synoptic Survey Telescope (LSST) Scheduler. Currently, the Scheduler is being developed separately to interface with the LSST Observatory Control System (OCS). A new Simulator is under concurrent development to adapt to this new architecture. This requires a package simulating enough of the OCS to allow execution of realistic schedules. This new package is called the Simulated OCS (SOCS). In this paper we detail the SOCS construction plan, its package structure, and its use of the LSST communication middleware platform; we also provide some interesting use cases that the separated architecture allows, and describe the software engineering practices used in development.
Quality initiative at ESO
An initiative is under way at ESO Headquarters to optimise operations, in particular in the engineering, technical and associated management areas. A systematic approach to strengthen the operating processes is in preparation, starting with a mapping of the extensive existing process network. Processes identified as sufficiently important and complex to merit an in-depth analysis will be properly specified and their implementation optimised to strike a sensible balance between organisational overhead (documentation) and efficiency. By applying methods and tools tried and tested in industry we expect to achieve a more unified approach to address recurrent tasks. This will enable staff to concentrate more on new challenges and improvement and avoid spending effort on issues already resolved in the past.
SHARK-NIR system design analysis overview
Valentina Viotto, Jacopo Farinato, Davide Greggio, et al.
In this paper, we present an overview of the System Design Analysis carried out for SHARK-NIR, the coronagraphic camera designed to take advantage of the outstanding performance that can be obtained with the FLAO facility at the LBT in the near-infrared regime. Born as a fast-track project, the system now foresees both coronagraphic direct imaging and spectroscopic observing modes, together with a first-order wavefront correction tool. The analysis we report here includes several trade-offs for the selection of the baseline design, in terms of optical and mechanical engineering, and the choice of the coronagraphic techniques to be implemented, to satisfy both the main scientific drivers and the technical requirements set at the level of the telescope. Further care has been taken over possible synergies with other LBT instrumentation, such as LBTI. A set of system specifications is then flowed down from the upper-level requirements to ensure the fulfillment of the science drivers. The preliminary performance budgets are presented, both in terms of the stability of the main optical planes and of the image quality, including the contributions of the main error sources in different observing modes.
Integrated opto-dynamic modeling of the 4m DAG telescope image quality performance
Lorenzo Zago, Benjamin Guex, Cahit Yesilyaprak, et al.
The Turkish DAG 4-m telescope is currently in its final design stage. It will be located on a 3170 m mountain top in Eastern Anatolia. The telescope will be a state-of-the-art facility: an alt-az mount with an active primary and adjustable secondary and tertiary mirrors. Its optical design is specifically aimed at compatibility with advanced adaptive optics instrumentation. The ultimate performance of such a telescope results from multiple concurrent effects of many different components and active functions of the complex system.

The paper presents a comprehensive integrated (end-to-end) model of the telescope, comprising in one computational sequence all structural, electrodynamic, and active optics effects that produce the image quality at the focal plane. The model is entirely programmed in Matlab/Simulink and comprises a finite element model of the structure and mirrors, dynamic modal reduction, deformation analyses of structural and optical elements, and active optics feedback control in the Zernike modal space.
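The projection of a deformed optical surface into the Zernike modal space, as used in the feedback control described above, amounts to a least-squares fit of the wavefront map onto a modal basis. The sketch below uses only the first four Zernike terms (piston, tip, tilt, defocus) on the unit disk; it is a generic illustration, not the DAG Matlab/Simulink code.

```python
import numpy as np

def zernike_basis(x, y):
    """First four Zernike terms (piston, tip, tilt, defocus)
    evaluated on unit-disk coordinates."""
    r2 = x ** 2 + y ** 2
    return np.stack([np.ones_like(x), x, y, 2.0 * r2 - 1.0], axis=-1)

def fit_modes(x, y, w):
    """Least-squares projection of a wavefront map w(x, y)
    onto the modal basis; returns the modal coefficients."""
    A = zernike_basis(x, y).reshape(-1, 4)
    coeffs, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)
    return coeffs
```

In an active optics loop, the fitted coefficients (minus piston/tip/tilt, which the mount handles) drive the mirror support force corrections.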
SysML model of exoplanet archive functionality and activities
Solange Ramirez
The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will present a SysML model that helps us perform these update activities on a weekly basis.
Optomechanical design software for segmented mirrors
The software package presented in this paper, still under development, was born to help analyze the influence of the many parameters involved in the design of a large segmented mirror telescope. In summary, it is a set of tools added to a common framework as they were needed. Great emphasis has been placed on the graphical presentation, as scientific visualization nowadays cannot be conceived without a helpful 3D environment showing the analyzed system as close to reality as possible. Use of third-party software is limited to ANSYS, which need be available on the system only if FEM results are required. Among the various functionalities of the software, the following are worth mentioning here: automatic 3D model construction of a segmented mirror from a set of parameters, geometric ray tracing, automatic 3D model construction of a telescope structure around the defined mirrors from a set of parameters, segmented mirror human access assessment, analysis of integration tolerances, assessment of segment collisions, structural deformation under gravity and thermal variation, mirror support system analysis including warping harness mechanisms, etc.
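The first functionality listed — building a segmented mirror model from a set of parameters — starts from generating the segment layout itself. A minimal sketch of that step, assuming a hexagonal grid of segments in concentric rings around an empty central position (the usual segmented-primary arrangement), could look like:

```python
import math

def hex_segment_centers(rings, pitch):
    """Centres (x, y) of hexagonal segments arranged in `rings`
    concentric rings around a missing central segment.

    `pitch` is the centre-to-centre distance between adjacent
    segments.  Uses axial hex coordinates (q, r) with the cube
    constraint s = -q - r; the hex distance from the centre is
    max(|q|, |r|, |s|).
    """
    centers = []
    for q in range(-rings, rings + 1):
        for r in range(-rings, rings + 1):
            s = -q - r
            dist = max(abs(q), abs(r), abs(s))
            if dist == 0 or dist > rings:   # skip centre and beyond outer ring
                continue
            x = pitch * (q + r / 2.0)
            y = pitch * (math.sqrt(3.0) / 2.0) * r
            centers.append((x, y))
    return centers
```

Ring n contributes 6n segments, so two rings give 18 segments; each centre then seeds the automatic construction of a 3D segment model.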
Making the most of MBSE: pragmatic model-based engineering for the SKA Telescope Manager
Gerhard Le Roux, Alan Bridger, Mike MacIntosh, et al.
Many large projects, including major astronomy projects, are adopting a Model-Based Systems Engineering approach. How far is it possible to get value for the effort involved in developing a model that accurately represents a significant project such as SKA? Is it possible for such a large project to ensure that high-level requirements are traceable through the various systems-engineering artifacts? Is it possible to utilize the tools available to produce meaningful measures of the impact of change?

This paper shares one aspect of the experience gained on the SKA project. It explores some of the recommended and pragmatic approaches developed to get the maximum value from the modeling activity while designing the Telescope Manager for the SKA. While it is too early to provide specific measures of success, certain areas are proving to be the most helpful and offer significant potential over the lifetime of the project.

The experience described here is based on the 'Cameo Systems Modeler' tool-set, supporting a SysML-based systems engineering approach; however, the concepts and ideas covered would potentially be of value to any large project considering a model-based approach to its systems engineering.
Tolerancing a radial velocity spectrometer within Zemax
Techniques are described for tolerancing a radial velocity spectrometer system within Zemax, including how to set up and verify the tolerancing model, the performance metrics and tolerance operands used, as well as post-Zemax analysis methods. Use of the tolerancing model for various analyses will be discussed, such as alignment sensitivity, radial velocity sensitivity, and the sensitivity of the optical system to temperature changes. Tolerance results from the Keck Planet Finder project (a precision radial velocity spectrometer of asymmetric white-pupil design) will be shown.
A method for generating a synthetic spectrum within Zemax
Steven R. Gibson, Edward H. Wishnow
A method using non-sequential Zemax to produce a pixelated synthetic spectrum is described. This simulation was developed for the Keck Planet Finder (KPF) instrument, and will prove useful for engineering performance analyses (stability, stray light, order cross-talk, distortion, etc.). It has also provided a set of synthetic spectra to be used during the development of the data pipeline. Various aspects concerning the construction of the spectrum are described, including: converting a model from sequential to non-sequential Zemax, the creation of Zemax coating files for echelle blaze functions, and the generation of spectrum source files (solar, thorium-argon, incandescent, Fabry-Perot etalon and laser frequency comb).
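As an illustration of the echelle blaze functions mentioned above, the standard scalar sinc² model gives the efficiency envelope of one grating order. This is a generic textbook approximation, not the Zemax coating files the paper describes, and the grating parameters below (an R4-style echelle) are hypothetical rather than KPF design values.

```python
import numpy as np

def blaze_efficiency(wavelength_nm, order, blaze_angle_deg, groove_density_per_mm):
    """Scalar sinc^2 approximation of echelle blaze efficiency
    (Littrow configuration); illustrative only."""
    theta_b = np.radians(blaze_angle_deg)
    d_nm = 1e6 / groove_density_per_mm                # groove spacing in nm
    lam_blaze = 2.0 * d_nm * np.sin(theta_b) / order  # Littrow blaze wavelength
    x = order * (1.0 - lam_blaze / wavelength_nm)
    return np.sinc(x) ** 2                            # np.sinc includes the pi factor

# Efficiency envelope across one order of a hypothetical R4-style echelle
lam = np.linspace(530.0, 570.0, 401)
eff = blaze_efficiency(lam, order=112, blaze_angle_deg=76.0, groove_density_per_mm=31.6)
```

The efficiency peaks at the blaze wavelength of each order and falls off toward the order edges, which is what produces the characteristic intensity envelope across an echellogram.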
LSST telescope modeling overview
J. Sebag, J. Andrew, G. Angeli, et al.
During this early stage of construction of the Large Synoptic Survey Telescope (LSST), modeling has become a crucial systems engineering process to ensure that the final detailed design of all the sub-systems that compose the telescope meets requirements and interfaces. Modeling includes multiple tools and types of analyses that are performed to address specific technical issues. Three-dimensional (3D) Computer-aided Design (CAD) modeling has become central for controlling interfaces between subsystems and identifying potential interferences. The LSST Telescope dynamic requirements are challenging because of the nature of the LSST survey, which requires a high cadence of rapid slews and short settling times. The combination of finite element methods (FEM), coupled with control system dynamic analysis, provides a method to validate these specifications. An overview of these modeling activities is reported in this paper, including specific cases that illustrate their impact.
Daniel K. Inouye Solar Telescope systems engineering update
The Daniel K. Inouye Solar Telescope (DKIST), formerly the Advanced Technology Solar Telescope (ATST), is now in its sixth year of construction. During the two years that have elapsed since our last systems engineering update we have been through factory acceptance of several major subsystems including the enclosure, telescope mount assembly, and the primary mirror. With these major milestones behind us, site assembly in progress, and with the integration, test, and commissioning phase about to begin, we will discuss what has been working well in terms of DKIST systems engineering processes along with some things we could have done better and would do differently if given another chance. The paper examines examples of successes including full-scale factory assembly of major mechanical components and some less optimum outcomes. We explore the reasons for success or failure, including the early delivery and level of detail in factory acceptance test procedures.
End-to-end modeling: a new modular and flexible approach
In this paper we present an innovative philosophy for developing the End-to-End model for astronomical observation projects, i.e. the architecture which allows physical modeling of the whole system from the light source to the reduced data. This alternative philosophy foresees the development of the physical model of the different modules, which compose the entire End-to-End system, directly during the project design phase. This approach is strongly characterized by modularity and flexibility; these aspects will be of particular importance in next-generation astronomical observation projects like the E-ELT (European Extremely Large Telescope) because of their high complexity and long design and development times. With this approach it will be possible to keep the whole system and its different modules efficiently under control during every project phase, and to exploit a reliable tool at the systems engineering level to evaluate the effects on the final performance of both the main parameters and the different instrument architectures and technologies. This philosophy will be important in allowing the scientific community to perform simulations and tests on the scientific drivers in advance. This will translate into continuous feedback to the (system) design process, with a resulting improvement in the effectively achievable scientific goals, and a consistent tool for efficiently planning observation proposals and programs. We present the application case for this End-to-End modeling technique: the high resolution spectrograph for the E-ELT (E-ELT HIRES). In particular, we present the definition of the system's modular architecture, describing the interface parameters of the modules.
Systems engineering overview and concept of operations of the COronal Solar Magnetism Observatory (COSMO)
P. H. H. Oakley, S. Tomczyk, S. Sewell, et al.
The COronal Solar Magnetism Observatory (COSMO) is a proposed facility with unique capabilities for magnetic field measurements in the solar atmosphere and corona to increase our understanding of solar physics and space weather. The observatory underwent a preliminary design review (PDR) in 2015. This paper summarizes the systems engineering plan for this facility as well as a preliminary overview of the concept of operations. In particular, we detail the flow of science requirements to engineering requirements, and give an overview of requirements management, documentation management, interface control, and the overall verification and compliance processes. Operationally, we discuss the categories of operational modes as well as a daily operational cycle.
An optical toolbox for astronomical instrumentation
The author has open-sourced a program for optical modeling of astronomical instrumentation. The code allows optical systems to be described in a programming language. An optical prescription may contain coordinate systems and transformations, arbitrary polynomial aspheric surfaces, and complex volumes. Rather than using a plethora of rays to evaluate performance, all the derivatives along a ray are computed by automatic differentiation. By adaptively controlling the patches around each ray, the system can be modeled to a guaranteed, known precision. The code currently consists of fewer than 10,000 lines of C++/stdlib code.
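The idea of computing derivatives along a ray by automatic differentiation can be sketched with forward-mode dual numbers: each arithmetic operation propagates a value and its derivative together, so no finite-difference rays are needed. The conic-sag example below is a minimal illustration in Python, not an excerpt from the author's C++ code.

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode automatic differentiation: val carries the value,
    dot carries the derivative with respect to the input variable."""
    val: float
    dot: float = 0.0

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        o = self._wrap(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._wrap(other)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

    def __truediv__(self, other):
        o = self._wrap(other)
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / (o.val ** 2))

def dsqrt(x: Dual) -> Dual:
    r = math.sqrt(x.val)
    return Dual(r, x.dot / (2.0 * r))

def conic_sag(r: Dual, c: float, k: float) -> Dual:
    """Sag z(r) = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2)) of a conic surface;
    evaluating it on a Dual yields z and dz/dr in a single pass."""
    s = dsqrt(Dual(1.0) + (-(1.0 + k) * c * c) * (r * r))
    return (c * (r * r)) / (Dual(1.0) + s)

# Sag and slope of a parabola (k = -1) at r = 5 mm with curvature c = 0.01 /mm:
# seeding dot = 1.0 makes z.dot equal to dz/dr at that point
z = conic_sag(Dual(5.0, 1.0), c=0.01, k=-1.0)
```

For a parabola the sag reduces to z = c r²/2, so z.val is 0.125 and z.dot is c·r = 0.05, exactly what the dual-number arithmetic returns without any differencing step.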
Optical parametric evaluation model for a broadband high resolution spectrograph at E-ELT (E-ELT HIRES)
M. Genoni, M. Riva, G. Pariani, et al.
We present the details of a paraxial parametric model of a high resolution spectrograph which can be used as a tool, characterized by good approximation and reliability, at the systems engineering level. This model can be exploited to perform a preliminary evaluation of the different parameters, as well as of the different possible architectures, of a high resolution spectrograph like the one under design for the E-ELT (for the moment called E-ELT HIRES, in order to avoid confusion with the HIRES spectrograph at the Keck telescope). The detailed flow of equations describing the first-order effects of all the spectrograph components is presented; in addition, a comparison with the data of a complete physical model of the ESPRESSO spectrograph is presented as a proof of the model.
A green observatory in the Chilean Atacama desert
Michael Ramolla, Christian Westhues, Moritz Hackstein, et al.
Since 2007, the Ruhr-Universität Bochum (RUB) in Germany and the Universidad Católica del Norte (UCN) in Chile have jointly operated the Universitätssternwarte der Ruhr-Universität Bochum (USB), which is located in the direct neighborhood of ESO's future E-ELT. It is the only observatory powered exclusively by solar panels and wind turbines. Excess power is stored in batteries that allow uninterrupted operation even on windless nights. The scientific equipment consists of three robotic optical telescopes with apertures ranging from 15 cm (RoBoTT) over 25 cm (BESTII) to 40 cm (BMT), and one 80 cm infrared telescope (IRIS). The optical telescopes are equipped with Johnson and Sloan broad-band filters together with a large number of narrow and intermediate bands. In the infrared, J, H, and K filters are available, accompanied by several narrow bands near the K-band wavelength. The second Nasmyth focus of the 80 cm telescope feeds a high resolution echelle spectrograph similar to ESO's FEROS instrument. This variety of instruments has evolved from different collaborations, e.g. with the University of Hawaii (IfA) in the USA, which provided the near-infrared camera of the IRIS telescope, or with the Deutsches Zentrum für Luft- und Raumfahrt (DLR) in Germany, which provided the BESTII telescope. The highly automated processes on all telescopes enable a single person to run the whole facility, providing the high cost efficiency required of a university observatory. The excellent site conditions allow projects that require daily observations of astronomical objects over epochs of several months or years. Here we report on such studies of young stellar objects from the Bochum Galactic Disk Survey, the multiplicity of stars, quasar variability, and the hunt for exoplanets.
Vibration measurements of the Daniel K. Inouye Solar Telescope mount, Coudé rotator, and enclosure assemblies
William R. McBride II, Daniel R. McBride
The Daniel K. Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, with a 4-meter off-axis primary mirror and a 16-meter rotating Coudé laboratory within the telescope pier. The off-axis design requires a mount similar to that of an 8-meter on-axis telescope. Both the telescope mount and the Coudé laboratory utilize roller bearing technology in place of the more commonly used hydrostatic bearings. The telescope enclosure utilizes a crawler mechanism for the altitude axis. As these mechanisms have not previously been used in a telescope, understanding the vibration characteristics and the potential impact on the telescope image is important.

This paper presents the methodology used to perform jitter measurements of the enclosure and the mount bearings and servo system in a high-noise environment utilizing seismic accelerometers and high dynamic-range data acquisition equipment, along with digital signal processing (DSP) techniques. Data acquisition and signal processing were implemented in MATLAB.

In the factory acceptance testing of the telescope mount, multiple accelerometers were strategically located to capture the six axes of motion of the primary and secondary mirror dummies. The optical sensitivity analysis was used to map these mirror mount displacements and rotations into units of image motion on the focal plane.

Similarly, tests were done with the Coudé rotator, treating the entire rotating instrument lab as a rigid body.

Testing was performed by recording accelerometer data while the telescope control system performed tracking operations typical of various observing scenarios. The analysis of the accelerometer data utilized noise-averaging fast Fourier transform (FFT) routines, spectrograms, and periodograms. To achieve adequate dynamic range at frequencies as low as 3 Hz, the use of special filters and advanced windowing functions was necessary. Numerous identical automated tests were compared to identify and select the data sets with the lowest level of external interference.

Similar testing was performed on the telescope enclosure during the factory test campaign. The vibration of the enclosure altitude and azimuth mechanisms was characterized.

This paper details jitter tests using accelerometers placed in locations that allowed the motion of the assemblies to be measured while the control system performed various moves typical of on-sky observations. The measurements were converted into the rigid body motion of the structures and mapped into image motion using the telescope's optical sensitivity analysis.
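The noise-averaged FFT analysis described above can be sketched with a basic Welch estimate: split the accelerometer record into overlapping windowed segments, FFT each one, and average the magnitude-squared spectra so that a coherent jitter tone stands out from incoherent noise. This is a generic illustration in Python/NumPy, not the MATLAB processing chain used for DKIST, and the signal parameters are invented.

```python
import numpy as np

def averaged_spectrum(x, fs, nperseg=4096, overlap=0.5):
    """Noise-averaged power spectral density: Hann-windowed overlapping
    segments, FFT each, average the magnitude-squared spectra (a basic
    Welch estimate)."""
    step = int(nperseg * (1.0 - overlap))
    win = np.hanning(nperseg)
    norm = fs * np.sum(win ** 2)             # PSD normalization
    segs = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg]
        seg = (seg - seg.mean()) * win       # detrend and window each segment
        segs.append(np.abs(np.fft.rfft(seg)) ** 2 / norm)
    psd = np.mean(segs, axis=0)
    psd[1:-1] *= 2.0                         # fold to a one-sided spectrum
    return np.fft.rfftfreq(nperseg, d=1.0 / fs), psd

# A 20 Hz jitter tone buried in broadband noise stands out after averaging
rng = np.random.default_rng(0)
fs = 2000.0
t = np.arange(0.0, 60.0, 1.0 / fs)
accel = 0.01 * np.sin(2 * np.pi * 20.0 * t) + 0.05 * rng.standard_normal(t.size)
freqs, psd = averaged_spectrum(accel, fs)
```

Averaging many segments lowers the variance of the noise floor without changing its level, which is what buys the dynamic range needed to see narrow mechanical resonances at low frequencies.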
Production of ELZM mirrors: performance coupled with attractive schedule, cost, and risk factors
Antoine Leys, Tony Hull, Thomas Westerhoff
Extreme Lightweighted ZERODUR Mirrors (ELZM) have been developed to exploit the superb thermal characteristics of ZERODUR. Coupled with up-to-date mechanical and optical fabrication methods, this becomes an attractive technical approach. Moreover, the process of making the mirror substrates has proven to be unusually rapid and especially cost-effective. ELZM is aimed at the knee of the curve of cost as a function of lightweighting. ELZM mirrors are available at 88% lightweighting. Together with their low-risk, low-cost production methods, this is presented as a strong option for NASA Explorer and Probe class missions.
A database for TMT interface control documents
Kim Gillies, Scott Roberts, Allan Brighton, et al.
The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components.

A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system’s database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
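The core of such an interface control document is the intersection of two subsystem descriptions: items one side provides that the other consumes. The sketch below illustrates that idea with plain Python dictionaries; the layout and the subsystem/item names are hypothetical and do not reflect the actual TMT database schema.

```python
def icd_intersection(sub_a, sub_b):
    """Build a toy interface control document as the intersection of two
    subsystem interface descriptions: what one side publishes or sends
    that the other subscribes to or receives. Schema is hypothetical."""
    def shared(provider, consumer, provides, consumes):
        provided = {item["name"] for item in provider.get(provides, [])}
        consumed = {item["name"] for item in consumer.get(consumes, [])}
        return sorted(provided & consumed)
    return {
        "events_a_to_b": shared(sub_a, sub_b, "publishes", "subscribes"),
        "events_b_to_a": shared(sub_b, sub_a, "publishes", "subscribes"),
        "commands_a_to_b": shared(sub_a, sub_b, "sends", "receives"),
        "commands_b_to_a": shared(sub_b, sub_a, "sends", "receives"),
    }

# Hypothetical subsystem interface descriptions (all names invented)
tcs = {"publishes": [{"name": "tcs.pointing"}, {"name": "tcs.time"}],
       "sends": [{"name": "m1cs.setShape"}]}
m1cs = {"subscribes": [{"name": "tcs.pointing"}],
        "receives": [{"name": "m1cs.setShape"}],
        "publishes": [{"name": "m1cs.status"}]}

icd = icd_intersection(tcs, m1cs)
```

Because the ICD is derived mechanically from the two sides' declarations, regenerating it after a change immediately exposes items that one subsystem expects but the other no longer provides.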
Observatory building design: a case study of DAG with infrastructure and facilities
The Eastern Anatolian Observatory (DAG) will be built on one of the well-known mountain ridges of Erzurum, Turkey, at a latitude of 39°46'50", a longitude of 41°13'35", and an altitude of 3,151 meters. As well as erecting the largest telescope in Turkey, the DAG project aims to establish an observatory complex both small in size and functional enough to serve the entire astronomy community. In this paper, the challenge is explained in detail: geological and geographical limitations, environmental and meteorological constraints, engineering and structural considerations, and energy efficiency and sustainability.
High-contrast imaging and high-resolution spectroscopy observation of exoplanets
Ji Wang, Dimitri Mawet, Renyu Hu, et al.
The detection and characterization of exoplanets face the challenges of the small angular separation and high contrast between exoplanets and their host stars. High contrast imaging (HCI) instruments equipped with coronagraphs are built to meet these challenges, providing a way of spatially suppressing and separating stellar flux from that of a planet. Stellar flux can also be separated by high-resolution spectroscopy (HRS), exploiting the fact that spectral features differ between a star and a planet. Observing exoplanets with HCI+HRS will achieve a higher contrast than either the spatial or the spectroscopic method alone, improving the sensitivity of planet detection and enabling the study of physical and chemical processes. Here, we simulate the performance of an HCI+HRS instrument (i.e., the upgraded Keck NIRSPEC and its fiber injection unit) to study its potential for detecting and characterizing currently known directly imaged planets. The simulation considers the spectral information content of an exoplanet, telescope and instrument specifications, and realistic noise sources. The result of the simulation helps set system requirements and informs designs at the system level. We also perform a trade study of an HCI+HRS instrument for a space mission to study an Earth-like planet orbiting a Sun-like star at 10 pc.
AETC: a powerful web tool to simulate astronomical images
Michela Uslenghi, Renato Falomo, Daniela Fantinel
We present the capabilities of the Advanced Exposure Time Calculator (AETC), a tool, publicly available via a web interface (http://aetc.oapd.inaf.it/), aimed at simulating astronomical images obtained with any given telescope and instrument combination. The tool includes the possibility of providing an accurate modelling of PSF variations across the FoV, a crucial issue for realistic simulations, which makes AETC particularly suitable for simulating adaptive optics instruments.

To exemplify AETC's capabilities, we present a number of simulations for specific science cases, useful for studying the capabilities of next-generation AO imaging cameras for Extremely Large Telescopes.
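At the heart of any exposure time calculator sits the textbook CCD signal-to-noise equation. The sketch below illustrates that relation only; it is not the AETC implementation, and all rates and detector parameters are invented.

```python
import math

def point_source_snr(source_rate, sky_rate_per_pix, dark_rate_per_pix,
                     read_noise, n_pix, exptime):
    """Textbook CCD signal-to-noise equation for a point source measured
    in an aperture of n_pix pixels; rates in e-/s, read noise in e-.
    Illustrative of what an exposure time calculator evaluates."""
    signal = source_rate * exptime
    variance = (signal                                  # source shot noise
                + n_pix * ((sky_rate_per_pix + dark_rate_per_pix) * exptime
                           + read_noise ** 2))          # background + detector
    return signal / math.sqrt(variance)

# In the source-dominated limit, SNR approaches sqrt(source counts)
snr_bright = point_source_snr(100.0, 0.0, 0.0, 0.0, 10, 100.0)   # = 100.0
snr_faint = point_source_snr(1.0, 5.0, 0.02, 4.0, 12, 300.0)
```

Bright sources improve as the square root of exposure time, while background-limited sources also pay for every pixel in the aperture, which is why an accurate PSF model (and hence aperture size) matters so much for realistic exposure time estimates.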