Proceedings Volume 9826

Cyber Sensing 2016

Igor V. Ternovskiy, Peter Chin
Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 6 June 2016
Contents: 4 Sessions, 12 Papers, 0 Presentations
Conference: SPIE Defense + Security 2016
Volume Number: 9826

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9826
  • Risks and Mitigations
  • New Concepts for Sensing
  • Learning and Social Aspects of Cyber
Front Matter: Volume 9826
Front Matter: Volume 9826
This PDF file contains the front matter associated with SPIE Proceedings Volume 9826, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Risks and Mitigations
OS friendly microprocessor architecture: Hardware level computer security
Patrick Jungwirth, Patrick La Fratta
We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time; instead, they have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance, secure microprocessor and OS system. We invite cyber security, information technology (IT), and SCADA control professionals to review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For lightweight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism of the cache memory pipeline allow background cache read and write operations while the microprocessor’s execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor’s execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from the level 1 (L1) cache while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
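The permission model described above can be illustrated in miniature. The following Python sketch is purely illustrative and is not the patented OSFA implementation; the CacheBank class, owner field, and access helper are assumptions introduced here only to show how Unix-style rwx bits attached to a cache bank could gate every access.

# Hypothetical sketch (not the patented OSFA design): model Unix-style
# permission bits attached to each cache memory bank, so every access is
# checked before the execution pipeline may touch the bank.
from dataclasses import dataclass

READ, WRITE, EXECUTE = 0b100, 0b010, 0b001  # Unix-style rwx bits

@dataclass
class CacheBank:
    bank_id: int
    owner: int        # owning process/thread id (assumption for illustration)
    perm_bits: int    # rwx bits applied to the whole bank
    data: bytearray

    def check(self, pid: int, requested: int) -> bool:
        """Grant access only to the owner and only for permitted operations."""
        return pid == self.owner and (self.perm_bits & requested) == requested

def access(bank: CacheBank, pid: int, requested: int, offset: int):
    if not bank.check(pid, requested):
        raise PermissionError(f"pid {pid} denied on bank {bank.bank_id}")
    return bank.data[offset] if requested == READ else None

bank = CacheBank(bank_id=0, owner=42, perm_bits=READ | EXECUTE, data=bytearray(64))
print(access(bank, 42, READ, 0))   # permitted: owner with read bit set
try:
    access(bank, 42, WRITE, 0)     # denied: write bit not set on this bank
except PermissionError as e:
    print(e)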
Network reconstruction via graph blending
Rolando Estrada
Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks, where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused nodes (multiple nodes merged into one) and split nodes (a single node misinterpreted as many). Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.
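To make the reduction concrete, here is a minimal sketch, assuming networkx is available: it shows how one kind of node-level error, a split node, can be expressed as an edge decision in a blended graph and then resolved by an edge contraction. The example graph, the candidate_merge attribute, and the accept_merge helper are hypothetical and are not taken from the paper.

# Illustrative sketch only (not the paper's framework): casting a node-level
# error -- a split node -- as an edge decision in a "blended" graph.  A
# candidate merge edge is added; accepting it is realized by contracting it.
import networkx as nx

observed = nx.Graph([("a1", "b"), ("a2", "c"), ("b", "c")])  # node "a" was split into a1, a2

blended = observed.copy()
blended.add_edge("a1", "a2", candidate_merge=True)           # hypothesis: a1 and a2 are the same node

def accept_merge(g: nx.Graph, u: str, v: str) -> nx.Graph:
    """Accepting a candidate merge edge = contracting it (node repair via an edge operation)."""
    return nx.contracted_nodes(g, u, v, self_loops=False)

repaired = accept_merge(blended, "a1", "a2")
print(sorted(repaired.nodes()))   # ['a1', 'b', 'c'] -- a2 folded into a1
print(sorted(repaired.edges()))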
Identifying compromised systems through correlation of suspicious traffic from malware behavioral analysis
Ana E. F. Camilo, André Grégio, Rafael D. C. Santos
Malware may be detected through the analysis of its infection behavior. To do so, dynamic analysis systems run malware samples and extract their operating system activities and network traffic. This traffic may represent malware accessing external systems, either to steal sensitive data from victims or to fetch other malicious artifacts (configuration files, additional modules, commands). In this work, we propose the use of visualization as a tool to identify compromised systems based on correlating malware communications in the form of graphs and finding isomorphisms between them. We produced graphs from over 6,000 distinct network traffic files captured during malware execution and analyzed the existing relationships among malware samples and IP addresses.
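As a rough illustration of the graph-correlation idea (a sketch under assumptions, not the authors' pipeline): each sample's captured traffic becomes a small graph linking it to the IP addresses it contacted, and structural similarity between samples can then be checked with an off-the-shelf isomorphism test. The traffic_graph helper and the example traffic are fabricated; networkx is assumed available.

# Minimal sketch: one graph per malware sample connecting it to contacted IPs,
# then an isomorphism check to find samples with structurally alike traffic.
import networkx as nx
from networkx.algorithms import isomorphism

def traffic_graph(sample: str, contacted_ips: list[str]) -> nx.Graph:
    g = nx.Graph()
    g.add_node(sample, kind="sample")
    for ip in contacted_ips:
        g.add_node(ip, kind="ip")
        g.add_edge(sample, ip)
    return g

g1 = traffic_graph("sample_A", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
g2 = traffic_graph("sample_B", ["192.0.2.7", "192.0.2.8", "192.0.2.9"])

# Structural match ignoring labels: both samples contact the same number of
# hosts in the same pattern, hinting at shared infrastructure or a common family.
matcher = isomorphism.GraphMatcher(g1, g2)
print(matcher.is_isomorphic())   # True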
New Concepts for Sensing
Investigating end-to-end security in the fifth generation wireless capabilities and IoT extensions
J. Uher, J. Harper, R. G. Mennecke III, et al.
The emerging 5th generation wireless network will be architected and specified to meet the vision of allowing billions of devices and millions of human users to share spectrum to communicate and deliver services. The expansion of wireless networks from their current role to serve these diverse communities of interest introduces new paradigms that require multi-tiered approaches. The introduction of inherently low-security components, such as IoT devices, necessitates that critical data be better secured to protect the networks and users. Moreover, the high-speed communications meant to enable autonomous vehicles require ultra-reliable, low-latency paths. This research explores security within the proposed new architectures and the cross interconnection of highly protected assets with low-cost/low-security components forming the overarching 5th generation wireless infrastructure.
Learning and Social Aspects of Cyber
Hybrid sentiment analysis utilizing multiple indicators to determine temporal shifts of opinion in OSNs
Joshua S. White, Robert T. Hall, Jeremy Fields, et al.
Utilization of traditional sentiment analysis for predicting the outcome of an event on a social network depends on: precise understanding of what topics relate to the event, selective elimination of trends that don't fit, and, in most cases, expert knowledge of the major players in the event. Sentiment analysis has traditionally taken one of two approaches to derive a quantitative value from qualitative text: the "bag of words" model, and the use of natural language processing (NLP) to attempt a real understanding of the text. These methods yield very similar accuracy results, with the exception of some special use cases, but both impose a large computational burden on the analytic system. Newer approaches share this same problem. No matter what approach is used, SA typically caps out around 80% in accuracy. However, that accuracy reflects only polarity and degree of polarity, nothing else. In this paper we present a method for hybridizing traditional SA methods to better determine shifts in opinion over time within social networks. This hybridization process involves augmenting traditional SA measurements with contextual understanding and knowledge about writers' demographics. Our goal is not only to improve accuracy, but to do so with minimal impact on computation requirements.
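A minimal sketch of the hybridization idea, under assumptions that are not in the abstract: a basic lexicon (bag-of-words) polarity score is modulated by a topic-relevance factor and an author (demographic) weight, and the weighted scores are aggregated per hour to expose temporal opinion shifts. The lexicon, weights, and posts are fabricated.

# Illustrative sketch only (not the authors' method): combine a bag-of-words
# polarity score with simple contextual/demographic weights, then aggregate
# over time to track opinion shift.
from collections import defaultdict

LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}

def polarity(text: str) -> float:
    words = text.lower().split()
    return sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)

def hybrid_score(text: str, topic_relevance: float, author_weight: float) -> float:
    # Hybridization: modulate raw polarity by how on-topic the post is and by
    # a demographic/credibility weight for the author.
    return polarity(text) * topic_relevance * author_weight

posts = [  # (hour, text, topic_relevance, author_weight) -- fabricated examples
    (0, "great rollout so far", 0.9, 1.2),
    (1, "service is bad tonight", 0.8, 1.0),
    (2, "awful outage, really bad", 1.0, 0.7),
]

by_hour = defaultdict(list)
for hour, text, rel, wt in posts:
    by_hour[hour].append(hybrid_score(text, rel, wt))

for hour in sorted(by_hour):
    print(hour, round(sum(by_hour[hour]) / len(by_hour[hour]), 3))  # temporal opinion shift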
Social relevance: toward understanding the impact of the individual in an information cascade
Robert T. Hall, Joshua S. White, Jeremy Fields
Information Cascades (IC) through a social network occur due to the decisions of users to disseminate content. We define this decision process as User Diffusion (UD). IC models typically describe an information cascade by treating a user as a node within a social graph, where a node’s reception of an idea is represented by some activation state. The probability of activation then becomes a function of a node’s connectedness to other activated nodes as well as, potentially, the history of activation attempts. We enrich this Coarse-Grained User Diffusion (CGUD) model by applying actor type logics to the nodes of the graph. The resulting Fine-Grained User Diffusion (FGUD) model utilizes prior research in actor typing to generate a predictive model regarding the future influence a user will have on an Information Cascade. Furthermore, we introduce a measure of Information Resonance that is used to aid in predictions regarding user behavior.
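For context, here is a toy sketch of the coarse-grained activation dynamic described above (not the FGUD model itself): each unactivated node's activation probability rises with the number of its already-activated neighbors. The per-neighbor probability, seed set, and example graph are assumptions chosen only for illustration; networkx is assumed available.

# Toy coarse-grained cascade: activation probability as a function of the
# number of activated neighbors.  Parameters and graph are fabricated.
import random
import networkx as nx

def cascade(g: nx.Graph, seeds: set[int], p_per_neighbor: float = 0.2, rounds: int = 5) -> set[int]:
    active = set(seeds)
    for _ in range(rounds):
        newly = set()
        for node in g.nodes():
            if node in active:
                continue
            k = sum(1 for nbr in g.neighbors(node) if nbr in active)
            p = 1.0 - (1.0 - p_per_neighbor) ** k  # more active neighbors -> higher chance
            if random.random() < p:
                newly.add(node)
        if not newly:
            break
        active |= newly
    return active

random.seed(0)
g = nx.karate_club_graph()
print(sorted(cascade(g, seeds={0, 33})))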
Application of actor level social characteristic indicator selection for the precursory detection of bullies in online social networks
Holly M. White, Jeremy Fields, Robert T. Hall, et al.
Bullying is a national problem for families, courts, schools, and the economy. The social, educational, and professional lives of victims are affected. Early detection of bullies mitigates the destructive effects of bullying. Our previous research found that, given specific characteristics of an actor, actor logics can be developed using input from natural language processing and graph analysis. Given similar characteristics of cyberbullies, in this paper we create specific actor logics and apply them to a selected social media dataset for the purpose of rapid identification of cyberbullying.
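As a loose illustration only (not the paper's actor logics), the toy rule below combines a text-derived aggression score with a graph-style fan-out count of how many distinct users an account targets; accounts exceeding both thresholds are flagged as candidates for review. The word list, thresholds, and messages are all fabricated assumptions.

# Toy actor-logic-style rule: flag accounts with high average aggression that
# target multiple distinct users.  Purely illustrative.
AGGRESSIVE_TERMS = {"loser", "stupid", "ugly"}

def aggression(text: str) -> float:
    words = text.lower().split()
    return sum(w.strip(".,!?") in AGGRESSIVE_TERMS for w in words) / max(len(words), 1)

def flag_bullies(messages, min_aggression=0.15, min_targets=2):
    """messages: iterable of (sender, recipient, text) tuples."""
    stats = {}
    for sender, recipient, text in messages:
        s = stats.setdefault(sender, {"targets": set(), "scores": []})
        s["targets"].add(recipient)
        s["scores"].append(aggression(text))
    return [u for u, s in stats.items()
            if len(s["targets"]) >= min_targets
            and sum(s["scores"]) / len(s["scores"]) >= min_aggression]

msgs = [
    ("u1", "u2", "you are such a loser"),
    ("u1", "u3", "stupid take, delete this"),
    ("u4", "u2", "great game last night"),
]
print(flag_bullies(msgs))   # ['u1']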
Self-structuring data learning approach
Igor Ternovskiy, James Graham, Daniel Carson
In this paper, we propose a hierarchical self-structuring learning algorithm based around the general principles of the Stanovich/Evans framework and the “Quest” group definition of unexpected query. One of the main goals of our algorithm is for it to be capable of learning patterns and extrapolating more complex patterns from less complex ones. This pattern learning, influenced by goals that are either learned or predetermined, should be able to detect and reconcile anomalous behaviors. One example of a proposed application of this algorithm would be traffic analysis. We chose this example because it is conceptually easy to follow. Despite the fact that we are unlikely to develop superior traffic tracking techniques using our algorithm, a traffic-based scenario remains a good starting point, if only due to the easy availability of data and the number of other known techniques. In any case, in this scenario, the algorithm would observe and track all vehicular traffic in a particular area. After some initial time passes, it would begin detecting and learning the traffic’s patterns. Eventually the patterns would stabilize. At that point, “new” patterns could be considered anomalies, flagged, and handled accordingly. This is only one particular application of our proposed algorithm. Ideally, we want to make it as general as possible, such that it can be applied to numerous different problems with varying types of sensory input and data types, such as IR, RF, visual, census data, metadata, etc.
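The traffic example can be sketched in a few lines, under assumptions that go beyond the abstract: treat each observation as an (hour, route) pair, learn which pairs recur often enough during an initial window to count as "normal", and flag later observations that fall outside that learned set. The min_count threshold and sample data are fabricated; the real algorithm is hierarchical and goal-driven rather than a simple frequency count.

# Minimal sketch of the traffic scenario: learn recurring patterns from an
# initial window, then flag observations that were never (or rarely) seen.
from collections import Counter

def learn_patterns(observations, min_count=3):
    """observations: iterable of (hour, route) tuples from the training window."""
    counts = Counter(observations)
    return {pattern for pattern, c in counts.items() if c >= min_count}

def is_anomaly(observation, normal_patterns):
    return observation not in normal_patterns

training = [(8, "A->B")] * 5 + [(9, "B->C")] * 4 + [(3, "A->C")]  # one rare 3am trip
normal = learn_patterns(training)

print(is_anomaly((8, "A->B"), normal))   # False: an established pattern
print(is_anomaly((3, "A->C"), normal))   # True: seen too rarely to be "normal"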
Visualizing output for a data learning algorithm
Daniel Carson, James Graham, Igor Ternovskiy
This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from running the algorithm on a few of the traffic-based scenarios we designed.
Implementing a self-structuring data learning algorithm
James Graham, Daniel Carson, Igor Ternovskiy
In this paper, we elaborate on how we implemented our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create it. We elaborate on our initial setup of the algorithm and the scenarios we used to test our early-stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion is geared toward what we included in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general set of problems, but to do so within a reasonable time, we needed to pick a place to start.
Behavior-based network management: a unique model-based approach to implementing cyber superiority
Jocelyn M. Seng
Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance, and execute their tasks, BBNM is designed to elevate awareness and improve the ability to understand the degree of reliance placed upon a digital capability and the associated operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, allocation to support mission sustainment, and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of “good” behavior. A proof-of-concept assessment called “Lab Rat” was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real-world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach toward achieving cyber superiority in today's volatile, uncertain, complex, and ambiguous (VUCA) environment.