    • A Graph theoretical approach to Network Vulnerability Analysis and Countermeasures

      Hamid, Thaier K.A.; Maple, Carsten (Foundation of Computer Science, New York, USA, 2011)
      Computer networks are inherently vulnerable as long as they deliver services across different machines. An attack graph is a security model that represents the chains of vulnerability exploits in a network and displays the ways an attacker can compromise a network or host. A number of researchers have acknowledged the visual complexity of attack graphs, the large amount of source data that must be assembled to build an attack graph accurately, the difficulty of scaling to large, enterprise-size networks with tens of thousands of hosts, and the lack of comprehensive understanding they offer. Information on vulnerabilities is available in public vulnerability databases, such as the National Vulnerability Database and Nessus, but current attack graph tools are restricted to a limited set of attributes. Automated assembly of vulnerability information has been troublesome, so vulnerability descriptions have been created by hand or based on limited information. Many vulnerabilities have still not been discovered, and many others remain without patches or solutions. Our approach develops a cost metric that exploits a Markov model, combining well-known vulnerability metrics (the Common Vulnerability Scoring System, CVSS) and Risk Assessment Values (RAV), and using ranking algorithms (similar to V. Mehta et al., 2006 and Kijsanayothin, 2010) applied to hosts rather than to individual vulnerabilities. For each host we have developed a cost-rank Markov model that reduces the complexity of the attack graph, represents the network topology, and mitigates the visibility problem.
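      The host-ranking idea can be illustrated with a small sketch: hosts form the states of a Markov chain whose transition probabilities are biased by CVSS-derived severity costs, and a PageRank-style power iteration yields a cost rank per host. The host names, scores, damping factor, and weighting scheme below are illustrative assumptions, not the authors' exact model.

```python
# Illustrative cost-rank sketch: power iteration over a host graph whose
# transitions are biased by (assumed) CVSS-derived severity costs.
# Host names, scores, and the damping factor are hypothetical.

def cost_rank(edges, cvss, damping=0.85, iters=100):
    """edges: {host: [hosts reachable by an exploit]},
    cvss: {host: severity cost}. Returns a rank per host."""
    hosts = sorted(edges)
    rank = {h: 1.0 / len(hosts) for h in hosts}
    for _ in range(iters):
        new = {}
        for h in hosts:
            # Mass flowing into h from every host that can attack it,
            # weighted by h's CVSS cost relative to its sibling targets.
            inflow = 0.0
            for src, targets in edges.items():
                if h in targets:
                    total = sum(cvss[t] for t in targets)
                    inflow += rank[src] * cvss[h] / total
            new[h] = (1 - damping) / len(hosts) + damping * inflow
        rank = new
    return rank

edges = {"web": ["db", "app"], "app": ["db"], "db": ["web"]}
cvss = {"web": 5.0, "app": 7.5, "db": 9.8}
ranks = cost_rank(edges, cvss)
```

      Because each host distributes its full rank mass across its targets, the ranks stay normalized; the host with the highest combined reachability and severity (here the database) accumulates the largest cost rank.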
    • Methodologies to develop quantitative risk evaluation metrics

      Hamid, Thaier K.A.; Maple, Carsten; Sant, Paul (FCS - Foundation of Computer Science, USA, 2012)
      The goal of this work is to advance a new methodology for measuring a severity cost for each host using the Common Vulnerability Scoring System (CVSS), combining related sub-scores from the base, temporal and environmental metrics to produce a single severity cost by modelling the problem's parameters in a mathematical framework. We build our own CVSS calculator using our equations to simplify the calculation of vulnerability scores and to benchmark against other models. We design and develop a new approach to represent the cost assigned to each host by dividing the vulnerability scores into two main levels of privilege, user and root, and we classify these levels into operational levels to identify and calculate the severity cost of multi-step vulnerabilities. Finally, we implement our framework on a simple network, using the Nessus scanner as a tool to discover known vulnerabilities and using the results to build and represent our cost-centric attack graph.
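      The base-metric ingredient of such a severity cost can be sketched with the public CVSS v2 base-score equations from FIRST; the paper's own sub-score combination and per-privilege aggregation are not reproduced here.

```python
# Sketch of a CVSS v2 base-score calculator using the public first.org
# v2 equations. Metric values are the standard v2 lookup tables; the
# paper's own cost-aggregation equations are not reproduced.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}    # Conf./Integ./Avail. impact

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)
```

      For example, the vector AV:N/AC:L/Au:N/C:C/I:C/A:C scores 10.0 and AV:N/AC:L/Au:N/C:P/I:P/A:P scores 7.5, matching the NVD's published v2 scores for those vectors.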
    • A parallel algorithm to calculate the costrank of a network

      Hamid, Thaier K.A.; Maple, Carsten; Yue, Yong (Foundation of Computer Science, New York, USA, 2012)
      We developed parallel algorithms to implement CostRank for distributed-memory parallel computers with multiple processors. Our intent is to make CostRank calculations for a growing number of hosts in a fast and scalable way, and likewise to secure large-scale networks that require fast, reliable computing to rank enormous graphs with thousands of vertices (states) and millions of arcs (links). In our proposed approach we focus on a parallel CostRank computational architecture on a cluster of PCs networked via a Gigabit Ethernet LAN to evaluate the performance and scalability of our implementation. In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can improve the runtime and scalability of large-scale parallel computations. An application case study of parallel CostRank computation is presented. Applying the parallel environment model with one-dimensional sparse matrix partitioning to a modified PageRank-style computation results in a significant reduction in communication overhead and in per-iteration runtime. We provide an analytical discussion of the parallel algorithms' performance in terms of I/O and synchronization cost, as well as memory usage.
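      The one-dimensional partitioning with load balancing can be sketched as follows: rows of the sparse rank matrix are split into contiguous blocks with roughly equal nonzero counts, and each processor computes the matrix-vector product for its block. This is a sequential simulation of the per-processor work under assumed illustrative matrix data, not the authors' cluster implementation.

```python
# Sketch of 1-D row partitioning with nonzero-count load balancing,
# the kind of decomposition used for parallel rank computations.
# Matrix data are illustrative; per-processor work is simulated
# sequentially.

def partition_rows(row_nnz, nprocs):
    """Greedy split of rows into contiguous blocks of ~equal nonzeros."""
    target = sum(row_nnz) / nprocs
    blocks, start, acc = [], 0, 0
    for r, nnz in enumerate(row_nnz):
        acc += nnz
        if acc >= target and len(blocks) < nprocs - 1:
            blocks.append((start, r + 1))
            start, acc = r + 1, 0
    blocks.append((start, len(row_nnz)))
    return blocks

def spmv_block(rows, x, block):
    """y[i] = sum_j A[i][j] * x[j] for one row block; rows[i] is a
    list of (column, value) pairs (CSR-like sparse storage)."""
    lo, hi = block
    return [sum(v * x[j] for j, v in rows[i]) for i in range(lo, hi)]

# A 4x4 sparse matrix split across 2 "processors"; the per-block
# results concatenate into the full matrix-vector product.
rows = [[(0, 2.0), (3, 1.0)], [(1, 3.0)], [(0, 1.0), (2, 4.0)], [(3, 5.0)]]
x = [1.0, 1.0, 1.0, 1.0]
blocks = partition_rows([len(r) for r in rows], 2)
y = [v for b in blocks for v in spmv_block(rows, x, b)]
```

      Balancing by nonzeros rather than by row count keeps each processor's per-iteration work roughly equal, which is what reduces the per-iteration runtime in the distributed setting.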