The growth and proliferation of the Internet in recent years has brought to light several problems concerning the security and operability of machines at universities and companies. Countless intrusions are carried out every year, yet the vast majority of them leave no record at all and often go entirely unnoticed by the local administrator. To address these problems, a study was carried out, presented here, whose main objective is to propose a security management philosophy. It draws on network management concepts such as SNMPv2, combined with the implementation of a set of tools that guarantee the integrity of the various systems involved. The result is a system named CUCO, which raises alerts about attack attempts and risk situations. CUCO was designed to give an administrator, whether or not protected by a firewall, greater and better control over accesses and attempted unauthorized accesses to their network. The system uses a strategy of monitoring events at different levels and in different applications, thereby attempting to detect and report the occurrence of traditional attacks. It also incorporates a block of functions aimed at identifying an attacker located somewhere on the Internet...
The quality of service framework in a heterogeneous computer network environment may provide users and applications with a wide range of security mechanisms and services. We propose a simplified user security interface and a method for mapping this interface to complex underlying security mechanisms and services. Additionally, we illustrate a mechanism for mapping multiple security policies to the same user security interface.
Source: Naval Postgraduate School; Publisher: Naval Postgraduate School
Type: Journal article
When classified data of different classifications are stored in a database, a contemporary database system must pass through other classified data to find the properly classified data. Although the user of the system may only see data classified at the user's level, the database system itself has breached security by bringing the other classified data into main memory from secondary storage. Additionally, the system is not as efficient as it could be, because unnecessary material has been retrieved; this is a problem of access precision. This thesis proposes a solution to the access-precision and pass-through problems using a database counterpart to the mathematical concept of equivalence relations. Each record of the database contains at least one security attribute (e.g., classification), and the database is divided into compartments of records; compartments are disjoint sets, where each compartment of records has the same aggregate of security attributes. A suitable database model, the Attribute-Based Data Model, is selected, and an example implementation is provided. Keywords: Database security; Multilevel security; Computer security. (Theses); http://archive.org/details/secureaccesscont00hopp; U.S. Navy (U.S.N.) author.
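The compartment idea above can be sketched in a few lines of Python, under a hypothetical encoding: records as dicts and attribute aggregates as frozensets (the thesis itself uses the Attribute-Based Data Model):

```python
from collections import defaultdict

def build_compartments(records):
    """Partition records into disjoint compartments keyed by their
    aggregate of security attributes."""
    compartments = defaultdict(list)
    for rec in records:
        compartments[frozenset(rec["security_attributes"])].append(rec)
    return compartments

def retrieve(compartments, user_attributes):
    """Fetch only compartments the user is cleared for, so other
    classified data is never brought into memory."""
    cleared = frozenset(user_attributes)
    results = []
    for key, recs in compartments.items():
        if key <= cleared:  # compartment attributes subset of clearance
            results.extend(recs)
    return results

records = [
    {"id": 1, "security_attributes": {"SECRET"}},
    {"id": 2, "security_attributes": {"UNCLASSIFIED"}},
]
comps = build_compartments(records)
# A user cleared only for UNCLASSIFIED touches exactly one compartment:
assert [r["id"] for r in retrieve(comps, {"UNCLASSIFIED"})] == [2]
```

Because compartments are disjoint, a query needs to scan only the compartments matching the user's clearance, addressing both the pass-through and access-precision problems.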
Approved for public release; distribution is unlimited; The Monterey Security Architecture (MYSEA) provides trusted security services, allowing users to access information at different sensitivity levels at the same time. The MYSEA server enforces a mandatory access control policy to ensure that users can only access data for which they are authorized. We would like to know the consequences of the MYSEA design on the performance of the MYSEA system. In particular, have the MYSEA trusted processes introduced any design bottlenecks into the system? The objective of this thesis is to analyze the performance of selected aspects of MYSEA and, when applicable, identify system performance bottlenecks. In the absence of bottlenecks, our secure-system performance study can be interpreted as characterizing the "cost of security" in a multilevel security context. We analyze the overhead associated with MYSEA by targeting and benchmarking its components and services. We deployed the netperf tool as a MYSEA service to observe the costs associated with IPSec, the MYSEA trusted proxy, and communication among servers in the MYSEA Federation. Our benchmark tests provided useful insights into the performance overhead introduced by MYSEA's design and highlighted the cost of security of selected aspects of MYSEA.; Civilian...
Nguyen, D Thuy; Gondree, Mark A.; Shifflet, David J.; Khosalim, Jean; Levin, Timothy E.; Irvine, Cynthia E.
Source: Military Communications Conference (MILCOM 2010), San Jose, CA; Publisher: Military Communications Conference (MILCOM 2010), San Jose, CA
Type: Journal article
The Monterey Security Architecture addresses the need to share high-value data across multiple domains of different classification levels while enforcing information flow
policies. The architecture allows users with different security authorizations to securely collaborate and exchange information using commodity computers and familiar commercial client software that generally lack the prerequisite assurance and functional security protections. MYSEA seeks to meet two compelling requirements, often assumed to be at odds: enforcing critical, mandatory security policies, and allowing access and collaboration in a familiar work environment. Recent additions to the MYSEA design expand the architecture to support a cloud of cross-domain services, hosted within
a federation of multilevel secure (MLS) MYSEA servers. The MYSEA cloud supports single sign-on, service replication, and
network-layer quality of security service. This new cross-domain, distributed architecture follows the consumption and delivery model for cloud services, while maintaining the federated control model necessary to support and protect cross-domain collaboration within the enterprise. The resulting architecture shows the feasibility of high-assurance, cross-domain services hosted within a community cloud suitable for interagency...
This dissertation presents Pidgin, a static program analysis and understanding tool that enables the specification and enforcement of precise application-specific information security guarantees. Pidgin also allows developers to interactively explore the information flows in their applications to develop policies and investigate counter-examples.
Pidgin combines program dependence graphs (PDGs), which precisely capture the information flows in a whole application, with a custom PDG query language. Queries express properties about the paths in the PDG; because paths in the PDG correspond to information flows in the application, queries can be used to specify global security policies.
The effectiveness of Pidgin depends on the precision of the static analyses used to produce program dependence graphs. In particular it depends on the precision of a points-to analysis. Points-to analysis is a foundational static analysis that estimates the memory locations pointer expressions can refer to at runtime. Points-to information is used by clients ranging from compiler optimizations to security tools like Pidgin. The precision of these client analyses relies on the precision of the points-to analysis. In this dissertation we investigate points-to analysis performance/precision trade-offs...
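As a rough illustration of the idea that PDG paths are information flows, a "no flow from source to sink" policy can be modeled as plain graph reachability; the node names and adjacency-map encoding below are hypothetical, and Pidgin's real query language is far richer:

```python
from collections import deque

def reachable(pdg, sources, sinks):
    """Return True if any PDG path (i.e., information flow) connects a
    source node to a sink node, via breadth-first search."""
    seen, frontier = set(sources), deque(sources)
    while frontier:
        node = frontier.popleft()
        if node in sinks:
            return True
        for succ in pdg.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return False

# The policy "no flow from password to log output" holds iff no path exists:
pdg = {"password": ["hash"], "hash": ["db.store"], "username": ["log.print"]}
assert not reachable(pdg, {"password"}, {"log.print"})
```

A counter-example to such a policy is simply a concrete path in the graph, which is what makes interactive exploration of violations possible.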
Wireless sensor networks (WSNs) are an emerging technology with great
potential to be employed in critical situations like battlefields and in
commercial applications such as building and traffic surveillance, habitat
monitoring, smart homes, and many more scenarios. One of the major
challenges wireless sensor networks face today is security. While the
deployment of sensor nodes in an unattended environment makes the networks
vulnerable to a variety of potential attacks, the inherent power and memory
limitations of sensor nodes make conventional security solutions
infeasible. The sensing technology, combined with processing power and
wireless communication, makes it profitable to exploit these networks in
great quantity in the future. The wireless communication technology also
introduces various types of security threats. This paper discusses a wide
variety of attacks in WSNs and their classification mechanisms, the
different security measures available to handle them, and the challenges
faced.; Comment: 9 pages, IEEE format, International Journal of Computer
Science and Information Security, IJCSIS 2009, ISSN 1947-5500, Impact
Factor 0.423
Modern-day computer security relies heavily on cryptography as a means to
protect the data that we have become increasingly reliant on. A major line
of research in the computer security domain is how to enhance the speed of
the RSA algorithm. The computing capability of the Graphics Processing
Unit, as a co-processor of the CPU, can leverage massive parallelism. This
paper presents a novel algorithm for calculating modulo values that can
process large powers of numbers which are otherwise not supported by
built-in data types. First, the traditional algorithm is studied. Secondly,
the parallelized RSA algorithm is designed using the CUDA framework.
Thirdly, the designed algorithm is realized for small prime numbers and
large prime numbers. As a result, the main fundamental problems of the RSA
algorithm, such as speed and the use of poor or small prime numbers that
have led to significant security holes despite the RSA algorithm's
mathematical soundness, can be alleviated by this algorithm.; Comment: 14
pages, Journal paper
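The abstract does not give the paper's algorithm itself, but the standard building block for computing large powers modulo n is square-and-multiply modular exponentiation, which keeps every intermediate value below the modulus. A minimal sketch, with an insecure toy RSA round-trip for illustration (requires Python 3.8+ for the modular inverse via `pow`):

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply: computes base**exp % mod
    without ever forming the full power, so huge exponents stay cheap."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # current bit set: multiply it in
            result = (result * base) % mod
        base = (base * base) % mod      # square for the next bit
        exp >>= 1
    return result

# Toy RSA round-trip with small primes (insecure, illustration only):
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                     # private exponent (Python 3.8+)
cipher = modexp(42, e, n)
assert modexp(cipher, d, n) == 42
```

Each loop iteration handles one bit of the exponent, so the cost is O(log exp) multiplications, and the independent multiplications across many messages are exactly the kind of work a GPU can parallelize.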
We demonstrate, by a number of examples, that information-flow security
properties can be proved from abstract architectural descriptions, that
describe only the causal structure of a system and local properties of trusted
components. We specify these architectural descriptions of systems by
generalizing intransitive noninterference policies to admit the ability to
filter information passed between communicating domains. A notion of refinement
of such system architectures is developed that supports top-down development of
architectural specifications and proofs by abstraction of information security
properties. We also show that, in a concrete setting where the causal structure
is enforced by access control, a static check of the access control setting
plus local verification of the trusted components is sufficient to prove that a
generalized intransitive noninterference policy is satisfied.
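A minimal sketch of what such a static check of an access control setting might look like, under assumed encodings: the policy as a set of permitted domain-to-domain flows, and the access control setting as per-domain read/write sets. Note this checks only direct flows through shared objects, not the full intransitive analysis developed in the paper:

```python
def flows_permitted(policy, reads, writes):
    """A flow u -> v can arise whenever u writes some object that v
    reads; check every such potential flow is permitted by the policy."""
    for u, u_writes in writes.items():
        for v, v_reads in reads.items():
            if u != v and u_writes & v_reads and (u, v) not in policy:
                return False
    return True

policy = {("low", "high")}                    # information may flow up only
reads  = {"low": {"obj_l"}, "high": {"obj_l", "obj_h"}}
writes = {"low": {"obj_l"}, "high": {"obj_h"}}
assert flows_permitted(policy, reads, writes)  # nothing high writes reaches low
```

The check is purely static: it never executes the system, only inspects the access control tables, which is what makes it cheap to combine with local verification of the trusted components.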
The IEEE 802.11b wireless Ethernet standard has several serious security
flaws. This paper describes these flaws, surveys wireless networks in the
Cologne/Bonn area to get an assessment of the security configurations of
fielded networks, and analyzes the legal protections provided to wireless
Ethernet operators by German law. We conclude that wireless Ethernets
without additional security measures are not usable for any transmissions
which are not meant for a public audience.
Deployment of distributed systems sets high requirements for procedures for
the security testing of these systems. This work introduces: (1) a list of
typical threats based on standards and actual practices; (2) an extended
six-layered model for test generation on the basis of technical
specifications and end-user requirements. Based on the list of typical
threats and the multilayer model, we describe a formal approach to the
automated design and generation of security-mechanism checklists for
complex distributed systems.; Comment: 7 pages, 6 figures, 3 tables,
Published with International Journal of Computer Trends and Technology
(IJCTT). arXiv admin note: text overlap with
In this paper we focus on one critical issue in mobile ad hoc networks,
namely multicast routing, and propose a mesh-based, on-demand multicast
routing protocol for ad hoc networks with QoS (quality of service) support.
We then present a model used to create a local recovery mechanism that
joins nodes to multi-sectional groups in minimal time, and we present a
method for security in this protocol.; Comment: 5 pages, IEEE format,
International Journal of Computer Science and Information Security, IJCSIS
2009, ISSN 1947-5500, Impact Factor 0.423
This paper presents a demo of our Security Toolbox to detect novel malware in
Android apps. This Toolbox is developed through our recent research project
funded by the DARPA Automated Program Analysis for Cybersecurity (APAC)
project. The adversarial challenge ("Red") teams in the DARPA APAC program are
tasked with designing sophisticated malware to test the bounds of malware
detection technology being developed by the research and development ("Blue")
teams. Our research group, a Blue team in the DARPA APAC program, proposed a
"human-in-the-loop program analysis" approach to detect malware given the
source or Java bytecode for an Android app. Our malware detection apparatus
consists of two components: a general-purpose program analysis platform called
Atlas, and a Security Toolbox built on the Atlas platform. This paper describes
the major design goals, the Toolbox components to achieve the goals, and the
workflow for auditing Android apps. The accompanying video
(http://youtu.be/WhcoAX3HiNU) illustrates features of the Toolbox through a
live audit.; Comment: 4 pages, 1 listing, 2 figures
In recent months there has been an increase in the popularity and public
awareness of secure, cloudless file transfer systems. The aim of these services
is to facilitate the secure transfer of files in a peer-to-peer (P2P) fashion
over the Internet without the need for centralised authentication or storage.
These services can take the form of client installed applications or entirely
web browser based interfaces. Due to their P2P nature, there is generally no
limit to the file sizes involved or to the volume of data transmitted - and
where these limitations do exist they will be purely reliant on the capacities
of the systems at either end of the transfer. By default, many of these
services provide seamless, end-to-end encryption to their users. The
cybersecurity and cyberforensic consequences of the potential criminal use of
such services are significant. The ability to easily transfer encrypted data
over the Internet opens up a range of opportunities for illegal use to
cybercriminals requiring minimal technical know-how. This paper explores a
number of these services and provides an analysis of the risks they pose to
corporate and governmental security. A number of methods for the forensic
investigation of such transfers are discussed.; Comment: 15 pages; Proc. of Tenth ADFSL Conference on Digital Forensics...
This study reports on an implementation of cryptographic pairings in a
general purpose computer algebra system. For security levels equivalent to the
different AES flavours, we exhibit suitable curves in parametric families and
show that optimal ate and twisted ate pairings exist and can be efficiently
evaluated. We provide a correct description of Miller's algorithm for signed
binary expansions such as the NAF and extend a recent variant due to Boxall et
al. to addition-subtraction chains. We analyse and compare several algorithms
proposed in the literature for the final exponentiation. Finally, we give
recommendations on which curve and pairing to choose at each security level.
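The NAF (non-adjacent form) mentioned above is a signed binary expansion with digits in {-1, 0, 1} and no two adjacent nonzero digits, which shortens the loop of Miller's algorithm. A short sketch of the standard computation, not specific to this paper:

```python
def naf(n):
    """Non-adjacent form of n > 0, least significant digit first:
    digits in {-1, 0, 1} with no two adjacent nonzeros, so loops like
    Miller's algorithm perform fewer additions/subtractions."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)   # choose +1 or -1 so the next bit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

assert naf(7) == [-1, 0, 0, 1]   # 7 = -1 + 8: one subtraction, one doubling chain
```

On average the NAF has only about one third of its digits nonzero, versus one half for plain binary, which is where the savings in Miller's algorithm come from.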
The need for flexible, low-overhead virtualization is evident on many fronts
ranging from high-density cloud servers to mobile devices. During the past
decade OS-level virtualization has emerged as a new, efficient approach for
virtualization, with implementations in multiple different Unix-based systems.
Despite its popularity, there has been no systematic study of OS-level
virtualization from the point of view of security. In this report, we conduct a
comparative study of several OS-level virtualization systems, discuss their
security and identify some gaps in current solutions.; Comment: 20 pages, 2 figures
Network penetration testing identifies the exploits and vulnerabilities
that exist within a computer network infrastructure and helps confirm the
security measures. The objective of this paper is to explain the
methodology and methods behind penetration testing and to illustrate
remedies, which will provide substantial value for network security.
Penetration testing should model real-world attacks as closely as possible.
An authorized and scheduled penetration test will probably be detected by
an IDS (Intrusion Detection System). Network penetration testing is done
with either manual or automated tools. A penetration test can gather
evidence of vulnerabilities in the network. Successful testing provides
indisputable evidence of the problem as well as a starting point for
prioritizing remediation. Penetration testing focuses on high-severity
vulnerabilities, and there are no false positives.
Current network data rates have made it increasingly difficult for cyber security specialists to protect
the information stored on private systems. Greater throughput not only allows for higher
productivity, but also creates a “larger” security hole that may allow numerous malicious applications
(e.g. bots) to enter a private network. Software-based intrusion detection/prevention systems
are not fast enough for the massive amounts of traffic found on 1 Gb/s and 10 Gb/s networks to
be fully effective. Consequently, businesses accept more risk and are forced to make a conscious
trade-off between threat and performance.
A solution that can handle a much broader view of large-scale, high-speed systems will allow us
to increase maximum throughput and network productivity. This paper describes a novel method
of solving this problem by joining a pre-existing signature-based intrusion prevention system with
an anomaly-based botnet detection algorithm in a hybrid hardware/software implementation.
Our contributions include the addition of an anomaly detection engine to a pre-existing signature
detection engine in hardware. This hybrid system is capable of processing full-duplex 10
Gb/s traffic in real-time with no packet loss. The behavior-based algorithm and user interface
are customizable. This research has also led to improvements of the vendor supplied signal and
programming interface specifications which we have made readily available.
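A toy software model of the hybrid idea, with hypothetical signatures, hosts, and rate threshold; the system described above does this in hardware at line rate:

```python
from collections import Counter

SIGNATURES = {b"evil-shellcode", b"botnet-c2-beacon"}   # hypothetical
RATE_THRESHOLD = 3          # flag hosts sending more packets than this

def inspect(packets):
    """packets: list of (src_host, payload) pairs.
    Signature pass: substring match against known-bad payloads.
    Anomaly pass: per-host packet rate compared to a behavioral threshold."""
    sig_hits = [(src, p) for src, p in packets
                if any(sig in p for sig in SIGNATURES)]
    rates = Counter(src for src, _ in packets)
    anomalous = {src for src, n in rates.items() if n > RATE_THRESHOLD}
    return sig_hits, anomalous

pkts = [("10.0.0.5", b"hello")] * 5 + [("10.0.0.9", b"botnet-c2-beacon")]
hits, hosts = inspect(pkts)
assert hits == [("10.0.0.9", b"botnet-c2-beacon")]   # caught by signature
assert hosts == {"10.0.0.5"}                         # caught by behavior
```

The two passes are complementary: signatures catch known payloads regardless of volume, while the behavioral check catches unknown traffic whose rate deviates from the norm.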
Intrusion Detection Systems (IDSs) have become an integral component in the field of network security. Prior research has focused on developing efficient IDSs and correlating attacks as Attack Tracks. To enhance the network analyst's situational awareness, sequence modeling techniques like Variable Length Markov Models (VLMM) have been used to project likely future attacks. However, such projections are made assuming that the IDSs detect each and every attack action, which is not viable in reality. An IDS could miss an attack due to loss of packets or improper traffic analysis, or when an attacker evades detection by employing obfuscation techniques. Such missed detections could negatively affect the prediction model, resulting in erroneous estimations.
This thesis investigates the prediction performance of VLMM, via an error analysis, when it is used for projecting cyber attacks. The analysis is based on the impact of missed alerts representing undetected attack actions. It begins with an analytical study of a state-based Markov model, called Causal-State Splitting Reconstruction (CSSR), to contrast with the context-based VLMM. Simulation results show that VLMM and CSSR perform comparably, with VLMM being a simpler model without the need to maintain and train the state space. A thorough design of experiments studies the effects of missing IDS alerts...
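A minimal sketch of variable-length context prediction in the spirit of VLMM, with hypothetical alert names: count every context up to a maximum length over the alert sequence, then predict from the longest matching suffix of the recent history, backing off to shorter contexts when needed:

```python
from collections import defaultdict, Counter

def train(sequence, max_len=3):
    """Count next-symbol frequencies for every context up to max_len."""
    counts = defaultdict(Counter)
    for i, symbol in enumerate(sequence):
        for length in range(0, max_len + 1):
            if i - length >= 0:
                context = tuple(sequence[i - length:i])
                counts[context][symbol] += 1
    return counts

def predict(counts, history, max_len=3):
    """Back off from the longest seen suffix of history to shorter ones."""
    for length in range(min(max_len, len(history)), -1, -1):
        context = tuple(history[len(history) - length:])
        if context in counts:
            return counts[context].most_common(1)[0][0]
    return None

seq = ["scan", "login", "escalate", "scan", "login", "escalate"]
model = train(seq)
assert predict(model, ["scan", "login"]) == "escalate"
```

A missed alert shortens or corrupts the matching context, forcing the predictor to back off to less specific statistics, which is one concrete way the erroneous estimations discussed above can arise.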