Page 1 of results: 158 digital items found in 0.044 seconds

Proposta de implementação de uma arquitetura para a Internet de nova geração; An implementation proposal of a next generation internet architecture

Walter Wong
Source: Biblioteca Digital da Unicamp Publisher: Biblioteca Digital da Unicamp
Type: Master's Thesis Format: application/pdf
Published 11/07/2007 Portuguese
Search Relevance
65.69%
The original design of the Internet architecture assumed a fixed, trusted network. Today, the Internet has become dynamic and vulnerable to security attacks. Neither the need to integrate heterogeneous technologies nor wireless environments was foreseen. The current architecture presents a series of technical barriers to providing these services, one of the largest being the semantic overload of the Internet Protocol (IP). The IP address acts as a locator at the network layer and as an identifier at the transport layer, precluding new functionality such as mobility and opening security holes. This work presents an implementation proposal for a next-generation Internet architecture that provisions new services naturally and in an integrated manner with the current Internet. The proposed implementation architecture supports mobility, multihoming, security, integration of heterogeneous networks, and legacy applications through the introduction of a new identification layer into the current architecture. This new layer aims to separate identity from location and to become a communication option for heterogeneous networks. Additional mechanisms are proposed to support the architecture's functionality...
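The locator/identifier split described above can be sketched minimally as follows. This is an illustrative sketch, not code from the dissertation; the class and method names are hypothetical.

```python
# Illustrative sketch of an identification layer separating a host's stable
# identity from its current network locator. All names are hypothetical.
import hashlib

class IdentityLayer:
    """Maps stable host identifiers to current locators (e.g., IP addresses)."""
    def __init__(self):
        self._locators = {}

    @staticmethod
    def host_id(public_key: bytes) -> str:
        # A self-certifying identifier: hash of the host's public key.
        return hashlib.sha256(public_key).hexdigest()[:16]

    def register(self, hid: str, locator: str) -> None:
        self._locators[hid] = locator

    def move(self, hid: str, new_locator: str) -> None:
        # Mobility: the locator changes; the identity (and any transport
        # session bound to it) stays the same.
        self._locators[hid] = new_locator

    def resolve(self, hid: str) -> str:
        return self._locators[hid]

layer = IdentityLayer()
hid = IdentityLayer.host_id(b"example public key")
layer.register(hid, "143.106.1.1")   # host attaches via a wired network
layer.move(hid, "10.0.0.7")          # host roams to a wireless network
print(layer.resolve(hid))            # sessions keyed on hid survive the move
```

Because upper layers bind to `hid` rather than to an address, a change of attachment point is invisible to them, which is the essence of the mobility and multihoming support the abstract describes.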

Decoupling congestion control and bandwidth allocation policy with application to high bandwidth-delay product networks

Katabi, Dina, 1971-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 129 p.; 9557907 bytes; 9557665 bytes; application/pdf; application/pdf
English
Search Relevance
75.52%
In this dissertation, we propose a new architecture for Internet congestion control that decouples the control of congestion from the bandwidth allocation policy. We show that the new protocol, called XCP, enables very large per-flow throughput (e.g., more than 1 Gb/s), which is unachievable using current congestion control. Additionally, we show via extensive simulations that XCP significantly improves overall performance, reducing the drop rate by three orders of magnitude, increasing utilization, decreasing queuing delay, and attaining fairness within a few RTTs. Using tools from control theory, we model XCP and demonstrate that, in steady state, it is stable for any capacity, delay, and number of sources. XCP maintains no per-flow state in routers and requires only a few CPU cycles per packet, making it implementable in high-speed routers. Its flexible architecture facilitates the design and implementation of quality-of-service schemes, such as guaranteed and proportional bandwidth allocations. Finally, XCP is amenable to gradual deployment.; by Dina Katabi.; Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.; Includes bibliographical references (p. 124-129).
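The decoupling XCP performs can be illustrated with its efficiency controller, which each router runs once per control interval: it computes an aggregate feedback value from spare bandwidth and the persistent queue (the constants α = 0.4, β = 0.226 come from the published stability analysis), then a separate fairness controller divides that feedback among flows. A minimal sketch of the efficiency side, not the actual router implementation:

```python
# Sketch of XCP's efficiency controller: aggregate feedback per control
# interval, phi = alpha * d * S - beta * Q, computed from link-level
# quantities only (no per-flow state). Units: bytes and bytes/second.
ALPHA, BETA = 0.4, 0.226  # constants from the XCP stability analysis

def aggregate_feedback(capacity_Bps: float, input_rate_Bps: float,
                       avg_rtt_s: float, queue_bytes: float) -> float:
    """Desired change in aggregate traffic (bytes per control interval)."""
    spare_Bps = capacity_Bps - input_rate_Bps   # S: spare bandwidth
    return ALPHA * avg_rtt_s * spare_Bps - BETA * queue_bytes

# Underutilized link: positive feedback tells senders to speed up.
print(aggregate_feedback(1.25e8, 1.0e8, 0.1, 0.0) > 0)
# Fully loaded link with a standing queue: negative feedback drains it.
print(aggregate_feedback(1.25e8, 1.25e8, 0.1, 5e5) < 0)
```

Splitting policy (how feedback is shared among flows) from control (how much total feedback to issue) is precisely what lets the same control law support guaranteed or proportional allocations.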

Performance, scalability, and flexibility in the RAW network router

DeGangi, Anthony M
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 46 p.; 2223111 bytes; 2223448 bytes; application/pdf; application/pdf
English
Search Relevance
75.48%
Conventional high speed Internet routers are built using custom designed microprocessors, dubbed network processors, to efficiently handle the task of packet routing. While capable of meeting the performance demanded of them, these custom network processors generally lack the flexibility to incorporate new features and do not scale well beyond that for which they were designed. Furthermore, they tend to suffer from long and costly development cycles, since each new generation must be redesigned to support new features and fabricated anew in hardware. This thesis presents a new design for a network processor, one implemented entirely in software, on a tiled, general purpose microprocessor. The network processor is implemented on the Raw microprocessor, a general purpose microchip developed by the Computer Architecture Group at MIT. The Raw chip consists of sixteen identical processing tiles arranged in a four by four matrix and connected by four inter-tile communication networks; the Raw chip is designed to be able to scale up merely by adding more tiles to the matrix. By taking advantage of the parallelism inherent in the task of packet forwarding on this inherently parallel microprocessor, the Raw network processor is able to achieve performance that matches or exceeds that of commercially available custom designed network processors. At the same time...
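One way to picture how packet forwarding exploits the 4x4 tile matrix is flow-based dispatch: hashing each packet's flow tuple to a tile spreads load across tiles while preserving per-flow packet ordering. This is a hypothetical sketch of the dispatch idea, not the thesis's actual tile mapping:

```python
# Hypothetical sketch: dispatch packets to one of 16 tiles (a 4x4 Raw
# matrix) by hashing the flow 5-tuple, so parallelism scales with tiles
# while packets of the same flow stay in order on one tile.
TILES = 16  # 4x4 matrix; scaling up means only raising this number

def tile_for(flow):
    """Pick a tile for a (src, dst, sport, dport, proto) flow tuple."""
    return hash(flow) % TILES

packets = [("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")] * 3 + \
          [("10.0.0.3", "10.0.0.4", 5678, 443, "tcp")] * 2
assignments = [tile_for(p) for p in packets]
# All packets of a flow land on the same tile, preserving intra-flow order.
print(len(set(assignments[:3])) == 1 and len(set(assignments[3:])) == 1)
```

The scaling claim in the abstract maps onto this sketch directly: adding tiles only changes the modulus, not the forwarding logic.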

CAPRI : a common architecture for distributed probabilistic Internet fault diagnosis; Common architecture for distributed probabilistic Internet fault diagnosis

Lee, George J. (George Janbing), 1979-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 222 p.
English
Search Relevance
75.65%
This thesis presents a new approach to root cause localization and fault diagnosis in the Internet based on a Common Architecture for Probabilistic Reasoning in the Internet (CAPRI) in which distributed, heterogeneous diagnostic agents efficiently conduct diagnostic tests and communicate observations, beliefs, and knowledge to probabilistically infer the cause of network failures. Unlike previous systems that can only diagnose a limited set of network component failures using a limited set of diagnostic tests, CAPRI provides a common, extensible architecture for distributed diagnosis that allows experts to improve the system by adding new diagnostic tests and new dependency knowledge. To support distributed diagnosis using new tests and knowledge, CAPRI must overcome several challenges including the extensible representation and communication of diagnostic information, the description of diagnostic agent capabilities, and efficient distributed inference. Furthermore, the architecture must scale to support diagnosis of a large number of failures using many diagnostic agents.; (cont.) To address these challenges, this thesis presents a probabilistic approach to diagnosis based on an extensible, distributed component ontology to support the definition of new classes of components and diagnostic tests; a service description language for describing new diagnostic capabilities in terms of their inputs and outputs; and a message processing procedure for dynamically incorporating new information from other agents...
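The kind of probabilistic inference CAPRI-style agents perform can be sketched with a toy Bayesian root-cause ranking: combine prior failure rates with the likelihood of an observed diagnostic test result to rank candidate causes. The components, priors, and likelihoods below are invented for illustration and are not from the thesis:

```python
# Toy sketch of probabilistic fault diagnosis: rank candidate root causes
# of an observed symptom via Bayes' rule. All numbers are illustrative.
priors = {"dns_failure": 0.02, "link_failure": 0.01, "server_down": 0.005}
# P(observation "ping to server fails" | candidate cause):
likelihood = {"dns_failure": 0.1, "link_failure": 0.95, "server_down": 0.99}

def posterior(priors, likelihood):
    """Posterior over causes given one observation (normalized product)."""
    unnorm = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

post = posterior(priors, likelihood)
best = max(post, key=post.get)
print(best)  # link_failure ranks highest with these numbers
```

CAPRI's contribution is making the pieces of such a computation extensible and distributed: new component classes, new tests, and new dependency knowledge can be contributed by different agents, whereas this sketch hard-codes one agent's view.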

Agent organization in the knowledge plane; Agent organization in the KP

Li, Ji, 1975-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 191 p.
English
Search Relevance
65.6%
In designing and building a network like the Internet, we continue to face the problems of scale and distribution. With the dramatic expansion in scale and heterogeneity of the Internet, network management has become an increasingly difficult task. Furthermore, network applications often need to maintain efficient organization among the participants by collecting information from the underlying networks. Such individual information collection activities lead to duplicate efforts and contention for network resources. The Knowledge Plane (KP) is a new common construct that provides knowledge and expertise to meet the functional, policy and scaling requirements of network management, as well as to create synergy and exploit commonality among many network applications. To achieve these goals, we face many challenging problems, including widely distributed data collection, efficient processing of that data, wide availability of the expertise, etc. In this thesis, to provide better support for network management and large-scale network applications, I propose a knowledge plane architecture that consists of a network knowledge plane (NetKP) at the network layer, and on top of it, multiple specialized KPs (spec-KPs). The NetKP organizes agents to provide valuable knowledge and facilities about the Internet to the spec-KPs. Each spec-KP is specialized in its own area of interest. In both the NetKP and the spec-KPs...

NPSNET : integration of distributed interactive simulation (DIS) protocol for communication architecture and information interchange

Zeswitz, Steven Randall.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral Thesis
English
Search Relevance
55.49%
Approved for public release; distribution is unlimited.; The Computer Science Department at the Naval Postgraduate School in Monterey, California has developed a low-cost, real-time, interactive, network-based simulation system, known as NPSNET, that uses Silicon Graphics workstations. NPSNET has used non-standard protocols, which constrains its participation in distributed simulation. DIS specifies standard protocols and is emerging as the international standard for distributed simulation. This research focused on the development of a robust, high-performance implementation of the DIS Version 2.0.3 protocol to support graphic simulation systems (e.g., NPSNET). The challenge was to comply with the standard while minimizing network latency, thereby maintaining the time and space coherence of distributed simulations. The resulting DIS Network Library consists of an application program interface (API) to low-level network routines, a host of network utilities, and a network harness that takes advantage of multiprocessor workstations. The library was successfully tested on our local network and on two configurations of a T-1 based internet, the Defense Simulation Internet (DSI), with the Air Force Institute of Technology and the Advanced Research Projects Agency. The testing confirmed that the semantics and syntax of the DIS protocol are properly implemented and that the latency incurred by the network does not adversely affect the simulation application.
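The low-level marshalling a DIS library like this must wrap can be sketched by packing the 12-byte PDU header (protocol version, exercise ID, PDU type, protocol family, timestamp, length, padding, following the IEEE 1278.1 layout). The field values are arbitrary examples, and this is not the DIS Network Library's actual API:

```python
# Sketch of packing a DIS PDU header in network byte order.
# Layout per IEEE 1278.1: version, exercise ID, PDU type, protocol
# family (1 byte each), timestamp (4 bytes), length (2), padding (2).
import struct

DIS_HEADER = ">BBBBIHH"  # big-endian (network order), 12 bytes total

def pack_pdu_header(version, exercise_id, pdu_type, family,
                    timestamp, length):
    return struct.pack(DIS_HEADER, version, exercise_id, pdu_type,
                       family, timestamp, length, 0)

hdr = pack_pdu_header(version=2, exercise_id=1, pdu_type=1,  # Entity State
                      family=1, timestamp=0, length=144)
print(len(hdr))  # 12
```

Minimizing per-PDU packing/unpacking cost like this is where the latency budget the abstract mentions is won or lost.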

A design comparison between IPv4 and IPv6 in the context of MYSEA, and implementation of an IPv6 MYSEA prototype

O'Neal, Matthew R.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral Thesis Format: xiv, 69 p. : ill.
English
Search Relevance
55.52%
Approved for public release, distribution is unlimited; Internet Protocol version six (IPv6), the next-generation Internet Protocol, is only sparsely deployed in today's world. However, as it gains popularity, it will grow into a vital part of the Internet and of communications technology in general. Many large organizations, including the Department of Defense, are working toward deploying IPv6 in many varied applications. This thesis focuses on the design and implementation issues that accompany a migration from Internet Protocol version four (IPv4) to IPv6 in the Monterey Security Enhanced Architecture (MYSEA). The research for this thesis consists of two major parts: a functional comparison between the IPv6 and IPv4 designs, and a prototype implementation of MYSEA with IPv6. The current MYSEA prototype relies on a subset of Network Address Translation (NAT) functionality to support the network's operation, and because IPv6 has no native support for NAT, this work also requires the creation of a similar mechanism for IPv6. This thesis provides a preliminary examination of IPv6 in MYSEA, a necessary step in determining whether the new protocol will assist with or detract from the enforcement of MYSEA policies.; Ensign...

An evaluation of best effort traffic management of server and agent based active network management (SAAM) architecture

Ayvat, Birol
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral Thesis Format: xii, 91 p. : ill. (some col.) ;
English
Search Relevance
55.49%
The Server and Agent-based Active Network Management (SAAM) architecture was initially designed for the next-generation Internet, where increasingly sophisticated applications will require QoS guarantees. Although such QoS traffic is growing in volume, Best Effort traffic, which does not require QoS guarantees, needs to be supported for the foreseeable future. Thus, SAAM must handle Best Effort traffic as well as QoS traffic. A Best Effort traffic management algorithm was recently developed for SAAM to take advantage of the abilities of the SAAM server. However, this algorithm has not been evaluated quantitatively. This thesis conducts experiments to compare the performance of the Best Effort traffic management scheme of the SAAM architecture against the well-known MPLS Adaptive Traffic Engineering (MATE) algorithm. A couple of realistic network topologies were used. The results show that while SAAM may not perform as well as MATE with a fixed set of paths, using SAAM's dynamic path deployment functionality allows the load to be distributed across more parts of the network, thus achieving better performance than MATE. Much of the effort was spent on implementing the MATE algorithm in SAAM. Some modifications were also made to the SAAM code, based on the experimental results, to increase the performance of SAAM's Best Effort solution.; Turkish Navy author.

A best effort traffic management solution for server and agent-based active network management (SAAM)

Wofford, Corey D.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral Thesis Format: xviii, 143 p. : ill. (some col.) ;
English
Search Relevance
65.52%
Approved for public release, distribution unlimited; Server and Agent-based Active Network Management (SAAM) is a promising network management solution for the Internet of tomorrow, the "Next Generation Internet (NGI)." SAAM is a new network architecture that incorporates many of the latest features of Internet technologies. The primary purpose of SAAM is managing network quality of service (QoS) to support resource-intensive next-generation Internet applications. Best effort (BE) traffic will continue to exist in the era of the NGI; thus SAAM must be able to manage such traffic. In this thesis, we propose a solution for the management of BE traffic within SAAM. With SAAM, it is possible to make a "better best effort" in routing BE packets. Currently, routers handle BE traffic based solely on local information or on information obtained by link-state flooding, which may not be reliable. In contrast, SAAM centralizes management at a server, where better decisions can be made. SAAM's servers have access to accurate topology and timely traffic-condition information. Additionally, due to their placement on high-end routers or dedicated machines, the servers can better afford computationally intensive routing solutions. It is these characteristics that are exploited by the solution designed and implemented in this thesis.; Lieutenant...
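The "server with global knowledge makes better routing decisions" idea can be sketched as a centralized least-cost path computation over the full topology, with link costs reflecting measured load. The topology and costs below are invented for illustration; SAAM's actual path selection is more involved:

```python
# Sketch of centralized best-effort routing: a server holding the whole
# topology (with load-aware link costs) runs Dijkstra per request,
# rather than each router relying on possibly stale flooded state.
import heapq

def least_cost_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: cost}}; returns the node list."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

topology = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1, "D": 5},
            "C": {"A": 4, "B": 1, "D": 1}, "D": {"B": 5, "C": 1}}
print(least_cost_path(topology, "A", "D"))  # ['A', 'B', 'C', 'D']
```

Because the server sees accurate costs, it avoids the 6-hop-cost direct route A-B-D that a router with stale local state might pick.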

MIT Device Simulation WebLab : an online simulator for microelectronic devices; WeblabSim

Solis, Adrian (Adrian Orbita)
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 157 p.; 7134632 bytes; 7141207 bytes; application/pdf; application/pdf
English
Search Relevance
65.53%
In the field of microelectronics, a device simulator is an important engineering tool with tremendous educational value. With a device simulator, a student can examine the characteristics of a microelectronic device described by a particular model. This makes it easier to develop an intuition for the general behavior of that device and examine the impact of particular device parameters on device characteristics. In this thesis, we designed and implemented the MIT Device Simulation WebLab ("WeblabSim"), an online simulator for exploring the behavior of microelectronic devices. WeblabSim makes a device simulator readily available to users on the web anywhere, and at any time. Through a Java applet interface, a user connected to the Internet specifies and submits a simulation to the system. A program performs the simulation on a computer that can be located anywhere else on the Internet. The results are then sent back to the user's applet for graphing and further analysis. The WeblabSim system uses a three-tier design based on the iLab Batched Experiment Architecture. It consists of a client applet that lets users configure simulations, a laboratory server that runs them, and a generic service broker that mediates between the two through SOAP-based web services. We have implemented a graphical client applet...

User authentication and remote execution across administrative domains

Kaminsky, Michael, 1976-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 77 p.; 4200828 bytes; 4208861 bytes; application/pdf; application/pdf
English
Search Relevance
75.52%
A challenge in today's Internet is providing easy collaboration across administrative boundaries. Using and sharing resources between individuals in different administrative domains should be just as easy and secure as sharing them within a single domain. This thesis presents a new authentication service and a new remote login and execution utility that address this challenge. The authentication service contributes a new design point in the space of user authentication systems. The system provides the flexibility to create cross-domain groups in the context of a global, network file system using a familiar, intuitive interface for sharing files that is similar to local access control mechanisms. The system trades off freshness for availability by pre-fetching and caching remote users and groups defined in other administrative domains, so the file server can make authorization decisions at file-access time using only local information. The system offers limited privacy for group lists and has all-or-nothing delegation to other administrative domains via nested groups. Experiments demonstrate that the authentication server scales to groups with tens of thousands of members. REX contributes a new architecture for remote execution that offers extensibility and security. To achieve extensibility ... selectively delegates authority to processes running on remote machines that need to access other resources. The delegation mechanism lets users incrementally construct trust policies for remote machines. Measurements of the system demonstrate that the modularity of REX's architecture does not come at the cost of performance.

NIRA : a new Internet routing architecture; New Internet routing architecture

Yang, Xiaowei, 1974-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 181 p.; 11513732 bytes; 11536499 bytes; application/pdf; application/pdf
English
Search Relevance
75.66%
The present Internet routing system faces two challenging problems. First, unlike in the telephone system, Internet users cannot choose their wide-area Internet service providers (ISPs) separately from their local access providers. With the introduction of new technologies such as broadband residential service and fiber-to-the-home, the local ISP market is often a monopoly or a duopoly. The lack of user choice is likely to reduce competition among wide-area ISPs, limiting the incentives for wide-area ISPs to improve quality of service, reduce price, and offer new services. Second, the present routing system fails to scale effectively in the presence of real-world requirements such as multi-homing for robust and redundant Internet access. A multi-homed site increases the amount of routing state maintained globally by the Internet routing system. As the demand for multi-homing continues to rise ... mechanism, a user only needs to know a small region of the Internet in order to select a route to reach a destination. In addition, a novel route representation and packet forwarding scheme is designed such that a source and a destination address can uniquely represent a sequence of providers a packet traverses. Network measurement, simulation, and analytic modeling are used in combination to evaluate the design of NIRA. The evaluation suggests that NIRA is scalable.
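The route-representation idea, where a source and a destination address together encode the provider sequence a packet traverses, can be sketched with hypothetical provider-labeled hierarchical addresses (this is not NIRA's actual address format):

```python
# Illustrative sketch, hypothetical labels: a hierarchically allocated
# source address encodes the "up" provider chain from the host toward
# the core, and the destination address encodes the "down" chain, so the
# address pair uniquely identifies a provider-level route.
def route_from_addresses(src_addr, dst_addr):
    """Each address is the provider chain from the core down to the host."""
    up = list(reversed(src_addr[:-1]))   # climb from the host to the core
    down = list(dst_addr[:-1])           # then descend to the destination
    return up + down

src = ["CoreISP1", "RegionalA", "hostX"]   # CoreISP1 -> RegionalA -> hostX
dst = ["CoreISP2", "RegionalB", "hostY"]
print(route_from_addresses(src, dst))
# ['RegionalA', 'CoreISP1', 'CoreISP2', 'RegionalB']
```

In this scheme, a user exercises provider choice simply by choosing which of its allocated source addresses to use, without any per-route global state.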

Moving Towards a Socially-Driven Internet Architectural Design

Sofia, Rute C.; Mendes, Paulo; Damásio, José Manuel; Henriques, Sara; Giglietto, Fabio; Giambitto, Erica; Bogliolo, Alessandro
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 05/08/2015 English
Search Relevance
65.48%
This paper provides an interdisciplinary perspective on the role of prosumers in future Internet design, based on the current trend of Internet user empowerment. The paper debates the prosumer role and addresses models for developing a symmetric Internet architecture and supply chain based on the integration of social-capital aspects. Its goal is to ignite discussion concerning a socially-driven Internet architectural design.

The Crypto-democracy and the Trustworthy

Gambs, Sebastien; Ranellucci, Samuel; Tapp, Alain
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 08/09/2014 English
Search Relevance
65.49%
In the current architecture of the Internet, there is a strong asymmetry in terms of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers, ...) and the individuals from whom this personal data originates. In particular, individuals have no choice but to blindly trust that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements from the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, including in particular secure multiparty computation, offer a practical solution to protect privacy while minimizing the trust assumptions. In the crypto-democracy envisioned, individuals do not have to trust a single physical entity with their personal data; rather, their data is distributed among several institutions. Together these institutions form a virtual entity called the Trustworthy, which is responsible for the storage of this data but can also compute on it (provided first that all the institutions agree on this). Finally, we also propose a realistic proof-of-concept of the Trustworthy...
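The simplest cryptographic building block behind such a distributed Trustworthy is additive secret sharing, one ingredient of secure multiparty computation: a value is split into random shares held by separate institutions, no single institution learns anything, yet sums can be computed share-wise. A minimal sketch (toy parameters, not a production scheme):

```python
# Additive secret sharing sketch: split a secret into n random shares
# modulo a large prime; any n-1 shares reveal nothing, and addition can
# be done on shares without reconstructing the inputs.
import random

MOD = 2**61 - 1  # arithmetic modulo a large (Mersenne) prime

def share(secret, n):
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % MOD)  # shares sum to the secret
    return parts

def reconstruct(parts):
    return sum(parts) % MOD

salary_a, salary_b = 51000, 63000
shares_a, shares_b = share(salary_a, 3), share(salary_b, 3)
# Each of the 3 institutions adds the two shares it holds; only the
# total is ever revealed, never either input.
sum_shares = [(a + b) % MOD for a, b in zip(shares_a, shares_b)]
print(reconstruct(sum_shares))  # 114000
```

This captures the paper's trust model in miniature: the secret is safe unless all the institutions collude, which is exactly why distributing data across several of them minimizes the trust assumption.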

Effect of Thread Level Parallelism on the Performance of Optimum Architecture for Embedded Applications

Alipour, Mehdi; Taghdisi, Hojjat
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 12/04/2012 English
Search Relevance
55.53%
With the increasing complexity of network applications and Internet traffic, network processors, a subset of embedded processors, have to process more computation-intensive tasks. By scaling down the feature size, and with the emergence of chip multiprocessors (CMPs), which are usually multithreaded processors, the performance requirements can largely be met. Because multithreaded processors are the heirs of single-threaded processors, and there is no general design flow for designing a multithreaded embedded processor, in this paper we perform a comprehensive design-space exploration for an optimal single-threaded embedded processor under limited area and power budgets. Finally, we run multiple threads on this architecture to find the maximum thread-level parallelism (TLP) attainable on the performance-per-power-and-area-optimal single-threaded architecture.; Comment: International Journal of Embedded Systems and Applications (IJESA), http://airccse.org/journal/ijesa/current2012.html

Towards a Practical Architecture for India Centric Internet of Things

Misra, Prasant; Simmhan, Yogesh; Warrior, Jay
Source: Cornell University Publisher: Cornell University
Type: Journal Article
English
Search Relevance
55.58%
An effective architecture for the Internet of Things (IoT), particularly for an emerging nation like India with limited technology penetration at the national scale, should be based on tangible technology advances in the present, practical application scenarios of social and entrepreneurial value, and ubiquitous capabilities that make the realization of IoT affordable and sustainable. Humans, data, communication and devices play key roles in the IoT ecosystem that we perceive. In a push towards this sustainable and practical IoT Architecture for India, we synthesize ten design paradigms to consider.

Caffe: Convolutional Architecture for Fast Feature Embedding

Jia, Yangqing; Shelhamer, Evan; Donahue, Jeff; Karayev, Sergey; Long, Jonathan; Girshick, Ross; Guadarrama, Sergio; Darrell, Trevor
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 20/06/2014 English
Search Relevance
65.48%
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs with CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ($\approx$ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment, from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.; Comment: Tech report for the Caffe software at http://github.com/BVLC/Caffe/

Stochastic Model Based Proxy Servers Architecture for VoD to Achieve Reduced Client Waiting Time

Nair, T. R. GopalaKrishnan; Dakshayini, M.
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 05/02/2010 English
Search Relevance
55.54%
In a video-on-demand system, the main video repository may be far away from the user and generally has limited streaming capacity. Since a high-quality video is huge, it requires high bandwidth for streaming over the Internet. To achieve a higher video hit ratio and reduced client waiting time, a distributed server architecture can be used, in which multiple local servers are placed close to clients and video contents are cached dynamically from the main server based on regional demand. As the cost of proxy servers decreases and the demand for reduced waiting time increases day by day, newer architectures and innovative schemes are being explored. In this paper we present a novel three-layer architecture comprising a main multimedia server, a tracker, and proxy servers. This architecture aims to minimize client waiting time. We also propose an efficient prefix-caching and load-sharing algorithm at the proxy server to allocate the cache according to the regional popularity of each video. The simulation results demonstrate that it achieves significantly lower client waiting time when compared to the other existing algorithms.; Comment: International Journal of Computer Science Issues, IJCSI, Vol. 7, Issue 1...
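The popularity-driven prefix-caching idea can be sketched as a proportional cache allocation: the proxy devotes more of its cache budget to the opening seconds of regionally popular videos, so those start streaming locally with no startup delay while the suffix is fetched from the main server. The demand figures and budget below are invented for illustration; the paper's actual algorithm is more elaborate:

```python
# Sketch of popularity-proportional prefix caching at a VoD proxy:
# divide a fixed cache budget (in seconds of video prefix) among videos
# in proportion to their regional request share. Numbers are illustrative.
def allocate_prefix_cache(popularity, cache_budget_s):
    """popularity: {video: request count}; returns seconds of prefix cached."""
    total = sum(popularity.values())
    return {v: cache_budget_s * p / total for v, p in popularity.items()}

regional_demand = {"movieA": 50, "movieB": 30, "movieC": 20}
alloc = allocate_prefix_cache(regional_demand, cache_budget_s=600)
print(alloc)  # {'movieA': 300.0, 'movieB': 180.0, 'movieC': 120.0}
```

A request for `movieA` is then served its first 300 seconds from the proxy immediately, hiding the main server's startup latency behind the prefix playback.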

Improving Computer-Mediated Synchronous Communication of Doctors in Rural Communities through Cloud Computing: A Case Study of Rural Hospitals in South Africa

Coleman, Alfred; Herselman, Marlien E; Coleman, Mary
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 24/11/2012 English
Search Relevance
65.55%
This paper investigated how doctors in remote rural hospitals in South Africa use a computer-mediated tool to communicate with experienced and specialist doctors for professional advice to improve their clinical practices. A case study approach was used. Ten doctors were purposively selected from ten hospitals in the North West Province. Data was collected using semi-structured, open-ended interview questions. The interviewees were asked to tell in their own words the average number of patients served per week, the processes used in consultation with other doctors, their communication practices using a computer-mediated tool, the transmission speed of the computer-mediated tool, and their satisfaction in using the computer-mediated communication tool. The findings revealed that doctors averaged 15 consultations with specialist doctors per week, conducted face to face or by telephone rather than through a computer-mediated tool. Participants attributed their non-use of a computer-mediated communication tool to the slow transmission speed and frequent outages of the Internet connection, constant electrical power outages, and a lack of e-health application software to support real-time computer-mediated communication. The results led to the recommendation of a hybrid cloud-computing architecture for improving communication between doctors in hospitals.; Comment: 10

Principles of Security: Human, Cyber, and Biological

Stacey, Blake C.; Bar-Yam, Yaneer
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 11/03/2013 English
Search Relevance
55.53%
Cybersecurity attacks are a major and increasing burden to economic and social systems globally. Here we analyze the principles of security in different domains and demonstrate an architectural flaw in current cybersecurity. Cybersecurity is inherently weak because it is missing the ability to defend the overall system instead of individual computers. The current architecture enables all nodes in the computer network to communicate transparently with one another, so security would require protecting every computer in the network from all possible attacks. In contrast, other systems depend on system-wide protections. In providing conventional security, police patrol neighborhoods and the military secures borders, rather than defending each individual household. Likewise, in biology, the immune system provides security against viruses and bacteria using primarily action at the skin, membranes, and blood, rather than requiring each cell to defend itself. We propose applying these same principles to address the cybersecurity challenge. This will require: (a) Enabling pervasive distribution of self-propagating securityware and creating a developer community for such securityware, and (b) Modifying the protocols of internet routers to accommodate adaptive security software that would regulate internet traffic. The analysis of the immune system architecture provides many other principles that should be applied to cybersecurity. Among these principles is a careful interplay of detection and action that includes evolutionary improvement. However...