
An RFID Bulk Cargo Supervising System

Foina, Aislan Gomide; Fernandez, Francisco Javier Ramirez; Barbin, Silvio Ernesto
Source: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Type: Journal Article
Portuguese
Search Relevance: 35.57%
In this work, a system using active RFID tags to supervise truck bulk cargo is described. The tags are attached to the truck bodies, and readers are distributed in the cargo buildings and attached to the weighing stations and discharge platforms. PDAs with cameras and WiFi support are provided to the inspectors, and access points are installed throughout the discharge area to allow effective confirmation of unload actions and the acquisition of pictures for future audit. Broadband radio equipment is used to establish efficient communication links between the weighing stations and the cargo buildings, which are usually located far from each other in the field. A web application was specially developed to enable robust communication between the devices for efficient device management, data processing, and report generation for the operating personnel. The system was deployed in a cargo station of a Brazilian seaport. The results obtained demonstrate the effectiveness of the proposed system.
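As a rough illustration of how the described components might fit together, the sketch below models a tag read at a weighing station being matched against an inspector's unload confirmation. All class and field names are invented for this example; the paper does not publish its data model.

```python
# Minimal sketch (hypothetical names, not the paper's implementation) of how the
# supervising web application might correlate RFID reads with unload confirmations.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TagRead:
    tag_id: str        # active RFID tag attached to a truck body
    reader_id: str     # reader at a weighing station or discharge platform
    weight_kg: float   # weight captured at the weighing station

@dataclass
class UnloadConfirmation:
    tag_id: str        # truck confirmed by an inspector's PDA
    photo_ref: str     # picture stored for future audit

@dataclass
class CargoLedger:
    reads: Dict[str, List[TagRead]] = field(default_factory=dict)

    def record_read(self, read: TagRead) -> None:
        self.reads.setdefault(read.tag_id, []).append(read)

    def confirm_unload(self, conf: UnloadConfirmation) -> bool:
        # An unload is accepted only if the truck was previously seen by a reader.
        return bool(self.reads.get(conf.tag_id))

ledger = CargoLedger()
ledger.record_read(TagRead("TAG-001", "WEIGH-STATION-2", 28450.0))
print(ledger.confirm_unload(UnloadConfirmation("TAG-001", "img/0001.jpg")))  # True
```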

Middleware for communication between distributed objects for computer management based on wireless networks (WSE-OS)

Crepaldi, Luis Gustavo
Source: Universidade Estadual Paulista (UNESP) Publisher: Universidade Estadual Paulista (UNESP)
Type: Master's Thesis Format: 117 leaves : color ill.
Portuguese
Search Relevance: 35.77%
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Graduate Program in Computer Science - IBILCE; To simplify computer management, several administration systems structured around physical connections adopt advanced techniques for software configuration management. However, the tight coupling between hardware and software individualizes this management and penalizes the mobility and ubiquity of computing power. In this scenario, each computer becomes an individual entity to be managed, requiring manual configuration of the system image. Technologies that offer centralized management based on physical client-server connections, combining virtualization techniques with distributed file systems, suffer from degraded flexibility and ease of installation of the managing system. Other centralized-management architectures that structure data sharing over physical connections and depend on the PXE protocol present the same impasses described above. Given the limitations of centralized management models based on physical connections...

Development and application of crystallographic software with an access protocol to a distributed database

Utuni, Vegner Hizau dos Santos
Source: Universidade Estadual Paulista (UNESP) Publisher: Universidade Estadual Paulista (UNESP)
Type: Doctoral Thesis Format: 90 leaves : ill. + 1 CD-ROM
Portuguese
Search Relevance: 15.73%
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Graduate Program in Chemistry - IQ; Since the revolution brought about by the second generation of computers around 1960, which spread computing to many sectors of society, chip processing capacity has evolved continuously and, with it, the concept of software. In the specific case of scientific software, this increase in the volume and speed of data processing makes it possible to apply ever more complex physicochemical models. In 1969, the crystallographer Hugo Rietveld created a method that exploits this technological paradigm and is today known as the Rietveld method. Developed specifically for the refinement of X-ray diffraction data from polycrystalline samples, it came to be used in every area of new-materials research. For good stability of the refinement process, the model must be given an initial approximation of each phase that makes up the sample. This requirement ensures the stability of the iterative process that fits the experimental data to the theoretical function, a characteristic that imposes a dependence on specialized databases. Refinement using the Rietveld method is complex and nonlinear, which necessarily implies the use of software. This characteristic, together with the dependence on crystallographic databases, justifies the use of the new technology of distributed databases...
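To illustrate why the iterative refinement needs an initial approximation, here is a toy nonlinear fit of a single synthetic diffraction peak using NumPy and SciPy. It is only a sketch of the general idea of fitting experimental data to a theoretical profile; it is not the Rietveld software developed in the thesis.

```python
# Illustrative sketch only: a toy nonlinear refinement of a single diffraction peak,
# showing why an initial approximation (p0) is needed for the iterative fit to converge.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_peak(two_theta, amplitude, center, width, background):
    """Simple theoretical profile: one Gaussian peak on a flat background."""
    return amplitude * np.exp(-((two_theta - center) ** 2) / (2 * width ** 2)) + background

# Synthetic "experimental" pattern around a peak at 2-theta = 30 degrees.
two_theta = np.linspace(25, 35, 200)
rng = np.random.default_rng(0)
observed = gaussian_peak(two_theta, 1000, 30.0, 0.15, 50) + rng.normal(0, 10, two_theta.size)

# The initial approximation plays the role of the starting structural model:
# a poor guess can make the nonlinear least-squares iteration diverge.
p0 = [800, 29.8, 0.2, 40]
params, _ = curve_fit(gaussian_peak, two_theta, observed, p0=p0)
print("refined amplitude, center, width, background:", params)
```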

Sparsely faceted arrays (SFAs): a mechanism supporting parallel allocation, communication, and garbage collection

Brown, Jeremy Hanford, 1972-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 126 p.; 664152 bytes; 663501 bytes; application/pdf; application/pdf
Portuguese
Search Relevance: 15.77%
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy...
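A minimal sketch of the core idea, assuming a facet can be modeled as a per-node block allocated on first touch. The class and method names below are invented; the thesis describes hardware-level mechanisms rather than this Python model.

```python
# Hedged sketch (names are invented, not from the thesis): the core idea of a sparsely
# faceted array -- one global name, with per-node facets allocated only on first touch.
from typing import Dict, List, Optional

class SparselyFacetedArray:
    def __init__(self, facet_size: int):
        self.facet_size = facet_size
        # node id -> locally allocated facet; absent keys mean "never touched here".
        self.facets: Dict[int, List[Optional[object]]] = {}

    def touch(self, node_id: int) -> List[Optional[object]]:
        """Lazily allocate this node's facet on first access."""
        if node_id not in self.facets:
            self.facets[node_id] = [None] * self.facet_size
        return self.facets[node_id]

    def write(self, node_id: int, index: int, value: object) -> None:
        self.touch(node_id)[index] = value

    def nodes_with_facets(self) -> List[int]:
        # This is the bookkeeping the thesis ties to memory management:
        # the collector must know which nodes actually allocated facets.
        return sorted(self.facets)

sfa = SparselyFacetedArray(facet_size=4)
sfa.write(node_id=7, index=0, value="only node 7 pays for storage")
print(sfa.nodes_with_facets())  # [7]
```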

Fault-tolerance and load management in a distributed stream processing system

Balazinska, Magdalena
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 199 p.; 6212416 bytes; 7581442 bytes; application/pdf; application/pdf
Portuguese
Search Relevance: 25.87%
Advances in monitoring technology (e.g., sensors) and an increased demand for online information processing have given rise to a new class of applications that require continuous, low-latency processing of large-volume data streams. These "stream processing applications" arise in many areas such as sensor-based environment monitoring, financial services, network monitoring, and military applications. Because traditional database management systems are ill-suited for high-volume, low-latency stream processing, new systems, called stream processing engines (SPEs), have been developed. Furthermore, because stream processing applications are inherently distributed, and because distribution can improve performance and scalability, researchers have also proposed and developed distributed SPEs. In this dissertation, we address two challenges faced by a distributed SPE: (1) fault-tolerant operation in the face of node failures, network failures, and network partitions, and (2) federated load management. For fault-tolerance, we present a replication-based scheme, called Delay, Process, and Correct (DPC), that masks most node and network failures. When network partitions occur, DPC addresses the traditional availability-consistency trade-off by maintaining...
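A minimal sketch of the Delay, Process, and Correct idea as summarized in the abstract, assuming an operator may emit tentative results during a partition and replace them with corrections once the missing input arrives. The API is invented for illustration; it is not the dissertation's implementation.

```python
# Minimal sketch of the "Delay, Process, and Correct" idea as described in the abstract
# (invented API): during a partition an operator may emit tentative results, then emit
# corrections once the missing input becomes available.
from dataclasses import dataclass
from typing import List

@dataclass
class StreamTuple:
    value: float
    tentative: bool  # True if produced while some upstream input was unavailable

class SumOperator:
    def __init__(self) -> None:
        self.emitted: List[StreamTuple] = []

    def process(self, inputs: List[float], partial: bool) -> StreamTuple:
        out = StreamTuple(sum(inputs), tentative=partial)
        self.emitted.append(out)
        return out

    def correct(self, full_inputs: List[float]) -> StreamTuple:
        # Replace the last tentative output with a stable, corrected one.
        assert self.emitted and self.emitted[-1].tentative
        corrected = StreamTuple(sum(full_inputs), tentative=False)
        self.emitted[-1] = corrected
        return corrected

op = SumOperator()
print(op.process([1.0, 2.0], partial=True))   # tentative result during the partition
print(op.correct([1.0, 2.0, 3.0]))            # correction once the partition heals
```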

Using computational grids to meet e-Gov requirements

Komatsu, Edson Shin-Iti
Source: Pontifícia Universidade Católica do Rio Grande do Sul; Porto Alegre Publisher: Pontifícia Universidade Católica do Rio Grande do Sul; Porto Alegre
Type: Master's Thesis
Portuguese
Search Relevance: 15.61%
Electronic Government (e-Gov) should serve as the basis for managing the Federal Government's Information and Communication Technology. Its fundamental requirements are the promotion of citizenship, the prioritization of digital inclusion, the use of knowledge management, the rationalization of resources, the standardization of norms, policies, and standards, and integration with the other federative entities. Computational grids, in turn, provide an environment for running parallel applications in which distributed resources can be used transparently. This environment enables the processing of large data volumes, resource sharing, and cost reduction. Computational grids can therefore help public agencies achieve their goals. In this study, the requirements of Electronic Government are analyzed from the perspective of using computational grids to run e-Gov applications, examining how grids can be useful to government agencies. An e-Gov application is modeled and tested in a grid environment to show the potential of this execution platform under such conditions.

Automatic anomaly detection in distributed environments using Bayesian networks

Silva Junior, Brivaldo Alves da
Source: Universidade Federal de Mato Grosso do Sul Publisher: Universidade Federal de Mato Grosso do Sul
Type: Master's Thesis
Portuguese
Search Relevance: 55.62%
Diagnosing anomalies in large corporate networks consumes considerable time from technical support teams, mainly because of the complexity of the countless interactions between applications and network elements (servers, routers, links, etc.). In recent years, several scientific works have proposed automated tools for anomaly detection in distributed environments. These tools fall into two large groups: those that use intrusive approaches, in which applications must be modified to log communication events and ease problem tracing; and non-intrusive ones, in which packets are captured directly from the network and statistical techniques are applied to infer, with some degree of confidence, the likely causes of problems. Both approaches have advantages and disadvantages. However, non-intrusive techniques are more widely accepted because they are easier to deploy and do not require already-developed applications to be changed to include event-logging mechanisms. The most complete and promising approach to this problem, called Sherlock, uses network traces to automatically build an Inference Graph (IG) that models the multiple interactions and dependencies present in a distributed environment. Despite the progress made by Sherlock in modeling the problem...
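As a toy illustration of inference over a dependency graph, in the spirit of the Sherlock-style approach the dissertation builds on, the sketch below ranks single-fault root-cause hypotheses by how well they explain observed service anomalies. The services, dependencies, and probabilities are all invented.

```python
# Toy sketch of root-cause ranking over a dependency graph; names and numbers are
# invented for illustration, not taken from the work itself.
from typing import Dict, List

# Which network components each user-visible service depends on.
dependencies: Dict[str, List[str]] = {
    "webmail":    ["dns", "auth-server", "mail-server"],
    "intranet":   ["dns", "web-server"],
    "file-share": ["auth-server", "file-server"],
}

# Observed state of each service: True means the service looks anomalous.
observed = {"webmail": True, "intranet": False, "file-share": True}

def score(root_cause: str, p_fail_if_dep: float = 0.9, p_fail_otherwise: float = 0.05) -> float:
    """Likelihood of the observations under the single-fault hypothesis `root_cause`."""
    likelihood = 1.0
    for service, is_anomalous in observed.items():
        p_anomalous = p_fail_if_dep if root_cause in dependencies[service] else p_fail_otherwise
        likelihood *= p_anomalous if is_anomalous else (1.0 - p_anomalous)
    return likelihood

candidates = {c for deps in dependencies.values() for c in deps}
ranking = sorted(candidates, key=score, reverse=True)
print(ranking[0])  # "auth-server": it best explains both anomalous services
```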

Speeding up a path-based policy language compiler

Guven, Ahmet
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral Thesis Format: xiv, 151 p. : ill. (some col.)
Portuguese
Search Relevance: 45.63%
Approved for public release, distribution unlimited; Policy-based network management has increasing importance, driven by the growing importance of large distributed networks and the growing number of services that run on them. Policy languages, which enable users to define policies in a formal language, are one of the main tools of policy management. Even though there are policy languages like PFDL or RPSL, none of them offers robust, policy-focused conflict detection and resolution. A new policy language, the Path-based Policy Language (PPL), has been developed recently. It encompasses as many of the features addressed in other policy languages as possible, as well as providing means for testing policies for consistency and for defining both static and dynamic policies. Most importantly, PPL provides the ability to detect and resolve conflicts between policies by translating policy rules into formal logic statements and checking them with a Prolog program. Even though in theory PPL seems to be a very high-performance policy language, its current compiler has a performance bottleneck. In some cases the PPL compiler cannot finish compilation and runs forever without returning any conflict results. This thesis focuses on the PPL compiler's performance bottleneck and introduces solutions to speed up the PPL compiler. The new PPL compiler achieves a reasonable compilation time for any configuration file for a network with 100 nodes while maintaining its ability to detect and resolve policy conflicts.; Lieutenant Junior Grade...
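PPL's actual conflict detection translates policy rules into formal logic and checks them with Prolog; the sketch below only illustrates the flavor of a pairwise conflict check on a simplified rule model, with invented fields and rules.

```python
# Toy sketch of the kind of pairwise conflict check the abstract alludes to; this is
# illustrative Python, not PPL's Prolog-based translation of policy rules.
from dataclasses import dataclass
from itertools import combinations
from typing import List, Tuple

@dataclass(frozen=True)
class Rule:
    path: Tuple[str, ...]   # network path the rule applies to
    traffic: str            # traffic class, e.g. "video"
    action: str             # "permit" or "deny"

def conflicts(a: Rule, b: Rule) -> bool:
    # Two rules conflict if they target the same path and traffic class
    # but prescribe contradictory actions.
    return a.path == b.path and a.traffic == b.traffic and a.action != b.action

rules: List[Rule] = [
    Rule(("n1", "n2", "n3"), "video", "permit"),
    Rule(("n1", "n2", "n3"), "video", "deny"),
    Rule(("n1", "n4"), "voip", "permit"),
]
print([(a, b) for a, b in combinations(rules, 2) if conflicts(a, b)])
```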

Digital Identity Toolkit: A Guide for Stakeholders in Africa

World Bank Group
Source: Washington, DC Publisher: Washington, DC
Type: Publications & Research :: Working Paper; Publications & Research
Portuguese
Search Relevance: 35.86%
Digital identity, or electronic identity (eID), offers developing nations a unique opportunity to accelerate the pace of their national progress. It changes the way services are delivered, helps grow a country's digital economy, and supports effective safety nets for disadvantaged and impoverished populations. Though digital identity is an opportunity, it raises important considerations with respect to privacy, cost, capacity, and long-term viability. This report provides a strategic view of the role of identification in a country's national development, as well as a tactical view of the building blocks and policy choices needed for setting up eID in a developing country. The report presents a conceptual overview of digital identity management practices, providing a set of guidelines at a national level that policymakers can find helpful as they begin to think about modernizing the identity infrastructure of their country into eID. The report also provides an operating knowledge of the terminology and concepts used in identity management and an exposition of the functional blocks that must be in place. Policy considerations are referenced at the end of the report that governments can use as they contemplate a digital identity program. Given its abridged nature...

Practical Guidance for Defining a Smart Grid Modernization Strategy : The Case of Distribution

Madrigal, Marcelino; Uluski, Robert
Source: Washington, DC: World Bank Publisher: Washington, DC: World Bank
Type: Publications & Research :: Publication
Portuguese
Search Relevance: 35.76%
This report provides some practical guidance on how utilities can define their own smart grid vision, identify priorities, and structure investment plans. While most of these strategic aspects apply to any area of the electricity grid, the document focuses on the segment of distribution. The guidance includes key building blocks that are needed to modernize the distribution grid and provides examples of grid modernization projects. Potential benefits that can be achieved (in monetary terms) for a given investment range are also discussed. The concept of the smart grid is relevant to any grid regardless of its stage of development. What varies are the magnitude and type of the incremental steps toward modernization that will be required to achieve a specific smart grid vision. Importantly, a utility that is at a relatively low level of grid modernization may leapfrog one or more levels of modernization to achieve some of the benefits offered by the highest levels of grid modernization. Smart grids impact electric distribution systems significantly and sometimes more than any other part of the electric power grid. In developing countries...

Laboratory for Information Globalization and Harmonization

Madnick, Stuart; Choucri, Nazli; Siegel, Michael; Haghseta, Farnaz; Moulton, Allen; Zhu, Harry
Source: MIT - Massachusetts Institute of Technology Publisher: MIT - Massachusetts Institute of Technology
Format: 866400 bytes; application/pdf
Portuguese
Search Relevance: 25.77%
The convergence of three distinct but interconnected trends - unrelenting globalization, growing worldwide electronic connectivity, and increasing knowledge intensity of economic activity - is creating powerful new opportunities and challenges for global politics. This rapidly changing environment has information demands that surpass existing capabilities for information access, interpretation, and overall use, thus hindering our abilities to address emergent and complex global challenges, such as terrorism and other security threats. This reality has serious implications for two diverse domains of scholarship: international relations (IR) in political science and information technology (IT). Unless IT advances remain "one step ahead" of emergent realities and complexities, strategies for better understanding and responding to critical global challenges will be severely impeded. For example, more so now than ever, the U.S. Office of Counter-Terrorism and the newly-created Office of Homeland Security rely on intelligence information from all over the world to develop strategic responses to security threats. However...

The management of distributed processing

Source: Center for Information Systems Research, Massachusetts Institute of Technology, Alfred P. Sloan School of Management Publisher: Center for Information Systems Research, Massachusetts Institute of Technology, Alfred P. Sloan School of Management
Format: 30, [2] p.; 1761098 bytes; application/pdf
Portuguese
Search Relevance: 45.76%
By John F. Rockart, Christine V. Bullen, and John N. Kogan. "December 1978." Highlights of a conference held at MIT's Endicott House in Dedham, Mass., March 29-31, 1979. Includes bibliographical references.

Using Provenance to support Good Laboratory Practice in Grid Environments

Ney, Miriam; Kloss, Guy K.; Schreiber, Andreas
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 13/12/2011. Portuguese
Search Relevance: 25.8%
Conducting experiments and documenting results is the daily business of scientists. Good and traceable documentation enables other scientists to confirm procedures and results for increased credibility. Documentation and scientific conduct are regulated and termed "good laboratory practice." Laboratory notebooks are used to record each step in conducting an experiment and processing data. Originally, these notebooks were paper-based. Due to computerised research systems, acquired data became more elaborate, thus increasing the need for electronic notebooks with data storage, computational features, and reliable electronic documentation. As a new approach to this, a scientific data management system (DataFinder) is enhanced with features for traceable documentation. Provenance recording is used to meet requirements of traceability, and this information can later be queried for further analysis. DataFinder has further important features for scientific documentation: it employs a heterogeneous and distributed data storage concept. This enables access to different types of data storage systems (e.g., Grid data infrastructure, file servers). In this chapter we describe a number of building blocks that are available or close to finished development. These components are intended for assembling an electronic laboratory notebook for use in Grid environments...
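A minimal sketch of provenance recording for traceability, assuming an append-only log queried by data identifier; the data model is invented for illustration and is not DataFinder's actual interface.

```python
# Minimal sketch of provenance recording for traceable documentation (an invented
# data model for illustration; DataFinder's actual provenance interface differs).
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class ProvenanceRecord:
    activity: str          # e.g. "calibrate-sensor", "run-analysis"
    inputs: tuple          # identifiers of data items consumed
    outputs: tuple         # identifiers of data items produced
    agent: str             # the scientist or service responsible
    timestamp: str

class LabNotebook:
    def __init__(self) -> None:
        self._log: List[ProvenanceRecord] = []   # append-only, never edited

    def record(self, activity: str, inputs: tuple, outputs: tuple, agent: str) -> None:
        self._log.append(ProvenanceRecord(
            activity, inputs, outputs, agent,
            datetime.now(timezone.utc).isoformat()))

    def lineage(self, data_id: str) -> List[ProvenanceRecord]:
        """Query: every recorded step that produced or consumed this data item."""
        return [r for r in self._log if data_id in r.inputs or data_id in r.outputs]

nb = LabNotebook()
nb.record("acquire", (), ("raw-001",), "alice")
nb.record("analyse", ("raw-001",), ("result-001",), "bob")
print(nb.lineage("raw-001"))
```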

A simulation of a distributed file system

Stanley, Alan
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance: 75.79%
This thesis presents a simulation of a distributed file system. It is a simplified version of the distributed file system found in the LOCUS distributed operating system. The simulation models a network of multiuser computers of any configuration. The number of sites in the network can range from a minimum of three sites to a maximum of twenty. A simple database management system is supported that allows the creation of an indexed database for reading and updating records. The distributed file system supports a transaction mechanism, record level locking, file replication and update propagation, and network transparency. To test the effect of site failures and network partitioning on the distributed file system, a facility is provided to "crash", "reboot", and "jump to" random sites in the network.
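A hedged sketch of the crash/reboot facility described above, assuming replicated file versions propagate only to live sites and a recovering site catches up from a live replica; the structure is invented, not the LOCUS-derived simulator's code.

```python
# Hedged sketch of the simulation's crash/reboot idea (invented structure): replicated
# files only propagate updates to sites that are currently up, and crashed or
# partitioned sites miss them until they recover.
from typing import Dict, Set

class Network:
    def __init__(self, n_sites: int):
        assert 3 <= n_sites <= 20                    # site range supported by the simulator
        self.up: Set[int] = set(range(n_sites))      # sites currently running
        self.replicas: Dict[int, int] = {s: 0 for s in range(n_sites)}  # site -> file version

    def crash(self, site: int) -> None:
        self.up.discard(site)

    def reboot(self, site: int) -> None:
        self.up.add(site)
        # On recovery, pull the latest committed version from any live replica.
        self.replicas[site] = max(self.replicas[s] for s in self.up)

    def update(self) -> None:
        # A committed transaction produces a new version and propagates it to live sites.
        new_version = max(self.replicas.values()) + 1
        for s in self.up:
            self.replicas[s] = new_version

net = Network(4)
net.crash(3)
net.update()          # sites 0-2 now hold version 1; site 3 is stale
net.reboot(3)         # site 3 catches up on recovery
print(net.replicas)   # {0: 1, 1: 1, 2: 1, 3: 1}
```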

A Study of the Xerox XNS Filing Protocol as Implemented on Several Heterogenous Systems

Flint, Edward
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance: 65.67%
The Xerox Network System is composed of heterogeneous processors connected across a variety of transmission media. A series of protocols is defined to describe the communication mechanisms between system elements. One of these protocols, the Filing Protocol, defines a general-purpose file management system. Current implementations of the protocol, although derived from the Xerox specification, fall short of providing the interconnectivity between elements desired in a heterogeneous network system. The definition of an easily implemented protocol subset that provides the common file system functions of retrieval, storage, enumeration/location, and deletion is derived from experience with several implementations. This definition and an accompanying implementation document provide a mechanism to guide future implementations toward increased interconnectivity.
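A sketch of the subset's four common file system functions expressed as an abstract interface, with a trivial in-memory implementation standing in for one heterogeneous system; illustrative only, since the real protocol is defined over XNS's own wire formats rather than as a programming-language API.

```python
# Sketch of the protocol subset's four common operations as an abstract interface
# (illustrative only; not the XNS Filing Protocol's actual definitions).
from abc import ABC, abstractmethod
from typing import Dict, List

class FilingSubset(ABC):
    @abstractmethod
    def store(self, name: str, content: bytes) -> None: ...
    @abstractmethod
    def retrieve(self, name: str) -> bytes: ...
    @abstractmethod
    def enumerate(self, prefix: str = "") -> List[str]: ...
    @abstractmethod
    def delete(self, name: str) -> None: ...

class InMemoryFiling(FilingSubset):
    """A trivial local implementation, standing in for one heterogeneous system."""
    def __init__(self) -> None:
        self._files: Dict[str, bytes] = {}
    def store(self, name, content): self._files[name] = content
    def retrieve(self, name): return self._files[name]
    def enumerate(self, prefix=""): return sorted(n for n in self._files if n.startswith(prefix))
    def delete(self, name): del self._files[name]

svc = InMemoryFiling()
svc.store("reports/q1.txt", b"totals")
print(svc.enumerate("reports/"))   # ['reports/q1.txt']
```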

An alternative language interface for the mistress relational database patterned after IBM's query-by-example

Vogel, Susan C.
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance: 55.67%
This thesis effort developed a user-oriented query language interface, patterned after IBM's Query-by-Example, for the Mistress relational database. The interface, Mistress/QBE, is written entirely in C and uses the UNIX curses subroutine library to allow full-screen input and output. Mistress/QBE allows the user to issue commands to draw pictorial representations of tables that exist in the database. The user then enters values and operators into the tables to specify a query, indicating attributes to be used in conditional selections, sort and grouping orders, and output formats. Mistress/QBE decodes the information entered on the screen and formulates a Mistress Query Language command, which is passed to the standard Mistress C language interface for execution. With a few minor exceptions, any query that can be written in the Mistress Query Language can also be written in Mistress/QBE. The interface also includes a high-level operator, called grouping, which is supported by IBM's QBE but not by native Mistress.
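An illustrative sketch of the query-by-example decoding step: values and operators entered into a table skeleton are turned into a textual query. The output here is generic SQL-like text rather than the actual Mistress Query Language, and the function is invented for this example.

```python
# Illustrative sketch of the query-by-example idea: entries typed into a drawn table
# are decoded into a textual query (generic SQL-like text, not Mistress Query Language).
from typing import Dict, List

def qbe_to_query(table: str, example_row: Dict[str, str]) -> str:
    projected: List[str] = []
    conditions: List[str] = []
    for column, entry in example_row.items():
        if entry == "P.":                      # QBE's "print" marker selects a column
            projected.append(column)
        elif entry:                            # anything else is a condition on the column
            conditions.append(f"{column} {entry}" if entry[0] in "<>=" else f"{column} = {entry}")
    query = f"SELECT {', '.join(projected) or '*'} FROM {table}"
    if conditions:
        query += " WHERE " + " AND ".join(conditions)
    return query

# The user "draws" the EMPLOYEE table and fills in an example row on screen.
print(qbe_to_query("EMPLOYEE", {"NAME": "P.", "DEPT": "'sales'", "SALARY": "> 30000"}))
# SELECT NAME FROM EMPLOYEE WHERE DEPT = 'sales' AND SALARY > 30000
```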

Design and implementation of page based distributed shared memory in distributed database systems

Raman, Padmanabhan
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance: 75.86%
This project is a simulation of page-based distributed shared memory, originally called IVY, proposed by Li in 1986 [3] and by Li and Hudak in 1989 [4]. The 'Page Based Distributed Shared Memory System' consists of a collection of clients or workstations connected to a server by a Local Area Network. The server contains a shared memory segment within which the distributed database is located. The shared memory segment is divided into pages, hence the name 'Page Based Distributed Shared Memory System', where each page represents a table within that distributed database. In the simplest variant, each page is present on exactly one machine. A reference to a local page is done at full memory speed. An attempt to reference a page on a different machine causes a page fault, which is trapped by the software. The software then sends a message to the remote machine, which finds the needed page and sends it to the requesting process. The fault is then restarted and can now complete, which is achieved with the help of an Inter-Process Communication (IPC) library. In essence, this design is similar to traditional virtual memory systems: when a process touches a nonresident page, a fault occurs and the operating system fetches the page and maps it in. The difference here is that instead of getting the page from the disk...
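A toy sketch of the page-fault path just described, assuming a directory that maps each page to its owning machine; the classes are invented, and the project's real implementation uses IPC between processes rather than in-process calls.

```python
# Toy sketch of the page-fault path described above (invented classes): a reference to
# a non-resident page "faults", the page is fetched from the machine that owns it,
# mapped locally, and the access is retried.
from typing import Dict

class Machine:
    def __init__(self, name: str):
        self.name = name
        self.resident: Dict[int, bytes] = {}   # page number -> page contents held locally

class DSM:
    def __init__(self, machines, owner_of: Dict[int, str]):
        self.machines = {m.name: m for m in machines}
        self.owner_of = owner_of               # directory: which machine owns each page

    def read(self, who: str, page: int) -> bytes:
        local = self.machines[who]
        if page not in local.resident:         # page fault: trapped by software
            owner = self.machines[self.owner_of[page]]
            local.resident[page] = owner.resident[page]   # request + reply, simplified
        return local.resident[page]            # access restarted at full local speed

a, b = Machine("A"), Machine("B")
b.resident[7] = b"table rows for page 7"
dsm = DSM([a, b], owner_of={7: "B"})
print(dsm.read("A", 7))                        # first read faults and fetches from B
```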

Enhancements to the XNS authentication-by-proxy model

Wing, Peter D.
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance: 65.67%
Authentication is the secure network architecture mechanism by which a pair of suspicious principals communicating over presumably insecure channels assure themselves that each is who it claims to be. The Xerox Network Systems architecture proposes one such authentication scheme. This thesis examines the system consequences of the XNS model's unique proxy variant, by which a principal may temporarily commission a second network entity to assume its identity as a means of authority transfer. Specific attendant system failure modes are highlighted. The student's associated original contributions include proposed model revisions which rectify authentication shortfalls yet facilitate the temporal authority transfer motivating the proxy model. Consistent with the acknowledgement that no single solution is defensible as best under circumstances of such technical and administrative complexity, three viable such architectures are specified. Finally, the demand for a disciplined agent management mechanism within a distributed system such as XNS is resoundingly affirmed in the course of these first-order pursuits.
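A hedged sketch of a time-limited delegation credential, to make the idea of temporarily commissioning another entity concrete; the scheme below (an HMAC over principal, agent, and expiry with a shared key) is invented for illustration and is not the XNS proxy mechanism.

```python
# Hedged sketch of the proxy idea in the abstract (invented scheme, not the XNS
# protocol): a principal issues a time-limited delegation so an agent can act for it.
from dataclasses import dataclass
import hashlib, hmac, time

SECRET = b"shared-key-known-to-the-authentication-service"   # placeholder secret

@dataclass(frozen=True)
class ProxyCredential:
    principal: str     # identity being delegated
    agent: str         # entity temporarily assuming that identity
    expires_at: float  # bounds the authority transfer in time
    mac: bytes         # integrity tag over the fields above

def issue(principal: str, agent: str, lifetime_s: float) -> ProxyCredential:
    expires = time.time() + lifetime_s
    msg = f"{principal}|{agent}|{expires}".encode()
    return ProxyCredential(principal, agent, expires, hmac.new(SECRET, msg, hashlib.sha256).digest())

def verify(c: ProxyCredential) -> bool:
    msg = f"{c.principal}|{c.agent}|{c.expires_at}".encode()
    ok = hmac.compare_digest(c.mac, hmac.new(SECRET, msg, hashlib.sha256).digest())
    return ok and time.time() < c.expires_at     # reject tampered or expired delegations

cred = issue("alice", "print-server", lifetime_s=60)
print(verify(cred))   # True while the delegation window is open
```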

Implementation of an activity coordinator for an activity-based distributed system

Shaw, Robert
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance: 75.81%
Distributed computing systems offer a number of potential benefits, including improved fault-tolerance and reliability, increased processor availability, faster response time, flexibility of system configuration, effective management of geographically distributed resources, and integration of special-purpose machines into applications. In order to realize this potential, support systems that aid in the development of distributed programs are needed. An Activity System facilitates the design and implementation of distributed programs: (1) by allowing the programmer to group functionally related objects into an activity (or job) which is recorded within the system; the information stored concerning relationships between objects may then be used to control their interactions and thus to manage distributed resources; and (2) by effectively eliminating the need for the programmer to deal with the underlying details of inter-process communication; the system handles the establishment of communication links between objects in an activity and controls the routing of messages to activity members. To evaluate the uses of activities in developing distributed programs, I have implemented a portion of such a system; namely, an Activity Coordinator ...
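A minimal sketch of the activity idea, assuming the coordinator records the members of a named activity and routes messages among them; the API is invented and is not the thesis implementation.

```python
# Minimal sketch of the activity idea (invented API): functionally related objects are
# grouped into a named activity, and the coordinator routes messages to members so
# objects need not manage communication links themselves.
from collections import defaultdict
from typing import Callable, Dict

class ActivityCoordinator:
    def __init__(self) -> None:
        self.members: Dict[str, Dict[str, Callable[[str], None]]] = defaultdict(dict)

    def join(self, activity: str, member: str, handler: Callable[[str], None]) -> None:
        """Record that `member` belongs to `activity` and how to deliver messages to it."""
        self.members[activity][member] = handler

    def send(self, activity: str, sender: str, message: str) -> None:
        """Route a message to every other member of the sender's activity."""
        for name, deliver in self.members[activity].items():
            if name != sender:
                deliver(f"[{activity}] {sender}: {message}")

coord = ActivityCoordinator()
coord.join("payroll-run", "db-object", print)
coord.join("payroll-run", "report-object", print)
coord.send("payroll-run", "db-object", "records committed")
```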