Page 1 of results: 396 digital items found in 0.027 seconds

Uma infraestrutura de comando e controle de data center para um conjunto de recursos computacionais.; A data center command and control infrastructure for a computing resource ensemble.

Silva, Marcio Augusto de Lima e
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Doctoral Thesis Format: application/pdf
Published on 30/06/2009 Language: Portuguese
Search Relevance: 45.85%
The growth in computing resource needs created by new classes of commercial and scientific applications presents a new kind of challenge for computing infrastructures. The accelerated growth in resource demand drives an accelerated growth in the absolute number of computing elements on such infrastructures. In this scenario, the provisioning and operation of systems become progressively complex tasks, primarily due to the increase in scale. This work proposes a model for a computing infrastructure that operates as an abstract repository of runtime computing resources with varying levels of consumption. Designed to operate as an ensemble (i.e., a coordinated set) of computing resources, large numbers of elements are aggregated into pools of processing, storage and communication resource servers. The ensemble is conceived and implemented with extensive use of virtualization technologies and has a provisioning and operation mechanism organized as a distributed command and control (C²) structure. A proof-of-concept implementation of such a computing infrastructure is presented, and the proposal is validated through a combination of experimental results and emulation.
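The abstract above describes the ensemble's provisioning mechanism only at a high level. The sketch below is not the thesis' implementation; it is a minimal illustration of a command-and-control (C²) hierarchy fanning a provisioning command out to pools of processing, storage and communication resources. All class, node and command names are assumed.

```python
# Illustrative sketch only: a toy C2 tree that propagates a provisioning command
# from a root controller down to resource pools and their servers.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def dispatch(self, command):
        """Apply a command locally, then propagate it down the C2 hierarchy."""
        results = [f"{self.name}: {command}"]
        for child in self.children:
            results.extend(child.dispatch(command))
        return results

if __name__ == "__main__":
    ensemble = Node("c2-root", [
        Node("processing-pool", [Node("vm-host-01"), Node("vm-host-02")]),
        Node("storage-pool", [Node("block-srv-01")]),
        Node("network-pool", [Node("vswitch-01")]),
    ])
    for line in ensemble.dispatch("provision tenant-42"):
        print(line)
```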

Uma arquitetura para aprovisionamento de redes virtuais definidas por software em redes de data center; An architecture for software-defined virtual network embedding in data center networks

Raphael Vicente Rosa
Source: Biblioteca Digital da Unicamp Publisher: Biblioteca Digital da Unicamp
Type: Master's Dissertation Format: application/pdf
Published on 05/06/2014 Language: Portuguese
Search Relevance: 55.99%
Nowadays, infrastructure providers (InPs) allocate virtualized computing and network resources from their data centers to service providers in the form of virtual data centers (VDCs). Aiming to maximize their profits and use their data center resources efficiently, InPs deal with the problem of optimizing the allocation of multiple VDCs. Even though the placement of virtual machines on servers is already optimized by several existing techniques and algorithms, cloud computing applications still have their performance hurt by the bottleneck of under-utilized network resources, explicitly defined by bandwidth and latency limitations. Based on the Software-Defined Networking paradigm, we apply the Network-as-a-Service (NaaS) model to build a well-defined data center architecture to support the problem of provisioning virtual networks in data centers. We build services on top of the control plane of the RouteFlow platform that handle the allocation of virtual data center networks while optimizing the utilization of the network infrastructure resources. The algorithm proposed in this work performs the task of allocating virtual networks...
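As an illustration of the allocation problem described in this abstract, the sketch below shows a naive greedy embedding of a virtual data center's links onto a physical fabric with limited bandwidth. It is not the dissertation's RouteFlow-based algorithm; the function names, the greedy policy, and the simplification of mapping each virtual link to a single physical link are assumptions.

```python
# Illustrative sketch only: greedy embedding of VDC links onto physical links.

def embed_vdc(physical_links, vdc_links):
    """physical_links: {(u, v): free_bandwidth}; vdc_links: [(a, b, demand)].
    Returns {(a, b): (u, v)} mapping each virtual link to a physical link,
    or None if some demand cannot be satisfied."""
    residual = dict(physical_links)
    mapping = {}
    # Place the most demanding virtual links first (simple greedy policy).
    for a, b, demand in sorted(vdc_links, key=lambda l: -l[2]):
        # Pick the physical link with the most spare capacity that still fits.
        candidates = [(bw, link) for link, bw in residual.items() if bw >= demand]
        if not candidates:
            return None  # embedding rejected: not enough bandwidth anywhere
        bw, link = max(candidates)
        residual[link] -= demand
        mapping[(a, b)] = link
    return mapping

if __name__ == "__main__":
    fabric = {("tor1", "agg1"): 10, ("tor2", "agg1"): 10, ("agg1", "core"): 20}
    vdc = [("vm1", "vm2", 4), ("vm2", "vm3", 8)]
    print(embed_vdc(fabric, vdc))
```

A real embedder would compute multi-hop paths and consider node (CPU/memory) constraints as well; the single-link mapping here only illustrates the bandwidth-aware placement decision.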

A missão crítica na articulação das áreas de infraestrutura, telecomunicações e tecnologia da informação do data center: um estudo de seus significados e efeitos; The critical mission on the interaction of the areas of infrastructure, telecommunications and information technology of the data center: a study of their meanings and effects.

Castro, João Luiz Ramalho de
Source: Universidade de Brasília Publisher: Universidade de Brasília
Type: Dissertation
Language: Portuguese
Search Relevance: 45.95%
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, Programa de Pós-Graduação em Engenharia Elétrica, 2010. The greatest challenge faced by technology companies lies in their capacity to adapt to change and to service convergence. The business requires deploying systems supported by management models that guarantee strategic flexibility, technological evolution, availability and information security. Although Information Technology and Telecommunications equipment advances constantly and ever more innovatively, the human resource is still the most important piece of this process, since it is on people that the management and the articulation that set the organizational machinery in motion depend. The objective of this work is to analyze the articulation between the Infrastructure, Information Technology and Telecommunications areas of a Data Center, in order to identify its effects on the expected overall performance. To reach this objective, field research was carried out at the Data Center of a national telephony service provider, and the results pointed to: the need to develop a systemic view...

The Global Opportunity in IT-Based Services : Assessing and Enhancing Country Competitiveness

Sudan, Randeep; Ayers, Seth; Dongier, Philippe; Muente-Kunigami, Arturo; Qiang, Christine Zhen-Wei
Source: World Bank Publisher: World Bank
Language: Portuguese
Search Relevance: 35.95%
This book aims to help policy makers take advantage of the opportunities presented by increased cross-border trade in information technology (IT) services and IT-enabled services (ITES). It begins by defining the two industries and estimating the potential global market opportunities for trade in each. Then it discusses economic and other benefits for countries that succeed in these areas, along with factors crucial to the competitiveness of a country or location, including skills, cost advantages, infrastructure, and a hospitable business environment, and examines the potential competitiveness of small countries and of least developed countries specifically. The volume also discusses policy options for enabling growth in the IT services and ITES industries. Appendix A introduces the Location Readiness Index (LRI), a modeling tool to help countries assess their IT and ITES industries. Finally, appendix B presents an analysis of the IT and ITES industries in Indonesia and Kenya as an illustrative application of the LRI.

Assessment Framework to Monitor and Evaluate e-Government Procurement Systems in India

World Bank
Source: Washington, DC Publisher: Washington, DC
Language: Portuguese
Search Relevance: 45.76%
This working paper elaborates an assessment framework to monitor and evaluate e-government procurement systems in India. This is relevant because many government agencies have sought to extend the use of the e-tendering systems implemented in their organizations to handle procurement in World Bank-funded projects. The World Bank has developed a procedure to assess e-tendering systems for compliance with certain guidelines laid down by Multilateral Development Banks. The World Bank prefers to work with the Government of India on developing a robust mechanism for the assessment of e-tendering systems instead of independently assessing e-tendering systems as has been done to date. The Government of India has already established a set-up under Standardisation Testing and Quality Certification (STQC) for the assessment of e-procurement systems. From its assessment experience, the World Bank has found that the e-procurement applications deployed in many of the e-procurement installations assessed by the Bank were STQC certified...

Open Data Readiness Assessment Prepared for Government of Antigua and Barbuda

World Bank
Source: Washington, DC Publisher: Washington, DC
Language: Portuguese
Search Relevance: 45.87%
This 2013 report applies the World Bank Open Data Readiness Assessment Framework to diagnose the readiness of Antigua and Barbuda to create an Open Data initiative. The Framework examines the following dimensions: leadership, policy/legal framework, institutional preparedness, data within government, demand for data, open data ecosystem, financing, technology and skills infrastructure, and key datasets. The report finds Antigua and Barbuda is clearly ready along the dimensions of leadership, institutional preparedness, financing, and infrastructure and skills. Evidence for readiness is present but less clear for the remaining dimensions. The Government stands to benefit from first-mover advantage and has the potential to lead the Caribbean in Open Data, harness skilled people, and establish itself as a world class example of government transparency. An Open Data initiative could also increase efficiency and competitiveness in key areas such as tourism, foreign inward investment, and community engagement. Antigua and Barbuda possesses strengths in its institutions...

Towards an European Soil Data Center in Support of the EU Thematic Strategy for Soil Protection

HOUSKOVA BEATA; MONTANARELLA LUCA
Source: Romanian National Society of Soil Science Publisher: Romanian National Society of Soil Science
Type: Contributions to Conferences Format: Printed
Language: Portuguese
Search Relevance: 45.69%
Soil protection has never ranked high among the priorities for environmental protection in Europe. Soils are not well known to European citizens, particularly since only a small fraction of the European population currently lives in rural areas and has direct contact with soils. The majority of the urban population in Europe has little understanding of the features and functions of soils. The most common perception is that soils are good dumping sites for all kinds of wastes and that soil can be quite useful as a surface for building houses and infrastructure. Having more data and information about soils in Europe can help improve this situation. The establishment of a European Soil Data Centre by the European Commission in support of the new EU thematic strategy for soil protection can certainly contribute to raising awareness among the general public of the importance of soil protection. Key words: soil protection, EU thematic strategy, European soil data center; JRC.H.7-Land management and natural hazards

Optics and virtualization as data center network infrastructure

Wang, Guohui
Source: Rice University Publisher: Rice University
Language: Portuguese
Search Relevance: 56.02%
The emerging cloud services have motivated a fresh look at the design of data center network infrastructure in multiple layers. To transfer the huge amount of data generated by many data-intensive applications, the data center network has to be fast, scalable and power efficient. To support flexible and efficient sharing in cloud services, service providers deploy a virtualization layer as part of the data center infrastructure. This thesis explores the design and performance analysis of data center network infrastructure in both the physical network and the virtualization layer. On the physical network design front, we present a hybrid packet/circuit switched network architecture which uses circuit-switched optics to augment traditional packet-switched Ethernet in modern data centers. We show that this technique has substantial potential to improve bisection bandwidth and application performance in a cost-effective manner. To push the adoption of optical circuits in real cloud data centers, we further explore and address the circuit control issues in shared data center environments. On the virtualization layer, we present an analytical study of the network performance of virtualized data centers. Using Amazon EC2 as an experiment platform, we quantify the impact of virtualization on network performance in a commercial cloud. Our findings provide valuable insights both to cloud users moving legacy applications into the cloud and to service providers improving the virtualization infrastructure to support better cloud services.
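The virtualization-layer part of this work quantifies the impact of virtualization on network performance. The sketch below illustrates the kind of measurement involved (TCP connection latency and bulk-transfer throughput between two VMs); it is not the thesis' measurement harness, and the host, port and payload sizes are placeholders that assume a sink server is listening on the peer VM.

```python
# Illustrative sketch only: a minimal probe for inter-VM network performance.

import socket
import time

def measure_rtt(host, port, samples=20):
    """Average time to complete a TCP handshake, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return 1000.0 * total / samples

def measure_throughput(host, port, megabytes=32):
    """Send a fixed payload to a sink server and report MB/s."""
    payload = b"x" * (1024 * 1024)
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as s:
        for _ in range(megabytes):
            s.sendall(payload)
    return megabytes / (time.perf_counter() - start)

if __name__ == "__main__":
    peer = "10.0.0.2"  # placeholder address of the peer VM
    print("RTT (ms):", measure_rtt(peer, 5001))
    print("Throughput (MB/s):", measure_throughput(peer, 5001))
```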

Personal Mobile Server / Center for the Study of Mobile Devices and Communications

Singh, Gurminder; Center for the Study of Mobile Devices and Communications
Source: Monterey, California: Naval Postgraduate School. Publisher: Monterey, California: Naval Postgraduate School.
Type: Journal Article
Language: Portuguese
Search Relevance: 45.65%
A personal server is any small, light-weight, battery-powered mobile device with data storage capability and some form of wireless connectivity, such as Bluetooth or 802.11. It may lack standard I/O capabilities such as a keyboard and display. Access to it will be from any computing infrastructure within range of the wireless connection.

Operational Risk Assessment (ORA) for Local Government Engineering Department (LGED) in Bangladesh : Final Report, Volume 1

World Bank
Source: Washington, DC Publisher: Washington, DC
Type: Economic & Sector Work :: Country Infrastructure Framework; Economic & Sector Work
Language: Portuguese
Search Relevance: 35.92%
The Local Government Division, Ministry of Local Government, Rural Development and Cooperatives (LGD) agreed, as part of the identification of a follow-up project to the on-going Rural Transport Improvement Program (RTIP), to launch an Operational Risk Assessment (ORA) of the Local Government Engineering Department (LGED). The ORA draws on and adapts previous work to develop methodologies to assess and suggest mitigation measures for fiduciary risks, as well as inherent risks linked with road and infrastructure construction and maintenance, administrative control risks, and risks associated with political influence. The Fiduciary and Operational Risk Management Improvement Plan (FORMIP) built on the first report to: (i) assess fiduciary and operational risks in LGED's management of projects, assets and other resources, and in LGD's oversight function, that are likely to be major factors in possible funds leakages, delays and undue interferences and overall inefficient use of public resources; (ii) prioritize options which are realistic and available to effectively minimize (and where possible...

Guangzhou Green Trucks Pilot Project : Background Analysis Report

Clean Air Initiative for Asian Cities Center
Source: Clean Air Initiative for Asian Cities Center and the World Bank, Washington, DC Publisher: Clean Air Initiative for Asian Cities Center and the World Bank, Washington, DC
Type: Economic & Sector Work :: Policy Note; Economic & Sector Work
Language: Portuguese
Search Relevance: 45.66%
This document was developed as part of a pilot project, dubbed the Guangzhou Green Trucks Pilot Project, in support of Guangzhou's efforts to improve air quality in preparation for the 2010 Asian Games. The goal of this project was to develop a proof of concept for a truck program in Guangdong Province, and possibly China, that aims to enhance the fuel economy of the truck fleet and to reduce black carbon and other air pollutants from trucks, consequently obtaining GHG emission savings. The project was implemented by the Clean Air Initiative for Asian Cities Center (CAI-Asia Center), in cooperation with Cascade Sierra Solutions, US EPA and the World Bank, and with support from the Guangzhou Environmental Protection Bureau (GEPB), the Guangzhou Transport Committee (GTC), and the Guangzhou Project Management Office (PMO) for the World Bank. The pilot project aims to contribute to addressing three problems related to trucks in Guangzhou and the wider Guangdong province simultaneously: (a) fuel costs and security; (b) air pollution and associated health impacts...

Análise de ampliação de infraestrutura de um centro de dados: sistema tradicional versus híbrido; Analysis of a data center infrastructure expansion: traditional versus hybrid system

Morais, Dayler Losi de
Source: Universidade de Brasília Publisher: Universidade de Brasília
Type: Dissertation
Language: Portuguese
Search Relevance: 45.86%
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2010. The advance of Information Technology requires constant resizing of a Data Center's installed operational capacity, given not only the growth in the number of users and consumers but also the need to update equipment and the concomitant drive to maximize resource usage. This is what happens, for example, with electric power consumption in Data Centers, especially when they were designed some years ago based on criteria quite different from what is required today from the perspective of sustainability and ecology. The objective of this work is to compare, from a financial standpoint, the expansion of the infrastructure of an operating Data Center, identical to one in use in Brasília from which information was collected for this work, against modular Data Center solutions, a type that has been gaining ground in the market thanks to characteristics such as a smaller footprint, mobility, greater energy efficiency, space optimization and others. The results showed, in addition to these characteristics...

Measuring software systems scalability for proactive data center management

Carvalho, Nuno; Pereira, José
Source: Springer Publisher: Springer
Type: Conference Paper or Conference Object
Published on 25/10/2010 Language: Portuguese
Search Relevance: 45.69%
The current trend of increasingly larger Web-based applications makes scalability the key challenge when developing, deploying, and maintaining data centers. At the same time, the migration to the cloud computing paradigm means that each data center hosts an increasingly complex mix of applications, from multiple owners and in constant evolution. Unfortunately, managing such data centers in a cost-effective manner requires that the scalability properties of the hosted workloads be accurately known, namely, to proactively provision adequate resources and to plan the most economical placement of applications. Obviously, stopping each of them and running a custom benchmark to assess its scalability properties is not an option. In this paper we address this challenge with a tool that measures software scalability with respect to CPU availability, towards being able to predict its behavior in the face of varying resources and an increasing workload. This tool does not depend on a particular application and relies only on Linux's SystemTap probing infrastructure. We validate the approach first using simulation and then on an actual system. The resulting better prediction of scalability properties should allow improved (self-)management practices.; Partially funded by PT Inovação S.A.
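The paper measures scalability with respect to CPU availability using Linux's SystemTap. The sketch below is not that tool; it only illustrates the follow-on step of fitting a simple scalability model to throughput observed at different CPU allocations and extrapolating it. The Universal Scalability Law is used here as a stand-in model, and the sample data points and parameter grid are made up.

```python
# Illustrative sketch only: fit a Universal Scalability Law curve to measured
# speedups and predict behavior at a larger CPU allocation.

def usl_speedup(n, sigma, kappa):
    """USL speedup at n CPUs given contention (sigma) and coherency delay (kappa)."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

def fit_usl(points):
    """Brute-force grid search over (sigma, kappa); points = [(cpus, speedup)]."""
    best, best_err = (0.0, 0.0), float("inf")
    for i in range(101):
        for j in range(101):
            sigma, kappa = i / 100.0, j / 1000.0
            err = sum((usl_speedup(n, sigma, kappa) - s) ** 2 for n, s in points)
            if err < best_err:
                best, best_err = (sigma, kappa), err
    return best

if __name__ == "__main__":
    # Hypothetical measurements: speedup relative to the 1-CPU throughput.
    samples = [(1, 1.0), (2, 1.9), (4, 3.4), (8, 5.5)]
    sigma, kappa = fit_usl(samples)
    print("contention:", sigma, "coherency:", kappa)
    print("predicted speedup at 16 CPUs:", usl_speedup(16, sigma, kappa))
```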

Building a microscope for the data center

Pereira, Nuno; Tennina, Stefano; Tovar, Eduardo
Source: Springer Publisher: Springer
Type: Book Chapter
Published in 2012 Language: Portuguese
Search Relevance: 45.97%
Managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). The physical parameters of the data center (such as power, temperature, pressure, humidity) are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in a cloud infrastructure hosted in the data center. In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic to enable more accurate heat-flow models of the data center and, with them, find opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hotspots, performing more accurate predictive maintenance (pending failures in cooling and other infrastructure equipment can be detected more promptly) and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally...

A microscope for the data center

Pereira, Nuno; Tennina, Stefano; Loureiro, João; Severino, Ricardo; Saraiva, Bruno; Santos, Manuel; Pacheco, Filipe; Tovar, Eduardo
Source: Inderscience Publishers Publisher: Inderscience Publishers
Type: Journal Article
Published in 2015 Language: Portuguese
Search Relevance: 55.98%
Nowadays, data centers are large energy consumers and this trend is expected to increase further in the coming years, considering the growth of cloud services. A large portion of this power consumption is due to the control of physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic to enable more accurate heat-flow models of the data center and, with them, find opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hot-spots, performing more accurate predictive maintenance (failures in all infrastructure equipment can be detected more promptly) and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally...
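As a concrete but purely illustrative reading of the data collection architecture described above, the sketch below defines a minimal sensor-reading message and a windowed collection loop. The field names and the in-memory queue standing in for the messaging system are assumptions, not the authors' design.

```python
# Illustrative sketch only: collecting high-resolution physical readings
# (temperature, humidity, pressure, power) through a simple message bus.

import queue
import time
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str      # e.g. "rack12-inlet-temp" (hypothetical naming)
    kind: str           # "temperature", "humidity", "pressure", "power"
    value: float
    timestamp: float    # seconds since epoch

bus = queue.Queue()     # stand-in for the real messaging system

def publish(reading: SensorReading) -> None:
    bus.put(reading)

def collect(window_s: float = 1.0) -> dict:
    """Drain readings for one window and keep the latest value per sensor."""
    deadline = time.time() + window_s
    latest = {}
    while time.time() < deadline:
        try:
            r = bus.get(timeout=max(0.0, deadline - time.time()))
            latest[r.sensor_id] = r
        except queue.Empty:
            break
    return latest

if __name__ == "__main__":
    publish(SensorReading("rack12-inlet-temp", "temperature", 24.6, time.time()))
    print(collect(0.1))
```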

A layer 2 multipath fabric using a centralized controller

Júlio, Fábio José Correia
Source: Faculdade de Ciências e Tecnologia Publisher: Faculdade de Ciências e Tecnologia
Type: Master's Dissertation
Published in 2013 Language: Portuguese
Search Relevance: 45.65%
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering; Ethernet is the most widely used L2 protocol in modern datacenter networks. These networks often serve as the underlying infrastructure for highly virtualised cloud computing services. To support such services, the underlying network needs to support host mobility and multi-tenant isolation for a large number of hosts, while using the available bandwidth efficiently and keeping the inherent costs low. These important properties are not ensured by Ethernet protocols. Bandwidth is always wasted because the spanning tree protocol is used to calculate paths. Also, scalability can be an issue because the MAC learning process is based on frame flooding. At layer 3 some of these problems can be solved, but layer 3 is harder to configure, poses difficulties for host mobility and is more expensive. Recent efforts try to bring the advantages of layer 3 to layer 2. Most of them are based on some form of Equal-Cost Multipath (ECMP) to calculate paths on the data center network. The solution proposed in this document uses a different approach. Paths are calculated using a non-ECMP, policy-based control plane implemented in an OpenFlow controller. OpenFlow is a new protocol developed to help researchers test their discoveries on real networks without disturbing the real traffic. To do that, OpenFlow has to be supported by the network's switches. Communication between systems is done over SSL and all switch features are available to the controller. The non-ECMP policy-based algorithm is a different way to do routing. Instead of using unitary metrics on each link...
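To make the non-ECMP, policy-based path computation concrete, the sketch below shows a centralized controller choosing paths over the fabric graph with load-aware link weights instead of equal-cost hashing. This is not the dissertation's OpenFlow control-plane logic; the weighting rule, topology and function names are assumptions.

```python
# Illustrative sketch only: centralized path computation with policy weights.

import heapq

def policy_path(graph, src, dst, load):
    """graph: {node: {neighbor: base_cost}}; load: {(u, v): current utilization}.
    Returns the cheapest path where cost = base_cost * (1 + utilization), so the
    controller steers new flows away from loaded links (non-ECMP behavior)."""
    frontier = [(0.0, src, [src])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, base in graph[node].items():
            if nxt not in seen:
                w = base * (1.0 + load.get((node, nxt), 0.0))
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    fabric = {"h1": {"tor1": 1}, "tor1": {"h1": 1, "agg1": 1, "agg2": 1},
              "agg1": {"tor1": 1, "tor2": 1}, "agg2": {"tor1": 1, "tor2": 1},
              "tor2": {"agg1": 1, "agg2": 1, "h2": 1}, "h2": {"tor2": 1}}
    # With agg1 loaded, the controller routes the new flow through agg2.
    print(policy_path(fabric, "h1", "h2", {("tor1", "agg1"): 0.8}))
```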

Big Data Strategies for Data Center Infrastructure Management Using a 3D Gaming Platform

Hubbell, Matthew; Moran, Andrew; Arcand, William; Bestor, David; Bergeron, Bill; Byun, Chansup; Gadepally, Vijay; Michaleas, Peter; Mullen, Julie; Prout, Andrew; Reuther, Albert; Rosa, Antonio; Yee, Charles; Kepner, Jeremy
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 29/06/2015 Language: Portuguese
Search Relevance: 65.9%
High Performance Computing (HPC) is intrinsically linked to effective Data Center Infrastructure Management (DCIM). Cloud services and HPC have become key components in Department of Defense and corporate Information Technology competitive strategies in the global and commercial spaces. As a result, the reliance on consistent, reliable Data Center space is more critical than ever. The costs and complexity of providing quality DCIM are constantly being tested and evaluated by the United States Government and companies such as Google, Microsoft and Facebook. This paper demonstrates a system in which Big Data strategies and 3D gaming technology are leveraged to successfully monitor and analyze multiple HPC systems and a lights-out modular HP EcoPOD 240a Data Center on a single platform. Big Data technology and a 3D gaming platform enable the relative real-time monitoring of 5000 environmental sensors and more than 3500 IT data points, and display visual analytics of the overall operating condition of the Data Center from a command center over 100 miles away. In addition, the Big Data model allows for in-depth analysis of historical trends and conditions to optimize operations, achieving even greater efficiency and reliability.; Comment: 6 pages; accepted to the IEEE High Performance Extreme Computing (HPEC) conference 2015

HVSTO: Efficient Privacy Preserving Hybrid Storage in Cloud Data Center

Dong, Mianxiong; Li, He; Ota, Kaoru; Zhu, Haojin
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 23/05/2014 Language: Portuguese
Search Relevance: 45.72%
In cloud data centers, well-managed shared storage is the main structure used for the storage of virtual machines (VMs). In this paper, we propose Hybrid VM Storage (HVSTO), a privacy-preserving shared storage system designed for virtual machine storage in large-scale cloud data centers. Unlike traditional shared storage, HVSTO adopts a distributed structure to preserve the privacy of virtual machines, which is threatened in a traditional centralized structure. To improve I/O latency in this distributed structure, we use a hybrid system that combines solid-state disks and distributed storage. From the evaluation of our demonstration system, HVSTO provides scalable and sufficient throughput for the platform-as-a-service infrastructure.; Comment: 7 pages, 8 figures, in proceedings of The Second International Workshop on Security and Privacy in Big Data (BigSecurity 2014)
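The sketch below is a toy illustration of the hybrid-storage idea described in the abstract: hot blocks are served from a fast SSD-like tier while a distributed backend remains the source of truth. It is not HVSTO itself; the promotion threshold, eviction rule and class names are assumptions.

```python
# Illustrative sketch only: a toy hybrid block store with a fast tier and a
# distributed tier, showing the latency-motivated split described above.

class HybridStore:
    def __init__(self, ssd_capacity=1024):
        self.ssd = {}             # fast tier: block_id -> data
        self.distributed = {}     # stand-in for the distributed backend
        self.hits = {}            # access counts used as a simple heat metric
        self.ssd_capacity = ssd_capacity

    def write(self, block_id, data):
        self.distributed[block_id] = data     # backend is the source of truth

    def read(self, block_id):
        self.hits[block_id] = self.hits.get(block_id, 0) + 1
        if block_id in self.ssd:
            return self.ssd[block_id]         # fast path
        data = self.distributed[block_id]     # slow path
        self._maybe_promote(block_id, data)
        return data

    def _maybe_promote(self, block_id, data):
        """Promote a block to the SSD tier once it looks hot, evicting the coldest."""
        if self.hits[block_id] < 2:
            return
        if len(self.ssd) >= self.ssd_capacity:
            coldest = min(self.ssd, key=lambda b: self.hits.get(b, 0))
            del self.ssd[coldest]
        self.ssd[block_id] = data

if __name__ == "__main__":
    store = HybridStore()
    store.write("vm1:blk0", b"...")
    store.read("vm1:blk0"); store.read("vm1:blk0")   # second read promotes the block
    print("on SSD tier:", "vm1:blk0" in store.ssd)
```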

Data center design & enterprise networking

Mahood, Christian
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: Portuguese
Search Relevance: 65.78%
Today's enterprise networks and data centers have become very complex and have completely integrated themselves into every facet of the organizations they serve. Organizations require Internet-facing services and applications to be available at any hour of the day or night. These organizations have realized that with centralized computing and highly available components, their technological presence with customers can be greatly enhanced. Creating an infrastructure that supports such high availability takes numerous components and resources functioning optimally. When an organization decides to design a data center, it draws on resources that provide insight into which components to deploy; much of this information is based on recommendations made by third-party vendors or on limited past experience. This research provides a course offering as a solution to help provide students with the information needed to design and comprehend the major components of a modern data center. The information included in the course offering has been compared with industry-accepted standards and various other resources to provide reliable and accurate information. The course has been architected around eight major topics. These topics cover network design...

Construindo um Data Center; Building a Data Center

Zucchi, Wagner Luiz; Amâncio, Anderson Barreto
Source: Universidade de São Paulo. Superintendência de Comunicação Social Publisher: Universidade de São Paulo. Superintendência de Comunicação Social
Type: info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion; Format: application/pdf
Published on 30/05/2013 Language: Portuguese
Search Relevance: 65.81%
Building a modern data processing center requires striking the right balance between technology trends, efficiency, environmental factors and low cost. In this article the authors present the issues to consider when designing this type of environment, in a way that is accessible to a non-expert audience.