Page 1 of results: 700 digital items found in 0.023 seconds

Proposta e validação de nova arquitetura de redes de data center; Proposal and Validation of New Architecture for Data Center Networks

Carlos Alberto Bráz Macapuna
Source: Biblioteca Digital da Unicamp. Publisher: Biblioteca Digital da Unicamp
Type: Master's Thesis. Format: application/pdf
Published 29/04/2011. Language: Portuguese
Search Relevance: 55.76%
Like computational grids, cloud data centers are information-processing structures with very demanding network requirements. This dissertation contributes to the efforts to redesign next-generation data center architectures by proposing an effective packet-forwarding service that exploits the availability of programmable switches based on the OpenFlow API. The dissertation describes and experimentally evaluates a new data center network architecture that implements two distributed, fault-resilient services providing the directory and topology information needed to randomly encode source routes using Bloom filters carried in packet headers. By deploying an army of Rack Managers acting as OpenFlow controllers, the proposed architecture, called Switching with in-packet Bloom filters (SiBF), promises scalability, performance, and fault tolerance. The work further argues that packet forwarding can become an internal service of the cloud, and that its implementation can draw on the best practices of cloud applications, such as peer-to-peer distributed storage systems...
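To make the forwarding idea concrete, here is a minimal sketch of an in-packet Bloom filter, assuming made-up link identifiers, a 256-bit filter, and four hashes (SiBF's actual encoding and parameters may differ): each link of the source route is hashed into a bit array carried in the packet header, and a switch forwards on any link whose bits are all set.

    import hashlib

    BF_BITS = 256       # in-packet filter size (assumption, not SiBF's value)
    NUM_HASHES = 4      # hash functions per element (assumption)

    def _positions(link_id):
        # Derive NUM_HASHES bit positions from salted SHA-256 digests.
        for salt in range(NUM_HASHES):
            digest = hashlib.sha256(f"{salt}:{link_id}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % BF_BITS

    def encode_route(links):
        # OR every link of the source route into a single Bloom filter.
        bf = 0
        for link in links:
            for pos in _positions(link):
                bf |= 1 << pos
        return bf

    def should_forward(bf, link_id):
        # A switch forwards on a link iff all of the link's bits are set;
        # false positives are possible, as with any Bloom filter.
        return all(bf >> pos & 1 for pos in _positions(link_id))

    route = encode_route(["ToR1->Aggr2", "Aggr2->Core1", "Core1->ToR9"])
    assert should_forward(route, "Aggr2->Core1")

Because membership tests can yield false positives, a route occasionally matches an extra link; the filter size and hash count trade header overhead against that false-positive rate.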

Modeling, characterization, and optimization of web server power in data centers; Modelagem, caracterização e otimização de potência em centro de dados

Leonardo de Paula Rosa Piga
Source: Biblioteca Digital da Unicamp. Publisher: Biblioteca Digital da Unicamp
Type: Doctoral Thesis. Format: application/pdf
Published 08/11/2013. Language: Portuguese
Search Relevance: 45.57%
To keep up with growing demand for computational resources, IT companies have had to build facilities housing hundreds of thousands of computers, called data centers. This environment is highly dependent on electrical energy, a resource that is increasingly expensive and scarce. In this context, this thesis presents an approach for optimizing power and performance in Web data centers. To that end, we present an infrastructure for measuring the power dissipated by commodity computers, develop empirical models that estimate Web server power, and finally implement one of our global power-optimization heuristics on a cluster of processing nodes, an AMD SeaMicro SM15k. The power-measurement infrastructure consists of: a custom board, installed in commodity computers, that is capable of measuring power; an analog-to-digital converter that samples the power values; and controller software. We present a new methodology for developing power models for Web servers that reduces the number of model parameters and the nonlinear relationships between performance measurements and system power. We evaluated our methodology on two Web servers...
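As a hedged illustration of the empirical-modeling step (the predictors, coefficients, and data below are invented for the sketch, not the thesis's actual model), server power is often fit as a linear function of utilization counters:

    import numpy as np

    # Synthetic training data; the thesis's real predictors may differ.
    rng = np.random.default_rng(0)
    cpu = rng.uniform(0.0, 1.0, 200)             # fraction of CPU busy
    reqs = rng.uniform(0.0, 5000.0, 200)         # HTTP requests per second
    power = 80 + 60 * cpu + 0.004 * reqs + rng.normal(0, 2, 200)  # watts

    # Least-squares fit of power = b0 + b1*cpu + b2*reqs.
    X = np.column_stack([np.ones_like(cpu), cpu, reqs])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    print("base, per-CPU, per-request coefficients:", coef)

    # Estimated power at a new operating point (70% CPU, 2000 req/s).
    print(coef @ np.array([1.0, 0.7, 2000.0]), "W")

Reducing the number of parameters and linearizing the counter-to-power relationship, as the thesis proposes, keeps such models cheap enough to drive an online global optimizer.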

Uma arquitetura para aprovisionamento de redes virtuais definidas por software em redes de data center; An architecture for provisioning software-defined virtual networks in data center networks

Raphael Vicente Rosa
Source: Biblioteca Digital da Unicamp. Publisher: Biblioteca Digital da Unicamp
Type: Master's Thesis. Format: application/pdf
Published 05/06/2014. Language: Portuguese
Search Relevance: 45.76%
Infrastructure providers (InPs) currently allocate virtualized computational and network resources from their data centers to service providers in the form of virtual data centers (VDCs). Aiming to maximize profits and use their data center resources efficiently, InPs face the problem of optimizing the allocation of multiple VDCs. Even though the placement of virtual machines on servers is handled well by many existing techniques and algorithms, cloud computing applications still suffer performance penalties from the bottleneck of underused network resources, explicitly defined by bandwidth and latency limitations. Building on the Software-Defined Networking paradigm, we apply the Network-as-a-Service (NaaS) model to construct a well-defined data center architecture that supports the virtual network provisioning problem in data centers. We build services on top of the RouteFlow platform's control plane that handle the allocation of virtual data center networks while optimizing the utilization of network infrastructure resources. The algorithm proposed in this work performs the virtual network allocation task...
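A minimal sketch of one common approach to this provisioning problem: a greedy, bandwidth-aware embedder that maps each virtual link onto the shortest physical path with enough residual capacity. The topology, capacities, and the greedy rule are illustrative assumptions, not the dissertation's algorithm:

    import networkx as nx

    # Physical fabric with residual bandwidth per link (made-up numbers).
    fabric = nx.Graph()
    fabric.add_edge("ToR1", "Aggr1", bw=10)
    fabric.add_edge("ToR2", "Aggr1", bw=10)
    fabric.add_edge("Aggr1", "Core", bw=20)

    def embed(virtual_links):
        # Map each (src, dst, demand) onto the shortest path that still
        # has enough residual bandwidth, then reserve that bandwidth.
        mapping = {}
        for src, dst, demand in virtual_links:
            usable = nx.subgraph_view(
                fabric,
                filter_edge=lambda u, v: fabric[u][v]["bw"] >= demand)
            path = nx.shortest_path(usable, src, dst)
            for u, v in zip(path, path[1:]):
                fabric[u][v]["bw"] -= demand
            mapping[(src, dst)] = path
        return mapping

    print(embed([("ToR1", "ToR2", 5)]))   # routes via Aggr1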

LORIS: a web-based data management system for multi-center studies

Das, Samir; Zijdenbos, Alex P.; Harlap, Jonathan; Vins, Dario; Evans, Alan C.
Source: Frontiers Media S.A. Publisher: Frontiers Media S.A.
Type: Journal Article
Published 20/01/2012. Language: Portuguese
Search Relevance: 35.77%
Longitudinal Online Research and Imaging System (LORIS) is a modular and extensible web-based data management system that integrates all aspects of a multi-center study: from heterogeneous data acquisition (imaging, clinical, behavioral, and genetic) to storage, processing, and ultimately dissemination. It provides a secure, user-friendly, and streamlined platform to automate the flow of clinical trials and complex multi-center studies. A subject-centric internal organization allows researchers to capture and subsequently extract all information, longitudinal or cross-sectional, from any subset of the study cohort. Extensive error-checking and quality control procedures, security, data management, data querying, and administrative functions provide LORIS with a triple capability: (1) continuous project coordination and monitoring of data acquisition, (2) data storage/cleaning/querying, and (3) interfacing with arbitrary external data processing “pipelines.” LORIS is a complete solution that has been thoroughly tested through the full 10-year life cycle of a multi-center longitudinal project and is now supporting numerous international neurodevelopment and neurodegeneration research projects.

msCompare: A Framework for Quantitative Analysis of Label-free LC-MS Data for Comparative Candidate Biomarker Studies

Hoekman, Berend; Breitling, Rainer; Suits, Frank; Bischoff, Rainer; Horvatovich, Peter
Source: The American Society for Biochemistry and Molecular Biology. Publisher: The American Society for Biochemistry and Molecular Biology
Type: Journal Article
Language: Portuguese
Search Relevance: 35.73%
Data processing forms an integral part of biomarker discovery and contributes significantly to the ultimate result. To compare and evaluate various publicly available open source label-free data processing workflows, we developed msCompare, a modular framework that allows the arbitrary combination of different feature detection/quantification and alignment/matching algorithms in conjunction with a novel scoring method to evaluate their overall performance. We used msCompare to assess the performance of workflows built from modules of publicly available data processing packages such as SuperHirn, OpenMS, and MZmine, together with our in-house developed modules, on peptide-spiked urine and trypsin-digested cerebrospinal fluid (CSF) samples. We found that the quality of results varied greatly among workflows, and, interestingly, heterogeneous combinations of algorithms often performed better than the homogeneous workflows. Our scoring method showed that the union of feature matrices of different workflows outperformed the original homogeneous workflows in some cases. msCompare is open source software (https://trac.nbic.nl/mscompare), and we provide a web-based data processing service for our framework through integration into the Galaxy server of the Netherlands Bioinformatics Center (http://galaxy.nbic.nl/galaxy) to allow scientists to determine which combination of modules provides the most accurate processing for their particular LC-MS data sets.
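A hedged sketch of the mix-and-match evaluation idea (the module stand-ins, toy data, and scoring rule below are ours, not msCompare's actual interfaces): every detector/aligner combination is run and scored, and the best-scoring heterogeneous workflow is reported.

    from itertools import product

    def run_workflow(detect, align, data):
        return align(detect(data))        # detection, then alignment/matching

    def score(features, spiked_truth):
        # Illustrative score: fraction of known spiked peptides recovered.
        return len(features & spiked_truth) / len(spiked_truth)

    # Toy stand-ins for feature-detection and alignment modules.
    detectors = {"det_A": lambda d: {f for f in d if f % 2 == 0},
                 "det_B": lambda d: {f for f in d if f % 3 == 0}}
    aligners = {"aln_X": lambda feats: feats,
                "aln_Y": lambda feats: feats | {25}}
    data, truth = set(range(30)), {0, 4, 9, 16, 25}

    scores = {(d, a): score(run_workflow(detectors[d], aligners[a], data), truth)
              for d, a in product(detectors, aligners)}
    best = max(scores, key=scores.get)
    print(best, scores[best])             # best heterogeneous combination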

Cloudwave: Distributed Processing of “Big Data” from Electrophysiological Recordings for Epilepsy Clinical Research Using Hadoop

Jayapandian, Catherine P.; Chen, Chien-Hung; Bozorgi, Alireza; Lhatoo, Samden D.; Zhang, Guo-Qiang; Sahoo, Satya S.
Source: American Medical Informatics Association. Publisher: American Medical Informatics Association
Type: Journal Article
Published 16/11/2013. Language: Portuguese
Search Relevance: 35.77%
Epilepsy is the most common serious neurological disorder, affecting 50–60 million persons worldwide. Multi-modal electrophysiological data, such as electroencephalography (EEG) and electrocardiography (EKG), are central to effective patient care and clinical research in epilepsy. Electrophysiological data are an example of clinical “big data,” consisting of more than 100 multi-channel signals, with recordings from each patient generating 5–10 GB of data. Current approaches that store and analyze signal data using standalone tools, such as Nihon Kohden neurology software, are inadequate to meet the growing volume of data and the need to support multi-center collaborative studies with real-time, interactive access. In this paper we introduce the Cloudwave platform, which features a Web-based intuitive signal analysis interface integrated with a Hadoop-based data processing module operating on clinical data stored in a “private cloud”. Cloudwave has been developed as part of the National Institute of Neurological Disorders and Stroke (NINDS) funded multi-center Prevention and Risk Identification of SUDEP Mortality (PRISM) project. The Cloudwave visualization interface provides real-time rendering of multi-modal signals with “montages” for EEG feature characterization over 2 TB of patient data generated at the Case University Hospital Epilepsy Monitoring Unit. Results from performance evaluation of the Cloudwave Hadoop data processing module demonstrate an order-of-magnitude improvement in performance over 77 GB of patient data. (Cloudwave project: http://prism.case.edu/prism/index.php/Cloudwave)
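To make the Hadoop-style processing concrete, here is a hedged, pure-Python map/reduce sketch over multi-channel signal windows (the channel names, sampling rate, and per-window statistic are assumptions; Cloudwave's actual jobs differ):

    from collections import defaultdict
    from functools import reduce

    # Toy records: (channel, sample_index, value).
    records = [("EEG-Fp1", i, (i * 7) % 13) for i in range(1000)] + \
              [("EKG", i, (i * 5) % 11) for i in range(1000)]

    def mapper(rec):
        channel, idx, value = rec
        window = idx // 250                   # 1-second windows at 250 Hz
        yield (channel, window), (value, 1)

    def reducer(a, b):
        return (a[0] + b[0], a[1] + b[1])     # combine (sum, count) pairs

    # "Shuffle" phase: group mapped pairs by key, then reduce per key.
    groups = defaultdict(list)
    for rec in records:
        for key, val in mapper(rec):
            groups[key].append(val)

    means = {k: s / n for k, (s, n) in
             ((k, reduce(reducer, v)) for k, v in groups.items())}
    print(means[("EEG-Fp1", 0)])

On a real cluster, Hadoop distributes the map and reduce phases across nodes, which is where the reported order-of-magnitude speedup comes from.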

How do experienced Information Lens users use rules?

Source: Center for Information Systems Research, Massachusetts Institute of Technology, Sloan School of Management. Publisher: Center for Information Systems Research, Massachusetts Institute of Technology, Sloan School of Management
Format: 12 p.; 887747 bytes; application/pdf
Language: Portuguese
Search Relevance: 45.54%
Wendy E. Mackay ... [et al.]; "October 1988."; Includes bibliographical references (p. 12).

Open Data Readiness Assessment Prepared for Government of Antigua and Barbuda

World Bank
Source: Washington, DC. Publisher: Washington, DC
Language: Portuguese
Search Relevance: 45.6%
This 2013 report applies the World Bank Open Data Readiness Assessment Framework to diagnose the readiness of Antigua and Barbuda to create an Open Data initiative. The Framework examines the following dimensions: leadership, policy/legal framework, institutional preparedness, data within government, demand for data, open data ecosystem, financing, technology and skills infrastructure, and key datasets. The report finds that Antigua and Barbuda is clearly ready along the dimensions of leadership, institutional preparedness, financing, and infrastructure and skills. Evidence for readiness is present but less clear for the remaining dimensions. The Government stands to benefit from first-mover advantage and has the potential to lead the Caribbean in Open Data, harness skilled people, and establish itself as a world-class example of government transparency. An Open Data initiative could also increase efficiency and competitiveness in key areas such as tourism, foreign inward investment, and community engagement. Antigua and Barbuda possesses strengths in its institutions...

Efficient VLSI Architectures for Baseband Signal Processing for Wireless Base-Station Receivers

Rajagopal, Sridhar; Bhashyam, Srikrishna; Cavallaro, Joseph R.; Aazhang, Behnaam
Source: Rice University. Publisher: Rice University
Type: Conference Paper
Language: Portuguese
Search Relevance: 45.66%
A real-time VLSI architecture is designed for multiuser channel estimation, one of the core baseband processing operations in wireless base-station receivers. Future wireless base-station receivers will need to use sophisticated algorithms to support extremely high data rates and multimedia. Current DSP architectures are unable to fully exploit the parallelism and bit-level arithmetic present in these algorithms. These features can be revealed and efficiently implemented by task-partitioning the algorithms for a VLSI solution. We modify the channel estimation algorithm for a reduced-complexity fixed-point hardware implementation. We show the complexity and hardware required for three different area-time tradeoffs: an area-constrained, a time-constrained, and an area-time efficient architecture. The area-constrained architecture achieves low data rates with minimum hardware, which may be used in picocell base-stations. The time-constrained solution exploits the entire available parallelism and determines the maximum theoretical data rates. The area-time efficient architecture meets real-time requirements with minimum area overhead. The orders-of-magnitude difference between the area- and time-constrained solutions reveals significant inherent parallelism in the algorithm. All proposed VLSI solutions exhibit better time performance than a previous DSP implementation.
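As a hedged illustration of the fixed-point step mentioned above (the Q1.15 word length and the toy correlation are our assumptions, not the paper's chosen formats), converting a floating-point channel-gain estimate to integer arithmetic looks like:

    import numpy as np

    Q = 15                                   # fractional bits (Q1.15)
    SCALE = 1 << Q

    def to_fixed(x):
        # Quantize to signed 16-bit fixed point; int64 for safe accumulation.
        return np.clip(np.round(x * SCALE), -SCALE, SCALE - 1).astype(np.int64)

    rng = np.random.default_rng(1)
    pilot = rng.uniform(-1, 1, 64)           # known training sequence
    received = 0.5 * pilot + rng.normal(0, 0.01, 64)

    # Channel-gain estimate as a correlation, in float and in fixed point.
    float_gain = float(received @ pilot) / float(pilot @ pilot)
    num = int(to_fixed(received) @ to_fixed(pilot)) >> Q
    den = int(to_fixed(pilot) @ to_fixed(pilot)) >> Q
    print(float_gain, num / den)             # fixed point tracks the float

In hardware, each multiply-accumulate then needs only integer units, which is what exposes the bit-level parallelism the paper exploits.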

A Space weather information service based upon remote and in-situ measurements of coronal mass ejections heading for Earth: A concept mission consisting of six spacecraft in a heliocentric orbit at 0.72 AU

Ritter, Birgit; Meskers, Arjan J. H.; Miles, Oscar; Rußwurm, Michael; Scully, Stephen; Roldán Aranda, Andrés; Hartkorn, Oliver; Jüstel, Peter; Réville, Victor; Lupu, Sorina; Ruffenach, Alexis
Source: EDP Sciences. Publisher: EDP Sciences
Type: Journal Article
Language: Portuguese
Search Relevance: 35.74%
The Earth's magnetosphere is formed as a consequence of interaction between the planet's magnetic field and the solar wind, a continuous plasma stream from the Sun. A number of different solar wind phenomena have been studied over the past 40 years with the intention of understanding and forecasting solar behavior. One of these phenomena in particular, Earth-bound interplanetary coronal mass ejections (CMEs), can significantly disturb the Earth's magnetosphere for a short time and cause geomagnetic storms. This publication presents a mission concept consisting of six spacecraft that are equally spaced in a heliocentric orbit at 0.72 AU. These spacecraft will monitor the plasma properties, the magnetic field's orientation and magnitude, and the 3D-propagation trajectory of CMEs heading for Earth. The primary objective of this mission is to increase space weather forecasting time by means of a near real-time information service that is based upon in-situ and remote measurements of the aforementioned CME properties. The obtained data can additionally be used for updating scientific models. This update is the mission's secondary objective. In-situ measurements are performed using a Solar Wind Analyzer instrumentation package and fluxgate magnetometers...

Data-parallel Digital Signal Processors: Algorithm Mapping, Architecture Scaling and Workload Adaptation

Rajagopal, Sridhar
Source: Rice University. Publisher: Rice University
Type: Thesis; Text
Language: Portuguese
Search Relevance: 35.73%
Emerging applications such as high-definition television (HDTV), streaming video, image processing in embedded applications, and signal processing in high-speed wireless communications are driving a need for high-performance digital signal processors (DSPs) with real-time processing. This class of applications demonstrates significant data parallelism, finite precision, a need for power efficiency, and a need for hundreds of arithmetic units in the DSP to meet real-time requirements. Data-parallel DSPs meet these requirements by employing clusters of functional units, enabling hundreds of computations every clock cycle. These DSPs exploit instruction-level parallelism and subword parallelism within clusters, similar to a traditional VLIW (Very Long Instruction Word) DSP, and exploit data parallelism across clusters, similar to vector processors. Stream processors are data-parallel DSPs that use a bandwidth hierarchy to support dataflow to hundreds of arithmetic units, and they are used for evaluating the contributions of this thesis. Different software realizations of the dataflow in the algorithms can affect the performance of stream processors by more than an order of magnitude. The thesis first presents the design of signal processing algorithms that map efficiently onto stream processors by parallelizing the algorithms and by re-ordering the flow of data. The design space for stream processors also exhibits trade-offs between arithmetic units per cluster...
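A hedged CPU-level analogy of the data-reordering point (not the thesis's stream-processor code): laying out the data so that one vectorized operation replaces a per-cluster loop changes performance substantially even in NumPy.

    import numpy as np
    from timeit import timeit

    clusters, samples = 32, 4096             # 32 "clusters" of samples
    x = np.random.default_rng(2).normal(size=(clusters, samples))
    taps = np.array([0.25, 0.5, 0.25])       # symmetric 3-tap filter

    def per_cluster_loop():
        # One convolution call per cluster: a poor dataflow realization.
        return [np.convolve(x[c], taps, mode="valid") for c in range(clusters)]

    def batched():
        # Re-ordered dataflow: one vectorized op across all clusters.
        return taps[0] * x[:, :-2] + taps[1] * x[:, 1:-1] + taps[2] * x[:, 2:]

    assert np.allclose(np.stack(per_cluster_loop()), batched())
    print(timeit(per_cluster_loop, number=100),
          timeit(batched, number=100))       # batched is markedly faster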

A microscope for the data center

Pereira, Nuno; Tennina, Stefano; Loureiro, João; Severino, Ricardo; Saraiva, Bruno; Santos, Manuel; Pacheco, Filipe; Tovar, Eduardo
Source: Inderscience Publishers. Publisher: Inderscience Publishers
Type: Journal Article
Published in 2015. Language: Portuguese
Search Relevance: 45.74%
Nowadays, data centers are large energy consumers, and the trend is expected to increase further in the coming years given the growth of cloud services. A large portion of this power consumption is due to the control of the physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computation, and even more so in upcoming data centers, where the location of workloads can vary substantially because, for example, workloads are moved around in the cloud infrastructure hosted in the data center. Managing the physical and compute infrastructure of a large data center is therefore an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at very high temporal and spatial resolution. We believe this is an important characteristic for building more accurate heat-flow models of the data center and, with them, finding opportunities to optimize energy consumption. A high-resolution picture of data center conditions also enables minimizing local hot spots, performing more accurate predictive maintenance (failures in all infrastructure equipment can be detected more promptly), and producing more accurate billing. We detail this architecture and define the structure of the underlying messaging system used to collect and distribute the data. Finally...
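A minimal sketch of the kind of messaging structure such a collection layer might use (the field names, topics, and in-process broker are our assumptions, not the paper's actual schema):

    import json, time
    from collections import defaultdict

    def make_reading(rack, u_pos, sensor, value):
        # One sample tagged with its physical location and timestamp.
        return {"rack": rack, "u": u_pos, "sensor": sensor,
                "value": value, "ts": time.time()}

    class Broker:
        # Tiny in-process pub/sub stand-in for the distribution layer.
        def __init__(self):
            self.subs = defaultdict(list)
        def subscribe(self, topic, fn):
            self.subs[topic].append(fn)
        def publish(self, topic, msg):
            for fn in self.subs[topic]:
                fn(msg)

    broker = Broker()
    broker.subscribe("temp", lambda m: print(
        "hot spot?" if m["value"] > 35 else "ok", json.dumps(m)))
    broker.publish("temp", make_reading("R07", 12, "inlet-temp-C", 36.5))

Tagging every sample with rack and rack-unit position is what makes the high spatial resolution usable for heat-flow modeling and hot-spot detection.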

The management of distributed processing

Source: Center for Information Systems Research, Massachusetts Institute of Technology, Alfred P. Sloan School of Management. Publisher: Center for Information Systems Research, Massachusetts Institute of Technology, Alfred P. Sloan School of Management
Format: 30, [2] p.; 1761098 bytes; application/pdf
Language: Portuguese
Search Relevance: 45.6%
by John F. Rockart, Christine V. Bullen, John N. Kogan.; "December 1978." Highlights of a conference held at MIT's Endicott House in Dedham, Mass., March 29-31, 1979.; Includes bibliographical references.

A Space Weather Information Service Based Upon Remote and In-Situ Measurements of Coronal Mass Ejections Heading for Earth

Ritter, Birgit; Meskers, Arjan J. H.; Miles, Oscar; Rußwurm, Michael; Scully, Stephen; Roldán, Andrés; Hartkorn, Oliver; Jüstel, Peter; Réville, Victor; Lupu, Sorina; Ruffenach, Alexis
Source: Cornell University. Publisher: Cornell University
Type: Journal Article
Published 06/02/2015. Language: Portuguese
Search Relevance: 35.73%
The Earth's magnetosphere is formed as a consequence of interaction between the planet's magnetic field and the solar wind, a continuous plasma stream from the Sun. A number of different solar wind phenomena have been studied over the past forty years with the intention of understanding and forecasting solar behavior. One of these phenomena in particular, Earth-bound interplanetary coronal mass ejections (CMEs), can significantly disturb the Earth's magnetosphere for a short time and cause geomagnetic storms. This publication presents a mission concept consisting of six spacecraft that are equally spaced in a heliocentric orbit at 0.72 AU. These spacecraft will monitor the plasma properties, the magnetic field's orientation and magnitude, and the 3D-propagation trajectory of CMEs heading for Earth. The primary objective of this mission is to increase space weather (SW) forecasting time by means of a near real-time information service that is based upon in-situ and remote measurements of the aforementioned CME properties. The mission's secondary objective is to provide vital data to update scientific models. In-situ measurements are performed using a Solar Wind Analyzer instrumentation package and fluxgate magnetometers, while coronagraphs execute remote measurements. Communication with the six identical spacecraft is realized via a deep space network consisting of six ground stations. They provide an information service that is in uninterrupted contact with the spacecraft...
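As a rough back-of-the-envelope check of the warning time such a ring of spacecraft could buy (the CME speed is our assumption, not a figure from the paper):

    AU_KM = 1.495979e8            # kilometers per astronomical unit
    cme_speed_km_s = 500.0        # assumed typical CME speed
    remaining_au = 1.0 - 0.72     # from the 0.72 AU ring to Earth

    warning_s = remaining_au * AU_KM / cme_speed_km_s
    print(f"{warning_s / 3600:.1f} hours of warning")   # about 23 hours

Faster CMEs shrink this window roughly in proportion to their speed, which is why near real-time relay of the in-situ measurements matters.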

Overview of the SOFIA Data Processing System: A generalized system for manual and automatic data processing at the SOFIA Science Center

Shuping, R. Y.; Krzaczek, R.; Vacca, W. D.; Charcos-Llorens, M.; Reach, W. T.; Alles, R.; Clarke, M.; Melchiorri, R.; Radomski, J.; Shenoy, S.; Sandel, D.; Omelian, E. B.
Source: Cornell University. Publisher: Cornell University
Type: Journal Article
Published 17/12/2014. Language: Portuguese
Search Relevance: 45.8%
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprising a 2.5-meter telescope mounted in the aft section of a Boeing 747SP aircraft. During routine operations, several instruments will be available to the astronomical community, including cameras and spectrographs in the near- to far-IR. Raw data obtained in flight require a significant amount of processing to correct for background emission (from both the telescope and atmosphere), remove instrumental artifacts, correct for atmospheric absorption, and apply both wavelength and flux calibration. In general, this processing is highly specific to the instrument and telescope. In order to maximize the scientific output of the observatory, the SOFIA Science Center must provide these post-processed data sets to Guest Investigators in a timely manner. To meet this requirement, we have designed and built the SOFIA Data Processing System (DPS): an in-house set of tools and services that can be used in both automatic ("pipeline") and manual modes to process data from a variety of instruments. Here we present an overview of the DPS concepts and architecture, as well as operational results from the first two SOFIA observing cycles (2013–2014). Comment: Presented at Astronomical Data Analysis Software & Systems XXIV...
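A hedged sketch of the generic pipeline-dispatch idea behind such a system (FORCAST and FLITECAM are real SOFIA instruments, but these step names and interfaces are placeholders, not the DPS's actual design):

    # Registry mapping each instrument to its ordered reduction steps.
    PIPELINES = {
        "FORCAST": ["subtract_background", "remove_artifacts", "flux_calibrate"],
        "FLITECAM": ["subtract_background", "wavelength_calibrate"],
    }

    STEPS = {
        "subtract_background": lambda d: {**d, "bg_removed": True},
        "remove_artifacts": lambda d: {**d, "clean": True},
        "flux_calibrate": lambda d: {**d, "units": "Jy"},
        "wavelength_calibrate": lambda d: {**d, "units": "um"},
    }

    def process(instrument, raw, interactive=False):
        # Run the instrument's steps; manual mode confirms each one.
        data = dict(raw)
        for name in PIPELINES[instrument]:
            if interactive and input(f"run {name}? [y/n] ") != "y":
                continue
            data = STEPS[name](data)
        return data

    print(process("FORCAST", {"frame": 1}))   # automatic ("pipeline") mode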

Energy-aware replica selection for data-intensive services in cloud

Li, Bo
Source: Rochester Institute of Technology. Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: Portuguese
Search Relevance: 45.66%
With the increasing energy cost in data centers, an energy-efficient approach to providing data-intensive services in the cloud is in high demand. This thesis addresses the energy-cost reduction problem of data centers by formulating an energy-aware replica selection problem to guide the distribution of workload among data centers. Popular centralized replica selection approaches address such problems, but they lack scalability and are vulnerable to a crash of the central coordinator; they also do not take total data center energy cost as the primary optimization target. We propose a simple decentralized replica selection system, implemented with two distributed optimization algorithms (a consensus-based distributed projected subgradient method and a Lagrangian dual decomposition method), that works with clients as a decentralized coordinator. We also compare our energy-aware replica selection approach with replica selection using a round-robin algorithm. A prototype of the decentralized replica selection system was designed and developed to collect energy consumption information from data centers. The results show that in the best-case scenario of our experiments, the total energy cost using the Lagrangian dual decomposition method is 17.8% less than a baseline round-robin method and 15.3% less than the consensus-based distributed projected subgradient method. Also...
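A toy sketch of why cost-aware selection beats round-robin (the prices, energy figures, and the centralized greedy rule are illustrative; the thesis instead uses decentralized distributed-optimization methods):

    from itertools import cycle

    # (data center, $/kWh, joules per request) — made-up numbers.
    centers = [("DC-east", 0.12, 40.0), ("DC-west", 0.08, 55.0),
               ("DC-eu", 0.20, 35.0)]

    def cost_per_request(dc):
        _, price, joules = dc
        return price * joules / 3.6e6        # J -> kWh -> dollars

    def greedy_cost(n):
        return n * cost_per_request(min(centers, key=cost_per_request))

    def round_robin_cost(n):
        rr = cycle(centers)
        return sum(cost_per_request(next(rr)) for _ in range(n))

    n = 1_000_000
    g, r = greedy_cost(n), round_robin_cost(n)
    print(f"greedy ${g:.2f} vs round-robin ${r:.2f} ({(r - g) / r:.1%} saved)")

The decentralized algorithms in the thesis chase the same objective without a central coordinator, trading a little optimality for scalability and fault tolerance.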

Development of deterministic collision-avoidance algorithms for routing automated guided vehicles

Pai, Arun S.
Source: Rochester Institute of Technology. Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: Portuguese
Search Relevance: 35.75%
A manufacturing job spends a small portion of its total flow time being processed on machines; during the remaining time, it is either in a queue or being transported from one work center to another. In a fully automated material-handling environment, automated guided vehicles (AGVs) perform the function of transporting jobs between workstations, and high operational costs are involved in these material-handling activities. Consequently, the AGV route schedule dictates subsequent work-center scheduling. For an AGV job transportation schedule to be effective, the issue of collisions among AGVs during travel needs to be addressed. Such collisions cause stalemate situations that can disrupt the flow of materials in the job shop, adding to the non-value time of job processing and thus increasing material handling and inventory holding costs. The goal of the current research was to develop a methodology that could effectively and efficiently derive optimal AGV routes for a given set of transportation requests while accounting for collisions among AGVs during travel. As part of the solution approach, an integer linear program was formulated in Phase I with the capability of optimally predicting AGV routes for a deterministic set of transportation requests; collision-avoidance constraints were developed in this model. The model was programmed using OPL / Visual Basic...
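A minimal sketch of the collision condition the model must rule out (the node-occupancy windows and the one-vehicle-per-node rule are our simplifications, not the dissertation's ILP):

    def conflicts(route_a, route_b):
        # Each route is a list of (node, enter_t, exit_t) occupancy windows;
        # two AGVs collide if they occupy one node in overlapping windows.
        windows = {node: (t_in, t_out) for node, t_in, t_out in route_a}
        hits = []
        for node, t_in, t_out in route_b:
            if node in windows:
                a_in, a_out = windows[node]
                if t_in < a_out and a_in < t_out:      # interval overlap
                    hits.append((node, max(t_in, a_in), min(t_out, a_out)))
        return hits

    agv1 = [("P1", 0, 2), ("P2", 2, 4), ("P3", 4, 6)]
    agv2 = [("P4", 0, 3), ("P2", 3, 5)]    # reaches P2 while AGV1 is there
    print(conflicts(agv1, agv2))           # [('P2', 3, 4)] -> delay AGV2

In the ILP, constraints of exactly this shape (no two vehicles in one zone during overlapping intervals) are encoded with binary timing variables rather than checked after the fact.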

Color in scientific visualization: Perception and image-based data display

Zhang, Hongqin
Source: Rochester Institute of Technology. Publisher: Rochester Institute of Technology
Type: Thesis
Language: Portuguese
Search Relevance: 45.65%
Visualization is the transformation of information into a visual display that enhances users' understanding and interpretation of the data. This thesis project investigated the use of color and human vision modeling for the visualization of image-based scientific data. Two preliminary psychophysical experiments were first conducted on uniform color patches to analyze the perception and understanding of different color attributes, which provided psychophysical evidence and guidance for the choice of color space/attributes for color encoding. Perceptual color scales were then designed for univariate and bivariate image data display, and their effectiveness was evaluated through three psychophysical experiments. Some general guidelines were derived for effective color scale design. Extending to high-dimensional data, two visualization techniques were developed for hyperspectral imagery. The first approach takes advantage of the underlying relationships between PCA/ICA of hyperspectral images and the human opponent color model, and maps the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCbCr, and YUV. The gray-world assumption was adopted to automatically set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or initial classification for a wide variety of spectral scenes. The second approach combines a true-color image and a PCA image based on a biologically inspired visual attention model that simulates the center-surround structure of visual receptive fields as the difference between fine and coarse scales. The model was extended to take into account human contrast sensitivity and to include high-level information, such as second-order statistical structure in the form of a local variance map...
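A hedged sketch of the first approach (mapping the leading principal components of a hyperspectral cube onto a three-channel color space; the random cube and min-max normalization are our stand-ins for the thesis's data and gray-world origin setting):

    import numpy as np

    rng = np.random.default_rng(3)
    h, w, bands = 64, 64, 100
    cube = rng.normal(size=(h, w, bands))      # stand-in hyperspectral image

    # PCA via SVD on mean-centered pixel spectra.
    pixels = cube.reshape(-1, bands)
    pixels = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(pixels, full_matrices=False)
    pcs = pixels @ vt[:3].T                    # first three PCs per pixel

    # Rescale each PC to [0, 1] and treat the triplet as color channels
    # (e.g. the lightness/opponent axes of CIELAB or YCbCr).
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    color = ((pcs - lo) / (hi - lo)).reshape(h, w, 3)
    print(color.shape, float(color.min()), float(color.max()))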

Transforming Data Into Information: The Development and Demonstration of a Model to Support Transportation Planning

Racca, David P.
Source: Center for Applied Demography & Survey Research. Publisher: Center for Applied Demography & Survey Research
Type: Other. Format: 2194028 bytes; application/pdf
Language: Portuguese
Search Relevance: 55.55%
In this project, a functional prototype of a web-based documentation, search, cataloging, and organization tool was created to demonstrate a potentially powerful aid to the Division of Planning. For purposes of discussion, this utility is termed DUROS, the Documentation Utility for Referencing, Organization, and Search. In conclusion, DUROS represents a simple but powerful utility that can be developed and implemented in the near term at relatively low cost when compared to large-scale data warehouse efforts.

Construindo um Data Center; Building a Data Center

Zucchi, Wagner Luiz; Amâncio, Anderson Barreto
Source: Universidade de São Paulo. Superintendência de Comunicação Social. Publisher: Universidade de São Paulo. Superintendência de Comunicação Social
Type: info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion. Format: application/pdf
Published 30/05/2013. Language: Portuguese
Search Relevance: 35.73%
Building a modern data processing center requires striking the right balance between technology trends, efficiency, environmental factors, and low cost. In this article the authors present the issues to consider when designing this type of environment, in a way that is accessible to a non-expert audience.