In this master's thesis, the objective is to investigate a set of essays from the 2012 Exame Nacional do Ensino Médio (Enem), analyzing the dialogic relations established by the writers through active interaction with reported voices in defense of a point of view on the prompt topic: immigration to Brazil in the twenty-first century. Since 2009, the Enem has selected candidates for admission to higher education, and its essay component requires the production of an expository-argumentative text. From a total of 2,720 essays provided by the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep), a corpus of 121 essays was assembled according to two criteria: (a) score bands ranging from 200 to 1000 points, preserving the diversity of grades, and (b) the five Brazilian regions, ensuring regional representativeness. The theoretical foundation of this work centers on the dialogic perspective on language of Bakhtin and the Circle, chiefly the concepts of the concrete utterance and reported discourse, and on the ideological perspective of literacy studies. Taking writing as a process of responsive understanding, this research sought to comprehend each text as an active reply to the essay prompt and to the official discourses that echo from it. Across the set of essays...
Master's thesis in Informatics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. Obtaining correct results and behavior in computing is a long-standing concern. The following excerpt on the advent of calculating machines was written in 1834 and illustrates the importance already given at that time to mechanisms for tolerating and detecting calculation errors: "The most reliable and effective check against errors arising in the process of computation is to perform the same computation on separate and independent calculating machines; and this check is even more decisive if the calculations are carried out by different methods." Two mechanisms emerge from this statement and are considered important for obtaining correct computations. The first is replication, which consists of computing the results more than once and comparing them, or taking a vote at the end. The second is diversity, which consists of using distinct methods and components in each computation. Today, both belong to the family of mechanisms for fault and intrusion tolerance (FIT), which can tolerate both accidental and malicious faults in computer systems. In practical terms...
Isolation of bacterial mutants hypersusceptible to antibiotics can reveal novel targets for antibiotic potentiators. However, identification of such mutants is a difficult task which normally requires laborious replica plating of thousands of colonies. The technique proposed here allows for the positive selection of genetic knockout mutants leading to hypersusceptibility. This technique, designated SDR (selection for DNA release), involves introduction of random insertions of a marker gene into the chromosome of a highly transformable bacterial species, followed by treatment of the obtained library with an antibiotic at subinhibitory concentrations. DNA released by lysing bacteria is collected and used to transform fresh bacteria, selecting for insertion of the marker gene. These selection cycles are repeated until variants with a hypersusceptibility phenotype caused by insertion of the marker begin to dominate in the library. This approach allowed for isolation of a number of mutants of the gram-negative opportunistic pathogen Acinetobacter sp. susceptible to 4- to 16-times-lower concentrations of ampicillin than wild-type bacteria. The mutations affected proteins involved in peptidoglycan turnover and, surprisingly, proteins involved in exopolysaccharide production. A further modification of the SDR technique is described which allows for selecting mutants hypersensitive to agents that affect bacterial physiology but do not cause cell lysis...
The ribosome translocation step that occurs during protein synthesis is a highly conserved, essential activity of all cells. The precise movement of one codon that occurs following peptide bond formation is regulated by elongation factor G (EF-G) in eubacteria or elongation factor 2 (EF-2) in eukaryotes. To begin to understand molecular interactions that regulate this process, a genetic selection was developed with the aim of obtaining conditional-lethal alleles of the gene (fusA) that encodes EF-G in Escherichia coli. The genetic selection depends on the observation that resistant strains arose spontaneously in the presence of sublethal concentrations of the antibiotic kanamycin. Replica plating was performed to obtain mutant isolates from this collection that were restrictive for growth at 42 degrees C. Two tightly temperature-sensitive strains were characterized in detail and shown to harbor single-site missense mutations within fusA. The fusA100 mutant encoded a glycine-to-aspartic acid change at codon 502. The fusA101 allele encoded a glutamine-to-proline alteration at position 495. Induction kinetics of beta-galactosidase activity suggested that both mutations resulted in slower elongation rates in vivo. These missense mutations were very near a small group of conserved amino acid residues (positions 483 to 493) that occur in EF-G and EF-2 but not EF-Tu. It is concluded that these sequences encode a specific domain that is essential for efficient translocase function.
We describe a new technique for selection of cloned gene segments which are expressed preferentially at one developmental stage but at a relatively low level. A nitrocellulose filter replica of plaques of lambda phage which contain approximately 8 kb inserts of genomic DNA is prepared; it is hybridized with a small amount of [32P]-labeled mRNA prepared from one developmental stage, in the presence of a several-hundred-fold excess of competitor RNA from a different stage. We show that clones of Dictyostelium nuclear DNA which form hybrids under these conditions indeed encode developmentally regulated mRNAs. Our previous analysis of Dictyostelium discoideum differentiation indicated that transcripts from about 12% of the genome appear in mRNA at one defined stage of differentiation - the formation of cell-cell aggregates. A number of our new clones are novel, in that they encode multiple discrete mRNA species all of which accumulate only at the cell aggregate stages; others encode one or more mRNAs which appear at the tight aggregate stage and also one or more which are present throughout differentiation. These latter clones, in particular, would be difficult to identify using other selection techniques.
The assembly and processing of glycoprotein-linked oligosaccharides in Dictyostelium discoideum has been shown to generate a wide array of glycan structures which undergo dramatic developmental regulation. As late steps in processing of these oligosaccharides involve sulfation, a sulfate suicide selection procedure was developed to select for temperature-sensitive glycoprotein-processing mutants. Of 673 clones derived from the survivors of suicide selection, 99 were classified by replica-plating fluorography as temperature sensitive for sulfate transport or incorporation. Of these, 74 were unable to complete the developmental program to the fruiting body stage at the restrictive temperature, 29 being blocked in some aspect of aggregation and 45 being blocked at some postaggregation stage. Quantitative metabolic labeling experiments with representative clones showed that they incorporated wild-type levels of [35S]methionine but reduced levels of sulfate at the restrictive temperature. The specific incorporation patterns in the mutants suggest that distinct oligosaccharide-processing steps are involved in different developmental events.
Mutant Chinese hamster ovary cells altered in glycoproteins have been isolated by selecting for ability to survive exposure to [6-3H]fucose. Mutagenized wild-type cells were permitted to incorporate [3H]fucose to approximately 1 cpm of trichloroacetic acid-insoluble radioactivity per cell and then frozen for several days to accumulate radiation damage. The overall viability of the population was reduced by 5- to 50-fold. Four consecutive selection cycles were carried out. The surviving cells were screened by replica plating-fluorography for clones showing decreased incorporation of fucose into trichloroacetic acid-insoluble macromolecules. Considerable enrichment for cells deficient in fucose uptake or incorporation into proteins (or both) was found in populations surviving the later selection cycles. Two mutant clones isolated after the fourth selection cycle had the same doubling time as the wild type, but contained only 30 to 40% as much fucose bound to proteins as the wild type. Sialic acid contents of the mutants and the wild type were similar. The mutants differed quantitatively and qualitatively from the wild type and from each other with respect to total glycoprotein profiles as visualized by sodium dodecyl sulfate gel electrophoresis. Differences were also found in resistances to cytotoxicity of lectins such as concanavalin A and wheat germ agglutinin.
By using a chemically defined medium, a general and highly specific procedure was devised to select for mutant cells with less abundant or structurally altered sterol in their surface membranes. Within a certain concentration range, the polyene antibiotic filipin was shown to kill only cells with normal (as opposed to decreased) membrane sterol levels. Sterol-requiring derivatives of LM cells were isolated by chemical mutagenesis, filipin treatment, and cloning followed by replica plating in soft agar. Mutants (S1 and S2) are described which, when compared to normal cells, show decreased synthesis of desmosterol in vivo from acetate and mevalonate relative to cell number or to fatty acid synthesis. When exogenous sterol is supplied, mutants S1 and S2 grow normally in suspension culture. However, when deprived of sterol supplement, mutant S1 grows slower than wild-type cells and mutant S2 lyses within one to two generations. Gas/liquid chromatography revealed that the mutants contained a normal spectrum of fatty acids including unsaturated fatty acyl groups but, unlike wild-type cells, they have less abundant (mutant S1) or no (mutant S2) desmosterol in either the presence or absence of exogenous cholesterol. In vitro experiments with mevalonate as the substrate suggest that the defect in both mutants is in a demethylation reaction subsequent to lanosterol synthesis. The selection method developed here may permit the isolation of mutants with defective membrane incorporation of sterols and other polyisoprenoids as well as defective synthesis of these compounds.
A protocol is presented for the global refinement of homology models of proteins. It combines the advantages of temperature-based replica-exchange molecular dynamics (REMD) for conformational sampling and the use of statistical potentials for model selection. The protocol was tested using 21 models. Of these, 14 were models of 10 small proteins for which high-resolution crystal structures were available; the remainder were targets of the recent CASPR exercise. It was found that REMD in combination with currently available force fields could sample near-native conformational states starting from high-quality homology models. Conformations in which the backbone RMSD of secondary structure elements (SSE-RMSD) was lower than the starting value by 0.5 to 1.0 Å were found for 15 out of the 21 cases (average 0.82 Å). Furthermore, when a simple scoring function consisting of two statistical potentials was used to rank the structures, one or more structures with SSE-RMSD at least 0.2 Å lower than the starting value was found among the 5 best-ranked structures in 11 out of the 21 cases. The average improvement in SSE-RMSD for the best models was 0.42 Å. However, none of the scoring functions tested identified the structures with the lowest SSE-RMSD as the best models, although all identified the native conformation as the one with lowest energy. This suggests that while the proposed protocol proved effective for the refinement of high-quality models of small proteins, scoring functions remain one of the major limiting factors in structure refinement. This and other aspects by which the methodology could be further improved are discussed.
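The temperature-based replica exchange at the heart of REMD can be illustrated by its standard Metropolis swap criterion. This is a generic sketch of the REX acceptance rule, not code from the protocol described above:

```python
import math

def remd_swap_probability(E_i, E_j, T_i, T_j, kB=1.0):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at temperatures T_i and T_j with potential energies
    E_i and E_j: min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    # Swaps that lower the energy of the colder replica are always accepted.
    return 1.0 if delta >= 0.0 else math.exp(delta)
```

When the colder replica already holds the lower energy, the swap is accepted only with an exponentially small probability, which preserves the Boltzmann ensemble at each temperature.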
Behavioral ecologists and evolutionary biologists have long studied how predators respond to prey items novel in color and pattern. Because a predatory response is influenced by both the predator’s ability to detect the prey and a post-detection behavioral response, variation among prey types in conspicuousness may confound inference about post-prey-detection predator behavior. That is, a relatively high attack rate on a given prey type may result primarily from enhanced conspicuousness and not predators’ direct preference for that prey. Few studies, however, account for such variation in conspicuousness. In a field experiment, we measured predation rates on clay replicas of two aposematic forms of the poison dart frog Dendrobates pumilio, one novel and one familiar, and two cryptic controls. To ask whether predators prefer or avoid a novel aposematic prey form independently of conspicuousness differences among replicas, we first modeled the visual system of a typical avian predator. Then, we used this model to estimate replica contrast against a leaf litter background to test whether variation in contrast alone could explain variation in predator attack rate. We found that absolute predation rates did not differ among color forms. Predation rates relative to conspicuousness did...
Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T0. This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
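The "essential subspace" that the method couples to a higher temperature is spanned by the leading principal components of the atomic fluctuations. A minimal, hypothetical sketch (plain PCA on a Cartesian trajectory, not the authors' implementation):

```python
import numpy as np

def essential_modes(traj, n_modes=2):
    """Return the n_modes largest-variance collective modes of a trajectory.

    traj: array of shape (frames, 3N) of Cartesian coordinates.
    The eigenvectors of the covariance matrix with the largest eigenvalues
    span the 'essential subspace' of the system's slow motions.
    """
    X = traj - traj.mean(axis=0)              # remove the average structure
    cov = X.T @ X / (len(traj) - 1)           # positional covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]            # sort descending
    return vals[order][:n_modes], vecs[:, order][:, :n_modes]
```

For a trajectory that fluctuates along a single direction, the first mode captures all of the variance and the remaining eigenvalues vanish.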
Efficiently embedding virtual clusters in a physical network is a challenging
problem. In this paper we consider a scenario where the physical network has
the structure of a balanced tree. This assumption is justified by many
real-world implementations of datacenters. We consider an extension of virtual
cluster embedding that introduces replication among data chunks. In many
real-world applications, data is stored in a distributed and redundant way.
This assumption introduces additional hardness in deciding which replica to
process. By reduction from the classical NP-complete problem of Boolean
satisfiability, we show limits on the optimality of embedding. Our result
holds even in trees of edge height bounded by three. We also show that
limiting the replication factor to two replicas per chunk type does not make
the problem simpler.; Comment: 2 figures
Compressive sensing (CS) is a new methodology to capture signals at a lower
rate than the Nyquist sampling rate when the signals are sparse or sparse in
some domain. The performance of CS estimators is analyzed in this paper using
tools from statistical mechanics, in particular the replica method. This
method has been used to analyze communication systems like Code Division
Multiple Access (CDMA) and multiple-input multiple-output (MIMO) systems of
large size. Replica analysis, nowadays rigorously proven, is an efficient
tool for analyzing large systems in general. Specifically, we analyze the
performance of some of the estimators used in CS, such as the LASSO (Least
Absolute Shrinkage and Selection Operator) estimator and the zero-norm
regularizing estimator, as special cases of the maximum a posteriori (MAP)
estimator, using a Bayesian framework to connect the CS estimators and the
replica method. We use both the replica symmetric (RS) ansatz and the
one-step replica symmetry breaking (1RSB) ansatz, claiming that the latter is
effective when the problem is not convex. This work is primarily analytical;
numerical results are deferred to a subsequent step.; Comment: The analytical work and results were presented at the 2012 IEEE
European School of Information Theory in Antalya...
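As a concrete illustration of one of the estimators analyzed above, the LASSO can be solved numerically by iterative soft-thresholding (ISTA). This generic sketch shows the estimator itself and is unrelated to the replica analysis:

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Iterative soft-thresholding (ISTA) for the LASSO problem
    min_x 0.5*||y - A x||^2 + lam*||x||_1, a standard sparse CS estimator."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L          # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

With an orthonormal measurement matrix the iteration reduces to a single soft-thresholding of the observations, which makes the shrinkage behavior easy to check by hand.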
Joint user selection (US) and vector precoding (US-VP) is proposed for
multiuser multiple-input multiple-output (MU-MIMO) downlink. The main
difference between joint US-VP and conventional US is that US depends on data
symbols for joint US-VP, whereas conventional US is independent of data
symbols. The replica method is used to analyze the performance of joint US-VP
in the large-system limit, where the numbers of transmit antennas, users, and
selected users tend to infinity while their ratios are kept constant. The
analysis under the assumptions of replica symmetry (RS) and 1-step replica
symmetry breaking (1RSB) implies that optimal data-independent US provides
the same performance as random US in the large-system limit, whereas
data-independent US is capacity-achieving when only the number of users
tends to infinity. It is shown that joint US-VP can provide a substantial
reduction of the energy penalty in the large-system limit. Consequently,
joint US-VP outperforms separate US-VP, which consists of a combination of
vector precoding (VP) and data-independent US, in terms of the achievable
sum rate. In particular, data-dependent US can be applied to general
modulation, and implemented with a greedy algorithm.
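The greedy flavor of user selection mentioned above can be sketched generically. The paper's data-dependent objective is not reproduced here; this hypothetical sketch instead greedily maximizes a common data-independent sum-rate proxy (the log-determinant of the Gram matrix of the selected users' channels), purely to illustrate the selection loop:

```python
import numpy as np

def greedy_user_selection(H, k):
    """Greedy US skeleton: repeatedly add the user whose channel row most
    increases log det(I + H_S H_S^H) over the selected set S.
    H: (n_users, n_tx) channel matrix; returns the k selected user indices."""
    def objective(rows):
        Hs = H[rows]
        # slogdet returns (sign, log|det|); the Gram matrix is PSD so sign > 0.
        return np.linalg.slogdet(np.eye(len(rows)) + Hs @ Hs.conj().T)[1]
    selected, remaining = [], list(range(H.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda u: objective(selected + [u]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

A data-dependent variant would evaluate the candidate set against the current data symbols inside the loop instead of this channel-only proxy.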
Multiple-input multiple-output (MIMO) broadcast channels (BCs) (MIMO-BCs)
with perfect channel state information (CSI) at the transmitter are considered.
Two joint user selection (US) and vector precoding (VP) (US-VP) schemes with
zero-forcing transmit beamforming (ZF-BF) are investigated: US with continuous
VP (US-CVP) and data-dependent US (DD-US). The replica method, developed
in statistical physics, is used to analyze the energy penalties for the two
US-VP schemes in the large-system limit, where the number of users, the number
of selected users, and the number of transmit antennas tend to infinity with
their ratios kept constant. Four observations are obtained in the large-system
limit: First, the assumptions of replica symmetry (RS) and 1-step replica
symmetry breaking (1RSB) for DD-US can provide acceptable approximations for
low and moderate system loads, respectively. Secondly, DD-US outperforms CVP
with random US in terms of the energy penalty for low-to-moderate system loads.
Thirdly, the asymptotic energy penalty of DD-US is indistinguishable from that
of US-CVP for low system loads. Finally, a greedy algorithm for DD-US proposed
in the authors' previous work can achieve nearly optimal performance for
low-to-moderate system loads.; Comment: submitted to ISITA2012
Increasing need for large-scale data analytics in a number of application
domains has led to a dramatic rise in the number of distributed data management
systems, both parallel relational databases, and systems that support
alternative frameworks like MapReduce. There is thus an increasing contention
on scarce data center resources like network bandwidth; further, the energy
requirements for powering the computing equipment are also growing
dramatically. As we show empirically, increasing the execution parallelism by
spreading out data across a large number of machines may achieve the intended
goal of decreasing query latencies, but in most cases, may increase the total
resource and energy consumption significantly. For many analytical workloads,
however, minimizing query latencies is often not critical; in such scenarios,
we argue that we should instead focus on minimizing the average query span,
i.e., the average number of machines that are involved in processing of a
query, through colocation of data items that are frequently accessed together.
In this work, we exploit the fact that most distributed environments need to
use replication for fault tolerance, and we devise workload-driven replica
selection and placement algorithms that attempt to minimize the average query
span. We model a historical query workload trace as a hypergraph over a set of...
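The notion of average query span defined above has a direct operational reading. A minimal sketch, assuming a single-replica placement map (the paper's algorithms additionally choose among multiple replicas per item):

```python
def average_query_span(queries, placement):
    """Average query span: for each query (a set of data item names), count
    the distinct machines that hold its items under `placement`
    (item -> machine id), then average over the workload."""
    spans = [len({placement[item] for item in q}) for q in queries]
    return sum(spans) / len(spans)
```

Colocating items that are frequently queried together shrinks each query's machine set, and hence this average, which is the quantity the replica placement algorithms try to minimize.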
The Globus Data Grid architecture provides a scalable infrastructure for the
management of storage resources and data that are distributed across Grid
environments. These services are designed to support a variety of scientific
applications, ranging from high-energy physics to computational genomics, that
require access to large amounts of data (terabytes or even petabytes) with
varied quality of service requirements. By layering on a set of core services,
such as data transport, security, and replica cataloging, one can construct
various higher-level services. In this paper, we discuss the design and
implementation of a high-level replica selection service that uses information
regarding replica location and user preferences to guide selection from among
storage replica alternatives. We first present a basic replica selection
service design, then show how dynamic information collected using Globus
information service capabilities concerning storage system properties can help
improve and optimize the selection process. We demonstrate the use of Condor's
ClassAds resource description and matchmaking mechanism as an efficient tool
for representing and matching storage resource capabilities and policies
against application requirements.; Comment: 8 pages...
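The matchmaking idea described above — replicas advertise attributes, a request filters by its requirements, and a rank expression picks the best survivor — can be sketched in a few lines. The attribute names here are illustrative placeholders, not the actual ClassAd schema:

```python
def select_replica(replicas, min_free_gb, prefer="bandwidth_mbps"):
    """ClassAds-style matchmaking sketch: keep replicas whose advertised
    attributes satisfy the requirement, then return the best match
    according to the rank attribute (highest wins). Returns None if no
    replica qualifies."""
    eligible = [r for r in replicas if r["free_gb"] >= min_free_gb]
    return max(eligible, key=lambda r: r[prefer]) if eligible else None
```

In the real system, dynamic information (e.g., measured transfer rates from the information service) would feed the rank expression instead of a static attribute.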
We review a selection of methods for performing enhanced sampling in
molecular dynamics simulations. We consider methods based on collective
variable biasing and on tempering, and offer both historical and contemporary
perspectives. In collective-variable biasing, we first discuss methods stemming
from thermodynamic integration that use mean force biasing, including the
adaptive biasing force algorithm and temperature acceleration. We then turn to
methods that use bias potentials, including umbrella sampling and metadynamics.
We next consider parallel tempering and replica-exchange methods. We conclude
with a brief presentation of some combination methods.; Comment: Accepted for publication on Entropy
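Of the bias-potential methods surveyed, metadynamics has an especially compact core: a history-dependent bias built from Gaussian hills deposited at previously visited values of the collective variable, which discourages revisiting them. A 1D sketch with illustrative parameters, not any specific implementation:

```python
import math

def metadynamics_bias(s, centers, height=1.0, width=0.2):
    """History-dependent metadynamics bias at collective-variable value s:
    a sum of Gaussian hills of given height and width, one per previously
    visited value in `centers`."""
    return sum(height * math.exp(-((s - c) ** 2) / (2.0 * width ** 2))
               for c in centers)
```

As hills accumulate in a free-energy basin, the bias there grows until the system is pushed over the surrounding barriers, which is what accelerates the sampling.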
We study a simple solvable model describing the genesis of monomer sequences
for hetero-polymers (such as proteins), as the result of the equilibration of a
slow stochastic genetic selection process which is assumed to be driven by the
competing demands of functionality and reproducibility of the polymer's folded
structure. Since reproducibility is defined in terms of properties of the
folding process, one is led to the analysis of the coupled dynamics of (fast)
polymer folding and (slow) genetic sequence selection. For the present
mean-field model this analysis can be carried out using the finite-dimensional
replica method, leading to exact results for (first- and second-order)
transitions and to rich phase diagrams.; Comment: 21 pages, 7 figures
With the increasing energy cost in data centers, an energy-efficient approach to providing data-intensive services in the cloud is highly in demand. This thesis addresses the energy cost reduction problem of data centers by formulating an energy-aware replica selection problem that guides the distribution of workload among data centers. The currently popular centralized replica selection approaches address such problems, but they lack scalability and are vulnerable to a crash of the central coordinator. Also, they do not take total data center energy cost as the primary optimization target. We propose a simple decentralized replica selection system, implemented with two distributed optimization algorithms (the consensus-based distributed projected subgradient method and the Lagrangian dual decomposition method), that works with clients as a decentralized coordinator. We also compare our energy-aware replica selection approach with replica selection where a round-robin algorithm is implemented. A prototype of the decentralized replica selection system is designed and developed to collect energy consumption information from data centers. The results show that in the best-case scenario of our experiments, the total energy cost using the Lagrangian dual decomposition method is 17.8% less than a baseline round-robin method and 15.3% less than the consensus-based distributed projected subgradient method. Also...
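The two policies being compared can be caricatured in a few lines: the round-robin baseline ignores cost, while an energy-aware policy routes each request to the cheapest data center. This is a greedy stand-in for the thesis's optimization-based coordinators, with hypothetical names throughout:

```python
import itertools

def round_robin_selector(datacenters):
    """Baseline policy: cycle through data centers regardless of energy cost."""
    cycle = itertools.cycle(datacenters)
    return lambda: next(cycle)

def energy_aware_selector(datacenters, cost):
    """Greedy stand-in for the distributed-optimization coordinator: route
    each request to the data center with the lowest current energy cost.
    `cost` maps a data center to its cost per request."""
    return lambda: min(datacenters, key=cost)
```

The actual thesis methods (projected subgradient, Lagrangian dual decomposition) solve a global allocation problem across clients rather than making this purely local greedy choice.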