# The best tool for your research, papers, and final-year project (TCC)!

Page 1 of results: 740 digital items found in 0.006 seconds

## Bayesian analysis of ultrasound measurements of ribeye area and fat thickness and their associations with other economically important traits in the Nelore breed

Source: Universidade Estadual Paulista (UNESP)
Publisher: Universidade Estadual Paulista (UNESP)

Type: Doctoral thesis
Format: x, 84 f.: ill.

Language: Portuguese

Search relevance: 46.39%

Keywords: Nelore (Zebu); Carcass; Genetic correlation; Gibbs sampling; Heritability; Sexual precocity

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); Graduate Program in Genetics and Animal Breeding - FCAV. The aim of this work was to estimate genetic parameters for the traits ribeye area (AOL), subcutaneous fat thickness at the rib (EG), and rump fat thickness (EGP8), measured by ultrasound at yearling (A) and long-yearling (S) ages. In addition, genetic correlations were estimated among these ultrasound-measured carcass traits (CCUS), and between them and other economically important traits in beef cattle: long-yearling weight (PS), hip height (ALT), scrotal circumference at 450 days (PE450), age at first calving (IPP), and first calving interval (PIEP). Genetic parameters were estimated in multi-trait analyses under an animal model, using Bayesian inference via the Gibbs sampling algorithm. Posterior heritability estimates for the CCUS were 0.46 (AOL_A), 0.42 (EG_A), 0.60 (EGP8_A), 0.33 (AOL_S), 0.59 (EG_S), and 0.55 (EGP8_S), indicating that if these traits are used as selection criteria, they should respond quickly to individual selection...


## Gibbs sampling and helix-cap motifs

Source: Oxford University Press
Publisher: Oxford University Press

Type: Scientific journal article

Language: Portuguese

Search relevance: 46.39%

Protein backbones have characteristic secondary structures, including α-helices and β-sheets. Which structure is adopted locally is strongly biased by the local amino acid sequence of the protein. Accurate (probabilistic) mappings from sequence to structure are valuable for both secondary-structure prediction and protein design. For the case of α-helix caps, we test whether the information content of the sequence–structure mapping can be self-consistently improved by using a relaxed definition of the structure. We derive helix-cap sequence motifs using database helix assignments for proteins of known structure. These motifs are refined using Gibbs sampling in competition with a null motif. Then Gibbs sampling is repeated, allowing for frameshifts of ±1 amino acid residue, in order to find sequence motifs of higher total information content. All helix-cap motifs were found to have good generalization capability, as judged by training on a small set of non-redundant proteins and testing on a larger set. For overall prediction purposes, frameshift motifs using all training examples yielded the best results. Frameshift motifs using a fraction of all training examples performed best in terms of true positives among top predictions. However...
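The "total information content" that the motif refinement maximizes can be made concrete. The sketch below is illustrative only, not the authors' code: the amino-acid alphabet, the uniform background, and the pseudocount value are assumptions, and real motif work would use database-derived background frequencies.

```python
import math
from collections import Counter

def motif_information(sequences, alphabet="ACDEFGHIKLMNPQRSTVWY", pseudo=0.5):
    """Total information content (in bits) of an aligned sequence motif,
    measured against a uniform background: the sum over columns of
    sum_a p(a) * log2(p(a) / q(a)), with pseudocounts for smoothing."""
    q = 1.0 / len(alphabet)                        # uniform background frequency
    denom = len(sequences) + pseudo * len(alphabet)
    total = 0.0
    for col in range(len(sequences[0])):
        counts = Counter(seq[col] for seq in sequences)
        for a in alphabet:
            p = (counts.get(a, 0) + pseudo) / denom
            total += p * math.log2(p / q)
    return total

# A perfectly conserved motif carries more information than a diverse one.
conserved = motif_information(["AAAA"] * 10)
diverse = motif_information(["ACDE", "FGHI", "KLMN", "PQRS", "TVWY"])
```

Allowing frameshifts of ±1 residue, as in the abstract, amounts to re-aligning each sequence at the shift that maximizes this score before re-estimating the column frequencies.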


## Blocking Gibbs sampling for linkage analysis in large pedigrees with many loops.

Source: PubMed
Publisher: PubMed

Type: Scientific journal article

Published: 09/1999
Language: Portuguese

Search relevance: 46.49%

We apply the method of "blocking Gibbs" sampling to a problem of great importance and complexity: linkage analysis. Blocking Gibbs sampling combines exact local computations with Gibbs sampling, in a way that complements the strengths of both. The method is able to handle problems with very high complexity, such as linkage analysis in large pedigrees with many loops, a task that no other known method is able to handle. New developments of the method are outlined, and it is applied to a highly complex linkage problem in a human pedigree.
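Blocking Gibbs alternates exact local computations on blocks of variables with ordinary Gibbs updates. The component-wise updating it builds on can be sketched on a toy target, a standard bivariate normal; this is a generic illustration, not the pedigree-analysis implementation, and the correlation value is an arbitrary assumption.

```python
import random

def gibbs_bivariate_normal(rho, n_iter, burn_in=1000, seed=0):
    """Component-wise Gibbs sampler for a standard bivariate normal with
    correlation rho: each coordinate is drawn from its exact full
    conditional, N(rho * other, 1 - rho**2)."""
    rng = random.Random(seed)
    sd = (1.0 - rho ** 2) ** 0.5        # conditional standard deviation
    x = y = 0.0
    draws = []
    for it in range(burn_in + n_iter):
        x = rng.gauss(rho * y, sd)      # update x | y
        y = rng.gauss(rho * x, sd)      # update y | x
        if it >= burn_in:
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
```

In a pedigree, "blocking" would replace the scalar updates above with a joint draw of a whole block of related variables, which is what restores good mixing in highly looped graphs.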


## Influence of priors in Bayesian estimation of genetic parameters for multivariate threshold models using Gibbs sampling

Source: BioMed Central
Publisher: BioMed Central

Type: Scientific journal article

Published: 17/02/2007
Language: Portuguese

Search relevance: 46.54%

Simulated data were used to investigate the influence of the choice of priors on estimation of genetic parameters in multivariate threshold models using Gibbs sampling. We simulated additive values, residuals and fixed effects for one continuous trait and liabilities of four binary traits, and QTL effects for one of the liabilities. Within each of four replicates six different datasets were generated which resembled different practical scenarios in horses with respect to number and distribution of animals with trait records and availability of QTL information. (Co)Variance components were estimated using a Bayesian threshold animal model via Gibbs sampling. The Gibbs sampler was implemented with both a flat and a proper prior for the genetic covariance matrix. Convergence problems were encountered in > 50% of flat prior analyses, with indications of potential or near posterior impropriety between about round 10 000 and 100 000. Terminations due to non-positive definite genetic covariance matrix occurred in flat prior analyses of the smallest datasets. Use of a proper prior resulted in improved mixing and convergence of the Gibbs chain. In order to avoid (near) impropriety of posteriors and extremely poorly mixing Gibbs chains, a proper prior should be used for the genetic covariance matrix when implementing the Gibbs sampler.
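The role of a proper versus a flat prior can be illustrated on the simplest building block of such a sampler: the full-conditional draw for a single variance component. This is a generic univariate sketch, not the study's multivariate threshold model, and the inverse-gamma parameterization and default hyperparameters are assumptions.

```python
import random

def sample_variance(residuals, prior_shape=2.0, prior_scale=1.0, rng=None):
    """One Gibbs update of a variance component from its full conditional
    under a proper inverse-gamma(prior_shape, prior_scale) prior; the
    conditional is inverse-gamma(prior_shape + n/2, prior_scale + SSE/2).
    A flat prior corresponds to prior_shape = prior_scale = 0, which can
    leave the posterior improper or near-improper in small datasets."""
    rng = rng or random.Random()
    n = len(residuals)
    sse = sum(e * e for e in residuals)
    shape = prior_shape + 0.5 * n
    scale = prior_scale + 0.5 * sse
    # If G ~ Gamma(shape) with scale 1/scale, then 1/G ~ Inv-Gamma(shape, scale).
    return 1.0 / rng.gammavariate(shape, 1.0 / scale)
```

The abstract's recommendation amounts to keeping `prior_shape` and `prior_scale` strictly positive (a proper prior) so the conditional, and hence the Gibbs chain, stays well behaved even when the data contain little information about the covariance matrix.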


## Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

Source: BioMed Central
Publisher: BioMed Central

Type: Scientific journal article

Published: 15/03/2003
Language: Portuguese

Search relevance: 46.49%

A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed.
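The data-augmentation step described here, drawing a latent liability from a normal distribution truncated between threshold bounds, can be sketched with the inverse-CDF method. This is a generic univariate illustration using Python's standard-library `NormalDist`, not the paper's multivariate implementation.

```python
import random
from statistics import NormalDist

def sample_truncated_normal(mu, sigma, lo, hi, rng):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to [lo, hi]: the kind
    of update used to impute a latent liability between the thresholds
    implied by an observed category."""
    nd = NormalDist(mu, sigma)
    u = rng.uniform(nd.cdf(lo), nd.cdf(hi))   # uniform on the CDF interval
    return nd.inv_cdf(u)

rng = random.Random(2)
# e.g. liabilities for observations whose category pins them between 0 and 1
liabilities = [sample_truncated_normal(0.0, 1.0, 0.0, 1.0, rng)
               for _ in range(1000)]
```

In the multivariate setting of the paper, the same idea applies coordinate-wise to a truncated multivariate normal, with `mu` and `sigma` replaced by the conditional mean and variance given the other traits.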


## Posterior analysis of stochastic frontier models using Gibbs sampling

Source: Universidad Carlos III de Madrid
Publisher: Universidad Carlos III de Madrid

Type: Work in progress
Format: application/pdf

Published: 12/1992
Language: Portuguese

Search relevance: 66.49%

In this paper we describe the use of Gibbs sampling methods for making posterior inferences in stochastic frontier models with composed error. We show how the Gibbs sampler can greatly reduce the computational difficulties involved in analyzing such models. Our findings are illustrated in an empirical example.


## Gibbs sampling will fail in outlier problems with strong masking

Source: Universidad Carlos III de Madrid
Publisher: Universidad Carlos III de Madrid

Type: Work in progress
Format: application/pdf

Published: 06/1995
Language: Portuguese

Search relevance: 46.39%

This paper discusses the convergence of the Gibbs sampling algorithm when it is applied to the problem of outlier detection in regression models. Given any vector of initial conditions, the algorithm theoretically converges to the true posterior distribution. However, the speed of convergence may slow down in a high-dimensional parameter space where the parameters are highly correlated. We show that the effect of leverage in regression models makes convergence of the Gibbs sampling algorithm very difficult for datasets with strong masking. The problem is illustrated in several examples.


## Adaptive Gibbs sampling algorithms for the identification of heterogeneity in regression and time series

Source: Universidad Carlos III de Madrid
Publisher: Universidad Carlos III de Madrid

Type: Doctoral thesis
Format: application/pdf

Language: Portuguese

Search relevance: 46.49%

The main objective of this doctoral thesis is to develop new procedures for identifying outlying observations that introduce heterogeneity into samples of independent and dependent data. Two different algorithms, based on the Gibbs sampling algorithm, are proposed for regression and time-series problems. As with classical outlier-identification methods, it is shown that the standard application of Gibbs sampling does not correctly identify these outliers in problems with masked groups of outlying observations. Given any vector of initial values, the algorithm theoretically converges to the true posterior distribution of the parameters; however, the speed of convergence can be extremely slow when the parameter space is high-dimensional and the parameters are highly correlated. The new algorithms discussed in this work use a learning process to adapt the initial conditions of the Gibbs sampler and improve its convergence to the posterior distribution of the model parameters.


## An estimation of distribution algorithm with adaptive Gibbs sampling for unconstrained global optimization

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Language: Portuguese

Search relevance: 46.39%

Subjects: Computer Science - Neural and Evolutionary Computing; Mathematics - Optimization and Control; Statistics - Machine Learning

This paper proposes a new heuristic approach in the field of evolutionary Estimation of Distribution Algorithms (EDAs). An EDA builds a probability model that characterizes the distribution of promising solutions, and a set of new solutions is sampled from that model. The main framework of the proposed method is an estimation of distribution algorithm in which adaptive Gibbs sampling is used to generate new promising solutions and, in combination with a local search strategy, improves the individual solutions produced in each iteration. We call the resulting Estimation of Distribution Algorithm with Adaptive Gibbs Sampling AGEDA. We experimentally evaluate and compare this algorithm against two deterministic procedures and several stochastic methods on three well-known test problems for unconstrained global optimization. It is empirically shown that our heuristic is robust on problems that involve three central aspects that largely determine the difficulty of global optimization problems, namely high dimensionality, multi-modality, and non-smoothness.; Comment: This paper has been withdrawn by the author at the request of the journal in which it had been accepted for publication


## Modulation Classification via Gibbs Sampling Based on a Latent Dirichlet Bayesian Network

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Language: Portuguese

Search relevance: 46.39%

A novel Bayesian modulation classification scheme is proposed for a
single-antenna system over frequency-selective fading channels. The method is
based on Gibbs sampling as applied to a latent Dirichlet Bayesian network (BN).
The use of the proposed latent Dirichlet BN provides a systematic solution to
the convergence problem encountered by the conventional Gibbs sampling approach
for modulation classification. The method generalizes, and is shown to improve
upon, the state of the art.; Comment: Contains corrections with respect to the version to appear on IEEE
Signal Processing Letters (see Fig. 2)


## Accelerated Gibbs sampling of normal distributions using matrix splittings and polynomials

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 13/05/2015
Language: Portuguese

Search relevance: 46.6%

Standard Gibbs sampling applied to a multivariate normal distribution with a
specified precision matrix is equivalent in fundamental ways to the
Gauss-Seidel iterative solution of linear equations in the precision matrix.
Specifically, the iteration operators, the conditions under which convergence
occurs, and geometric convergence factors (and rates) are identical. These
results hold for arbitrary matrix splittings from classical iterative methods
in numerical linear algebra giving easy access to mature results in that field,
including existing convergence results for antithetic-variable Gibbs sampling,
REGS sampling, and generalizations. Hence, efficient deterministic stationary
relaxation schemes lead to efficient generalizations of Gibbs sampling. The
technique of polynomial acceleration that significantly improves the
convergence rate of an iterative solver derived from a \emph{symmetric} matrix
splitting may be applied to accelerate the equivalent generalized Gibbs
sampler. Identicality of error polynomials guarantees convergence of the
inhomogeneous Markov chain, while equality of convergence factors ensures that
the optimal solver leads to the optimal sampler. Numerical examples are
presented, including a Chebyshev accelerated SSOR Gibbs sampler applied to a
stylized demonstration of low-level Bayesian image reconstruction in a large
3-dimensional linear inverse problem.; Comment: 33 pages...
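The Gauss-Seidel correspondence can be seen directly in code: one component-wise Gibbs sweep targeting N(0, A^{-1}) performs exactly a Gauss-Seidel sweep for Ax = 0 on the conditional means, plus per-coordinate noise. The following is an illustrative sketch of the plain (unaccelerated) sweep, not the paper's polynomial-accelerated samplers; the precision matrix is an arbitrary assumption.

```python
import numpy as np

def gibbs_sweep(A, x, rng):
    """One forward Gibbs sweep targeting N(0, A^{-1}), with A a precision
    matrix. The conditional-mean update is exactly a Gauss-Seidel step for
    A x = 0; the injected noise makes the target normal the stationary
    distribution of the sweep."""
    for i in range(len(x)):
        # full conditional: x_i | x_-i ~ N(-sum_{j!=i} A_ij x_j / A_ii, 1/A_ii)
        x[i] = (-(A[i] @ x - A[i, i] * x[i]) / A[i, i]
                + rng.normal(0.0, 1.0 / np.sqrt(A[i, i])))
    return x

rng = np.random.default_rng(0)
A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # precision; target covariance inv(A)
x = np.zeros(2)
samples = np.array([gibbs_sweep(A, x, rng).copy() for _ in range(30000)])
```

Replacing this splitting with SSOR and applying Chebyshev polynomial acceleration, as the abstract describes, speeds up convergence without changing the stationary distribution.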


## Minimum Message Length Clustering Using Gibbs Sampling

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 16/01/2013
Language: Portuguese

Search relevance: 46.39%

The K-means and EM algorithms are popular in clustering and mixture modeling due to their simplicity and ease of implementation. However, they have several significant limitations. Both converge to a local optimum of their respective objective functions (ignoring the uncertainty in the model space), require the a priori specification of the number of classes/clusters, and are inconsistent. In this work we overcome these limitations by using the Minimum Message Length (MML) principle and a variation of the K-means/EM observation-assignment and parameter-calculation scheme. We maintain the simplicity of these approaches while constructing a Bayesian mixture-modeling tool that samples/searches the model space using a Markov chain Monte Carlo (MCMC) sampler known as a Gibbs sampler. Gibbs sampling allows us to visit each model according to its posterior probability. Therefore, if the model space is multi-modal, we will visit all models and not get stuck in local optima. We call our approach multiple chains at equilibrium (MCE) MML sampling.; Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)


## Sex as Gibbs Sampling: a probability model of evolution

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 11/02/2014
Language: Portuguese

Search relevance: 46.39%

Subjects: Quantitative Biology - Populations and Evolution; Computer Science - Neural and Evolutionary Computing

We show that evolutionary computation can be implemented as standard
Markov-chain Monte-Carlo (MCMC) sampling. With some care, `genetic algorithms'
can be constructed that are reversible Markov chains that satisfy detailed
balance; it follows that the stationary distribution of populations is a Gibbs
distribution in a simple factorised form. For some standard and popular
nonparametric probability models, we exhibit Gibbs-sampling procedures that are
plausible genetic algorithms. At mutation-selection equilibrium, a population
of genomes is analogous to a sample from a Bayesian posterior, and the genomes
are analogous to latent variables. We suggest this is a general, tractable, and
insightful formulation of evolutionary computation in terms of standard machine
learning concepts and techniques.
In addition, we show that evolutionary processes in which selection acts by
differences in fecundity are not reversible, and also that it is not possible
to construct reversible evolutionary models in which each child is produced by
only two parents.


## Efficient Gibbs Sampling for Markov Switching GARCH Models

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 21/12/2012
Language: Portuguese

Search relevance: 46.39%

We develop efficient simulation techniques for Bayesian inference on switching GARCH models. Our contributions to the existing literature are manifold. First, we discuss different multi-move sampling techniques for Markov Switching (MS) state space models, with particular attention to MS-GARCH models. Our multi-move sampling strategy is based on Forward Filtering Backward Sampling (FFBS) applied to an approximation of MS-GARCH. Another important contribution is the use of multi-point samplers, such as the Multiple-Try Metropolis (MTM) and the Multiple-Trial Metropolized Independent Sampler, in combination with FFBS for the MS-GARCH process. In this sense we extend to MS state space models the work of So [2006] on an efficient MTM sampler for continuous state space models. Finally, we suggest further improving the sampler's efficiency by introducing the antithetic sampling of Craiu and Meng [2005] and Craiu and Lemieux [2007] within the FFBS. Our simulation experiments on the MS-GARCH model show that our multi-point and multi-move strategies allow the sampler to gain efficiency compared with single-move Gibbs sampling.; Comment: 38 pages, 7 figures


## Asynchronous Distributed Gibbs Sampling

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 29/09/2015
Language: Portuguese

Search relevance: 46.39%

Gibbs sampling is a widely used Markov Chain Monte Carlo (MCMC) method for
numerically approximating integrals of interest in Bayesian statistics and
other mathematical sciences. It is widely believed that MCMC methods do not
extend easily to parallel implementations, as their inherently sequential
nature incurs a large synchronization cost. This means that new solutions are
needed to bring Bayesian analysis fully into the era of large-scale
computation. In this paper, we present a novel scheme - Asynchronous
Distributed Gibbs (ADG) sampling - that allows us to perform MCMC in a parallel
fashion with no synchronization or locking, avoiding the typical performance
bottlenecks of parallel algorithms. Our method is especially attractive in settings where the problem dimension grows with the sample size, such as hierarchical random-effects models in which each observation has its own random effect. We prove convergence under some basic regularity conditions,
and discuss the proof for similar parallelization schemes for other iterative
algorithms. We provide three examples that illustrate some of the algorithm's
properties with respect to scaling. Because our hardware resources are bounded,
we have not yet found a limit to the algorithm's scaling...


## On particle Gibbs sampling

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Language: Portuguese

Search relevance: 46.56%

The particle Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm to
sample from the full posterior distribution of a state-space model. It does so
by executing Gibbs sampling steps on an extended target distribution defined on
the space of the auxiliary variables generated by an interacting particle
system. This paper makes the following contributions to the theoretical study
of this algorithm. Firstly, we present a coupling construction between two
particle Gibbs updates from different starting points and we show that the
coupling probability may be made arbitrarily close to one by increasing the
number of particles. We obtain as a direct corollary that the particle Gibbs
kernel is uniformly ergodic. Secondly, we show how the inclusion of an
additional Gibbs sampling step that reselects the ancestors of the particle
Gibbs' extended target distribution, which is a popular approach in practice to
improve mixing, does indeed yield a theoretically more efficient algorithm as
measured by the asymptotic variance. Thirdly, we extend particle Gibbs to work
with lower variance resampling schemes. A detailed numerical study is provided
to demonstrate the efficiency of particle Gibbs and the proposed variants.; Comment: Published at http://dx.doi.org/10.3150/14-BEJ629 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)


## Quantum Gibbs Sampling Using Szegedy Operators

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Language: Portuguese

Search relevance: 46.45%

We present an algorithm for doing Gibbs sampling on a quantum computer. The
algorithm combines phase estimation for a Szegedy operator, and Grover's
algorithm. For any $\epsilon>0$, the algorithm will sample a probability
distribution in ${\cal O}(\frac{1}{\sqrt{\delta}})$ steps with precision ${\cal
O}(\epsilon)$. Here $\delta$ is the distance between the two largest eigenvalue
magnitudes of the transition matrix of the Gibbs Markov chain used in the
algorithm. It takes ${\cal O}(\frac{1}{\delta})$ steps to achieve the same
precision if one does Gibbs sampling on a classical computer.; Comment: V1-17 pages (8 files: 1 .tex, 2 .sty, 5 .eps); V2-many minor changes to improve clarity


## Fast Parallel SAME Gibbs Sampling on General Discrete Bayesian Networks

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 19/11/2015
Language: Portuguese

Search relevance: 46.42%

A fundamental task in machine learning and related fields is to perform
inference on Bayesian networks. Since exact inference takes exponential time in
general, a variety of approximate methods are used. Gibbs sampling is one of
the most accurate approaches and provides unbiased samples from the posterior
but it has historically been too expensive for large models. In this paper, we
present an optimized, parallel Gibbs sampler augmented with state replication
(SAME or State Augmented Marginal Estimation) to decrease convergence time. We
find that SAME can improve the quality of parameter estimates while
accelerating convergence. Experiments on both synthetic and real data show that
our Gibbs sampler is substantially faster than the state-of-the-art sampler JAGS, without sacrificing accuracy. Our ultimate objective is to introduce the
Gibbs sampler to researchers in many fields to expand their range of feasible
inference problems.


## Quibbs, a Code Generator for Quantum Gibbs Sampling

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Language: Portuguese

Search relevance: 46.39%

This paper introduces Quibbs v1.3, a Java application available for free.
(Source code included in the distribution.) Quibbs is a "code generator" for
quantum Gibbs sampling: after the user inputs some files that specify a
classical Bayesian network, Quibbs outputs a quantum circuit for performing
Gibbs sampling of that Bayesian network on a quantum computer. Quibbs
implements an algorithm described in earlier papers, that combines various
apple pie techniques such as: an adaptive fixed-point version of Grover's
algorithm, Szegedy operators, quantum phase estimation and quantum
multiplexors.; Comment: 21 pages (16 files: 1 .tex, 1 .sty, 14 .pdf);V2-added 2 new
files(.xxx, .txt) txt file contains quibbs1.4 source code


## Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published: 02/10/2015
Language: Portuguese

Search relevance: 46.54%

Gibbs sampling on factor graphs is a widely used inference technique, which
often produces good empirical results. Theoretical guarantees for its
performance are weak: even for tree structured graphs, the mixing time of Gibbs
may be exponential in the number of variables. To help understand the behavior
of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy
width. We show that under suitable conditions on the weights, bounded hierarchy
width ensures polynomial mixing time. Our study of hierarchy width is in part
motivated by a class of factor graph templates, hierarchical templates, which
have bounded hierarchy width, regardless of the data used to instantiate them.
We demonstrate a rich application from natural language processing in which
Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human
volunteers.
