Page 1 of results: 772 digital items found in 0.010 seconds

A comparative study of Bayesian approaches to hypothesis testing

Melo, Brian Alvarez Ribeiro de
Source/Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's dissertation Format: application/pdf
Published 04/03/2013 Portuguese
Search relevance: 36.2%
In this work we consider a finite population composed of N elements, where each unit has an associated number (or vector) such that the population is described by the vector of values X = (X1, ..., XN), where Xi denotes the characteristic of interest of the i-th individual in the population, which we assume unknown. Here we assume that the distribution of the vector X is exchangeable and that a sample of n < N elements is available. The goals are to construct hypothesis tests for the operational parameters, using the posterior distributions obtained under the predictivistic approach for finite populations, and to compare them with the results obtained from Bayesian superpopulation models. The analyses consider the Bernoulli, Poisson, Discrete Uniform and Multinomial models. From the results obtained, we illustrate situations in which the two approaches produce different results, how priors influence the results, and when the finite-population model performs better than the superpopulation model.
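The Bernoulli case can be sketched numerically. In this hypothetical example (the counts and the uniform Beta(1,1) prior are illustrative assumptions, not taken from the dissertation), the unsampled successes follow a Beta-Binomial distribution, so a finite-population probability statement about the operational parameter T/N can be set beside the superpopulation posterior for theta:

```python
import numpy as np
from scipy.stats import betabinom, beta

N, n = 100, 20          # population and sample sizes (illustrative)
s = 13                  # successes observed in the sample

# Superpopulation view: posterior of theta is Beta(s+1, n-s+1)
p_theta = beta.sf(0.5, s + 1, n - s + 1)

# Finite-population view: unsampled successes U ~ Beta-Binomial(N-n, s+1, n-s+1),
# so the population total is T = s + U and we test T/N > 0.5, i.e. T >= 51
u = np.arange(N - n + 1)
pmf_u = betabinom.pmf(u, N - n, s + 1, n - s + 1)
p_total = pmf_u[(s + u) > N / 2].sum()
print(round(p_theta, 3), round(p_total, 3))
```

With a moderate sample the two probabilities are close but not identical, which is the kind of divergence between the approaches that the dissertation examines.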

Bayesian tests for marginal homogeneity in contingency tables

Carvalho, Helton Graziadei de
Source/Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's dissertation Format: application/pdf
Published 06/08/2015 Portuguese
Search relevance: 36.12%
The problem of testing hypotheses about the marginal proportions of a contingency table plays a fundamental role, for example, in the investigation of changes in opinion and behaviour. Nevertheless, most texts in the literature address procedures for independent populations, such as the test of homogeneity of proportions. Some works explore hypothesis tests for dependent responses, for instance McNemar's test for 2 x 2 tables. The extension of this test to k x k tables, known as the marginal homogeneity test, usually requires asymptotic approximations under the classical approach. However, when the sample size is small or the data are sparse, such methods can produce inaccurate results. In this work we review classical and Bayesian measures of evidence commonly employed to compare two marginal proportions. In addition, we develop the Full Bayesian Significance Test (FBST) for testing marginal homogeneity in two-dimensional and multidimensional contingency tables. The FBST is based on a measure of evidence, called the e-value, which does not rely on asymptotic results, does not violate the likelihood principle, and satisfies several logical properties expected of hypothesis tests. Consequently...
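A minimal sketch of the FBST e-value for the 2 x 2 case, assuming a uniform prior and conditioning on the discordant counts, so that marginal homogeneity reduces to q = p12/(p12+p21) = 1/2 (the counts are illustrative, not the dissertation's data):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n12, n21 = 12, 5                 # discordant counts (illustrative)
a, b = n12 + 1, n21 + 1          # Beta posterior for q under a uniform prior

f0 = beta.pdf(0.5, a, b)         # posterior density at the null point q = 1/2
q = rng.beta(a, b, size=100_000) # Monte Carlo draws from the posterior
# e-value: posterior mass outside the tangential set {q : pdf(q) > f0}
ev = float(np.mean(beta.pdf(q, a, b) <= f0))
print(round(ev, 3))
```

A small e-value indicates evidence against marginal homogeneity; no asymptotic approximation is involved, only the exact posterior.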

Bayesian computerized adaptive testing

Veldkamp, Bernard P.; Matteucci, Mariagiulia
Source/Publisher: Fundação CESGRANRIO
Type: Journal article Format: text/html
Published 01/03/2013 Portuguese
Search relevance: 36.25%
Computerized adaptive testing (CAT) comes with many advantages. Unfortunately, it is still quite expensive to develop and maintain an operational CAT. In this paper, the various steps involved in developing an operational CAT are described and the literature on these topics is reviewed. Bayesian CAT is introduced as an alternative, and the use of empirical priors is proposed for estimating item and person parameters to reduce the costs of CAT. Methods for eliciting empirical priors are presented, and two small examples illustrate the advantages of Bayesian CAT. Implications of the use of empirical priors are discussed, limitations are mentioned and some suggestions for further research are formulated.

Multiplicity-calibrated Bayesian hypothesis tests

Guo, Mengye; Heitjan, Daniel F.
Source/Publisher: Oxford University Press
Type: Journal article
Published 07/2010 Portuguese
Search relevance: 36.2%
When testing multiple hypotheses simultaneously, there is a need to adjust the levels of the individual tests to effect control of the family-wise error rate (FWER). Standard frequentist adjustments control the error rate but are typically both conservative and oblivious to prior information. We propose a Bayesian testing approach—multiplicity-calibrated Bayesian hypothesis testing—that sets individual critical values to reflect prior information while controlling the FWER via the Bonferroni inequality. If the prior information is specified correctly, in the sense that those null hypotheses considered most likely to be false in fact are false, the power of our method is substantially greater than that of standard frequentist approaches. We illustrate our method using data from a pharmacogenetic trial and a preclinical cancer study. We demonstrate its error rate control and power advantage by simulation.
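A hedged sketch of the core idea, not the authors' exact calibration: split the family-wise level alpha across the m tests in proportion to the prior probability that each null is false, so that the individual levels still sum to alpha and the Bonferroni inequality bounds the FWER. All numbers below are illustrative:

```python
import numpy as np

def calibrated_alphas(prior_false, alpha=0.05):
    """Allocate the family-wise level across tests in proportion to the
    prior probability that each null is false; sum(alpha_i) = alpha, so
    the Bonferroni inequality still controls the FWER."""
    w = np.asarray(prior_false, dtype=float)
    return alpha * w / w.sum()

prior_false = [0.6, 0.3, 0.05, 0.05]      # analyst's prior beliefs (illustrative)
alphas = calibrated_alphas(prior_false)
pvals = np.array([0.012, 0.020, 0.004, 0.300])
reject = pvals < alphas                   # per-test decisions
print(alphas.round(4), reject)
```

Hypotheses believed a priori to be false get a larger share of the error budget, which is the source of the power gain the abstract describes.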

Testing for Divergent Transmission Histories among Cultural Characters: a Study Using Bayesian Phylogenetic Methods and Iranian Tribal Textile Data

Matthews, Luke J.; Tehrani, Jamie J.; Jordon, Fiona M.; Collard, Mark; Nunn, Charles Lindsay
Source/Publisher: Public Library of Science
Type: Journal article
Portuguese
Search relevance: 36.1%
Background: Archaeologists and anthropologists have long recognized that different cultural complexes may have distinct descent histories, but they have lacked analytical techniques capable of easily identifying such incongruence. Here, we show how Bayesian phylogenetic analysis can be used to identify incongruent cultural histories. We employ the approach to investigate Iranian tribal textile traditions. Methods: We used Bayes factor comparisons in a phylogenetic framework to test two models of cultural evolution: the hierarchically integrated system hypothesis and the multiple coherent units hypothesis. In the hierarchically integrated system hypothesis, a core tradition of characters evolves through descent with modification and characters peripheral to the core are exchanged among contemporaneous populations. In the multiple coherent units hypothesis, a core tradition does not exist. Rather, there are several cultural units consisting of sets of characters that have different histories of descent. Results: For the Iranian textiles, the Bayesian phylogenetic analyses supported the multiple coherent units hypothesis over the hierarchically integrated system hypothesis. Our analyses suggest that pile-weave designs represent a distinct cultural unit that has a different phylogenetic history compared to other textile characters. Conclusions: The results from the Iranian textiles are consistent with the available ethnographic evidence...

Residual Diagnostic Methods for Bayesian Structural Equation Models

Stokes-Riner, Abbie; Thurston, Sally W.
Source/Publisher: University of Rochester
Type: Doctoral thesis
Portuguese
Search relevance: 36.1%
Thesis (Ph.D.)--University of Rochester. School of Medicine and Dentistry. Dept. of Biostatistics and Computational Biology, 2009.; Often environmental epidemiological studies focus on estimating effects of highly correlated exposures on a health condition measured with multiple outcomes. Adjusting for all exposures as separate covariates in a multiple linear regression model can cause multicollinearity. Furthermore, fitting separate models for each exposure and outcome combination leads to problems of multiple comparisons and may introduce confounding with the exposures left out of the model. Structural equation modeling (SEM) alleviates these issues by assuming a latent variable structure underlying the observed exposures and outcomes. Bayesian methods for estimating the parameters in SEM treat the latent variables as missing data and impute them as part of a Markov Chain Monte Carlo (MCMC) sampler, resulting in the full posterior distribution for both the parameters and the latent variables. Bayesian SEM is reviewed and illustrated with a model analyzing the effects of phthalate exposure on human semen quality. Although methods exist for checking overall goodness-of-fit in SEM, little attention has been given to testing specific model assumptions. Individual-level residuals are easy to estimate in the Bayesian SEM and are used to define posterior predictive checks for model assumptions. The empirical cumulative distribution function of the residuals is used to test the assumption that the residual error is normally distributed. Cumulative sums of the residuals are used to check the assumption that the predictors have a linear relationship to the dependent variables in the model equations. The validity of the posterior predictive checks is examined through simulation studies...

Bayesian Testing of Granger Causality in Markov-Switching VARs

DROUMAGUET, Matthieu; WOŹNIAK, Tomasz
Source/Publisher: European University Institute
Type: Working paper Format: application/pdf; digital
Portuguese
Search relevance: 46.15%
Recent economic developments have shown the importance of spillover and contagion effects in financial markets as well as in macroeconomic reality. Such effects are not limited to relations between the levels of variables but also impact on the volatility and the distributions. We propose a method of testing restrictions for Granger noncausality on all these levels in the framework of Markov-switching Vector Autoregressive Models. The conditions for Granger noncausality for these models were derived by Warne (2000). Due to the nonlinearity of the restrictions, classical tests have limited use. We, therefore, choose a Bayesian approach to testing. The inference consists of a novel Gibbs sampling algorithm for estimation of the restricted models, and of standard methods of computing the Posterior Odds Ratio. The analysis may be applied to financial and macroeconomic time series with complicated properties, such as changes of parameter values over time and heteroskedasticity.
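The Posterior Odds Ratio step can be illustrated in isolation. The log marginal likelihoods below are made-up placeholders; in the paper they would come from the Gibbs-sampler output for the restricted (noncausal) and unrestricted Markov-switching VAR:

```python
import math

def posterior_odds(log_ml_restricted, log_ml_unrestricted,
                   prior_restricted=0.5):
    """Posterior odds of the restricted (Granger-noncausal) model against
    the unrestricted one, from log marginal data densities and a prior
    model probability."""
    log_bf = log_ml_restricted - log_ml_unrestricted   # log Bayes factor
    prior_odds = prior_restricted / (1.0 - prior_restricted)
    return math.exp(log_bf) * prior_odds

po = posterior_odds(-1234.7, -1236.2)   # illustrative log marginal likelihoods
print(round(po, 2))                      # > 1 favours Granger noncausality
```

Working on the log scale avoids underflow, since marginal likelihoods of long time series are astronomically small numbers.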

Granger-Causal Analysis of Conditional Mean and Volatility Models

WOŹNIAK, Tomasz
Source/Publisher: European University Institute
Type: Doctoral thesis Format: application/pdf; digital
Portuguese
Search relevance: 36.25%
Recent economic developments have shown the importance of spillover and contagion effects in financial markets as well as in macroeconomic reality. Such effects are not limited to relations between the levels of variables but also impact on the volatility and the distributions. Granger causality in conditional means and conditional variances of time series is investigated in the framework of several popular multivariate econometric models. Bayesian inference is proposed as a method of assessing the hypotheses of Granger noncausality. First, the family of ECCC-GARCH models is used in order to perform inference about Granger-causal relations in second conditional moments. The restrictions for second-order Granger noncausality between two vectors of variables are derived. Further, in order to investigate Granger causality in the conditional means and conditional variances of time series, VARMA-GARCH models are employed. Parametric restrictions are derived for the hypothesis of noncausality in conditional variances between two groups of variables when there are other variables in the system as well. These novel conditions are convenient for the analysis of potentially large systems of economic variables. Bayesian testing procedures applied to these two problems...

Testing the Storm et al. (2010) meta-analysis using Bayesian and frequentist approaches: Reply to Rouder et al. (2013)

Storm, L.; Tressoldi, P.; Utts, J.
Source/Publisher: American Psychological Association
Type: Journal article
Published 2013 Portuguese
Search relevance: 36.13%
Rouder, Morey, and Province (2013) stated that (a) the evidence-based case for psi in Storm, Tressoldi, and Di Risio's (2010) meta-analysis is supported only by a number of studies that used manual randomization, and (b) when these studies are excluded so that only investigations using automatic randomization are evaluated (and some additional studies previously omitted by Storm et al., 2010, are included), the evidence for psi is “unpersuasive.” Rouder et al. used a Bayesian approach, and we adopted the same methodology, finding that our case is upheld. Because of recent updates and corrections, we reassessed the free-response databases of Storm et al. using a frequentist approach. We discuss and critique the assumptions and findings of Rouder et al.

Hypothesis testing and Bayesian model selection for time series with a unit root

Silva, Ricardo Gonçalves da
Source/Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's dissertation Format: application/pdf
Published 23/06/2004 Portuguese
Search relevance: 36.36%
The literature on hypothesis testing in autoregressive models with a possible unit root is vast and spans research from several areas. This dissertation first reviews the main existing results, from both the classical and the Bayesian views of inference. Regarding the classical toolkit, the role of Brownian motion is presented in detail, emphasizing its use in deriving asymptotic statistics for hypothesis tests concerning the presence of a unit root. Regarding Bayesian inference, a detailed examination of the current state of the literature is first conducted. Next, a comparative study tests the unit-root hypothesis based on the posterior density of the model parameter, considering the following priors: flat, Jeffreys, Normal and Beta. Inference is carried out with the Metropolis-Hastings algorithm, using Markov Chain Monte Carlo (MCMC) simulation. The power, size and confidence of the tests are computed on simulated series. Finally...
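A toy version of the Bayesian side, assuming a flat prior and a known error variance for brevity (the dissertation also considers Jeffreys, Normal and Beta priors): a random-walk Metropolis-Hastings sampler for the AR(1) coefficient, with the posterior probability of a unit root read off the draws.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series with rho close to one (error variance fixed at 1)
T, rho_true = 200, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

def log_post(rho):
    # Flat prior => log posterior equals the conditional Gaussian log-likelihood
    resid = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(resid ** 2)

# Random-walk Metropolis-Hastings over rho
draws, rho = [], 0.5
lp = log_post(rho)
for _ in range(20000):
    prop = rho + 0.02 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        rho, lp = prop, lp_prop
    draws.append(rho)

post = np.array(draws[5000:])                  # discard burn-in
print(round(post.mean(), 3), round(float(np.mean(post >= 1.0)), 3))
```

The fraction of retained draws with rho >= 1 is a direct posterior probability of the unit-root hypothesis, in contrast with the asymptotic distributions needed on the classical side.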

Bayesian inference for extremes

Diego Fernando de Bernardini
Source/Publisher: Biblioteca Digital da Unicamp
Type: Master's dissertation Format: application/pdf
Published 23/02/2010 Portuguese
Search relevance: 36.16%
We begin this work with a brief introduction to extreme value theory, studying in particular the behaviour of the random variable representing the maximum of a sequence of independent and identically distributed random variables. The Extremal Types Theorem (or Fisher-Tippett Theorem) is a fundamental tool for studying the asymptotic behaviour of these maxima, allowing data that represent a sequence of observed maxima of a given random phenomenon or process to be modelled through the class of distributions known as the Generalized Extreme Value (GEV) family. The Gumbel distribution, associated with the maximum of distributions such as the Normal or the Gamma, among others, is a particular case of this family. It is therefore of interest to carry out inference for the parameters of this family. Specifically, the comparison between the Gumbel and GEV models is the main focus of this work. In Chapter 1 we study, in the context of classical inference, maximum likelihood estimation of these parameters and a likelihood ratio test procedure suited to testing the null hypothesis representing the Gumbel model against the hypothesis representing the full GEV model. We then proceed...
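The Chapter 1 likelihood-ratio test of Gumbel (shape parameter zero) against the full GEV can be sketched as follows; the simulated block maxima and the scipy-based fits are illustrative assumptions, not the work's actual data:

```python
import numpy as np
from scipy.stats import genextreme, gumbel_r, chi2

rng = np.random.default_rng(1)
# Block maxima of standard-normal samples (Gumbel domain of attraction)
maxima = rng.standard_normal((200, 365)).max(axis=1)

# Fit the Gumbel (null) and full GEV (alternative) models by maximum likelihood
loc_g, scale_g = gumbel_r.fit(maxima)
c, loc, scale = genextreme.fit(maxima)
ll_gumbel = gumbel_r.logpdf(maxima, loc_g, scale_g).sum()
ll_gev = genextreme.logpdf(maxima, c, loc, scale).sum()

lr = 2 * (ll_gev - ll_gumbel)        # ~ chi^2(1) under H0 (Gumbel model)
reject = lr > chi2.ppf(0.95, df=1)
print(round(lr, 2), reject)
```

Because the Gumbel model is nested in the GEV family, the statistic is non-negative up to numerical error, and here we would expect it to be small since normal maxima lie in the Gumbel domain of attraction.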

A Bayesian approach for NDT data fusion: The Saint Torcato Church case study

Ramos, Luís F.; Miranda, Tiago F. S.; Mishra, M.; Fernandes, Francisco Manuel Carvalho Pinto; Manning, Elizabeth Campbell
Source/Publisher: Elsevier
Type: Journal article
Published 2015 Portuguese
Search relevance: 36.13%
This paper presents a methodology based on Bayesian data fusion techniques applied to non-destructive and destructive tests for the structural assessment of historical constructions. The aim of the methodology is to reduce the uncertainties of the parameter estimation. The Young's modulus of granite stones was chosen as an example for the present paper. The methodology considers several levels of uncertainty, since the parameters of interest are considered random variables with random moments. A new concept, the Trust Factor, was introduced to adjust the uncertainty associated with each test result, expressed through its standard deviation, according to the higher or lower reliability of each test for predicting a given parameter.; The authors would like to acknowledge the Fundação para a Ciência e Tecnologia, which supported this research work as a part of the Project “Improved and innovative techniques for the diagnosis and monitoring of historical masonry”, PTDC/ECM/104045/2008.

Testing hypotheses via a mixture estimation model

Kamary, Kaniav; Mengersen, Kerrie; Robert, Christian P.; Rousseau, Judith
Source/Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 36.2%
We consider a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. Our alternative to the traditional construction of posterior probabilities that a given hypothesis is true or that the data originates from a specific model is to consider the models under comparison as components of a mixture model. We therefore replace the original testing problem with an estimation one that focuses on the probability weight of a given model within a mixture model. We analyze the sensitivity of the resulting posterior distribution of the weights to various prior models on the weights. We stress that a major appeal in using this novel perspective is that generic improper priors are acceptable, while not putting convergence in jeopardy. Among other features, this allows for a resolution of the Lindley-Jeffreys paradox. When using a reference Beta B(a,a) prior on the mixture weights, we note that the sensitivity of the posterior estimates of the weights to the choice of a vanishes as the sample size increases, and we advocate the default choice a=0.5, derived from Rousseau and Mengersen (2011). Another feature of this easily implemented alternative to the classical Bayesian solution is that the speeds of convergence of the posterior mean of the weight and of the corresponding posterior probability are quite similar.; Comment: 37 pages...
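A small illustration of the mixture-estimation view, with two fixed unit-variance normal components standing in for the models under comparison (a simplifying assumption made here for brevity) and the reference Beta(0.5, 0.5) prior on the weight, sampled by Gibbs with latent allocations:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(100)             # data actually generated from N(0, 1)

def logpdf(x, mu):                       # unit-variance normal log-density
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

# Mixture w*N(0,1) + (1-w)*N(1,1); reference Beta(0.5, 0.5) prior on w
a = 0.5
w, draws = 0.5, []
for _ in range(5000):
    p1 = w * np.exp(logpdf(x, 0.0))
    p2 = (1 - w) * np.exp(logpdf(x, 1.0))
    z = rng.uniform(size=x.size) < p1 / (p1 + p2)   # latent allocations
    n1 = z.sum()
    w = rng.beta(a + n1, a + x.size - n1)           # conjugate update of w
    draws.append(w)

post_w = float(np.mean(draws[1000:]))    # posterior mean weight of the true model
print(round(post_w, 2))
```

The posterior weight of the component that generated the data drifts towards one as the sample grows, replacing a test statistic with an ordinary estimation summary.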

Bayesian testing of many hypotheses $\times$ many genes: A study of sleep apnea

Jensen, Shane T.; Erkan, Ibrahim; Arnardottir, Erna S.; Small, Dylan S.
Source/Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 36.15%
Substantial statistical research has recently been devoted to the analysis of large-scale microarray experiments which provide a measure of the simultaneous expression of thousands of genes in a particular condition. A typical goal is the comparison of gene expression between two conditions (e.g., diseased vs. nondiseased) to detect genes which show differential expression. Classical hypothesis testing procedures have been applied to this problem and more recent work has employed sophisticated models that allow for the sharing of information across genes. However, many recent gene expression studies have an experimental design with several conditions that requires an even more involved hypothesis testing approach. In this paper, we use a hierarchical Bayesian model to address the situation where there are many hypotheses that must be simultaneously tested for each gene. In addition to having many hypotheses within each gene, our analysis also addresses the more typical multiple comparison issue of testing many genes simultaneously. We illustrate our approach with an application to a study of genes involved in obstructive sleep apnea in humans.; Comment: Published at http://dx.doi.org/10.1214/09-AOAS241 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)

Nonparametric Bayesian testing for monotonicity

Scott, James G.; Shively, Thomas S.; Walker, Stephen G.
Source/Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 36.13%
This paper studies the problem of testing whether a function is monotone from a nonparametric Bayesian perspective. Two new families of tests are constructed. The first uses constrained smoothing splines, together with a hierarchical stochastic-process prior that explicitly controls the prior probability of monotonicity. The second uses regression splines, together with two proposals for the prior over the regression coefficients. The finite-sample performance of the tests is shown via simulation to improve upon existing frequentist and Bayesian methods. The asymptotic properties of the Bayes factor for comparing monotone versus non-monotone regression functions in a Gaussian model are also studied. Our results significantly extend those currently available, which chiefly focus on determining the dimension of a parametric linear model.

Bayesian testing for embedded hypotheses with application to shape constraints

Salomond, Jean-Bernard
Source/Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 36.29%
In this paper we study Bayesian answers to testing problems when the hypotheses are not well separated, and propose a general approach with a special focus on shape-constraint testing. We then apply our method to several testing problems, including testing for positivity and monotonicity in a nonparametric regression setting. For each of these problems, we show that our approach leads to the optimal separation rate of testing, which indicates that our tests have the best power. To our knowledge, separation rates have not been studied in the Bayesian literature so far.

Bayesian methods for gravitational waves and neural networks

Graff, Philip B.
Source/Publisher: University of Cambridge; Department of Physics
Type: Thesis; doctoral; PhD
Portuguese
Search relevance: 36.17%
Einstein's general theory of relativity has withstood 100 years of testing and will soon be facing one of its toughest challenges. In a few years we expect to be entering the era of the first direct observations of gravitational waves. These are tiny perturbations of space-time that are generated by accelerating matter and affect the measured distances between two points. Observations of these using the laser interferometers, which are the most sensitive length-measuring devices in the world, will allow us to test models of interactions in the strong field regime of gravity and eventually general relativity itself. I apply the tools of Bayesian inference for the examination of gravitational wave data from the LIGO and Virgo detectors. This is used for signal detection and estimation of the source parameters. I quantify the ability of a network of ground-based detectors to localise a source position on the sky for electromagnetic follow-up. Bayesian criteria are also applied to separating real signals from glitches in the detectors. These same tools and lessons can also be applied to the type of data expected from planned space-based detectors. Using simulations from the Mock LISA Data Challenges, I analyse our ability to detect and characterise both burst and continuous signals. The two seemingly different signal types will be overlapping and confused with one another for a space-based detector; my analysis shows that we will be able to separate and identify many signals present. Data sets and astrophysical models are continuously increasing in complexity. This will create an additional computational burden for performing Bayesian inference and other types of data analysis. I investigate the application of the MOPED algorithm for faster parameter estimation and data compression. I find that its shortcomings make it a less favourable candidate for further implementation.
The framework of an artificial neural network is a simple model for the structure of a brain which can 'learn' functional relationships between sets of inputs and outputs. I describe an algorithm developed for the training of feed-forward networks on pre-calculated data sets. The trained networks can then be used for fast prediction of outputs for new sets of inputs. After demonstrating capabilities on toy data sets...

Bayesian Adjustment for Multiplicity

Scott, James Gordon
Source/Publisher: Duke University
Type: Dissertation Format: 2596785 bytes; application/pdf
Published 2009 Portuguese
Search relevance: 36.31%

This thesis is about Bayesian approaches for handling multiplicity. It considers three main kinds of multiple-testing scenarios: tests of exchangeable experimental units, tests for variable inclusion in linear regression models, and tests for conditional independence in jointly normal vectors. Multiplicity adjustment in these three areas will be seen to have many common structural features. Though the modeling approach throughout is Bayesian, frequentist reasoning regarding error rates will often be employed.

Chapter 1 frames the issues in the context of historical debates about Bayesian multiplicity adjustment. Chapter 2 confronts the problem of large-scale screening of functional data, where control over Type-I error rates is a crucial issue. Chapter 3 develops new theory for comparing Bayes and empirical-Bayes approaches for multiplicity correction in regression variable selection. Chapters 4 and 5 describe new theoretical and computational tools for Gaussian graphical-model selection, where multiplicity arises in performing many simultaneous tests of pairwise conditional independence. Chapter 6 introduces a new approach to sparse-signal modeling based upon local shrinkage rules. Here the focus is not on multiplicity per se...

Spatial Bayesian Variable Selection with Application to Functional Magnetic Resonance Imaging (fMRI)

Yang, Ying
Source/Publisher: Duke University
Type: Doctoral thesis
Published 2011 Portuguese
Search relevance: 36.13%

Functional magnetic resonance imaging (fMRI) is a major neuroimaging methodology and has greatly facilitated basic cognitive neuroscience research. However, there are multiple statistical challenges in the analysis of fMRI data, including dimension reduction, multiple testing and inter-dependence of the MRI responses. In this thesis, a spatial Bayesian variable selection (BVS) model is proposed for the analysis of multi-subject fMRI data. The BVS framework simultaneously accounts for uncertainty in model-specific parameters as well as the model selection process, solving the multiple testing problem. A spatial prior incorporates the spatial relationships of the MRI responses, accounting for their inter-dependence. Compared to the non-spatial BVS model, the spatial BVS model enhances the sensitivity and accuracy of identifying activated voxels.

Bayesian Semi-parametric Factor Models

Bhattacharya, Anirban
Source/Publisher: Duke University
Type: Dissertation
Published 2012 Portuguese
Search relevance: 36.1%

Identifying a lower-dimensional latent space for representation of high-dimensional observations is of significant importance in numerous biomedical and machine learning applications. In many such applications, it is now routine to collect data where the dimensionality of the outcomes is comparable to or even larger than the number of available observations. Motivated in particular by the problem of predicting the risk of impending diseases from massive gene expression and single nucleotide polymorphism profiles, this dissertation focuses on building parsimonious models and computational schemes for high-dimensional continuous and unordered categorical data, while also studying theoretical properties of the proposed methods. Sparse factor modeling is fast becoming a standard tool for parsimonious modeling of such massive-dimensional data, and the content of this thesis is specifically directed towards methodological and theoretical developments in Bayesian sparse factor models.

The first three chapters of the thesis study sparse factor models for high-dimensional continuous data. A class of shrinkage priors on factor loadings is introduced with attractive computational properties, with operating characteristics explored through a number of simulated and real data examples. In spite of the methodological advances over the past decade...