Page 1 of results: 2242 digital items found in 0.030 seconds

## Bivariate gamma-geometric law and its induced Lévy process

Barreto-Souza, Wagner
Source: ELSEVIER INC; SAN DIEGO Publisher: ELSEVIER INC; SAN DIEGO
Type: Journal Article
Portuguese
Search relevance: 55.75%
In this article we introduce a three-parameter extension of the bivariate exponential-geometric (BEG) law (Kozubowski and Panorska, 2005) [4]. We refer to this new distribution as the bivariate gamma-geometric (BGG) law. A bivariate random vector (X, N) follows the BGG law if N has a geometric distribution and X may be represented (in law) as a sum of N independent and identically distributed gamma variables, where these variables are independent of N. Statistical properties such as the moment generating and characteristic functions, moments and the variance-covariance matrix are provided. The marginal and conditional laws are also studied. We show that the BGG distribution is infinitely divisible, just as the BEG model is. Further, we provide alternative representations for the BGG distribution and show that it enjoys a geometric stability property. Maximum likelihood estimation and inference are discussed and a reparametrization is proposed in order to obtain orthogonality of the parameters. We present an application to a real data set where our model provides a better fit than the BEG model. Our bivariate distribution induces a bivariate Lévy process with correlated gamma and negative binomial processes, which extends the bivariate Lévy motion proposed by Kozubowski et al. (2008) [6]. The marginals of our Lévy motion are a mixture of gamma and negative binomial processes and we name it the BMixGNB motion. Basic properties such as stochastic self-similarity and the covariance matrix of the process are presented. The bivariate distribution at fixed time of our BMixGNB process is also studied and some results are derived...
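The representation described above lends itself to a direct sampler: N geometric, X a geometric sum of i.i.d. gamma variables. A minimal sketch (the function name and parameter values are illustrative, not from the paper):

```python
import random

def sample_bgg(p, shape, scale, rng):
    """Draw one (X, N) pair: N ~ Geometric(p) on {1, 2, ...} and, given N = n,
    X is the sum of n i.i.d. Gamma(shape, scale) variables independent of N."""
    n = 1
    while rng.random() > p:   # geometric trial: stop with probability p
        n += 1
    x = sum(rng.gammavariate(shape, scale) for _ in range(n))
    return x, n

rng = random.Random(0)
draws = [sample_bgg(0.5, 2.0, 1.0, rng) for _ in range(20000)]
mean_n = sum(n for _, n in draws) / len(draws)   # E[N] = 1/p = 2
mean_x = sum(x for x, _ in draws) / len(draws)   # E[X] = E[N]*shape*scale = 4
```

With shape = 1 the gamma summands are exponential and the construction reduces to the BEG law, which is the sense in which BGG extends it.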

## Semiparametric estimation and inference using doubly robust moment conditions

Rothe, Christoph; Firpo, Sergio
Source: Fundação Getúlio Vargas Publisher: Fundação Getúlio Vargas
Portuguese
Search relevance: 55.71%
We study semiparametric two-step estimators which have the same structure as parametric doubly robust estimators in their second step. The key difference is that we do not impose any parametric restriction on the nuisance functions that are estimated in a first stage, but retain a fully nonparametric model instead. We call these estimators semiparametric doubly robust estimators (SDREs), and show that they possess superior theoretical and practical properties compared to generic semiparametric two-step estimators. In particular, our estimators have substantially smaller first-order bias, allow for a wider range of nonparametric first-stage estimates, rate-optimal choices of smoothing parameters and data-driven estimates thereof, and their stochastic behavior can be well-approximated by classical first-order asymptotics. SDREs exist for a wide range of parameters of interest, particularly in semiparametric missing data and causal inference models. We illustrate our method with a simulation exercise.
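To make the doubly robust second step concrete, here is a hedged sketch in a missing-data setting: the mean of Y is estimated from data where Y is observed only when R = 1. The crude binned nuisance estimates and the simulated design are our own simplifications for illustration, not the nonparametric first stage analyzed in the paper:

```python
import random

rng = random.Random(1)

# Toy missing-data problem: Y observed only when R = 1. Both nuisances,
# e(x) = P(R=1 | X=x) and m(x) = E[Y | X=x], are estimated without a
# parametric model (here by simple binning).
n = 5000
data = []
for _ in range(n):
    x = rng.random()
    r = 1 if rng.random() < 0.3 + 0.4 * x else 0
    y = 2 * x + rng.gauss(0, 0.2)          # target: E[Y] = 1
    data.append((x, r, y))

bins = 20
def bin_of(x):
    return min(int(x * bins), bins - 1)

e_hat, m_hat = [0.5] * bins, [0.0] * bins
for b in range(bins):
    cell = [(r, y) for x, r, y in data if bin_of(x) == b]
    obs = [y for r, y in cell if r == 1]
    if cell:
        e_hat[b] = max(sum(r for r, _ in cell) / len(cell), 0.05)
    if obs:
        m_hat[b] = sum(obs) / len(obs)

# Doubly robust (AIPW-type) moment: consistent if either nuisance is correct.
psi = [m_hat[bin_of(x)] + r * (y - m_hat[bin_of(x)]) / e_hat[bin_of(x)]
       for x, r, y in data]
dr_estimate = sum(psi) / n
```

The product structure of the moment's bias (error in e-hat times error in m-hat) is what gives the two-step estimator its reduced first-order bias.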

## Probabilistic surface change detection and measurement from digital aerial stereo images.

Jalobeanu, André; Gama, Cristina; Gonçalves, José
Source: IEEE Geoscience and Remote Sensing Society and the IGARSS Publisher: IEEE Geoscience and Remote Sensing Society and the IGARSS
Type: Lecture
Portuguese
Search relevance: 55.71%
We propose a new method to measure changes in terrain topography from two optical stereo image pairs acquired at different dates. The main novelty is the ability to compute the spatial distribution of uncertainty, thanks to stochastic modeling and probabilistic inference. Thus, scientists will have access to quantitative error estimates of local surface variation, so they can check the statistical significance of elevation changes and make, where changes have occurred, consistent measurements of volume or shape evolution. The main application area is geomorphology, as the method can help study phenomena such as coastal cliff erosion, sand dune displacement and various transport mechanisms through the computation of volume changes. It can also help measure vegetation growth, and virtually any kind of evolution of the surface. We start by inferring a dense disparity map from two images, assuming a known viewing geometry. The images are accurately rectified in order to constrain the deformation to one of the axes, so we only have to infer a one-dimensional parameter field. The probabilistic approach provides a rigorous framework for parameter estimation and error computation, so all the disparities are described as random variables. We define a generative model for both images given all model variables. It mainly consists of warping the scene using B-Splines...

## Semiparametric analysis of recurrent events: artificial censoring, truncation, pairwise estimation and inference

Ghosh, Debashis
Source: PubMed Publisher: PubMed
Type: Journal Article
Portuguese
Search relevance: 65.79%
The analysis of recurrent failure time data from longitudinal studies can be complicated by the presence of dependent censoring. A substantial literature has developed based on an artificial censoring device. We explore in this article the connection between this class of methods and truncated data structures. In addition, a new procedure is developed for estimation and inference in a joint model for recurrent events and dependent censoring. Estimation proceeds using a mixed U-statistic based estimating function approach. New resampling-based methods for variance estimation and model checking are also described. The methods are illustrated by application to data from an HIV clinical trial as well as by a limited simulation study.

## Dense multibody motion estimation and reconstruction from a handheld camera

Roussos, A.; Russell, C.; Garg, R.; Agapito, L.
Source: Institute of Electrical and Electronics Engineers Publisher: Institute of Electrical and Electronics Engineers
Type: Conference Paper
Published in 2012 Portuguese
Search relevance: 55.69%
Existing approaches to camera tracking and reconstruction from a single handheld camera for Augmented Reality (AR) focus on the reconstruction of static scenes. However, most real world scenarios are dynamic and contain multiple independently moving rigid objects. This paper addresses the problem of simultaneous segmentation, motion estimation and dense 3D reconstruction of dynamic scenes. We propose a dense solution to all three elements of this problem: depth estimation, motion label assignment and rigid transformation estimation directly from the raw video by optimizing a single cost function using a hill-climbing approach. We do not require prior knowledge of the number of objects present in the scene: the number of independent motion models and their parameters are automatically estimated. The resulting inference method combines the best techniques in discrete and continuous optimization: a state-of-the-art variational approach is used to estimate the dense depth maps while the motion segmentation is achieved using discrete graph-cut based optimization. For the rigid motion estimation of the independently moving objects we propose a novel tracking approach designed to cope with the small fields of view they induce and with agile motion. Our experimental results on real sequences show how accurate segmentations and dense depth maps can be obtained in a completely automated way and used in marker-free AR applications.; Anastasios Roussos...

## Maritime intent estimation and the detection of unknown obstacles

Fong, Edward H. L. (Edward Hsiang Lung), 1980-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 196 p.; 9115439 bytes; 9140281 bytes; application/pdf
Portuguese
Search relevance: 65.64%
The benefits of using Unmanned Undersea Vehicles (UUVs) in maritime operations are numerous. However, before these benefits can be realized, UUV capabilities must be expanded. This thesis focuses on improving certain aspects of the Maritime Reconnaissance and Undersea Search and Survey capabilities of a UUV. An algorithm is first presented which provides the UUV with the ability to estimate the intent of the contacts it is observing (intent estimation). This was accomplished by developing a probabilistic model of the contact's possible intents and then using those models to estimate the contact's actual intent. The results from that algorithm are used to analyze the contact's observed path to determine a probabilistic belief of the potential location of obstacles in the environment (obstacle detection) that the contact is avoiding. These values are recorded in an obstacle inference map, which is capable of incorporating the results from the analysis of any number of observed paths from multiple contacts. The laws of probability were used to develop the algorithms in this thesis, with an emphasis on Bayes' rule. Various scenarios are presented to demonstrate the capabilities and limitations of the intent estimation and obstacle detection algorithms.; by Edward Hsiang Lung Fong.; Thesis (S.M.)--Massachusetts Institute of Technology...
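The core Bayes' rule update behind intent estimation can be sketched in a few lines. The intent labels and likelihood values below are hypothetical, chosen only to show the mechanics of updating a prior over intents after one observed maneuver:

```python
# Prior over the contact's possible intents (hypothetical labels and values),
# and hypothetical likelihoods P(observation | intent) for one observed
# evasive-looking maneuver.
priors = {"transit": 0.6, "loiter": 0.3, "evade": 0.1}
likelihood = {"transit": 0.1, "loiter": 0.3, "evade": 0.8}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {k: priors[k] * likelihood[k] for k in priors}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}
```

Repeating this update over the contact's observed path, and conditioning path likelihoods on hypothesized obstacle locations, is the same machinery the obstacle inference map builds on.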

## Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation over Adaptive Networks

Tu, Sheng-Yuan; Sayed, Ali H.
Type: Journal Article
Portuguese
Search relevance: 55.69%
Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network irrespective of the combination topology. Simulation results support the theoretical findings.; Comment: 37 pages...
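A toy adapt-then-combine (ATC) diffusion LMS run illustrates the diffusion strategy being compared; the three-node topology, step-size, and noise levels here are illustrative choices, not the paper's setup:

```python
import random

rng = random.Random(2)

# Adapt-then-combine (ATC) diffusion LMS on a 3-node line network, each node
# estimating the same scalar w_true from its own noisy streaming data.
w_true, mu = 1.5, 0.05
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}   # neighborhoods incl. self
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    # Adapt: each node takes a local LMS step on d_k = u_k * w_true + noise.
    psi = []
    for k in range(3):
        u = rng.gauss(0, 1)
        d = u * w_true + rng.gauss(0, 0.1)
        psi.append(w[k] + mu * u * (d - u * w[k]))
    # Combine: uniform averaging of neighbors' intermediate estimates.
    w = [sum(psi[j] for j in neighbors[k]) / len(neighbors[k])
         for k in range(3)]
```

A consensus-style variant would instead combine the *previous* iterates inside the same update that applies the gradient step; the paper's point is that this asymmetry matters for stability and steady-state error.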

## Estimation and inference for linear panel data models under misspecification when both $n$ and $T$ are large

Galvao, Antonio F.; Kato, Kengo
Type: Journal Article
Portuguese
Search relevance: 55.77%
This paper considers fixed effects (FE) estimation for linear panel data models under possible model misspecification when both the number of individuals, $n$, and the number of time periods, $T$, are large. We first clarify the probability limit of the FE estimator and argue that this probability limit can be regarded as a pseudo-true parameter. We then establish the asymptotic distributional properties of the FE estimator around the pseudo-true parameter when $n$ and $T$ jointly go to infinity. Notably, we show that the FE estimator suffers from the incidental parameters bias of which the top order is $O(T^{-1})$, and even after the incidental parameters bias is completely removed, the rate of convergence of the FE estimator depends on the degree of model misspecification and is either $(nT)^{-1/2}$ or $n^{-1/2}$. Second, we establish asymptotically valid inference on the (pseudo-true) parameter. Specifically, we derive the asymptotic properties of the clustered covariance matrix (CCM) estimator and the cross section bootstrap, and show that they are robust to model misspecification. This establishes a rigorous theoretical ground for the use of the CCM estimator and the cross section bootstrap when model misspecification and the incidental parameters bias (in the coefficient estimate) are present. We conduct Monte Carlo simulations to evaluate the finite sample performance of the estimators and inference methods...
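The fixed effects estimator at the center of this analysis can be sketched in a few lines; this toy version only illustrates the within transformation itself (and why pooled OLS fails when regressors correlate with the individual effects), not the paper's misspecification or incidental-parameter bias results:

```python
import random

rng = random.Random(3)

# Within (fixed effects) estimator for y_it = alpha_i + beta * x_it + u_it,
# with x_it deliberately correlated with the individual effect alpha_i.
n_ind, T, beta = 200, 10, 0.7
x, y = [], []
for _ in range(n_ind):
    alpha = rng.gauss(0, 1)
    xi = [alpha + rng.gauss(0, 1) for _ in range(T)]
    yi = [alpha + beta * xij + rng.gauss(0, 0.5) for xij in xi]
    x.append(xi)
    y.append(yi)

# Demean within each individual, then pool: alpha_i drops out exactly.
num = den = 0.0
for xi, yi in zip(x, y):
    xbar, ybar = sum(xi) / T, sum(yi) / T
    for xit, yit in zip(xi, yi):
        num += (xit - xbar) * (yit - ybar)
        den += (xit - xbar) ** 2
beta_fe = num / den

# Pooled OLS ignoring the individual effects, for contrast (biased here).
allx = [v for xi in x for v in xi]
ally = [v for yi in y for v in yi]
mx, my = sum(allx) / len(allx), sum(ally) / len(ally)
beta_pooled = (sum((a - mx) * (b - my) for a, b in zip(allx, ally))
               / sum((a - mx) ** 2 for a in allx))
```

Under the paper's misspecification analysis, it is the probability limit of `beta_fe` under a possibly wrong model, and the clustered variance of this estimator, that are characterized.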

## Minimum Integrated Distance Estimation in Simultaneous Equation Models

Gao, Zhengyuan; Galvao, Antonio
Type: Journal Article
Published on 05/12/2014 Portuguese
Search relevance: 55.68%
This paper considers estimation and inference in semiparametric econometric models. Standard procedures estimate the model based on an independence restriction that induces a minimum distance between a joint cumulative distribution function and the product of the marginal cumulative distribution functions. This paper develops a new estimator which generalizes estimation by allowing endogeneity of the weighting measure and estimating the optimal measure nonparametrically. The optimality corresponds to the minimum of the integrated distance. To accomplish this aim we use Kantorovich's formulation of the optimal transportation problem. The minimizing distance is equivalent to the total variation distance and thus characterizes finer topological structures of the distributions. The estimation also provides greater generality by dealing with probability measures on compact metric spaces without assuming existence of densities. Asymptotic statistics of the empirical estimates have standard convergent results and are available for different statistical analyses. In addition, we provide a tractable implementation for computing the estimator in practice.; Comment: arXiv admin note: text overlap with arXiv:1109.1516 by other authors

## Estimation and inference in generalized additive coefficient models for nonlinear interactions with high-dimensional covariates

Ma, Shujie; Carroll, Raymond J.; Liang, Hua; Xu, Shizhong
Type: Journal Article
Published on 14/10/2015 Portuguese
Search relevance: 65.71%
In the low-dimensional case, the generalized additive coefficient model (GACM) proposed by Xue and Yang [Statist. Sinica 16 (2006) 1423-1446] has been demonstrated to be a powerful tool for studying nonlinear interaction effects of variables. In this paper, we propose estimation and inference procedures for the GACM when the dimension of the variables is high. Specifically, we propose a groupwise penalization based procedure to distinguish significant covariates for the "large $p$ small $n$" setting. The procedure is shown to be consistent for model structure identification. Further, we construct simultaneous confidence bands for the coefficient functions in the selected model based on a refined two-step spline estimator. We also discuss how to choose the tuning parameters. To estimate the standard deviation of the functional estimator, we adopt the smoothed bootstrap method. We conduct simulation experiments to evaluate the numerical performance of the proposed methods and analyze an obesity data set from a genome-wide association study as an illustration.; Comment: Published at http://dx.doi.org/10.1214/15-AOS1344 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

## A general framework for estimation and inference from clusters of features

Reid, Stephen; Taylor, Jonathan; Tibshirani, Robert
Type: Journal Article
Published on 24/11/2015 Portuguese
Search relevance: 55.71%
Applied statistical problems often come with pre-specified groupings to predictors. It is natural to test for the presence of simultaneous group-wide signal for groups in isolation, or for multiple groups together. Classical tests for the presence of such signals rely either on tests for the omission of the entire block of variables (the classical F-test) or on the creation of an unsupervised prototype for the group (either a group centroid or first principal component) and subsequent t-tests on these prototypes. In this paper, we propose test statistics that aim for power improvements over these classical approaches. In particular, we first create group prototypes with reference to the response, hopefully improving on the unsupervised prototypes, and then test with likelihood ratio statistics incorporating only these prototypes. We propose a (potentially) novel model, called the "prototype model", which naturally models the two-step prototype-then-test procedure. Furthermore, we introduce an inferential schema detailing the unique considerations for different combinations of prototype formation and univariate/multivariate testing models. The prototype model also suggests new applications to estimation and prediction. Prototype formation often relies on variable selection...

## Sparse Nonlinear Regression: Parameter Estimation and Asymptotic Inference

Yang, Zhuoran; Wang, Zhaoran; Liu, Han; Eldar, Yonina C.; Zhang, Tong
Type: Journal Article
Published on 14/11/2015 Portuguese
Search relevance: 55.71%
We study parameter estimation and asymptotic inference for sparse nonlinear regression. More specifically, we assume the data are given by $y = f( x^\top \beta^* ) + \epsilon$, where $f$ is nonlinear. To recover $\beta^*$, we propose an $\ell_1$-regularized least-squares estimator. Unlike classical linear regression, the corresponding optimization problem is nonconvex because of the nonlinearity of $f$. In spite of the nonconvexity, we prove that under mild conditions, every stationary point of the objective enjoys an optimal statistical rate of convergence. In addition, we provide an efficient algorithm that provably converges to a stationary point. We also assess the uncertainty of the obtained estimator. Specifically, based on any stationary point of the objective, we construct valid hypothesis tests and confidence intervals for the low dimensional components of the high-dimensional parameter $\beta^*$. Detailed numerical results are provided to back up our theory.; Comment: 32 pages, 2 figures, 1 table

## Distributed Estimation and Detection with Bounded Transmissions over Gaussian Multiple Access Channels

Dasarathan, Sivaraman; Tepedelenlioglu, Cihan
Type: Journal Article
Published on 25/06/2013 Portuguese
Search relevance: 55.7%
A distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel is considered. When the sensor measurements are decreasingly reliable as a function of the sensor index, the conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variances of the noise samples are bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique and its robustness to impulsive sensing noise distributions is highlighted. In contrast to amplify-and-forward schemes, it is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient. Simulations corroborate our analytical results.; Comment: 24 Pages, 7 Figures, Will be submitted to an IEEE journal

## Panel Data Models with Nonadditive Unobserved Heterogeneity: Estimation and Inference

Fernandez-Val, Ivan; Lee, Joonhwah
Type: Journal Article
Portuguese
Search relevance: 65.71%
This paper considers fixed effects estimation and inference in linear and nonlinear panel data models with random coefficients and endogenous regressors. The quantities of interest -- means, variances, and other moments of the random coefficients -- are estimated by cross sectional sample moments of GMM estimators applied separately to the time series of each individual. To deal with the incidental parameter problem introduced by the noise of the within-individual estimators in short panels, we develop bias corrections. These corrections are based on higher-order asymptotic expansions of the GMM estimators and produce improved point and interval estimates in moderately long panels. Under asymptotic sequences where the cross sectional and time series dimensions of the panel pass to infinity at the same rate, the uncorrected estimator has an asymptotic bias of the same order as the asymptotic variance. The bias corrections remove the bias without increasing variance. An empirical example on cigarette demand based on Becker, Grossman and Murphy (1994) shows significant heterogeneity in the price effect across U.S. states.; Comment: 51 pages, 4 tables, 1 figure, it includes supplementary appendix
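The core idea of estimating moments of random coefficients from within-individual estimators, and correcting the resulting bias, can be sketched with OLS in place of GMM. The DGP and the simple variance correction below are illustrative stand-ins for the paper's higher-order expansions:

```python
import random

rng = random.Random(4)

# Random-coefficient panel: beta_i ~ N(1.0, 0.3^2). Each individual's slope is
# estimated from its own time series; cross-sectional moments of those
# estimates are then taken. The naive cross-sectional variance is inflated by
# within-individual estimation noise, so we subtract the average sampling
# variance as a simple bias correction.
n_ind, T = 500, 50
betas_hat, se2s = [], []
for _ in range(n_ind):
    beta_i = rng.gauss(1.0, 0.3)
    xs = [rng.gauss(0, 1) for _ in range(T)]
    ys = [beta_i * xt + rng.gauss(0, 0.5) for xt in xs]
    xbar, ybar = sum(xs) / T, sum(ys) / T
    den = sum((xt - xbar) ** 2 for xt in xs)
    b = sum((xt - xbar) * (yt - ybar) for xt, yt in zip(xs, ys)) / den
    betas_hat.append(b)
    rss = sum((yt - ybar - b * (xt - xbar)) ** 2 for xt, yt in zip(xs, ys))
    se2s.append(rss / (T - 2) / den)        # sampling variance of b_i

mean_beta = sum(betas_hat) / n_ind
var_naive = sum((b - mean_beta) ** 2 for b in betas_hat) / (n_ind - 1)
var_corrected = var_naive - sum(se2s) / n_ind   # targets Var(beta_i) = 0.09
```

In short panels the noise term `sum(se2s)/n_ind` is of the same order as the quantity of interest, which is the incidental parameter problem the paper's corrections address.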

## Estimation and inference for high-dimensional non-sparse models

Lin, Lu; Zhu, Lixing; Gai, Yujie
Type: Journal Article
Published on 03/12/2011 Portuguese
Search relevance: 55.78%
For variable selection to work, a sparse model structure has become a basic assumption of all existing methods. However, this assumption is questionable, as it rarely holds in practice, and none of the existing methods can provide consistent estimation and accurate model prediction in non-sparse scenarios. In this paper, we propose semiparametric re-modeling and inference when the linear regression model under study is possibly non-sparse. After an initial working model is selected by a method such as the Dantzig selector adopted in this paper, we re-construct a globally unbiased semiparametric model by use of suitable instrumental variables and nonparametric adjustment. The newly defined model is identifiable, and the estimator of the parameter vector is asymptotically normal. The consistency, together with the re-built model, promotes model prediction. This method naturally works when the model is indeed sparse and thus is robust against non-sparseness in a certain sense. Simulation studies show that the new approach has, particularly when $p$ is much larger than $n$, significant improvement of estimation and prediction accuracies over the Gaussian Dantzig selector and other classical methods. Even when the model under study is sparse...

## High-Dimensional Gaussian Copula Regression: Adaptive Estimation and Statistical Inference

Cai, T. Tony; Zhang, Linjun
Type: Journal Article
Published on 08/12/2015 Portuguese
Search relevance: 55.81%
We develop adaptive estimation and inference methods for high-dimensional Gaussian copula regression that achieve the same performance without the knowledge of the marginal transformations as that for high-dimensional linear regression. Using a Kendall's tau based covariance matrix estimator, an $\ell_1$ regularized estimator is proposed and a corresponding de-biased estimator is developed for the construction of the confidence intervals and hypothesis tests. Theoretical properties of the procedures are studied and the proposed estimation and inference methods are shown to be adaptive to the unknown monotone marginal transformations. Prediction of the response for a given value of the covariates is also considered. The procedures are easy to implement and perform well numerically. The methods are also applied to analyze the Communities and Crime Unnormalized Data from the UCI Machine Learning Repository.; Comment: 41 pages, 1 figure
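The Kendall's tau step can be illustrated directly: for a Gaussian copula, the latent correlation satisfies rho = sin(pi * tau / 2) regardless of the (unknown) monotone marginal transformations. A self-contained sketch with illustrative margins:

```python
import math
import random

rng = random.Random(5)

# Latent bivariate normal with correlation rho = 0.6, observed only through
# unknown strictly monotone marginal transformations.
rho, n = 0.6, 1000
xy = []
for _ in range(n):
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    xy.append((math.exp(z1), z2 ** 3 + z2))   # monotone, so tau is unchanged

# Kendall's tau, written O(n^2) for clarity.
score = 0
for i in range(n):
    for j in range(i + 1, n):
        s = (xy[i][0] - xy[j][0]) * (xy[i][1] - xy[j][1])
        score += (s > 0) - (s < 0)
tau = score / (n * (n - 1) / 2)
rho_hat = math.sin(math.pi * tau / 2)          # recovers the latent rho
```

Applying this entrywise gives the tau-based covariance matrix estimator on which the regularized regression is built.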

## Estimation and Inference in Large Heterogeneous Panels with Cross Section Dependence

Pesaran, M. Hashem
Type: Work in Progress Format: 582479 bytes; application/pdf
Portuguese
Search relevance: 95.86%
This paper presents a new approach to estimation and inference in panel data models with unobserved common factors possibly correlated with exogenously given individual-specific regressors and/or the observed common effects. The basic idea behind the proposed estimation procedure is to filter the individual-specific regressors by means of (weighted) cross-section aggregates such that, asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of the unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by OLS applied to an auxiliary regression where the observed regressors are augmented with cross-sectional averages of the dependent variable and the individual-specific regressors. It is shown that the proposed common correlated effects (CCE) estimators for the individual-specific regressors (and their pooled counterpart) are asymptotically unbiased as N → ∞, both when T (the time-series dimension) is fixed and when N and T tend to infinity jointly. Further, the CCE estimators are asymptotically normal for fixed T as N → ∞, and as (N, T) → ∞ jointly, provided √T/N → 0 as (N, T) → ∞. A generalisation of these results to multi-factor structures is also provided.
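A toy sketch of the augmented-regression idea, here in a mean-group form with a single unobserved factor; the DGP, loadings, and sample sizes are illustrative, and the per-individual OLS is solved by hand to stay self-contained:

```python
import random

rng = random.Random(6)

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# One unobserved factor f_t loads on both x and y with heterogeneous loadings.
N, T, beta = 100, 100, 2.0
f = [rng.gauss(0, 1) for _ in range(T)]
x = [[0.0] * T for _ in range(N)]
y = [[0.0] * T for _ in range(N)]
for i in range(N):
    a_i, g_i = rng.gauss(1, 0.5), rng.gauss(1, 0.5)
    for t in range(T):
        x[i][t] = a_i * f[t] + rng.gauss(0, 1)
        y[i][t] = beta * x[i][t] + g_i * f[t] + rng.gauss(0, 1)

xbar = [sum(x[i][t] for i in range(N)) / N for t in range(T)]
ybar = [sum(y[i][t] for i in range(N)) / N for t in range(T)]

# Augment each individual regression with the cross-section averages, which
# proxy the unobserved factor as N grows; then average the slopes across i.
slopes = []
for i in range(N):
    rows = [(x[i][t], xbar[t], ybar[t]) for t in range(T)]
    A = [[sum(r[p] * r[q] for r in rows) for q in range(3)] for p in range(3)]
    b = [sum(r[p] * yt for r, yt in zip(rows, y[i])) for p in range(3)]
    slopes.append(solve3(A, b)[0])
beta_cce = sum(slopes) / N
```

The point of the filtering is visible here: without `xbar`/`ybar` in the regression, the factor component `g_i * f[t]` is an omitted variable correlated with `x`, and the slope estimate is biased.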

## Alternative Approaches to Estimation and Inference in Large Multifactor Panels: Small Sample Results with an Application to Modelling of Asset Returns

Kapetanios, George; Pesaran, M. Hashem
Format: 323020 bytes; application/pdf
Portuguese
Search relevance: 65.85%
This paper considers alternative approaches to the analysis of large panel data models in the presence of error cross-section dependence. A popular method for modelling such dependence uses a factor error structure. Such models raise new problems for estimation and inference. This paper compares two alternative methods for carrying out estimation and inference in panels with a multifactor error structure. One uses the common correlated effects (CCE) estimator that proxies the unobserved factors by cross-section averages of the observed variables, as suggested by Pesaran (2004), and the other uses principal components, following the work of Stock and Watson (2002). The paper develops the principal component method and provides small sample evidence on the comparative properties of these estimators by means of extensive Monte Carlo experiments. An empirical application to company returns provides an illustration of the alternative estimation procedures.

## Measuring inequality using censored data: a multiple-imputation approach to estimation and inference

Jenkins, Stephen P.; Burkhauser, Richard V.; Feng, Shuaizhang; Larrimore, Jeff
Source: Wiley-Blackwell Publisher: Wiley-Blackwell
Type: Article; Peer-Reviewed Format: application/pdf
Published 01/2011 Portuguese
Search relevance: 75.71%
To measure income inequality with right-censored (top-coded) data, we propose multiple-imputation methods for estimation and inference. Censored observations are multiply imputed using draws from a flexible parametric model fitted to the censored distribution, yielding a partially synthetic data set from which point and variance estimates can be derived using complete-data methods and appropriate combination formulae. The methods are illustrated using US Current Population Survey data and the generalized beta of the second kind distribution as the imputation model. With Current Population Survey internal data, we find few statistically significant differences in income inequality for pairs of years between 1995 and 2004. We also show that using Current Population Survey public use data with cell mean imputations may lead to incorrect inferences. Multiply-imputed public use data provide an intermediate solution.
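The mechanics of the imputation step can be sketched with a deliberately simplified tail model: lognormal incomes with parameters taken as known, where the paper fits a flexible GB2 model to the censored distribution. Everything below the lead-in is an illustration, not the paper's procedure:

```python
import math
import random

rng = random.Random(7)

# Top-coded lognormal incomes; censored observations are multiply imputed by
# drawing from the parametric tail above the cap. For this sketch the tail
# model's parameters (mu, sd) are taken as known.
mu, sd = 10.0, 0.8
cap = math.exp(mu + sd)                  # top-code threshold
incomes = [math.exp(rng.gauss(mu, sd)) for _ in range(5000)]
censored = [min(v, cap) for v in incomes]

def draw_above(cap):
    while True:                          # rejection sampling from the tail
        v = math.exp(rng.gauss(mu, sd))
        if v > cap:
            return v

m = 5                                    # number of imputations
estimates = []
for _ in range(m):
    completed = [v if v < cap else draw_above(cap) for v in censored]
    estimates.append(sum(completed) / len(completed))
mi_mean = sum(estimates) / m             # Rubin's rule for the point estimate
naive_mean = sum(censored) / len(censored)   # cell-mean-free top-coded mean
true_mean = math.exp(mu + sd ** 2 / 2)
```

The spread of the `m` per-imputation estimates feeds the between-imputation component of Rubin's variance formula, which is how the method propagates imputation uncertainty into inference.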

## Essays on estimation and inference for volatility with high frequency data.

Kalnina, Ilze
Source: London School of Economics and Political Science Thesis Publisher: London School of Economics and Political Science Thesis
Type: Thesis; Non-Peer-Reviewed Format: application/pdf
Published in 2009 Portuguese
Search relevance: 75.87%
Volatility is a measure of risk, and as such it is crucial for finance. But volatility is not observable, which is why estimation and inference for it are important. Large high frequency data sets have the potential to increase the precision of volatility estimates. However, this data is also known to be contaminated by market microstructure frictions, such as bid-ask spread, which pose a challenge to estimation of volatility. The first chapter, joint with Oliver Linton, proposes an econometric model that captures the effects of market microstructure on a latent price process. In particular, this model allows for correlation between the measurement error and the return process and allows the measurement error process to have diurnal heteroskedasticity. A modification of the TSRV estimator of quadratic variation is proposed and asymptotic distribution derived. Financial econometrics continues to make progress in developing more robust and efficient estimators of volatility. But for some estimators, the asymptotic variance is hard to derive or may take a complicated form and be difficult to estimate. To tackle these problems, the second chapter develops an automated method of inference that does not rely on the exact form of the asymptotic variance. The need for a new approach is motivated by the failure of traditional bootstrap and subsampling variance estimators with high frequency data...
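The noise-contamination problem described above, and the two-scales idea behind TSRV-style estimators, can be sketched with stylized parameter values (the simulation design below is our own illustration, not the thesis's model with diurnal heteroskedasticity):

```python
import random

rng = random.Random(8)

# Efficient log-price is a random walk; the observed price adds i.i.d.
# microstructure noise. Naive realized variance (RV) at the highest frequency
# is badly biased upward; a two-scales combination removes the noise bias.
n = 23400                                  # 1-second returns over one day
s, noise_sd = 1e-4, 5e-4
truth = n * s ** 2                         # integrated variance in this sketch
x = 0.0
prices = []
for _ in range(n + 1):
    prices.append(x + rng.gauss(0, noise_sd))
    x += rng.gauss(0, s)

def rv(p, step, offset=0):
    pts = [p[i] for i in range(offset, len(p), step)]
    return sum((b - a) ** 2 for a, b in zip(pts, pts[1:]))

rv_all = rv(prices, 1)                     # noise bias ~ 2 * n * noise_sd**2
K = 300                                    # slow scale (5-minute subsamples)
rv_avg = sum(rv(prices, K, o) for o in range(K)) / K
nbar = (n - K + 1) / K
tsrv = rv_avg - (nbar / n) * rv_all        # two-scales bias correction
```

The averaged slow-scale RV has a much smaller noise bias, and subtracting the rescaled fast-scale RV cancels what remains; this is the construction the first chapter modifies to handle noise correlated with returns.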