# The best tool for your research, papers, and final projects!

Page 1 of results: 230 digital items found in 0.064 seconds

## Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

Source: SPRINGER
Publisher: SPRINGER

Type: Scientific journal article

Portuguese

Search relevance

78.990005%

Keywords: Piecewise-deterministic Markov processes; Continuous-time; Infinite discounted expected cost; Optimal control; Singular perturbation; Mathematics, Applied

This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach to deal with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space $\mathbb{R}^n$. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter $\epsilon > 0$) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as $\epsilon$ goes to zero. This convergence is obtained by...
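The averaging step described in this abstract, aggregating the regimes of one class through the quasi-stationary distribution of the fast dynamics, can be sketched numerically. The following is a minimal illustration with an invented two-regime class; the generator and the slow rates are not taken from the paper:

```python
import numpy as np

# Fast dynamics within one class of regimes (generator rows sum to zero).
Q_fast = np.array([[-3.0, 3.0],
                   [ 2.0, -2.0]])

# Quasi-stationary distribution of the fast chain: solve pi @ Q_fast = 0
# together with the normalization sum(pi) = 1.
A = np.vstack([Q_fast.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Invented slow transition rates out of each regime in the class.
slow_rates = np.array([1.0, 5.0])

# Averaged rate that replaces the whole class by a single aggregated state.
avg_rate = pi @ slow_rates
print(pi, avg_rate)
```

For this toy generator the fast chain spends 40% of its time in the first regime and 60% in the second, so the aggregated state inherits the rate 0.4·1 + 0.6·5 = 3.4.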

Permanent link for citations:

## Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains

Source: Asist Group
Publisher: Asist Group

Type: Scientific journal article

Published on 15/12/2006
Portuguese

Search relevance

79.317827%

Animal biologists commonly use continuous-time Markov chain (CTMC) models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular, we test the assumptions of CTMC models, with time, distance and speed values associated with each transition. Using a simple event categorisation, it is found that a semi-Markov chain model is appropriate for these data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated.
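One of the CTMC assumptions being tested above, exponentially distributed holding times between events, can be screened with a simple dispersion check: an exponential law has coefficient of variation equal to 1. A minimal sketch on synthetic data (the rate and sample size are invented for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic holding times between events, drawn here from the exponential
# distribution a CTMC model would assume (rate 3 events per minute).
holding_times = rng.exponential(scale=1.0 / 3.0, size=500)

# For an exponential distribution, mean == standard deviation, so the
# coefficient of variation should be close to 1; a clear departure
# suggests a semi-Markov model instead.
cv = holding_times.std(ddof=1) / holding_times.mean()
print(f"coefficient of variation: {cv:.3f}")
```

On real notational data one would compute `cv` per event type; values far from 1 point toward the semi-Markov conclusion the paper reaches.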


## First-passage and optimal control problems for discrete-time Markov chains

Source: Université de Montréal
Publisher: Université de Montréal

Type: Electronic Thesis or Dissertation

Portuguese

Search relevance

79.41693%

Keywords: Discrete-time Markov chains; financial mathematics; diffusion processes; absorption problems; difference equations; Wiener process; special functions; optimal control; principle of optimality

We consider diffusion processes, defined by stochastic differential equations, and then study first-passage problems for the discrete-time Markov chains corresponding to these diffusion processes. As is known in the literature, these chains converge in distribution to the solution of the stochastic differential equations considered. Our contribution consists in finding explicit formulas for the first-passage probability and for the duration of the game for these discrete-time Markov chains. We also show that the results obtained converge, in the Euclidean metric (i.e., the Euclidean topology), to the corresponding quantities for the diffusion processes.
Finally, we study an optimal control problem for discrete-time Markov chains. The objective is to find the value that minimizes the expected value of a certain cost function. Unlike the continuous case, there is no explicit formula for this optimal value in the discrete case. We therefore study in this thesis some particular cases for which we were able to find this optimal value.


## Long-Range Dependence of Markov Processes

Source: Australian National University
Publisher: Australian National University

Type: Thesis (PhD); Doctor of Philosophy (PhD)

Portuguese

Search relevance

79.62264%

Long-range dependence (LRD) in discrete- and continuous-time Markov chains over a countable state space is defined via embedded renewal processes brought about by visits to a fixed state. In the discrete-time chain, solidarity properties are obtained and long-range dependence of functionals is examined. The LRD of continuous-time chains, on the other hand, is defined via the number of visits in a given time interval. Long-range dependence of Markov chains over a non-countable state space is also carried out through positive Harris chains. Embedded renewal processes in these chains exist via visits to sets of states called proper atoms.

Examples of these chains are presented, with particular attention given to long-range dependent Markov chains in single-server queues, namely the waiting times of GI/G/1 queues and the queue lengths at departure epochs in M/G/1 queues. The presence of long-range dependence in these processes depends on the moment index of the lifetime distribution of the service times. The Hurst indexes are obtained under certain conditions on the distribution function of the service times and the structure of the correlations. These processes of waiting times and queue sizes are also examined in a range of M/P/2 queues via simulation (here...
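The waiting-time processes discussed above can be simulated with the standard Lindley recursion W[n] = max(W[n-1] + S[n-1] - A[n], 0); heavy-tailed (e.g. Pareto) service times are what make long-range dependence possible. A small illustrative sketch, with invented rates and tail index rather than the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Exponential interarrival times (rate 1) and Pareto service times with
# tail index 2.5 (finite mean, heavy tail), scaled so the queue is stable
# (utilization 0.5).
interarrivals = rng.exponential(scale=1.0, size=n)
services = 0.3 * (rng.pareto(2.5, size=n) + 1.0)  # shifted Pareto, minimum 0.3

# Lindley recursion for the waiting time of each customer.
w = np.zeros(n)
for i in range(1, n):
    w[i] = max(w[i - 1] + services[i - 1] - interarrivals[i], 0.0)

print(f"mean wait: {w.mean():.3f}, max wait: {w.max():.3f}")
```

Estimating the Hurst index from the autocorrelations of `w` (e.g. via rescaled-range or wavelet estimators) is then the step where the moment index of the service-time distribution shows up.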


## Comparisons for backward stochastic differential equations on Markov chains and related no-arbitrage conditions

Source: Inst Mathematical Statistics
Publisher: Inst Mathematical Statistics

Type: Scientific journal article

Published in 2010
Portuguese

Search relevance

89.03777%

Keywords: Backward stochastic differential equation; Markov chains; nonlinear expectation; dynamic risk measures; comparison theorem

Most previous contributions to BSDEs, and to the related theories of nonlinear expectation and dynamic risk measures, have been in the framework of continuous-time diffusions or jump diffusions. Using solutions of BSDEs on spaces related to finite-state, continuous-time Markov chains, we develop a theory of nonlinear expectations in the spirit of [Dynamically consistent nonlinear evaluations and expectations (2005) Shandong Univ.]. We prove basic properties of these expectations and show their applications to dynamic risk measures on such spaces. In particular, we prove comparison theorems for scalar and vector-valued solutions to BSDEs, and discuss arbitrage and risk measures in the scalar case.; Samuel N. Cohen and Robert J. Elliott


## Parameter estimation for discretely observed continuous-time Markov chains

Source: Rice University
Publisher: Rice University

Portuguese

Search relevance

89.39799%

This thesis develops a method for estimating the parameters of continuous-time Markov chains discretely observed by Poisson sampling. The inference problem in this context is usually simplified by assuming the process to be time-homogeneous and that the process can be observed continuously for some observation period. But many real problems are not homogeneous; moreover, in practice it is often difficult to observe random processes continuously. In this work, the Dynkin Identity motivates a martingale estimating equation which is no more complicated a function of the parameters than the infinitesimal generator of the chain. The time-dependent generators of inhomogeneous chains therefore present no new obstacles. The Dynkin Martingale estimating equation derived here applies to processes discretely observed according to an independent Poisson process. Random observation of this kind alleviates the so-called aliasing problem, which can arise when continuous-time processes are observed discretely. Theoretical arguments exploit the martingale structure to obtain conditions ensuring strong consistency and asymptotic normality of the estimators. Simulation studies of a single-server Markov queue with sinusoidal arrivals test the performance of the estimators under different sampling schemes and against the benchmark maximum likelihood estimators based on continuous observation.
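As a point of reference for the sampling schemes compared above, here is a hedged sketch that simulates a two-state CTMC and computes the benchmark maximum-likelihood rate estimates from a continuously observed path: jumps out of a state divided by the time spent in it. All rates are invented, and the thesis's Poisson-sampled Dynkin-martingale estimating equation itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
q01, q10 = 2.0, 1.0        # true transition rates of the two-state chain
horizon = 2_000.0

state, t = 0, 0.0
time_in = [0.0, 0.0]       # occupation time of each state
jumps_from = [0, 0]        # number of jumps out of each state

# Simulate the chain by drawing exponential holding times (Gillespie-style).
while t < horizon:
    rate = q01 if state == 0 else q10
    hold = min(rng.exponential(1.0 / rate), horizon - t)
    time_in[state] += hold
    t += hold
    if t < horizon:
        jumps_from[state] += 1
        state = 1 - state

# Continuous-observation MLE: jumps out of a state / time spent there.
q01_hat = jumps_from[0] / time_in[0]
q10_hat = jumps_from[1] / time_in[1]
print(f"q01_hat={q01_hat:.3f}, q10_hat={q10_hat:.3f}")
```

Discarding the path between Poisson-distributed observation epochs, and estimating from those snapshots alone, is the harder problem the thesis addresses.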


## Transient Reward Approximation for Continuous-Time Markov Chains

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

89.09089%

We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in computer network performability analysis, power grids, computer virus vulnerability, and the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies.; Comment: Accepted for publication in IEEE Transactions on Reliability
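For context, the standard exact method for transient distributions and rewards of a modest CTMC is uniformization; the paper's abstraction-based symblicit approach is not shown here. A self-contained sketch on an invented two-state generator and reward vector:

```python
import math
import numpy as np

# Invented two-state generator (rows sum to zero) and per-state rewards.
Q = np.array([[-2.0, 2.0],
              [ 1.0, -1.0]])
r = np.array([0.0, 5.0])   # instantaneous reward per state
p0 = np.array([1.0, 0.0])  # start in state 0
t = 0.5

# Uniformization: P = I + Q/lam with lam >= the largest exit rate, then
# p(t) = sum_k Poisson(lam*t, k) * p0 @ P^k.
lam = max(-Q[i, i] for i in range(len(Q)))
P = np.eye(len(Q)) + Q / lam

p, term = np.zeros_like(p0), p0.copy()
for k in range(60):  # truncation: the Poisson tail beyond 60 is negligible here
    p += math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) * term
    term = term @ P

expected_reward = p @ r     # expected instantaneous reward at time t
print(p, expected_reward)
```

For this two-state chain the transient distribution has the closed form p0(t) = 1/3 + (2/3)e^{-3t}, which the truncated sum reproduces to high accuracy.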


## A Ruelle Operator for continuous time Markov Chains

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

79.013535%

We consider a generalization of the Ruelle theorem to the case of continuous-time problems. We present a result which we believe is important for future use in problems in Mathematical Physics related to $C^*$-algebras. We consider a finite state set $S$ and a stationary continuous-time Markov chain $X_t$, $t\geq 0$, taking values in $S$. We denote by $\Omega$ the set of paths $w$ taking values in $S$ (the elements $w$ are locally constant with left and right limits and are also right-continuous in $t$). We consider an infinitesimal generator $L$ and a stationary vector $p_0$. We denote by $P$ the associated probability on $(\Omega, {\cal B})$. This is the a priori probability. All functions $f$ we consider below are in the set ${\cal L}^\infty (P)$. From the probability $P$ we define a Ruelle operator ${\cal L}^t$, $t\geq 0$, acting on functions $f:\Omega \to \mathbb{R}$ of ${\cal L}^\infty (P)$. Given $V:\Omega \to \mathbb{R}$ which is constant on sets of the form $\{X_0=c\}$, we define a modified Ruelle operator $\tilde{{\cal L}}_V^t$, $t\geq 0$. We are able to show the existence of an eigenfunction $u$ and an eigenprobability $\nu_V$ on $\Omega$ associated to $\tilde{{\cal L}}^t_V$, $t\geq 0$. We also show the following property of the probability $\nu_V$: for any integrable $g\in {\cal L}^\infty (P)$ and any real and positive $t$, $$ \int e^{-\int_0^t (V \circ \theta_s)(\cdot)\, ds}\, [\, (\tilde{{\cal L}}^t_V (g)) \circ \theta_t \,]\, d\nu_V = \int g\, d\nu_V.$$ This equation generalizes...


## Non-equilibrium thermodynamic potentials for continuous-time Markov chains

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 05/08/2015
Portuguese

Search relevance

88.83126%

Keywords: Condensed Matter - Statistical Mechanics; Condensed Matter - Mesoscale and Nanoscale Physics; Mathematical Physics

We connect the rare fluctuations of an equilibrium (EQ) process to the typical fluctuations of a non-equilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugate variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables but fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes, ensuring that the NE potentials are Legendre duals. We find a variational principle satisfied by the NE potentials, which reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations, generalizing the Onsager reciprocity relations.; Comment: 16 pages, 2 tables, 2 figures


## Erratum to: Model-checking continuous-time Markov chains by Aziz et al

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 10/02/2011
Portuguese

Search relevance

88.71142%

This note corrects a discrepancy between the semantics and the algorithm of the multiple until operator of CSL, as in Pr_{> 0.0025} (a until[1,2] b until[3,4] c), in the article "Model-checking continuous-time Markov chains" by Aziz, Sanwal, Singhal and Brayton, TOCL 1(1), July 2000, pp. 162-170.


## Risk-sensitive control of continuous time Markov chains

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 14/09/2014
Portuguese

Search relevance

88.88704%

We study risk-sensitive control of continuous-time Markov chains taking values in a discrete state space. We study both finite- and infinite-horizon problems. In the finite-horizon problem we characterise the value function via the HJB equation and obtain an optimal Markov control. We do the same for the infinite-horizon discounted-cost case. In the infinite-horizon average-cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.; Comment: 19 pages, Stochastics, 2014


## A thermodynamic formalism for continuous time Markov chains with values on the Bernoulli Space: entropy, pressure and large deviations

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

89.53037%

In this paper we analyze the ergodic properties of continuous-time Markov chains with values in the one-dimensional spin lattice $\{1,\dots,d\}^{\mathbb{N}}$ (also known as the Bernoulli space). Initially, we consider as the infinitesimal generator the operator $L = {\cal L}_A - I$, where ${\cal L}_A$ is a discrete-time Ruelle operator (transfer operator) and $A:\{1,\dots,d\}^{\mathbb{N}} \to \mathbb{R}$ is a given fixed Lipschitz function. The associated continuous-time stationary Markov chain defines the a priori probability.
Given a Lipschitz interaction $V:\{1,\dots,d\}^{\mathbb{N}}\to \mathbb{R}$, we are interested in the Gibbs (equilibrium) state for such $V$. This is another continuous-time stationary Markov chain. In order to analyze this problem we use a continuous-time Ruelle operator (transfer operator) naturally associated to $V$. Among other things, we show that a continuous-time Perron-Frobenius theorem holds in the case where $V$ is a Lipschitz function.
We also introduce an entropy, which is negative, and we consider a variational principle of pressure. Finally, we analyze large deviation properties of the empirical measure in the continuous-time setting using results by Y. Kifer.; Comment: to appear in Journ. of Stat. Physics


## Joint density for the local times of continuous-time Markov chains: Extended version

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

78.98182%

We investigate the local times of a continuous-time Markov chain on an arbitrary discrete state space. For a fixed finite range of the Markov chain, we derive an explicit formula for the joint density of all local times on the range, at any fixed time. We use standard tools from the theory of stochastic processes and finite-dimensional complex calculus. We apply this formula in the following directions: (1) we derive large deviation upper estimates for the normalized local times beyond the exponential scale, (2) we derive the upper bound in Varadhan's lemma for any measurable functional of the local times, and (3) we derive large deviation upper bounds for continuous-time simple random walk on large subboxes of $\mathbb{Z}^d$ tending to $\mathbb{Z}^d$ as time diverges. We finally discuss the relation of our density formula to the Ray-Knight theorem for continuous-time simple random walk on $\mathbb{Z}$, which is analogous to the well-known Ray-Knight description of Brownian local times. In this extended version, we prove that the Ray-Knight theorem follows from our density formula.; Comment: 22 pages


## A Comment on the Book "Continuous-Time Markov Chains" by W.J. Anderson

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

88.71142%

The book "Continuous-Time Markov Chains" by W. J. Anderson collects a large part of the development of the subject over the past thirty years. It is now a popular reference for researchers in this subject and related fields. Unfortunately, due to a misunderstanding of the approximating methods, several results in the book are incorrectly stated or proved. Since the results are related to the present author's work, it may be the author's duty to correct the mistakes in order to avoid further confusion. We emphasize the approximating methods because they are useful in many situations.; Comment: In the past twenty years or more, we have seen several times that some results from the book under review are either incorrectly used or cited. The uploaded older paper may be helpful in clarifying some confusions. Two footnotes are newly added.


## Cycle symmetries and circulation fluctuations for discrete-time and continuous-time Markov chains

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

89.15762%

In probability theory, equalities are far rarer than inequalities. In this paper, we find a series of equalities which characterize the symmetry of the forming times of a family of similar cycles for discrete-time and continuous-time Markov chains. Moreover, we use these cycle symmetries to study the circulation fluctuations for Markov chains. We prove that the empirical circulations of a family of cycles passing through a common state satisfy a large deviation principle with a rate function which has a highly non-obvious symmetry. Finally, we discuss the applications of our work in statistical physics and biochemistry.; Comment: 30 pages, 1 figure


## Weak Error for Continuous Time Markov Chains Related to Fractional in Time P(I)DEs

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 18/05/2015
Portuguese

Search relevance

88.86219%

We provide sharp error bounds for the difference between the transition densities of some multidimensional continuous-time Markov chains (CTMCs) and the fundamental solutions of some fractional-in-time partial (integro) differential equations (P(I)DEs). Namely, we consider equations involving a time-fractional derivative of Caputo type and a spatial operator corresponding to the generator of a non-degenerate Brownian- or stable-driven stochastic differential equation (SDE).; Comment: 36 pages


## Optimization-based Lyapunov function construction for continuous-time Markov chains with affine transition rates

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 24/12/2014
Portuguese

Search relevance

88.71142%

We address the problem of Lyapunov function construction for a class of
continuous-time Markov chains with affine transition rates, typically
encountered in stochastic chemical kinetics. Following an optimization
approach, we take advantage of existing bounds from the Foster-Lyapunov
stability theory to obtain functions that enable us to estimate the region of
high stationary probability, as well as provide upper bounds on moments of the
chain. Our method can be used to study the stationary behavior of a given chain
without resorting to stochastic simulation, in a fast and efficient manner.
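To illustrate the kind of Foster-Lyapunov bound being optimized over, consider a birth-death chain with affine rates (a toy single-species system, not an example from the paper): constant birth rate $a$ and death rate $bx$. With $V(x)=x$, the generator gives the drift $LV(x) = a - bx$, which is negative outside a finite set and yields the stationary moment bound $E[X] \le a/b$. A numeric check of the drift condition, with invented constants:

```python
# Drift of V(x) = x under an affine-rate birth-death chain:
# birth rate a (constant), death rate b*x. Constants are invented.
a, b = 4.0, 1.5

def drift(x: int) -> float:
    """LV(x) = sum over jumps of rate * (V(target) - V(x)) = a - b*x."""
    birth, death = a, b * x
    return birth * (+1.0) + death * (-1.0)

# Foster-Lyapunov condition: drift(x) <= -1 for all x outside a finite set,
# here whenever x >= (a + 1) / b.
threshold = (a + 1.0) / b
assert all(drift(x) <= -1.0 for x in range(int(threshold) + 1, 100))

# Setting the stationary expected drift to zero recovers E[X] <= a/b.
print("stationary mean bound:", a / b)
```

The optimization approach in the paper searches over a family of such $V$ to tighten exactly these kinds of moment and high-probability-region bounds.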


## Poisson-type deviation inequalities for curved continuous-time Markov chains

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 05/09/2007
Portuguese

Search relevance

89.04071%

In this paper, we present new Poisson-type deviation inequalities for continuous-time Markov chains whose Wasserstein curvature or $\Gamma$-curvature is bounded below. Although these two curvatures are equivalent for Brownian motion on Riemannian manifolds, they are not comparable in discrete settings and yield different deviation bounds. In the case of birth-death processes, we provide conditions on the transition rates of the associated generator for such curvatures to be bounded below, and we extend the deviation inequalities established in [Ané, C. and Ledoux, M. On logarithmic Sobolev inequalities for continuous time random walks on graphs. Probab. Theory Related Fields 116 (2000) 573-602] for continuous-time random walks, seen as models of null curvature. Some applications of these tail estimates are given for Brownian-driven Ornstein-Uhlenbeck processes and $M/M/1$ queues.; Comment: Published at http://dx.doi.org/10.3150/07-BEJ6039 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)


## Explosion, implosion, and moments of passage times for continuous-time Markov chains: a semimartingale approach

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance

88.78561%

We establish general theorems quantifying, through an estimation of the moments of passage times, the notion of recurrence for irreducible continuous-time Markov chains on countably infinite state spaces. Sharp conditions for the occurrence of the phenomenon of explosion are also obtained. A new phenomenon of implosion is introduced and sharp conditions for its occurrence are proven. The general results are illustrated by treating models that exhibit difficult behaviour even in discrete time.; Comment: 33 pages


## Continuous-Time Tracking Algorithms Involving Two-Time-Scale Markov Chains

Source: Institute of Electrical and Electronics Engineers (IEEE Inc)
Publisher: Institute of Electrical and Electronics Engineers (IEEE Inc)

Type: Scientific journal article

Portuguese

Search relevance

79.29669%

Keywords: Algorithms; Approximation theory; Finite difference method; Lyapunov methods; Markov processes; Mathematical models; Matrix algebra; Ordinary differential equations; Probability; Theorem proving; Continuous-time Markov chain

This work is concerned with least-mean-squares (LMS) algorithms in continuous time for tracking a time-varying parameter process. A distinctive feature is that the true parameter process is changing at a fast pace driven by a finite-state Markov chain. The states of the Markov chain are divisible into a number of groups. Within each group, the transitions take place rapidly; among different groups, the transitions are infrequent. Introducing a small parameter into the generator of the Markov chain leads to a two-time-scale formulation. The tracking objective is difficult to achieve. Nevertheless, a limit result is derived yielding algorithms for limit systems. Moreover, the rates of variation of the tracking error sequence are analyzed. Under simple conditions, it is shown that a scaled sequence of the tracking errors converges weakly to a switching diffusion. In addition, a numerical example is provided and an adaptive step-size algorithm developed.
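As a discrete-time caricature of the tracking problem described above (the paper itself works in continuous time with a two-time-scale Markov chain driving the parameter), a plain LMS recursion chasing a parameter that jumps between regimes looks as follows; all constants are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu = 4_000, 0.05            # iterations and LMS step size

# True parameter switches between two regimes halfway through: a crude
# stand-in for the finite-state Markov chain of the paper.
theta_true = np.where(np.arange(n) < n // 2, 1.0, -1.0)

theta_hat = 0.0
errors = np.empty(n)
for k in range(n):
    x = rng.normal()                            # scalar regressor
    y = theta_true[k] * x + 0.1 * rng.normal()  # noisy observation
    e = y - theta_hat * x                       # prediction error
    theta_hat += mu * e * x                     # LMS update
    errors[k] = theta_true[k] - theta_hat

print(f"final estimate: {theta_hat:.3f}")
```

After the regime switch the estimate re-converges to the new value; analyzing how the scaled tracking error behaves when the switching itself is fast is the content of the paper's two-time-scale limit results.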
