Page 1 of results: 36 digital items found in 0.016 seconds

Detection of Auditory Cortex Activity by fMRI Using a Dependent Component Analysis

ESTOMBELO-MONTESCO, Carlos A.; STURZBECHER, Marcio Jr.; BARROS, Allan K. D.; ARAUJO, Draulio B. de
Source: SPRINGER-VERLAG BERLIN Publisher: SPRINGER-VERLAG BERLIN
Type: Scientific Journal Article
Portuguese
Search Relevance
85.84%
Functional MRI (fMRI) data often have a low signal-to-noise ratio (SNR) and are contaminated by strong interference from other physiological sources. A promising tool for extracting signals, even under low-SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). BSS is based on the assumption that the detected signals are a mixture of a number of independent source signals that are linearly combined via an unknown mixing matrix. BSS seeks to determine the mixing matrix to recover the source signals based on principles of statistical independence. In most cases, extraction of all sources is unnecessary; instead, a priori information can be applied to extract only the signal of interest. Herein we propose an algorithm based on a variation of ICA, called Dependent Component Analysis (DCA), where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We applied this method to fMRI data, aiming to find the hemodynamic response that follows neuronal activation from auditory stimulation in human subjects. The method localized significant signal modulation in cortical regions corresponding to the primary auditory cortex. The results obtained by DCA were also compared to those of the General Linear Model (GLM)...
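
The delay-based extraction described above can be sketched compactly. The following is a minimal AMUSE-style illustration of pulling out one source via a time-lagged covariance, not the authors' DCA implementation; the delay, which the abstract obtains from an autocorrelation analysis, is here fixed to the known period of a toy source:

```python
import numpy as np

def extract_delayed_source(X, tau):
    """Extract one source from linear mixtures X (channels x samples)
    by diagonalizing the time-lagged covariance at delay tau."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten the mixtures
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    # Symmetrized lagged covariance at the chosen delay
    Ct = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
    lam, V = np.linalg.eigh(0.5 * (Ct + Ct.T))
    # The eigenvector with the largest lagged eigenvalue selects the most
    # strongly self-correlated (e.g. periodic) component
    return V[:, np.argmax(lam)] @ Z

rng = np.random.default_rng(0)
t = np.arange(2000)
s1 = np.sin(2 * np.pi * t / 50)         # periodic source of interest
s2 = rng.standard_normal(2000)          # interference
X = rng.standard_normal((2, 2)) @ np.vstack([s1, s2])  # unknown mixing
y = extract_delayed_source(X, tau=50)   # delay = period, as autocorrelation would give
corr = abs(np.corrcoef(y, s1)[0, 1])    # recovered component vs. ground truth
```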

Methods for quality enhancement of voice communications over erasure channels

Neves, Filipe dos Santos
Source: Universidade de Trás-os-Montes e Alto Douro Publisher: Universidade de Trás-os-Montes e Alto Douro
Type: Doctoral Thesis
Portuguese
Search Relevance
25.62%
Doctoral Thesis in Informatics. This thesis presents research carried out by the author in the context of Quality of Experience (QoE) in voice communication systems subject to errors. It identifies the most relevant research problems, from which the motivation for the presented work follows, starting with the disturbances that contribute to the degradation of intelligibility experienced by users. It then reviews the voice quality enhancement techniques currently found in the literature for communication systems subject to transmission errors and data loss. In this context, Packet Loss Concealment (PLC), Quality of Service (QoS), and packet prioritization techniques are covered. This raises the need to evaluate the efficiency of a given technique, so the most important methods currently used to assess telephone voice quality are described, taking into account the inherently subjective human factors involved in the evaluation. Subjective methods for voice quality assessment are thus presented...

HEK293S Cells Have Functional Retinoid Processing Machinery

Brueggemann, Lioubov I.; Sullivan, Jack M.
Source: The Rockefeller University Press Publisher: The Rockefeller University Press
Type: Scientific Journal Article
Published on /06/2002 Portuguese
Search Relevance
35.61%
Rhodopsin activation is measured by the early receptor current (ERC), a conformation-associated charge motion, in human embryonic kidney cells (HEK293S) expressing opsins. After rhodopsin bleaching in cells loaded with 11-cis-retinal, ERC signals recover in minutes and recurrently over a period of hours by simple dark adaptation, with no added chromophore. The purpose of this study is to investigate the source of ERC signal recovery in these cells. Giant HEK293S cells expressing normal wild-type (WT)-human rod opsin (HEK293S) were regenerated by solubilized 11-cis-retinal, all-trans-retinal, or Vitamin A in darkness. ERCs were elicited by flash photolysis and measured by whole-cell recording. Visible flashes initially elicit bimodal (R1, R2) ERC signals in WT-HEK293S cells loaded with 11-cis-retinal for 40 min or overnight. In contrast, cells regenerated for 40 min with all-trans-retinal or Vitamin A had negative ERCs (R1-like) or none at all. After these were placed in the dark overnight, ERCs with outward R2 signals were recorded the following day. This indicates conversion of loaded Vitamin A or all-trans-retinal into cis-retinaldehyde that regenerated ground-state pigment. 4-butylaniline, an inhibitor of the mammalian retinoid cycle...

Dual Key Speech Encryption Algorithm Based Underdetermined BSS

Zhao, Huan; He, Shaofang; Chen, Zuo; Zhang, Xixiang
Source: Hindawi Publishing Corporation Publisher: Hindawi Publishing Corporation
Type: Scientific Journal Article
Portuguese
Search Relevance
35.8%
When the number of mixed signals is smaller than the number of source signals, underdetermined blind source separation (BSS) becomes a significantly more difficult problem. Given the large volume of data in speech communications and the demand for real-time communication, we exploit the intractability of the underdetermined BSS problem to present a dual-key speech encryption method. The original speech is mixed with dual key signals, which consist of random key signals (a one-time pad) generated from a secret seed and chaotic signals generated by a chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed speech encryption algorithm can resist traditional attacks against the encryption system, and owing to the approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has a high level of security and can recover the original signals quickly and efficiently while maintaining excellent audio quality.
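
The asymmetry the abstract relies on can be sketched in a few lines: the legitimate receiver, who knows both keys, faces a solvable linear system, while an eavesdropper sees two mixtures of three sources, i.e. an underdetermined BSS problem. This is a hedged toy sketch, with a logistic map standing in for the paper's (unspecified) chaotic system:

```python
import numpy as np

def logistic_key(seed, n, r=3.99):
    """Chaotic key stream from the logistic map (an assumed stand-in
    for the paper's chaotic system)."""
    x = np.empty(n)
    x[0] = seed
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x - 0.5

def encrypt(speech, pad, chaos, A):
    """Mix speech with the two key signals: 2 mixtures of 3 sources,
    so separation without the keys is underdetermined."""
    return A @ np.vstack([speech, pad, chaos])

def decrypt(X, pad, chaos, A):
    """With the dual keys known, subtract their contribution and solve
    for the speech column by least squares."""
    resid = X - A[:, 1:] @ np.vstack([pad, chaos])
    a = A[:, 0]
    return a @ resid / (a @ a)

rng = np.random.default_rng(1)
n = 1000
speech = np.sin(2 * np.pi * np.arange(n) / 80)  # toy "speech" signal
pad = rng.standard_normal(n)                    # one-time pad from secret seed
chaos = logistic_key(0.37, n)                   # chaotic key signal
A = rng.standard_normal((2, 3))                 # 2 observed mixtures of 3 sources
X = encrypt(speech, pad, chaos, A)
rec = decrypt(X, pad, chaos, A)
err = np.max(np.abs(rec - speech))              # exact up to rounding
```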

MEG Source Imaging Method using Fast L1 Minimum-norm and its Applications to Signals with Brain Noise and Human Resting-state Source Amplitude Images

Huang, Ming-Xiong; Huang, Charles W.; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L.; Baker, Dewleen G.; Song, Tao; Harrington, Deborah L.; Theilmann, Rebecca J.; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M.; Edgar, J. Christ
Source: PubMed Publisher: PubMed
Type: Scientific Journal Article
Portuguese
Search Relevance
35.75%
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using an L1 minimum norm (Fast-VESTAL) and then used the method to obtain source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time courses; 3) robustness to different SNR conditions, including SNRs with negative dB levels; 4) capability to handle correlated brain noise; and 5) ability to produce statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next...
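
The L1-minimum-norm inverse at the core of this family of methods can be illustrated with a generic iterative shrinkage (ISTA) solver on a toy gain matrix. This is a sketch of the sparse inverse problem only, not the Fast-VESTAL implementation, and the leadfield here is random rather than physical:

```python
import numpy as np

def ista_l1(L, y, lam, n_iter=500):
    """Solve min 0.5*||y - L s||^2 + lam*||s||_1 by iterative
    shrinkage-thresholding (ISTA): gradient step, then soft-threshold."""
    step = 1.0 / np.linalg.norm(L, 2) ** 2      # 1 / Lipschitz constant
    s = np.zeros(L.shape[1])
    for _ in range(n_iter):
        g = s + step * L.T @ (y - L @ s)        # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return s

rng = np.random.default_rng(2)
n_sensors, n_sources = 20, 100
L = rng.standard_normal((n_sensors, n_sources))  # toy gain (leadfield) matrix
s_true = np.zeros(n_sources)
s_true[[10, 55]] = [2.0, -1.5]                   # two focal sources
y = L @ s_true + 0.01 * rng.standard_normal(n_sensors)
s_hat = ista_l1(L, y, lam=0.5)
top = set(np.argsort(np.abs(s_hat))[-2:])        # strongest recovered sources
```

The L1 penalty is what lets 20 "sensors" localize focal activity among 100 candidate source locations, at the cost of a small amplitude bias.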

Diverse and Widespread Contamination Evident in the Unmapped Depths of High Throughput Sequencing Data

Lusk, Richard W.
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific Journal Article
Published on 29/10/2014 Portuguese
Search Relevance
25.57%
Trace quantities of contaminating DNA are widespread in the laboratory environment, but their presence has received little attention in the context of high throughput sequencing. This issue is highlighted by recent works that have rested controversial claims upon sequencing data that appear to support the presence of unexpected exogenous species. I used reads that preferentially aligned to alternate genomes to infer the distribution of potential contaminant species in a set of independent sequencing experiments. I confirmed that dilute samples are more exposed to contaminating DNA, and, focusing on four single-cell sequencing experiments, found that these contaminants appear to originate from a wide diversity of clades. Although negative control libraries prepared from ‘blank’ samples recovered the highest-frequency contaminants, low-frequency contaminants, which appeared to make heterogeneous contributions to samples prepared in parallel within a single experiment, were not well controlled for. I used these results to show that, despite heavy replication and plausible controls, contamination can explain all of the observations used to support a recent claim that complete genes pass from food to human blood. Contamination must be considered a potential source of signals of exogenous species in sequencing data...

Dynamic Allostery of the Catabolite Activator Protein Revealed by Interatomic Forces

Louet, Maxime; Seifert, Christian; Hensen, Ulf; Gräter, Frauke
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific Journal Article
Published on 05/08/2015 Portuguese
Search Relevance
35.51%
The Catabolite Activator Protein (CAP) is a showcase example of entropic allostery. For full activation and DNA binding, the homodimeric protein requires the binding of two cyclic AMP (cAMP) molecules in an anti-cooperative manner, the source of which appears to be largely entropic in nature according to previous experimental studies. Here we study the allosteric regulation of CAP at atomic detail with molecular dynamics (MD) simulations. We recover the experimentally observed entropic penalty for the second cAMP binding event with our recently developed force covariance entropy estimator and reveal allosteric communication pathways with Force Distribution Analysis (FDA). Our observations show that cAMP binding results in characteristic changes in the interaction pathways connecting the two cAMP allosteric binding sites with each other, as well as with the DNA binding domains. We identified crucial relays in the mostly symmetric allosteric activation network and suggest point mutants to test this mechanism. Our study suggests inter-residue forces, as opposed to coordinates, as a highly sensitive measure for structural adaptations that, even though minute, can very effectively propagate allosteric signals.

Blind separation of linear and nonlinear sources using a genetic algorithm, RBF artificial neural networks, and Rényi negentropy as an independence measure

Damasceno, Nielsen Castelo
Source: Universidade Federal do Rio Grande do Norte; BR; UFRN; Programa de Pós-Graduação em Engenharia Elétrica; Automação e Sistemas; Engenharia de Computação; Telecomunicações Publisher: Universidade Federal do Rio Grande do Norte; BR; UFRN; Programa de Pós-Graduação em Engenharia Elétrica; Automação e Sistemas; Engenharia de Computação; Telecomunicações
Type: Dissertation Format: application/pdf
Portuguese
Search Relevance
25.67%
Conventional methods for solving the nonlinear blind source separation problem generally impose a series of restrictions to obtain a solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we propose an alternative independence measure based on information theory and use artificial intelligence tools to solve first linear and then nonlinear blind source separation problems. In the linear case, we apply genetic algorithms with Rényi negentropy as the independence measure to find a separation matrix for linear mixtures of waveform, audio, and image signals, and we compare the results with two Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same independence measure as the cost function in a genetic algorithm to recover source signals mixed by nonlinear functions, using an artificial neural network of the radial basis function (RBF) type. Genetic algorithms are powerful global search tools and are therefore well suited to blind source separation problems. Tests and analyses are carried out through computer simulations...

Pattern recognition and tomographic reconstruction with Terahertz Signals for applications in biomedical engineering.

Yin, Xiaoxia
Source: Universidade de Adelaide Publisher: Universidade de Adelaide
Type: Doctoral Thesis
Published on //2009 Portuguese
Search Relevance
35.51%
Over the last ten years, terahertz (THz or T-ray) biomedical imaging has become a modality of interest due to its ability to simultaneously acquire both image and spectral information. Terahertz imaging systems are being commercialized, with increasing trials performed in a biomedical setting. Advanced digital image processing algorithms are greatly needed to assist screening, diagnosis, and treatment. Pattern recognition algorithms play a critical role in the accurate and automatic detection of abnormalities when applied to biomedical imaging. This goal requires classification of meaningful physical contrast and identification of information in images, for example, distinguishing between different biological tissues or materials. T-ray tomographic imaging and detection technology especially contributes to our ability to discriminate opaque objects with clear boundaries, and makes possible significant potential applications in both in vivo and ex vivo environments. The Thesis consists of a number of Chapters, which can be grouped into three parts. The first part provides a review of the state of the art regarding THz sources and detectors, THz imaging modes, and THz imaging analysis. Pattern recognition forms the second part of this Thesis...

Bayesian nonparametrics for time series modeling

Rodríguez Ruiz, Francisco Jesús
Source: Universidade Carlos III de Madrid Publisher: Universidade Carlos III de Madrid
Type: Doctoral Thesis
Portuguese
Search Relevance
25.67%
In many real-world signal processing problems, an observed temporal sequence can be explained by several unobservable independent causes, and we are interested in recovering the canonical signals that lead to these observations. For example, we may want to separate the overlapping voices on a single recording, distinguish the individual players on a financial market, or recover the underlying brain signals from electroencephalography data. This problem, known as source separation, is in general highly underdetermined or ill-posed. Methods for source separation generally seek to narrow the set of possible solutions in a way that is unlikely to exclude the desired solution. However, most classical approaches for source separation assume a fixed and known number of latent sources. This may represent a limitation in contexts in which the number of independent causes is unknown and is not limited to a small range. In this Thesis, we address the signal separation problem from a probabilistic modeling perspective. We encode our independence assumptions in a probabilistic model and develop inference algorithms to unveil the underlying sequences that explain the observed signal. We adopt a Bayesian nonparametric (BNP) approach in order to let the inference procedure estimate the number of independent sequences that best explain the data. BNP models place a prior distribution over an infinite-dimensional parameter space...

Estimation of Instantaneous Gas Exchange in Flow-Through Respirometry Systems: A Modern Revision of Bartholomew's Z-Transform Method

Pendar, Hodjat; Socha, John J.
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific Journal Article
Published on 14/10/2015 Portuguese
Search Relevance
25.66%
Flow-through respirometry systems provide accurate measurement of gas exchange over long periods of time. However, these systems have limitations in tracking rapid changes. When an animal infuses a metabolic gas into the respirometry chamber in a short burst, diffusion and airflow in the chamber gradually alter the original signal before it arrives at the gas analyzer. For single or multiple bursts, the recorded signal is smeared or mixed, which may result in dramatically altered recordings compared to the emitted signal. Recovering the original metabolic signal is a difficult task because of the inherent ill-conditioning of the problem. Here, we present two new methods to recover the fast dynamics of metabolic patterns from recorded data. We first re-derive the equations of the well-known Z-transform method (ZT method) to show the source of imprecision in this method. Then, we develop a new model of analysis for respirometry systems based on the experimentally determined impulse response, which is the response of the system to a very short unit input. As a result, we present a major modification of the ZT method (dubbed the ‘EZT method’) by using a new model for the impulse response, enhancing its precision to recover the true metabolic signals. The second method...
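
Impulse-response-based correction can be illustrated with a regularized FFT deconvolution. This is a generic sketch under the abstract's assumptions (a measured impulse response and a linear time-invariant chamber), not the authors' EZT method; the regularization term guards against the ill-conditioning the abstract mentions:

```python
import numpy as np

def deconvolve(recorded, h, eps=1e-6):
    """Recover the instantaneous input from a smeared recording, given
    the measured impulse response h, by Wiener-style FFT division."""
    n = len(recorded)
    H = np.fft.rfft(h, n)
    R = np.fft.rfft(recorded)
    # eps keeps near-zero frequencies of H from blowing up the inverse
    X = R * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n)

# Toy demo: two short metabolic bursts smeared by chamber washout
n = 512
t = np.arange(n)
h = np.exp(-t / 30.0)
h /= h.sum()                               # chamber washout impulse response
x = np.zeros(n)
x[50], x[80] = 1.0, 0.7                    # true burst signal
recorded = np.convolve(x, h)[:n]           # smeared, mixed recording
x_hat = deconvolve(recorded, h)            # sharp bursts recovered
```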

Signals from the Noise: Image Stacking for Quasars in the FIRST Survey

White, Richard L.; Helfand, David J.; Becker, Robert H.; Glikman, Eilat; deVries, Wim
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Scientific Journal Article
Portuguese
Search Relevance
35.56%
We present a technique to explore the radio sky into the nanoJansky regime by employing image stacking using the FIRST survey. We first discuss the non-intuitive relationship between the mean and median values of a distribution that is dominated by noise, followed by an analysis of the systematic effects present in FIRST's 20cm VLA snapshot images. Image stacking allows us to recover the properties of source populations with fluxes a factor of 30 or more below the rms noise level. Mean estimates of radio flux density, luminosity, etc., are derivable for any source class having arcsecond positional accuracy. We use this technique to compute the mean radio properties for 41,295 quasars from the SDSS DR3 catalog. There is a tight correlation between optical and radio luminosity, with the radio luminosity increasing as the 0.85 power of optical luminosity. This implies declining radio-loudness with optical luminosity: the most luminous objects (M=-28.5) have average radio-to-optical ratios 3 times lower than the least luminous objects (M=-20). There is also a striking correlation between optical color and radio loudness: quasars that are either redder or bluer than the norm are brighter radio sources, with objects 0.8 magnitudes redder than the SDSS composite spectrum having radio-loudness ratios that are higher by a factor of 10. We explore the longstanding question of whether a radio-loud/radio-quiet dichotomy exists in quasars...
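
The mean/median point can be made concrete with a toy simulation: when noise dominates, the median of the stacked values tracks the mean of a skewed flux distribution rather than its median. The numbers below are illustrative, not the survey's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_src = 100_000
flux = rng.exponential(0.03, n_src)      # skewed source flux distribution
pix = flux + rng.standard_normal(n_src)  # rms-1 noise: each source ~30x below it
direct_median = np.median(flux)          # ~0.03*ln(2) = 0.021: median flux
stack_mean = pix.mean()                  # ~0.03: mean flux survives stacking
stack_median = np.median(pix)            # ~0.03 as well: with dominant symmetric
                                         # noise, the median tracks the MEAN flux
```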

Localization of short duration gravitational-wave transients with the early advanced LIGO and Virgo detectors

Essick, Reed; Vitale, Salvatore; Katsavounidis, Erik; Vedovato, Gabriele; Klimenko, Sergey
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Scientific Journal Article
Portuguese
Search Relevance
35.57%
The Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo, advanced ground-based gravitational-wave detectors, will begin collecting science data in 2015. With first detections expected to follow, it is important to quantify how well generic gravitational-wave transients can be localized on the sky. This is crucial for correctly identifying electromagnetic counterparts as well as understanding gravitational-wave physics and source populations. We present a study of sky localization capabilities for two search and parameter estimation algorithms: coherent WaveBurst, a constrained likelihood algorithm operating in near real time, and LALInferenceBurst, a Markov chain Monte Carlo parameter estimation algorithm developed to recover generic transient signals with latency of a few hours. Furthermore, we focus on the first few years of the advanced detector era, when we expect to only have two (2015) and later three (2016) operational detectors, all below design sensitivity. These detector configurations can produce significantly different sky localizations, which we quantify in detail. We observe a clear improvement in localization of the average detected signal when progressing from two-detector to three-detector networks...

An analysis method for time ordered data processing of Dark Matter experiments

Moulin, E.; Macias-Perez, J. F.; Mayet, F.; Winkelmann, C.; Bunkov, Yu. M.; Godfrin, H.; Santos, D.; Collaboration, the MIMAC-He3
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Scientific Journal Article
Published on 03/04/2006 Portuguese
Search Relevance
35.63%
The analysis of the time ordered data of Dark Matter experiments is becoming more and more challenging with the increase of sensitivity in the ongoing and forthcoming projects. Combined with the well-known level of background events, this leads to a rather high level of pile-up in the data. Ionization, scintillation as well as bolometric signals present common features in their acquisition timeline: low frequency baselines, random gaussian noise, parasitic noise and signal characterized by well-defined peaks. In particular, in the case of long-lasting signals such as bolometric ones, the pile-up of events may lead to an inaccurate reconstruction of the physical signal (misidentification as well as fake events). We present a general method to detect and extract signals in noisy data with a high pile-up rate, and we show that events from a few keV to hundreds of keV can be reconstructed in time ordered data presenting a high pile-up rate. This method is based on an iterative detection and fitting procedure combined with prior wavelet-based denoising of the data and baseline subtraction. We have tested this method on simulated data of the MACHe3 prototype experiment and shown that the iterative fitting procedure allows us to recover the lowest energy events...
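
The iterative detection-and-fitting loop can be sketched as a matched-filter procedure: find the strongest pulse in the residual, fit its amplitude by least squares, subtract, and repeat. This generic version omits the paper's wavelet denoising step and assumes an exponential pulse shape for illustration:

```python
import numpy as np

def iterative_fit(y, template, n_events):
    """Iteratively detect and fit piled-up pulses of known shape."""
    resid = y - np.median(y)              # crude baseline subtraction
    tt = template @ template
    events = []
    for _ in range(n_events):
        # Matched filter: correlate the residual with the pulse template
        mf = np.correlate(resid, template, mode="valid")
        i = int(np.argmax(mf))
        a = mf[i] / tt                    # least-squares amplitude at position i
        resid[i:i + len(template)] -= a * template
        events.append((i, a))
    return sorted(events), resid

rng = np.random.default_rng(7)
template = np.exp(-np.arange(80) / 20.0)    # bolometric-like pulse shape
y = 0.2 + 0.005 * rng.standard_normal(600)  # baseline plus gaussian noise
y[100:180] += 1.0 * template                # event 1
y[160:240] += 0.6 * template                # event 2, piled up on event 1's tail
events, resid = iterative_fit(y, template, n_events=2)
```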

Bayesian Source Separation and Localization

Knuth, Kevin H.
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Scientific Journal Article
Published on 24/05/2002 Portuguese
Search Relevance
35.77%
The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes of the signals emitted by the sources and the relative locations of the detectors. Using this prior information, the algorithm finds the most probable source behavior and configuration. Thus, the inverse problem can be solved by simultaneously performing source separation and localization. It should be noted that this algorithm is not designed to account for delay times that are often important in acoustic source separation. However...

Robust parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library

Veitch, John; Raymond, Vivien; Farr, Benjamin; Farr, Will M.; Graff, Philip; Vitale, Salvatore; Aylott, Ben; Blackburn, Kent; Christensen, Nelson; Coughlin, Michael; Del Pozzo, Walter; Feroz, Farhan; Gair, Jonathan; Haster, Carl-Johan; Kalogera, Vicky; Li
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Scientific Journal Article
Portuguese
Search Relevance
35.69%
The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star, a neutron star–black hole binary and a binary black hole, where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms...
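
The credible-interval check described above can be illustrated in a conjugate toy model: draw signals from the prior, simulate data, and verify that 90% credible intervals cover the truth about 90% of the time. This is a sketch of the statistical argument only, with Gaussians standing in for the 15-dimensional GW parameter space:

```python
import numpy as np

# Conjugate-Gaussian toy: theta ~ N(0, 1) prior, y | theta ~ N(theta, sigma^2)
rng = np.random.default_rng(4)
n, sigma = 1000, 0.5
theta = rng.standard_normal(n)              # "signals drawn from the prior"
y = theta + sigma * rng.standard_normal(n)  # simulated observations
# Closed-form Gaussian posterior for each observation
post_var = 1.0 / (1.0 + 1.0 / sigma**2)
post_mean = post_var * y / sigma**2
half_width = 1.6449 * np.sqrt(post_var)     # 90% central credible interval
covered = np.mean(np.abs(theta - post_mean) <= half_width)  # fraction covered
```

Because the truths really are drawn from the prior, the 90% credible intervals are also 90% frequentist intervals in this marginal sense, which is exactly the property the abstract tests with its 100 prior-drawn signals.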

Amplify-and-Forward in Wireless Relay Networks

Agnihotri, Samar; Jaggi, Sidharth; Chen, Minghua
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Scientific Journal Article
Portuguese
Search Relevance
25.63%
A general class of wireless relay networks with a single source-destination pair is considered. Intermediate nodes in the network employ an amplify-and-forward scheme to relay their input signals. In this case the overall input-output channel from the source via the relays to the destination effectively behaves as an intersymbol interference channel with colored noise. Unlike previous work, we formulate the problem of the maximum achievable rate in this setting as an optimization problem with no assumptions on the network size, topology, or received signal-to-noise ratio. Previous work considered only scenarios wherein relays use all their power to amplify their received signals. We demonstrate that this may not always maximize the achievable rate in amplify-and-forward relay networks. The proposed formulation allows us to not only recover known results on the performance of the amplify-and-forward schemes for some simple relay networks but also characterize the performance of more complex amplify-and-forward relay networks which cannot be addressed in a straightforward manner using existing approaches. Using cut-set arguments, we derive simple upper bounds on the capacity of general wireless relay networks. Through various examples...

Parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library

Veitch, J.; Raymond, V.; Blackburn, K.; Smith, R.
Source: American Physical Society Publisher: American Physical Society
Type: Article; PeerReviewed Format: application/pdf; application/pdf
Published on 06/02/2015 Portuguese
Search Relevance
35.69%
The Advanced LIGO and Advanced Virgo gravitational-wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star, a neutron star–black hole binary and a binary black hole, where we show a cross comparison of results obtained using three independent sampling algorithms. These systems were analyzed with nonspinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analyzing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms...

Signals from the Noise: Image Stacking for Quasars in the FIRST Survey

White, Richard L.; Helfand, David J.; Becker, Robert H.; Glikman, Eilat; de Vries, Wim
Source: American Astronomical Society Publisher: American Astronomical Society
Type: Article; NonPeerReviewed Format: application/pdf
Published on 01/01/2007 Portuguese
Search Relevance
35.56%
We present a technique to explore the radio sky into the nanojansky regime by employing image stacking using the FIRST survey. We first discuss the nonintuitive relationship between the mean and median values of a non-Gaussian distribution that is dominated by noise, followed by an analysis of the systematic effects present in FIRST's 20 cm VLA snapshot images. Image stacking allows us to recover the properties of source populations with flux densities a factor of 30 or more below the rms noise level. Mean estimates of radio flux density, luminosity, etc. are derivable for any source class having arcsecond positional accuracy. We use this technique to compute the mean radio properties for 41,295 quasars from the SDSS DR3 catalog. There is a tight correlation between optical and radio luminosity, with the radio luminosity increasing as the 0.85 power of optical luminosity. This implies declining radio loudness with optical luminosity: the most luminous objects (M_(UV) = -28.5) have average radio-to-optical ratios 3 times lower than the least luminous objects (M_(UV) = -20). There is also a striking correlation between optical color and radio loudness: quasars that are either redder or bluer than the norm are brighter radio sources, with objects 0.8 mag redder than the SDSS composite spectrum having radio loudness ratios that are higher by a factor of 10. We explore the long-standing question of whether a radio-loud/radio-quiet dichotomy exists in quasars...

Evaluation of an automated struvite reactor to recover phosphorus from source-separated urine collected at urine diversion toilets in eThekwini

Grau, Maximilian GP; Rhoton, Sara L; Brouckaert, Chris J; Buckley, Chris A
Source: Water SA Publisher: Water SA
Type: Scientific Journal Article Format: text/html
Published on 01/04/2015 Portuguese
Search Relevance
25.7%
In the present study we attempted to develop a reactor system to recover phosphorus by struvite precipitation that can be installed anywhere in the field without access to a laboratory. A reactor was developed that can run fully automated and recover up to 93% of total phosphorus (total P). Turbidity and conductivity signals were investigated as automation proxies for magnesium dosage, making laboratory phosphate measurements to determine the exact magnesium dose unnecessary. Conductivity is highly influenced by the dosing parameters (molarity and pump speed), and turbidity is affected by particle size distribution issues. Algorithms based on both conductivity and turbidity signals were not able to detect the precipitation endpoint in real time. However, it proved possible to identify the endpoint retrospectively from the conductivity signal, and thereafter to dose an algorithm-calculated volume of urine to use up the excess magnesium dosed.
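
Retrospective endpoint identification can be sketched as locating the turning point of the conductivity trace, where the falling precipitation branch gives way to the rising excess-magnesium branch. The trace below is synthetic and the smoothing-plus-minimum rule is an assumed simplification, not the authors' algorithm:

```python
import numpy as np

def endpoint_index(cond, window=15):
    """Retrospective endpoint from a conductivity trace: smooth with a
    moving average, then find the minimum away from the edges."""
    kernel = np.ones(window) / window
    smooth = np.convolve(cond, kernel, mode="same")
    return int(np.argmin(smooth[window:-window])) + window

# Synthetic trace: conductivity falls while struvite precipitates,
# then rises once magnesium is dosed in excess (endpoint at t = 180)
t = np.arange(300)
cond = np.where(t < 180, 2.0 - 0.005 * t, 1.1 + 0.004 * (t - 180))
cond = cond + 0.01 * np.random.default_rng(5).standard_normal(300)
idx = endpoint_index(cond)    # recovered endpoint, near t = 180
```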