Page 1 of results: 20 digital items found in 0.004 seconds

Markup of regions of interest in 3D on radiological images using the web

Hage, Cleber Castro
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's thesis Format: application/pdf
Published on 24/09/2014, Portuguese
Search relevance: 19.05152%
This work is part of a larger project, the electronic Physician Annotation Device (ePAD). ePAD enables building a medical knowledge base from semantic annotations of lesions in radiological images, using a Web platform. These annotations serve to identify, follow, and reason about tumor lesions in medical research (especially cancer research). The information acquired and persisted by the system enables automatic evaluation by computers, retrieval of hospital images, and other services related to medical exams. ePAD is a joint development of research groups at ICMC-USP and the Department of Radiology at Stanford University. The main work, presented in this text, is a new set of Web features that add three-dimensional markup of lesions in radiological images to ePAD. These features will yield more precise data on three-dimensional lesion measurements such as volume, position, and largest-diameter computation. The goal is to ease the work of radiology professionals in diagnostic analysis and lesion follow-up, producing a more accurate record of the evolution of diseases such as cancer. Annotations can be linked to lesions and carry semantic information...

Gesture based interface for image annotation

Gonçalves, Duarte Nuno de Jesus
Source: Faculdade de Ciências e Tecnologia Publisher: Faculdade de Ciências e Tecnologia
Type: Master's thesis
Published in 2008, Portuguese
Search relevance: 58.997954%
Dissertation submitted for the degree of Master in Informatics Engineering at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia; Given the complexity of visual information, multimedia content search presents more problems than textual search. This level of complexity is related to the difficulty of automatic image and video tagging, i.e. describing the content with a set of keywords. Generally, this annotation is performed manually (e.g., Google Image) and the search is based on pre-defined keywords. However, this task takes time and can be dull. The objective of this dissertation project is to define and implement a game for annotating personal digital photos with a semi-automatic system. The game engine tags images automatically, and the player's role is to contribute correct annotations. The application is composed of the following main modules: a module for automatic image annotation, a module that manages the game's graphical interface (showing images and tags), a module for the game engine, and a module for human interaction. The interaction is performed with a pre-defined set of gestures, using a web camera. These gestures are detected using computer vision techniques and interpreted as user actions. The dissertation also presents a detailed analysis of this application...

People and object tracking for video annotation

Silva, João Miguel Ferreira da
Source: Faculdade de Ciências e Tecnologia Publisher: Faculdade de Ciências e Tecnologia
Type: Master's thesis
Published in 2012, Portuguese
Search relevance: 69.198574%
Dissertation for the degree of Master in Informatics Engineering; Object tracking is a thoroughly researched problem, with a body of associated literature dating at least as far back as the late 1970s. However, and despite the development of some satisfactory real-time trackers, it has not yet seen widespread use. This is not due to a lack of applications for the technology, since several interesting ones exist. In this document, it is postulated that this status quo is due, at least in part, to a lack of easy-to-use software libraries supporting object tracking. An overview of the problems associated with object tracking is presented and the process of developing one such library is documented. This discussion includes how to overcome problems like heterogeneities in object representations and requirements for training or initial object position hints. Video annotation is the process of associating data with a video's content. Associating data with a video has numerous applications, ranging from making large video archives or long videos searchable, to enabling discussion about and augmentation of the video's content. Object tracking is presented as a valid approach to both automatic and manual video annotation, and the integration of the developed object tracking library into an existing video annotator...

An alternative approach to multiple genome comparison

Mancheron, Alban; Uricaru, Raluca; Rivals, Eric
Source: Oxford University Press Publisher: Oxford University Press
Type: Scientific journal article
Portuguese
Search relevance: 17.705424%
Genome comparison is now a crucial step for genome annotation and identification of regulatory motifs. Genome comparison aims, for instance, at finding genomic regions either specific to, or in one-to-one correspondence between, individuals/strains/species. It serves, e.g., to pre-annotate a new genome by automatically transferring annotations from a known one. However, the efficiency, flexibility, and objectives of current methods do not suit the whole spectrum of applications, genome sizes, and organizations. Innovative approaches are still needed. Hence, we propose an alternative way of comparing multiple genomes based on segmentation by similarity. In this framework, rather than being formulated as a complex optimization problem, genome comparison is seen as a segmentation question for which a single optimal solution can be found in almost linear time. We apply our method to analyse three strains of a virulent pathogenic bacterium, Ehrlichia ruminantium, and identify 92 new genes. We also find that a substantial number of genes thought to be strain-specific have potential orthologs in the other strains. Our solution is implemented in an efficient program, qod, equipped with a user-friendly interface, which enables the automatic transfer of annotations between compared genomes or contigs (Video in Supplementary Data). Because it somehow disregards the relative order of genomic blocks...

Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method

Jansen, Sander E. M.; Bergmann Tiest, Wouter M.; Kappers, Astrid M. L.
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific journal article
Published on 06/02/2015, Portuguese
Search relevance: 39.216255%
In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that this particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is lower for human-automatic pairs than for human-human pairs (55.7% vs. 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore...
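The detection logic described above, computing kinematic variables per frame and checking each EP's predefined criteria, can be sketched as follows. The thresholds, variable set, and EP labels here are illustrative placeholders, not the paper's actual criteria:

```python
import numpy as np

def annotate_eps(positions, orientations, dt=0.01):
    """Label each inter-frame interval with a hand movement pattern by
    thresholding kinematic variables derived from motion-tracking data.
    Thresholds and pattern names below are illustrative stand-ins for
    the predefined per-EP criteria described in the abstract."""
    # kinematic variables: linear speed (m/s) and angular speed (rad/s)
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    ang_speed = np.abs(np.diff(orientations)) / dt
    labels = []
    for v, w in zip(speed, ang_speed):
        if v < 0.01 and w < 0.1:
            labels.append("static contact")      # hand nearly still
        elif v >= 0.01 and w < 0.1:
            labels.append("lateral motion")      # translation, little rotation
        elif v < 0.01:
            labels.append("rotation in place")   # orientation change only
        else:
            labels.append("contour following")   # combined movement
    return labels
```

A real implementation would also require every criterion of an EP (contact, grip aperture, etc.) to hold before emitting a label, exactly as the all-criteria-met rule above states.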

VideoZoom: Summarizing surveillance images for safeguards video reviews

Blunsden, Scott; Versino, Cristina
Source: Publications Office of the European Union Publisher: Publications Office of the European Union
Type: EUR - Scientific and Technical Research Reports Format: Printed
Portuguese
Search relevance: 27.705425%
This report presents VideoZoom, a prototype review tool that builds automatic summaries from sequences of surveillance images taken by cameras with a fixed point of view. These summary images are then visualised in a zooming user interface that allows the discovery and annotation of images of interest. The prototype system was used to detect safeguards-relevant events in image sequences acquired in nuclear facilities. A first evaluation of the prototype with inspectors from DG-ENER was performed. Results indicate that the system allows accurate reviews, can save effort, and is easy to learn and use. In addition, the system allows detection of unexpected events that would be missed by standard review tools.; JRC.E.8-Nuclear security

Interactive Video Annotation Tool

Serrano, Miguel Á.; Patricio Guisado, Miguel Ángel; García, Jesús; Molina, José M.
Source: Springer Publisher: Springer
Type: info:eu-repo/semantics/acceptedVersion; info:eu-repo/semantics/conferenceObject; info:eu-repo/semantics/bookPart
Published in 2010, Portuguese
Search relevance: 69.90802%
Abstract: The computer vision discipline increasingly needs annotated video databases to carry out assessment tasks. Manually providing ground-truth data for multimedia resources is very expensive in terms of effort, time, and economic resources. Automatic and semi-automatic video annotation and labeling is the faster and more economical way to obtain ground truth for large video collections. In this paper, we describe a new automatic and supervised video annotation tool. The annotation tool is a modified version of the ViPER-GT tool. The standard version of ViPER-GT allows manually editing and reviewing video metadata to generate assessment data. Automatic annotation is made possible by an incorporated tracking system that can handle the visual data association problem in real time. The research aim is to offer a system that lets users spend less time producing valid assessment models.; Proceedings of: Fourth International Workshop on User-Centric Technologies and Applications (CONTEXTS 2010). Valencia, 7-10 September 2010.

Development of a tool for manual annotation of video sequences

Witmaar García, Yoel
Source: Universidad Autónoma de Madrid Publisher: Universidad Autónoma de Madrid
Type: Undergraduate final project
Portuguese
Search relevance: 39.917832%
Annotating images and videos is important for those who work in image processing, but it is a very hard task to carry out purely by hand. It is therefore necessary to develop annotation tools that ease this work, so that the program uses already-implemented algorithms that help make image and video annotation more efficient. The first step of this work is to survey the tools that exist today and assess their features. From this information, ideas can be drawn about which features will most help users annotate more effectively and handle whatever video or image formats are needed. The next step is to choose an annotation tool to start from, one that at least performs manual annotation, in order to add functionality to it, such as semi-automatic and automatic annotation. Another objective is to make it capable of supporting more image and video formats. The most effective way to increase a tool's annotation efficiency is to add automatic annotation, using object detection algorithms that are already implemented. Adding mechanisms that propagate annotations from one image to another is also effective...

Succeeding metadata based annotation scheme and visual tips for the automatic assessment of video aesthetic quality in car commercials

Fernández-Martínez, Fernando; Hernández-García, Alejandro; Díaz-de-María, Fernando
Source: Elsevier Publisher: Elsevier
Type: info:eu-repo/semantics/acceptedVersion; info:eu-repo/semantics/article
Published in January 2015, Portuguese
Search relevance: 59.17107%
In this paper, we present a computational model capable of predicting viewer perception of car advertisement videos from a set of low-level video descriptors. Our research goal relies on the hypothesis that these descriptors could reflect the aesthetic value of the videos and, in turn, their viewers' perception. To that effect, and as a novel approach to this problem, we automatically annotate our video corpus, downloaded from YouTube, by applying an unsupervised clustering algorithm to the retrieved metadata linked to the viewers' assessments of the videos. In this regard, a regular k-means algorithm is applied as the partitioning method, with k ranging from 2 to 5 clusters, modeling different satisfaction levels or classes. On the other hand, the available metadata is categorized into two different types based on the profile of the viewers of the videos: metadata based on explicit and implicit opinion, respectively. These two types of metadata are first tested individually and then combined, resulting in three different models or strategies that are thoroughly analyzed. Typical feature selection techniques are applied over the implemented video descriptors as a pre-processing step in the classification of viewer perception, where several different classifiers have been considered as part of the experimental setup. Evaluation results show that the proposed video descriptors are clearly indicative of the subjective perception of viewers, regardless of the implemented strategy and the number of classes considered. The strategy based on explicit opinion metadata clearly outperforms the implicit one in terms of classification accuracy. Finally...
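The clustering step described above, plain k-means over per-video opinion metadata with k swept over several class counts, can be sketched as below. The two toy features standing in for viewer-opinion metadata are an assumption for illustration:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) used to partition per-video
    metadata into k satisfaction classes. The feature choice and data
    are illustrative, not the paper's actual YouTube metadata."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random init
    for _ in range(iters):
        # assign each video to its nearest class centroid
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):                    # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy metadata: (likes ratio, dislikes ratio) per video; sweep k = 2..5
meta = np.array([[0.9, 0.1], [0.85, 0.2], [0.8, 0.15],
                 [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])
partitions = {k: kmeans(meta, k)[0] for k in range(2, 6)}
```

Each resulting labeling then serves as the class annotation that the downstream classifiers are trained against.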

An architecture for content personalization based on peer-level annotations

Manzato, Marcelo Garcia
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Doctoral thesis Format: application/pdf
Published on 14/02/2011, Portuguese
Search relevance: 27.705425%
Extracting semantic metadata from digital video for use in personalization services is important, since content is adapted according to each user's preferences. However, although several proposals can be found in the literature, automatic indexing techniques can generate semantic information only when the content domain is restricted. Alternatively, there are techniques for the manual creation of this information by professionals; these, however, are expensive and error-prone. A possible solution would be to exploit collaborative user annotations, but such a strategy causes a loss of the data's individuality, preventing individual preferences from being extracted from the interaction. This work proposes a personalization architecture that enables unrestricted and inexpensive multimedia indexing using collaborative annotations, while preserving the individuality of the data so that the user's interest profile can be complemented with relevant concepts. The multimodality of metadata and preferences is also explored in this thesis, providing greater robustness in extracting this information and a richer semantic load that benefits applications. As a proof of concept...

Proposition of a method for tennis players motion analysis on broadcast videos

Cláudio Luís Roveri Vieira
Source: Biblioteca Digital da Unicamp Publisher: Biblioteca Digital da Unicamp
Type: Master's thesis Format: application/pdf
Published on 21/02/2013, Portuguese
Search relevance: 28.790796%
The aim of the present study is to propose a set of methods for analyzing the movement of tennis players from pre-calibrated videos recorded with fixed cameras, as well as from broadcast videos. First, a method was proposed and validated to automatically track tennis players in videos collected on site with fixed, previously calibrated cameras. The automatic tracking rate was 99.98%. The distances covered by the two players during one set of a tennis match could also be computed. The reconstruction of the two-dimensional court coordinates was validated by computing intra-operator repeatability (0.009 m), inter-operator repeatability (0.007 m), and the relative error (length and width: 0.03% and 0.06%). The measurement error achieved for the player position was 0.36 m. For both coordinates there was a significant linear regression (R² > 0.99, p < 0.05) between the positions obtained by the automatic and manual tracking methods. In addition, the distances covered by both players could be extracted as an example application of the method. The second proposed method was the automatic detection of rallies in broadcast tennis videos. The developed method was based on histogram extraction...
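The court-coordinate reconstruction and distance computation above rest on a standard planar mapping: a player's image position is projected onto the court plane with a homography, and per-frame displacements are summed. A minimal sketch follows; the actual homography values would come from the study's camera calibration, which is not reproduced here:

```python
import numpy as np

def image_to_court(H, pt):
    """Map a player's image position (pixels) to 2D court coordinates
    (meters) using a planar homography H obtained once from the
    calibrated fixed camera. Generic projection step, not the study's
    exact calibration procedure."""
    x, y = pt
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]  # homogeneous normalization

def distance_covered(court_positions):
    """Total distance covered by a player, summing frame-to-frame
    displacements of the court-plane positions."""
    d = np.diff(np.asarray(court_positions, dtype=float), axis=0)
    return float(np.linalg.norm(d, axis=1).sum())
```

With per-frame tracked positions projected this way, one set's worth of displacements sums directly to the distances reported in the abstract.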

Monitoring and characterization of physical activity

Soares, Sérgio Hélder da Silva
Source: Instituto Politécnico do Porto Publisher: Instituto Politécnico do Porto
Type: Master's thesis
Published in 2014, Portuguese
Search relevance: 28.3491%
Monitoring physical activity is a topic that has gained ever more importance. This is due to the growing sedentary lifestyle of the population in general, driven by several factors such as enormous technological growth and reduced leisure time. The population increasingly tends to replace activities such as a simple walk to work or school with some kind of technology that reduces the body's energy expenditure, the (excessive) use of cars being paradigmatic. As a consequence of the lack of physical activity, diseases such as obesity and heart problems have been increasing across age groups, with particular relevance in children. In recent decades, research initiatives aimed at understanding the factors that affect the practice of physical activity, in order to later promote it, have multiplied. Several methods exist; direct observation, with observers present, is usually preferred. However, these methods have some limitations. Consequently, additional research efforts and new techniques or methodologies are needed. This dissertation intends to contribute actively to research on the promotion of physical activity through the use of video...

Video metadata extraction in a videoMail system

Moskovchuk, Serhiy
Source: Universidade Nova de Lisboa Publisher: Universidade Nova de Lisboa
Type: Master's thesis
Published in May 2015, Portuguese
Search relevance: 49.589443%
Currently the world is swiftly adapting to visual communication. Online services like YouTube and Vine show that video is no longer the domain of broadcast television alone. Video is used for different purposes such as entertainment, information, education, or communication. The rapid growth of today's video archives, with sparsely available editorial data, creates a big retrieval problem. Humans see a video as a complex interplay of cognitive concepts. As a result, there is a need to build a bridge between numeric values and semantic concepts, establishing a connection that will facilitate video retrieval by humans. The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is very tedious, subjective, and expensive; therefore, automatic annotation is being actively studied. In this thesis we focus on automatic annotation of multimedia content: namely, the use of analysis techniques for information retrieval that allow metadata to be extracted automatically from video in a videomail system, including the identification of text, people, actions, spaces, and objects, including animals and plants. It will thus be possible to align multimedia content with the text presented in the email message, and to create applications for semantic video database indexing and retrieval.

Human-machine collaboration for rapid speech transcription

Roy, Brandon C. (Brandon Cain)
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral thesis Format: 127 p.
Portuguese
Search relevance: 18.684153%
Inexpensive storage and sensor technologies are yielding a new generation of massive multimedia datasets. The exponential growth in storage and processing power makes it possible to collect more data than ever before, yet without appropriate content annotation for search and analysis such corpora are of little use. While advances in data mining and machine learning have helped to automate some types of analysis, the need for human annotation still exists and remains expensive. The Human Speechome Project is a heavily data-driven longitudinal study of language acquisition. More than 100,000 hours of audio and video recordings have been collected over a two year period to trace one child's language development at home. A critical first step in analyzing this corpus is to obtain high quality transcripts of all speech heard and produced by the child. Unfortunately, automatic speech transcription has proven to be inadequate for these recordings, and manual transcription with existing tools is extremely labor intensive and therefore expensive. A new human-machine collaborative system for rapid speech transcription has been developed which leverages both the quality of human transcription and the speed of automatic speech processing. Machine algorithms sift through the massive dataset to find and segment speech. The results of automatic analysis are handed off to humans for transcription using newly designed tools with an optimized user interface. The automatic algorithms are tuned to optimize human performance...

A Data-Driven Approach for Tag Refinement and Localization in Web Videos

Ballan, Lamberto; Bertini, Marco; Serra, Giuseppe; Del Bimbo, Alberto
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Portuguese
Search relevance: 38.28588%
Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. With current systems it is easy to tag a single photo, and even tagging part of a photo, like a face, has become common on sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time-consuming, so users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users and localizes them temporally, associating tags with keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and the visual similarity of keyframes to images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method selects on the fly, from these visual sources, the training exemplars that should be most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag...
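The label-transfer step described above, selecting the most visually similar tagged exemplars on the fly and voting their tags onto the keyframe, can be sketched as follows. The features and tags are illustrative stand-ins for real visual descriptors and user-generated Flickr/YouTube tags:

```python
import numpy as np
from collections import Counter

def transfer_tags(keyframe_feat, pool_feats, pool_tags, n_neighbors=3, top_k=2):
    """Transfer tags from a pool of tagged web images to a video keyframe.
    The most similar exemplars vote their tags, weighted by cosine
    similarity; the top-voted tags are kept. A sketch of the general
    data-driven idea, not the paper's exact pipeline."""
    # cosine similarity between the keyframe and every pool image
    sims = pool_feats @ keyframe_feat / (
        np.linalg.norm(pool_feats, axis=1) * np.linalg.norm(keyframe_feat))
    nn = np.argsort(-sims)[:n_neighbors]   # nearest training exemplars
    votes = Counter()
    for i in nn:
        for tag in pool_tags[i]:
            votes[tag] += sims[i]          # similarity-weighted tag vote
    return [t for t, _ in votes.most_common(top_k)]
```

Applied per keyframe, this both refines the video's tag set and localizes each tag temporally, since votes are accumulated keyframe by keyframe.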

Text Based Approach For Indexing And Retrieval Of Image And Video: A Review

Bhute, Avinash N; Meshram, B. B.
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 05/04/2014, Portuguese
Search relevance: 28.019836%
Text present in multimedia contains useful information for automatic annotation and indexing. The extracted information is used to recognize overlay or scene text in a given video or image, and the extracted text can then be used to retrieve the videos and images. In this paper we first discuss the different techniques for text extraction from images and videos; we then review techniques for indexing and retrieval of images and videos using the extracted text.; Comment: 12 pages

Using Descriptive Video Services to Create a Large Data Source for Video Annotation Research

Torabi, Atousa; Pal, Christopher; Larochelle, Hugo; Courville, Aaron
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 03/03/2015, Portuguese
Search relevance: 38.404119%
In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware. Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing.; Comment: 7 pages

Robust visual tracking in complex environments

Varona Gómez, Javier
Source: Bellaterra : Universitat Autònoma de Barcelona, Publisher: Bellaterra : Universitat Autònoma de Barcelona,
Type: Electronic theses and dissertations; info:eu-repo/semantics/doctoralThesis Format: application/pdf
Published in 2002, Portuguese
Search relevance: 28.28588%
Available from TDX; Title taken from the digitized cover; Visual object tracking can be expressed as an estimation problem, in which one wishes to find the values that describe the objects' trajectories but only observations are available. This thesis presents two approaches to solving this problem in complex computer vision applications. The first approach is based on using information from the context in which tracking takes place. As a result, a video annotation application is presented: the 3D reconstruction of plays from a football match. Adopting a Bayesian visual tracking scheme, the second approach is an algorithm that uses the appearance values of image pixels as observations. This algorithm, called iTrack, is based on building and fitting a statistical model of the appearance of the object to be tracked. To show the usefulness of the new algorithm, an automatic video surveillance application is presented. This problem is difficult to solve due to the diversity of scenarios and acquisition conditions.; Visual tracking can be stated as an estimation problem. The main goal is to estimate the values that describe the object trajectories...

Legal multimedia management and semantic annotation for improved search and retrieval

González-Conejero, Jorge; Teodoro, Emma; Galera, Núria
Source: Florence European Press Academic Publishing Publisher: Florence European Press Academic Publishing
Type: Book chapter Format: application/pdf
Published in 2010, Portuguese
Search relevance: 28.584954%
In this work, we study the possibilities of multimedia management and automatic annotation in the legal domain. In this field, professionals spend most of their time searching for and retrieving legal information. For instance, in scenarios such as e-discovery and e-learning, search and retrieval of multimedia content are the basis of entire applications. In addition, the legal multimedia explosion increases the need to store these files in a structured form, to facilitate efficient and effective access to this information. Furthermore, the improvements achieved by sensors and video recorders in recent years have increased the size of these files, producing an enormous demand for storage capacity. JPEG2000 and MPEG-7 are international standards from the ISO/IEC organization that reduce, to some degree, the amount of data needed to store these files. These standards also allow semantic annotations to be embedded in the considered file formats, and this information to be accessed without decompressing the contained video or image. How to obtain semantic information from multimedia is also studied, as well as the different techniques to exploit and combine this information.

The Caltech Tomography Database and Automatic Processing Pipeline

Ding, H. Jane; Oikonomou, Catherine M.; Jensen, Grant J.
Source: Elsevier Publisher: Elsevier
Type: Article; PeerReviewed Format: application/pdf
Published in November 2015, Portuguese
Search relevance: 28.3491%
Here we describe the Caltech Tomography Database and automatic image processing pipeline, designed to process, store, display, and distribute electron tomographic data including tilt-series, sample information, data collection parameters, 3D reconstructions, correlated light microscope images, snapshots, segmentations, movies, and other associated files. Tilt-series are typically uploaded automatically during collection to a user’s “Inbox” and processed automatically, but can also be entered and processed in batches via scripts or file-by-file through an internet interface. As with the video website YouTube, each tilt-series is represented on the browsing page with a link to the full record, a thumbnail image and a video icon that delivers a movie of the tomogram in a pop-out window. Annotation tools allow users to add notes and snapshots. The database is fully searchable, and sets of tilt-series can be selected and re-processed, edited, or downloaded to a personal workstation. The results of further processing and snapshots of key results can be recorded in the database, automatically linked to the appropriate tilt-series. While the database is password-protected for local browsing and searching, datasets can be made public and individual files can be shared with collaborators over the Internet. Together these tools facilitate high-throughput tomography work by both individuals and groups.