Page 1 of results: 24657 digital items found in 0.054 seconds

Computer algorithms and applications used to assist the evaluation and treatment of adolescent idiopathic scoliosis: a review of published articles 2000–2009

Phan, Philippe; Mezghani, Neila; Aubin, Carl-Éric; de Guise, Jacques A.; Labelle, Hubert
Source: Springer-Verlag Publisher: Springer-Verlag
Type: Journal Article
Portuguese
Search relevance: 46.12%
Adolescent idiopathic scoliosis (AIS) is a complex spinal deformity whose assessment and treatment present many challenges. Computer applications have been developed to assist clinicians. A literature review of computer applications used in AIS evaluation and treatment was undertaken; the algorithms used, their accuracy, and their clinical usability were analyzed. Computer applications have been used to create new classifications for AIS based on 2D and 3D features, to assess scoliosis severity or risk of progression, and to assist bracing and surgical treatment. It was found that classification accuracy could be improved using computer algorithms, that AIS patient follow-up and screening could be done using surface topography, thereby limiting radiation exposure, and that bracing and surgical treatment could be optimized using simulations. Yet few computer applications are routinely used in clinics. With the development of 3D imaging and databases, huge amounts of clinical and geometrical data need to be taken into consideration when researching and managing AIS. Computer applications based on advanced algorithms will be able to handle tasks that could not otherwise be done, potentially improving the management of AIS patients. Clinically oriented applications, and evidence that they improve current care, will be required for their integration into the clinical setting.

Efficient algorithms for new computational models

Ruhl, Jan Matthias, 1973-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 163 p.; 1188364 bytes; 1287306 bytes; application/pdf; application/pdf
Portuguese
Search relevance: 46.1%
Advances in hardware design and manufacturing often lead to new ways in which problems can be solved computationally. In this thesis we explore fundamental problems in three computational models that are based on such recent advances. The first model is based on new chip architectures, where multiple independent processing units are placed on one chip, allowing for an unprecedented parallelism in hardware. We provide new scheduling algorithms for this computational model. The second model is motivated by peer-to-peer networks, where countless (often inexpensive) computing devices cooperate in distributed applications without any central control. We state and analyze new algorithms for load balancing and for locality-aware distributed data storage in peer-to-peer networks. The last model is based on extensions of the streaming model. It is an attempt to capture the class of problems that can be efficiently solved on massive data sets. We give a number of algorithms for this model, and compare it to other models that have been proposed for massive data set computations. Our algorithms and complexity results for these computational models follow the central thesis that it is an important part of theoretical computer science to model real-world computational structures...

Adaptive algorithms for problems involving black-box Lipschitz functions; Adaptive analysis of algorithms for problems involving black-box Lipschitz functions

Baran, Ilya, 1981-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 62 p.; 3135145 bytes; 3134950 bytes; application/pdf; application/pdf
Portuguese
Search relevance: 46.06%
Suppose we are given a black-box evaluator (an oracle that returns the function value at a given point) for a Lipschitz function with a known Lipschitz constant. We consider queries that can be answered about the function by using a finite number of black-box evaluations. Specifically, we study the problems of approximating a Lipschitz function, approximately integrating a Lipschitz function, approximately minimizing a Lipschitz function, and computing the winding number of a Lipschitz curve in R² around a point. The goal is to minimize the number of evaluations used for answering a query. Because the complexity of the problem instances varies widely, depending on the actual function, we wish to design adaptive algorithms whose performance is close to the best possible on every problem instance. We give optimally adaptive algorithms for winding number computation and univariate approximation and integration. We also give a near-optimal adaptive algorithm for univariate approximation when the output of function evaluations is corrupted by random noise. For optimization over higher dimensional domains, we prove that good adaptive algorithms are impossible.; by Ilya Baran.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science...
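The adaptive strategy described above can be sketched for univariate integration. With a known Lipschitz constant L, the trapezoid estimate on an interval [x0, x1] can be off by at most L(x1 - x0)^2/4, so a natural adaptive scheme keeps a priority queue of intervals and always refines the one with the largest error bound. This is only an illustrative sketch of the adaptive idea, not the optimally adaptive algorithm of the thesis:

```python
import heapq

def adaptive_integrate(f, a, b, L, eps=1e-3):
    def err(x0, x1):
        # worst-case gap between the trapezoid estimate and the true
        # integral of an L-Lipschitz function with these endpoint values
        return L * (x1 - x0) ** 2 / 4.0
    fa, fb = f(a), f(b)
    heap = [(-err(a, b), a, b, fa, fb)]   # max-heap via negated bounds
    total_err = err(a, b)
    evals = 2
    while total_err > eps:
        neg_e, x0, x1, f0, f1 = heapq.heappop(heap)
        total_err += neg_e                # remove split interval's bound
        m = 0.5 * (x0 + x1)
        fm = f(m)                         # one new black-box evaluation
        evals += 1
        for (u, v, fu, fv) in ((x0, m, f0, fm), (m, x1, fm, f1)):
            be = err(u, v)
            total_err += be
            heapq.heappush(heap, (-be, u, v, fu, fv))
    return sum(0.5 * (f0 + f1) * (x1 - x0)
               for _, x0, x1, f0, f1 in heap), evals

val, n_evals = adaptive_integrate(abs, -1.0, 1.0, L=1.0, eps=1e-3)
```

On f(x) = |x| over [-1, 1] the very first split lands on the kink at 0, after which every trapezoid is exact and only the error bounds need shrinking.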

Real-time analysis of physiological data and development of alarm algorithms for patient monitoring in the Intensive Care Unit

Zhang, Ying, 1976-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 94 p.; 5118908 bytes; 5129794 bytes; application/pdf; application/pdf
Portuguese
Search relevance: 46.08%
The lack of effective data integration and knowledge representation in patient monitoring limits its utility to clinicians. Intelligent alarm algorithms that use artificial intelligence techniques have the potential to reduce false alarm rates and to improve data integration and knowledge representation. Crucial to the development of such algorithms is a well-annotated data set. In previous studies, clinical events were either unavailable or annotated without accurate time synchronization with physiological signals, generating uncertainties during both the development and evaluation of intelligent alarm algorithms. This research aims to help eliminate these uncertainties by designing a system that simultaneously collects physiological data and clinical annotations at the bedside, and to develop alarm algorithms in real time based on patient-specific data collected while using this system. In a standard pediatric intensive care unit, a working prototype of this system has helped collect a dataset of 196 hours of vital sign measurements at 1 Hz with 325 alarms generated by the bedside monitor and 2 instances of false negatives. About 89% of these alarms were clinically relevant true positives; 6% were true positives without clinical relevance; and 5% were false positives. Real-time machine learning showed improved performance over time and generated alarm algorithms that outperformed the previous generation of bedside monitors and came close in performance to the new generation. Results from this research suggest that the alarm algorithm(s) of the new patient monitoring systems have significantly improved sensitivity and specificity. They also demonstrated the feasibility of real-time learning at the bedside. Overall...
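The reported alarm breakdown maps directly onto standard alarm-performance arithmetic. The sketch below recomputes sensitivity and positive predictive value from the approximate counts implied by the abstract (true negatives are not reported, so specificity cannot be recomputed here):

```python
def alarm_metrics(tp, fp, fn):
    """Sensitivity and positive predictive value from alarm counts.
    Specificity would also need true negatives, which are not reported."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# Approximate counts implied by the abstract: of 325 monitor alarms,
# ~89% were clinically relevant true positives, ~6% true positives
# without clinical relevance, ~5% false positives; plus 2 false negatives.
tp = round(0.89 * 325) + round(0.06 * 325)   # all true positives
fp = round(0.05 * 325)
sens, ppv = alarm_metrics(tp, fp, fn=2)
```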

Cooperative diversity in wireless networks : algorithms and architectures

Laneman, J. Nicholas
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 187 p.; 6889171 bytes; 6888980 bytes; application/pdf; application/pdf
Portuguese
Search relevance: 46.08%
To effectively combat multipath fading across multiple protocol layers in wireless networks, this dissertation develops energy-efficient algorithms that employ certain kinds of cooperation among terminals, and illustrates how one might incorporate these algorithms into various network architectures. In these techniques, sets of terminals relay signals for each other to create a virtual antenna array, trading off the costs (in power, bandwidth, and complexity) for the greater benefits gained by exploiting spatial diversity in the channel. By contrast, classical network architectures only employ point-to-point transmission and thus forego these benefits. After summarizing a model for the wireless channel, we present various practical cooperative diversity algorithms based upon different types of relay processing and re-encoding, both with and without limited feedback from the ultimate receivers. Using information theoretic tools, we show that all these algorithms can achieve full spatial diversity, as if each terminal had as many transmit antennas as the entire set of cooperating terminals. Such diversity gains translate into greatly improved robustness to fading for the same transmit power, or substantially reduced transmit power for the same level of performance. For example...

Approximation algorithms for stochastic scheduling problems

Dean, Brian C. (Brian Christopher), 1975-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 113 p.; 8380199 bytes; 8393861 bytes; application/pdf; application/pdf
Portuguese
Search relevance: 46.1%
In this dissertation we study a broad class of stochastic scheduling problems characterized by the presence of hard deadline constraints. The input to such a problem is a set of jobs, each with an associated value, processing time, and deadline. We would like to schedule these jobs on a set of machines over time. In our stochastic setting, the processing time of each job is random, known in advance only as a probability distribution (and we make no assumptions about the structure of this distribution). Only after a job completes do we know its actual "instantiated" processing time with certainty. Each machine can process only a single job at a time, and each job must be assigned to only one machine for processing. After a job starts processing, we require that it be allowed to complete: it cannot be canceled or "preempted" (put on hold and resumed later). Our goal is to devise a scheduling policy that maximizes the expected value of jobs that are scheduled by their deadlines. A scheduling policy observes the state of our machines over time, and any time a machine becomes available for use, it selects a new job to execute on that machine. Scheduling policies can be classified as adaptive or non-adaptive based on whether or not they utilize information learned from the instantiation of processing times of previously-completed jobs in their future scheduling decisions. A novel aspect of our work lies in studying the benefit one can obtain through adaptivity...
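A minimal example of a non-adaptive policy in this setting is the natural greedy rule: fix an order by value per unit of expected processing time before anything is instantiated, then start the next job in that fixed order whenever the machine frees up. This is an illustrative baseline, not the approximation policies analyzed in the dissertation:

```python
def greedy_policy(jobs):
    """Non-adaptive policy: order jobs by value per unit of *expected*
    processing time, chosen before any processing time is instantiated
    (a natural greedy baseline, not the dissertation's policies)."""
    return sorted(jobs, key=lambda j: j["value"] / j["mean_time"], reverse=True)

def run_schedule(order, deadline, sample_time):
    """One machine, no preemption: each started job runs to completion,
    and it earns its value only if it finishes by the deadline."""
    t, gained = 0.0, 0.0
    for job in order:
        t += sample_time(job)           # instantiated processing time
        if t <= deadline:
            gained += job["value"]
    return gained

jobs = [
    {"name": "A", "value": 6.0, "mean_time": 2.0},
    {"name": "B", "value": 5.0, "mean_time": 5.0},
    {"name": "C", "value": 2.0, "mean_time": 1.0},
]
order = greedy_policy(jobs)
# one deterministic "instantiation": every job takes exactly its mean time
gained = run_schedule(order, deadline=3.0, sample_time=lambda j: j["mean_time"])
```

An adaptive policy could instead reorder the remaining jobs after observing how long each completed job actually took; quantifying that advantage is the adaptivity gap the abstract refers to.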

Multiscale Gaussian graphical models and algorithms for large-scale inference

Choi, Myung Jin, S.M. Massachusetts Institute of Technology
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 123 p.
Portuguese
Search relevance: 46.08%
Graphical models provide a powerful framework for stochastic processes by representing dependencies among random variables compactly with graphs. In particular, multiscale tree-structured graphs have attracted much attention for their computational efficiency as well as their ability to capture long-range correlations. However, tree models have limited modeling power that may lead to blocky artifacts. Previous works on extending trees to pyramidal structures resorted to computationally expensive methods to get solutions due to the resulting model complexity. In this thesis, we propose a pyramidal graphical model with rich modeling power for Gaussian processes, and develop efficient inference algorithms to solve large-scale estimation problems. The pyramidal graph has statistical links between pairs of neighboring nodes within each scale as well as between adjacent scales. Although the graph has many cycles, its hierarchical structure enables us to develop a class of fast algorithms in the spirit of multipole methods. The algorithms operate by guiding far-apart nodes to communicate through coarser scales and considering only local interactions at finer scales. The consistent stochastic structure of the pyramidal graph provides great flexibilities in designing and analyzing inference algorithms. Based on emerging techniques for inference on Gaussian graphical models...

Scheduling algorithms for throughput maximization in data networks

Brzezinski, Andrew
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 226 p.
Portuguese
Search relevance: 46.11%
This thesis considers the performance implications of throughput optimal scheduling in physically and computationally constrained data networks. We study optical networks, packet switches, and wireless networks, each of which has an assortment of features and constraints that challenge the design decisions of network architects. In this work, each of these network settings is subsumed under a canonical model and scheduling framework. Tools of queueing analysis are used to evaluate network throughput properties, and demonstrate throughput optimality of scheduling and routing algorithms under stochastic traffic. Techniques of graph theory are used to study network topologies having desirable throughput properties. Combinatorial algorithms are proposed for efficient resource allocation. In the optical network setting, the key enabling technology is wavelength division multiplexing (WDM), which allows each optical fiber link to simultaneously carry a large number of independent data streams at high rate. To take advantage of this high data processing potential, engineers and physicists have developed numerous technologies, including wavelength converters, optical switches, and tunable transceivers.; (cont.) While the functionality provided by these devices is of great importance in capitalizing upon the WDM resources...
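A canonical example of a throughput-optimal schedule of the kind studied here is the max-weight matching rule for an N x N packet switch: in each slot, serve the input-output matching with the largest total backlog. The brute-force sketch below is illustrative only; the thesis's algorithms and settings are more general:

```python
from itertools import permutations

def max_weight_matching(Q):
    """Pick the input-output matching with the largest total backlog.
    Q[i][j] is the number of queued packets at input i for output j.
    Max-weight matching is the classic throughput-optimal schedule for
    a crossbar switch (illustrative sketch, exhaustive over matchings)."""
    n = len(Q)
    best, best_w = None, -1
    for perm in permutations(range(n)):      # perm[i] = output for input i
        w = sum(Q[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w

Q = [[3, 1],
     [0, 5]]
match, weight = max_weight_matching(Q)
```

For this 2x2 switch the schedule serves queues (0,0) and (1,1), with total weight 8; practical switches replace the exhaustive search with faster (often approximate) matching algorithms.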

Analysis and implementation of distributed algorithms for multi-robot systems

McLurkin, James D. (James Dwight), 1972-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 166 p.
Portuguese
Search relevance: 46.12%
Distributed algorithms for multi-robot systems rely on network communications to share information. However, the motion of the robots changes the network topology, which affects the information presented to the algorithm. For an algorithm to produce accurate output, robots need to communicate rapidly enough to keep the network topology correlated to their physical configuration. Infrequent communications will cause most multi-robot distributed algorithms to produce less accurate results, and cause some algorithms to stop working altogether. The central theme of this work is that algorithm accuracy, communications bandwidth, and physical robot speed are related. This thesis has three main contributions: First, I develop a prototypical multi-robot application and computational model, propose a set of complexity metrics to evaluate distributed algorithm performance on multi-robot systems, and introduce the idea of the robot speed ratio, a dimensionless measure of robot speed relative to message speed in networks that rely on multi-hop communication. The robot speed ratio captures key relationships between communications bandwidth, mobility, and algorithm accuracy, and can be used at design time to trade off between them. I use this speed ratio to evaluate the performance of existing distributed algorithms for multi-hop communication and navigation. Second...

Approximation algorithms for stochastic scheduling on unrelated machines

Scott, Jacob (Jacob Healy)
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 67 p.
Portuguese
Search relevance: 46.08%
Motivated by problems in distributed computing, this thesis presents the first nontrivial polynomial time approximation algorithms for an important class of machine scheduling problems. We study the family of preemptive minimum makespan scheduling problems where jobs have stochastic processing requirements and provide the first approximation algorithms for these problems when machines have unrelated speeds. We show a series of algorithms that apply given increasingly general classes of precedence constraints on jobs. Letting n and m be, respectively, the number of jobs and machines in an instance, when jobs need an exponentially distributed amount of processing, we give: -- An O(log log min{m, n})-approximation algorithm when jobs are independent; -- An O(log(n + m) log log min{m, n})-approximation algorithm when precedence constraints form disjoint chains; and, -- An O(log n log(n + m) log log min{m, n})-approximation algorithm when precedence constraints form a directed forest. Very simple modifications allow our algorithms to apply to more general distributions, at the cost of slightly worse approximation ratios. Our O(log log n)-approximation algorithm for independent jobs holds when we allow restarting instead of preemption. Here jobs may switch machines...

Estimation and calibration algorithms for distributed sampling systems

Divi, Vijay, 1980-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 157 p.
Portuguese
Search relevance: 46.08%
Traditionally, the sampling of a signal is performed using a single component such as an analog-to-digital converter. However, many new technologies are motivating the use of multiple sampling components to capture a signal. In some cases, such as sensor networks, multiple components are naturally found in the physical layout; in other cases, like time-interleaved analog-to-digital converters, additional components are added to increase the sampling rate. Although distributing the sampling load across multiple channels can provide large benefits in terms of speed, power, and resolution, a variety of mismatch errors arise that require calibration in order to prevent a degradation in system performance. In this thesis, we develop low-complexity, blind algorithms for the calibration of distributed sampling systems. In particular, we focus on recovery from timing skews that cause deviations from uniform timing. Methods for bandlimited input reconstruction from nonuniform recurrent samples are presented for both the small-mismatch and the low-SNR domains. Alternate iterative reconstruction methods are developed to give insight into the geometry of the problem. From these reconstruction methods, we develop time-skew estimation algorithms that have high performance and low complexity even for large numbers of components. We also extend these algorithms to compensate for gain mismatch between sampling components. To understand the feasibility of implementation...

Properties and algorithms of the (n, k)-star graphs

He, Liang.
Source: Brock University Publisher: Brock University
Type: Electronic Thesis or Dissertation
Portuguese
Search relevance: 56.06%
The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view. Specifically, we investigate both fundamental and application algorithms for basic communication, prefix computation, and sorting, etc. A literature review of the state-of-the-art in relation to the (n, k)-star network as well as some open problems in this area are also provided.
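The basic topology is easy to make concrete. In the (n, k)-star, vertices are the k-permutations of {1, ..., n}; each vertex has k - 1 "swap" neighbours and n - k "replace" neighbours, so the graph is regular of degree n - 1. A small sketch that builds the graph and checks these counts (this is the standard definition from the literature; the code and names are mine):

```python
from itertools import permutations

def nk_star(n, k):
    """Build the (n,k)-star graph.  Vertices are k-permutations of
    {1..n}.  A vertex is adjacent to: (i) the k-1 vertices obtained by
    swapping its first symbol with the symbol in position i (2 <= i <= k),
    and (ii) the n-k vertices obtained by replacing its first symbol
    with a symbol not present in the permutation."""
    verts = list(permutations(range(1, n + 1), k))
    symbols = set(range(1, n + 1))
    adj = {v: set() for v in verts}
    for v in verts:
        for i in range(1, k):                  # swap edges
            u = list(v)
            u[0], u[i] = u[i], u[0]
            adj[v].add(tuple(u))
        for s in symbols - set(v):             # replace edges
            adj[v].add((s,) + v[1:])
    return adj

adj = nk_star(4, 2)
num_vertices = len(adj)                 # n!/(n-k)! = 12 for (4,2)
degrees = {len(nbrs) for nbrs in adj.values()}   # regular: {n - 1}
```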

Decoding algorithms using side-effect machines

Brown, Joseph Alexander
Source: Brock University Publisher: Brock University
Type: Electronic Thesis or Dissertation
Portuguese
Search relevance: 55.98%
Bioinformatics applies computers to problems in molecular biology. Previous research has not addressed edit metric decoders. Decoders for quaternary edit metric codes are finding use in bioinformatics problems with applications to DNA. By using side effect machines we hope to be able to provide efficient decoding algorithms for this open problem. Two ideas for decoding algorithms are presented and examined. Both decoders use Side Effect Machines (SEMs), which are generalizations of finite state automata. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structure of subclassification. The goal is to examine these techniques and provide new decoders for existing codes. Presented are ideas for best practices for the creation of these two types of new edit metric decoders.
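The core mechanism is simple to illustrate: a side effect machine is a finite state machine that, as a side effect of consuming a word, counts how many times each state is visited, and the count vector then serves as a feature vector for classification. The toy two-state machine below over the DNA alphabet is hypothetical, purely to show the mechanics:

```python
def run_sem(transitions, start, word):
    """Run a side-effect machine: a finite state machine whose 'side
    effect' is a count of visits to each state.  The visit counts form
    a feature vector for classifying the word (illustrative sketch)."""
    counts = [0] * len(transitions)
    state = start
    counts[state] += 1
    for ch in word:
        state = transitions[state][ch]
        counts[state] += 1
    return counts

# A hypothetical 2-state machine over the DNA alphabet: state 1 is
# entered on purines (A/G), state 0 on pyrimidines (C/T).
trans = [
    {"A": 1, "G": 1, "C": 0, "T": 0},
    {"A": 1, "G": 1, "C": 0, "T": 0},
]
features = run_sem(trans, start=0, word="ACGT")
```

Here "ACGT" alternates purine/pyrimidine, so the visit counts come out [3, 2]; in the decoders described above, such count vectors from evolved machines feed the classification step.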

Properties and algorithms of the (n, k)-arrangement graphs

Li, Yifeng
Source: Brock University Publisher: Brock University
Type: Electronic Thesis or Dissertation
Portuguese
Search relevance: 56.03%
The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph to date. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-arrangement graph and developing parallel algorithms that run on this network. The topological properties of the arrangement graph are first studied, including its cyclic properties. We then study the problems of communication: broadcasting and routing. Embedding problems are also studied later on. These are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from the algorithmic point of view. Specifically, we investigate both fundamental and application algorithms such as prefix sums computation, sorting, merging, and basic geometric computation (finding the convex hull) on the (n, k)-arrangement graph. A literature review of the state-of-the-art in relation to the (n, k)-arrangement network is also provided, as well as some open problems in this area.
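As with the star graph family, the definition is easy to make concrete: vertices of the (n, k)-arrangement graph are the k-permutations of {1, ..., n}, and two vertices are adjacent exactly when they differ in a single position, giving a regular graph of degree k(n - k). A small sketch (standard definition; code and names mine):

```python
from itertools import permutations

def arrangement_graph(n, k):
    """Build the (n,k)-arrangement graph A(n,k): vertices are the
    k-permutations of {1..n}; two vertices are adjacent iff they differ
    in exactly one position.  A(n,k) is regular of degree k*(n-k)."""
    verts = list(permutations(range(1, n + 1), k))
    symbols = set(range(1, n + 1))
    adj = {v: set() for v in verts}
    for v in verts:
        used = set(v)
        for i in range(k):
            for s in symbols - used:       # replace position i's symbol
                adj[v].add(v[:i] + (s,) + v[i + 1:])
    return adj

adj = arrangement_graph(4, 2)
num_vertices = len(adj)                 # n!/(n-k)! = 12 for (4,2)
degrees = {len(nbrs) for nbrs in adj.values()}   # regular: {k*(n-k)}
```

Comparing with the (n, k)-star above: same vertex set, but a denser, position-symmetric edge set, which is what gives the arrangement graph its flexibility in degree and diameter.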

Properties and algorithms of the hyper-star graph and its related graphs

Zhang, Fan.
Source: Brock University Publisher: Brock University
Type: Electronic Thesis or Dissertation
Portuguese
Search relevance: 56.06%
The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning the network cost, which is defined by the product of the degree and the diameter. Some properties of the graph, such as connectivity, symmetry, and embedding properties, have been studied by other researchers; routing and broadcasting algorithms have also been designed. This thesis studies the hyper-star graph from both the topological and algorithmic points of view. For the topological properties, we try to establish relationships between hyper-star graphs and other known graphs. We also give a formal equation for the surface area of the graph. Another topological property we are interested in is the Hamiltonicity problem of this graph. For the algorithms, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs. These algorithms are both optimal time-wise. Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is maximally fault-tolerant.

Beneath the surface electrocardiogram: computer algorithms for the non-invasive assessment of cardiac electrophysiology

Torbey, Sami
Source: Queen's University Publisher: Queen's University
Type: Doctoral Thesis
Portuguese
Search relevance: 56.01%
The surface electrocardiogram (ECG) is a periodic signal portraying the electrical activity of the heart from the torso. The past fifty years have witnessed a proliferation of computer algorithms for ECG analysis. Signal averaging is a noise reduction technique believed to enable the surface ECG to act as a non-invasive surrogate for cardiac electrophysiology. The P wave and the QRS complex of the ECG respectively depict atrial and ventricular depolarization. QRS detection is a pre-requisite to P wave and QRS averaging. A novel algorithm for robust QRS detection in mice achieves a four-fold reduction in false detections compared to leading commercial software, while its human version boasts an error rate of just 0.29% on a public database containing ECGs with varying morphologies and degrees of noise. A fully automated P wave and QRS averaging and onset/offset detection algorithm is also proposed. This approach is shown to predict atrial fibrillation, a common cardiac arrhythmia which could cause stroke or heart failure, from normal asymptomatic ECGs, with 93% sensitivity and 100% specificity. Automated signal averaging also proves to be slightly more reproducible in consecutive recordings than manual signal averaging performed by expert users. Several studies postulated that high-frequency energy content in the signal-averaged QRS may be a marker of sudden cardiac death. Traditional frequency spectrum analysis techniques have failed to consistently validate this hypothesis. Layered Symbolic Decomposition (LSD)...
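The QRS-detection step that the averaging pipeline depends on can be illustrated with a deliberately simplified energy-threshold detector (differencing, squaring, thresholding with a refractory period). This is a toy in the spirit of classic detectors, not the thesis's algorithm:

```python
def detect_qrs(sig, fs, threshold=0.5, refractory=0.2):
    """Very simplified QRS detector in the spirit of energy-threshold
    methods: differentiate, square, then pick samples above a fraction
    of the peak energy, enforcing a refractory period between beats."""
    energy = [(sig[i + 1] - sig[i]) ** 2 for i in range(len(sig) - 1)]
    peak = max(energy)
    dead = int(refractory * fs)          # refractory period in samples
    beats, last = [], -dead
    for i, e in enumerate(energy):
        if e >= threshold * peak and i - last >= dead:
            beats.append(i)
            last = i
    return beats

fs = 100                      # Hz (hypothetical sampling rate)
sig = [0.0] * 300
for b in (50, 150, 250):      # three synthetic "R peaks"
    sig[b] = 1.0
beats = detect_qrs(sig, fs)
```

On this synthetic trace the detector finds exactly three beats; real ECG requires the bandpass filtering, adaptive thresholds, and noise handling that make robust detection a research problem in its own right.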

Actor-critic algorithms

Konda, Vijaymohan (Vijaymohan Gao), 1973-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 147 leaves; 11090533 bytes; 11090292 bytes; application/pdf; application/pdf
Portuguese
Search relevance: 46.15%
Many complex decision making problems like scheduling in manufacturing systems, portfolio management in finance, admission control in communication networks etc., with clear and precise objectives, can be formulated as stochastic dynamic programming problems in which the objective of decision making is to maximize a single "overall" reward. In these formulations, finding an optimal decision policy involves computing a certain "value function" which assigns to each state the optimal reward one would obtain if the system was started from that state. This function then naturally prescribes the optimal policy, which is to take decisions that drive the system to states with maximum value. For many practical problems, the computation of the exact value function is intractable, analytically and numerically, due to the enormous size of the state space. Therefore one has to resort to one of the following approximation methods to find a good sub-optimal policy: (1) Approximate the value function. (2) Restrict the search for a good policy to a smaller family of policies. In this thesis, we propose and study actor-critic algorithms which combine the above two approaches with simulation to find the best policy among a parameterized class of policies. Actor-critic algorithms have two learning units: an actor and a critic. An actor is a decision maker with a tunable parameter. A critic is a function approximator. The critic tries to approximate the value function of the policy used by the actor...
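The actor/critic split can be shown on the smallest possible example: a single state with two actions. The critic tracks the state's value with a TD-style update, and the actor nudges softmax action preferences along the TD error. This toy (names and constants are mine) only illustrates the two interacting learning units, not the thesis's algorithms or analysis:

```python
import math
import random

def actor_critic_bandit(rewards, steps=5000, alpha=0.1, beta=0.05, seed=0):
    """Minimal actor-critic on a one-state, two-action problem.
    The critic learns the state value v by a TD-style update; the actor
    holds softmax preferences theta moved along the TD error."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]          # actor: action preferences
    v = 0.0                     # critic: value of the single state
    for _ in range(steps):
        m = max(theta)          # softmax policy (stabilized)
        exps = [math.exp(t - m) for t in theta]
        z = sum(exps)
        probs = [e / z for e in exps]
        a = 0 if rng.random() < probs[0] else 1
        r = rewards[a]
        td = r - v              # TD error (single state, no bootstrap)
        v += alpha * td         # critic update
        for i in range(2):      # actor update: policy-gradient step
            grad = (1.0 if i == a else 0.0) - probs[i]
            theta[i] += beta * td * grad
    return probs, v

probs, v = actor_critic_bandit(rewards=[0.0, 1.0])
```

With deterministic rewards favouring action 1, the actor's policy shifts toward action 1 while the critic's value estimate rises toward that action's reward.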

Applications of topology in computer algorithms

Telgarsky, Rastislav
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published 19/01/2012 Portuguese
Search relevance: 56.02%
The aim of this paper is to discuss some applications of general topology in computer algorithms including modeling and simulation, and also in computer graphics and image processing. While the progress in these areas heavily depends on advances in computing hardware, the major intellectual achievements are the algorithms. The applications of general topology in other branches of mathematics are not discussed, since they are not applications of mathematics outside of mathematics.; Comment: This paper is based on the invited lecture at International Conference on Topology and Applications held in August 23--27, 1999, at Kanagawa University in Yokohama, Japan

GPU acceleration of object classification algorithms using NVIDIA CUDA

Harvey, Jesse Patrick
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search relevance: 46.07%
The field of computer vision has become an important part of today's society, supporting crucial applications in the medical, manufacturing, military intelligence and surveillance domains. Many computer vision tasks can be divided into fundamental steps: image acquisition, pre-processing, feature extraction, detection or segmentation, and high-level processing. This work focuses on classification and object detection, specifically k-Nearest Neighbors, Support Vector Machine classification, and Viola & Jones object detection. Object detection and classification algorithms are computationally intensive, which makes it difficult to perform classification tasks in real-time. This thesis aims at overcoming the processing limitations of the above classification algorithms by offloading computation to the graphics processing unit (GPU) using NVIDIA's Compute Unified Device Architecture (CUDA). The primary focus of this work is the implementation of the Viola and Jones object detector in CUDA. A multi-GPU implementation provides a speedup ranging from 1x to 6.5x over optimized OpenCV code for image sizes of 300 x 300 pixels up to 2900 x 1600 pixels while having comparable detection results. The second part of this thesis is the implementation of a multi-GPU multi-class SVM classifier. The classifier had the same accuracy as an identical implementation using LIBSVM with a speedup ranging from 89x to 263x on the tested datasets. The final part of this thesis was the extension of a previous CUDA k-Nearest Neighbor implementation by exploiting additional levels of parallelism. These extensions provided a speedup of 1.24x and 2.35x over the previous CUDA implementation. As an end result of this work...
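Of the three algorithms, k-Nearest Neighbors is the easiest to strip down to the computation that GPU offloading targets: a large batch of independent distance evaluations followed by a small vote. A plain CPU reference sketch (not the CUDA implementation from the thesis):

```python
def knn_classify(train_X, train_y, x, k=3):
    """Reference (CPU) k-Nearest Neighbors.  The per-point distance
    computations are independent of one another -- this is the
    data-parallel part that CUDA implementations offload to the GPU."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = {}
    for _, label in dists[:k]:          # majority vote among k nearest
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# tiny hypothetical 2-class dataset
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
train_y = ["a", "a", "b", "b", "b"]
pred = knn_classify(train_X, train_y, (1.0, 0.95), k=3)
```

On a GPU, each thread would compute one (or a tile of) distances, with the sort or selection done by a parallel reduction; the classification logic itself is unchanged.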

Sparse signal processing for machine learning and computer vision

Zhou, Yin
Source: University of Delaware Publisher: University of Delaware
Type: Doctoral Thesis
Portuguese
Search relevance: 46.21%
Barner, Kenneth E.; Sparse signal representation solves inverse problems to find succinct expressions of data samples as linear combinations of a few atoms in a dictionary or codebook. This model has proven effective in image restoration, denoising, inpainting, compression, pattern classification and automatic unsupervised feature learning. Many classical sparse coding algorithms have exorbitant computational complexity in solving for the sparse solution, which hinders their applicability to real-world large-scale machine learning and computer vision problems. In this dissertation, we first present a family of locality-constrained dictionary learning algorithms, which can be seen as a special case of sparse coding. Compared to classical sparse coding, locality-constrained coding has a closed-form solution and is much more computationally efficient. In addition, the locality-preserving property enables the newly proposed algorithms to better exploit the geometric structure of the data manifold. Experimental results demonstrate that our algorithms are capable of achieving superior classification performance with substantially higher efficiency, compared to sparse-coding-based dictionary algorithms. Sparse coding is an effective building block for learning visual features. A good feature representation is critical for machine learning algorithms to achieve satisfactory results. In recent years...
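The closed-form character of locality-constrained coding can be illustrated with the well-known LLC-style approximation: restrict the code to the k atoms nearest the sample, solve the small local system C w = 1 on the regularized local covariance, and normalize so the coefficients sum to one. This sketch follows that published approximation; the thesis's own algorithms may differ in detail:

```python
def solve(A, b):
    """Tiny Gauss-Jordan elimination for the small k x k local system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def llc_code(x, dictionary, k=2, reg=1e-6):
    """Locality-constrained coding sketch: keep the k atoms nearest x,
    solve C w = 1 on the regularized local covariance C, and normalize
    so the coefficients sum to one (LLC-style closed form)."""
    d = len(x)
    dist = lambda atom: sum((xi - ai) ** 2 for xi, ai in zip(x, atom))
    idx = sorted(range(len(dictionary)), key=lambda j: dist(dictionary[j]))[:k]
    # rows of Z are (atom - x); C = Z Z^T + reg * I
    Z = [[dictionary[j][t] - x[t] for t in range(d)] for j in idx]
    C = [[sum(Z[a][t] * Z[b][t] for t in range(d)) + (reg if a == b else 0.0)
          for b in range(k)] for a in range(k)]
    w = solve(C, [1.0] * k)
    s = sum(w)
    code = [0.0] * len(dictionary)
    for j, wj in zip(idx, w):
        code[j] = wj / s
    return code

dictionary = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
code = llc_code((0.5, 0.0), dictionary, k=2)
```

The sample (0.5, 0) sits midway between the first two atoms, so the code assigns them weight 0.5 each and leaves distant atoms at zero, which is exactly the locality-preserving behaviour described above.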