by Ron Yair Pinter.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaf 45.
by Norman August Lehtomaki.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Includes bibliographical references.
by John Nikolaos Tsitsiklis.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaves 120-124.
by Marcel Coderch i Collell.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaves 328-332.
by Bruce R. Musicus.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Vita.; Includes bibliographical references.
Current methods for bipedal walking control include playback of recorded joint motion and the derivation of dynamic equations to map desired forces at the body to the required torques at the joints. Both methods require a significant amount of up-front knowledge about the structure and characteristics of the robot. This thesis presents an alternative method of control that removes the interdependence of the joint torques and simplifies the mathematics considerably. The simplification allows a programmer to create and tune a bipedal walk controller without requiring a complete model of the dynamics. The controller is implemented in a graphical programming language, similar in spirit to fuzzy logic and neural networks, in which the algorithm is contained in the structure of the nodes rather than in the weights of the connections. The language and its development environment are specifically designed to assist the programmer in creating and debugging the algorithm in a live environment.; by Michael Alan Wessler.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.; Includes bibliographical references (p. 123-126).
In this paper, we address two longstanding questions about finding good separators in graphs of bounded genus and degree: 1. It is a classical result of Gilbert, Hutchinson, and Tarjan that one can find asymptotically optimal separators on these graphs if one is given both the graph and an embedding of it onto a low-genus surface. Does there exist a simple, efficient algorithm to find these separators given only the graph and not the embedding? 2. In practice, spectral partitioning heuristics work extremely well on these graphs. Is there a theoretical reason why this should be the case? We resolve these two questions by showing that a simple spectral algorithm finds separators of cut ratio O(sqrt(g/n)) and vertex bisectors of size O(sqrt(gn)) in these graphs, both of which are optimal. As our main technical lemma, we prove an O(g/n) bound on the second smallest eigenvalue of the Laplacian of such graphs and show that this is tight, thereby resolving a conjecture of Spielman and Teng. While this lemma is essentially combinatorial in nature, its proof comes from continuous mathematics, drawing on the theory of circle packings and the geometry of compact Riemann surfaces.; by Jonathan Kelner.; Thesis (S.M.)--Massachusetts Institute of Technology...
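The "simple spectral algorithm" referred to above is, in its basic textbook form, a split by the Fiedler vector of the graph Laplacian. The following is a minimal sketch of that generic technique (not the thesis's exact algorithm or analysis), using a dense adjacency matrix:

```python
# Minimal sketch of spectral partitioning: split a graph at the median of
# the Fiedler vector, i.e. the eigenvector of the graph Laplacian L = D - A
# belonging to its second-smallest eigenvalue.
import numpy as np

def fiedler_partition(adj):
    """Partition vertices of an undirected graph, given as a dense 0/1
    adjacency matrix, by thresholding the Fiedler vector at its median."""
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    # eigh returns eigenvalues of a symmetric matrix in ascending order,
    # so column 1 of the eigenvector matrix is the Fiedler vector.
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    median = np.median(fiedler)
    side_a = [i for i in range(len(adj)) if fiedler[i] <= median]
    side_b = [i for i in range(len(adj)) if fiedler[i] > median]
    return side_a, side_b
```

On a graph made of two dense clusters joined by a few edges, the sign pattern of the Fiedler vector tends to recover the two clusters, which is the behavior the thesis explains for bounded-genus graphs.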
Infectious disease models predict the impact of outbreaks. Discrepancies between model predictions stem from both the disease parameters used and the underlying mathematics of the models. Smallpox has been modeled extensively in recent years to determine successful response guidelines for a future outbreak. Five models, which range in fidelity, were created for this thesis in an attempt to reveal the differences inherent in the mathematical techniques used in the models. The disease parameters were standardized across all models. Predictions for various outbreak scenarios are given, and the strengths and weaknesses of each modeling technique are discussed. The mixing strategy used greatly affects the predictions of the models. The results gathered indicate that mass vaccination should be considered as a primary response technique in the event of a future smallpox outbreak.; by Cory Y. McLean.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (p. 119-121).
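The simplest of the modeling techniques compared in such studies is a deterministic compartment model. As an illustration only (the parameter values and the SIR structure here are generic assumptions, not the thesis's five smallpox models), a homogeneous-mixing SIR model can be integrated with Euler steps:

```python
# Minimal sketch of a deterministic SIR compartment model with homogeneous
# mixing, integrated by forward Euler. Fractions of the population:
# dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
def sir(s, i, r, beta, gamma, dt, steps):
    """Return (S, I, R) after integrating for steps * dt time units."""
    for _ in range(steps):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
    return s, i, r
```

Higher-fidelity models replace the homogeneous-mixing assumption with structured or stochastic mixing, which, as the abstract notes, is exactly where the models' predictions diverge.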
In this thesis, we advance a collection of new geometric techniques for the analysis of combinatorial algorithms. Using these techniques, we resolve several longstanding questions in the theory of linear programming, polytope theory, spectral graph theory, and graph partitioning. The thesis consists of two main parts. In the first part, which is joint work with Daniel Spielman, we present the first randomized polynomial-time simplex algorithm for linear programming, answering a question that has been open for over fifty years. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. To do this, we begin by reducing the input linear program to a special form in which we merely need to certify boundedness of the linear program. As boundedness does not depend upon the right-hand-side vector, we run a modified version of the shadow-vertex simplex method in which we start with a random right-hand-side vector and then modify this vector during the course of the algorithm. This allows us to avoid bounding the diameter of the original polytope.; (cont.) Our analysis rests on a geometric statement of independent interest: given a polytope ... in isotropic position...
Proofs by induction are central to many areas of computer science, such as data structures, theory of computation, programming languages, time complexity of programs, and program correctness. Proofs by induction can also improve students' understanding of computer science concepts such as programming languages, algorithm design, and recursion, as well as serve as a medium for teaching them. Even though students are exposed to proofs by induction in many courses of their curricula, they still have difficulties understanding and performing them. This impacts the whole course of their studies, since proofs by induction are omnipresent in computer science. Specifically, students do not gain a conceptual understanding of induction early in the curriculum and, as a result, have difficulties applying it to more advanced areas later in their studies. The goal of my dissertation is twofold: (1) identifying sources of computer science students' difficulties with proofs by induction, and (2) developing a new approach to teaching proofs by induction by way of an interactive and multimodal electronic book (e-book). For the first goal, I undertook a study to identify possible sources of computer science students' difficulties with proofs by induction. Its results suggest that there is a close correlation between students' understanding of inductive definitions and their understanding and performance of proofs by induction. For designing and developing my e-book...
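The principle studied in the dissertation is the standard induction schema over the natural numbers, which in its textbook first-order form reads:

```latex
% Proof by induction: to show P(n) for every natural number n, prove the
% base case P(0) and the inductive step P(k) -> P(k+1) for arbitrary k.
\[
  \Bigl( P(0) \;\land\; \forall k \in \mathbb{N}\,\bigl( P(k) \rightarrow P(k+1) \bigr) \Bigr)
  \;\rightarrow\; \forall n \in \mathbb{N}\; P(n)
\]
```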
Process Algebra forms a cornerstone in the formal methods area of Computer Science. Among the more widely used approaches is Milner's Calculus of Communicating Systems (CCS). Recently CCS has been extended by Schmidt and Bibighaus through the introduction of Doubly Labeled Transition Systems. This framework has enhanced the model's ability to capture security and availability properties. In this thesis we reformulate, simplify, and extend Bibighaus' work using a graph theoretic framework. The intent is that this abstract mathematical view will make the results more accessible and stimulate additional research. Existing definitions and theorems are redefined and proved using Labeled and Doubly Labeled Transition Graphs (LTG and DLTG). CCS simulation concepts are recast as graph morphisms, and the notions of abstraction and refinement are explained through the use of graphs. Bibighaus' work is extended by showing how to carry out non-atomic DLTG refinement, and by developing a form of graph composition involving graph refinements that share a common abstract graph. This type of composition is proven to always be possible with DLTG refinements, and we demonstrate that the composite graph is both a refinement of the abstract graph and an abstract graph for the refinements from which it was made.; US Navy (USN) author.
We present a simplified proof of Gödel's theorem by appealing to well-known programming concepts. The significance of Gödel's result to computer science, mathematics, and logic is discussed.; Prepared for: Chief of Naval Research, Arlington, Virginia 22217.; http://archive.org/details/computersciencev00macl; N0001482WR20043
by Art O'Leary.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaf 85.
Using concepts from computer science and mathematics, I develop three algorithms to find the minimum integer weights for voting games. Games with as many as 17 players can be solved in a reasonable amount of time. First, coalitions are mapped to constraints, reducing the problem to constraint optimization. The optimization techniques used are Gomory's all-integer simplex algorithm and a variant of the popular integer programming method branch and bound. Theoretical results include a demonstration that minimum integer weights are not unique and a confirmation of the prior result that minimum integer weights are proportional to a priori seat share. Thus, these algorithms can be useful for researchers evaluating the differences between proportional bargaining models and formateur models. The running times of the different algorithms are contrasted and analyzed for potential improvements.; by Aaron B. Strauss.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.; Includes bibliographical references (p. 73-76).
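For very small games, the problem the thesis attacks with integer programming can be illustrated by exhaustive search. The sketch below (a brute-force stand-in, not the thesis's simplex or branch-and-bound algorithms) looks for integer weights and a quota of minimum total weight that reproduce a given set of winning coalitions:

```python
# Brute-force sketch: find minimum-total integer weights and a quota that
# realize a given simple voting game on n players. The set `winning` holds
# the winning coalitions as frozensets of player indices.
from itertools import combinations, product

def realizes(weights, quota, n, winning):
    """True if a coalition wins exactly when its weight meets the quota."""
    coalitions = [frozenset(c) for r in range(n + 1)
                  for c in combinations(range(n), r)]
    return all((sum(weights[i] for i in c) >= quota) == (c in winning)
               for c in coalitions)

def min_integer_weights(n, winning, max_weight=5):
    """Exhaustively search weight vectors (entries 1..max_weight) and
    quotas, keeping the realization with smallest total weight."""
    best = None
    for weights in product(range(1, max_weight + 1), repeat=n):
        for quota in range(1, sum(weights) + 1):
            if realizes(weights, quota, n, winning):
                if best is None or sum(weights) < sum(best[0]):
                    best = (weights, quota)
    return best
```

This enumeration is exponential in the number of players, which is exactly why the constraint-optimization formulation described in the abstract is needed to reach 17-player games.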
Privacy laws are an important facet of our society. But they can also serve as formidable barriers to medical research. The same laws that prevent casual disclosure of medical data have also made it difficult for researchers to access the information they need to conduct research into the causes of disease. But it is possible to overcome some of these legal barriers through technology. The US law known as HIPAA, for example, allows medical records to be released to researchers without patient consent if the records are provably anonymized prior to their disclosure. It is not enough for records to be seemingly anonymous. For example, one researcher estimates that 87.1% of the US population can be uniquely identified by the combination of their ZIP code, gender, and date of birth - fields that most people would consider anonymous. One promising technique for provably anonymizing records is called k-anonymity. It modifies each record so that it matches k other individuals in a population - where k is an arbitrary parameter. This is achieved by, for example, changing specific information, such as a date of birth, to a less specific counterpart, such as a year of birth.; (cont.) Previous studies have shown that achieving k-anonymity while minimizing information loss is an NP-hard problem; thus a brute force search is out of the question for most real world data sets. In this thesis...
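The generalization step described above can be sketched in a few lines. This is a toy illustration of the idea, not the thesis's algorithm; the specific coarsening rules (truncating one ZIP digit, keeping only the birth year) are assumptions chosen for the example:

```python
# Toy sketch of generalization for k-anonymity: coarsen the
# quasi-identifiers (zip, gender, date of birth) until every record is
# identical to at least k-1 others.
from collections import Counter

def generalize(record):
    """One generalization step: drop the last ZIP digit and reduce the
    date of birth to its year, e.g. '1990-07-14' -> '1990'."""
    zip_code, gender, dob = record
    return (zip_code[:-1], gender, dob[:4])

def is_k_anonymous(records, k):
    """True if every record's quasi-identifier tuple occurs >= k times."""
    counts = Counter(records)
    return all(counts[r] >= k for r in records)
```

Each generalization step destroys information, which is why the optimization problem the abstract goes on to discuss (k-anonymity with minimal information loss) is the hard part.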
Recently, additive combinatorics has blossomed into a vibrant area of the
mathematical sciences. It is, however, a difficult area to define, perhaps
because it blends ideas and techniques from several seemingly unrelated
contexts. One might say that additive combinatorics is a branch of
mathematics concerning the study of combinatorial properties of algebraic
objects, for instance, Abelian groups, rings, or fields. This emerging field
has seen tremendous advances over the last few years and has recently become
a focus of attention among both mathematicians and computer scientists. It
has been enriched by its strong links to combinatorics, number theory,
harmonic analysis, and ergodic theory, among other branches; all of these
deeply cross-fertilize one another, holding great promise for them all. In
this exposition, we attempt to provide an overview of some breakthroughs in
this field, together with a number of seminal applications to sundry parts
of mathematics and some other disciplines, with emphasis on computer science
and cryptography.; Comment: 37 pages. In Proceedings of the International Number Theory
Conference in Memory of Alf van der Poorten, to appear
The strict globular $\omega$-categories formalize the execution paths of a
parallel automaton and the homotopies between them. To any such
$\omega$-category $\mathcal{C}$ one associates three homology theories. The
first is called the globular homology; it contains the oriented loops of
$\mathcal{C}$. The other two are called the negative (resp. positive) corner
homology. They capture, in a certain sense, the branching areas of execution
paths, or negative corners (resp. the merging areas of execution paths, or
positive corners), of $\mathcal{C}$. Two natural linear maps, called the
negative (resp. positive) Hurewicz morphisms, from the globular homology to
the negative (resp. positive) corner homology are constructed. We explain
why these constructions make it possible to reinterpret some geometric
problems coming from computer science.; Comment: 54 pages, 1 eps figure, LaTeX2e ; v2 construction of negative and
positive Hurewicz morphisms added, corrected reference ; v3 reorganized paper
for a better understanding ; v4 final version to appear in Mathematical
Structures in Computer Science ; v5 last minute correction (very minor
We study a fragmentation problem where an initial object of size x is broken
into m random pieces provided x>x_0 where x_0 is an atomic cut-off.
Subsequently the fragmentation process continues for each of those daughter
pieces whose sizes are bigger than x_0. The process stops when all the
fragments have sizes smaller than x_0. We show that the fluctuation of the
total number of splitting events, characterized by the variance, generically
undergoes a nontrivial phase transition as one tunes the branching number m
through a critical value m=m_c. For m<m_c the fluctuations are Gaussian,
while for m>m_c they are anomalously large and non-Gaussian. We apply this
general result to analyze two different search algorithms in computer
science.; Comment: 5 pages RevTeX, 3 figures (.eps)
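The process described above is easy to simulate. The sketch below is a minimal Monte Carlo version in which the m daughter sizes come from uniform random cut points (an assumption made for the example; the abstract does not specify the daughter-size distribution), counting the total number of splitting events:

```python
# Monte Carlo sketch of the fragmentation process: a piece of size x > x0
# is broken into m fragments at m-1 uniform random cut points; pieces of
# size <= x0 are atomic and stop fragmenting.
import random

def split_sizes(x, m, rng):
    """Break a piece of size x into m sub-pieces at uniform cut points."""
    cuts = sorted(rng.random() for _ in range(m - 1))
    points = [0.0] + cuts + [1.0]
    return [x * (points[i + 1] - points[i]) for i in range(m)]

def count_splits(x, m, x0, rng):
    """Total number of splitting events until all fragments are <= x0."""
    if x <= x0:
        return 0
    return 1 + sum(count_splits(piece, m, x0, rng)
                   for piece in split_sizes(x, m, rng))
```

Repeating `count_splits` over many runs and measuring the variance of the totals is the kind of experiment that would exhibit the phase transition in the branching number m discussed in the abstract.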
Scientific literature has itself been the subject of much scientific study,
for a variety of reasons: understanding how results are communicated, how ideas
spread, and assessing the influence of areas or individuals. However, most
prior work has focused on extracting and analyzing citation and stylistic
patterns. In this work, we introduce the notion of 'scienceography', which
focuses on the writing of science. We provide a first large scale study using
data derived from the arXiv e-print repository. Crucially, our data includes
the "source code" of scientific papers, the LaTeX source, which enables us to
study features not present in the "final product", such as the tools used and
private comments between authors. Our study identifies broad patterns and
trends in two example areas, computer science and mathematics, as well as
highlighting key differences in the way that science is written in these
fields. Finally, we outline future directions to extend the new topic of
scienceography.; Comment: 13 pages, 16 figures. Sixth International Conference on FUN WITH
Computer science is a relatively young discipline combining science,
engineering, and mathematics. The main flavors of computer science research
involve the theoretical development of conceptual models for the different
aspects of computing and the more applied building of software artifacts
and the assessment of their properties. In the computer science publication
culture, conferences are an important vehicle to quickly move ideas, and
journals often publish deeper versions of papers already presented at
conferences. These peculiarities of the discipline make computer science an
original research field within the sciences, and, therefore, the assessment of
classical bibliometric laws is particularly important for this field. In this
paper, we study the skewness of the distribution of citations to papers
published in computer science publication venues (journals and conferences). We
find that the skewness in the distribution of mean citedness of different
venues combines with the asymmetry in citedness of articles in each venue,
resulting in a highly asymmetric citation distribution with a power law tail.
Furthermore, the skewness of conference publications is more pronounced than
the asymmetry of journal papers. Finally, the impact of journal papers...
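The skewness statistic underlying the analysis above can be illustrated on synthetic data. The sketch below (an illustrative computation with made-up samples, not the paper's citation data) contrasts the sample skewness of a heavy-tailed, right-skewed "citation count" distribution with that of a symmetric one:

```python
# Sample skewness: third central moment normalized by the 3/2 power of the
# second central moment. Heavy right tails give large positive skewness.
import random

def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

rng = random.Random(42)
# Pareto samples as a stand-in for a power-law-tailed citation distribution
heavy = [rng.paretovariate(2.5) for _ in range(10000)]
# Gaussian samples as a symmetric baseline
symmetric = [rng.gauss(0.0, 1.0) for _ in range(10000)]
```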