
Escalonamento Work-Stealing de programas Divisão-e-Conquista com MPI-2; Scheduling Divide-and-Conquer programs by Work-Stealing with MPI-2

Pezzi, Guilherme Peretti
Fonte: Universidade Federal do Rio Grande do Sul Publicador: Universidade Federal do Rio Grande do Sul
Tipo: Dissertação Formato: application/pdf
Português
Relevância na Pesquisa
66.79%
Com o objetivo de ser portável e eficiente em arquiteturas HPC atuais, a execução de um programa paralelo deve ser adaptável. Este trabalho mostra como isso pode ser atingido utilizando MPI, através de criação dinâmica de processos, integrada com programação Divisão-e-Conquista e uma estratégia Work-Stealing para balancear os processos MPI, em ambientes heterogêneos e/ou dinâmicos, em tempo de execução. Este trabalho explica como implementar uma aplicação segundo o modelo de Divisão-e-Conquista com MPI, bem como a implementação de uma estratégia Work-Stealing. São apresentados resultados experimentais baseados em uma aplicação sintética, o problema das N-Rainhas (N-Queens). Valida-se tanto a adaptabilidade quanto a eficiência do código. Os resultados mostram que é possível utilizar um padrão amplamente difundido como o MPI, mesmo em plataformas de HPC não tão homogêneas como um cluster.; In order to be portable and efficient on modern HPC architectures, the execution of a parallel program must be adaptable. This work shows how to achieve this in MPI, by the dynamic creation of processes, coupled with Divide-and-Conquer programming and a Work-Stealing strategy to balance the MPI processes, in a heterogeneous and/or dynamic environment...
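The mechanism this record relies on is MPI-2's dynamic process creation. A minimal sketch of that mechanism is given below; it is not the dissertation's code, and the worker binary name ("nqueens_worker") and the message layout are illustrative assumptions.

```cpp
// Sketch: a divide-and-conquer step that offloads one N-Queens subproblem to
// a dynamically spawned MPI process (MPI-2's MPI_Comm_spawn). Compile with an
// MPI C++ wrapper (e.g. mpic++); a separate worker binary is assumed to exist.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm child;            // intercommunicator to the spawned worker
    int board_size = 12;       // N for N-Queens (assumed parameter)

    // Spawn one worker process at runtime to handle a branch of the search tree.
    MPI_Comm_spawn("nqueens_worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    // Send the subproblem description (here, just N) and collect the partial count.
    MPI_Send(&board_size, 1, MPI_INT, 0, 0, child);
    long long solutions = 0;
    MPI_Recv(&solutions, 1, MPI_LONG_LONG, 0, 0, child, MPI_STATUS_IGNORE);

    std::printf("partial solution count: %lld\n", solutions);
    MPI_Finalize();
    return 0;
}
```

A work-stealing balancer, as described in the abstract, would decide at runtime how many such processes to spawn and which idle process takes over which pending subproblem.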

Work stealing inside GPUs; Roubo de trabalho em processadores gráficos

Toss, Julio
Fonte: Universidade Federal do Rio Grande do Sul Publicador: Universidade Federal do Rio Grande do Sul
Tipo: Trabalho de Conclusão de Curso Formato: application/pdf
Português
Relevância na Pesquisa
66.62%
Unidades de processamento gráfico (GPU) tornaram-se ferramentas de grande valia no domínio da computação de alto desempenho. Graças às recentes inovações e melhoramentos do hardware, é possível utilizar processadores gráficos de propósito geral (GPGPUs) em uma ampla gama de aplicações científicas. No entanto, os modelos de programação existentes usados em GPGPU não são ainda suficientemente adaptáveis às diversas formas de paralelismo que uma aplicação possa expressar. Neste contexto, propomos um modelo híbrido de programação paralela para GPGPU usando paralelismo de tarefas e de dados. Em oposição ao que é advogado pelo modelo de programação CUDA, baseado apenas no paralelismo de dados, mostramos que é possível explorar o paralelismo de tarefas em GPUs e escaloná-las de forma eficiente usando a técnica do roubo de tarefas. Apresentamos neste trabalho a implementação de um escalonador por roubo de tarefas em CUDA e comparamos seu desempenho aos métodos de escalonamento estático e por lista aplicados aos problemas de transformação em array e particionamento em octree.; Graphics Processing Units have become a valuable support for High Performance Computing (HPC) applications. However, despite the many improvements on the General Purpose GPU...

Escalonamento por roubo de tarefas em sistemas Multi-CPU e Multi-GPU

Pinto, Vinícius Garcia
Fonte: Universidade Federal do Rio Grande do Sul Publicador: Universidade Federal do Rio Grande do Sul
Tipo: Dissertação Formato: application/pdf
Português
Relevância na Pesquisa
46.52%
In recent years, one of the alternatives adopted to increase the performance of high-performance computing systems has been the use of hybrid architectures. These architectures combine multicore processors with specialized coprocessors, such as GPUs, which act as accelerators for certain kinds of operations. On the other hand, current parallel programming tools and models are not well suited to hybrid scenarios and produce applications with little portability. Task parallelism, regarded as a generic, high-level programming paradigm, can be adopted in this scenario; it requires, however, dynamic scheduling algorithms such as the work-stealing algorithm. In this context, this work presents a middleware (WORMS) that supports task parallelism with work-stealing scheduling on hybrid multi-CPU and multi-GPU systems. This middleware allows each task to carry both a CPU and a GPU implementation, deciding at runtime which implementation will be executed according to the hardware resources available. The results obtained with WORMS show that it is possible to outperform, for some applications...
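The central idea above is a task that ships with two implementations and lets the runtime pick one at dispatch time. A minimal sketch of that pattern follows; the names (DualTask, Runtime, gpu_available) are hypothetical and this is not the WORMS API.

```cpp
// Illustrative sketch only: a task carrying both a CPU and a GPU
// implementation, with the runtime choosing one when the task is dequeued.
#include <functional>
#include <iostream>
#include <vector>

struct DualTask {
    std::function<void()> cpu_impl;   // implementation for a CPU core
    std::function<void()> gpu_impl;   // alternative implementation for a GPU
};

class Runtime {
public:
    explicit Runtime(bool gpu_available) : gpu_available_(gpu_available) {}

    void submit(DualTask t) { ready_.push_back(std::move(t)); }

    // Decide at dispatch time which implementation runs, based on the
    // hardware resources currently available (here: a single boolean).
    void run_all() {
        for (auto &t : ready_) {
            if (gpu_available_ && t.gpu_impl) t.gpu_impl();
            else t.cpu_impl();
        }
        ready_.clear();
    }

private:
    bool gpu_available_;
    std::vector<DualTask> ready_;
};

int main() {
    Runtime rt(/*gpu_available=*/false);   // pretend no GPU is free right now
    rt.submit({[] { std::cout << "task ran on CPU\n"; },
               [] { std::cout << "task ran on GPU\n"; }});
    rt.run_all();
}
```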

A runtime system for data-flow task programming on multicore architectures with accelerators; Uma ferramenta para programação com dependência de dados em arquiteturas multicore com aceleradores; Vers un support exécutif avec dépendance de données pour les architectures multicoeur avec des accélérateurs

Lima, João Vicente Ferreira
Fonte: Universidade Federal do Rio Grande do Sul Publicador: Universidade Federal do Rio Grande do Sul
Tipo: Tese de Doutorado Formato: application/pdf
Português
Relevância na Pesquisa
46.49%
In this thesis, we propose to study the issues of task parallelism with data dependencies on multicore architectures with accelerators. We target those architectures with the XKaapi runtime system developed by the MOAIS team (INRIA Rhône-Alpes). We first studied the issues on multi-GPU architectures for asynchronous execution and scheduling. Work stealing with heuristics showed significant performance results, but did not consider the computing power of different resources. Next, we designed a scheduling framework and a performance model to support scheduling strategies over XKaapi runtime. Finally, we performed experimental evaluations over the Intel Xeon Phi coprocessor in native execution. Our conclusion is twofold. First we concluded that data-flow task programming can be efficient on accelerators, which may be GPUs or Intel Xeon Phi coprocessors. Second, the runtime support of different scheduling strategies is essential. Cost models provide significant performance results over very regular computations, while work stealing can react to imbalances at runtime.; Esta tese investiga os desafios no uso de paralelismo de tarefas com dependências de dados em arquiteturas multi-CPU com aceleradores. Para tanto, o XKaapi, desenvolvido no grupo de pesquisa MOAIS (INRIA Rhône-Alpes)...
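The abstract contrasts work stealing with cost-model-driven scheduling on heterogeneous resources. The sketch below illustrates the cost-model side of that contrast under simple assumptions (two resources, known per-resource runtime estimates); it is not XKaapi's interface.

```cpp
// Hedged sketch: a cost-model scheduler places a task on the resource with
// the earliest estimated completion time, given per-resource cost estimates.
#include <cstdio>
#include <vector>

struct Resource {
    const char *name;
    double ready_at;      // time at which the resource becomes free
};

// Pick the resource minimizing (time it becomes free + predicted task cost).
int pick_resource(const std::vector<Resource> &res, const std::vector<double> &cost) {
    int best = 0;
    for (int i = 1; i < (int)res.size(); ++i)
        if (res[i].ready_at + cost[i] < res[best].ready_at + cost[best]) best = i;
    return best;
}

int main() {
    std::vector<Resource> res = {{"cpu-core", 0.0}, {"gpu", 2.0}};
    std::vector<double> cost = {10.0, 1.5};   // predicted runtimes of one task
    int r = pick_resource(res, cost);
    std::printf("schedule task on %s\n", res[r].name);   // gpu: 2.0 + 1.5 < 0 + 10
}
```

Work stealing, by contrast, ignores such estimates and simply reacts to imbalance at runtime, which is the trade-off the thesis evaluates.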

Massively parallel declarative computational models

Machado, Rui Mário da Silva
Fonte: Universidade de Évora Publicador: Universidade de Évora
Tipo: Tese de Doutorado
Português
Relevância na Pesquisa
46.37%
Current computer architectures are parallel, with an increasing number of processors. Parallel programming is an error-prone task and declarative models such as those based on constraints relieve the programmer from some of its difficult aspects, because they abstract control away. In this work we study and develop techniques for declarative computational models based on constraints using GPI, aiming at large-scale parallel execution. The main contributions of this work are: A GPI implementation of a scalable dynamic load balancing scheme based on work stealing, suitable for tree-shaped computations and effective for systems with thousands of threads. A parallel constraint solver, MaCS, implemented to take advantage of the GPI programming model. Experimental evaluation shows very good scalability results on systems with hundreds of cores. A GPI parallel version of the Adaptive Search algorithm, including different variants. The study on different problems advances the understanding of scalability issues known to exist with large numbers of cores; SUMÁRIO: Actualmente as arquitecturas de computadores são paralelas, com um crescente número de processadores. A programação paralela é uma tarefa propensa a erros e modelos declarativos baseados em restrições aliviam o programador de aspectos difíceis dado que abstraem o controlo. Neste trabalho estudamos e desenvolvemos técnicas para modelos de computação declarativos baseados em restrições usando o GPI...

Adaptively Parallel Processor Allocation for Cilk Jobs

Sen, Siddhartha; Agrawal, Kunal
Fonte: MIT - Massachusetts Institute of Technology Publicador: MIT - Massachusetts Institute of Technology
Tipo: Artigo de Revista Científica Formato: 16407 bytes; application/pdf
Português
Relevância na Pesquisa
46.35%
The problem of allocating processor resources fairly and efficiently to parallel jobs has been studied extensively in the past. Most of this work, however, assumes that the instantaneous parallelism of the jobs is known and used by the job scheduler to make its decisions. In this project, we consider different ways of estimating the parallelism of jobs during runtime, as well as different ways of using this information to allocate processors in a fair and efficient manner. The goal of our project is to design and implement a dynamic processor-allocation system for adaptively parallel jobs. Adaptively parallel jobs are jobs for which the number of processors which can be used without waste—in other words, the parallelism of each job—varies during execution. We call the problem of allocating processors to adaptively parallel jobs the adaptively parallel processor-allocation problem. We propose to investigate the adaptively parallel processor-allocation problem for jobs running on a shared-memory multiprocessor (SMP) system. We focus on the specific case of parallel jobs that are scheduled with the randomized work-stealing algorithm, as is used in the Cilk multithreaded language (later, we will expand the scope of our research to include other kinds of parallel jobs). Our problem can be defined as follows: Consider an SMP system with P processors and J jobs. At any given time...
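One simple allocation rule consistent with this problem statement is dynamic equipartitioning: each job receives the minimum of its current parallelism estimate and its fair share of the P processors, with any surplus redistributed. The sketch below is an illustration of that rule only, not necessarily the policy investigated in the project.

```cpp
// Sketch: distribute P processors among adaptively parallel jobs, capping each
// job at its reported parallelism ("desire") and redistributing the surplus.
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<int> allocate(int P, std::vector<int> desire) {
    std::vector<int> allot(desire.size(), 0);
    int remaining = P;
    std::vector<size_t> active;
    for (size_t i = 0; i < desire.size(); ++i) active.push_back(i);

    // Hand out fair shares round by round; jobs wanting less than their share
    // are satisfied early and drop out, freeing processors for the rest.
    while (remaining > 0 && !active.empty()) {
        int share = std::max(1, remaining / static_cast<int>(active.size()));
        std::vector<size_t> still_hungry;
        for (size_t i : active) {
            int give = std::min({share, desire[i] - allot[i], remaining});
            allot[i] += give;
            remaining -= give;
            if (allot[i] < desire[i]) still_hungry.push_back(i);
        }
        active.swap(still_hungry);
    }
    return allot;
}

int main() {
    // Three adaptively parallel jobs reporting parallelism 2, 10 and 5 on a
    // 12-processor SMP: prints "2 5 5".
    for (int a : allocate(12, {2, 10, 5})) std::printf("%d ", a);
    std::printf("\n");
}
```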

On-the-Fly Maintenance of Series-Parallel Relationships in Fork-Join Multithreaded Programs

Bender, Michael A.; Fineman, Jeremy T.; Gilbert, Seth; Leiserson, Charles E.
Fonte: MIT - Massachusetts Institute of Technology Publicador: MIT - Massachusetts Institute of Technology
Tipo: Artigo de Revista Científica Formato: 194904 bytes; application/pdf
Português
Relevância na Pesquisa
36.07%
A key capability of data-race detectors is to determine whether one thread executes logically in parallel with another or whether the threads must operate in series. This paper provides two algorithms, one serial and one parallel, to maintain series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. The serial SP-order algorithm runs in O(1) amortized time per operation. In contrast, the previously best algorithm requires a time per operation that is proportional to Tarjan’s functional inverse of Ackermann’s function. SP-order employs an order-maintenance data structure that allows us to implement a more efficient "English-Hebrew" labeling scheme than was used in earlier race detectors, which immediately yields an improved determinacy-race detector. In particular, any fork-join program running in T₁ time on a single processor can be checked on the fly for determinacy races in O(T₁) time. Corresponding improved bounds can also be obtained for more sophisticated data-race detectors, for example, those that use locks. By combining SP-order with Feng and Leiserson’s serial SP-bags algorithm, we obtain a parallel SP-maintenance algorithm, called SP-hybrid. Suppose that a fork-join program has n threads...
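To make the "English-Hebrew" idea mentioned above concrete, the toy sketch below hand-assigns a position in two total orders to each thread and tests parallelism by checking whether the two orders disagree; my understanding from the abstract is that this is the query the labeling supports, while the real SP-order algorithm maintains the labels on the fly with an order-maintenance structure. The labels and names here are illustrative only.

```cpp
// Toy illustration of an English-Hebrew labeling query for SP relationships.
#include <cstdio>

struct EHLabel { int english; int hebrew; };

// Two threads may execute logically in parallel exactly when the two
// orderings disagree about their relative position.
bool logically_parallel(EHLabel a, EHLabel b) {
    return (a.english < b.english) != (a.hebrew < b.hebrew);
}

int main() {
    // e1 and e2 spawned under one P-node; e3 follows both in series.
    EHLabel e1{1, 2}, e2{2, 1}, e3{3, 3};
    std::printf("e1 || e2 ? %d\n", logically_parallel(e1, e2));  // 1: parallel
    std::printf("e1 || e3 ? %d\n", logically_parallel(e1, e3));  // 0: series
}
```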

Nested Parallelism in Transactional Memory

Sukha, Jim; Agrawal, Kunal; Fineman, Jeremy
Fonte: Universidade de Rochester Publicador: Universidade de Rochester
Português
Relevância na Pesquisa
46.35%
Presented at The Second ACM SIGPLAN Workshop on Transactional Computing (TRANSACT 07), Portland, Oregon, August 16, 2007.; This paper describes XCilk, a runtime-system design for software transactional memory in a Cilk-like parallel programming language, which uses a work-stealing scheduler. XCilk supports transactions that themselves can contain nested parallelism and nested transactions, both of unbounded nesting depth. Thus, XCilk allows users to call a library function within a transaction, even if that function itself exploits concurrency and uses transactions. XCilk provides transactional memory with strong atomicity, eager updates, eager conflict detection and lazy cleanup on aborts. XCilk uses a new algorithm and data structure, called XConflict, to facilitate conflict detection between transactions. Using XConflict, XCilk guarantees provably good performance for closed-nested transactions of arbitrary depth in the special case when all accesses are writes and there is no memory contention. More precisely, XCilk executes a program with work T₁ and critical-path length T∞ in O(T₁/p + p·T∞) time on p processors if all memory accesses are writes and all concurrent paths of the XCilk program access disjoint sets of memory locations. Although this bound holds only under rather optimistic assumptions...

Counterproductive Work Behaviors and Moral Disengagement

GUALANDRI, MARIO
Fonte: La Sapienza Universidade de Roma Publicador: La Sapienza Universidade de Roma
Tipo: Tese de Doutorado
Português
Relevância na Pesquisa
36.13%
In recent years counterproductive work behavior (CWB) has become an increasingly popular topic of study among organizational researchers (Penny & Spector, 2005; Yang & Diefendorff, 2009). The peculiarity of CWBs is that they differ from common negative acts in that they are not accidental: they result from purposeful action intended to cause damage, even if the damage itself is sometimes unintended (Spector & Fox, 2005). These behaviors may include acts such as direct aggression, theft, or purposely failing to follow instructions or performing work incorrectly, thereby violating significant organizational norms (Spector, Fox, Penney, Bruursema, Goh, & Kessler, 2006), reducing the efficiency and job performance of the organization's members (Hollinger & Clark, 1982), and ultimately threatening the health and well-being of organizations and their members. The latest financial scandals affecting American and European stock markets, as well as the increase in deviant behavior in organizations, have raised questions about ethics in the work context, highlighting the need to understand these occurrences in order to prevent and tackle them (Chappell & Di Martino, 2006; Fox & Spector, 2005; Greenberg, 1997; Vardi & Weitz, 2004; Wellen, 2004). In fact, several studies show that these behaviors are one of the most serious problems that organizations face in many countries (Chappel & Di Martino...

A Scalable Locality-aware Adaptive Work-stealing Scheduler for Multi-core Task Parallelism

Guo, Yi
Fonte: Universidade Rice Publicador: Universidade Rice
Tipo: Thesis; Text Formato: 143 pp; application/pdf
Português
Relevância na Pesquisa
46.75%
Recent trends have made it clear that processor makers are committed to multicore chip designs. The number of cores per chip is increasing, while there is little or no increase in clock speed. This parallelism trend poses a significant and urgent challenge for computer software, because programs have to be written or transformed into a multi-threaded form to take full advantage of future hardware advances. Task parallelism has been identified as one of the prerequisites for software productivity. In task parallelism, programmers focus on decomposing the problem into subcomputations that can run in parallel and leave the compiler and runtime to handle the scheduling details. This separation of concerns between task decomposition and scheduling provides productivity to the programmer but poses challenges to the runtime scheduler. Our thesis is that work-stealing schedulers with adaptive scheduling policies and locality awareness can provide a scalable and robust runtime foundation for multicore task parallelism. We evaluate our thesis using the new Scalable Locality-aware Adaptive Work-stealing (SLAW) runtime scheduler developed for the Habanero-Java programming language, a task-parallel variant of Java. SLAW's adaptive task scheduling is motivated by the study of two common scheduling policies in a work-stealing scheduler...

Real-time scheduling of parallel tasks in the Linux Kernel

Fonseca, José; Nogueira, Luis; Maia, Cláudio; Pinho, Luis Miguel
Fonte: IPP Hurray! Research Group Publicador: IPP Hurray! Research Group
Tipo: Relatório
Publicado em //2012 Português
Relevância na Pesquisa
46.3%
This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the system's expected predictability. Experimental results demonstrate the low scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
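The flavour of "priority-aware" stealing described here can be sketched as follows: when a core runs out of ready threads, it picks a victim and steals the thread with the earliest absolute deadline (the EDF priority) rather than an arbitrary one. This is an illustration, not the authors' kernel code; the types and run-queue layout are assumptions.

```cpp
// Sketch: deadline-aware victim stealing for a global-EDF-style scheduler.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <optional>

struct RtThread {
    uint64_t deadline;   // absolute deadline; smaller = higher EDF priority
    int id;
};

using RunQueue = std::deque<RtThread>;

// Remove and return the earliest-deadline thread from a victim core's queue.
std::optional<RtThread> steal_earliest_deadline(RunQueue &victim) {
    if (victim.empty()) return std::nullopt;
    auto it = std::min_element(victim.begin(), victim.end(),
                               [](const RtThread &a, const RtThread &b) {
                                   return a.deadline < b.deadline;
                               });
    RtThread stolen = *it;
    victim.erase(it);
    return stolen;
}

int main() {
    RunQueue other_core = {{/*deadline*/ 900, 1}, {450, 2}, {1200, 3}};
    auto t = steal_earliest_deadline(other_core);   // steals thread 2
    return t ? 0 : 1;
}
```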

Supporting real-time parallel task models with work-stealing

Maia, Cláudio; Nogueira, Luis; Pinho, Luis Miguel
Fonte: IPP Hurray! Research Group Publicador: IPP Hurray! Research Group
Tipo: Artigo de Revista Científica
Publicado em //2012 Português
Relevância na Pesquisa
46.74%
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may end up enqueued after lower-priority tasks, possibly leading to deadline misses, as in this case the lower-priority tasks are the candidates when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core, but unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly...
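As I read the abstract, the central data-structure change is to replace the FIFO/LIFO deque with a per-core priority-ordered container, so that both local dequeues and steals always take the highest-priority ready thread. A minimal sketch of that idea follows; the class and member names are hypothetical, not the authors' implementation.

```cpp
// Sketch: a per-core "priority deque" in which local execution and remote
// stealing both pick the highest-priority ready thread first.
#include <iostream>
#include <queue>

struct ReadyThread {
    int priority;   // larger value = more urgent
    int id;
    bool operator<(const ReadyThread &o) const { return priority < o.priority; }
};

class PriorityDeque {
public:
    void push(ReadyThread t) { heap_.push(t); }

    // Local execution and stealing use the same rule: highest priority first,
    // which avoids the priority inversion of a plain bottom/top deque.
    bool pop_highest(ReadyThread &out) {
        if (heap_.empty()) return false;
        out = heap_.top();
        heap_.pop();
        return true;
    }

private:
    std::priority_queue<ReadyThread> heap_;
};

int main() {
    PriorityDeque core0;
    core0.push({/*priority*/ 3, /*id*/ 10});
    core0.push({7, 11});
    ReadyThread t{};
    if (core0.pop_highest(t))   // an idle core stealing from core0
        std::cout << "stole thread " << t.id << " (priority " << t.priority << ")\n";
}
```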

Supporting intra-task parallelism in real-time multiprocessor systems

Fonseca, José
Fonte: Instituto Politécnico do Porto. Instituto Superior de Engenharia do Porto Publicador: Instituto Politécnico do Porto. Instituto Superior de Engenharia do Porto
Tipo: Dissertação de Mestrado
Publicado em //2012 Português
Relevância na Pesquisa
46.67%
Modern real-time systems increasingly generate heavy, dynamic computational loads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way to improve application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism simply by indicating potential parallel regions within their applications. All such annotations are treated by the system merely as hints, which may be ignored and replaced with equivalent sequential constructs by the language itself. Thus, how the computation is actually subdivided and mapped onto the various processors is the responsibility of the compiler and of the underlying computing system. By lifting this burden from the programmer...

WSPE : um ambiente de programação peer-to-peer para a computação em grade; WSPE : a peer-to-peer programming environment for grid computing

Rosinha, Rômulo Bandeira
Fonte: Universidade Federal do Rio Grande do Sul Publicador: Universidade Federal do Rio Grande do Sul
Tipo: Dissertação Formato: application/pdf
Português
Relevância na Pesquisa
46.35%
A programming environment is a software tool resulting from the combination of a programming model and an execution system. The goal of a programming environment is to simplify the development and execution of applications on a given computational infrastructure. A Grid Computing infrastructure has peculiar characteristics that make programming environments designed for more traditional infrastructures, such as massively parallel machines or computer clusters, rather inefficient. This work presents WSPE, a peer-to-peer programming environment for Grid Computing. WSPE supports grid-unaware applications that follow the parallel-task programming model. The WSPE programming interface is defined through Java language annotations. The execution system follows a fully decentralized peer-to-peer model in order to achieve robustness and scalability. Although an execution system must address many aspects to be complete, the design of the WSPE execution system focuses on performance, portability, scalability and adaptability. To this end, mechanisms were developed or adapted for the scheduling functions...

Provably Efficient Adaptive Scheduling for Parallel Jobs

He, Yuxiong; Hsu, Wen Jing; Leiserson, Charles E.
Fonte: MIT - Massachusetts Institute of Technology Publicador: MIT - Massachusetts Institute of Technology
Tipo: Artigo de Revista Científica Formato: 142623 bytes; application/pdf
Português
Relevância na Pesquisa
36.07%
Scheduling competing jobs on multiprocessors has always been an important issue for parallel and distributed systems. The challenge is to ensure global, system-wide efficiency while offering a level of fairness to user jobs. Varying degrees of success have been achieved over the years. However, few existing schemes address both efficiency and fairness over a wide range of workloads. Moreover, in order to obtain analytical results, most of them require prior information about jobs, which may be difficult to obtain in real applications. This paper presents two novel adaptive scheduling algorithms -- GRAD for centralized scheduling, and WRAD for distributed scheduling. Both GRAD and WRAD ensure fair allocation under all levels of workload, and they offer provable efficiency without requiring prior information about each job's parallelism. Moreover, they provide effective control over the scheduling overhead and ensure efficient utilization of processors. To the best of our knowledge, they are the first non-clairvoyant scheduling algorithms that offer such guarantees. We also believe that our new approach of a resource request-allotment protocol deserves further exploration. Specifically, both GRAD and WRAD are O(1)-competitive with respect to mean response time for batched jobs...

Work-Stealing Without the Baggage

Kumar, Vivek; Frampton, Daniel; Blackburn, Stephen; Grove, David; Tardieu, Oliver
Fonte: Conference Organising Committee Publicador: Conference Organising Committee
Tipo: Conference paper
Português
Relevância na Pesquisa
46.52%
Work-stealing is a promising approach for effectively exploiting software parallelism on parallel hardware. A programmer who uses work-stealing explicitly identifies potential parallelism and the runtime then schedules work, keeping otherwise idle hardware...

Work-stealing for mixed-mode parallelism by deterministic team-building

Wimmer, Martin; Träff, Jesper Larsson
Fonte: Universidade Cornell Publicador: Universidade Cornell
Tipo: Artigo de Revista Científica
Publicado em 22/12/2010 Português
Relevância na Pesquisa
46.68%
We show how to extend classical work-stealing to deal also with data-parallel tasks that can require any number of threads r >= 1 for their execution. We explain in detail the resulting idea of work-stealing with deterministic team-building, which generalizes classical work-stealing in a natural way. A prototype C++ implementation of the generalized work-stealing algorithm has been produced and is briefly described. Building on this, a serious, well-known contender for a best parallel Quicksort algorithm has been implemented, which naturally relies on both task and data parallelism. For instance, when sorting 2^27-1 randomly generated integers, we could improve the speed-up from 5.1 to 8.7 on a 32-core Intel Nehalem EX system, consistently outperforming the tuned, task-parallel Cilk++ system.
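Team-building can be pictured as follows: an idle worker that would normally steal a single task instead joins a team, and once r workers have joined, they execute the data-parallel task together. The sketch below is a simplified illustration of that idea under stated assumptions, not the paper's C++ prototype.

```cpp
// Conceptual sketch: r otherwise-idle workers form a team and each executes
// its slice of a data-parallel task that requires r threads.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct TeamTask {
    int r;                                      // threads required
    void run(int member_rank) const {           // data-parallel body
        std::printf("team member %d of %d working\n", member_rank, r);
    }
};

int main() {
    TeamTask task{/*r=*/4};
    std::atomic<int> next_rank{0};

    // Workers "join the team" by claiming a rank, then run their slice.
    std::vector<std::thread> team;
    for (int i = 0; i < task.r; ++i)
        team.emplace_back([&] { task.run(next_rank.fetch_add(1)); });
    for (auto &t : team) t.join();
}
```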

Configurable Strategies for Work-stealing

Wimmer, Martin; Cederman, Daniel; Träff, Jesper Larsson; Tsigas, Philippas
Fonte: Universidade Cornell Publicador: Universidade Cornell
Tipo: Artigo de Revista Científica
Publicado em 28/05/2013 Português
Relevância na Pesquisa
46.62%
Work-stealing systems are typically oblivious to the nature of the tasks they are scheduling. For instance, they do not know or take into account how long a task will take to execute or how many subtasks it will spawn. Moreover, the actual task execution order is typically determined by the underlying task storage data structure, and cannot be changed. There are thus possibilities for optimizing task parallel executions by providing information on specific tasks and their preferred execution order to the scheduling system. We introduce scheduling strategies to enable applications to dynamically provide hints to the task-scheduling system on the nature of specific tasks. Scheduling strategies can be used to independently control both local task execution order as well as steal order. In contrast to conventional scheduling policies that are normally global in scope, strategies allow the scheduler to apply optimizations on individual tasks. This flexibility greatly improves composability as it allows the scheduler to apply different, specific scheduling choices for different parts of applications simultaneously. We present a number of benchmarks that highlight diverse, beneficial effects that can be achieved with scheduling strategies. Some benchmarks (branch-and-bound...
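The kind of per-task hint described here can be sketched as a strategy that supplies one key for local execution order and a separate key for steal order, letting the scheduler order the two ends of the queue independently. The names below are illustrative assumptions, not the paper's API.

```cpp
// Sketch: a task hint with independent keys for local execution and stealing.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Task {
    int id;
    int local_key;   // smaller = execute earlier locally (e.g. depth-first)
    int steal_key;   // smaller = stolen earlier (e.g. largest remaining work)
};

int main() {
    std::vector<Task> ready = {{1, 3, 0}, {2, 1, 2}, {3, 2, 1}};

    // Local execution order and steal order are derived from different keys.
    auto by_local = ready;
    std::sort(by_local.begin(), by_local.end(),
              [](const Task &a, const Task &b) { return a.local_key < b.local_key; });
    auto by_steal = ready;
    std::sort(by_steal.begin(), by_steal.end(),
              [](const Task &a, const Task &b) { return a.steal_key < b.steal_key; });

    std::printf("run locally first: task %d; steal first: task %d\n",
                by_local.front().id, by_steal.front().id);
}
```

Because each task carries its own strategy, different parts of an application can be scheduled under different policies simultaneously, which is the composability argument made above.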

Analysis of Randomized Work Stealing with False Sharing

Cole, Richard; Ramachandran, Vijaya
Fonte: Universidade Cornell Publicador: Universidade Cornell
Tipo: Artigo de Revista Científica
Publicado em 21/03/2011 Português
Relevância na Pesquisa
46.35%
This paper analyzes the cache miss cost of algorithms when scheduled using randomized work stealing (RWS) in a parallel environment, taking into account the effects of false sharing. First, prior analyses (due to Acar et al.) are extended to incorporate false sharing. However, to control the possible delays due to false sharing, some restrictions on the algorithms seem necessary. Accordingly, the class of Hierarchical Tree algorithms is introduced and their performance analyzed. In addition, the paper analyzes the performance of a subclass of the Hierarchical Tree Algorithms, called HBP algorithms, when scheduled using RWS; improved complexity bounds are obtained for this subclass. This class was introduced in a companion paper with efficient resource oblivious computation in mind. Finally, we note that in a scenario in which there is no false sharing the results in this paper match prior bounds for cache misses but with reduced assumptions, and in particular with no need for a bounding concave function for the cost of cache misses as in prior work by Frigo and Strumpen. This allows non-trivial cache miss bounds in this case to be obtained for a larger class of algorithms.

Distributed Work Stealing for Constraint Solving

Pedro, Vasco; Abreu, Salvador
Fonte: Universidade Cornell Publicador: Universidade Cornell
Tipo: Artigo de Revista Científica
Publicado em 20/09/2010 Português
Relevância na Pesquisa
46.35%
With the dissemination of affordable parallel and distributed hardware, parallel and distributed constraint solving has lately been the focus of some attention. To effectually apply the power of distributed computational systems, there must be an effective sharing of the work involved in the search for a solution to a Constraint Satisfaction Problem (CSP) between all the participating agents, and it must happen dynamically, since it is hard to predict the effort associated with the exploration of some part of the search space. We describe and provide an initial experimental assessment of an implementation of a work stealing-based approach to distributed CSP solving.; Comment: Online proceedings of the Joint Workshop on Implementation of Constraint Logic Programming Systems and Logic-based Methods in Programming Environments (CICLOPS-WLPE 2010), Edinburgh, Scotland, U.K., July 15, 2010