Parallel Processing Books

33 products


  • Functional and Concurrent Programming

    Pearson Education (US) Functional and Concurrent Programming

    15 in stock

    Book Synopsis
    Michel Charpentier is an associate professor with the Computer Science department at the University of New Hampshire (UNH). His interests over the years have ranged from distributed systems to formal verification and mobile sensor networks. He has been with UNH since 1999 and currently teaches courses in programming languages, concurrency, formal verification, and model-checking.

    Table of Contents
    Foreword by Cay Horstmann xxiii
    Preface xxv
    Acknowledgments xxxv
    About the Author xxxvii
    Part I. Functional Programming 1
    Chapter 1: Concepts of Functional Programming 3 — 1.1 What Is Functional Programming? 3 1.2 Functions 4 1.3 From Functions to Functional Programming Concepts 6 1.4 Summary 7
    Chapter 2: Functions in Programming Languages 9 — 2.1 Defining Functions 9 2.2 Composing Functions 10 2.3 Functions Defined as Methods 12 2.4 Operators Defined as Methods 12 2.5 Extension Methods 13 2.6 Local Functions 14 2.7 Repeated Arguments 15 2.8 Optional Arguments 16 2.9 Named Arguments 16 2.10 Type Parameters 17 2.11 Summary 19
    Chapter 3: Immutability 21 — 3.1 Pure and Impure Functions 21 3.2 Actions 23 3.3 Expressions Versus Statements 25 3.4 Functional Variables 26 3.5 Immutable Objects 28 3.6 Implementation of Mutable State 29 3.7 Functional Lists 31 3.8 Hybrid Designs 32 3.9 Updating Collections of Mutable/Immutable Objects 35 3.10 Summary 36
    Chapter 4: Case Study: Active–Passive Sets 39 — 4.1 Object-Oriented Design 39 4.2 Functional Values 41 4.3 Functional Objects 43 4.4 Summary 44
    Chapter 5: Pattern Matching and Algebraic Data Types 47 — 5.1 Functional Switch 47 5.2 Tuples 48 5.3 Options 50 5.4 Revisiting Functional Lists 51 5.5 Trees 53 5.6 Illustration: List Zipper 56 5.7 Extractors 59 5.8 Summary 60
    Chapter 6: Recursive Programming 63 — 6.1 The Need for Recursion 63 6.2 Recursive Algorithms 65 6.3 Key Principles of Recursive Algorithms 67 6.4 Recursive Structures 69 6.5 Tail Recursion 71 6.6 Examples of Tail Recursive Functions 73 6.7 Summary 77
    Chapter 7: Recursion on Lists 79 — 7.1 Recursive Algorithms as Equalities 79 7.2 Traversing Lists 80 7.3 Returning Lists 82 7.4 Building Lists from the Execution Stack 84 7.5 Recursion on Multiple/Nested Lists 85 7.6 Recursion on Sublists Other Than the Tail 88 7.7 Building Lists in Reverse Order 90 7.8 Illustration: Sorting 92 7.9 Building Lists Efficiently 94 7.10 Summary 96
    Chapter 8: Case Study: Binary Search Trees 99 — 8.1 Binary Search Trees 99 8.2 Sets of Integers as Binary Search Trees 100 8.3 Implementation Without Rebalancing 102 8.4 Self-Balancing Trees 107 8.5 Summary 113
    Chapter 9: Higher-Order Functions 115 — 9.1 Functions as Values 115 9.2 Currying 118 9.3 Function Literals 120 9.4 Functions Versus Methods 123 9.5 Single-Abstract-Method Interfaces 124 9.6 Partial Application 125 9.7 Closures 130 9.8 Inversion of Control 133 9.9 Summary 133
    Chapter 10: Standard Higher-Order Functions 137 — 10.1 Functions with Predicate Arguments 137 10.2 map and foreach 140 10.3 flatMap 141 10.4 fold and reduce 146 10.5 iterate, tabulate, and unfold 148 10.6 sortWith, sortBy, maxBy, and minBy 149 10.7 groupBy and groupMap 150 10.8 Implementing Standard Higher-Order Functions 152 10.9 foreach, map, flatMap, and for-Comprehensions 152 10.10 Summary 155
    Chapter 11: Case Study: File Systems as Trees 157 — 11.1 Design Overview 157 11.2 A Node-Searching Helper Function 158 11.3 String Representation 158 11.4 Building Trees 160 11.5 Querying 164 11.6 Navigation 168 11.7 Tree Zipper 169 11.8 Summary 172
    Chapter 12: Lazy Evaluation 173 — 12.1 Delayed Evaluation of Arguments 173 12.2 By-Name Arguments 174 12.3 Control Abstraction 176 12.4 Internal Domain-Specific Languages 179 12.5 Streams as Lazily Evaluated Lists 180 12.6 Streams as Pipelines 182 12.7 Streams as Infinite Data Structures 184 12.8 Iterators 184 12.9 Lists, Streams, Iterators, and Views 187 12.10 Delayed Evaluation of Fields and Local Variables 190 12.11 Illustration: Subset-Sum 191 12.12 Summary 193
    Chapter 13: Handling Failures 195 — 13.1 Exceptions and Special Values 195 13.2 Using Option 197 13.3 Using Try 198 13.4 Using Either 199 13.5 Higher-Order Functions and Pipelines 201 13.6 Summary 204
    Chapter 14: Case Study: Trampolines 205 — 14.1 Tail-Call Optimization 205 14.2 Trampolines for Tail-Calls 206 14.3 Tail-Call Optimization in Java 207 14.4 Dealing with Non-Tail-Calls 209 14.5 Summary 213
    A Brief Interlude 215
    Chapter 15: Types (and Related Concepts) 217 — 15.1 Typing Strategies 217 15.2 Types as Sets 222 15.3 Types as Services 223 15.4 Abstract Data Types 224 15.5 Type Inference 225 15.6 Subtypes 229 15.7 Polymorphism 232 15.8 Type Variance 235 15.9 Type Bounds 241 15.10 Type Classes 245 15.11 Summary 250
    Part II. Concurrent Programming 253
    Chapter 16: Concepts of Concurrent Programming 255 — 16.1 Non-sequential Programs 255 16.2 Concurrent Programming Concepts 258 16.3 Summary 259
    Chapter 17: Threads and Nondeterminism 261 — 17.1 Threads of Execution 261 17.2 Creating Threads Using Lambda Expressions 263 17.3 Nondeterministic Behavior of Multithreaded Programs 263 17.4 Thread Termination 264 17.5 Testing and Debugging Multithreaded Programs 266 17.6 Summary 268
    Chapter 18: Atomicity and Locking 271 — 18.1 Atomicity 271 18.2 Non-atomic Operations 273 18.3 Atomic Operations and Non-atomic Composition 274 18.4 Locking 278 18.5 Intrinsic Locks 279 18.6 Choosing Locking Targets 281 18.7 Summary 283
    Chapter 19: Thread-Safe Objects 285 — 19.1 Immutable Objects 285 19.2 Encapsulating Synchronization Policies 286 19.3 Avoiding Reference Escape 288 19.4 Public and Private Locks 289 19.5 Leveraging Immutable Types 290 19.6 Thread-Safety 293 19.7 Summary 295
    Chapter 20: Case Study: Thread-Safe Queue 297 — 20.1 Queues as Pairs of Lists 297 20.2 Single Public Lock Implementation 298 20.3 Single Private Lock Implementation 301 20.4 Applying Lock Splitting 303 20.5 Summary 305
    Chapter 21: Thread Pools 307 — 21.1 Fire-and-Forget Asynchronous Execution 307 21.2 Illustration: Parallel Server 309 21.3 Different Types of Thread Pools 312 21.4 Parallel Collections 314 21.5 Summary 318
    Chapter 22: Synchronization 321 — 22.1 Illustration of the Need for Synchronization 321 22.2 Synchronizers 324 22.3 Deadlocks 325 22.4 Debugging Deadlocks with Thread Dumps 328 22.5 The Java Memory Model 330 22.6 Summary 335
    Chapter 23: Common Synchronizers 337 — 23.1 Locks 337 23.2 Latches and Barriers 339 23.3 Semaphores 341 23.4 Conditions 343 23.5 Blocking Queues 349 23.6 Summary 353
    Chapter 24: Case Study: Parallel Execution 355 — 24.1 Sequential Reference Implementation 355 24.2 One New Thread per Task 356 24.3 Bounded Number of Threads 357 24.4 Dedicated Thread Pool 359 24.5 Shared Thread Pool 360 24.6 Bounded Thread Pool 361 24.7 Parallel Collections 362 24.8 Asynchronous Task Submission Using Conditions 362 24.9 Two-Semaphore Implementation 367 24.10 Summary 368
    Chapter 25: Futures and Promises 369 — 25.1 Functional Tasks 369 25.2 Futures as Synchronizers 371 25.3 Timeouts, Failures, and Cancellation 374 25.4 Future Variants 375 25.5 Promises 375 25.6 Illustration: Thread-Safe Caching 377 25.7 Summary 379
    Chapter 26: Functional-Concurrent Programming 381 — 26.1 Correctness and Performance Issues with Blocking 381 26.2 Callbacks 384 26.3 Higher-Order Functions on Futures 385 26.4 Function flatMap on Futures 388 26.5 Illustration: Parallel Server Revisited 390 26.6 Functional-Concurrent Programming Patterns 393 26.7 Summary 397
    Chapter 27: Minimizing Thread Blocking 399 — 27.1 Atomic Operations 399 27.2 Lock-Free Data Structures 402 27.3 Fork/Join Pools 405 27.4 Asynchronous Programming 406 27.5 Actors 407 27.6 Reactive Streams 411 27.7 Non-blocking Synchronization 412 27.8 Summary 414
    Chapter 28: Case Study: Parallel Strategies 417 — 28.1 Problem Definition 417 28.2 Sequential Implementation with Timeout 419 28.3 Parallel Implementation Using invokeAny 420 28.4 Parallel Implementation Using CompletionService 421 28.5 Asynchronous Implementation with Scala Futures 422 28.6 Asynchronous Implementation with CompletableFuture 426 28.7 Caching Results from Strategies 427 28.8 Summary 431
    Appendix A. Features of Java and Kotlin 433 — A.1 Functions in Java and Kotlin 433 A.2 Immutability 436 A.3 Pattern Matching and Algebraic Data Types 437 A.4 Recursive Programming 439 A.5 Higher-Order Functions 440 A.6 Lazy Evaluation 446 A.7 Handling Failures 449 A.8 Types 451 A.9 Threads 453 A.10 Atomicity and Locking 454 A.11 Thread-Safe Objects 455 A.12 Thread Pools 457 A.13 Synchronization 459 A.14 Futures and Functional-Concurrent Programming 460 A.15 Minimizing Thread Blocking 461
    Glossary 463
    Index 465


    £37.79

  • OpenACC for Programmers

    Pearson Education (US) OpenACC for Programmers

    1 in stock

    Book Synopsis
    Sunita Chandrasekaran is assistant professor in the Computer and Information Sciences Department at the University of Delaware. Her research interests include exploring the suitability of high-level programming models and runtime systems for HPC and embedded platforms, and migrating scientific applications to heterogeneous computing systems. Dr. Chandrasekaran was a post-doctoral fellow at the University of Houston and holds a Ph.D. from Nanyang Technological University, Singapore. She is a member of OpenACC, OpenMP, MCA and SPEC HPG. She has served on the program committees of various conferences and workshops including SC, ISC, ICPP, CCGrid, Cluster, and PACT, and has co-chaired parallel programming workshops co-located with SC, ISC, IPDPS, and SIAM. Guido Juckeland is head of the Computational Science Group, Department for Information Services and Computing, Helmholtz-Zentrum Dresden-Rossendorf, and coordinates the work of the GPU Center …

    Table of Contents
    Foreword xv
    Preface xxi
    Acknowledgments xxiii
    About the Contributors xxv
    Chapter 1: OpenACC in a Nutshell 1 — 1.1 OpenACC Syntax 3 1.2 Compute Constructs 6 1.3 The Data Environment 11 1.4 Summary 15 1.5 Exercises 15
    Chapter 2: Loop-Level Parallelism 17 — 2.1 Kernels Versus Parallel Loops 18 2.2 Three Levels of Parallelism 21 2.3 Other Loop Constructs 24 2.4 Summary 30 2.5 Exercises 31
    Chapter 3: Programming Tools for OpenACC 33 — 3.1 Common Characteristics of Architectures 34 3.2 Compiling OpenACC Code 35 3.3 Performance Analysis of OpenACC Applications 36 3.4 Identifying Bugs in OpenACC Programs 51 3.5 Summary 53 3.6 Exercises 54
    Chapter 4: Using OpenACC for Your First Program 59 — 4.1 Case Study 59 4.2 Creating a Naive Parallel Version 68 4.3 Performance of OpenACC Programs 71 4.4 An Optimized Parallel Version 73 4.5 Summary 78 4.6 Exercises 79
    Chapter 5: Compiling OpenACC 81 — 5.1 The Challenges of Parallelism 82 5.2 Restructuring Compilers 88 5.3 Compiling OpenACC 92 5.4 Summary 97 5.5 Exercises 97
    Chapter 6: Best Programming Practices 101 — 6.1 General Guidelines 102 6.2 Maximize On-Device Compute 105 6.3 Optimize Data Locality 108 6.4 A Representative Example 112 6.5 Summary 118 6.6 Exercises 119
    Chapter 7: OpenACC and Performance Portability 121 — 7.1 Challenges 121 7.2 Target Architectures 123 7.3 OpenACC for Performance Portability 124 7.4 Code Refactoring for Performance Portability 126 7.5 Summary 132 7.6 Exercises 133
    Chapter 8: Additional Approaches to Parallel Programming 135 — 8.1 Programming Models 135 8.2 Programming Model Components 142 8.3 A Case Study 155 8.4 Summary 170 8.5 Exercises 170
    Chapter 9: OpenACC and Interoperability 173 — 9.1 Calling Native Device Code from OpenACC 174 9.2 Calling OpenACC from Native Device Code 181 9.3 Advanced Interoperability Topics 182 9.4 Summary 185 9.5 Exercises 185
    Chapter 10: Advanced OpenACC 187 — 10.1 Asynchronous Operations 187 10.2 Multidevice Programming 204 10.3 Summary 213 10.4 Exercises 213
    Chapter 11: Innovative Research Ideas Using OpenACC, Part I 215 — 11.1 Sunway OpenACC 215 11.2 Compiler Transformation of Nested Loops for Accelerators 224
    Chapter 12: Innovative Research Ideas Using OpenACC, Part II 237 — 12.1 A Framework for Directive-Based High-Performance Reconfigurable Computing 237 12.2 Programming Accelerated Clusters Using XcalableACC 253
    Index 269


    £35.14

  • Multicore Software Development Techniques

    Elsevier Science Multicore Software Development Techniques

    Out of stock

    Book Synopsis
    Provides a set of practical processes and techniques used for multicore software development. This book focuses on solving day-to-day problems using practical tips and tricks and industry case studies to reinforce the key concepts in multicore software development.

    Table of Contents
    1. Principles of parallel computing
    2. Parallelism in all of its forms
    3. Multicore system architectures
    4. Multicore Software Architectures
    5. Multicore software development process
    6. A case study on Multicore Development
    7. Multicore Virtualization
    8. Performance and Optimization of Multicore systems
    9. Sequential to parallel migration of software applications
    10. Concurrency abstraction layers


    £31.12

  • CUDA for Engineers

    Pearson Education (US) CUDA for Engineers

    2 in stock

    Book Synopsis
    Duane Storti is a professor of mechanical engineering at the University of Washington in Seattle. He has thirty-five years of experience in teaching and research in the areas of engineering mathematics, dynamics and vibrations, computer-aided design, 3D printing, and applied GPU computing. Mete Yurtoglu is currently pursuing an M.S. in applied mathematics and a Ph.D. in mechanical engineering at the University of Washington in Seattle. His research interests include GPU-based methods for computer vision and machine learning.

    Table of Contents
    Acknowledgments xvii
    About the Authors xix
    Introduction 1 — What Is CUDA? 1 What Does “Need-to-Know” Mean for Learning CUDA? 2 What Is Meant by “for Engineers”? 3 What Do You Need to Get Started with CUDA? 4 How Is This Book Structured? 4 Conventions Used in This Book 8 Code Used in This Book 8 User’s Guide 9 Historical Context 10 References 12
    Chapter 1: First Steps 13 — Running CUDA Samples 13 Running Our Own Serial Apps 19 Summary 22 Suggested Projects 23
    Chapter 2: CUDA Essentials 25 — CUDA’s Model for Parallelism 25 Need-to-Know CUDA API and C Language Extensions 28 Summary 31 Suggested Projects 31 References 31
    Chapter 3: From Loops to Grids 33 — Parallelizing dist_v1 33 Parallelizing dist_v2 38 Standard Workflow 42 Simplified Workflow 43 Summary 47 Suggested Projects 48 References 48
    Chapter 4: 2D Grids and Interactive Graphics 49 — Launching 2D Computational Grids 50 Live Display via Graphics Interop 56 Application: Stability 66 Summary 76 Suggested Projects 76 References 77
    Chapter 5: Stencils and Shared Memory 79 — Thread Interdependence 80 Computing Derivatives on a 1D Grid 81 Summary 117 Suggested Projects 118 References 119
    Chapter 6: Reduction and Atomic Functions 121 — Threads Interacting Globally 121 Implementing parallel_dot 123 Computing Integral Properties: centroid_2d 130 Summary 138 Suggested Projects 138 References 138
    Chapter 7: Interacting with 3D Data 141 — Launching 3D Computational Grids: dist_3d 144 Viewing and Interacting with 3D Data: vis_3d 146 Summary 171 Suggested Projects 171 References 171
    Chapter 8: Using CUDA Libraries 173 — Custom versus Off-the-Shelf 173 Thrust 175 cuRAND 190 NPP 193 Linear Algebra Using cuSOLVER and cuBLAS 201 cuDNN 207 ArrayFire 207 Summary 207 Suggested Projects 208 References 209
    Chapter 9: Exploring the CUDA Ecosystem 211 — The Go-To List of Primary Sources 211 Further Sources 217 Summary 218 Suggested Projects 219
    Appendix A: Hardware Setup 221 — Checking for an NVIDIA GPU: Windows 221 Checking for an NVIDIA GPU: OS X 222 Checking for an NVIDIA GPU: Linux 223 Determining Compute Capability 223 Upgrading Compute Capability 225
    Appendix B: Software Setup 229 — Windows Setup 229 OS X Setup 238 Linux Setup 240
    Appendix C: Need-to-Know C Programming 245 — Characterization of C 245 C Language Basics 246 Data Types, Declarations, and Assignments 248 Defining Functions 250 Building Apps: Create, Compile, Run, Debug 251 Arrays, Memory Allocation, and Pointers 262 Control Statements: for, if 263 Sample C Programs 267 References 277
    Appendix D: CUDA Practicalities: Timing, Profiling, Error Handling, and Debugging 279 — Execution Timing and Profiling 279 Error Handling 292 Debugging in Windows 298 Debugging in Linux 305 CUDA-MEMCHECK 308 Using Visual Studio Property Pages 309 References 312
    Index 313


    £31.82

  • Parallel Optimization: Theory, Algorithms, and Applications (Numerical Mathematics and Scientific Computation)

    Oxford University Press, USA Parallel Optimization: Theory, Algorithms, and Applications (Numerical Mathematics and Scientific Computation)

    15 in stock

    Book Synopsis
    This text provides an introduction to the methods of parallel optimization by introducing parallel computing ideas and techniques into both optimization theory and numerical algorithms for large-scale optimization problems.

    Trade Review
    "This book presents a domain that arises where two different branches of science, namely parallel computations and the theory of constrained optimization, intersect with real life problems. This domain, called parallel optimization, has been developing rapidly under the stimulus of progress in computer technology. The book focuses on parallel optimization methods for large-scale constrained optimization problems and structured linear problems. . . . [It] covers a vast portion of parallel optimization, though full coverage of this domain, as the authors admit, goes far beyond the capacity of a single monograph. This book, however, in over 500 pages brings an excellent and in-depth presentation of all the major aspects of a process which matches theory and methods of optimization with modern computers. The volume can be recommended for graduate students, faculty, and researchers in any of those fields."--Mathematical Reviews

    Table of Contents
    Foreword
    Preface
    Glossary of Symbols
    1. Introduction
    Part I: Theory
    2. Generalized Distances and Generalized Projections
    3. Proximal Minimization with D-Functions
    Part II: Algorithms
    4. Penalty Methods, Barrier Methods and Augmented Lagrangians
    5. Iterative Methods for Convex Feasibility Problems
    6. Iterative Algorithms for Linearly Constrained Optimization Problems
    7. Model Decomposition Algorithms
    8. Decompositions in Interior Point Algorithms
    Part III: Applications
    9. Matrix Estimation Problems
    10. Image Reconstruction from Projections
    11. The Inverse Problem in Radiation Therapy Treatment Planning
    12. Multicommodity Network Flow Problems
    13. Planning Under Uncertainty
    14. Decompositions for Parallel Computing
    15. Numerical Investigations


    £195.75

  • Using MPI: Portable Parallel Programming with the Message-Passing Interface

    MIT Press Ltd Using MPI: Portable Parallel Programming with the Message-Passing Interface

    Out of stock

    Book Synopsis
    The thoroughly updated edition of a guide to parallel programming with MPI, reflecting the latest specifications, with many detailed examples. This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code. The book takes an informal, tutorial approach, introducing each concept …


    £55.80

  • Multiagent Systems

    MIT Press Ltd Multiagent Systems

    Out of stock

    Book Synopsis


    £48.00

  • Principles of Concurrent and Distributed Programming

    Pearson Education Principles of Concurrent and Distributed Programming

    2 in stock

    Book Synopsis
    Mordechai (Moti) Ben-Ari is an Associate Professor in the Department of Science Teaching at the Weizmann Institute of Science in Rehovot, Israel. He is the author of texts on Ada, concurrent programming, programming languages, and mathematical logic, as well as Just a Theory: Exploring the Nature of Science. In 2004 he was honored with the ACM/SIGCSE Award for Outstanding Contribution to Computer Science Education.

    Table of Contents
    Preface xi
    1 What is Concurrent Programming? 1 — 1.1 Introduction 1 1.2 Concurrency as abstract parallelism 2 1.3 Multitasking 4 1.4 The terminology of concurrency 4 1.5 Multiple computers 5 1.6 The challenge of concurrent programming 5
    2 The Concurrent Programming Abstraction 7 — 2.1 The role of abstraction 7 2.2 Concurrent execution as interleaving of atomic statements 8 2.3 Justification of the abstraction 13 2.4 Arbitrary interleaving 17 2.5 Atomic statements 19 2.6 Correctness 21 2.7 Fairness 23 2.8 Machine-code instructions 24 2.9 Volatile and non-atomic variables 28 2.10 The BACI concurrency simulator 29 2.11 Concurrency in Ada 31 2.12 Concurrency in Java 34 2.13 Writing concurrent programs in Promela 36 2.14 Supplement: the state diagram for the frog puzzle 37
    3 The Critical Section Problem 45 — 3.1 Introduction 45 3.2 The definition of the problem 45 3.3 First attempt 48 3.4 Proving correctness with state diagrams 49 3.5 Correctness of the first attempt 53 3.6 Second attempt 55 3.7 Third attempt 57 3.8 Fourth attempt 58 3.9 Dekker’s algorithm 60 3.10 Complex atomic statements 61
    4 Verification of Concurrent Programs 67 — 4.1 Logical specification of correctness properties 68 4.2 Inductive proofs of invariants 69 4.3 Basic concepts of temporal logic 72 4.4 Advanced concepts of temporal logic 75 4.5 A deductive proof of Dekker’s algorithm 79 4.6 Model checking 83 4.7 Spin and the Promela modeling language 83 4.8 Correctness specifications in Spin 86 4.9 Choosing a verification technique 88
    5 Advanced Algorithms for the Critical Section Problem 93 — 5.1 The bakery algorithm 93 5.2 The bakery algorithm for N processes 95 5.3 Less restrictive models of concurrency 96 5.4 Fast algorithms 97 5.5 Implementations in Promela 104


    £71.99

  • Principles of Parallel Programming

    Pearson Education (US) Principles of Parallel Programming

    Out of stock

    Book Synopsis
    Lawrence Snyder is Professor of Computer Science and Engineering at the University of Washington in Seattle. He received his PhD from Carnegie Mellon University and has devoted most of his career to parallel computation research, including architecture, algorithms and languages. With Calvin Lin and UW graduate students, he developed the ZPL parallel programming language. He is a fellow of the ACM and IEEE. He is an ardent traveler, enthusiastic theater-goer and occasional skier. Calvin Lin is an Associate Professor of Computer Sciences at The University of Texas at Austin, where he also serves as Director of the Turing Scholars Honors Program of undergraduate CS majors. He received his PhD from the University of Washington under the supervision of Lawrence Snyder. His current research interests include compilers and micro-architecture. In his spare time, he is an avid ultimate Frisbee player and coach of UT's Men's Ultimate Frisbee team.

    Trade Review
    "...the first basic book on the subject that I've ever seen that seems to have the pulse on the true issues of parallelism that are relevant for students." - Alan Edelman, MIT
    "Principles of Parallel Programming is a wonderful book and I plan to use it in our new parallel programming course..." - Peiyi Tang, University of Arkansas, Little Rock
    "I like [Principles of Parallel Programming] very much for a few specific reasons: it's concise, covers the most relevant topics but does not take thousand pages to do it, it is hands on and it covers...recent developments with multi-core and GPGPU." - Edin Hodzic, Santa Clara University

    Table of Contents
    Chapter 1 Introduction: Parallelism = Opportunities + Challenges — The Power and Potential of Parallelism; Examining Sequential and Parallel Programs; A Paradigm Shift; Parallelism Using Multiple Instruction Streams; The Goals: Scalable Performance and Portability; Summary; Historical Context; Exercises
    Chapter 2 Parallel Computers And Their Model — Balancing Machine Specifics with Portability; A Look at Five Parallel Computers; The RAM: An Abstraction of a Sequential Computer; The PRAM: A Parallel Computer Model; The CTA: A Practical Parallel Computer Model; Memory Reference Mechanisms; A Closer Look at Communication; Applying the CTA Model; Summary; Historical Perspective; Exercises
    Chapter 3 Reasoning about Performance — Introduction; Motivation and Some Basic Concepts; Sources of Performance Loss; Parallel Structure; Reasoning about Performance; Performance Trade-Offs; Measuring Performance; What should we measure?; Summary; Historical Perspective; Exercises
    Chapter 4 First Steps Towards Parallel Programming — Task and Data Parallelism; Peril-L; Count 3s Example; Conceptualizing Parallelism; Alphabetizing Example; Comparison of Three Solutions; Summary; Historical Perspective; Exercises
    Chapter 5 Scalable Algorithmic Techniques — The Inevitability of Trees; Blocks of Independent Computation; Schwartz’ Algorithm; Assigning Work To Processes Statically; Assigning Work to Processes Dynamically; The Reduce & Scan Abstractions; Trees; Summary; Historical Context; Exercises
    Chapter 6 Programming with Threads — POSIX Threads; Thread Creation and Destruction; Mutual Exclusion; Synchronization; Safety Issues; Performance Issues; OpenMP; The Count 3s Example; Semantic Limitations on Reduction; Thread Behavior and Interaction; Sections; Summary of OpenMP; Java Threads; Summary; Historical Perspectives; Exercises
    Chapter 7 Local View Programming Languages — MPI: The Message Passing Interface; Getting Started; Safety Issues; Performance Issues; Co-Array Fortran; Unified Parallel C; Titanium; Summary; Exercises
    Chapter 8 Global View Programming Languages — The Z-level Programming Language; Basic Concepts of ZPL; Life, An Example; Design Principles; Manipulating Arrays Of Different Ranks; Reordering Data With Remap; Parallel Execution of ZPL; Performance Model; Summary; NESL; Historical Context; Exercises
    Chapter 9 Assessing Our Knowledge — Introduction; Evaluating Existing Approaches; Lessons for the Future; Summary; Historical Perspectives; Exercises
    Chapter 10 Future Directions in Parallel Programming — Attached Processors; Grid Computing; Transactional Memory; Summary; Exercises
    Chapter 11 Capstone Project: Designing a Parallel Program — Introduction; Motivation; Getting Started; Summary; Historical Perspective; Exercises
    Appendix 1 More Advanced Concepts


    £150.11

  • High-Performance Parallel Database Processing and Grid Databases

    John Wiley & Sons Inc High-Performance Parallel Database Processing and Grid Databases

    Out of stock

    Book Synopsis
    The latest techniques and principles of parallel and grid database processing. The growth in grid databases, coupled with the utility of parallel query processing, presents an important opportunity to understand and utilize high-performance parallel database processing within a major database management system (DBMS). This important new book provides readers with a fundamental understanding of parallelism in data-intensive applications, and demonstrates how to develop faster capabilities to support them. It presents a balanced treatment of the theoretical and practical aspects of high-performance databases to demonstrate how parallel query is executed in a DBMS, including concepts, algorithms, analytical models, and grid transactions. High-Performance Parallel Database Processing and Grid Databases serves as a valuable resource for researchers working in parallel databases and for practitioners interested in building a high-performance database. It is also a much…

    Table of Contents
    Preface xv
    Part I Introduction
    1. Introduction 3 — 1.1. A Brief Overview: Parallel Databases and Grid Databases 4 1.2. Parallel Query Processing: Motivations 5 1.3. Parallel Query Processing: Objectives 7 1.3.1. Speed Up 7 1.3.2. Scale Up 8 1.3.3. Parallel Obstacles 10 1.4. Forms of Parallelism 12 1.4.1. Interquery Parallelism 13 1.4.2. Intraquery Parallelism 14 1.4.3. Intraoperation Parallelism 15 1.4.4. Interoperation Parallelism 15 1.4.5. Mixed Parallelism—A More Practical Solution 18 1.5. Parallel Database Architectures 19 1.5.1. Shared-Memory and Shared-Disk Architectures 20 1.5.2. Shared-Nothing Architecture 22 1.5.3. Shared-Something Architecture 23 1.5.4. Interconnection Networks 24 1.6. Grid Database Architecture 26 1.7. Structure of this Book 29 1.8. Summary 30 1.9. Bibliographical Notes 30 1.10. Exercises 31
    2. Analytical Models 33 — 2.1. Cost Models 33 2.2. Cost Notations 34 2.2.1. Data Parameters 34 2.2.2. Systems Parameters 36 2.2.3. Query Parameters 37 2.2.4. Time Unit Costs 37 2.2.5. Communication Costs 38 2.3. Skew Model 39 2.4. Basic Operations in Parallel Databases 43 2.4.1. Disk Operations 44 2.4.2. Main Memory Operations 45 2.4.3. Data Computation and Data Distribution 45 2.5. Summary 47 2.6. Bibliographical Notes 47 2.7. Exercises 47
    Part II Basic Query Parallelism
    3. Parallel Search 51 — 3.1. Search Queries 51 3.1.1. Exact-Match Search 52 3.1.2. Range Search Query 53 3.1.3. Multiattribute Search Query 54 3.2. Data Partitioning 54 3.2.1. Basic Data Partitioning 55 3.2.2. Complex Data Partitioning 60 3.3. Search Algorithms 69 3.3.1. Serial Search Algorithms 69 3.3.2. Parallel Search Algorithms 73 3.4. Summary 74 3.5. Bibliographical Notes 75 3.6. Exercises 75
    4. Parallel Sort and GroupBy 77 — 4.1. Sorting, Duplicate Removal, and Aggregate Queries 78 4.1.1. Sorting and Duplicate Removal 78 4.1.2. Scalar Aggregate 79 4.1.3. GroupBy 80 4.2. Serial External Sorting Method 80 4.3. Algorithms for Parallel External Sort 83 4.3.1. Parallel Merge-All Sort 83 4.3.2. Parallel Binary-Merge Sort 85 4.3.3. Parallel Redistribution Binary-Merge Sort 86 4.3.4. Parallel Redistribution Merge-All Sort 88 4.3.5. Parallel Partitioned Sort 90 4.4. Parallel Algorithms for GroupBy Queries 92 4.4.1. Traditional Methods (Merge-All and Hierarchical Merging) 92 4.4.2. Two-Phase Method 93 4.4.3. Redistribution Method 94 4.5. Cost Models for Parallel Sort 96 4.5.1. Cost Models for Serial External Merge-Sort 96 4.5.2. Cost Models for Parallel Merge-All Sort 98 4.5.3. Cost Models for Parallel Binary-Merge Sort 100 4.5.4. Cost Models for Parallel Redistribution Binary-Merge Sort 101 4.5.5. Cost Models for Parallel Redistribution Merge-All Sort 102 4.5.6. Cost Models for Parallel Partitioned Sort 103 4.6. Cost Models for Parallel GroupBy 104 4.6.1. Cost Models for Parallel Two-Phase Method 104 4.6.2. Cost Models for Parallel Redistribution Method 107 4.7. Summary 109 4.8. Bibliographical Notes 110 4.9. Exercises 110
    5. Parallel Join 112 — 5.1. Join Operations 112 5.2. Serial Join Algorithms 114 5.2.1. Nested-Loop Join Algorithm 114 5.2.2. Sort-Merge Join Algorithm 116 5.2.3. Hash-Based Join Algorithm 117 5.2.4. Comparison 120 5.3. Parallel Join Algorithms 120 5.3.1. Divide and Broadcast-Based Parallel Join Algorithms 121 5.3.2. Disjoint Partitioning-Based Parallel Join Algorithms 124 5.4. Cost Models 128 5.4.1. Cost Models for Divide and Broadcast 128 5.4.2. Cost Models for Disjoint Partitioning 129 5.4.3. Cost Models for Local Join 130 5.5. Parallel Join Optimization 132 5.5.1. Optimizing Main Memory 132 5.5.2. Load Balancing 133 5.6. Summary 134 5.7. Bibliographical Notes 135 5.8. Exercises 136
    Part III Advanced Parallel Query Processing
    6. Parallel GroupBy-Join 141 — 6.1. Groupby-Join Queries 141 6.1.1. Groupby Before Join 142 6.1.2. Groupby After Join 142 6.2. Parallel Algorithms for Groupby-Before-Join Query Processing 143 6.2.1. Early Distribution Scheme 143 6.2.2. Early GroupBy with Partitioning Scheme 145 6.2.3. Early GroupBy with Replication Scheme 146 6.3. Parallel Algorithms for Groupby-After-Join Query Processing 148 6.3.1. Join Partitioning Scheme 148 6.3.2. GroupBy Partitioning Scheme 150 6.4. Cost Model Notations 151 6.5. Cost Model for Groupby-Before-Join Query Processing 153 6.5.1. Cost Models for the Early Distribution Scheme 153 6.5.2. Cost Models for the Early GroupBy with Partitioning Scheme 156 6.5.3. Cost Models for the Early GroupBy with Replication Scheme 158 6.6. Cost Model for “Groupby-After-Join” Query Processing 159 6.6.1. Cost Models for the Join Partitioning Scheme 159 6.6.2. Cost Models for the GroupBy Partitioning Scheme 161 6.7. Summary 163 6.8. Bibliographical Notes 164 6.9. Exercises 164
    7. Parallel Indexing 167 — 7.1. Parallel Indexing–an Internal Perspective on Parallel Indexing Structures 168 7.2. Parallel Indexing Structures 169 7.2.1. Nonreplicated Indexing (NRI) Structures 169 7.2.2. Partially Replicated Indexing (PRI) Structures 171 7.2.3. Fully Replicated Indexing (FRI) Structures 178 7.3. Index Maintenance 180 7.3.1. Maintaining a Parallel Nonreplicated Index 182 7.3.2. Maintaining a Parallel Partially Replicated Index 182 7.3.3. Maintaining a Parallel Fully Replicated Index 188 7.3.4. Complexity Degree of Index Maintenance 188 7.4. Index Storage Analysis 188 7.4.1. Storage Cost Models for Uniprocessors 189 7.4.2. Storage Cost Models for Parallel Processors 191 7.5. Parallel Processing of Search Queries using Index 192 7.5.1. Parallel One-Index Search Query Processing 192 7.5.2. Parallel Multi-Index Search Query Processing 195 7.6. Parallel Index Join Algorithms 200 7.6.1. Parallel One-Index Join 200 7.6.2. Parallel Two-Index Join 203 7.7. Comparative Analysis 207 7.7.1. Comparative Analysis of Parallel Search Index 207 7.7.2. Comparative Analysis of Parallel Index Join 213 7.8. Summary 216 7.9. Bibliographical Notes 217 7.10. Exercises 217
    8. Parallel Universal Qualification—Collection Join Queries 219 — 8.1. Universal Quantification and Collection Join 220 8.2. Collection Types and Collection Join Queries 222 8.2.1. Collection-Equi Join Queries 222 8.2.2. Collection–Intersect Join Queries 223 8.2.3. Subcollection Join Queries 224 8.3. Parallel Algorithms for Collection Join Queries 225 8.4. Parallel Collection-Equi Join Algorithms 225 8.4.1. Disjoint Data Partitioning 226 8.4.2. Parallel Double Sort-Merge Collection-Equi Join Algorithm 227 8.4.3. Parallel Sort-Hash Collection-Equi Join Algorithm 228 8.4.4. Parallel Hash Collection-Equi Join Algorithm 232 8.5. Parallel Collection-Intersect Join Algorithms 233 8.5.1. Non-Disjoint Data Partitioning 234 8.5.2. Parallel Sort-Merge Nested-Loop Collection-Intersect Join Algorithm 244 8.5.3. Parallel Sort-Hash Collection-Intersect Join Algorithm 245 8.5.4. Parallel Hash Collection-Intersect Join Algorithm 246 8.6. Parallel Subcollection Join Algorithms 246 8.6.1. Data Partitioning 247 8.6.2.
Parallel Sort-Merge Nested-Loop Subcollection Join Algorithm 248 8.6.3. Parallel Sort-Hash Subcollection Join Algorithm 249 8.6.4. Parallel Hash Subcollection Join Algorithm 251 8.7. Summary 252 8.8. Bibliographical Notes 252 8.9. Exercises 254 9. Parallel Query Scheduling and Optimization 256 9.1. Query Execution Plan 257 9.2. Subqueries Execution Scheduling Strategies 259 9.2.1. Serial Execution Among Subqueries 259 9.2.2. Parallel Execution Among Subqueries 261 9.3. Serial vs. Parallel Execution Scheduling 264 9.3.1. Nonskewed Subqueries 264 9.3.2. Skewed Subqueries 265 9.3.3. Skewed and Nonskewed Subqueries 267 9.4. Scheduling Rules 269 9.5. Cluster Query Processing Model 270 9.5.1. Overview of Dynamic Query Processing 271 9.5.2. A Cluster Query Processing Architecture 272 9.5.3. Load Information Exchange 273 9.6. Dynamic Cluster Query Optimization 275 9.6.1. Correction 276 9.6.2. Migration 280 9.6.3. Partition 281 9.7. Other Approaches to Dynamic Query Optimization 284 9.8. Summary 285 9.9. Bibliographical Notes 286 9.10. Exercises 286 Part IV Grid Databases 10. Transactions in Distributed and Grid Databases 291 10.1. Grid Database Challenges 292 10.2. Distributed Database Systems and Multidatabase Systems 293 10.2.1. Distributed Database Systems 293 10.2.2. Multidatabase Systems 297 10.3. Basic Definitions on Transaction Management 299 10.4. Acid Properties of Transactions 301 10.5. Transaction Management in Various Database Systems 303 10.5.1. Transaction Management in Centralized and Homogeneous Distributed Database Systems 303 10.5.2. Transaction Management in Heterogeneous Distributed Database Systems 305 10.6. Requirements in Grid Database Systems 307 10.7. Concurrency Control Protocols 309 10.8. Atomic Commit Protocols 310 10.8.1. Homogeneous Distributed Database Systems 310 10.8.2. Heterogeneous Distributed Database Systems 313 10.9. Replica Synchronization Protocols 314 10.9.1. Network Partitioning 315 10.9.2. 
Replica Synchronization Protocols 316 10.10. Summary 318 10.11. Bibliographical Notes 318 10.12. Exercises 319 11. Grid Concurrency Control 321 11.1. A Grid Database Environment 321 11.2. An Example 322 11.3. Grid Concurrency Control 324 11.3.1. Basic Functions Required by GCC 324 11.3.2. Grid Serializability Theorem 325 11.3.3. Grid Concurrency Control Protocol 329 11.3.4. Revisiting the Earlier Example 333 11.3.5. Comparison with Traditional Concurrency Control Protocols 334 11.4. Correctness of GCC Protocol 336 11.5. Features of GCC Protocol 338 11.6. Summary 339 11.7. Bibliographical Notes 339 11.8. Exercises 339 12. Grid Transaction Atomicity and Durability 341 12.1. Motivation 342 12.2. Grid Atomic Commit Protocol (Grid-ACP) 343 12.2.1. State Diagram of Grid-ACP 343 12.2.2. Grid-ACP Algorithm 344 12.2.3. Early-Abort Grid-ACP 346 12.2.4. Discussion 348 12.2.5. Message and Time Complexity Comparison Analysis 349 12.2.6. Correctness of Grid-ACP 350 12.3. Handling Failure of Sites with Grid-ACP 351 12.3.1. Model for Storing Log Files at the Originator and Participating Sites 351 12.3.2. Logs Required at the Originator Site 352 12.3.3. Logs Required at the Participant Site 353 12.3.4. Failure Recovery Algorithm for Grid-ACP 353 12.3.5. Comparison of Recovery Protocols 359 12.3.6. Correctness of Recovery Algorithm 361 12.4. Summary 365 12.5. Bibliographical Notes 366 12.6. Exercises 366 13. Replica Management in Grids 367 13.1. Motivation 367 13.2. Replica Architecture 368 13.2.1. High-Level Replica Management Architecture 368 13.2.2. Some Problems 369 13.3. Grid Replica Access Protocol (GRAP) 371 13.3.1. Read Transaction Operation for GRAP 371 13.3.2. Write Transaction Operation for GRAP 372 13.3.3. Revisiting the Example Problem 375 13.3.4. Correctness of GRAP 377 13.4. Handling Multiple Partitioning 378 13.4.1. Contingency GRAP 378 13.4.2. Comparison of Replica Management Protocols 381 13.4.3. Correctness of Contingency GRAP 383 13.5. Summary 384 13.6. 
Bibliographical Notes 385 13.7. Exercises 385 14. Grid Atomic Commitment in Replicated Data 387 14.1. Motivation 388 14.1.1. Architectural Reasons 388 14.1.2. Motivating Example 388 14.2. Modified Grid Atomic Commitment Protocol 390 14.2.1. Modified Grid-ACP 390 14.2.2. Correctness of Modified Grid-ACP 393 14.3. Transaction Properties in Replicated Environment 395 14.4. Summary 397 14.5. Bibliographical Notes 397 14.6. Exercises 398 Part V Other Data-Intensive Applications 15. Parallel Online Analytic Processing (OLAP) and Business Intelligence 401 15.1. Parallel Multidimensional Analysis 402 15.2. Parallelization of ROLLUP Queries 405 15.2.1. Analysis of Basic Single ROLLUP Queries 405 15.2.2. Analysis of Multiple ROLLUP Queries 409 15.2.3. Analysis of Partial ROLLUP Queries 411 15.2.4. Parallelization Without Using ROLLUP 412 15.3. Parallelization of CUBE Queries 412 15.3.1. Analysis of Basic CUBE Queries 413 15.3.2. Analysis of Partial CUBE Queries 416 15.3.3. Parallelization Without Using CUBE 417 15.4. Parallelization of Top-N and Ranking Queries 418 15.5. Parallelization of Cume_Dist Queries 419 15.6. Parallelization of NTILE and Histogram Queries 420 15.7. Parallelization of Moving Average and Windowing Queries 422 15.8. Summary 424 15.9. Bibliographical Notes 424 15.10. Exercises 425 16. Parallel Data Mining—Association Rules and Sequential Patterns 427 16.1. From Databases To Data Warehousing To Data Mining: A Journey 428 16.2. Data Mining: A Brief Overview 431 16.2.1. Data Mining Tasks 431 16.2.2. Querying vs. Mining 433 16.2.3. Parallelism in Data Mining 436 16.3. Parallel Association Rules 440 16.3.1. Association Rules: Concepts 441 16.3.2. Association Rules: Processes 444 16.3.3. Association Rules: Parallel Processing 448 16.4. Parallel Sequential Patterns 450 16.4.1. Sequential Patterns: Concepts 452 16.4.2. Sequential Patterns: Processes 456 16.4.3. Sequential Patterns: Parallel Processing 459 16.5. Summary 461 16.6. Bibliographical Notes 461 16.7. 
Exercises 462 17. Parallel Clustering and Classification 464 17.1. Clustering and Classification 464 17.1.1. Clustering 464 17.1.2. Classification 465 17.2. Parallel Clustering 467 17.2.1. Clustering: Concepts 467 17.2.2. k-Means Algorithm 468 17.2.3. Parallel k-Means Clustering 471 17.3. Parallel Classification 477 17.3.1. Decision Tree Classification: Structures 477 17.3.2. Decision Tree Classification: Processes 480 17.3.3. Decision Tree Classification: Parallel Processing 488 17.4. Summary 495 17.5. Bibliographical Notes 498 17.6. Exercises 498 Permissions 501 List of Conferences and Journals 507 Bibliography 511 Index 541

    Out of stock

    £140.35

  • Parallel Algorithms

    John Wiley & Sons Inc Parallel Algorithms

    1 in stock

    Book Synopsis: Parallel Algorithms Made Easy. The complexity of today's applications coupled with the widespread use of parallel computing has made the design and analysis of parallel algorithms topics of growing interest. This volume fills a need in the field for an introductory treatment of parallel algorithms, appropriate even at the undergraduate level, where no other textbooks on the subject exist. It features a systematic approach to the latest design techniques, providing analysis and implementation details for each parallel algorithm described in the book. Introduction to Parallel Algorithms covers foundations of parallel computing; parallel algorithms for trees and graphs; parallel algorithms for sorting, searching, and merging; and numerical algorithms. This remarkable book: * Presents basic concepts in clear and simple terms * Incorporates numerous examples to enhance students' understanding * Shows how to develop parallel algorithms for all classical problems in compu…

    Trade Review: "...an introduction to parallel algorithms..." (Zentralblatt fur Mathematik, Vol. 948, No. 23)

    Table of Contents: FOUNDATIONS OF PARALLEL COMPUTING. Elements of Parallel Computing. Data Structures for Parallel Computing. Paradigms for Parallel Algorithms. Simple Algorithms. ALGORITHMS FOR GRAPH MODELS. Tree Algorithms. Graph Algorithms. NC Algorithms for Chordal Graphs. ARRAY MANIPULATION ALGORITHMS. Searching and Merging. Sorting Algorithms. NUMERICAL ALGORITHMS. Algebraic Equations and Matrices. Differentiation and Integration. Differential Equations. Answers to Selected Exercises. Index.

    £144.85

  • Parallel and Distributed Computing A Survey of

    John Wiley & Sons Inc Parallel and Distributed Computing A Survey of

    2 in stock

    Book Synopsis: Focuses on the area of parallel and distributed computing, and considers the diverse approaches. Covering a comprehensive set of models and paradigms, this book serves as both an introduction and a survey. It is suitable for students and can be used as a foundation for parallel and distributed computing courses.

    Trade Review: "A supplemental text providing a framework within which individual topics can be elaborated on in...courses...or a survey that researchers can consult before choosing a set of models and paradigms for the overlapping approaches to programming." (SciTech Book News, Vol. 25, No. 3, September 2001) "...an excellent introduction to the field of parallel computing..." (CVu - Jnl of the Association C & C++ Users, February 2002)

    Table of Contents: Architectures. Data Parallelism. Shared-Memory Programming. Message Passing. Client/Server Computing. Code Mobility. Coordination Models. Object-Oriented Models. High-Level Programming Models. Abstract Models. Final Comparison. References. Index.

    £131.35

  • Connectionism and the Mind

    John Wiley and Sons Ltd Connectionism and the Mind

    10 in stock

    Book Synopsis: Connectionism and the Mind provides a clear and balanced introduction to connectionist networks and explores theoretical and philosophical implications. Much of the discussion from the first edition has been updated, and three new chapters have been added on the relation of connectionism to recent work on dynamical systems theory, artificial life, and cognitive neuroscience. Read two of the sample chapters online: Connectionism and the Dynamical Approach to Cognition: http://www.blackwellpublishing.com/pdf/bechtel.pdf Networks, Robots, and Artificial Life: http://www.blackwellpublishing.com/pdf/bechtel2.pdf

    Trade Review: "Much more than just an update, this is a thorough and exciting re-build of the classic text. Excellent new treatments of modularity, dynamics, artificial life, and cognitive neuroscience locate connectionism at the very heart of contemporary debates. A superb combination of detail, clarity, scope, and enthusiasm." Andy Clark, University of Sussex "Connectionism and the Mind is an extraordinarily comprehensive and thoughtful review of connectionism, with particular emphasis on recent developments. This new edition will be a valuable primer to those new to the field. But there is more: Bechtel and Abrahamsen's trenchant and even-handed analysis of the conceptual issues that are addressed by connectionist models constitutes an important original theoretical contribution to cognitive science." Jeff Elman, University of California at San Diego

    Table of Contents: Preface. 1. Networks versus Symbol Systems: Two Approaches to Modeling Cognition:. A Revolution in the Making?. Forerunners of Connectionism: Pandemonium and Perceptrons. The Allure of Symbol Manipulation. The Disappearance and Re-emergence of Network Models. New Alliances and Unfinished Business. Notes. Sources and Suggested Readings. 2. Connectionist Architectures:. The Flavor of Connectionist Processing: A Simulation of Memory Retrieval.
The Design Features of a Connectionist Architecture. The Allure of the Connectionist Approach. Challenges Facing Connectionist Networks. Summary. Notes. Sources and Suggested Readings. 3. Learning:. Traditional and Contemporary Approaches to Learning. Connectionist Models of Learning. Some Issues Regarding Learning. Notes. Sources and Suggested Readings. 4. Pattern Recognition and Cognition:. Networks as Pattern Recognition Devices. Extending Pattern Recognition to Higher Cognition. Logical Inference as Pattern Recognition. Beyond Pattern Recognition. Notes. Sources and Suggested Readings. 5. Are Rules Required to Process Representations?:. Is Language Use Governed by Rules?. Rumelhart and McClelland's Model of Past-Tense Acquisition. Pinker and Prince's Arguments for Rules. Accounting for the U-Shaped Learning Function. Conclusion. Notes. Sources and Suggested Readings. 6. Are Syntactically Structured Representations Needed?:. Fodor and Pylyshyn's Critique: The Need for Symbolic Representations with Constituent Structure. First Connectionist Response: Explicitly Implementing Rules and Representations. Second Connectionist Response: Implementing Functionally Compositional Representations. Third Connectionist Response: Employing Procedural Knowledge with External Symbols. Using External Symbols to Provide Exact Symbol Processing. Clarifying the Standard: Systematicity and Degree of Generalizability. Conclusion. Notes. Sources and Suggested Readings. 7. Simulating Higher Cognition: A Modular Architecture for Processing Scripts:. Overview of Scripts. Overview of Miikkulainen's DISCERN System. Modular Connectionist Architectures. FGREP: An Architecture that Allows the System to Devise Its Own Representations. A Self-organizing Lexicon using Kohonen Feature Maps. Encoding and Decoding Stories as Scripts. A Connectionist Episodic Memory. Performance: Paraphrasing Stories and Answering Questions. Evaluating DISCERN. Paths Beyond the First Decade of Connectionism. Notes. 
Sources and Suggested Readings. 8. Connectionism and the Dynamical Approach to Cognition:. Are We on the Road to a Dynamical Revolution?. Basic Concepts of DST: The Geometry of Change. Using Dynamical Systems Tools to Analyze Networks. Putting Chaos to Work in Networks. Is Dynamicism a Competitor to Connectionism?. Is Dynamicism Complementary to Connectionism?. Conclusion. Notes. Sources and Suggested Readings. 9. Networks, Robots, and Artificial Life:. Robots and the Genetic Algorithm. Cellular Automata and the Synthetic Strategy. Evolution and Learning in Food-seekers. Evolution and Development in Khepera. The Computational Neuroethology of Robots. When Philosophers Encounter Robots. Conclusion. Sources and Suggested Readings. 10. Connectionism and the Brain:. Connectionism Meets Cognitive Neuroscience. Four Connectionist Models of Brain Processes. The Neural Implausibility of Many Connectionist Models. Whither Connectionism?. Notes. Sources and Suggested Readings. Appendix A: Notation. Appendix B: Glossary. Bibliography. Name Index. Subject Index.

    £152.75

  • Scientific Parallel Computing

    Princeton University Press Scientific Parallel Computing

    Out of stock

    Book Synopsis: From mining genomes to the World Wide Web, from modeling financial markets to global weather patterns, parallel computing enables computations that would otherwise be impractical if not impossible with sequential approaches alone. This book covers the fundamentals of parallel computing.

    Trade Review: "The text as a whole offers a good blend of theoretical and practical expertise with discussion of both hardware and software issues of parallel computing. This range of topics is the strength of the text, and not something found in other texts."--John Stone, Times Higher Education Supplement "L. Ridgway Scott, Terry Clark, and Babak Bagheri have prepared a thorough treatment of the foundational and advanced principles of parallel computing... [T]his book provides an excellent background for understanding grids and parallel algorithms in general."--Choice

    Table of Contents: Preface ix Notation xiii Chapter 1. Introduction 1 1.1 Overview 1 1.2 What is parallel computing? 3 1.3 Performance 4 1.4 Why parallel? 11 1.5 Two simple examples 15 1.6 Mesh-based applications 24 1.7 Parallel perspectives 30 1.8 Exercises 33 Chapter 2. Parallel Performance 37 2.1 Summation example 37 2.2 Performance measures 38 2.3 Limits to performance 44 2.4 Scalability 48 2.5 Parallel performance analysis 56 2.6 Parallel payoff 59 2.7 Real world parallelism 64 2.8 Starting SPMD programming 66 2.9 Exercises 66 Chapter 3. Computer Architecture 71 3.1 PMS notation 71 3.2 Shared memory multiprocessor 75 3.3 Distributed memory multicomputer 79 3.4 Pipeline and vector processors 87 3.5 Comparison of parallel architectures 89 3.6 Taxonomies 92 3.7 Current trends 94 3.8 Exercises 95 Chapter 4. Dependences 99 4.1 Data dependences 100 4.2 Loop-carried data dependences 103 4.3 Dependence examples 110 4.4 Testing for loop-carried dependences 112 4.5 Loop transformations 114 4.6 Dependence examples continued 120 4.7 Exercises 123 Chapter 5.
Parallel Languages 127 5.1 Critical factors 129 5.2 Command and control 134 5.3 Memory models 136 5.4 Shared memory programming 139 5.5 Message passing 143 5.6 Examples and comments 148 5.7 Parallel language developments 153 5.8 Exercises 154 Chapter 6. Collective Operations 157 6.1 The @ notation 157 6.2 Tree/ring algorithms 158 6.3 Reduction operations 162 6.4 Reduction operation applications 164 6.5 Parallel prefix algorithms 168 6.6 Performance of reduction operations 169 6.7 Data movement operations 173 6.8 Exercises 174 Chapter 7. Current Programming Standards 177 7.1 Introduction to MPI 177 7.2 Collective operations in MPI 181 7.3 Introduction to POSIX threads 184 7.4 Exercises 187 Chapter 8. The Planguage Model 191 8.1 Planguage details 192 8.2 Ranges and arrays 198 8.3 Reduction operations in Pfortran 200 8.4 Introduction to PC 204 8.5 Reduction operations in PC 206 8.6 Planguages versus message passing 207 8.7 Exercises 208 Chapter 9. High Performance Fortran 213 9.1 HPF data distribution directives 214 9.2 Other mechanisms for expressing concurrency 219 9.3 Compiling HPF 220 9.4 HPF comparisons and review 221 9.5 Exercises 222 Chapter 10. Loop Tiling 227 10.1 Loop tiling 227 10.2 Work vs. data decomposition 228 10.3 Tiling in OpenMP 228 10.4 Teams 232 10.5 Parallel regions 233 10.6 Exercises 234 Chapter 11. Matrix Eigen Analysis 237 11.1 The Leslie matrix model 237 11.2 The power method 242 11.3 A parallel Leslie matrix program 244 11.4 Matrix-vector product 249 11.5 Power method applications 251 11.6 Exercises 253 Chapter 12. Linear Systems 257 12.1 Gaussian elimination 257 12.2 Solving triangular systems in parallel 262 12.3 Divide-and-conquer algorithms 271 12.4 Exercises 277 12.5 Projects 281 Chapter 13.
Particle Dynamics 283 13.1 Model assumptions 284 13.2 Using Newton's third law 285 13.3 Further code complications 288 13.4 Pair list generation 290 13.5 Force calculation with a pair list 296 13.6 Performance of replication algorithm 299 13.7 Case study: particle dynamics in HPF 302 13.8 Exercises 307 13.9 Projects 310 Chapter 14. Mesh Methods 315 14.1 Boundary value problems 315 14.2 Iterative methods 319 14.3 Multigrid methods 322 14.4 Multidimensional problems 327 14.5 Initial value problems 328 14.6 Exercises 333 14.7 Projects 334 Chapter 15. Sorting 335 15.1 Introduction 335 15.2 Parallel sorting 337 15.3 Spatial sorting 342 15.4 Exercises 353 15.5 Projects 355 Bibliography 357 Index 369

    £76.00

  • Parallel Natural Language Processing

    Intellect Books Parallel Natural Language Processing

    Out of stock

    Book Synopsis: This volume offers 13 contributions by scientists from the fields of computer science, artificial intelligence, and computational linguistics. The chapters provide an extensive introduction to the field, as well as articles that deal with both coarse-grained and fine-grained approaches.

    £25.17

  • Foundations of Scalable Systems

    O'Reilly Media Foundations of Scalable Systems

    7 in stock

    Book Synopsis: This practical book covers design approaches and technologies that make it possible to scale an application quickly and cost-effectively. Author Ian Gorton takes software architects and developers through the principles of foundational distributed systems.

    £39.74

  • Edsger Wybe Dijkstra

    Association of Computing Machinery,U.S. Edsger Wybe Dijkstra

    15 in stock

    Book Synopsis: Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. In this book, 31 computer scientists present and discuss Dijkstra’s numerous contributions to computing science and assess their impact.

    £69.30

  • Edsger Wybe Dijkstra

    Association of Computing Machinery,U.S. Edsger Wybe Dijkstra

    15 in stock

    Book Synopsis: Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. In this book, 31 computer scientists present and discuss Dijkstra’s numerous contributions to computing science and assess their impact.

    £99.90

  • Toward Teraflop Computing & New Grand Challenge

    Nova Science Publishers Inc Toward Teraflop Computing & New Grand Challenge

    Out of stock

    Book Synopsis: Toward Teraflop Computing & New Grand Challenge Applications. Proceedings of the Mardi Gras '94 Conference, February 10-12, 1994, Louisiana State University.

    £127.99

  • Constructive Methods for Parallel Programming

    Nova Science Publishers Inc Constructive Methods for Parallel Programming

    1 in stock

    Book Synopsis: Constructive Methods for Parallel Programming

    £85.59

  • Quality of Parallel & Distributed Programs &

    Nova Science Publishers Inc Quality of Parallel & Distributed Programs &

    Out of stock

    Book Synopsis: The field of parallel computing dates back to the mid-fifties, when research labs started the development of so-called supercomputers with the aim to significantly increase performance, mainly the number of (floating-point) operations a machine is able to perform per unit of time. Since then, significant advances in hardware and software technology have brought the field to a point where the long-time challenge of teraflop computing was reached in 1998. While increases in performance are still a driving factor in parallel and distributed processing, there are many other challenges to be addressed in the field. Enabled by the growth of the Internet, the majority of desktop computers nowadays can be seen as part of a huge distributed system, the World Wide Web. Advances in wireless networks extend the scope to a variety of mobile devices (including notebooks, PDAs, and mobile phones). Information is therefore distributed by nature; users require immediate access to information sources, to computing power, and to communication facilities. While performance in the sense defined above is still an important criterion in such systems, other issues, including correctness, reliability, security, ease of use, ubiquitous access, intelligent services, etc., must already be considered in the development process itself. This extended notion of performance covering all these aspects is called "quality of parallel and distributed programs and systems". In order to examine and guarantee the quality of parallel and distributed programs and systems, special models, metrics, and tools are necessary. The six papers selected for this volume tackle various aspects of these problems.

    £55.99

  • Advanced Parallel & Distributed Computing:

    Nova Science Publishers Inc Advanced Parallel & Distributed Computing:

    1 in stock

    £129.74

  • Performance Modelling Techniques for Parallel

    Nova Science Publishers Inc Performance Modelling Techniques for Parallel

    1 in stock

    £36.74

  • Grokking Concurrency

    Manning Publications Grokking Concurrency

    10 in stock

    Book Synopsis: This easy-to-read, hands-on guide demystifies concurrency concepts like threading, asynchronous programming, and parallel processing in any language. For readers who know the basics of programming. Grokking Concurrency is the ultimate guide to effective concurrency practices that will help you leverage multiple cores, excel with high loads, handle terabytes of data, and continue working after hardware and software failures. The core concepts in this guide will remain eternally relevant, whether you are building web apps, IoT systems, or handling big data. Specifically, you will: Get up to speed with the core concepts of concurrency, asynchrony, and parallel programming; Learn the strengths and weaknesses of different hardware architectures; Improve the sequential performance characteristics of your software; Solve common problems for concurrent programming; Compose patterns into a series of practices for writing scalable systems; Write and implement concurrency systems that scale to any size. Grokking Concurrency demystifies writing high-performance concurrent code through clear explanations of core concepts, interesting illustrations, insightful examples, and detailed techniques you can apply to your own projects. About the technology: Microservices, big data, real-time systems, and other performance-intensive applications can all slow your systems to a crawl. You know the solution is “concurrency.” Now what? How do you choose among concurrency approaches? How can you be sure you will actually reduce latency and complete your jobs faster? This entertaining, fully illustrated guide answers all of your concurrency questions so you can start taking full advantage of modern multicore processors.

    Trade Review: "Don't be afraid about concurrency, learn from Grokking Concurrency!" -- Eddu Melendez "This book is a model of clarity. It clearly puts back not-so-well-known concepts in context." -- Luc Rogge "The Manning Grokking series has a well deserved good reputation and this book will not let the series down." -- Patrick Regan

    £38.99

  • Kubernetes in Production Best Practices: Build

    Packt Publishing Limited Kubernetes in Production Best Practices: Build

    1 in stock

    Book Synopsis: Design, build, and operate scalable and reliable Kubernetes infrastructure for production.

    Key Features: Implement industry best practices to build and manage production-grade Kubernetes infrastructure; Learn how to architect scalable Kubernetes clusters, harden container security, and fine-tune resource management; Understand, manage, and operate complex business workloads confidently.

    Book Description: Although out-of-the-box solutions can help you to get a cluster up and running quickly, running a Kubernetes cluster that is optimized for production workloads is a challenge, especially for users with basic or intermediate knowledge. With detailed coverage of cloud industry standards and best practices for achieving scalability, availability, operational excellence, and cost optimization, this Kubernetes book is a blueprint for managing applications and services in production. You'll discover the most common way to deploy and operate Kubernetes clusters, which is to use a public cloud-managed service from AWS, Azure, or Google Cloud Platform (GCP). This book explores Amazon Elastic Kubernetes Service (Amazon EKS), the AWS-managed version of Kubernetes, for working through practical exercises. As you get to grips with implementation details specific to AWS and EKS, you'll understand the design concepts, implementation best practices, and configuration applicable to other cloud-managed services. Throughout the book, you'll also discover standard and cloud-agnostic tools, such as Terraform and Ansible, for provisioning and configuring infrastructure. By the end of this book, you'll be able to leverage Kubernetes to operate and manage your production environments confidently.

    What you will learn: Explore different infrastructure architectures for Kubernetes deployment; Implement optimal open source and commercial storage management solutions; Apply best practices for provisioning and configuring Kubernetes clusters, including infrastructure as code (IaC) and configuration as code (CaC); Configure the cluster networking plugin and core networking components to get the best out of them; Secure your Kubernetes environment using the latest tools and best practices; Deploy core observability stacks, such as monitoring and logging, to fine-tune your infrastructure.

    Who this book is for: This book is for cloud infrastructure experts, DevOps engineers, site reliability engineers, and engineering managers looking to design and operate Kubernetes infrastructure for production. Basic knowledge of Kubernetes, Terraform, Ansible, Linux, and AWS is needed to get the most out of this book.

    Table of Contents: Introduction to Kubernetes Infrastructure and Production-Readiness; Architecting Production-Grade Kubernetes Infrastructure; Provisioning Kubernetes Clusters Using AWS and Terraform; Managing Cluster Configuration with Ansible; Configuring and Enhancing Kubernetes Networking Services; Securing Kubernetes Effectively; Managing Storage and Stateful Applications; Deploying Seamless and Reliable Applications; Monitoring, Logging, and Observability; Operating and Maintaining Efficient Kubernetes Clusters

    1 in stock

    £27.99

  • Measuring Organisational Efficiency

    College Publications Measuring Organisational Efficiency

    15 in stock

    £13.50

  • New Age International (UK) Ltd Parallel Computing

    5 in stock

    Book Synopsis

    £28.50

  • Parallel Processing and Applied Mathematics: 14th

    Springer International Publishing AG Parallel Processing and Applied Mathematics: 14th

    1 in stock

    Book Synopsis
    This two-volume set, LNCS 13826 and LNCS 13827, constitutes the proceedings of the 14th International Conference on Parallel Processing and Applied Mathematics, PPAM 2022, held in Gdansk, Poland, in September 2022. The 77 regular papers presented in these volumes were selected from 132 submissions. For regular tracks of the conference, 33 papers were selected from 62 submissions.

    The papers were organized in topical sections named as follows:

    Part I: numerical algorithms and parallel scientific computing; parallel non-numerical algorithms; GPU computing; performance analysis and prediction in HPC systems; scheduling for parallel computing; environments and frameworks for parallel/cloud computing; applications of parallel and distributed computing; soft computing with applications; and special session on parallel EVD/SVD and its application in matrix computations.

    Part II: 9th Workshop on Language-Based Parallel Programming (WLPP 2022); 6th Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in New HPC Systems (MAMHYP 2022); First Workshop on Quantum Computing and Communication; First Workshop on Applications of Machine Learning and Artificial Intelligence in High Performance Computing (WAML 2022); 4th Workshop on Applied High Performance Numerical Algorithms for PDEs; 5th Minisymposium on HPC Applications in Physical Sciences; 8th Minisymposium on High Performance Computing Interval Methods; and 7th Workshop on Complex Collective Systems.

    Table of Contents
    Numerical Algorithms and Parallel Scientific Computing
    - How accurate does Newton have to be?
    - General framework for deriving reproducible Krylov subspace algorithms: BiCGStab case
    - A generalized parallel prefix sums algorithm for arbitrary size array
    - Infinite-Precision Inner Product and Sparse Matrix-Vector Multiplication Using Ozaki Scheme with Dot2 on Manycore Processors
    - Advanced Stochastic Approaches for Applied Computing in Environmental Modeling

    Parallel Non-numerical Algorithms
    - Parallel Suffix Sorting for Large String Analytics
    - Parallel Extremely Randomized Decision Forests on Graphics Processors for Text Classification
    - RDBMS speculative support improvement by the use of the query hypergraph representation

    GPU Computing
    - Mixed Precision Algebraic Multigrid on GPUs
    - Compact in-memory representation of decision trees in GPU-accelerated evolutionary induction
    - Neural Nets with a Newton Conjugate Gradient Method on Multiple GPUs

    Performance Analysis and Prediction in HPC Systems
    - Exploring Techniques for the Analysis of Spontaneous Asynchronicity in MPI-Parallel Applications
    - Cost and Performance Analysis of MPI-based SaaS on the Private Cloud Infrastructure
    - Building a Fine-Grained Analytical Performance Model for Complex Scientific Simulations
    - Evaluation of machine learning techniques for predicting run times of scientific workflow jobs
    - Smart clustering of HPC applications using similar job detection methods

    Scheduling for Parallel Computing
    - Distributed Work Stealing in a Task-Based Dataflow Runtime
    - Task Scheduler for Heterogeneous Data Centres based on Deep Reinforcement Learning
    - Shisha: Online scheduling of CNN pipelines on heterogeneous architectures
    - Proactive Task Offloading for Load Balancing in Iterative Applications

    Environments and Frameworks for Parallel/Cloud Computing
    - Language Agnostic Approach for Unification of Implementation Variants for Different Computing Devices
    - High Performance Dataframes from Parallel Processing Patterns
    - Global Access to Legacy Data-Sets in Multi-Cloud Applications with Onedata

    Applications of Parallel and Distributed Computing
    - MD-Bench: A generic proxy-app toolbox for state-of-the-art molecular dynamics algorithms
    - Breaking Down the Parallel Performance of GROMACS, a High-Performance Molecular Dynamics Software
    - GPU-based Molecular Dynamics of Turbulent Liquid Flows with OpenMM
    - A novel parallel approach for modeling the dynamics of aerodynamically interacting particles in turbulent flows
    - Reliable energy measurement on heterogeneous Systems-on-Chip based environments
    - Distributed Objective Function Evaluation for Optimization of Radiation Therapy Treatment Plans

    Soft Computing with Applications
    - GPU4SNN: GPU-based Acceleration for Spiking Neural Network Simulations
    - Ant System Inspired Heuristic Optimization of UAVs Deployment for k-Coverage Problem
    - Dataset related experimental investigation of chess position evaluation using a deep neural network
    - Using AI-based edge processing in monitoring the pedestrian crossing

    Special Session on Parallel EVD/SVD and its Application in Matrix Computations
    - Automatic code selection for the dense symmetric generalized eigenvalue problem using ATMathCoreLib
    - On Relative Accuracy of the One-Sided Block-Jacobi SVD Algorithm

    £53.99

  • Parallel Processing and Applied Mathematics: 14th

    Springer International Publishing AG Parallel Processing and Applied Mathematics: 14th

    1 in stock

    Book Synopsis
    This two-volume set, LNCS 13826 and LNCS 13827, constitutes the proceedings of the 14th International Conference on Parallel Processing and Applied Mathematics, PPAM 2022, held in Gdansk, Poland, in September 2022. The 77 regular papers presented in these volumes were selected from 132 submissions. For regular tracks of the conference, 33 papers were selected from 62 submissions.

    The papers were organized in topical sections named as follows:

    Part I: numerical algorithms and parallel scientific computing; parallel non-numerical algorithms; GPU computing; performance analysis and prediction in HPC systems; scheduling for parallel computing; environments and frameworks for parallel/cloud computing; applications of parallel and distributed computing; soft computing with applications; and special session on parallel EVD/SVD and its application in matrix computations.

    Part II: 9th Workshop on Language-Based Parallel Programming (WLPP 2022); 6th Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in New HPC Systems (MAMHYP 2022); First Workshop on Quantum Computing and Communication; First Workshop on Applications of Machine Learning and Artificial Intelligence in High Performance Computing (WAML 2022); 4th Workshop on Applied High Performance Numerical Algorithms for PDEs; 5th Minisymposium on HPC Applications in Physical Sciences; 8th Minisymposium on High Performance Computing Interval Methods; and 7th Workshop on Complex Collective Systems.

    Table of Contents
    9th Workshop on Language-Based Parallel Programming (WLPP 2022)
    - Kokkos-Based Implementation of MPCD on Heterogeneous Nodes
    - Comparison of Load Balancing Schemes for Asynchronous Many-Task Runtimes
    - New Insights on the Revised Definition of the Performance Portability Metric
    - Inferential statistical analysis of performance portability
    - NPDP Benchmark Suite for Loop Tiling Effectiveness Evaluation
    - Parallel Vectorized Implementations of Compensated Summation Algorithms

    6th Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in New HPC Systems (MAMHYP 2022)
    - Malleability Techniques for HPC Systems
    - Algorithm and software overhead: a theoretical approach to performance portability
    - Benchmarking A High Performance Computing Heterogeneous Cluster
    - A Generative Adversarial Network approach for noise and artifacts reduction in MRI head and neck imaging
    - A GPU accelerated Hyperspectral 3D Convolutional Neural Network Classification at the Edge with Principal Component Analysis preprocessing
    - Parallel gEUD models for accelerated IMRT planning on modern HPC platforms

    First Workshop on Quantum Computing and Communication
    - On Quantum-Assisted LDPC Decoding Augmented with Classical Post-Processing
    - Quantum annealing to solve the unrelated parallel machine scheduling problem
    - Early experiences with a photonic quantum simulator for solving Job Shop Scheduling Problem
    - Some remarks on super-gram operators for general bipartite quantum states
    - Solving the Traveling Salesman Problem with a Hybrid Quantum-Classical Feedforward Neural Network
    - Software aided analysis of EWL based quantum games

    First Workshop on Applications of Machine Learning and Artificial Intelligence in High Performance Computing (WAML 2022)
    - Adaptation of AI-accelerated CFD simulations to the IPU platform
    - Performance Analysis of Convolution Algorithms for Deep Learning on Edge Processors
    - Machine Learning-based Online Scheduling in Distributed Computing
    - High Performance Computing Queue Time Prediction using Clustering and Regression
    - Acceptance Rates of Invertible Neural Networks on Electron Spectra from Near-Critical Laser-Plasmas: A Comparison

    4th Workshop on Applied High Performance Numerical Algorithms for PDEs
    - MATLAB implementation of hp finite elements on rectangles using hierarchical basis functions
    - Adaptive Parallel Average Schwarz Preconditioner for Crouzeix-Raviart Finite Volume Method
    - Parareal method for anisotropic diffusion denoising
    - Comparison of block preconditioners for the Stokes problem with discontinuous viscosity and friction
    - On minimization of nonlinear energies using FEM in MATLAB
    - A model for crowd evacuation dynamics: 2D numerical simulations

    5th Minisymposium on HPC Applications in Physical Sciences
    - Parallel Identification of Unique Sequences in Nuclear Structure Calculations
    - Experimental and computer study of molecular dynamics of a new pyridazine derivative
    - Description of magnetic nanomolecules by the extended multi-orbital Hubbard model: perturbative vs numerical approach
    - Structural and electronic properties of small-diameter Carbon NanoTubes: a DFT study

    8th Minisymposium on High Performance Computing Interval Methods
    - Need for Techniques Intermediate Between Interval and Probabilistic Ones
    - A Cross-Platform Benchmark for Interval Computation Libraries
    - Testing interval arithmetic libraries, including their IEEE-1788 compliance
    - A survey of interval algorithms for solving multicriteria analysis problems

    7th Workshop on Complex Collective Systems
    - Social Fragmentation Transitions in Large-Scale Parameter Sweep Simulations of Adaptive Social Networks
    - Parking search in urban street networks: Taming down the complexity of the search-time problem via a coarse-graining approach
    - A multi-agent cellular automata model of lane changing behaviour considering the aggressiveness and the autonomy
    - Comparison of the use of UWB and BLE as positioning methods in data-driven modeling of pedestrian dynamics
    - An Insight into the State-of-the-Art Vehicular Fog Computing with an Opportunistic Flavour

    £94.99

  • Solving Partial Differential Equations On

    World Scientific Publishing Co Pte Ltd Solving Partial Differential Equations On

    Out of stock

    Book Synopsis
    This is an introductory book on supercomputer applications written by a researcher who works on solving scientific and engineering application problems on parallel computers. The book is intended to quickly bring researchers and graduate students working on numerical solutions of partial differential equations with various applications into the area of parallel processing.

    The book starts from the basic concepts of parallel processing, like speedup, efficiency, and different parallel architectures, then introduces the most frequently used algorithms for solving PDEs on parallel computers, with practical examples. Finally, it discusses more advanced topics, including different scalability metrics, parallel time stepping algorithms, and the new architectures and heterogeneous computing networks that have emerged in recent years of high performance computing. Hundreds of references are also included to direct interested readers to more detailed and in-depth discussions of specific topics.

    £80.75

  • Modeling, Simulation, And Control Of Flexible

    World Scientific Publishing Co Pte Ltd Modeling, Simulation, And Control Of Flexible

    Out of stock

    Book Synopsis
    One critical barrier to the successful implementation of flexible manufacturing and related automated systems is the ever-increasing complexity of their modeling, analysis, simulation, and control. Research and development over the last three decades has provided new theory and graphical tools based on Petri nets and related concepts for the design of such systems. The purpose of this book is to introduce a set of Petri-net-based tools and methods to address a variety of problems associated with the design and implementation of flexible manufacturing systems (FMSs), with several implementation examples.

    There are three ways this book will directly benefit readers. First, it will allow engineers and managers who are responsible for the design and implementation of modern manufacturing systems to evaluate Petri nets for applications in their work. Second, it will provide sufficient breadth and depth to allow development of Petri-net-based industrial applications. Third, it will allow the basic Petri net material to be taught to industrial practitioners, students, and academic researchers much more efficiently. This will foster further research and applications of Petri nets in aiding the successful implementation of advanced manufacturing systems.

    Table of Contents
    Flexible manufacturing systems - an overview; Petri nets as an integrated tool and methodology in FMS design; fundamentals of Petri nets; modelling FMS with Petri nets; FMS performance analysis; Petri net simulation and tools; performance evaluation of pull and push paradigms in flexible automation; augmented-timed Petri nets for modelling breakdown handling; real-time Petri nets for discrete event control; comparison of real-time Petri nets and ladder logic diagrams; an object-oriented design methodology for development of FMS control software; scheduling using Petri nets; Petri nets and future research.

    £89.10

  • Parallel Algorithms

    World Scientific Publishing Co Pte Ltd Parallel Algorithms

    2 in stock

    Book Synopsis
    This book is an introduction to the field of parallel algorithms and the underpinning techniques to realize the parallelization. The emphasis is on designing algorithms within the timeless and abstracted context of a high-level programming language. The focus of the presentation is on practical applications of algorithm design using different models of parallel computation. Each model is illustrated with an adequate number of algorithms that solve problems arising in many applications in science and engineering.

    The book is largely self-contained, presuming no special knowledge of parallel computers or particular mathematics. In addition, the solutions to all exercises are included at the end of each chapter.

    The book is intended as a text in the field of the design and analysis of parallel algorithms. It includes adequate material for a course in parallel algorithms at both undergraduate and graduate levels.

    £108.00

  • Sonar Publishing Mastering Scala: Elegance in Code

    Out of stock

    Book Synopsis

    £21.84

© 2026 Book Curl
