Parallel Processing Books

21 products


  • Functional and Concurrent Programming

    Pearson Education (US) Functional and Concurrent Programming

    15 in stock

    Book Synopsis: Michel Charpentier is an associate professor with the Computer Science department at the University of New Hampshire (UNH). His interests over the years have ranged from distributed systems to formal verification and mobile sensor networks. He has been with UNH since 1999 and currently teaches courses in programming languages, concurrency, formal verification, and model-checking.

    Table of Contents: Foreword by Cay Horstmann xxiii Preface xxv Acknowledgments xxxv About the Author xxxvii Part I. Functional Programming 1 Chapter 1: Concepts of Functional Programming 3 1.1 What Is Functional Programming? 3 1.2 Functions 4 1.3 From Functions to Functional Programming Concepts 6 1.4 Summary 7 Chapter 2: Functions in Programming Languages 9 2.1 Defining Functions 9 2.2 Composing Functions 10 2.3 Functions Defined as Methods 12 2.4 Operators Defined as Methods 12 2.5 Extension Methods 13 2.6 Local Functions 14 2.7 Repeated Arguments 15 2.8 Optional Arguments 16 2.9 Named Arguments 16 2.10 Type Parameters 17 2.11 Summary 19 Chapter 3: Immutability 21 3.1 Pure and Impure Functions 21 3.2 Actions 23 3.3 Expressions Versus Statements 25 3.4 Functional Variables 26 3.5 Immutable Objects 28 3.6 Implementation of Mutable State 29 3.7 Functional Lists 31 3.8 Hybrid Designs 32 3.9 Updating Collections of Mutable/Immutable Objects 35 3.10 Summary 36 Chapter 4: Case Study: Active–Passive Sets 39 4.1 Object-Oriented Design 39 4.2 Functional Values 41 4.3 Functional Objects 43 4.4 Summary 44 Chapter 5: Pattern Matching and Algebraic Data Types 47 5.1 Functional Switch 47 5.2 Tuples 48 5.3 Options 50 5.4 Revisiting Functional Lists 51 5.5 Trees 53 5.6 Illustration: List Zipper 56 5.7 Extractors 59 5.8 Summary 60 Chapter 6: Recursive Programming 63 6.1 The Need for Recursion 63 6.2 Recursive Algorithms 65 6.3 Key Principles of Recursive Algorithms 67 6.4 Recursive Structures 69 6.5 Tail Recursion 71 6.6 Examples of Tail Recursive Functions 73 6.7 Summary 77 Chapter 7: Recursion on Lists 79 7.1 Recursive Algorithms as Equalities 79 7.2 Traversing Lists 80 7.3 Returning Lists 82 7.4 Building Lists from the Execution Stack 84 7.5 Recursion on Multiple/Nested Lists 85 7.6 Recursion on Sublists Other Than the Tail 88 7.7 Building Lists in Reverse Order 90 7.8 Illustration: Sorting 92 7.9 Building Lists Efficiently 94 7.10 Summary 96 Chapter 8: Case Study: Binary Search Trees 99 8.1 Binary Search Trees 99 8.2 Sets of Integers as Binary Search Trees 100 8.3 Implementation Without Rebalancing 102 8.4 Self-Balancing Trees 107 8.5 Summary 113 Chapter 9: Higher-Order Functions 115 9.1 Functions as Values 115 9.2 Currying 118 9.3 Function Literals 120 9.4 Functions Versus Methods 123 9.5 Single-Abstract-Method Interfaces 124 9.6 Partial Application 125 9.7 Closures 130 9.8 Inversion of Control 133 9.9 Summary 133 Chapter 10: Standard Higher-Order Functions 137 10.1 Functions with Predicate Arguments 137 10.2 map and foreach 140 10.3 flatMap 141 10.4 fold and reduce 146 10.5 iterate, tabulate, and unfold 148 10.6 sortWith, sortBy, maxBy, and minBy 149 10.7 groupBy and groupMap 150 10.8 Implementing Standard Higher-Order Functions 152 10.9 foreach, map, flatMap, and for-Comprehensions 152 10.10 Summary 155 Chapter 11: Case Study: File Systems as Trees 157 11.1 Design Overview 157 11.2 A Node-Searching Helper Function 158 11.3 String Representation 158 11.4 Building Trees 160 11.5 Querying 164 11.6 Navigation 168 11.7 Tree Zipper 169 11.8 Summary 172 Chapter 12: Lazy Evaluation 173 12.1 Delayed Evaluation of Arguments 173 12.2 By-Name Arguments 174 12.3 Control Abstraction 176 12.4 Internal Domain-Specific Languages 179 12.5 Streams as Lazily Evaluated Lists 180 12.6 Streams as Pipelines 182 12.7 Streams as Infinite Data Structures 184 12.8 Iterators 184 12.9 Lists, Streams, Iterators, and Views 187 12.10 Delayed Evaluation of Fields and Local Variables 190 12.11 Illustration: Subset-Sum 191 12.12 Summary 193 Chapter 13: Handling Failures 195 13.1 Exceptions and Special Values 195 13.2 Using Option 197 13.3 Using Try 198 13.4 Using Either 199 13.5 Higher-Order Functions and Pipelines 201 13.6 Summary 204 Chapter 14: Case Study: Trampolines 205 14.1 Tail-Call Optimization 205 14.2 Trampolines for Tail-Calls 206 14.3 Tail-Call Optimization in Java 207 14.4 Dealing with Non-Tail-Calls 209 14.5 Summary 213 A Brief Interlude 215 Chapter 15: Types (and Related Concepts) 217 15.1 Typing Strategies 217 15.2 Types as Sets 222 15.3 Types as Services 223 15.4 Abstract Data Types 224 15.5 Type Inference 225 15.6 Subtypes 229 15.7 Polymorphism 232 15.8 Type Variance 235 15.9 Type Bounds 241 15.10 Type Classes 245 15.11 Summary 250 Part II. Concurrent Programming 253 Chapter 16: Concepts of Concurrent Programming 255 16.1 Non-sequential Programs 255 16.2 Concurrent Programming Concepts 258 16.3 Summary 259 Chapter 17: Threads and Nondeterminism 261 17.1 Threads of Execution 261 17.2 Creating Threads Using Lambda Expressions 263 17.3 Nondeterministic Behavior of Multithreaded Programs 263 17.4 Thread Termination 264 17.5 Testing and Debugging Multithreaded Programs 266 17.6 Summary 268 Chapter 18: Atomicity and Locking 271 18.1 Atomicity 271 18.2 Non-atomic Operations 273 18.3 Atomic Operations and Non-atomic Composition 274 18.4 Locking 278 18.5 Intrinsic Locks 279 18.6 Choosing Locking Targets 281 18.7 Summary 283 Chapter 19: Thread-Safe Objects 285 19.1 Immutable Objects 285 19.2 Encapsulating Synchronization Policies 286 19.3 Avoiding Reference Escape 288 19.4 Public and Private Locks 289 19.5 Leveraging Immutable Types 290 19.6 Thread-Safety 293 19.7 Summary 295 Chapter 20: Case Study: Thread-Safe Queue 297 20.1 Queues as Pairs of Lists 297 20.2 Single Public Lock Implementation 298 20.3 Single Private Lock Implementation 301 20.4 Applying Lock Splitting 303 20.5 Summary 305 Chapter 21: Thread Pools 307 21.1 Fire-and-Forget Asynchronous Execution 307 21.2 Illustration: Parallel Server 309 21.3 Different Types of Thread Pools 312 21.4 Parallel Collections 314 21.5 Summary 318 Chapter 22: Synchronization 321 22.1 Illustration of the Need for Synchronization 321 22.2 Synchronizers 324 22.3 Deadlocks 325 22.4 Debugging Deadlocks with Thread Dumps 328 22.5 The Java Memory Model 330 22.6 Summary 335 Chapter 23: Common Synchronizers 337 23.1 Locks 337 23.2 Latches and Barriers 339 23.3 Semaphores 341 23.4 Conditions 343 23.5 Blocking Queues 349 23.6 Summary 353 Chapter 24: Case Study: Parallel Execution 355 24.1 Sequential Reference Implementation 355 24.2 One New Thread per Task 356 24.3 Bounded Number of Threads 357 24.4 Dedicated Thread Pool 359 24.5 Shared Thread Pool 360 24.6 Bounded Thread Pool 361 24.7 Parallel Collections 362 24.8 Asynchronous Task Submission Using Conditions 362 24.9 Two-Semaphore Implementation 367 24.10 Summary 368 Chapter 25: Futures and Promises 369 25.1 Functional Tasks 369 25.2 Futures as Synchronizers 371 25.3 Timeouts, Failures, and Cancellation 374 25.4 Future Variants 375 25.5 Promises 375 25.6 Illustration: Thread-Safe Caching 377 25.7 Summary 379 Chapter 26: Functional-Concurrent Programming 381 26.1 Correctness and Performance Issues with Blocking 381 26.2 Callbacks 384 26.3 Higher-Order Functions on Futures 385 26.4 Function flatMap on Futures 388 26.5 Illustration: Parallel Server Revisited 390 26.6 Functional-Concurrent Programming Patterns 393 26.7 Summary 397 Chapter 27: Minimizing Thread Blocking 399 27.1 Atomic Operations 399 27.2 Lock-Free Data Structures 402 27.3 Fork/Join Pools 405 27.4 Asynchronous Programming 406 27.5 Actors 407 27.6 Reactive Streams 411 27.7 Non-blocking Synchronization 412 27.8 Summary 414 Chapter 28: Case Study: Parallel Strategies 417 28.1 Problem Definition 417 28.2 Sequential Implementation with Timeout 419 28.3 Parallel Implementation Using invokeAny 420 28.4 Parallel Implementation Using CompletionService 421 28.5 Asynchronous Implementation with Scala Futures 422 28.6 Asynchronous Implementation with CompletableFuture 426 28.7 Caching Results from Strategies 427 28.8 Summary 431 Appendix A. Features of Java and Kotlin 433 A.1 Functions in Java and Kotlin 433 A.2 Immutability 436 A.3 Pattern Matching and Algebraic Data Types 437 A.4 Recursive Programming 439 A.5 Higher-Order Functions 440 A.6 Lazy Evaluation 446 A.7 Handling Failures 449 A.8 Types 451 A.9 Threads 453 A.10 Atomicity and Locking 454 A.11 Thread-Safe Objects 455 A.12 Thread Pools 457 A.13 Synchronization 459 A.14 Futures and Functional-Concurrent Programming 460 A.15 Minimizing Thread Blocking 461 Glossary 463 Index 465

    £37.79

  • OpenACC for Programmers

    Pearson Education (US) OpenACC for Programmers

    1 in stock

    Book Synopsis: Sunita Chandrasekaran is assistant professor in the Computer and Information Sciences Department at the University of Delaware. Her research interests include exploring the suitability of high-level programming models and runtime systems for HPC and embedded platforms, and migrating scientific applications to heterogeneous computing systems. Dr. Chandrasekaran was a post-doctoral fellow at the University of Houston and holds a Ph.D. from Nanyang Technological University, Singapore. She is a member of OpenACC, OpenMP, MCA and SPEC HPG. She has served on the program committees of various conferences and workshops including SC, ISC, ICPP, CCGrid, Cluster, and PACT, and has co-chaired parallel programming workshops co-located with SC, ISC, IPDPS, and SIAM. Guido Juckeland is head of the Computational Science Group, Department for Information Services and Computing, Helmholtz-Zentrum Dresden-Rossendorf, and coordinates the work of the GPU Center

    Table of Contents: Foreword xv Preface xxi Acknowledgments xxiii About the Contributors xxv Chapter 1: OpenACC in a Nutshell 1 1.1 OpenACC Syntax 3 1.2 Compute Constructs 6 1.3 The Data Environment 11 1.4 Summary 15 1.5 Exercises 15 Chapter 2: Loop-Level Parallelism 17 2.1 Kernels Versus Parallel Loops 18 2.2 Three Levels of Parallelism 21 2.3 Other Loop Constructs 24 2.4 Summary 30 2.5 Exercises 31 Chapter 3: Programming Tools for OpenACC 33 3.1 Common Characteristics of Architectures 34 3.2 Compiling OpenACC Code 35 3.3 Performance Analysis of OpenACC Applications 36 3.4 Identifying Bugs in OpenACC Programs 51 3.5 Summary 53 3.6 Exercises 54 Chapter 4: Using OpenACC for Your First Program 59 4.1 Case Study 59 4.2 Creating a Naive Parallel Version 68 4.3 Performance of OpenACC Programs 71 4.4 An Optimized Parallel Version 73 4.5 Summary 78 4.6 Exercises 79 Chapter 5: Compiling OpenACC 81 5.1 The Challenges of Parallelism 82 5.2 Restructuring Compilers 88 5.3 Compiling OpenACC 92 5.4 Summary 97 5.5 Exercises 97 Chapter 6: Best Programming Practices 101 6.1 General Guidelines 102 6.2 Maximize On-Device Compute 105 6.3 Optimize Data Locality 108 6.4 A Representative Example 112 6.5 Summary 118 6.6 Exercises 119 Chapter 7: OpenACC and Performance Portability 121 7.1 Challenges 121 7.2 Target Architectures 123 7.3 OpenACC for Performance Portability 124 7.4 Code Refactoring for Performance Portability 126 7.5 Summary 132 7.6 Exercises 133 Chapter 8: Additional Approaches to Parallel Programming 135 8.1 Programming Models 135 8.2 Programming Model Components 142 8.3 A Case Study 155 8.4 Summary 170 8.5 Exercises 170 Chapter 9: OpenACC and Interoperability 173 9.1 Calling Native Device Code from OpenACC 174 9.2 Calling OpenACC from Native Device Code 181 9.3 Advanced Interoperability Topics 182 9.4 Summary 185 9.5 Exercises 185 Chapter 10: Advanced OpenACC 187 10.1 Asynchronous Operations 187 10.2 Multidevice Programming 204 10.3 Summary 213 10.4 Exercises 213 Chapter 11: Innovative Research Ideas Using OpenACC, Part I 215 11.1 Sunway OpenACC 215 11.2 Compiler Transformation of Nested Loops for Accelerators 224 Chapter 12: Innovative Research Ideas Using OpenACC, Part II 237 12.1 A Framework for Directive-Based High-Performance Reconfigurable Computing 237 12.2 Programming Accelerated Clusters Using XcalableACC 253 Index 269

    £35.14

  • CUDA for Engineers

    Pearson Education (US) CUDA for Engineers

    1 in stock

    Book Synopsis: Duane Storti is a professor of mechanical engineering at the University of Washington in Seattle. He has thirty-five years of experience in teaching and research in the areas of engineering mathematics, dynamics and vibrations, computer-aided design, 3D printing, and applied GPU computing. Mete Yurtoglu is currently pursuing an M.S. in applied mathematics and a Ph.D. in mechanical engineering at the University of Washington in Seattle. His research interests include GPU-based methods for computer vision and machine learning.

    Table of Contents: Acknowledgments xvii About the Authors xix Introduction 1 What Is CUDA? 1 What Does “Need-to-Know” Mean for Learning CUDA? 2 What Is Meant by “for Engineers”? 3 What Do You Need to Get Started with CUDA? 4 How Is This Book Structured? 4 Conventions Used in This Book 8 Code Used in This Book 8 User’s Guide 9 Historical Context 10 References 12 Chapter 1: First Steps 13 Running CUDA Samples 13 Running Our Own Serial Apps 19 Summary 22 Suggested Projects 23 Chapter 2: CUDA Essentials 25 CUDA’s Model for Parallelism 25 Need-to-Know CUDA API and C Language Extensions 28 Summary 31 Suggested Projects 31 References 31 Chapter 3: From Loops to Grids 33 Parallelizing dist_v1 33 Parallelizing dist_v2 38 Standard Workflow 42 Simplified Workflow 43 Summary 47 Suggested Projects 48 References 48 Chapter 4: 2D Grids and Interactive Graphics 49 Launching 2D Computational Grids 50 Live Display via Graphics Interop 56 Application: Stability 66 Summary 76 Suggested Projects 76 References 77 Chapter 5: Stencils and Shared Memory 79 Thread Interdependence 80 Computing Derivatives on a 1D Grid 81 Summary 117 Suggested Projects 118 References 119 Chapter 6: Reduction and Atomic Functions 121 Threads Interacting Globally 121 Implementing parallel_dot 123 Computing Integral Properties: centroid_2d 130 Summary 138 Suggested Projects 138 References 138 Chapter 7: Interacting with 3D Data 141 Launching 3D Computational Grids: dist_3d 144 Viewing and Interacting with 3D Data: vis_3d 146 Summary 171 Suggested Projects 171 References 171 Chapter 8: Using CUDA Libraries 173 Custom versus Off-the-Shelf 173 Thrust 175 cuRAND 190 NPP 193 Linear Algebra Using cuSOLVER and cuBLAS 201 cuDNN 207 ArrayFire 207 Summary 207 Suggested Projects 208 References 209 Chapter 9: Exploring the CUDA Ecosystem 211 The Go-To List of Primary Sources 211 Further Sources 217 Summary 218 Suggested Projects 219 Appendix A: Hardware Setup 221 Checking for an NVIDIA GPU: Windows 221 Checking for an NVIDIA GPU: OS X 222 Checking for an NVIDIA GPU: Linux 223 Determining Compute Capability 223 Upgrading Compute Capability 225 Appendix B: Software Setup 229 Windows Setup 229 OS X Setup 238 Linux Setup 240 Appendix C: Need-to-Know C Programming 245 Characterization of C 245 C Language Basics 246 Data Types, Declarations, and Assignments 248 Defining Functions 250 Building Apps: Create, Compile, Run, Debug 251 Arrays, Memory Allocation, and Pointers 262 Control Statements: for, if 263 Sample C Programs 267 References 277 Appendix D: CUDA Practicalities: Timing, Profiling, Error Handling, and Debugging 279 Execution Timing and Profiling 279 Error Handling 292 Debugging in Windows 298 Debugging in Linux 305 CUDA-MEMCHECK 308 Using Visual Studio Property Pages 309 References 312 Index 313

    £31.82

  • Parallel Optimization Theory Algorithms and Applications Numerical Mathematics and Scientific Computation

    Oxford University Press, USA Parallel Optimization Theory Algorithms and Applications Numerical Mathematics and Scientific Computation

    15 in stock

    Book Synopsis: This text provides an introduction to the methods of parallel optimization by introducing parallel computing ideas and techniques into both optimization theory and numerical algorithms for large-scale optimization problems.

    Trade Review: "This book presents a domain that arises where two different branches of science, namely parallel computations and the theory of constrained optimization, intersect with real life problems. This domain, called parallel optimization, has been developing rapidly under the stimulus of progress in computer technology. The book focuses on parallel optimization methods for large-scale constrained optimization problems and structured linear problems. . . . [It] covers a vast portion of parallel optimization, though full coverage of this domain, as the authors admit, goes far beyond the capacity of a single monograph. This book, however, in over 500 pages brings an excellent and in-depth presentation of all the major aspects of a process which matches theory and methods of optimization with modern computers. The volume can be recommended for graduate students, faculty, and researchers in any of those fields."--Mathematical Reviews

    Table of Contents: Foreword ; Preface ; Glossary of Symbols ; 1. Introduction ; Part I Theory ; 2. Generalized Distances and Generalized Projections ; 3. Proximal Minimization with D-Functions ; Part II Algorithms ; 4. Penalty Methods, Barrier Methods and Augmented Lagrangians ; 5. Iterative Methods for Convex Feasibility Problems ; 6. Iterative Algorithms for Linearly Constrained Optimization Problems ; 7. Model Decomposition Algorithms ; 8. Decompositions in Interior Point Algorithms ; Part III Applications ; 9. Matrix Estimation Problems ; 10. Image Reconstruction from Projections ; 11. The Inverse Problem in Radiation Therapy Treatment Planning ; 12. Multicommodity Network Flow Problems ; 13. Planning Under Uncertainty ; 14. Decompositions for Parallel Computing ; 15. Numerical Investigations

    £195.75

  • Principles of Concurrent and Distributed

    Pearson Education Principles of Concurrent and Distributed

    2 in stock

    Book Synopsis: Mordechai (Moti) Ben-Ari is an Associate Professor in the Department of Science Teaching at the Weizmann Institute of Science in Rehovot, Israel. He is the author of texts on Ada, concurrent programming, programming languages, and mathematical logic, as well as Just a Theory: Exploring the Nature of Science. In 2004 he was honored with the ACM/SIGCSE Award for Outstanding Contribution to Computer Science Education.

    Table of Contents: Preface xi 1 What is Concurrent Programming? 1 1.1 Introduction 1 1.2 Concurrency as abstract parallelism 2 1.3 Multitasking 4 1.4 The terminology of concurrency 4 1.5 Multiple computers 5 1.6 The challenge of concurrent programming 5 2 The Concurrent Programming Abstraction 7 2.1 The role of abstraction 7 2.2 Concurrent execution as interleaving of atomic statements 8 2.3 Justification of the abstraction 13 2.4 Arbitrary interleaving 17 2.5 Atomic statements 19 2.6 Correctness 21 2.7 Fairness 23 2.8 Machine-code instructions 24 2.9 Volatile and non-atomic variables 28 2.10 The BACI concurrency simulator 29 2.11 Concurrency in Ada 31 2.12 Concurrency in Java 34 2.13 Writing concurrent programs in Promela 36 2.14 Supplement: the state diagram for the frog puzzle 37 3 The Critical Section Problem 45 3.1 Introduction 45 3.2 The definition of the problem 45 3.3 First attempt 48 3.4 Proving correctness with state diagrams 49 3.5 Correctness of the first attempt 53 3.6 Second attempt 55 3.7 Third attempt 57 3.8 Fourth attempt 58 3.9 Dekker’s algorithm 60 3.10 Complex atomic statements 61 4 Verification of Concurrent Programs 67 4.1 Logical specification of correctness properties 68 4.2 Inductive proofs of invariants 69 4.3 Basic concepts of temporal logic 72 4.4 Advanced concepts of temporal logic 75 4.5 A deductive proof of Dekker’s algorithm 79 4.6 Model checking 83 4.7 Spin and the Promela modeling language 83 4.8 Correctness specifications in Spin 86 4.9 Choosing a verification technique 88 5 Advanced Algorithms for the Critical Section Problem 93 5.1 The bakery algorithm 93 5.2 The bakery algorithm for N processes 95 5.3 Less restrictive models of concurrency 96 5.4 Fast algorithms 97 5.5 Implementations in Promela 104

    £71.99

  • Parallel Algorithms

    John Wiley & Sons Inc Parallel Algorithms

    1 in stock

    Book Synopsis: Parallel Algorithms Made Easy. The complexity of today's applications coupled with the widespread use of parallel computing has made the design and analysis of parallel algorithms topics of growing interest. This volume fills a need in the field for an introductory treatment of parallel algorithms-appropriate even at the undergraduate level, where no other textbooks on the subject exist. It features a systematic approach to the latest design techniques, providing analysis and implementation details for each parallel algorithm described in the book. Introduction to Parallel Algorithms covers foundations of parallel computing; parallel algorithms for trees and graphs; parallel algorithms for sorting, searching, and merging; and numerical algorithms. This remarkable book: * Presents basic concepts in clear and simple terms * Incorporates numerous examples to enhance students' understanding * Shows how to develop parallel algorithms for all classical problems in compu

    Trade Review: "...an introduction to parallel algorithms..." (Zentralblatt fur Mathematik, Vol. 948, No. 23)

    Table of Contents: FOUNDATIONS OF PARALLEL COMPUTING. Elements of Parallel Computing. Data Structures for Parallel Computing. Paradigms for Parallel Algorithm. Simple Algorithms. ALGORITHMS FOR GRAPH MODELS. Tree Algorithms. Graph Algorithms. NC Algorithms for Chordal Graphs. ARRAY MANIPULATION ALGORITHMS. Searching and Merging. Sorting Algorithms. NUMERICAL ALGORITHMS. Algebraic Equations and Matrices. Differentiation and Integration. Differential Equations. Answers to Selected Exercises. Index.

    £144.85

  • Parallel and Distributed Computing A Survey of

    John Wiley & Sons Inc Parallel and Distributed Computing A Survey of

    2 in stock

    Book Synopsis: Focuses on the area of parallel and distributed computing, and considers the diverse approaches. Covering a comprehensive set of models and paradigms, this book serves as both an introduction and a survey. It is suitable for students and can be used as a foundation for parallel and distributed computing courses.

    Trade Review: "A supplemental text providing a framework within which individual topics can be elaborated on in...courses...or a survey that researchers can consult before choosing a set of models and paradigms for the overlapping approaches to programming." (SciTech Book News, Vol. 25, No. 3, September 2001) "an excellent introduction to the field of parallel computing..." (CVu - Jnl of the Association C & C++ Users, February 2002)

    Table of Contents: Architectures. Data Parallelism. Shared-Memory Programming. Message Passing. Client/Server Computing. Code Mobility. Coordination Models. Object-Oriented Models. High-Level Programming Models. Abstract Models. Final Comparison. References. Index.

    £131.35

  • Connectionism and the Mind

    John Wiley and Sons Ltd Connectionism and the Mind

    10 in stock

    Book Synopsis: Connectionism and the Mind provides a clear and balanced introduction to connectionist networks and explores theoretical and philosophical implications. Much of this discussion from the first edition has been updated, and three new chapters have been added on the relation of connectionism to recent work on dynamical systems theory, artificial life, and cognitive neuroscience. Read two of the sample chapters on line: Connectionism and the Dynamical Approach to Cognition: http://www.blackwellpublishing.com/pdf/bechtel.pdf Networks, Robots, and Artificial Life: http://www.blackwellpublishing.com/pdf/bechtel2.pdf

    Trade Review: "Much more than just an update, this is a thorough and exciting re-build of the classic text. Excellent new treatments of modularity, dynamics, artificial life, and cognitive neuroscience locate connectionism at the very heart of contemporary debates. A superb combination of detail, clarity, scope, and enthusiasm." Andy Clark, University of Sussex "Connectionism and the Mind is an extraordinarily comprehensive and thoughtful review of connectionism, with particular emphasis on recent developments. This new edition will be a valuable primer to those new to the field. But there is more: Bechtel and Abrahamsen's trenchant and even-handed analysis of the conceptual issues that are addressed by connectionist models constitute an important original theoretical contribution to cognitive science." Jeff Elman, University of California at San Diego

    Table of Contents: Preface. 1. Networks versus Symbol Systems: Two Approaches to Modeling Cognition:. A Revolution in the Making?. Forerunners of Connectionism: Pandemonium and Perceptrons. The Allure of Symbol Manipulation. The Disappearance and Re-emergence of Network Models. New Alliances and Unfinished Business. Notes. Sources and Suggested Readings. 2. Connectionist Architectures:. The Flavor of Connectionist Processing: A Simulation of Memory Retrieval. The Design Features of a Connectionist Architecture. The Allure of the Connectionist Approach. Challenges Facing Connectionist Networks. Summary. Notes. Sources and Suggested Readings. 3. Learning:. Traditional and Contemporary Approaches to Learning. Connectionist Models of Learning. Some Issues Regarding Learning. Notes. Sources and Suggested Readings. 4. Pattern Recognition and Cognition:. Networks as Pattern Recognition Devices. Extending Pattern Recognition to Higher Cognition. Logical Inference as Pattern Recognition. Beyond Pattern Recognition. Notes. Sources and Suggested Readings. 5. Are Rules Required to Process Representations?:. Is Language Use Governed by Rules?. Rumelhart and McClelland's Model of Past-Tense Acquisition. Pinker and Prince's Arguments for Rules. Accounting for the U-Shaped Learning Function. Conclusion. Notes. Sources and Suggested Readings. 6. Are Syntactically Structured Representations Needed?:. Fodor and Pylyshyn's Critique: The Need for Symbolic Representations with Constituent Structure. First Connectionist Response: Explicitly Implementing Rules and Representations. Second Connectionist Response: Implementing Functionally Compositional Representations. Third Connectionist Response: Employing Procedural Knowledge with External Symbols. Using External Symbols to Provide Exact Symbol Processing. Clarifying the Standard: Systematicity and Degree of Generalizability. Conclusion. Notes. Sources and Suggested Readings. 7. Simulating Higher Cognition: A Modular Architecture for Processing Scripts:. Overview of Scripts. Overview of Miikkulainen's DISCERN System. Modular Connectionist Architectures. FGREP: An Architecture that Allows the System to Devise Its Own Representations. A Self-organizing Lexicon using Kohonen Feature Maps. Encoding and Decoding Stories as Scripts. A Connectionist Episodic Memory. Performance: Paraphrasing Stories and Answering Questions. Evaluating DISCERN. Paths Beyond the First Decade of Connectionism. Notes. Sources and Suggested Readings. 8. Connectionism and the Dynamical Approach to Cognition:. Are We on the Road to a Dynamical Revolution?. Basic Concepts of DST: The Geometry of Change. Using Dynamical Systems Tools to Analyze Networks. Putting Chaos to Work in Networks. Is Dynamicism a Competitor to Connectionism?. Is Dynamicism Complementary to Connectionism?. Conclusion. Notes. Sources and Suggested Readings. 9. Networks, Robots, and Artificial Life:. Robots and the Genetic Algorithm. Cellular Automata and the Synthetic Strategy. Evolution and Learning in Food-seekers. Evolution and Development in Khepera. The Computational Neuroethology of Robots. When Philosophers Encounter Robots. Conclusion. Sources and Suggested Readings. 10. Connectionism and the Brain:. Connectionism Meets Cognitive Neuroscience. Four Connectionist Models of Brain Processes. The Neural Implausibility of Many Connectionist Models. Wither Connectionism?. Notes. Sources and Suggested Readings. Appendix A: Notation. Appendix B: Glossary. Bibliography. Name Index. Subject Index.

    £152.75

  • Foundations of Scalable Systems

    O'Reilly Media Foundations of Scalable Systems

    5 in stock

    Book Synopsis: This practical book covers design approaches and technologies that make it possible to scale an application quickly and cost-effectively. Author Ian Gorton takes software architects and developers through the principles of foundational distributed systems.

    £39.74

  • Edsger Wybe Dijkstra

    Association of Computing Machinery,U.S. Edsger Wybe Dijkstra

    15 in stock

    Book Synopsis: Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. In this book, 31 computer scientists present and discuss Dijkstra’s numerous contributions to computing science and assess their impact.

    £69.30

  • Edsger Wybe Dijkstra

    Association of Computing Machinery,U.S. Edsger Wybe Dijkstra

    15 in stock

    Book Synopsis: Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. In this book, 31 computer scientists present and discuss Dijkstra’s numerous contributions to computing science and assess their impact.

    £99.90

  • Constructive Methods for Parallel Programming

    Nova Science Publishers Inc Constructive Methods for Parallel Programming

    1 in stock

    Book Synopsis: Constructive Methods for Parallel Programming

    £85.59

  • Advanced Parallel & Distributed Computing:

    Nova Science Publishers Inc Advanced Parallel & Distributed Computing:

    1 in stock

    £129.74

  • Performance Modelling Techniques for Parallel

    Nova Science Publishers Inc Performance Modelling Techniques for Parallel

    1 in stock

    £36.74

  • Grokking Concurrency

    Manning Publications Grokking Concurrency

    10 in stock

    Book Synopsis: This easy-to-read, hands-on guide demystifies concurrency concepts like threading, asynchronous programming, and parallel processing in any language. For readers who know the basics of programming. Grokking Concurrency is the ultimate guide to effective concurrency practices that will help you leverage multiple cores, excel with high loads, handle terabytes of data, and continue working after hardware and software failures. The core concepts in this guide will remain eternally relevant, whether you are building web apps, IoT systems, or handling big data. Specifically, you will: Get up to speed with the core concepts of concurrency, asynchrony, and parallel programming; Learn the strengths and weaknesses of different hardware architectures; Improve the sequential performance characteristics of your software; Solve common problems for concurrent programming; Compose patterns into a series of practices for writing scalable systems; Write and implement concurrency systems that scale to any size. Grokking Concurrency demystifies writing high-performance concurrent code through clear explanations of core concepts, interesting illustrations, insightful examples, and detailed techniques you can apply to your own projects.

    About the technology: Microservices, big data, real-time systems, and other performance-intensive applications can all slow your systems to a crawl. You know the solution is “concurrency.” Now what? How do you choose among concurrency approaches? How can you be sure you will actually reduce latency and complete your jobs faster? This entertaining, fully illustrated guide answers all of your concurrency questions so you can start taking full advantage of modern multicore processors.

    Trade Review: "Don't be afraid about concurrency, learn from Grokking Concurrency!" Eddu Melendez "This book is a model of clarity. It clearly puts back not-so-well-known concepts in context." Luc Rogge "The Manning Grokking series has a well deserved good reputation and this book will not let the series down." Patrick Regan

    £38.99

  • Kubernetes in Production Best Practices: Build

    Packt Publishing Limited Kubernetes in Production Best Practices: Build

    1 in stock

    Book Synopsis: Design, build, and operate scalable and reliable Kubernetes infrastructure for production.

    Key Features: Implement industry best practices to build and manage production-grade Kubernetes infrastructure; Learn how to architect scalable Kubernetes clusters, harden container security, and fine-tune resource management; Understand, manage, and operate complex business workloads confidently.

    Book Description: Although out-of-the-box solutions can help you to get a cluster up and running quickly, running a Kubernetes cluster that is optimized for production workloads is a challenge, especially for users with basic or intermediate knowledge. With detailed coverage of cloud industry standards and best practices for achieving scalability, availability, operational excellence, and cost optimization, this Kubernetes book is a blueprint for managing applications and services in production. You'll discover the most common way to deploy and operate Kubernetes clusters, which is to use a public cloud-managed service from AWS, Azure, or Google Cloud Platform (GCP). This book explores Amazon Elastic Kubernetes Service (Amazon EKS), the AWS-managed version of Kubernetes, for working through practical exercises. As you get to grips with implementation details specific to AWS and EKS, you'll understand the design concepts, implementation best practices, and configuration applicable to other cloud-managed services. Throughout the book, you'll also discover standard and cloud-agnostic tools, such as Terraform and Ansible, for provisioning and configuring infrastructure. By the end of this book, you'll be able to leverage Kubernetes to operate and manage your production environments confidently.

    What you will learn: Explore different infrastructure architectures for Kubernetes deployment; Implement optimal open source and commercial storage management solutions; Apply best practices for provisioning and configuring Kubernetes clusters, including infrastructure as code (IaC) and configuration as code (CAC); Configure the cluster networking plugin and core networking components to get the best out of them; Secure your Kubernetes environment using the latest tools and best practices; Deploy core observability stacks, such as monitoring and logging, to fine-tune your infrastructure.

    Who this book is for: This book is for cloud infrastructure experts, DevOps engineers, site reliability engineers, and engineering managers looking to design and operate Kubernetes infrastructure for production. Basic knowledge of Kubernetes, Terraform, Ansible, Linux, and AWS is needed to get the most out of this book.

    Table of Contents: Introduction to Kubernetes Infrastructure and Production-Readiness; Architecting Production-Grade Kubernetes Infrastructure; Provisioning Kubernetes Clusters Using AWS and Terraform; Managing Cluster Configuration with Ansible; Configuring and Enhancing Kubernetes Networking Services; Securing Kubernetes Effectively; Managing Storage and Stateful Applications; Deploying Seamless and Reliable Applications; Monitoring, Logging, and Observability; Operating and Maintaining Efficient Kubernetes Clusters

    £27.99

  • Measuring Organisational Efficiency

    College Publications Measuring Organisational Efficiency

    15 in stock

    £13.50

  • Parallel Computing

    New Age International (UK) Ltd Parallel Computing

    5 in stock

    £28.50

  • Parallel Processing and Applied Mathematics: 14th

    Springer International Publishing AG Parallel Processing and Applied Mathematics: 14th

    1 in stock

    Book Synopsis: This two-volume set, LNCS 13826 and LNCS 13827, constitutes the proceedings of the 14th International Conference on Parallel Processing and Applied Mathematics, PPAM 2022, held in Gdansk, Poland, in September 2022. The 77 regular papers presented in these volumes were selected from 132 submissions. For regular tracks of the conference, 33 papers were selected from 62 submissions. The papers were organized in topical sections named as follows: Part I: numerical algorithms and parallel scientific computing; parallel non-numerical algorithms; GPU computing; performance analysis and prediction in HPC systems; scheduling for parallel computing; environments and frameworks for parallel/cloud computing; applications of parallel and distributed computing; soft computing with applications and special session on parallel EVD/SVD and its application in matrix computations. Part II: 9th Workshop on Language-Based Parallel Programming (WLPP 2022); 6th Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in New HPC Systems (MAMHYP 2022); first workshop on quantum computing and communication; First Workshop on Applications of Machine Learning and Artificial Intelligence in High Performance Computing (WAML 2022); 4th workshop on applied high performance numerical algorithms for PDEs; 5th minisymposium on HPC applications in physical sciences; 8th minisymposium on high performance computing interval methods; 7th workshop on complex collective systems.

    Table of Contents: Numerical Algorithms and Parallel Scientific Computing.- How accurate does Newton have to be?.- General framework for deriving reproducible Krylov subspace algorithms: BiCGStab case.- A generalized parallel prefix sums algorithm for arbitrary size array.- Infinite-Precision Inner Product and Sparse Matrix-Vector Multiplication Using Ozaki Scheme with Dot2 on Manycore Processors.- Advanced Stochastic Approaches for Applied Computing in Environmental Modeling.- Parallel Non-numerical Algorithms.- Parallel Suffix Sorting for Large String Analytics.- Parallel Extremely Randomized Decision Forests on Graphics Processors for Text Classification.- RDBMS speculative support improvement by the use of the query hypergraph representation.- GPU Computing.- Mixed Precision Algebraic Multigrid on GPUs.- Compact in-memory representation of decision trees in GPU-accelerated evolutionary induction.- Neural Nets with a Newton Conjugate Gradient Method on Multiple GPUs.- Performance Analysis and Prediction in HPC Systems.- Exploring Techniques for the Analysis of Spontaneous Asynchronicity in MPI-Parallel Applications.- Cost and Performance Analysis of MPI-based SaaS on the Private Cloud Infrastructure.- Building a Fine-Grained Analytical Performance Model for Complex Scientific Simulations.- Evaluation of machine learning techniques for predicting run times of scientific workflow jobs.- Smart clustering of HPC applications using similar job detection methods.- Scheduling for Parallel Computing.- Distributed Work Stealing in a Task-Based Dataflow Runtime.- Task Scheduler for Heterogeneous Data Centres based on Deep Reinforcement Learning.- Shisha: Online scheduling of CNN pipelines on heterogeneous architectures.- Proactive Task Offloading for Load Balancing in Iterative Applications.- Environments and Frameworks for Parallel/Cloud Computing.- Language Agnostic Approach for Unification of Implementation Variants for Different Computing Devices.- High Performance Dataframes from Parallel Processing Patterns.- Global Access to Legacy Data-Sets in Multi-Cloud Applications with Onedata.- Applications of Parallel and Distributed Computing.- MD-Bench: A generic proxy-app toolbox for state-of-the-art molecular dynamics algorithms.- Breaking Down the Parallel Performance of GROMACS, a High-Performance Molecular Dynamics Software.- GPU-based Molecular Dynamics of Turbulent Liquid Flows with OpenMM.- A novel parallel approach for modeling the dynamics of aerodynamically interacting particles in turbulent flows.- Reliable energy measurement on heterogeneous Systems-on-Chip based environments.- Distributed Objective Function Evaluation for Optimization of Radiation Therapy Treatment Plans.- Soft Computing with Applications.- GPU4SNN: GPU-based Acceleration for Spiking Neural Network Simulations.- Ant System Inspired Heuristic Optimization of UAVs Deployment for k-Coverage Problem.- Dataset related experimental investigation of chess position evaluation using a deep neural network.- Using AI-based edge processing in monitoring the pedestrian crossing.- Special Session on Parallel EVD/SVD and its Application in Matrix Computations.- Automatic code selection for the dense symmetric generalized eigenvalue problem using ATMathCoreLib.- On Relative Accuracy of the One-Sided Block-Jacobi SVD Algorithm.

    £53.99

  • Parallel Processing and Applied Mathematics: 14th

    Springer International Publishing AG Parallel Processing and Applied Mathematics: 14th

    1 in stock

    Book Synopsis: This two-volume set, LNCS 13826 and LNCS 13827, constitutes the proceedings of the 14th International Conference on Parallel Processing and Applied Mathematics, PPAM 2022, held in Gdansk, Poland, in September 2022. The 77 regular papers presented in these volumes were selected from 132 submissions. For regular tracks of the conference, 33 papers were selected from 62 submissions. The papers were organized in topical sections named as follows: Part I: numerical algorithms and parallel scientific computing; parallel non-numerical algorithms; GPU computing; performance analysis and prediction in HPC systems; scheduling for parallel computing; environments and frameworks for parallel/cloud computing; applications of parallel and distributed computing; soft computing with applications and special session on parallel EVD/SVD and its application in matrix computations. Part II: 9th Workshop on Language-Based Parallel Programming (WLPP 2022); 6th Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in New HPC Systems (MAMHYP 2022); first workshop on quantum computing and communication; First Workshop on Applications of Machine Learning and Artificial Intelligence in High Performance Computing (WAML 2022); 4th workshop on applied high performance numerical algorithms for PDEs; 5th minisymposium on HPC applications in physical sciences; 8th minisymposium on high performance computing interval methods; 7th workshop on complex collective systems.

    Table of Contents: 9th Workshop on Language-Based Parallel Programming (WLPP 2022).- Kokkos-Based Implementation of MPCD on Heterogeneous Nodes.- Comparison of Load Balancing Schemes for Asynchronous Many-Task Runtimes.- New Insights on the Revised Definition of the Performance Portability Metric.- Inferential statistical analysis of performance portability.- NPDP Benchmark Suite for Loop Tiling Effectiveness Evaluation.- Parallel Vectorized Implementations of Compensated Summation Algorithms.- 6th Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in New HPC Systems (MAMHYP 2022).- Malleability Techniques for HPC Systems.- Algorithm and software overhead: a theoretical approach to performance portability.- Benchmarking A High Performance Computing Heterogeneous Cluster.- A Generative Adversarial Network approach for noise and artifacts reduction in MRI head and neck imaging.- A GPU accelerated Hyperspectral 3D Convolutional Neural Network Classification at the Edge with Principal Component Analysis preprocessing.- Parallel gEUD models for accelerated IMRT planning on modern HPC platforms.- First Workshop on Quantum Computing and Communication.- On Quantum-Assisted LDPC Decoding Augmented with Classical Post-Processing.- Quantum annealing to solve the unrelated parallel machine scheduling problem.- Early experiences with a photonic quantum simulator for solving Job Shop Scheduling Problem.- Some remarks on super-gram operators for general bipartite quantum states.- Solving the Traveling Salesman Problem with a Hybrid Quantum-Classical Feedforward Neural Network.- Software aided analysis of EWL based quantum games.- First Workshop on Applications of Machine Learning and Artificial Intelligence in High Performance Computing (WAML 2022).- Adaptation of AI-accelerated CFD simulations to the IPU platform.- Performance Analysis of Convolution Algorithms for Deep Learning on Edge Processors.- Machine Learning-based Online Scheduling in Distributed Computing.- High Performance Computing Queue Time Prediction using Clustering and Regression.- Acceptance Rates of Invertible Neural Networks on Electron Spectra from Near-Critical Laser-Plasmas: A Comparison.- 4th Workshop on Applied High Performance Numerical Algorithms for PDEs.- MATLAB implementation of hp finite elements on rectangles using hierarchical basis functions.- Adaptive Parallel Average Schwarz Preconditioner for Crouzeix-Raviart Finite Volume Method.- Parareal method for anisotropic diffusion denoising.- Comparison of block preconditioners for the Stokes problem with discontinuous viscosity and friction.- On minimization of nonlinear energies using FEM in MATLAB.- A model for crowd evacuation dynamics: 2D numerical simulations.- 5th Minisymposium on HPC Applications in Physical Sciences.- Parallel Identification of Unique Sequences in Nuclear Structure Calculations.- Experimental and computer study of molecular dynamics of a new pyridazine derivative.- Description of magnetic nanomolecules by the extended multi-orbital Hubbard model: perturbative vs numerical approach.- Structural and electronic properties of small-diameter Carbon NanoTubes: a DFT study.- 8th Minisymposium on High Performance Computing Interval Methods.- Need for Techniques Intermediate Between Interval and Probabilistic Ones.- A Cross-Platform Benchmark for Interval Computation Libraries.- Testing interval arithmetic libraries, including their IEEE-1788 compliance.- A survey of interval algorithms for solving multicriteria analysis problems.- 7th Workshop on Complex Collective Systems.- Social Fragmentation Transitions in Large-Scale Parameter Sweep Simulations of Adaptive Social Networks.- Parking search in urban street networks: Taming down the complexity of the search-time problem via a coarse-graining approach.- A multi-agent cellular automata model of lane changing behaviour considering the aggressiveness and the autonomy.- Comparison of the use of UWB and BLE as positioning methods in data-driven modeling of pedestrian dynamics.- An Insight into the State-of-the-Art Vehicular Fog Computing with an Opportunistic Flavour.

    £94.99

  • Parallel Algorithms

    World Scientific Publishing Co Pte Ltd Parallel Algorithms

    2 in stock

    Book Synopsis: This book is an introduction to the field of parallel algorithms and the underpinning techniques to realize the parallelization. The emphasis is on designing algorithms within the timeless and abstracted context of a high-level programming language. The focus of the presentation is on practical applications of the algorithm design using different models of parallel computation. Each model is illustrated by providing an adequate number of algorithms to solve some problems that quite often arise in many applications in science and engineering. The book is largely self-contained, presuming no special knowledge of parallel computers or particular mathematics. In addition, the solutions to all exercises are included at the end of each chapter. The book is intended as a text in the field of the design and analysis of parallel algorithms. It includes adequate material for a course in parallel algorithms at both undergraduate and graduate levels.

    £108.00
