Computer Architecture and Logic Design Books
Taylor & Francis Ltd How to Set Up Information Systems: A
Book Synopsis: This introductory user's guide to systems analysis and systems design focuses on building sustainable information systems to meet tomorrow's needs. It shows how practitioners can apply multiple participatory perspectives in development so as to avoid future problems. As a practical guide, it is presented to be readily comprehensible and is organized to enable users to concentrate on their goals efficiently and with minimum theoretical elaboration. The chapters follow the sequence involved in planning an information system, explaining key words and the time involved in each step, and ending with a tutorial or exercises.
Trade Review: '[An] excellent book.' Guy Fitzgerald, Professor of Information Systems, Brunel University. 'The book stands out in its field through the intelligent and constructive use of the soft systems methodology to develop the themes.' Peter Roberts, former Visiting Professor, Open University and City University. 'A useful text for teachers and practitioners of a Multiview approach to information analysis and design. It has matured and gained focus in this new edition.' Gilbert Mansell, Head of Department of Multimedia and Information Systems, University of Huddersfield. 'A textbook for people intending to practice information systems analysis and design.' SciTech Book News.
Table of Contents: Information Systems and Organization * What is Systems Analysis and Systems Design? * The Role of the Systems Planner or Systems Analyst * Selecting Planning and Development Tools * The Human Activity System: Making a Model * Information Modelling: Making a Workable System * Technical and Social Needs: The Balance * The Human-Computer Interface * Technical Aspects: What is Needed? * Total Design, Training, Hardware, Software and Implementation * Glossary, Appendices, Further Reading, Index
£52.24
AU Press Mind, Body, World: Foundations of Cognitive Science
Book Synopsis: Cognitive science arose in the 1950s when it became apparent that a number of disciplines, including psychology, computer science, linguistics, and philosophy, were fragmenting. Perhaps owing to the field's immediate origins in cybernetics, as well as to the foundational assumption that cognition is information processing, cognitive science initially seemed more unified than psychology. However, as a result of differing interpretations of the foundational assumption and dramatically divergent views of the meaning of the term information processing, three separate schools emerged: classical cognitive science, connectionist cognitive science, and embodied cognitive science. Examples, cases, and research findings taken from the wide range of phenomena studied by cognitive scientists effectively explain and explore the relationship among the three perspectives. Intended to introduce both graduate and senior undergraduate students to the foundations of cognitive science, Mind, Body, World addresses a number of questions currently being asked by those practicing in the field: What are the core assumptions of the three different schools? What are the relationships between these different sets of core assumptions? Is there only one cognitive science, or are there many different cognitive sciences? Giving the schools equal treatment and displaying a broad and deep understanding of the field, Dawson highlights the fundamental tensions and lines of fragmentation that exist among the schools and provides a refreshing and unifying framework for students of cognitive science.
Table of Contents: List of Figures and Tables | ix Preface | xiii Who Is This Book Written For? | xiv Acknowledgements | xv Chapter 1. The Cognitive Sciences: One or Many? | 1 1.0 Chapter Overview | 1 1.1 A Fragmented Psychology | 2 1.2 A Unified Cognitive Science | 3 1.3 Cognitive Science or the Cognitive Sciences? | 6 1.4 Cognitive Science: Pre-paradigmatic? | 13 1.5 A Plan of Action | 16 Chapter 2. Multiple Levels of Investigation | 19 2.0 Chapter Overview | 19 2.1 Machines and Minds | 20 2.2 From the Laws of Thought to Binary Logic | 23 2.3 From the Formal to the Physical | 29 2.4 Multiple Procedures and Architectures | 32 2.5 Relays and Multiple Realizations | 35 2.6 Multiple Levels of Investigation and Explanation | 38 2.7 Formal Accounts of Input-Output Mappings | 40 2.8 Behaviour by Design and by Artifact | 41 2.9 Algorithms from Artifacts | 43 2.10 Architectures against Homunculi | 46 2.11 Implementing Architectures | 48 2.12 Levelling the Field | 51 Chapter 3. Elements of Classical Cognitive Science | 55 3.0 Chapter Overview | 55 3.1 Mind, Disembodied | 56 3.2 Mechanizing the Infinite | 59 3.3 Phrase Markers and Fractals | 65 3.4 Behaviourism, Language, and Recursion | 68 3.5 Underdetermination and Innateness | 72 3.6 Physical Symbol Systems | 75 3.7 Componentiality, Computability, and Cognition | 78 3.8 The Intentional Stance | 82 3.9 Structure and Process | 85 3.10 A Classical Architecture for Cognition | 89 3.11 Weak Equivalence and the Turing Test | 93 3.12 Towards Strong Equivalence | 97 3.13 The Impenetrable Architecture | 106 3.14 Modularity of Mind | 113 3.15 Reverse Engineering | 119 3.16 What is Classical Cognitive Science? | 122 Chapter 4. 
Elements of Connectionist Cognitive Science | 125 4.0 Chapter Overview | 125 4.1 Nurture versus Nature | 126 4.2 Associations | 133 4.3 Nonlinear Transformations | 139 4.4 The Connectionist Sandwich | 142 4.5 Connectionist Computations: An Overview | 148 4.6 Beyond the Terminal Meta-postulate | 149 4.7 What Do Output Unit Activities Represent? | 152 4.8 Connectionist Algorithms: An Overview | 158 4.9 Empiricism and Internal Representations | 159 4.10 Chord Classification by a Multilayer Perceptron | 162 4.11 Trigger Features | 172 4.12 A Parallel Distributed Production System | 177 4.13 Of Coarse Codes | 184 4.14 Architectural Connectionism: An Overview | 188 4.15 New Powers of Old Networks | 189 4.16 Connectionist Reorientation | 193 4.17 Perceptrons and Jazz Progressions | 195 4.18 What Is Connectionist Cognitive Science? | 198 Chapter 5. Elements of Embodied Cognitive Science | 205 5.0 Chapter Overview | 205 5.1 Abandoning Methodological Solipsism | 206 5.2 Societal Computing | 210 5.3 Stigmergy and Superorganisms | 212 5.4 Embodiment, Situatedness, and Feedback | 216 5.5 Umwelten, Affordances, and Enactive Perception | 219 5.6 Horizontal Layers of Control | 222 5.7 Mind in Action | 224 5.8 The Extended Mind | 230 5.9 The Roots of Forward Engineering | 235 5.10 Reorientation without Representation | 239 5.11 Robotic Moments in Social Environments | 245 5.12 The Architecture of Mind Reading | 250 5.13 Levels of Embodied Cognitive Science | 255 5.14 What Is Embodied Cognitive Science? | 260 Chapter 6. Classical Music and Cognitive Science | 265 6.0 Chapter Overview | 265 6.1 The Classical Nature of Classical Music | 266 6.2 The Classical Approach to Musical Cognition | 273 6.3 Musical Romanticism and Connectionism | 280 6.4 The Connectionist Approach to Musical Cognition | 286 6.5 The Embodied Nature of Modern Music | 291 6.6 The Embodied Approach to Musical Cognition | 301 6.7 Cognitive Science and Classical Music | 307 Chapter 7. Marks of the Classical? | 315 7.0 Chapter Overview | 315 7.1 Symbols and Situations | 316 7.2 Marks of the Classical | 324 7.3 Centralized versus Decentralized Control | 326 7.4 Serial versus Parallel Processing | 334 7.5 Local versus Distributed Representations | 339 7.6 Internal Representations | 343 7.7 Explicit Rules versus Implicit Knowledge | 345 7.8 The Cognitive Vocabulary | 348 7.9 From Classical Marks to Hybrid Theories | 355 Chapter 8. Seeing and Visualizing | 359 8.0 Chapter Overview | 359 8.1 The Transparency of Visual Processing | 360 8.2 The Poverty of the Stimulus | 362 8.3 Enrichment via Unconscious Inference | 368 8.4 Natural Constraints | 371 8.5 Vision, Cognition, and Visual Cognition | 379 8.6 Indexing Objects in the World | 383
£33.15
Oro Editions digitalSTRUCTURES: Data and Urban Strategies of the Civic Future
Book Synopsis: digitalSTRUCTURES: Data and Urban Strategies of the Civic Future provokes a larger body of work that engages with digital property and data infrastructures. Digital currencies (cryptocurrencies) and digital property require large amounts of land, resources, data centers, and infrastructure to store these "supplies." There is a larger architectural and urban-infrastructural challenge, and an urgency, in how these various kinds of digital exchange are mediated so as to limit the detrimental use of our everyday resources. If our everyday objects are digital and no longer physical, how does this change the ecological questions? How does this affect the future of urban living? The case studies, interviews, and guest contributions prompt discussions that were part of CityX Venice, Sezione del Padiglione Italia, at the 17th La Biennale di Venezia. Guest contributors were prompted to challenge and provoke the issues of open innovation models that operate a city, robotics and artificially intelligent systems, supply chains affected by digital storage, and data-infrastructural arguments that play a large role within our Web 3.0 urban digital and real landscapes. Using a mixed-media approach, the book couples a novel exploration of XR (extended reality) and AR (augmented reality) with diagrammatic mapping and graphical cartography, examining how data interacts with various open innovation models in digital property and real property.
£23.96
Morgan & Claypool Publishers An Architecture for Fast and General Data Processing on Large Clusters
Book Synopsis: The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, a myriad of data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but support these new applications too. This book, a revised version of the dissertation that won the 2014 ACM Doctoral Dissertation Award, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing. We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective. This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.
Table of Contents: Preface 1. Introduction 2. Resilient Distributed Datasets 3. Models Built over RDDs 4. Discretized Streams 5. Generality of RDDs 6. Conclusion References Author's Biography
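To make the data-sharing idea concrete, here is a minimal PySpark sketch (not taken from the book, and assuming a local Spark/pyspark installation): an RDD is cached in memory once and then reused by two separate computations, the kind of multi-pass reuse that one-pass MapReduce jobs lack.

```python
# Minimal PySpark sketch: build an RDD, cache it, and reuse it across two passes.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

# One parallel dataset, kept in cluster memory after the first computation.
squares = sc.parallelize(range(1_000_000)).map(lambda x: x * x).cache()

total = squares.reduce(lambda a, b: a + b)            # pass 1: batch-style aggregation
evens = squares.filter(lambda x: x % 2 == 0).count()  # pass 2: reuses the cached data

print(total, evens)
sc.stop()
```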
£49.50
Morgan & Claypool Publishers An Architecture for Fast and General Data Processing on Large Clusters
£60.00
Morgan & Claypool Publishers Shared-Memory Parallelism Can Be Simple, Fast, and Scalable
Book Synopsis: Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, and at the same time to emphasize the theoretical and practical aspects of algorithm design so that the solutions developed run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era. The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is also the first graph processing system to support in-memory graph compression. The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores. This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
Table of Contents: Introduction * Preliminaries and Notation * Programming Techniques for Deterministic Parallelism * Internally Deterministic Parallelism: Techniques and Algorithms * Deterministic Parallelism in Sequential Iterative Algorithms * A Deterministic Phase-Concurrent Parallel Hash Table * Priority Updates: A Contention-Reducing Primitive for Deterministic Programming * Large-Scale Shared-Memory Graph Analytics * Ligra: A Lightweight Graph Processing Framework for Shared Memory * Ligra+: Adding Compression to Ligra * Parallel Graph Algorithms * Linear-Work Parallel Graph Connectivity * Parallel and Cache-Oblivious Triangle Computations * Parallel String Algorithms * Parallel Cartesian Tree and Suffix Tree Construction * Parallel Computation of Longest Common Prefixes * Parallel Lempel-Ziv Factorization * Parallel Wavelet Tree Construction * Conclusion and Future Work * Bibliography
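Ligra itself is a C++ framework; the Python sketch below only mimics its frontier-based edgeMap style of expressing a breadth-first search, to give a flavour of the "short and concise code" the synopsis refers to. The function and variable names here are illustrative, not Ligra's actual API.

```python
# Illustrative Python sketch of a Ligra-style frontier-based BFS (not Ligra's real API):
# edge_map applies an update function to edges leaving the current frontier and
# returns the next frontier of newly reached vertices.
def edge_map(graph, frontier, update):
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    parent = {source: source}

    def update(u, v):
        # Claim v the first time any frontier vertex reaches it.
        if v not in parent:
            parent[v] = u
            return True
        return False

    frontier = {source}
    while frontier:
        frontier = edge_map(graph, frontier, update)
    return parent

# Tiny example graph given as an adjacency list.
g = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(g, 0))  # parent pointers; 3's parent may be 1 or 2 depending on traversal order
```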
£75.65
Morgan & Claypool Publishers Shared-Memory Parallelism Can Be Simple, Fast, and Scalable
£89.25
Springer Nature Switzerland AG Fundamentals of Computer Architecture and Design
Book Synopsis: This textbook provides semester-length coverage of computer architecture and design, providing a strong foundation for students to understand modern computer system architecture and to apply these insights and principles to future computer designs. It is based on the author's decades of industrial experience with computer architecture and design, as well as with teaching students focused on pursuing careers in computer engineering. Unlike a number of existing textbooks for this course, this one focuses not only on CPU architecture but also covers in great detail system buses, peripherals, and memories. This book teaches every element in a computing system in two steps. First, it introduces the functionality of each topic (and subtopics) and then goes into "from-scratch design" of a particular digital block from its architectural specifications using timing diagrams. The author describes how the data-path of a certain digital block is generated using timing diagrams, a method which most textbooks do not cover but which is valuable in actual practice. In the end, the reader is ready to use both the design methodology and the basic computing building blocks presented in the book to produce industrial-strength designs.
Trade Review: "This book can be part of computer engineering and electrical engineering graduate coursework and can be a reference book for engineers. It takes a bottom-up approach in which the author has covered basic principles before going into the breadth and depth of complex topics. It can broadly be divided in three sections: logic design, I/O, and central processing unit (CPU) design." (Krishna Nagar, Computing Reviews, January 25, 2018)
Table of Contents: Review Of Combinational Circuits.- Review Of Sequential Circuits.- Review Of Asynchronous Circuits.- System Bus.- Memory Circuits And Systems.- Central Processing Unit.- System Peripherals.- Special Topics.- Appendix.
£94.99
Springer Nature Switzerland AG High-Performance Modelling and Simulation for Big Data Applications: Selected Results of the COST Action IC1406 cHiPSet
Book Synopsis: This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
Table of Contents: Why High-Performance Modelling and Simulation for Big Data Applications Matters.- Parallelization of hierarchical matrix algorithms for electromagnetic scattering problems.- Tail Distribution and Extreme Quantile Estimation using Non-Parametric Approaches.- Towards efficient and scalable data-intensive content delivery: State-of-the-art, issues and challenges.- Big Data in 5G Distributed Applications.- Big Data Processing, Analysis and Applications in Mobile Cellular Networks.- Medical Data Processing and Analysis for Remote Health and Activities Monitoring.- Towards human cell simulation.- Cloud-based High Throughput Virtual Screening in Novel Drug Discovery.- Ultra Wide Band Body Area Networks: Design and integration with Computational Clouds.- Survey on AI-based multimodal methods for emotion detection.- Forecasting Cryptocurrency Value by Sentiment Analysis: An HPC-oriented Survey of the State-of-the-Art in the Cloud Era.
£40.49
Springer Nature Switzerland AG Architecture of Computing Systems – ARCS 2019: 32nd International Conference, Copenhagen, Denmark, May 20–23, 2019, Proceedings
Book Synopsis: This book constitutes the proceedings of the 32nd International Conference on Architecture of Computing Systems, ARCS 2019, held in Copenhagen, Denmark, in May 2019. The 24 full papers presented in this volume were carefully reviewed and selected from 40 submissions. ARCS has always been a conference attracting leading-edge research outcomes in Computer Architecture and Operating Systems, including a wide spectrum of topics ranging from embedded and real-time systems all the way to large-scale and parallel systems. The selected papers are organized in the following topical sections: Dependable systems; real-time systems; special applications; architecture; memory hierarchy; FPGA; energy awareness; NoC/SoC. The chapter 'MEMPower: Data-Aware GPU Memory Power Model' is open access under a CC BY 4.0 license at link.springer.com.
Table of Contents: Dependable Systems.- Hardware/Software Co-designed Security Extensions for Embedded Devices.- SDES - Scalable Software Support for Dependable Embedded Systems.- Real-Time Systems.- Asynchronous Critical Sections in Real-Time Multiprocessor Systems.- Resource-Aware Parameter Tuning for Real-Time Applications.- A Hybrid NoC Enabling Fail-Operational and Hard Real-Time Communication in MPSoC.- Special Applications.- DSL-based Acceleration of Automotive Environment Perception and Mapping Algorithms for embedded CPUs, GPUs, and FPGAs.- Applying the Concept of Artificial DNA and Hormone System to a Low-Performance Automotive Environment.- A Parallel Adaptive Swarm Search Framework for Solving Black-Box Optimization Problems.- Architecture.- Leros: the Return of the Accumulator Machine.- A Generic Functional Simulation of Heterogeneous Systems.- Evaluating Dynamic Task Scheduling in a Task-based Runtime System for Heterogeneous Architectures.- Dynamic Scheduling of Pipelined Functional Units in Coarse-Grained Reconfigurable Array Elements.- Memory Hierarchy.- CyPhOS – A Component-based Cache-Aware Multi-Core Operating System.- Investigation of L2-Cache interferences in a NXP QorIQ T4240 multicore processor.- MEMPower: Data-Aware GPU Memory Power Model.- FPGA.- Effective FPGA Architecture for General CRC.- Receive-Side Notification for Enhanced RDMA in FPGA Based Networks.- An Efficient FPGA Accelerator Design for Optimized CNNs using OpenCL.- Energy Awareness.- The Return of Power Gating: Smart Leakage Energy Reductions in Modern Out-of-Order Processor Architectures.- A Heterogeneous and Reconfigurable Embedded Architecture for Energy-efficient Execution of Convolutional Neural Networks.- An energy efficient embedded processor for hard real-time Java applications.- NoC/SoC.- A Minimal Network Interface for a Simple Network-on-Chip.- Network Coding in Networks-on-Chip with Lossy Links.- Application Specific Reconfigurable SoC Interconnection Network Architectures.
£49.49
Springer Nature Switzerland AG SystemVerilog Assertions and Functional Coverage: Guide to Language, Methodology and Applications
Book Synopsis: This book provides a hands-on, application-oriented guide to the language and methodology of both SystemVerilog Assertions and Functional Coverage. Readers will benefit from the step-by-step approach to learning the language and methodology nuances of both SystemVerilog Assertions and Functional Coverage, which will enable them to uncover hidden and hard-to-find bugs, point directly to the source of a bug, provide a clean and easy way to model complex timing checks, and objectively answer the question 'have we functionally verified everything?'. Written by a professional end-user of ASIC/SoC/CPU and FPGA design and verification, this book explains each concept with easy-to-understand examples, simulation logs, and applications derived from real projects. Readers will be empowered to tackle the modeling of complex checkers for functional verification and exhaustive coverage models for functional coverage, thereby drastically reducing their time to design, debug, and cover. This updated third edition addresses the latest functional set released in the IEEE-1800 (2012) LRM, including numerous additional operators and features. Additionally, many of the Concurrent Assertions/Operators explanations are enhanced with the addition of more examples and figures. · Covers in its entirety the latest IEEE-1800 2012 LRM syntax and semantics; · Covers both SystemVerilog Assertions and SystemVerilog Functional Coverage languages and methodologies; · Provides practical applications of the what, how and why of Assertion Based Verification and Functional Coverage methodologies; · Explains each concept in a step-by-step fashion and applies it to a practical real-life example; · Includes 6 practical LABs that enable readers to put into practice the concepts explained in the book.
Table of Contents: Introduction.- SystemVerilog Assertions.- Immediate Assertions.- Concurrent Assertions – Basics (sequence, property, assert).- Sampled Value Functions $rose, $fell.- Operators.- System Functions and Tasks.- Multiple clocks.- Local Variables.- Recursive property.- Detecting and using endpoint of a sequence.- 'expect'.- 'assume' and formal (static functional) verification.- Other important topics.- Asynchronous Assertions !!!.- IEEE-1800–2009 Features.- SystemVerilog Assertions LABs.- SystemVerilog Assertions – LAB Answers.- Functional Coverage.- Performance Implications of coverage methodology.- Coverage Options.
£66.49
Springer Nature Switzerland AG Sequential and Parallel Algorithms and Data Structures
Book Synopsis: This textbook is a concise introduction to the basic toolbox of structures that allow efficient organization and retrieval of data, key algorithms for problems on graphs, and generic techniques for modeling, understanding, and solving algorithmic problems. The authors aim for a balance between simplicity and efficiency, between theory and practice, and between classical results and the forefront of research. Individual chapters cover arrays and linked lists, hash tables and associative arrays, sorting and selection, priority queues, sorted sequences, graph representation, graph traversal, shortest paths, minimum spanning trees, optimization, collective communication and computation, and load balancing. The authors also discuss important issues such as algorithm engineering, memory hierarchies, algorithm libraries, and certifying algorithms. Moving beyond the sequential algorithms and data structures of the earlier related title, this book takes into account the paradigm shift towards the parallel processing required to solve modern performance-critical applications, and how this impacts the teaching of algorithms. The book is suitable for undergraduate and graduate students and professionals familiar with programming and basic mathematical language. Most chapters have the same basic structure: the authors discuss a problem as it occurs in a real-life situation, they illustrate the most important applications, and then they introduce simple solutions as informally as possible and as formally as necessary so the reader really understands the issues at hand. As they move to more advanced and optional issues, their approach gradually leads to a more mathematical treatment, including theorems and proofs. The book includes many examples, pictures, informal explanations, and exercises, and the implementation notes introduce clean, efficient implementations in languages such as C++ and Java.
Trade Review: "The style of the book is accessible and is suitable for a wide range of audiences, from mathematicians and computer scientists to researchers from other fields who would like to use parallelised approaches in their research." (Irina Ioana Mohorianu, zbMATH 1445.68003, 2020)
Table of Contents: Appetizer: Integer Arithmetic.- Introduction.- Representing Sequences by Arrays and Linked Lists.- Hash Tables and Associative Arrays.- Sorting and Selection.- Priority Queues.- Sorted Sequences.- Graph Representation.- Graph Traversal.- Shortest Paths.- Minimum Spanning Trees.- Generic Approaches to Optimization.- Collective Communication and Computation.- Load Balancing.- App. A, Mathematical Background.- App. B, Computer Architecture Aspects.- App. C, Support for Parallelism in C++.- App. D, The Message Passing Interface (MPI).- App. E, List of Commercial Products, Trademarks and Licenses.
£39.99
Springer Nature Switzerland AG Software Architecture: 13th European Conference, ECSA 2019, Paris, France, September 9–13, 2019, Proceedings
Book Synopsis: This book constitutes the refereed proceedings of the 13th European Conference on Software Architecture, ECSA 2019, held in Paris, France, in September 2019. In the Research Track, the 11 full papers presented together with 4 short papers were carefully reviewed and selected from 63 submissions. They are organized in topical sections as follows: Services and Micro-services, Software Architecture in Development Process, Adaptation and Design Space Exploration, and Quality Attributes. In the Industrial Track, 6 submissions were received and 3 were accepted to form part of these proceedings.
Table of Contents: Services and Micro-services.- Guiding Architectural Decision Making on Service Mesh Based Microservice Architectures.- Supporting Architectural Decision Making on Data Management in Microservice Architectures.- From a Monolith to a Microservices Architecture: An Approach Based on Transactional Contexts.- Software Architecture in Development Process.- An Exploratory Study of Naturalistic Decision Making in Complex Software Architecture Environments.- Evaluating the Effectiveness of Multi-level Greedy Modularity Clustering for Software Architecture Recovery.- What Quality Attributes Can we Find in Product Backlogs? A Machine Learning Perspective.- Architecturing Elastic Edge Storage Services for Data-Driven Decision Making.- Adaptation and Design Space Exploration.- Continuous Adaptation Management in Collective Intelligence Systems.- ADOOPLA – Product-Line- and Product-Level PLA Optimization.- Assessing Adaptability of Software Architectures for Cyber Physical Production Systems.- Quality Attributes.- Optimising Architectures for Performance, Cost, and Security.- QoS-based Formation of Software Architectures in the Internet of Things.- A Survey on Big Data Analytics Solutions Deployment.- Assessing the Quality Impact of Features in Component-based Software Architectures.- Components and Design Alternatives in E-Assessment Systems.- Industry Track.- A Four-Layer Architecture Pattern for Constructing and Managing Digital Twins.- Tool Support for the Migration to Microservice Architecture: An Industrial Case Study.- ACE: Easy Deployment of Field Optimization Experiments.
£44.99
Springer Nature Switzerland AG Intelligent Internet of Things: From Device to Fog and Cloud
Book Synopsis: This holistic book is an invaluable reference for addressing the various practical challenges in architecting and engineering Intelligent IoT and eHealth solutions, aimed at industry practitioners, academics and researchers, as well as engineers involved in product development. The first part provides a comprehensive guide to the fundamentals, applications, challenges, technical and economic benefits, and promises of the Internet of Things, using examples of real-world applications. It also addresses all the important aspects of designing and engineering cutting-edge IoT solutions using a cross-layer approach from device to fog and cloud, covering standards, protocols, design principles, reference architectures, and all the underlying technologies, pillars, and components such as embedded systems, networks, cloud computing, data storage, data processing, big data analytics, machine learning, distributed ledger technologies, and security. In addition, it discusses the effects of Intelligent IoT, which are reflected in new business models and digital transformation. The second part provides an insightful guide to the design and deployment of IoT solutions for smart healthcare as one of the most important applications of IoT. It therefore targets smart healthcare: wearable sensors, body area sensors, advanced pervasive healthcare systems, and big data analytics aimed at providing connected health interventions to individuals for healthier lifestyles.
Table of Contents: Introduction.- Engineering an AI-driven IoT platform.- Smart and connected IoT devices.- Engineering IoT networks.- IoT cloud architecture and design.- End-to-end security.- Machine learning fundamentals.- Big Data and advanced analytics.- AI-driven IoT for smart health.- Biomedical engineering fundamentals.- Biosensors and connected wearable eHealth devices.- Applications of machine learning & IoT in healthcare.- AI-driven IoT eHealth prototyping lab.- Conclusion.
£66.49
Springer Nature Switzerland AG How Transistor Area Shrank by 1 Million Fold
Book Synopsis: This book explains in layman's terms how CMOS transistors work. The author explains step-by-step how CMOS transistors are built, along with an explanation of the purpose of each process step. He describes for readers the key inventions and developments in science and engineering that overcame huge obstacles, enabling engineers to shrink transistor area by over 1 million fold and build billions of transistor switches that switch over a billion times a second, all on a piece of silicon smaller than a thumbnail.
Table of Contents: Introduction.- Overview.- Semiconductors and Insulators.- Diodes, MOS Transistors, Bipolar Transistors, Inverters.- Building High Performance MOS Transistors.- Parasitic MOS and Bipolar Transistors.- Design Rules and Photo Patterns.- CMOS Inverter Process Flow.- Key Inventions & Developments that Enabled Scaling.- Process Flow with Histories of Scaling at Key Steps.
£37.99
Springer Nature Switzerland AG Reversible Computation: Extending Horizons of Computing: Selected Results of the COST Action IC1405
Book Synopsis: This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that have emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems and revolutionary reversible logic gates and circuits, but they can only be realized and have lasting effect if conceptual and firm theoretical foundations are established first.
Table of Contents: Foundations of Reversible Computation.- Software and Reversible Systems: A Survey of Recent Activities.- Simulation and Design of Quantum Circuits.- Research on Reversible Functions Having Component Functions with Specified Properties - An Overview.- A Case Study for Reversible Computing: Reversible Debugging.- Towards Choreographic-Based Monitoring.- Reversibility in Chemical Reactions.- Reversible Control of Robots.- Reversible Languages and Incremental State Saving in Optimistic Parallel Discrete Event Simulation.- Reversible Computation in Wireless Communications.- Error Reconciliation in Quantum Key Distribution Protocols.
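As a concrete illustration of the reversible logic gates mentioned in the synopsis, here is a small Python sketch (not from the book) of the Toffoli (CCNOT) gate, which is universal for reversible Boolean logic: applying it twice restores the original bits, so no information is destroyed.

```python
# Toffoli (CCNOT) gate: flips the target bit c only when both controls a and b are 1.
# It is its own inverse, so applying it twice recovers the input, the defining
# property of a reversible gate.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    once = toffoli(*bits)
    twice = toffoli(*once)
    assert twice == bits  # reversibility: the original input is recovered
    print(bits, "->", once)
```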
£34.99
Springer Nature Switzerland AG Embedded System Design: Embedded Systems Foundations of Cyber-Physical Systems, and the Internet of Things
Book Synopsis: A unique feature of this open access textbook is that it provides a comprehensive introduction to the fundamental knowledge in embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution of single-core processors to multi-core processors, and the increased importance of energy efficiency and thermal issues.
Table of Contents: Chapter 1. Introduction.- Chapter 2. Specifications and Modeling.- Chapter 3. Embedded System Hardware.- Chapter 4. System Software.- Chapter 5. Evaluation and Validation.- Chapter 6. Application Mapping.- Chapter 7. Optimization.- Chapter 8. Test.
£42.74
Springer Nature Switzerland AG Introduction to SystemVerilog
Book Synopsis: This book provides a hands-on, application-oriented guide to the entire IEEE standard 1800 SystemVerilog language. Readers will benefit from the step-by-step approach to learning the language and methodology nuances, which will enable them to design and verify complex ASIC/SoC and CPU chips. The author covers the entire spectrum of the language, including random constraints, SystemVerilog Assertions, Functional Coverage, Class, checkers, interfaces, and data types, among other features of the language. Written by an experienced, professional end-user of ASIC/SoC/CPU and FPGA designs, this book explains each concept with easy-to-understand examples, simulation logs and applications derived from real projects. Readers will be empowered to tackle the complex task of multi-million gate ASIC designs. Provides comprehensive coverage of the entire IEEE standard SystemVerilog language; Covers important topics such as constrained random verification, SystemVerilog Class, Assertions, Functional Coverage, data types, checkers, interfaces, processes and procedures, among other language features; Uses easy-to-understand examples and simulation logs; examples are simulatable and will be provided online; Written by an experienced, professional end-user of ASIC/SoC/CPU and FPGA designs.
"This is quite a comprehensive work. It must have taken a long time to write it. I really like that the author has taken apart each of the SystemVerilog constructs and talks about them in great detail, including example code and simulation logs. For example, there is a chapter dedicated to arrays, and another dedicated to queues - that is great to have! The Language Reference Manual (LRM) is quite dense and difficult to use as a text for learning the language. This book explains semantics at a level of detail that is not possible in an LRM. This is the strength of the book. This will be an excellent book for novice users and as a handy reference for experienced programmers." (Mark Glasser, Cerebras Systems)
Table of Contents: Introduction.- Data Types.- Arrays.- Queues.- Structures.- Packages.- Class.- SystemVerilog 'module'.- SystemVerilog 'program'.- Interfaces.- Operators.- Constrained Random Test Generation and Verification.- SystemVerilog Assertions.- Functional Coverage.- SystemVerilog Processes.- Procedural programming statements.- Processes.- Tasks and Functions.- Clocking Blocks.- Checkers.- Inter-process communication and synchronization.- Utility System tasks and functions.
£69.99
Springer Nature Switzerland AG Introduction to Computation: Haskell, Logic and Automata
Book Synopsis: Computation, itself a form of calculation, incorporates both arithmetical and non-arithmetical (logical) steps that follow a specific set of rules (an algorithm). This uniquely accessible textbook introduces students using a very distinctive approach, quite rapidly leading them into essential topics with sufficient depth, yet in a highly intuitive manner. From core elements like sets, types, Venn diagrams and logic, to patterns of reasoning, sequent calculus, recursion and expression trees, the book spans the breadth of key concepts and methods that will enable students to readily progress with their studies in Computer Science.
Trade Review: "This book is intended as a textbook for an introductory course in computation for students beginning in informatics. No prerequisites are needed, all concepts, even elementary ones ... . It is also very suited for self-study, even if a reader is interested in Haskell or symbolic logic alone. ... Comprehension is supported by exercises for each chapter ... ." (Dieter Riebesehl, zbMATH 1497.68005, 2022)
Table of Contents: 1 Sets.- 2 Types.- 3 Simple Computations.- 4 Venn Diagrams and Logical Connectives.- 5 Lists and Comprehensions.- 6 Features and Predicates.- 7 Testing Your Programs.- 8 Patterns of Reasoning.- 9 More Patterns of Reasoning.- 10 Lists and Recursion.- 11 More Fun with Recursion.- 12 Higher-Order Functions.- 13 Higher and Higher.- 14 Sequent Calculus.- 15 Algebraic Data Types.- 16 Expression Trees.- 17 Karnaugh Maps.- 18 Relations and Quantifiers.- 19 Checking Satisfiability.- 20 Data Representation.- 21 Data Abstraction.- 22 Efficient CNF Conversion.- 23 Counting Satisfying Valuations.- 24 Type Classes.- 25 Search in Trees.- 26 Combinatorial Algorithms.- 27 Finite Automata.- 28 Deterministic Finite Automata.- 29 Non-Deterministic Finite Automata.- 30 Input/Output and Monads.- 31 Regular Expressions.- 32 Non-Regular Languages.- Index.
£28.49
Springer Nature Switzerland AG Masterclass Enterprise Architecture Management
Book Synopsis: This textbook provides a hands-on introduction to enterprise architecture management. It guides the reader through the application of methods and tools to typical business problems by presenting enterprise architecture frameworks and by sharing experiences from industry. The structure of the book represents the typical stages of the journey of an enterprise architect. Chapter 1 addresses the central question of what to achieve with the introduction of an enterprise architecture. Chapter 2 then introduces concepts and visualizations for business architecture that help with understanding the business. In Chapter 3 the development of an application architecture is outlined, which provides transparency on information systems and their business context. Next, Chapter 4 presents visual tools to analyze, improve and eventually optimize the application landscape. Chapter 5 discusses both traditional organizational as well as collaborative approaches to enterprise architecture management. Finally, several established enterprise architecture frameworks such as TOGAF, Zachman, ArchiMate, and IAF are described in Chapter 6. The book concludes with a summary and an outlook on future research potential in Chapter 7. Based on their experience from several years of teaching, the authors introduce students step-by-step to enterprise architecture development and management. Their book is intended as a guide for master classes at universities and includes many exercises and references for further reading.
Table of Contents: 1 Introduction.- 2 Understanding Business Architecture.- 3 Developing Application Architecture.- 4 Analysing Enterprise Architecture.- 5 Managing Enterprise Architecture.- 6 Applying Frameworks.- 7 Summary and Outlook.
£52.24
Springer Nature Switzerland AG Formal Verification of Floating-Point Hardware Design
Book Synopsis: This is the first book to focus on the problem of ensuring the correctness of floating-point hardware designs through mathematical methods. Formal Verification of Floating-Point Hardware Design, Second Edition advances a verification methodology based on a unified theory of register-transfer logic and floating-point arithmetic that has been developed and applied to the formal verification of commercial floating-point units over the course of more than two decades, during which the author was employed by several major microprocessor design companies. The theory is extended to the analysis of several algorithms and optimization techniques that are commonly used in commercial implementations of elementary arithmetic operations. As a basis for the formal verification of such implementations, high-level specifications of the basic arithmetic instructions of several major industry-standard floating-point architectures are presented, including all details pertaining to the handling of exceptional conditions. The methodology is illustrated in the comprehensive verification of a variety of state-of-the-art commercial floating-point designs developed by Arm Holdings. This revised edition reflects the evolving microarchitectures and increasing sophistication of Arm processors, and the variation in the design goals of execution speed, hardware area requirements, and power consumption. Many new results have been added to Parts I–III (Register-Transfer Logic, Floating-Point Arithmetic, and Implementation of Elementary Operations), extending the theory and describing new techniques. These were derived as required in the verification of the new RTL designs described in Part V.
Table of Contents: Part I - Register-Transfer Logic.- Basic Arithmetic Functions.- Bit Vectors.- Logical Operations.- Part II - Floating-Point Arithmetic.- Floating-Point Numbers.- Floating-Point Formats.- Rounding.- IEEE-Compliant Square Root.- Part III - Implementation of Elementary Operations.- Addition.- Multiplication.- SRT Division and Square Root.- FMA-Based Division.- Part IV - Comparative Architectures: SSE, x87, and Arm.- SSE Floating-Point Instructions.- x87 Instructions.- Arm Floating-Point Instructions.- Part V - Formal Verification of RTL Designs.- The RAC Modeling Language.- Double-Precision Multiplication and Scaling.- Double-Precision Addition and FMA.- Multi-Precision Radix-8 SRT Division.- 64-bit Integer Division.- Multi-Precision Radix-4 SRT Square Root.- Multi-Precision Radix-2 SRT Division.- Fused Multiply-Add of a Graphics Processor.
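For orientation, the correctness property at the heart of such floating-point specifications is that each basic operation returns the rounding of the exact result. The statement below is the standard IEEE 754-style formulation, not a formula quoted from this book:

\[
\mathrm{fl}(x \circ y) = \mathrm{rnd}(x \circ y), \qquad \circ \in \{+,\, -,\, \times,\, \div\},
\]

where \(\mathrm{rnd}\) maps the exact real result to a representable number under the selected rounding mode. For binary formats with a \(p\)-bit significand under round-to-nearest, this yields the familiar bound \(\mathrm{fl}(x \circ y) = (x \circ y)(1+\delta)\) with \(|\delta| \le 2^{-p}\), assuming no overflow or underflow occurs.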
£107.99
Springer Nature Switzerland AG Logic Functions and Equations: Fundamentals and
Book Synopsis: The greatly expanded and updated 3rd edition of this textbook offers the reader a comprehensive introduction to the concepts of logic functions and equations and their applications across computer science and engineering. The authors' approach emphasizes a thorough understanding of the fundamental principles as well as numerical and computer-based solution methods. The book provides insight into applications across propositional logic, binary arithmetic, coding, cryptography, complexity, logic design, and artificial intelligence. Updated throughout, major additions for the 3rd edition include: a new chapter about the concepts contributing to the power of XBOOLE; a new chapter that introduces the application of the XBOOLE-Monitor XBM 2; many tasks at the end of the chapters that support readers in reinforcing the learned content; solutions to a large subset of these tasks to confirm learning success; and challenging tasks that need the power of the XBOOLE software for their solution. The XBOOLE-Monitor XBM 2 software is used to solve the exercises; in this way the time-consuming and error-prone manipulation at the bit level is moved to an ordinary PC, more realistic tasks can be solved, and the challenge of thinking about algorithms leads to a higher level of education.
Table of Contents: Part I Theoretical Foundations: 1. Basic Algebraic Structures.- 2. Logic Functions.- 3. Logic Equations.- 4. Boolean Differential Calculus.- 5. Sets, Lattices, and Classes of Logic Functions.- Part II Applications: 6. Logics, Arithmetic, and Special Functions.- 7. SAT-Problems.- 8. Extremely Complex Problems.- 9. Combinational Circuits.- 10. Sequential Circuits.- References.- Index.
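As a pointer to what the Boolean Differential Calculus chapter is about, its most basic operation is the simple derivative of a logic function with respect to one variable. The definition below is the standard textbook form, stated here for orientation rather than quoted from this book:

\[
\frac{\partial f}{\partial x_i} \;=\; f(x_1,\ldots,x_i = 0,\ldots,x_n) \;\oplus\; f(x_1,\ldots,x_i = 1,\ldots,x_n),
\]

which evaluates to 1 exactly for those assignments of the remaining variables for which changing \(x_i\) changes the value of \(f\).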
£56.99
Springer Nature Switzerland AG Model Checking, Synthesis, and Learning: Essays Dedicated to Bengt Jonsson on The Occasion of His 60th Birthday
Book Synopsis: This Festschrift, dedicated to Bengt Jonsson on the occasion of his 60th birthday, contains papers written by many of his friends and collaborators. Bengt has made major contributions covering a wide range of topics, including verification and learning. His works on verification of infinite-state systems, learning, testing, probabilistic systems, timed systems, and distributed systems reflect both the diversity and the depth of his research. Besides being an excellent scientist, Bengt is also a leader who has greatly influenced the careers of both his students and his colleagues. His main focus throughout his career has been in the area of formal methods, and the research papers dedicated to him in this volume address related topics, particularly model checking, temporal logic, and automata learning.
Table of Contents: Model Checking, Synthesis, and Learning.- From Linear Temporal Logics to Büchi Automata: The Early and Simple Principle.- Cause-Effect Reaction Latency in Real-Time Systems.- Quantitative Analysis of Interval Markov Chains.- Regular Model Checking: Evolution and Perspectives.- Regular Model Checking Revisited.- High-Level Representation of Benchmark Families for Petri Games.- Towards Engineering Digital Twins by Active Behaviour Mining.- Never-Stop Context-Free Learning.- A Taxonomy and Reductions for Common Register Automata Formalisms.
£52.24
Springer Nature Switzerland AG Neuromorphic Computing Principles and Organization
Book SynopsisThis book focuses on neuromorphic computing principles and organization and on how to build fault-tolerant, scalable hardware for large and medium-scale spiking neural networks with learning capabilities. In addition, the book comprehensively describes the organization and design of a spike-based neuromorphic system that performs spiking-neural-network communication, computing, and adaptive learning for emerging AI applications. The book begins with an overview of neuromorphic computing systems and explores the fundamental concepts of artificial neural networks. Next, we discuss artificial neurons and how they have evolved in their representation of biological neuronal dynamics. Afterward, we discuss how these networks are realized in terms of neuron models, storage technologies, inter-neuron communication networks, learning, and various design approaches. Then come the fundamental design principles for building an efficient neuromorphic system in hardware. The challenges that need to be solved to build a spiking neural network architecture with many synapses are discussed. Learning in neuromorphic computing systems and the major emerging memory technologies that show promise for neuromorphic computing are then presented. A particular chapter of this book is dedicated to the circuits and architectures used for communication in neuromorphic systems. In particular, the Network-on-Chip fabric is introduced for receiving and transmitting spikes following the Address Event Representation (AER) protocol, together with the memory-access method. In addition, the interconnect design principles are covered to help the reader understand the overall concept of on-chip and off-chip communication. Advanced on-chip interconnect technologies, including silicon-photonic three-dimensional interconnects and fault-tolerant routing algorithms, are also presented. The book also covers the main threats to reliability and discusses several recovery methods for multicore neuromorphic systems; this is important for reliable processing in several embedded neuromorphic applications. A reconfigurable design approach that supports multiple target applications via dynamic reconfigurability, network topology independence, and network expandability is also described in the subsequent chapters. The book ends with a case study of a real hardware-software design of a reliable three-dimensional digital neuromorphic processor whose 3D-IC implementation mirrors the three-dimensional structure of the biological brain. The platform enables high integration density and low spike delay for spiking networks and features a scalable design. We present methods for fault detection and recovery in a neuromorphic system as well. Neuromorphic Computing Principles and Organization is an excellent resource for researchers, scientists, graduate students, and hardware-software engineers dealing with the ever-increasing demands on fault-tolerance, scalability, and low power consumption.
It is also an excellent resource for teaching advanced undergraduate and graduate students about the fundamental concepts, organization, and actual hardware-software design of reliable neuromorphic systems with learning and fault-tolerance capabilities.Table of Contents1 Introduction to Neuromorphic Computing Systems.- 2 Neuromorphic System Design Fundamentals.- 3 Learning in Neuromorphic Systems.- 4 Emerging Memory Devices for Neuromorphic Systems.- 5 Communication Networks for Neuromorphic Systems.- 6 Fault-Tolerant Neuromorphic System Design.- 7 Reconfigurable Neuromorphic Computing System.- 8 Case Study: Real Hardware-Software Design of 3D-NoC-based Neuromorphic System.- 9 Survey of Neuromorphic Systems.
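The Address Event Representation (AER) protocol referred to above transmits only the addresses of spiking neurons, plus timing, rather than full activation values. A minimal, illustrative C sketch of such an event and its packing into a single network flit might look as follows; the field widths are assumptions for illustration, not the book's design.

#include <stdint.h>
#include <stdio.h>

/* A minimal Address Event Representation (AER) spike event: the source
   neuron's address plus a timestamp. Field widths here are illustrative
   assumptions, not taken from any particular neuromorphic chip. */
typedef struct {
    uint16_t core_id;    /* which neuromorphic core fired   */
    uint16_t neuron_id;  /* neuron index within that core   */
    uint32_t timestamp;  /* spike time in simulation ticks  */
} aer_event;

/* Pack the event into one 64-bit word, e.g. to form a NoC flit. */
static uint64_t aer_pack(aer_event e) {
    return ((uint64_t)e.core_id   << 48) |
           ((uint64_t)e.neuron_id << 32) |
            (uint64_t)e.timestamp;
}

int main(void) {
    aer_event e = { .core_id = 3, .neuron_id = 117, .timestamp = 42000 };
    printf("flit = 0x%016llx\n", (unsigned long long)aer_pack(e));
    return 0;
}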
£49.49
Springer Nature Switzerland AG Computer Systems: Digital Design, Fundamentals of
Book SynopsisThis updated textbook covers digital design, fundamentals of computer architecture, and ARM assembly language. The book starts by introducing computer abstraction, basic number systems, character coding, basic knowledge in digital design, and components of a computer. The book goes on to discuss information representation in computing, Boolean algebra and logic gates, and sequential logic. The book also presents an introduction to computer architecture, cache mapping methods, and virtual memory. The author also covers ARM architecture, ARM instructions, ARM assembly language using Keil development tools, and bitwise and control structures using C and ARM assembly language. The book includes a set of laboratory experiments related to digital design using Logisim software and ARM assembly language programming using Keil development tools. In addition, each chapter features objectives, summaries, key terms, review questions, and problems.Table of ContentsChapter 1: Signal and number systems.- Chapter 2: Boolean Logics and Logic Gates.- Chapter 3: Minterms, Maxterms, Karnaugh Map (K-Map), and Universal Gates.- Chapter 4: Combinational Logic.- Chapter 5: Synchronous Sequential Logic.- Chapter 6: Introduction to Computer Architecture.- Chapter 7: Memory.- Chapter 8: Assembly Language and ARM Instructions Part I.- Chapter 9: ARM Assembly Language Programming Using Keil Development Tools.- Chapter 10: ARM Instructions Part II and Instruction Formats.- Chapter 11: Bitwise and Control Structures Used for Programming with C and ARM Assembly Language.
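Chapter 11's pairing of C with ARM assembly for bit manipulation can be previewed with a few canonical bitwise idioms; the fragment below is a generic illustration, not an excerpt from the textbook.

#include <stdint.h>
#include <stdio.h>

/* Canonical bit-manipulation idioms of the kind typically written in C
   and then compared against their ARM instruction sequences. */
int main(void) {
    uint32_t reg = 0x0000F0F0u;

    reg |=  (1u << 3);            /* set bit 3    */
    reg &= ~(1u << 4);            /* clear bit 4  */
    reg ^=  (1u << 5);            /* toggle bit 5 */
    int bit7 = (reg >> 7) & 1u;   /* test bit 7   */

    printf("reg = 0x%08X, bit7 = %d\n", reg, bit7);
    return 0;
}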
£61.74
Springer Nature Switzerland AG Computer Systems: Digital Design, Fundamentals of
Book SynopsisThis updated textbook covers digital design, fundamentals of computer architecture, and ARM assembly language. The book starts by introducing computer abstraction, basic number systems, character coding, basic knowledge in digital design, and components of a computer. The book goes on to discuss information representation in computing, Boolean algebra and logic gates, and sequential logic. The book also presents an introduction to computer architecture, cache mapping methods, and virtual memory. The author also covers ARM architecture, ARM instructions, ARM assembly language using Keil development tools, and bitwise and control structures using C and ARM assembly language. The book includes a set of laboratory experiments related to digital design using Logisim software and ARM assembly language programming using Keil development tools. In addition, each chapter features objectives, summaries, key terms, review questions, and problems.Table of ContentsChapter 1: Signal and number systems.- Chapter 2: Boolean Logics and Logic Gates.- Chapter 3: Minterms, Maxterms, Karnaugh Map (K-Map), and Universal Gates.- Chapter 4: Combinational Logic.- Chapter 5: Synchronous Sequential Logic.- Chapter 6: Introduction to Computer Architecture.- Chapter 7: Memory.- Chapter 8: Assembly Language and ARM Instructions Part I.- Chapter 9: ARM Assembly Language Programming Using Keil Development Tools.- Chapter 10: ARM Instructions Part II and Instruction Formats.- Chapter 11: Bitwise and Control Structures Used for Programming with C and ARM Assembly Language.
£44.99
Springer Nature Switzerland AG Approximate Computing Techniques: From Component-
Book SynopsisThis book serves as a single-source reference to the latest advances in Approximate Computing (AxC), a promising technique for increasing performance or reducing the cost and power consumption of a computing system. The authors discuss the different AxC design and validation techniques and their integration. They also describe real AxC applications, spanning from mobile to high-performance computing, as well as safety-critical applications. Table of ContentsGeneral Introduction and Motivations.- Number representations.- Data level approximation.- Dynamic precision scaling.- Hardware level approximation.- Inexact operators.- Computation level approximation - algorithmic level.- Analysis of approximation effect on application quality.- Techniques for finite precision arithmetic.- Compilers and Programming Languages for Approximate Computing.- Design space exploration.- Word-length optimization for fixed-point and floating-point.- HLS of approximate accelerators.- Approximate Computing for IoT Applications.- Approximating Safety-Critical Applications.- Approximate Computing for HPC Applications.
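To make the listed ideas of data-level approximation and inexact operators concrete, here is a generic truncation-based approximate adder in C: the low-order bits of both operands are dropped before the addition, shortening the carry chain at the cost of a bounded error. This is an illustrative sketch, not a technique reproduced from the book.

#include <stdint.h>
#include <stdio.h>

/* Truncation-based approximate addition: zero out the k low-order bits
   of both operands before adding, so hardware needs no carry logic in
   the truncated positions. The absolute error is below roughly 2^(k+1). */
static uint32_t approx_add(uint32_t a, uint32_t b, unsigned k) {
    uint32_t mask = ~((1u << k) - 1u);   /* clears the k low bits */
    return (a & mask) + (b & mask);
}

int main(void) {
    uint32_t a = 1000003, b = 2000005;
    printf("exact  = %u\n", a + b);
    printf("approx = %u (k = 4)\n", approx_add(a, b, 4));
    return 0;
}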
£66.49
Springer Nature Switzerland AG VLSI Physical Design: From Graph Partitioning to Timing Closure
Book SynopsisThe complexity of modern chip design requires extensive use of specialized software throughout the process. To achieve the best results, a user of this software needs a high-level understanding of the underlying mathematical models and algorithms. In addition, a developer of such software must have a keen understanding of relevant computer science aspects, including algorithmic performance bottlenecks and how various algorithms operate and interact. This book introduces and compares the fundamental algorithms that are used during the IC physical design phase, wherein a geometric chip layout is produced starting from an abstract circuit design. This updated second edition includes recent advancements in the state-of-the-art of physical design, and builds upon foundational coverage of essential and fundamental techniques. Numerous examples and tasks with solutions increase the clarity of presentation and facilitate deeper understanding. A comprehensive set of slides is available on the Internet for each chapter, simplifying use of the book in instructional settings.“This improved, second edition of the book will continue to serve the EDA and design community well. It is a foundational text and reference for the next generation of professionals who will be called on to continue the advancement of our chip design tools and design the most advanced micro-electronics.” Dr. Leon Stok, Vice President, Electronic Design Automation, IBM Systems Group“This is the book I wish I had when I taught EDA in the past, and the one I’m using from now on.” Dr. Louis K. Scheffer, Howard Hughes Medical Institute“I would happily use this book when teaching Physical Design. I know of no other work that’s as comprehensive and up-to-date, with algorithmic focus and clear pseudocode for the key algorithms. The book is beautifully designed!”Prof. John P. Hayes, University of Michigan“The entire field of electronic design automation owes the authors a great debt for providing a single coherent source on physical design that is clear and tutorial in nature, while providing details on key state-of-the-art topics such as timing closure.”Prof. Kurt Keutzer, University of California, Berkeley“An excellent balance of the basics and more advanced concepts, presented by top experts in the field.” Prof. Sachin Sapatnekar, University of MinnesotaTable of Contents1 Introduction. 1.1 Electronic Design Automation (EDA). 1.2 VLSI Design Flow. 1.3 VLSI Design Styles. 1.4 Layout Layers and Design Rules. 1.5 Physical Design Optimizations. 1.6 Algorithms and Complexity. 1.7 Graph Theory Terminology. 1.8 Common EDA Terminology. 2 Netlist and System Partitioning. 2.1 Introduction. 2.2 Terminology. 2.3 Optimization Goals. 2.4 Partitioning Algorithms. 2.5 A Framework for Multilevel Partitioning. 2.6 System Partitioning onto Multiple FPGAs. Chapter 2 Exercises.3 Chip Planning. 3.1 Introduction to Floorplanning. 3.2 Optimization Goals in Floorplanning. 3.3 Terminology. 3.4 Floorplan Representations. 3.5 Floorplanning Algorithms. 3.6 Pin Assignment. 3.7 Power and Ground Routing. Chapter 3 Exercises.4 Global and Detailed Placement. 4.1 Introduction. 4.2 Optimization Objectives. 4.3 Global Placement. 4.4 Legalization and Detailed Placement. Chapter 4 Exercises.5 Global Routing. 5.1 Introduction. 5.2 Terminology and Definitions. 5.3 Optimization Goals. 5.4 Representations of Routing Regions. 5.5 The Global Routing Flow. 5.6 Single-Net Routing. 5.7 Full-Netlist Routing. 5.8 Modern Global Routing. 
Chapter 5 Exercises.6 Detailed Routing. 6.1 Terminology. 6.2 Horizontal and Vertical Constraint Graphs. 6.3 Channel Routing Algorithms. 6.4 Switchbox Routing. 6.5 Over-the-Cell Routing Algorithms. 6.6 Modern Challenges in Detailed Routing. Chapter 6 Exercises.7 Specialized Routing. 7.1 Introduction to Area Routing. 7.2 Net Ordering in Area Routing. 7.3 Non-Manhattan Routing. 7.4 Basic Concepts in Clock Networks. 7.5 Modern Clock Tree Synthesis. Chapter 7 Exercises.8 Timing Closure. 8.1 Introduction. 8.2 Timing Analysis and Performance Constraints. 8.3 Timing-Driven Placement. 8.4 Timing-Driven Routing. 8.5 Physical Synthesis. 8.6 Performance-Driven Design Flow. 8.7 Conclusions. Chapter 8 Exercises. A Solutions to Chapter Exercises. B Example CMOS Cell Layouts.
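One standard optimization objective in global placement (cf. Section 4.2) is total half-perimeter wirelength (HPWL); the small C routine below, a generic illustration rather than code from the book, computes the HPWL of a single net from its pin coordinates.

#include <stdio.h>

/* Half-perimeter wirelength (HPWL) of one net: the half-perimeter of the
   smallest axis-aligned bounding box enclosing all of the net's pins, a
   standard estimate of routed wirelength used during placement. */
static double hpwl(const double x[], const double y[], int n_pins) {
    double xmin = x[0], xmax = x[0], ymin = y[0], ymax = y[0];
    for (int i = 1; i < n_pins; i++) {
        if (x[i] < xmin) xmin = x[i];
        if (x[i] > xmax) xmax = x[i];
        if (y[i] < ymin) ymin = y[i];
        if (y[i] > ymax) ymax = y[i];
    }
    return (xmax - xmin) + (ymax - ymin);
}

int main(void) {
    double x[] = {1.0, 4.0, 2.5};
    double y[] = {0.0, 3.0, 5.0};
    printf("HPWL = %.1f\n", hpwl(x, y, 3));   /* (4-1) + (5-0) = 8.0 */
    return 0;
}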
£66.49
Springer Nature Switzerland AG VLSI Physical Design: From Graph Partitioning to Timing Closure
Book SynopsisThe complexity of modern chip design requires extensive use of specialized software throughout the process. To achieve the best results, a user of this software needs a high-level understanding of the underlying mathematical models and algorithms. In addition, a developer of such software must have a keen understanding of relevant computer science aspects, including algorithmic performance bottlenecks and how various algorithms operate and interact. This book introduces and compares the fundamental algorithms that are used during the IC physical design phase, wherein a geometric chip layout is produced starting from an abstract circuit design. This updated second edition includes recent advancements in the state-of-the-art of physical design, and builds upon foundational coverage of essential and fundamental techniques. Numerous examples and tasks with solutions increase the clarity of presentation and facilitate deeper understanding. A comprehensive set of slides is available on the Internet for each chapter, simplifying use of the book in instructional settings.“This improved, second edition of the book will continue to serve the EDA and design community well. It is a foundational text and reference for the next generation of professionals who will be called on to continue the advancement of our chip design tools and design the most advanced micro-electronics.” Dr. Leon Stok, Vice President, Electronic Design Automation, IBM Systems Group“This is the book I wish I had when I taught EDA in the past, and the one I’m using from now on.” Dr. Louis K. Scheffer, Howard Hughes Medical Institute“I would happily use this book when teaching Physical Design. I know of no other work that’s as comprehensive and up-to-date, with algorithmic focus and clear pseudocode for the key algorithms. The book is beautifully designed!”Prof. John P. Hayes, University of Michigan“The entire field of electronic design automation owes the authors a great debt for providing a single coherent source on physical design that is clear and tutorial in nature, while providing details on key state-of-the-art topics such as timing closure.”Prof. Kurt Keutzer, University of California, Berkeley“An excellent balance of the basics and more advanced concepts, presented by top experts in the field.” Prof. Sachin Sapatnekar, University of MinnesotaTable of Contents1 Introduction. 1.1 Electronic Design Automation (EDA). 1.2 VLSI Design Flow. 1.3 VLSI Design Styles. 1.4 Layout Layers and Design Rules. 1.5 Physical Design Optimizations. 1.6 Algorithms and Complexity. 1.7 Graph Theory Terminology. 1.8 Common EDA Terminology. 2 Netlist and System Partitioning. 2.1 Introduction. 2.2 Terminology. 2.3 Optimization Goals. 2.4 Partitioning Algorithms. 2.5 A Framework for Multilevel Partitioning. 2.6 System Partitioning onto Multiple FPGAs. Chapter 2 Exercises.3 Chip Planning. 3.1 Introduction to Floorplanning. 3.2 Optimization Goals in Floorplanning. 3.3 Terminology. 3.4 Floorplan Representations. 3.5 Floorplanning Algorithms. 3.6 Pin Assignment. 3.7 Power and Ground Routing. Chapter 3 Exercises.4 Global and Detailed Placement. 4.1 Introduction. 4.2 Optimization Objectives. 4.3 Global Placement. 4.4 Legalization and Detailed Placement. Chapter 4 Exercises.5 Global Routing. 5.1 Introduction. 5.2 Terminology and Definitions. 5.3 Optimization Goals. 5.4 Representations of Routing Regions. 5.5 The Global Routing Flow. 5.6 Single-Net Routing. 5.7 Full-Netlist Routing. 5.8 Modern Global Routing. 
Chapter 5 Exercises.6 Detailed Routing. 6.1 Terminology. 6.2 Horizontal and Vertical Constraint Graphs. 6.3 Channel Routing Algorithms. 6.4 Switchbox Routing. 6.5 Over-the-Cell Routing Algorithms. 6.6 Modern Challenges in Detailed Routing. Chapter 6 Exercises.7 Specialized Routing. 7.1 Introduction to Area Routing. 7.2 Net Ordering in Area Routing. 7.3 Non-Manhattan Routing. 7.4 Basic Concepts in Clock Networks. 7.5 Modern Clock Tree Synthesis. Chapter 7 Exercises.8 Timing Closure. 8.1 Introduction. 8.2 Timing Analysis and Performance Constraints. 8.3 Timing-Driven Placement. 8.4 Timing-Driven Routing. 8.5 Physical Synthesis. 8.6 Performance-Driven Design Flow. 8.7 Conclusions. Chapter 8 Exercises. A Solutions to Chapter Exercises. B Example CMOS Cell Layouts.
£52.24
Springer Nature Switzerland AG 3D Interconnect Architectures for Heterogeneous
Book SynopsisThis book describes the first comprehensive approach to the optimization of interconnect architectures in 3D systems on chips (SoCs), specifically addressing the challenges and opportunities arising from heterogeneous integration. Readers learn about the physical implications of using heterogeneous 3D technologies for SoC integration, while also learning to maximize the gains of 3D technology through physical-effect-aware architecture design. The book provides a deep theoretical background covering all abstraction levels needed to research and architect tomorrow’s 3D-integrated circuits, an extensive set of optimization methods (for power, performance, area, and yield), as well as an open-source optimization and simulation framework for fast exploration of novel designs.Table of ContentsPart I Introduction 1 Introduction to 3D Technologies 1.1 Motivation for Heterogeneous 3D ICs 1.2 3D Technologies 1.3 TSV Capacitances—A Problem Resistant to Scaling 1.4 Conclusion 2 Interconnect Architectures for 3D Technologies 2.1 Interconnect Architectures 2.2 Overview of Interconnect Architectures for 3D ICs 2.3 Three-Dimensional Networks on Chips 2.4 Conclusion Part II 3D Technology Modeling 3 Power and Performance Formulas 3.1 High-Level Formula for the Power Consumption 3.2 High-Level Formula for the Propagation Delay 3.3 Matrix Formulations 3.4 Evaluation 3.5 Conclusion 4 Capacitance Estimation 4.1 Existing Capacitance Models 4.2 Edge and MOS Effects on the TSV Capacitances 4.3 TSV Capacitance Model 4.4 Evaluation 4.5 Conclusion Part III System Modeling 5 Application and Simulation Models 5.1 Overview of the Modeling Approach 5.2 Application Traffic Model 5.3 Simulation Model of 3D NoCs 5.4 Simulator Interfaces 5.5 Conclusion 6 Bit-Level Statistics 6.1 Existing Approaches to Estimate the Bit-Level Statistics for Single Data Streams 6.2 Data-Stream Multiplexing 6.3 Bit-Level Statistics with Data-Stream Multiplexing 6.4 Evaluation 6.5 Conclusion 7 Ratatoskr Framework 7.1 Ratatoskr for Practitioners 7.2 Implementation 7.3 Evaluation 7.4 Case Study: Link Power Estimation and Optimization 7.5 Conclusion Part IV 3D-Interconnect Optimization 8 Low-Power Technique for 3D Interconnects 8.1 Fundamental Idea 8.2 Power-Optimal TSV Assignment 8.3 Systematic Net-to-TSV Assignments 8.4 Combination with Traditional Low-Power Codes 8.5 Evaluation 8.6 Conclusion 9 Low-Power Technique for High-Performance 3D Interconnects.
9.1 Edge-Effect-Aware Crosstalk Classification 9.2 Existing Approaches and Their Limitations 9.3 Proposed Technique 9.4 Extension to a Low-Power 3D CAC 9.5 Evaluation 9.6 Conclusion 10 Low-Power Technique for High-Performance 3D Interconnects (Misaligned) 10.1 Temporal-Misalignment Effect on the Crosstalk 10.2 Exploiting Misalignment to Improve the Performance 10.3 Effect on the TSV Power Consumption 10.4 Evaluation 10.5 Conclusion 11 Low-Power Technique for Yield-Enhanced 3D Interconnects 11.1 Existing TSV Yield-Enhancement Techniques 11.2 Preliminaries—Logical Impact of TSV Faults 11.3 Fundamental Idea 11.4 Formal Problem Description 11.5 TSV Redundancy Schemes 11.6 Evaluation 11.7 Case Study 11.8 Conclusion Part V NoC Optimization for Heterogeneous 3D Integration 12 Heterogeneous Buffering for 3D NoCs 12.1 Buffer Distributions and Depths 12.2 Routers with Optimized Buffer Distribution 12.3 Routers with Optimized Buffer Depths 12.4 Evaluation 12.5 Discussion 12.6 Conclusion 13 Heterogeneous Routing for 3D NoCs 13.1 Heterogeneity and Routing 13.2 Modeling Heterogeneous Technologies 13.3 Modeling Communication 13.4 Routing Limitations from Heterogeneity 13.5 Heterogeneous Routing Algorithms 13.6 Heterogeneous Router Architectures 13.7 Low-Power Routing in Heterogeneous 3D ICs 13.8 Evaluation 13.9 Discussion 13.10 Conclusion 14 Heterogeneous Virtualisation for 3D NoCs 14.1 Problem Description 14.2 Heterogeneous Microarchitectures Exploiting Traffic Imbalance 14.3 Evaluation 14.4 Conclusion 15 Network Synthesis and SoC Floor Planning 15.1 Fundamental Idea 15.2 Modelling and Optimization 15.3 Mixed-Integer Linear Program 15.4 Heuristic Solution 15.5 Evaluation 15.6 Conclusion Part VI Finale 16 Conclusion 16.1 Putting it all together 16.2 Impact on Future Work A Appendix B Pseudo Codes C Method to Calculate the Depletion-Region Widths D Modeling Logical OR Relations
£94.99
Springer International Publishing AG Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency
Book SynopsisChip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems. After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University. To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and transactional memory. 
This model can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic code execution of blocks of instructions, a technique that makes parallel coding much less error-prone. Contents: The Case for CMPs / Improving Throughput / Improving Latency Automatically / Improving Latency using Manual Parallel Programming / A Multicore World: The Future of CMPsTable of ContentsContents: The Case for CMPs.- Improving Throughput.- Improving Latency Automatically.- Improving Latency using Manual Parallel Programming.- A Multicore World: The Future of CMPs.
£26.59
Springer International Publishing AG Computer Architecture Techniques for Power-Efficiency
Book SynopsisIn the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While architects were for some time successful in delivering 40% to 50% annual improvements in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these costs is the inexorable increase in power dissipation and power density in processors. Power dissipation issues have catalyzed new topic areas in computer architecture, resulting in a substantial body of work on more power-efficient architectures. Power dissipation, coupled with diminishing performance gains, was also the main cause of the switch from single-core to multi-core architectures and of a slowdown in frequency increase. This book aims to document some of the most important architectural techniques that were invented, proposed, and applied to reduce both dynamic power and static power dissipation in processors and memory hierarchies. A significant number of techniques have been proposed for a wide range of situations, and this book synthesizes those techniques by focusing on their common characteristics. Table of Contents: Introduction / Modeling, Simulation, and Measurement / Using Voltage and Frequency Adjustments to Manage Dynamic Power / Optimizing Capacitance and Switching Activity to Reduce Dynamic Power / Managing Static (Leakage) Power / ConclusionsTable of ContentsIntroduction.- Modeling, Simulation, and Measurement.- Using Voltage and Frequency Adjustments to Manage Dynamic Power.- Optimizing Capacitance and Switching Activity to Reduce Dynamic Power.- Managing Static (Leakage) Power.- Conclusions.
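The dynamic- and static-power techniques surveyed above rest on the standard CMOS power relations (textbook identities, not formulas quoted from this title):

\[
P_{\text{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f, \qquad
P_{\text{static}} \approx V_{dd}\, I_{\text{leak}},
\]

where \(\alpha\) is the switching activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, \(f\) the clock frequency, and \(I_{\text{leak}}\) the leakage current. Because \(V_{dd}\) enters quadratically and also constrains the attainable \(f\), lowering voltage and frequency together is the single most effective dynamic-power lever, which is why the voltage- and frequency-adjustment chapter comes first among the techniques.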
£26.99
Springer International Publishing AG Fault Tolerant Computer Architecture
Book SynopsisFor many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes of this book are to explore the key ideas in fault-tolerant computer architecture and to present the current state-of-the-art - over approximately the past 10 years - in academia and industry. Table of Contents: Introduction / Error Detection / Error Recovery / Diagnosis / Self-Repair / The FutureTable of ContentsIntroduction.- Error Detection.- Error Recovery.- Diagnosis.- Self-Repair.- The Future.
£25.19
Springer International Publishing AG Introduction to Reconfigurable Supercomputing
Book SynopsisThis book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable parallel codes. We hope to show that FPGA acceleration, based on the exploitation of data parallelism, pipelining, and concurrency, remains promising in view of the diminishing improvements in traditional processor and system design. Table of Contents: FPGA Technology / Reconfigurable Supercomputing / Algorithmic Considerations / FPGA Programming Languages / Case Study: Sorting / Alternative Technologies and Concluding RemarksTable of ContentsFPGA Technology.- Reconfigurable Supercomputing.- Algorithmic Considerations.- FPGA Programming Languages.- Case Study: Sorting.- Alternative Technologies and Concluding Remarks.
£25.19
Springer International Publishing AG Transactional Memory, Second Edition
Book SynopsisThe advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically - either it completes successfully and commits its result in its entirety or it aborts. In addition, isolation ensures the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from a programmer to a compiler, to a language runtime system, or to hardware. The challenge for the system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010. Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / ConclusionsTable of ContentsIntroduction.- Basic Transactions.- Building on Basic Transactions.- Software Transactional Memory.- Hardware-Supported Transactional Memory.- Conclusions.
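To make the contrast with conventional locks concrete, the illustrative C sketch below keeps two related fields consistent by wrapping the critical section in a mutex; a transactional memory lets the programmer mark the same block as atomic and leaves conflict detection and rollback to the compiler, runtime, or hardware. The code is a generic example, not taken from the book.

#include <pthread.h>
#include <stdio.h>

/* Two fields that must always be updated together. With locks, the
   programmer must identify and acquire the right lock; a transactional
   memory would instead let this whole block be declared atomic, with
   conflict detection and rollback handled by the TM system. */
static long balance_a = 100, balance_b = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void transfer(long amount) {
    pthread_mutex_lock(&lock);     /* begin critical section              */
    balance_a -= amount;           /* other threads acquiring the lock    */
    balance_b += amount;           /* never observe a partial update      */
    pthread_mutex_unlock(&lock);   /* end critical section                */
}

int main(void) {
    transfer(25);
    printf("a=%ld b=%ld\n", balance_a, balance_b);
    return 0;
}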
£31.49
Springer International Publishing AG Processor Microarchitecture: An Implementation Perspective
Book SynopsisThis lecture presents a study of the microarchitecture of contemporary microprocessors. The focus is on implementation aspects, with discussions of their implications in terms of performance, power, and cost of state-of-the-art designs. The lecture starts with an overview of the different types of microprocessors and a review of the microarchitecture of cache memories. Then, it describes the implementation of the fetch unit, where special emphasis is placed on the required support for branch prediction. The next section is devoted to instruction decode, with special focus on the particular support needed to decode x86 instructions. The next chapter presents the allocation stage and pays special attention to the implementation of register renaming. Afterward, the issue stage is studied. Here, the logic to implement out-of-order issue for both memory and non-memory instructions is thoroughly described. The following chapter focuses on instruction execution and describes the different functional units that can be found in contemporary microprocessors, as well as the implementation of the bypass network, which has an important impact on performance. Finally, the lecture concludes with the commit stage, where it describes how the architectural state is updated and recovered in case of exceptions or misspeculations. This lecture is intended for an advanced course on computer architecture, suitable for graduate students or senior undergraduates who want to specialize in the area of computer architecture. It is also intended for practitioners in the industry in the area of microprocessor design. The book assumes that the reader is familiar with the main concepts regarding pipelining, out-of-order execution, cache memories, and virtual memory. Table of Contents: Introduction / Caches / The Instruction Fetch Unit / Decode / Allocation / The Issue Stage / Execute / The Commit Stage / References / Author BiographiesTable of ContentsIntroduction.- Caches.- The Instruction Fetch Unit.- Decode.- Allocation.- The Issue Stage.- Execute.- The Commit Stage.- References.- Author Biographies.
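The fetch unit's required support for branch prediction can be illustrated with the classic two-bit saturating-counter predictor sketched below; this is a generic textbook scheme, not the specific design described in the lecture, and the table size is an assumption.

#include <stdio.h>

#define PHT_SIZE 1024   /* pattern history table entries (assumed size) */

/* Classic 2-bit saturating counters indexed by low-order PC bits:
   values 0,1 predict not-taken; values 2,3 predict taken. */
static unsigned char pht[PHT_SIZE];   /* counters start at 0 (not taken) */

static int predict(unsigned pc) {
    return pht[(pc >> 2) % PHT_SIZE] >= 2;     /* 1 = predict taken */
}

static void update(unsigned pc, int taken) {
    unsigned char *c = &pht[(pc >> 2) % PHT_SIZE];
    if (taken  && *c < 3) (*c)++;              /* saturate at 3 */
    if (!taken && *c > 0) (*c)--;              /* saturate at 0 */
}

int main(void) {
    unsigned pc = 0x4000;
    for (int i = 0; i < 5; i++) {              /* a branch taken 5 times */
        printf("prediction: %s\n", predict(pc) ? "taken" : "not taken");
        update(pc, 1);
    }
    return 0;
}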
£26.59
Springer International Publishing AG Quantum Computing for Computer Architects, Second Edition
Book SynopsisQuantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore the systems-level challenges in achieving scalable, fault-tolerant quantum computation. In this lecture, we provide an engineering-oriented introduction to quantum computation with an overview of the theory behind key quantum algorithms. Next, we look at architectural case studies based upon experimental data and future projections for quantum computation implemented using trapped ions. While we focus here on architectures targeted for realization using trapped ions, the techniques for quantum computer architecture design, quantum fault-tolerance, and compilation described in this lecture are applicable to many other physical technologies that may be viable candidates for building a large-scale quantum computing system. We also discuss general issues involved with programming a quantum computer as well as a discussion of work on quantum architectures based on quantum teleportation. Finally, we consider some of the open issues remaining in the design of quantum computers. Table of Contents: Introduction / Basic Elements for Quantum Computation / Key Quantum Algorithms / Building Reliable and Scalable Quantum Architectures / Simulation of Quantum Computation / Architectural Elements / Case Study: The Quantum Logic Array Architecture / Programming the Quantum Architecture / Using the QLA for Quantum Simulation: The Transverse Ising Model / Teleportation-Based Quantum Architectures / Concluding RemarksTable of ContentsIntroduction.- Basic Elements for Quantum Computation.- Key Quantum Algorithms.- Building Reliable and Scalable Quantum Architectures.- Simulation of Quantum Computation.- Architectural Elements.- Case Study: The Quantum Logic Array Architecture.- Programming the Quantum Architecture.- Using the QLA for Quantum Simulation: The Transverse Ising Model.- Teleportation-Based Quantum Architectures.- Concluding Remarks.
£31.49
Springer International Publishing AG Phase Change Memory: From Devices to Systems
Book SynopsisAs conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions to enable PCM for main memories. Finally, the authors explore the impact of such byte-addressable non-volatile memories on future storage and system designs. Table of Contents: Next Generation Memory Technologies / Architecting PCM for Main Memories / Tolerating Slow Writes in PCM / Wear Leveling for Durability / Wear Leveling Under Adversarial Settings / Error Resilience in Phase Change Memories / Storage and System Design With Emerging Non-Volatile MemoriesTable of ContentsNext Generation Memory Technologies.- Architecting PCM for Main Memories.- Tolerating Slow Writes in PCM.- Wear Leveling for Durability.- Wear Leveling Under Adversarial Settings.- Error Resilience in Phase Change Memories.- Storage and System Design With Emerging Non-Volatile Memories.
£25.19
Springer International Publishing AG Automatic Parallelization: An Overview of Fundamental Compiler Techniques
Book SynopsisCompiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and discuss transformations that expose parallelism to target shared memory multicore and vector processors. We then discuss some problems that arise when parallelizing programs for execution on distributed memory machines. Finally, we conclude with an overview of solving Diophantine equations and suggestions for further readings in the topics of this book to enable the interested reader to delve deeper into the field. Table of Contents: Introduction and overview / Dependence analysis, dependence graphs and alias analysis / Program parallelization / Transformations to modify and eliminate dependences / Transformation of iterative and recursive constructs / Compiling for distributed memory machines / Solving Diophantine equations / A guide to further readingTable of ContentsIntroduction and overview.- Dependence analysis, dependence graphs and alias analysis.- Program parallelization.- Transformations to modify and eliminate dependences.- Transformation of iterative and recursive constructs.- Compiling for distributed memory machines.- Solving Diophantine equations.- A guide to further reading.
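The dependence analyses described above are what distinguish loops whose iterations are independent, and therefore parallelizable, from loops that carry a dependence between iterations; the C fragment below illustrates both cases and is a generic example, not one from the book.

#include <stdio.h>
#define N 8

int main(void) {
    double a[N + 1], b[N + 1];
    for (int i = 0; i <= N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* No loop-carried dependence: each iteration reads and writes only
       its own elements, so the iterations may execute in parallel. */
    for (int i = 1; i <= N; i++)
        a[i] = b[i] + 1.0;

    /* Loop-carried (flow) dependence: iteration i reads a[i-1], which is
       written by iteration i-1, so the iterations must run in order
       unless the loop is transformed (e.g. into a parallel prefix). */
    for (int i = 1; i <= N; i++)
        a[i] = a[i - 1] + b[i];

    printf("a[N] = %f\n", a[N]);
    return 0;
}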
£26.99
Springer International Publishing AG Multithreading Architecture
Book SynopsisMultithreaded architectures now appear across the entire range of computing devices, from the highest-performing general purpose devices to low-end embedded processors. Multithreading enables a processor core to more effectively utilize its computational resources, as a stall in one thread need not cause execution resources to be idle. This enables the computer architect to maximize performance within area constraints, power constraints, or energy constraints. However, the architectural options for the processor designer or architect looking to implement multithreading are quite extensive and varied, as evidenced not only by the research literature but also by the variety of commercial implementations. This book introduces the basic concepts of multithreading, describes a number of models of multithreading, and then develops the three classic models (coarse-grain, fine-grain, and simultaneous multithreading) in greater detail. It describes a wide variety of architectural and software design tradeoffs, as well as opportunities specific to multithreading architectures. Finally, it details a number of important commercial and academic hardware implementations of multithreading. Table of Contents: Introduction / Multithreaded Execution Models / Coarse-Grain Multithreading / Fine-Grain Multithreading / Simultaneous Multithreading / Managing Contention / New Opportunities for Multithreaded Processors / Experimentation and Metrics / Implementations of Multithreaded Processors / ConclusionTable of ContentsIntroduction.- Multithreaded Execution Models.- Coarse-Grain Multithreading.- Fine-Grain Multithreading.- Simultaneous Multithreading.- Managing Contention.- New Opportunities for Multithreaded Processors.- Experimentation and Metrics.- Implementations of Multithreaded Processors.- Conclusion.
£25.19
Springer International Publishing AG Resilient Architecture Design for Voltage Variation
Book SynopsisShrinking feature size and diminishing supply voltage are making circuits sensitive to supply voltage fluctuations within the microprocessor, caused by normal workload activity changes. If left unattended, voltage fluctuations can lead to timing violations or even transistor lifetime issues that degrade processor robustness. Mechanisms that learn to tolerate, avoid, and eliminate voltage fluctuations based on program and microarchitectural events can help steer the processor clear of danger, thus enabling tighter voltage margins that improve performance or lower power consumption. We describe the problem of voltage variation and the factors that influence this variation during processor design and operation. We also describe a variety of runtime hardware and software mitigation techniques that either tolerate, avoid, and/or eliminate voltage violations. We hope processor architects will find the information useful since tolerance, avoidance, and elimination are generalizable constructs that can serve as a basis for addressing other reliability challenges as well. Table of Contents: Introduction / Modeling Voltage Variation / Understanding the Characteristics of Voltage Variation / Traditional Solutions and Emerging Solution Forecast / Allowing and Tolerating Voltage Emergencies / Predicting and Avoiding Voltage Emergencies / Eliminating Recurring Voltage Emergencies / Future Directions on ResiliencyTable of ContentsIntroduction.- Modeling Voltage Variation.- Understanding the Characteristics of Voltage Variation.- Traditional Solutions and Emerging Solution Forecast.- Allowing and Tolerating Voltage Emergencies.- Predicting and Avoiding Voltage Emergencies.- Eliminating Recurring Voltage Emergencies.- Future Directions on Resiliency.
£26.59
Springer International Publishing AG Shared-Memory Synchronization
Book SynopsisThis book offers a comprehensive survey of shared-memory synchronization, with an emphasis on “systems-level” issues. It includes sufficient coverage of architectural details to understand correctness and performance on modern multicore machines, and sufficient coverage of higher-level issues to understand how synchronization is embedded in modern programming languages.The primary intended audience for this book is “systems programmers”—the authors of operating systems, library packages, language run-time systems, concurrent data structures, and server and utility programs. Much of the discussion should also be of interest to application programmers who want to make good use of the synchronization mechanisms available to them, and to computer architects who want to understand the ramifications of their design decisions on systems-level code. Table of ContentsIntroduction.- Architectural Background.- Essential Theory.- Practical Spin Locks.- Busy-wait Synchronization with Conditions.- Read-mostly Atomicity.- Synchronization and Scheduling.- Nonblocking Algorithms.- Transactional Memory.- Author's Biography.
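The starting point of the "Practical Spin Locks" chapter, a test-and-set lock, can be sketched with portable C11 atomics as below (an illustrative minimum, not code from the book); practical locks add backoff or queueing to curb the coherence traffic that naive spinning generates.

#include <stdatomic.h>
#include <stdio.h>

/* Minimal test-and-set spin lock built on C11 atomic_flag: acquire spins
   until the flag was previously clear; release clears it. */
typedef struct { atomic_flag held; } spinlock;

static void spin_acquire(spinlock *l) {
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;   /* busy-wait; practical locks back off or queue here */
}

static void spin_release(spinlock *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}

int main(void) {
    spinlock l = { ATOMIC_FLAG_INIT };
    spin_acquire(&l);
    printf("in critical section\n");
    spin_release(&l);
    return 0;
}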
£33.24
Springer International Publishing AG On-Chip Networks, Second Edition
Book SynopsisThis book targets engineers and researchers familiar with basic computer architecture concepts who are interested in learning about on-chip networks. This work is designed to be a short synthesis of the most critical concepts in on-chip network design. It is a resource both for understanding on-chip network basics and for providing an overview of state-of-the-art research in on-chip networks. We believe that an overview that teaches both fundamental concepts and highlights state-of-the-art designs will be of great value to both graduate students and industry engineers. While not an exhaustive text, we hope to illuminate fundamental concepts for the reader as well as identify trends and gaps in on-chip network research. With the rapid advances in this field, we felt it was timely to update and review the state of the art in this second edition. We introduce two new chapters at the end of the book. We have updated the latest research of the past years throughout the book and also expanded our coverage of fundamental concepts to include several research ideas that have now made their way into products and, in our opinion, should be textbook concepts that all on-chip network practitioners should know. For example, these fundamental concepts include message passing, multicast routing, and bubble flow control schemes.Table of ContentsPreface.- Acknowledgments.- Introduction.- Interface with System Architecture.- Topology.- Routing.- Flow Control.- Router Microarchitecture.- Modeling and Evaluation.- Case Studies.- Conclusions.- References.- Authors' Biographies.
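As a small taste of the routing material, the sketch below implements dimension-ordered (XY) routing on a 2D mesh, a standard deadlock-free algorithm for on-chip networks; it is a generic illustration rather than a design taken from the book.

#include <stdio.h>

typedef enum { LOCAL, EAST, WEST, NORTH, SOUTH } port;

/* Dimension-ordered (XY) routing on a 2D mesh: route fully in X first,
   then in Y. Because a packet never turns from the Y dimension back to
   X, the algorithm is deadlock-free on a mesh. */
static port route_xy(int cur_x, int cur_y, int dst_x, int dst_y) {
    if (dst_x > cur_x) return EAST;
    if (dst_x < cur_x) return WEST;
    if (dst_y > cur_y) return NORTH;
    if (dst_y < cur_y) return SOUTH;
    return LOCAL;                       /* arrived: eject to the core */
}

int main(void) {
    static const char *name[] = {"LOCAL", "EAST", "WEST", "NORTH", "SOUTH"};
    /* Hop-by-hop decisions for a packet travelling from (0,0) to (2,1). */
    printf("%s\n", name[route_xy(0, 0, 2, 1)]);   /* EAST  */
    printf("%s\n", name[route_xy(1, 0, 2, 1)]);   /* EAST  */
    printf("%s\n", name[route_xy(2, 0, 2, 1)]);   /* NORTH */
    printf("%s\n", name[route_xy(2, 1, 2, 1)]);   /* LOCAL */
    return 0;
}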
£37.85
Springer International Publishing AG The Datacenter as a Computer: Designing Warehouse-Scale Machines
Book SynopsisThis book describes warehouse-scale computers (WSCs), the computing platforms that power cloud computing and all the great web services we use every day. It discusses how these new systems treat the datacenter itself as one massive computer designed at warehouse scale, with hardware and software working in concert to deliver good levels of internet service performance. The book details the architecture of WSCs and covers the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. Each chapter contains multiple real-world examples, including detailed case studies and previously unpublished details of the infrastructure used to power Google's online services. Targeted at the architects and programmers of today's WSCs, this book provides a great foundation for those looking to innovate in this fascinating and important area, but the material will also be broadly interesting to those who just want to understand the infrastructure powering the internet. The third edition reflects four years of advancements since the previous edition and nearly doubles the number of pictures and figures. New topics range from additional workloads like video streaming, machine learning, and public cloud to specialized silicon accelerators, storage and network building blocks, and a revised discussion of data center power and cooling, and uptime. Further discussions of emerging trends and opportunities ensure that this revised edition will remain an essential resource for educators and professionals working on the next generation of WSCs.Table of ContentsAcknowledgements.- Introduction.- Workloads and Software Infrastructure.- WSC Hardware Building Blocks.- Data Center Basics: Building, Power, and Cooling.- Energy and Power Efficiency.- Modeling Costs.- Dealing with Failures and Repairs.- Closing Remarks.- Bibliography.- Author Biographies.
£40.49
Springer International Publishing AG Quantum Computer Systems: Research for Noisy Intermediate-Scale Quantum Computers
Book SynopsisThis book targets computer scientists and engineers who are familiar with concepts in classical computer systems but are curious to learn the general architecture of quantum computing systems. It gives a concise presentation of this new paradigm of computing from a computer systems' point of view without assuming any background in quantum mechanics. As such, it is divided into two parts. The first part of the book provides a gentle overview on the fundamental principles of the quantum theory and their implications for computing. The second part is devoted to state-of-the-art research in designing practical quantum programs, building a scalable software systems stack, and controlling quantum hardware components. Most chapters end with a summary and an outlook for future directions. This book celebrates the remarkable progress that scientists across disciplines have made in the past decades and reveals what roles computer scientists and engineers can play to enable practical-scale quantum computing.Table of ContentsPreface.- Acknowledgments.- List of Notations.- Introduction.- Think Quantumly About Computing.- Quantum Application Design.- Optimizing Quantum Systems--An Overview.- Quantum Programming Languages.- Circuit Synthesis and Compilation.- Microarchitecture and Pulse Compilation.- Noise Mitigation and Error Correction.- Classical Simulation of Quantum Computation.- Concluding Remarks.- Bibliography.- Authors' Biographies.
£49.49
Springer International Publishing AG Optimization and Mathematical Modeling in Computer Architecture
Book SynopsisIn this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms traditional design exploration techniques. This book should help a skilled systems designer to learn techniques for using MILP in their problems, and the skilled optimization expert to understand the types of computer systems problems that MILP can be applied to.Table of ContentsAcknowledgments.- Introduction.- An Overview of Optimization.- Case Study: Instruction Set Customization.- Case Study: Data Center Resource Management.- Case Study: Spatial Architecture Scheduling.- Case Study: Resource Allocation in Tiled Architectures.- Conclusions.- Bibliography.- Authors' Biographies.
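For readers new to the framework, a mixed integer linear program (MILP) has the standard general form below (a textbook definition, not notation specific to this book); the case studies encode design decisions, such as which custom instruction to add or which tile receives which task, as the integer-constrained variables.

\[
\min_{x}\; c^{\top} x
\quad \text{subject to} \quad
A x \le b, \qquad
x_i \in \mathbb{Z} \;\; \text{for } i \in I, \qquad
x_j \ge 0 \;\; \text{for } j \notin I,
\]

where the index set \(I\) marks the integer-constrained variables and the remaining variables are continuous. Modern solvers handle the continuous relaxation with linear programming and close the integrality gap by branch and bound, which is what makes the balance between solver time and expressiveness attractive for design-space exploration.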
£25.19
Springer International Publishing AG On-Chip Photonic Interconnects: A Computer Architect's Perspective
Book SynopsisAs the number of cores on a chip continues to climb, architects will need to address both bandwidth and power consumption issues related to the interconnection network. Electrical interconnects are not likely to scale well to a large number of processors for energy efficiency reasons, and the problem is compounded by the fact that there is a fixed total power budget for a die, dictated by the amount of heat that can be dissipated without special (and expensive) cooling and packaging techniques. Thus, there is a need to seek alternatives to electrical signaling for on-chip interconnection applications. Photonics, which has a fundamentally different mechanism of signal propagation, offers the potential to not only overcome the drawbacks of electrical signaling, but also enable the architect to build energy efficient, scalable systems. The purpose of this book is to introduce computer architects to the possibilities and challenges of working with photons and designing on-chip photonic interconnection networks.Table of ContentsList of Figures.- List of Tables.- List of Acronyms.- Acknowledgments.- Introduction.- Photonic Interconnect Basics.- Link Construction.- On-Chip Photonic Networks.- Challenges.- Other Developments.- Summary and Conclusion.- Bibliography.- Authors' Biographies.
£25.19
Springer International Publishing AG Euro-Par 2022: Parallel Processing: 28th International Conference on Parallel and Distributed Computing, Glasgow, UK, August 22–26, 2022, Proceedings
Book SynopsisThis book constitutes the proceedings of the 28th International Conference on Parallel and Distributed Computing, Euro-Par 2022, held in Glasgow, UK, in August 2022. The 25 full papers presented in this volume were carefully reviewed and selected from 102 submissions. The conference Euro-Par 2022 covers all aspects of parallel and distributed computing, ranging from theory to practice, scaling from the smallest to the largest parallel and distributed systems, from fundamental computational problems and models to full-fledged applications, from architecture and interface design and implementation to tools, infrastructures, and applications. Table of ContentsCompilers, Tools and Environments.- Performance and Power Modeling, Prediction and Evaluation.- Scheduling and Load Balancing.- Data Management, Analytics and Machine Learning.- Cluster and Cloud Computing.- Theory and Algorithms for Parallel and Distributed Processing.- Parallel and Distributed Programming, Interfaces, and Languages.- Multicore and Manycore Parallelism.- Parallel Numerical Methods and Applications.
£53.99