Algorithms and Data Structures Books

479 products


  • An Introduction to Kolmogorov Complexity and Its

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This must-read textbook presents an essential introduction to Kolmogorov complexity (KC), a central theory and powerful tool in information science that deals with the quantity of information in individual objects. The text covers both the fundamental concepts and the most important practical applications, supported by a wealth of didactic features. This thoroughly revised and enhanced fourth edition includes new and updated material on, amongst other topics, the Miller-Yu theorem, the Gács-Kučera theorem, the Day-Gács theorem, increasing randomness, short lists computable from an input string containing the incomputable Kolmogorov complexity of the input, the Lovász local lemma, sorting, the algorithmic full Slepian-Wolf theorem for individual strings, multiset normalized information distance and normalized web distance, and the conditional universal distribution.

    Topics and features: describes the mathematical theory of KC, including the theories of algorithmic complexity and algorithmic probability; presents a general theory of inductive reasoning and its applications, and reviews the utility of the incompressibility method; covers the practical application of KC in great detail, including the normalized information distance (the similarity metric) and information diameter of multisets in phylogeny, language trees, music, heterogeneous files, and clustering; discusses the many applications of resource-bounded KC, and examines different physical theories from a KC point of view; includes numerous examples that elaborate the theory, and a range of exercises of varying difficulty (with solutions); offers explanatory asides on technical issues, and extensive historical sections; suggests structures for several one-semester courses in the preface.

    As the definitive textbook on Kolmogorov complexity, this comprehensive and self-contained work is an invaluable resource for advanced undergraduate students, graduate students, and researchers in all fields of science.

    Table of Contents: Preliminaries.- Algorithmic Complexity.- Algorithmic Prefix Complexity.- Algorithmic Probability.- Inductive Reasoning.- The Incompressibility Method.- Resource-Bounded Complexity.- Physics, Information, and Computation.


    £71.99
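The normalized information distance mentioned in the synopsis above is uncomputable; in practice it is approximated as the normalized compression distance (NCD), substituting a real compressor's output length for Kolmogorov complexity. A minimal sketch, assuming Python's zlib as the compressor (the function names are for this illustration only):

```python
import zlib

def compressed_len(data: bytes) -> int:
    # zlib's compressed length serves as a crude upper bound on
    # the Kolmogorov complexity C(data).
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: the practical approximation of
    # the (uncomputable) normalized information distance NID(x, y).
    cx, cy = compressed_len(x), compressed_len(y)
    cxy = compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

doc_a = b"the quick brown fox jumps over the lazy dog " * 20
doc_b = b"the quick brown fox jumps over the lazy cat " * 20
doc_c = b"completely unrelated text about spanning trees " * 20

# Similar inputs compress well together, so their NCD is smaller.
print(ncd(doc_a, doc_b) < ncd(doc_a, doc_c))  # → True
```

The same pairwise distance matrix drives the compression-based clustering applications (phylogeny, language trees, music) the synopsis lists.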

  • Data Structures and Algorithms with Scala: A

    Springer Nature Switzerland AG

    1 in stock

    Book Synopsis: This practically focused textbook presents a concise tutorial on data structures and algorithms using the object-functional language Scala. The material builds upon the foundation established in the title Programming with Scala: Language Exploration by the same author, which can be treated as a companion text for those less familiar with Scala.

    Topics and features: discusses data structures and algorithms in the form of design patterns; covers key topics on arrays, lists, stacks, queues, hash tables, binary trees, sorting, searching, and graphs; describes examples of complete and running applications for each topic; presents a functional approach to implementations for data structures and algorithms (excepting arrays); provides numerous challenge exercises (with solutions), encouraging the reader to take existing solutions and improve upon them; offers insights from the author's extensive industrial experience; includes a glossary, and an appendix supplying an overview of discrete mathematics.

    Highlighting the techniques and skills necessary to quickly derive solutions to applied problems, this accessible text will prove invaluable to time-pressured students and professional software engineers.

    Table of Contents: Foundational Components.- Fundamental Algorithms.- Arrays.- Lists.- Stacks.- Queues.- Hash Tables.- Binary Trees.- Sorting.- Searching.- Graphs.- Appendix A: Solutions for Selected Exercises.- Appendix B: Review of Discrete Mathematical Topics.


    £31.34

  • Quantum Technology and Optimization Problems: First International Workshop, QTOP 2019, Munich, Germany, March 18, 2019, Proceedings

    Springer Nature Switzerland AG

    1 in stock

    Book Synopsis: This book constitutes the refereed proceedings of the First International Workshop on Quantum Technology and Optimization Problems, QTOP 2019, held in Munich, Germany, in March 2019. The 18 full papers presented together with 1 keynote paper in this volume were carefully reviewed and selected from 21 submissions. The papers are grouped in the following topical sections: analysis of optimization problems; quantum gate algorithms; applications of quantum annealing; and foundations and quantum technologies.

    Table of Contents: Analysis of Optimization Problems.- Quantum Gate Algorithms.- Applications of Quantum Annealing.- Foundations and Quantum Technologies.


    £58.49

  • Quality, Reliability, Security and Robustness in Heterogeneous Systems: 14th EAI International Conference, Qshine 2018, Ho Chi Minh City, Vietnam, December 3–4, 2018, Proceedings

    Springer Nature Switzerland AG

    1 in stock

    Book Synopsis: This book constitutes the refereed post-conference proceedings of the 14th EAI International Conference on Quality, Reliability, Security and Robustness in Heterogeneous Networks, QShine 2018, held in Ho Chi Minh City, Vietnam, in December 2018. The 13 revised full papers were carefully reviewed and selected from 28 submissions. The papers are organized thematically in tracks: security and privacy; telecommunication systems and networks; and networks and applications.

    Table of Contents: Improving Privacy for GeoIP DNS Traffic.- Deep Reinforcement Learning Based QoS-Aware Routing in Knowledge-Defined Networking.- Throughput Optimization for Multirate Multicasting through Association Control in IEEE 802.11 WLAN.- An NS-3 MPTCP Implementation.- A Novel Security Framework for Industrial IoT Based on ISA 100.11a.- Social-Aware Caching and Resource Sharing Optimization for Video Delivering in 5G Networks.- Energy Efficiency in QoS Constrained 60 GHz Millimeter-Wave Ultra-Dense Networks.- Priority-Based Device Discovery in Public Safety D2D Networks with Full Duplexing.- Modified Direct Method for Point-to-Point Blocking Probability in Multi-Service Switching Networks with Resource Allocation Control.- Inconsistencies among Spectral Robustness Metrics.- QoS Criteria for Energy-Aware Switching Networks.- Modelling Overflow Systems with Queuing in Primary.- Exploring YouTube's CDN Heterogeneity.


    £34.19

  • Computational Intelligence in Music, Sound, Art and Design: 8th International Conference, EvoMUSART 2019, Held as Part of EvoStar 2019, Leipzig, Germany, April 24–26, 2019, Proceedings

    Springer Nature Switzerland AG

    1 in stock

    Book Synopsis: This book constitutes the refereed proceedings of the 8th International Conference on Computational Intelligence in Music, Sound, Art and Design, EvoMUSART 2019, held in Leipzig, Germany, in April 2019, co-located with the Evo*2019 events EuroGP, EvoCOP and EvoApplications. The 16 revised full papers presented were carefully reviewed and selected from 24 submissions. The papers cover a wide range of topics and application areas, including: visual art and music generation, analysis, and interpretation; sound synthesis; architecture; video; poetry; design; and other creative tasks.

    Table of Contents: Deep Learning Concepts for Evolutionary Art.- Adversarial Evolution and Deep Learning – How Does An Artist Play with Our Visual System.- Autonomy, Authenticity, Authorship and Intention in Computer Generated Art.- Camera Obscurer: Generative Art for Design Inspiration.- Swarm-Based Identification of Animation Key Points from 2D-medialness Maps.- Paintings, Polygons and Plant Propagation.- Evolutionary Games for Audiovisual Works: Exploring the Demographic Prisoner's Dilemma.- Emojinating: Evolving Emoji Blends.- Automatically Generating Engaging Presentation Slide Decks.- Tired of Choosing? Just Add Structure and Virtual Reality.- EvoChef: Show Me What to Cook! Artificial Evolution of Culinary Arts.- Comparing Models for Harmony Prediction in an Interactive Audio Looper.- Stochastic Synthesizer Patch Exploration in Edisyn.- Evolutionary Multi-Objective Training Set Selection of Data Instances and Augmentations for Vocal Detection.- Automatic Jazz Melody Composition through a Learning-Based Genetic Algorithm.- Exploring Transfer Functions in Evolved CTRNNs for Music Generation.


    £44.99

  • Theory of Information and its Value

    Springer Nature Switzerland AG

    1 in stock

    Book Synopsis: This English version of Ruslan L. Stratonovich's Theory of Information (1975) builds on the original theory and provides the methods, techniques, and concepts needed to put it to use in critical applications. Unifying theories of information, optimization, and statistical physics, the value of information theory has gained recognition in data science, machine learning, and artificial intelligence. With the emergence of a data-driven economy, progress in machine learning and artificial intelligence algorithms, and increased computational resources, a thorough grasp of information is essential, and the book is even more relevant today than when it was first published in 1975. It extends the classic work of R. L. Stratonovich, one of the original developers of the symmetrized version of stochastic calculus and of filtering theory, to name just two topics.

    Each chapter begins with basic, fundamental ideas, supported by clear examples; the material then advances to great detail and depth. The reader is not required to be familiar with the more difficult and specific material. Rather, the treasure trove of examples of stochastic processes and problems makes this book accessible to a wide readership of researchers, postgraduates, and undergraduate students in mathematics, engineering, physics and computer science who are specializing in information theory, data analysis, or machine learning.

    Trade Review: "The book could be useful in advanced graduate courses with students who are not afraid of integrals and probabilities." (Jaak Henno, zbMATH 1454.94002, 2021)


    £89.99

  • Sequential and Parallel Algorithms and Data

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This textbook is a concise introduction to the basic toolbox of structures that allow efficient organization and retrieval of data, key algorithms for problems on graphs, and generic techniques for modeling, understanding, and solving algorithmic problems. The authors aim for a balance between simplicity and efficiency, between theory and practice, and between classical results and the forefront of research. Individual chapters cover arrays and linked lists, hash tables and associative arrays, sorting and selection, priority queues, sorted sequences, graph representation, graph traversal, shortest paths, minimum spanning trees, optimization, collective communication and computation, and load balancing. The authors also discuss important issues such as algorithm engineering, memory hierarchies, algorithm libraries, and certifying algorithms. Moving beyond the sequential algorithms and data structures of the earlier related title, this book takes into account the paradigm shift towards the parallel processing required by modern performance-critical applications, and how this impacts the teaching of algorithms.

    The book is suitable for undergraduate and graduate students and professionals familiar with programming and basic mathematical language. Most chapters have the same basic structure: the authors discuss a problem as it occurs in a real-life situation, illustrate the most important applications, and then introduce simple solutions as informally as possible and as formally as necessary, so the reader really understands the issues at hand. As they move to more advanced and optional issues, their approach gradually leads to a more mathematical treatment, including theorems and proofs. The book includes many examples, pictures, informal explanations, and exercises, and the implementation notes introduce clean, efficient implementations in languages such as C++ and Java.

    Trade Review: "The style of the book is accessible and is suitable for a wide range of audiences, from mathematicians and computer scientists to researchers from other fields who would like to use parallelised approaches in their research." (Irina Ioana Mohorianu, zbMATH 1445.68003, 2020)

    Table of Contents: Appetizer: Integer Arithmetic.- Introduction.- Representing Sequences by Arrays and Linked Lists.- Hash Tables and Associative Arrays.- Sorting and Selection.- Priority Queues.- Sorted Sequences.- Graph Representation.- Graph Traversal.- Shortest Paths.- Minimum Spanning Trees.- Generic Approaches to Optimization.- Collective Communication and Computation.- Load Balancing.- App. A: Mathematical Background.- App. B: Computer Architecture Aspects.- App. C: Support for Parallelism in C++.- App. D: The Message Passing Interface (MPI).- App. E: List of Commercial Products, Trademarks and Licenses.


    £39.99
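As a taste of the shortest-paths material in the graph chapters described above, here is a minimal sketch of Dijkstra's algorithm with a binary heap; the adjacency-list format and example graph are assumptions for this illustration, not taken from the book:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
    "d": [],
}
print(dijkstra(g, "a"))  # → {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

The lazy-deletion pattern (skipping stale heap entries instead of decreasing keys) is the standard idiom when the priority queue, as here, does not support decrease-key.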

  • Analysis of Experimental Algorithms: Special Event, SEA² 2019, Kalamata, Greece, June 24-29, 2019, Revised Selected Papers

    Springer Nature Switzerland AG

    1 in stock

    Book Synopsis: This book constitutes the refereed post-conference proceedings of the Special Event on the Analysis of Experimental Algorithms, SEA² 2019, held in Kalamata, Greece, in June 2019. The 35 revised full papers presented were carefully reviewed and selected from 45 submissions. The papers cover a wide range of topics in both computer science and operations research/mathematical programming. They focus on the role of experimentation and engineering techniques in the design and evaluation of algorithms, data structures, and computational optimization methods.


    £62.99

  • Graph Drawing and Network Visualization: 27th

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This book constitutes the refereed proceedings of the 27th International Symposium on Graph Drawing and Network Visualization, GD 2019, held in Prague, Czech Republic, in September 2019. The 42 papers and 12 posters presented in this volume were carefully reviewed and selected from 113 submissions. They were organized into the following topical sections: Cartograms and Intersection Graphs; Geometric Graph Theory; Clustering; Quality Metrics; Arrangements; A Low Number of Crossings; Best Paper in Track 1; Morphing and Planarity; Parameterized Complexity; Collinearities; Topological Graph Theory; Best Paper in Track 2; Level Planarity; Graph Drawing Contest Report; and Poster Abstracts.

    Table of Contents: Cartograms and Intersection Graphs.- Stick Graphs with Length Constraints.- Representing Graphs and Hypergraphs by Touching Polygons in 3D.- Optimal Morphs of Planar Orthogonal Drawings II.- Computing Stable Demers Cartograms.- Geometric Graph Theory.- Bundled Crossings Revisited.- Crossing Numbers of Beyond-Planar Graphs.- On the 2-Colored Crossing Number.- Minimal Representations of Order Types by Geometric Graphs.- Balanced Schnyder Woods for Planar Triangulations: An Experimental Study with Applications to Graph Drawing and Graph Separators.- Clustering.- A Quality Metric for Visualization of Clusters in Graphs.- Multi-level Graph Drawing Using Infomap Clustering.- On Strict (Outer-)Confluent Graphs.- Quality Metrics.- On the Edge-Length Ratio of Planar Graphs.- Node Overlap Removal Algorithms: A Comparative Study.- Graphs with Large Total Angular Resolution.- Arrangements.- Computing Height-Optimal Tangles Faster.- On Arrangements of Orthogonal Circles.- Extending Simple Drawings.- Coloring Hasse Diagrams and Disjointness Graphs of Curves.- A Low Number of Crossings.- Efficient Generation of Different Topological Representations of Graphs Beyond-Planarity.- The QuaSEFE Problem.- ChordLink: A New Hybrid Visualization Model.- Stress-Plus-X (SPX) Graph Layout.- Best Paper in Track 1.- Exact Crossing Number Parameterized by Vertex Cover.- Morphing and Planarity.- Maximizing Ink in Partial Edge Drawings of k-Plane Graphs.- Graph Drawing with Morphing Partial Edges.- A Note on Universal Point Sets for Planar Graphs.- Parameterized Complexity.- Parameterized Algorithms for Book Embedding Problems.- Sketched Representations and Orthogonal Planarity of Bounded Treewidth Graphs.- Collinearities.- 4-Connected Triangulations on Few Lines.- Line and Plane Cover Numbers Revisited.- Drawing Planar Graphs with Few Segments on a Polynomial Grid.- Variants of the Segment Number of a Graph.- Topological Graph Theory.- Local and Union Page Numbers.- Mixed Linear Layouts: Complexity, Heuristics, and Experiments.- Homotopy Height, Grid-Major Height and Graph-Drawing Height.- On the Edge-Vertex Ratio of Maximal Thrackles.- Best Paper in Track 2.- Symmetry Detection and Classification in Drawings of Graphs.- Level Planarity.- An SPQR-Tree-Like Embedding Representation for Upward Planarity.- A Natural Quadratic Approach to the Generalized Graph Layering Problem.- Graph Stories in Small Area.- Level-Planar Drawings with Few Slopes.- Graph Drawing Contest Report.- Graph Drawing Contest Report.- Poster Abstracts.- A 1-planarity Testing and Embedding Algorithm.- Stretching Two Pseudolines in Planar Straight-Line Drawings.- Adventures in Abstraction: Reachability in Hierarchical Drawings.- On Topological Book Embedding for k-Plane Graphs.- On Compact RAC Drawings.- FPQ-choosable Planarity Testing.- Packing Trees into 1-Planar Graphs.- Geographic Network Visualization Techniques: A Work-In-Progress Taxonomy.- On the Simple Quasi Crossing Number of K11.- Minimising Crossings in a Tree-Based Network.- Crossing Families and Their Generalizations.- Which Sets of Strings are Pseudospherical?


    £42.74

  • Verification, Model Checking, and Abstract Interpretation: 21st International Conference, VMCAI 2020, New Orleans, LA, USA, January 16–21, 2020, Proceedings

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This book constitutes the proceedings of the 21st International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2020. The 21 papers presented in this volume were carefully reviewed and selected from 44 submissions. VMCAI provides a forum for researchers from the communities of verification, model checking, and abstract interpretation, facilitating interaction, cross-fertilization, and the advancement of hybrid methods that combine these and related areas.

    Table of Contents: Witnessing Secure Compilation.- BackFlow: Backward Context-sensitive Flow Reconstruction of Taint Analysis Results.- Fixing Code That Explodes Under Symbolic Evaluation.- The Correctness of a Code Generator for a Functional Language.- Leveraging Compiler Intermediate Representation for Multi- and Cross-Language Verification.- Putting the Squeeze on Array Programs: Loop Verification via Inductive Rank Reduction.- A Systematic Approach to Abstract Interpretation of Program Transformations.- Sharing Ghost Variables in a Collection of Abstract Domains.- Harnessing Static Analysis to Help Learn Pseudo-Inverses of String Manipulating Procedures for Automatic Test Generation.- Synthesizing Environment Invariants for Modular Hardware Verification.- Systematic Classification of Attackers via Bounded Model Checking.- Cheap CTL Compassion in NuSMV.- A Cooperative Parallelization Approach for Property-Directed k-Induction.- Generalized Property-Directed Reachability for Hybrid Systems.- Language Inclusion for Finite Prime Event Structures.- Promptness and Bounded Fairness in Concurrent and Parameterized Systems.- Solving LIA* Using Approximations.- Formalizing and Checking Multilevel Consistency.- Practical Abstractions for Automated Verification of Shared-Memory Concurrency.- How to Win First-Order Safety Games.- Improving Parity Game Solvers with Justifications.


    £66.49

  • Distributed Computing for Emerging Smart Networks: First International Workshop, DiCES-N 2019, Hammamet, Tunisia, October 30, 2019, Revised Selected Papers

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This book contains extended versions of the best papers presented at the First International Workshop on Distributed Computing for Emerging Smart Networks, DiCES-N 2019, held in Hammamet, Tunisia, in October 2019. The 9 revised full papers included in this volume were carefully reviewed and selected from 24 initial submissions. The papers are organized in the following topical sections: intelligent transportation systems; distributed computing for networking and communication; artificial intelligence applied to cyber-physical systems.

    Table of Contents: Intelligent Transportation Systems.- Distributed Computing for Networking and Communication.- Artificial Intelligence Applied to Cyber-Physical Systems.


    £52.24

  • Complexity and Approximation: In Memory of Ker-I Ko

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This Festschrift is in honor of Ker-I Ko, Professor at Stony Brook University, USA. Ker-I Ko was one of the founding fathers of computational complexity over real numbers and analysis. He and Harvey Friedman devised a theoretical model for real-number computations by extending the computation of Turing machines. He contributed significantly to advancing the theory of structural complexity, especially on polynomial-time isomorphism, instance complexity, and the relativization of the polynomial-time hierarchy. Ker-I also made many contributions to the theory of approximation algorithms for combinatorial optimization problems. This volume contains 17 contributions in the area of complexity and approximation, authored by researchers from around the world, including North America, Europe and Asia; most of them are co-authors, colleagues, friends, and students of Ker-I Ko.

    Table of Contents: In Memoriam: Ker-I Ko (1950-2018).- Ker-I Ko and the Study of Resource-Bounded Kolmogorov Complexity.- The Power of Self-Reducibility: Selectivity, Information, and Approximation.- Who Asked Us? How the Theory of Computing Answers Questions About Analysis.- Promise Problems on Probability Distributions.- On Nonadaptive Reductions to the Set of Random Strings and Its Dense Subsets.- Computability of the Solutions to Navier-Stokes Equations via Recursive Approximation.- Automatic Generation of Structured Overviews over a Very Large Corpus of Documents.- Better Upper Bounds for Searching on a Line with Byzantine Robots.- A Survey on Double Greedy Algorithms for Maximizing Non-monotone Submodular Functions.- Sequential Location Game on Continuous Directional Star Networks.- Core Decomposition, Maintenance and Applications.- Active and Busy Time Scheduling Problem: A Survey.- A Note on the Position Value for Hypergraph Communication Situations.- An Efficient Approximation Algorithm for the Steiner Tree Problem.- A Review for Submodular Optimization on Machine Scheduling Problems.- Edge Computing Integrated with Blockchain Technologies.


    £52.24

  • Treewidth, Kernels, and Algorithms: Essays Dedicated to Hans L. Bodlaender on the Occasion of His 60th Birthday

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This Festschrift was published in honor of Hans L. Bodlaender on the occasion of his 60th birthday. The 14 full and 5 short contributions included in this volume show the many transformative discoveries made by H. L. Bodlaender in the areas of graph algorithms, parameterized complexity, kernelization and combinatorial games. The papers are written by his former Ph.D. students and colleagues, as well as by his former Ph.D. advisor, Jan van Leeuwen. The chapter "Crossing Paths with Hans Bodlaender: A Personal View on Cross-Composition for Sparsification Lower Bounds" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

    Table of Contents: Seeing Arboretum for the (partial k) Trees.- Collaborating With Hans: Some Remaining Wonderments.- Hans Bodlaender and the Theory of Kernelization Lower Bounds.- Algorithms, Complexity, and Hans.- Lower Bounds for Dominating Set in Ball Graphs and for Weighted Dominating Set in Unit-Ball Graphs.- As Time Goes By: Reflections on Treewidth for Temporal Graphs.- Possible and Impossible Attempts to Solve the Treewidth Problem via ILPs.- Crossing Paths with Hans Bodlaender: A Personal View on Cross-Composition for Sparsification Lower Bounds.- Efficient Graph Minors Theory and Parameterized Algorithms for (Planar) Disjoint Paths.- Four Short Stories on Surprising Algorithmic Uses of Treewidth.- Algorithms for NP-Hard Problems via Rank-Related Parameters of Matrices.- A Survey on Spanning Tree Congestion.- Surprising Applications of Treewidth Bounds for Planar Graphs.- Computing Tree Decompositions.- Experimental Analysis of Treewidth.- A Retrospective on (Meta) Kernelization.- Games, Puzzles and Treewidth.- Fast Algorithms for Join Operations on Tree Decompositions.


    £52.24

  • Blockchain and Distributed Ledger Technology Use

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: Blockchain and other trustless systems have gone from being relatively obscure technologies, known only to a small community of computer scientists and cryptologists, to mainstream phenomena that are now considered powerful game changers for many industries. This book explores and assesses real-world use cases and case studies on blockchain and related technologies. The studies describe the respective applications and address how these technologies have been deployed, the rationale behind their application, and their outcomes. The book shares a wealth of experiences and lessons learned regarding financial markets, energy, supply chain management, healthcare, law and compliance. Given its scope, it is chiefly intended for academics and practitioners who want to learn more about blockchain applications.

    Table of Contents: Toward More Rigorous Blockchain Research: Recommendations for Writing Blockchain Case Studies.- From a Use Case Categorization Scheme Towards a Maturity Model for Engineering Distributed Ledgers.- What's In The Box? Combating Counterfeit Medications in Pharmaceutical Supply Chains With Blockchain Vigilant Information Systems.- A Use Case of Blockchain in Healthcare: Allergy Card.- International Exchange of Financial Information on Distributed Ledgers: Outlook and Design Blueprint.- A Blockchain Supported Solution for Compliant Digital Security Offerings.- A Blockchain-Driven Approach to Fulfill the GDPR Recording Requirements.- Wibson: A Case Study of a Decentralized, Privacy-Preserving Data Marketplace.- Business Process Transformation in Natural Resources Development Using Blockchain: Indigenous Entrepreneurship, Trustless Technology, and Rebuilding Trust.- Smart City Applications on the Blockchain: Development of a Multi-Layer Taxonomy.- A Case Study of Blockchain-Induced Digital Transformation in the Public Sector.- Analyzing the Potential of DLT-Based Applications in Smart Factories.- Disrupting Platform Organizations with Blockchain Technology and the Internet of Things?.- Using Blockchain for Online Multimedia Management: Characteristics of Existing Platforms.- Supply Chain Visibility Ledger.


    £151.99

  • Persuasive Technology. Designing for Future Change: 15th International Conference on Persuasive Technology, PERSUASIVE 2020, Aalborg, Denmark, April 20–23, 2020, Proceedings

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This book constitutes the refereed proceedings of the 15th International Conference on Persuasive Technology, PERSUASIVE 2020, held in Aalborg, Denmark, in April 2020. The 18 full papers presented in this book were carefully reviewed and selected from 79 submissions. The papers are grouped in the following topical sections: methodological and theoretical perspectives on persuasive design; persuasive in practice, digital insights; persuasive technologies for health and wellbeing; persuasive solutions for a sustainable future; and security and ethics in persuasive technology.

    Table of Contents: Methodological and Theoretical Perspectives on Persuasive Design.- Persuasive in Practice, Digital Insights.- Persuasive Technologies for Health and Wellbeing.- Persuasive Solutions for a Sustainable Future.- On Security and Ethics in Persuasive Technology.


    £47.49

  • Primer for Data Analytics and Graduate Study in Statistics

    Springer Nature Switzerland AG

    15 in stock

    This book is specially designed to refresh and elevate the level of understanding of the foundational background in probability and distributional theory required to be successful in a graduate-level statistics program. Advanced undergraduate students and introductory graduate students from a variety of quantitative backgrounds will benefit from the transitional bridge that this volume offers, from a more generalized study of undergraduate mathematics and statistics to the career-focused, applied education at the graduate level. In particular, it focuses on growing fields that will be of potential interest to future M.S. and Ph.D. students, as well as advanced undergraduates heading directly into the workplace: data analytics, statistics and biostatistics, and related areas.


    £71.24

  • Universal Access in Human-Computer Interaction. Design Approaches and Supporting Technologies: 14th International Conference, UAHCI 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 202

    Springer Nature Switzerland AG

    15 in stock

    Book Synopsis: This two-volume set of LNCS 12188 and 12189 constitutes the refereed proceedings of the 14th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2020, held as part of the 22nd International Conference, HCI International 2020, which took place in Copenhagen, Denmark, in July 2020. The conference was held virtually due to the COVID-19 pandemic. A total of 1439 papers and 238 posters were accepted for publication in the HCII 2020 proceedings from a total of 6326 submissions. UAHCI 2020 includes a total of 80 regular papers, organized in topical sections named: Design for All Theory, Methods and Practice; User Interfaces and Interaction Techniques for Universal Access; Web Accessibility; Virtual and Augmented Reality for Universal Access; Robots in Universal Access; Technologies for Autism Spectrum Disorders; Technologies for Deaf Users; Universal Access to Learning and Education; Social Media, Digital Services, eInclusion and Innovation; Intelligent Assistive Environments.

    Table of Contents: Universal Design of ICT: A Historical Journey from Specialized Adaptations towards Designing for Diversity.- From Accessible Interfaces to Useful and Adapted Interactions.- Integrated Assistive Auxiliary System - Developing Low Cost Assistive Technology to Provide Computational Accessibility for Disabled People.- Co-creating Persona Scenarios with Diverse Users Enriching Inclusive Design.- Construction of an Inexpensive Eye Tracker for Social Inclusion and Education.- Understanding Organizations through Systems Oriented Design: Mapping Critical Intervention Points for Universal Design.- Process Modelling (BPM) in Healthcare – Breast Cancer Screening.- Brain-Computer Interfaces for Communication in Severe Acquired Brain Damage: Challenges and Strategies in Clinical Research and Development.- Evaluating Hands-on and Hands-free Input Methods for a Simple Game.- Affective Communication Enhancement System for Locked-In Syndrome Patients.- Perceived Midpoint of the Forearm.- User Interfaces in Dark Mode During Daytime – Improved Productivity or Just Cool-Looking?.- Usability Evaluation of Short Dwell-time Activated Eye Typing Techniques.- A Comparative Study of Three Sudoku Input Methods for Touch Displays.- QB-Gest: Qwerty Bimanual Gestural Input for Eyes-free Smartphone Text Input.- Exploring WAI-Aria Techniques to Enhance Screen Reader Interaction: The Case of a Portal for Rating Accessibility of Cultural Heritage Sites.- Impact of Sentence Length on the Readability of Web for Screen Reader Users.- Towards Universal Accessibility on the Web: Do Grammar Checking Tools Improve Text Readability?.- Investigating the Effect of Adding Visual Content to Textual Search Interfaces on Accessibility of Dyslexic Users.- A Comparative Study of Accessibility and Usability of Norwegian University Websites for Screen Reader Users Based on User Experience and Automated Assessment.- Usability of User-Centric Mobile Application Design from Visually Impaired People's Perspective.- Large Scale Augmented Reality for Collaborative Environments.- Walking Support for Visually Impaired Using AR/MR and Virtual Braille Block.- Effect of Background Element Difference on Regional Cerebral Blood Flow while Viewing Stereoscopic Video Clips.- Dementia: I Am Physically Fading. Can Virtual Reality Help? Physical Training for People with Dementia in Confined Mental Health Units.- A Virtual Rehabilitation System for Occupational Therapy with Hand Motion Capture and Force Feedback - Implementation with Vibration Motor.- iVision: An Assistive System for the Blind Based on Augmented Reality and Machine Learning.- Relationship between Eye Movements and Individual Differences in Motion Sickness Susceptibility While Viewing Stereoscopic Movies under Controlled Consciousness.- HoloPrognosis - An AR-Based Serious Exercise Game for Early Stage Parkinson's Disease Patients.- A Feasibility Study on the Application of Virtual Reality Technology for the Rehabilitation of Upper Limbs after Stroke.- Usable and Accessible Robot Programming System for People Who Are Visually Impaired.- Lego Robots in Puppet Play for Children with Cerebral Palsy.- Being Aware of One's Self in the Auto-Generated Chat with a Communication Robot.- Voice User Interfaces for Service Robots: Design Principles and Methodology.- Robotic Cane for the Visually Impaired.

    15 in stock

    £66.49

  • Fundamentals of Data Analytics: With a View to Machine Learning

    Springer Nature Switzerland AG Fundamentals of Data Analytics: With a View to Machine Learning

    15 in stock

Book Synopsis: This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning. Table of Contents: 1 Introduction.- 2 Prerequisites from Matrix Analysis.- 3 Multivariate Distributions and Moments.- 4 Dimensionality Reduction.- 5 Classification and Clustering.- 6 Support Vector Machines.- 7 Machine Learning.- Index.
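    The dimensionality-reduction material described above centres on principal component analysis. As a rough illustration of the idea only (not the book's own code), PCA can be sketched in a few lines of NumPy via the singular value decomposition of the centred data matrix:

    ```python
    import numpy as np

    def pca_reduce(X, k):
        """Project the rows of an (n, d) data matrix onto the top-k
        principal components, found via SVD of the centred data."""
        Xc = X - X.mean(axis=0)                  # centre each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T                     # (n, k) reduced data

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))                # toy data set
    Z = pca_reduce(X, 2)
    print(Z.shape)                               # (100, 2)
    ```

    The first component captures the largest share of variance, the second the next largest, and so on; multidimensional scaling, diffusion maps and spectral clustering, also covered in the book, build on the same matrix machinery.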

    15 in stock

    £52.24

  • Powers of Two: The Information Universe —

    Springer Nature Switzerland AG Powers of Two: The Information Universe —

    15 in stock

Book Synopsis: Is everything Information? This is a tantalizing question which emerges in modern physics, life sciences, astronomy and in today’s information- and technology-driven society. In Powers of Two expert authors undertake a unique expedition - in words and images - throughout the world (and scales) of information. The story resembles, in a way, the classic Powers of Ten journeys through space: from us to the macro and the micro worlds. However, by following Powers of Two through the world of information, a completely different and timely paradigm unfolds. Every power of two (1, 2, 4, 8, …) tells us a different story: starting from the creation of the very first bit at the Big Bang and the evolution of life, through 50 years of computational science, and finally into deep space, describing the information in black holes and even in the entire universe and beyond. All this to address one question: Is our universe made of information? In this book, we experience the Information Universe in nature and in our society, and how information lies at the very foundation of our understanding of the Universe. From the Foreword by Robbert Dijkgraaf: This book is in many ways a vastly extended version of Shannon’s one-page blueprint. It carries us all the way to the total information content of the Universe. And it bears testimony of how widespread the use of data has become in all aspects of life. Information is the connective tissue of the modern sciences. […] Undoubtedly, future generations will look back at this time, so much enthralled by Big Data and quantum computers, as beholden to the information metaphor. But that is exactly the value of this book. 
With its crisp descriptions and evocative illustrations, it brings the reader into the here and now, at the very frontier of scientific research, including the excitement and promise of all the outstanding questions and future discoveries. Message for the e-reader of the book Powers of Two: The book has been designed to be read in two-page spreads in full-screen mode. For an optimal reading experience in a downloaded .pdf file we strongly recommend the following settings in Adobe Acrobat Reader: View > Page Display > Two Page View; View > Page Display > Show Cover Page in Two Page View; Preferences > Full Screen > deselect "Fill screen with one page at a time"; View > Full Screen Mode, or Ctrl+L (Cmd+L on a Mac). Note: for reading the previews on Springer Link (and on-line reading in a browser), the full-screen two-page view only works with these browsers: Firefox - at the uppermost right above the text, open the >> drop-down menu and select even double pages; full screen: F11, or Ctrl+Cmd+F on a Mac. Edge - in the middle of the taskbar, choose two-page view and select show cover page separately. Trade Review: “The book … a very unusual collection of some facts about the relationship between the immaterial world represented by bits and the real physical world described by fundamental physical equations. This book continues the very categorical point of view of J. A. Wheeler … . 
The book presents short articles on various areas of modern science … in which it is shown that in these areas in some mysterious way there is a connection with the theory of information.” (Vladimir Dzhunushaliev, zbMATH 1479.83004, 2022) Table of Contents: Foreword by Robbert Dijkgraaf. Chapter 0: Introduction. Joy-riding the Universe – by the author: Working as an astronomer, data scientist and professor of astro-informatics for nearly fifty years, Edwin Valentijn has witnessed and first-hand engineered the dawn of the era of Big Data in science and society. Throughout his career, he became increasingly aware of the role of information in our world: in computers, in our society, and even in nature and in the Universe itself. The Information Universe: Following the increasing powers of two, the story paints a journey through the whole world of information, both in society and in nature. Each step opens a door into a new world: from the first bits with the Big Bang and the dawn of life, going through fifty years of human technology, all the way up to the information content of the whole Universe. What is Information? (Item page): The basics of information are introduced. Chapter 1: The beginning. Space-time foam – Ti (0 bit: 2^0 = 1): The very first power of two, 2^0, corresponds to the value one. This identifies the single, eternal, indistinguishable state: the primordial sea from which our Universe emerged – sometimes called the space-time foam. I call this Ti, the reverse of It. This is one of the miraculous new notions in the story of the Powers of Two. Multiverse: Anthropic principle (Item page): From Ti, the primordial space-time foam, countless universes arise with widely different characteristics: the Multiverse. The Anthropic Principle is a philosophical consideration which states that we, people, will find ourselves in a universe that is suitable for intelligent life to emerge. 
On this view, the conditions in our Universe need not be “fine-tuned” to the existence of human life, and no “creator” is required. Big Bang (1 bit: 2^1 = 2 states): At the Big Bang the first bit is created. From the indistinguishable unity of the primordial foam Ti, “the zeros were separated from the 1’s”: the first bit corresponds to two possible states. This bit is the first step on our journey to capture the ever-increasing complexity of our expanding Universe in terms of information, through the increasing powers of two. What is a bit? (Item page): The bit is at the core of the concept of information. A bit is any system that can have two states. Humans assign meanings to these states, illustrated by the traffic light: red or green, stop or go. The combination of multiple bits creates an exponentially increasing number of possible states, and hence meanings. Multicellular life (2 bit: 2^2 = 4 states) / (4 bit: 2^4 = 16 states): Life started with exchanging information between cells. This is fundamental for the evolution of any kind of life. It took at least two billion years for unicellular organisms to evolve into multicellular ones around 600 million years ago, and to start the exchange of information between their different cells. By exchanging information, cells collaborate and act as a unified whole: life. The game of life (Item page): The characteristic features of life (or any complex system in the Universe) can be created from information. A simple computer game is all you need to demonstrate this concept. A famous example is Conway's Game of Life, which is full of visuals of living, growing, moving and dying objects. This game was already made on the computers of the early 70s with just a few lines of code. Chapter 2: People's Information Universe. ASCII (7 bit: 2^7 = 128 states): There is currently no physical theory of how the digital world connects to human consciousness. 
In the world of Information Technology (IT) all information exchange is based on agreements between people. For instance, ASCII, a simple list relating each letter of the alphabet to a 7-bit string, connects the digital world to human consciousness. Machu Picchu (8 bit: 2^8 = 256, 1 byte): The Intiwatana stone, a giant rock carved by the Incas of ancient Machu Picchu in Peru, can be considered a first 8-bit hard disk. Why so? As the sunrays lit the different surfaces of this huge rock throughout the year, they triggered the Incas' activities: sowing, harvesting, celebrating and praying. This ancient stone dissolves both the boundaries between heaven and earth, and those between the digital and natural Information Universe. In fact, the stone represents an ultimate picture of the cross-over between the in vivo and the in vitro Information Universe - a main theme of the book. In vitro being the man-made technology to handle information, and in vivo being the information built into nature, in this case the orbit and the light rays of the sun. First computers (16 bit: 2^16 = 65,536, 2 bytes): When computers emerged in the 1970s, astronomers first adopted them to steer their telescopes. Back then, a maximal effort to understand the mathematics of the problem was needed to squeeze the solution into the small computer memory. Nowadays, with large amounts of computing power and machine learning at their disposal, scientists and computer programmers often do the reverse. Star Peace vs. Star Wars (Item page): King Juan Carlos adored the harmony of galaxies as a source of inspiration for people on earth, in those days when Ronald Reagan was promoting his Star Wars programme. With this adoration in mind, in 1985, he gave an inspiring speech at the Royal inauguration of the international astronomical observatory on La Palma, Canary Islands. 
The inauguration was attended by, for those days, an unprecedentedly large crowd of European royals and government officials despite the great threat of terrorist attacks by the ETA. (The next and later spreads on facts vs. fakes elucidate the relevance of this spread in the story line.) Pre-internet Facts and Fakes (Item page): “Edwin Valentijn saved the life of the Dutch Queen Beatrix by catching her just before falling off a cliff at the inauguration on La Palma”, according to the headlines in Dutch newspapers. Fake news stories are alike in every era and can only be dispelled by tracing links of information to their source, links or associations being a fundamental property of the Information Universe. Later, I discuss the less innocent case of overdrawn attention to terrorist attacks in the past decade. Hard disk (24 bit: 2^24 = 1.6*10^7, 2 MB): Only sixty years ago, a 5 MB hard disk weighed over five tons, and had to be loaded onto an aeroplane by using a truck. Now, we carry a thousand times more information in our trouser pocket. This demonstrates the amazing advance of information technology over the past decades. (Picture: first IBM hard disk loaded onto a plane.) The telephone (Item page): As a precursor of the Internet, the telephone offered many of the same advantages and dangers, and was heavily discussed at its introduction. Whether telephone or Internet, it all revolves around communication or copying of information. The telephone, as an example, is one of the major discoveries of the 20th century. DNA (32 bit: 2^32 = 4*10^9, 500 MB) – Guest author: Charley Lineweaver: The information in the DNA creates life. All base pairs of the human DNA can be stored on a 500 MB drive. How is this information communicated? How does a cell know it has to build part of a liver and not an eye, while they all have the same DNA? Apoptosis and the role of information exchange. Where does biological information come from? 
(Item page) – Guest author: Charley Lineweaver: Charley Lineweaver, expert on evolutionary biology, exoplanetology and astrobiology, will expand on the role of information in the evolution of life. Lifelines (Item page) – Guest author: Morris Swertz: What is the role of nature versus that of nurture? A key question in modern health research. In the life sciences, this question is now addressed using Big Data, like the astronomers who acquire huge data volumes to address the same question on the nature of galaxies. In Lifelines, a cohort of 165,000 people is studied over a period of 30 years using hospital data, blood samples and DNA scans. DVD (33 bit: 2^33 = 9*10^9, 1 GB): It’s amazing how fast the digital image revolution has gone since 1989. Thirty years ago, the Philips lab approached me because they had made a big discovery: it was possible to store many digital images on a CD. They were chasing me for digital images. While NASA had less than a thousand, I had 32,000 galaxy images obtained by scanning photographic plates from the European Southern Observatory – the first large digital image collection. Human Brain (36 bit: 2^36 = 7*10^10, 9 GB) – Guest author: Katrin Amunts (Jülich): In the large EU Human Brain Project, the activities of the human brain are simulated in computers. This is a very difficult mission since the transistors in computers consume 100,000 billion times more energy than the synapses of neurons. Our brains consist of 10^11 neurons, corresponding to 9 GB of data. Thinking of Karlheinz Meier, coordinator of the Human Brain Project in Heidelberg, Katrin Amunts will author two spreads on the role of information in the human brain. Neuromorphic computing – Guest author: Katrin Amunts: Currently, it takes a hundred years of a supercomputer’s time to compete with the learning power of only a single day of the human brain. 
“Neuromorphic computing” researchers design electronic systems inspired by the human brain, in order to make computers many times faster and more energy efficient. CT scan (38 bit: 2^38 = 3*10^11, 34 GB) – Guest author: Anders Ynnerman: It is now possible to look inside animal and human bodies on touchscreens. Forensic investigations on, for instance, corpses of victims can be done with touch-screen tables. You can look inside, rotate, scroll and zoom animal and human bodies using tens of gigabytes of CT scan data. Prof. Anders Ynnerman explains how he does it. Terabytes (45 bit: 2^45 = 4.4*10^12, 1 TB) – The largest (astronomical) datasets: Dark energy and dark matter: two mysterious constituents of our Universe. How do astronomers get and handle the data from the VLT Survey Telescope on a high mountain top in Chile to shed light on these ‘still too dark’ topics? This telescope surveys the sky every hour at night, generating terabytes of astronomical data. Gravity as a lens (Item page) – Guest author: Margot Brouwer: When light rays are bent by the gravity of a heavy object, this object acts as a lens. This effect can be used to map dark matter, which is invisible but constitutes 80% of the matter in our visible Universe. In 1915, Albert Einstein posited that gravity is equivalent to the curvature of the fabric of space and time itself, leading to the lensing effect. Weak gravitational lensing surveys – Guest author: Margot Brouwer: Terabytes of astronomical data are reduced to a few numbers, describing how dark matter behaves and what its true nature is. https://www.youtube.com/watch?v=ZCyYGWqCmFw&t=23s Entering the Petabyte regime (53 bit: 2^53 = 1*10^15, 1 PB): How do we technically acquire and deal with petabytes of data? Dark Matter maps (Item page): A first dark matter map projected on the night sky. An ultimate encounter between the digital world of modern astronomical observations, and nature: the mysterious dark matter mapped on top of the everyday “night” stellar sky. 
A visualization that condenses terabytes of astronomical data to a simple map. Metadata for Peta-data (62 bit: 2^62 = 6*10^17, 600 PB): With pointers, one can connect everything in the Information Universe. Pointers are often inserted in metadata (data about data) - an ultimate tool for dealing with Big Data. It is possible to create unique pointers to hundreds of petabytes of data, using a string of less than 64 bits. This is what makes pointers so powerful and indispensable in current and future stages of the big data era; not only for astronomical research, but also for companies like Google, Amazon and Facebook. Downloading the Universe (Item page): The Universe can be seen as a spreadsheet, certainly in the way we map it on our computers (in vitro), but also in nature (in vivo). Perceiving the Universe as a spreadsheet links bit to It. Metadata (Item page): A visualisation of the enormous complexity of data models which trace all pointers between data items. (Picture: thrilling still from a full-dome animation of a data model.) Future (astronomical) datasets (Item page): While current telescopes collect astronomical datasets of terabytes, future telescopes such as the LSST and the Euclid satellite will collect petabytes. These enormous amounts of data need a whole new approach to data management. For the Euclid satellite my “Universe as a spreadsheet” approach has been adopted. The Euclid satellite (Item page) – Guest author: Margot Brouwer: Euclid is ESA’s new space mission to map the Dark Universe. At a distance of 1.5 million kilometres from Earth, this telescope will observe billions of galaxies. Its goal: to shed light on the nature of Dark Matter and Dark Energy, which make up 95% of our Universe. Dr. 
Margot Brouwer, Dutch scientific communication officer for Euclid, will explain more. The Information Universe (Item page): The resemblance of the overall structure of the real observed Universe (in vivo) to the simulated universe (in vitro), based on the concordance cosmological model, gave a lot of credit to the latter. When we zoom out on the Universe, we see billions of galaxies forming a web-like structure. Amazingly, astronomers can now compute and simulate these structures with very large supercomputers. The lost boy (Item page): Information is timeless, and knows no boundaries. It crosses over the in vivo and the in vitro Information Universe. This concept is well illustrated through daily-life stories involving time. At the age of five, a boy loses sight of his older brother on a train in India, and eventually gets lost on the streets of Mumbai. Twenty years later, after being adopted by a family in Australia, he is able to find his natural mother (in vivo) only by searching Google Maps (in vitro). Qubits (50 qubit: 2^50 = 1.1*10^15 qubit, 1 Pbit) – Guest author: Lieven Vandersypen: Using fundamental particles (quanta, such as electrons) to perform calculations and build computers is one of the most exciting cross-overs between the in vivo and the in vitro Information Universe. Prof. Lieven Vandersypen, who leads a quantum computing group at TU Delft in the Netherlands, will explain how this technology will change the way we compute. Quantum entanglement (Item page) – Guest author: Lieven Vandersypen: The states of two particles can be intimately linked (entangled), no matter how far they are separated. What Einstein famously dismissed as “spooky action at a distance” can now be established on demand at TU Delft in the Netherlands. Prof. 
Vandersypen will explain how his research group, for the first time ever, both creates and applies this entanglement in the laboratory. Entanglement (Item page) - EV. The Square Kilometre Array (64 bit: 2^64 = 1.3*10^18, 1 EB) – Guest author: TBA: The Square Kilometre Array will collect data at the rate of the global internet traffic of 2013, in its endeavour to answer fundamental questions about the origin and evolution of the Universe, and its search for extra-terrestrial life. Cryptography (128 bit: 2^128 = 3.4*10^38) – Guest author: Tanja Lange: Encrypted messages should not be decoded by adversaries, be they criminals or hostile countries. Cryptography enables secure communications and is one of the few applications which require 128-bit numbers. A guest author will explain more. Chapter 3: Deep space. The Desert (128-256 bit): Theoretical physics has not progressed much in recent decades – some call it a crisis. Likely, an observational breakthrough is out of reach: the highest man-made information density on earth is produced by the high-energy accelerators at CERN. But these accelerators would have to be 10^13-10^15 times more powerful to reach the fundamental unit of information, which is probably at the scale of the Planck length. Unfortunately, there is no way to reach this unit of information with these instruments. This enormous gap in reaching all the domains in the Information Universe is illustrated in a figure and in a very sobering, but instructive, table in the Appendix. Black holes (128-256 bit?) – Guest author: Manus Visser: Can information disappear into a black hole? The information paradox. Stephen Hawking posed this question and started a field in which space and time are described in terms of information. Dr. Manus Visser, expert on gravity and space-time, will explain more. Observing a Black Hole: Event Horizon Telescope – Guest author: Heino Falcke: The first image of a black hole. Prof. 
Heino Falcke, chair of the Event Horizon Telescope Science Council, will explain how information from a world-wide network of telescopes was combined using atomic clocks, to create the first ever image of a black hole. (Picture: first image of a black hole.) Cogwheels: a deeper level – Guest author: Gerard 't Hooft: Nobel laureate 't Hooft explains his views on cogwheels, carrying the fundamental information in the Universe. Gravitational waves – Guest author: Chris van den Broeck. Links: The Universe as a spreadsheet: Links, joins, references, URLs, blockchain, associations and even entanglement in physics are all different words for the same building block, forming the connections in the Information Universe. Cosmic Microwave Background – Guest author: Margot Brouwer: Particles of light created in the hot and dense state of the Universe after the Big Bang are still flying through the Universe today. Together, these 10^77 photons contain the largest amount of information known in the Universe. This information can still be accessed through telescopes, and brings us invaluable information about the dawn of our Universe. Emergent Gravity – Guest author: Erik Verlinde: Prof. Erik Verlinde, professor of theoretical physics at the University of Amsterdam, won the Spinoza prize for his new theory explaining gravity. In his theory, all matter, space and time consist of information and are all connected by entanglement. If this theory is correct, the information content of the entire Universe is 2^399. This is the highest power described in this book, and actually, in physics. Chapter 4: It from Bit. One big information processing machine – Guest author: Gerard 't Hooft (TBC): 't Hooft: “there is something happening at a different level of nature”. On the origin of physical information. 
– Guest author: Stefano Gottardi. The ear: In the ear, information is copied a dozen times! The eye – on the visual perception of data: climate change; links to facts and fakes and the system of science. The System of Science: How does this system work? Discussing Hegel's system of science: logic, technology, nature, life, physics, consciousness. Artificial Intelligence: The machine learning and database-oriented communities are still living on different planets. I discuss and revisit Tegmark's recent book Life 3.0 by comparing three cross-cuts through the Information Universe: i) the classical computer-centric view, ii) the data-centric view, iii) the artificial intelligence view. Information density: The average information density of the universe can be compared to that of written text. Black-body radiation: On the information aspects of the third big physical breakthrough of the 20th century (next to general relativity and quantum mechanics). Entropy: Discussing Shannon's work and identifying that “Information only exists in relation to its environment”. Examples will be given. Cosmic information, cosmogenesis and dark energy by Padmanabhan: Cosmic information connects the cosmological constant to cosmogenesis. It from Bit: Is the Universe one big information processing machine? Consciousness: Very little is known about consciousness, and I refrain from addressing consciousness per se. A relevant list of about five facts we do know is given; any view on the relation between consciousness and the Information Universe should at least deal with this list. Somnium – Musician Jacco Gardner performing at DOTLiveplanetarium at the Eurosonic 2019 showcase music festival – inspired by Kepler's Somnium – directed by EV. The Information Universe: An overview. Facts and fakes: How is all this related to the current facts-and-fakes issues on the Internet? How do you make sure that what you are reading is accurate and comes from a reliable source? The link between Open Science, FAIR and reliability of data.
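    The whole chapter ladder above hangs on a single identity: n bits distinguish 2^n states. A quick sketch reproducing a few of the rungs quoted in the synopsis (the labels are paraphrases, not the book's wording):

    ```python
    # n bits distinguish 2**n states; a few rungs from the synopsis.
    rungs = [
        (1, "Big Bang: the first bit"),
        (7, "ASCII character set"),
        (8, "one byte: the Intiwatana 'hard disk'"),
        (16, "early computer words"),
        (64, "pointers addressing Exabytes of data"),
        (128, "cryptographic key space"),
    ]
    for bits, label in rungs:
        print(f"2**{bits:<3} = {2**bits:>42,}  {label}")
    ```

    The exponential growth is the point of the book's journey: each added bit doubles the number of distinguishable states, so 128 bits already exceed 3.4*10^38 states.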

    15 in stock

    £40.49

  • Powers of Two: The Information Universe —

    Springer Nature Switzerland AG Powers of Two: The Information Universe —

    2 in stock

    Book SynopsisIs everything Information? This is a tantalizing question which emerges in modern physics, life sciences, astronomy and in today’s information and technology-driven society. In Powers of Two expert authors undertake a unique expedition - in words and images - throughout the world (and scales) of information. The story resembles, in a way, the classic Powers of Ten journeys through space: from us to the macro and the micro worlds . However, by following Powers of Two through the world of information, a completely different and timely paradigm unfolds. Every power of two, 1, 2, 4, 8…. tells us a different story: starting from the creation of the very first bit at the Big Bang and the evolution of life, through 50 years of computational science, and finally into deep space, describing the information in black holes and even in the entire universe and beyond…. All this to address one question: Is our universe made of information? In this book, we experience the Information Universe in nature and in our society and how information lies at the very foundation of our understanding of the Universe.From the Foreword by Robbert Dijkgraaf: This book is in many ways a vastly extended version of Shannon’s one-page blueprint. It carries us all the way to the total information content of the Universe. And it bears testimony of how widespread the use of data has become in all aspects of life. Information is the connective tissue of the modern sciences. […] Undoubtedly, future generations will look back at this time, so much enthralled by Big Data and quantum computers, as beholden to the information metaphor. But that is exactly the value of this book. 
With its crisp descriptions and evocative illustrations, it brings the reader into the here and now, at the very frontier of scientific research, including the excitement and promise of all the outstanding questions and future discoveries.Message for the e-reader of the book Powers of Two The book has been designed to be read in two-page spreads in full screen mode. For optimal reader experience in a downloaded .pdf file we strongly recommend you use the following settings in Adobe Acrobat Reader: - Taskbar: View > Page Display > two page view - Taskbar: View > Page Display > Show Cover Page in Two Page View - Taskbar: ^ Preferences > Full Screen > deselect " Fill screen with one page at a time" - Taskbar: View > Full screen mode or ctrl L (cmd L on a Mac) ***** Note: for reading the previews on Spinger link (and on-line reading in a browser), the full screen two-page view only works with these browsers: Firefox - Taskbar: on top of the text, at the uppermost right you will see then >> (which is a drop-down menu) >> even double pages - Fullscreen: F11 or Control+Cmd+F with Mac Edge - Taskbar middle: Two-page view and select show cover page separatelyTrade Review“The book … a very unusual collection of some facts about the relationship between the immaterial world represented by bits and the real physical world described by fundamental physical equations. This book continues the very categorical point of view of J. A. Wheeler … . 
The book presents short articles on various areas of modern science … in which it is shown that in these areas in some mysterious way there is a connection with the theory of information.” (Vladimir Dzhunushaliev, zbMATH 1479.83004, 2022)Table of ContentsForeword by Robbert DijkgraafChapter 0: IntroductionJoy-riding the Universe – by the authorWorking as an astronomer, data scientist and professor of astro-informatics for nearly fifty years, Edwin Valentijn has witnessed and first-hand engineered the dawn of the era of Big Data in science and society. Throughout his career, he became increasingly aware of the role of information in our world: in computers, in our society, and even in nature and in the Universe itself.The Information UniverseFollowing the increasing powers of two, the story paints a journey through the whole world of information, both in society and in nature. Each step opens a door into a new world: from the first bits with the Big Bang and the dawn of life, going through fifty years of human technology, all the way up to the information content of the whole Universe.What is Information? - Item pageThe basics of information are introduced.Chapter 1: The beginningSpace-time foam – Ti (0 bit: 20 =1)The very first power of two: 20, corresponds to the value one. This identifies the single, eternal, indistinguishable state: the primordial sea from which our Universe emerged – sometimes called the Space-time foam. I call this Ti, the reverse of It. This is one of the miraculous new notions in the story of the Powers of Two.Multiverse: Anthropic principle (Item page)From Ti, the primordial space-time foam, countless universes arise with widely different characteristics: the Multiverse. The Anthropic Principle is a philosophical consideration which states that we, people, will find ourselves in a universe that is suitable for intelligent life to emerge. 
On this view, the conditions in our Universe need not be “fine-tuned” to the existence of human life by a “creator”.Big bang (1 bit: 2^1 = 2 states)At the Big Bang the first bit is created. From the indistinguishable unity of the primordial foam Ti, “the zeros were separated from the 1’s”: the first bit corresponds to two possible states. This bit is the first step on our journey to capture the ever-increasing complexity of our expanding Universe in terms of information, through the increasing powers of two.What is a bit? (Item page)The bit is at the core of the concept of information. A bit is any system that can have two states. Humans assign meanings to these states, which is illustrated with the concept of the traffic light: red or green, stop or go. The combination of multiple bits creates an exponentially increasing number of possible states, and hence meanings.Multicellular life (2 bit: 2^2 = 4 states) / (4 bit: 2^4 = 16 states)?Life started with exchanging information between cells. This is fundamental for the evolution of any kind of life. It took at least two billion years for unicellular organisms to evolve into multicellular ones around 600 million years ago, and to start the exchange of information between their different cells. By exchanging information, cells collaborate and act as a unified whole: life.The game of life (Item page)The characteristic features of life (or any complex system in the Universe) can be created from information. A simple computer game is all you need to demonstrate this concept. A famous example is Conway's Game of Life, which is full of visuals of living, growing, moving and dying objects. This game was already made on the computers of the early 1970s with just a few lines of code.Chapter 2: People's Information UniverseASCII (7 bit: 2^7 = 128 states)There is currently no physical theory of how the digital world connects to the human consciousness. 
In the world of Information Technology (IT) all information exchange is based on agreements between people. For instance, ASCII, a simple list relating each letter of the alphabet to a 7-bit string, connects the digital world to the human consciousness. Machu Picchu (8 bit: 2^8 = 256, 1 byte)The Intiwatana stone, a giant rock carved by the Incas of ancient Machu Picchu in Peru, can be considered a first 8-bit hard disk. Why so? As the sunrays lit the different surfaces of this huge rock throughout the year, they triggered the Incas' activities: sowing, harvesting, celebrating and praying.This ancient stone dissolves both the boundaries between heaven and earth, and those between the digital and natural Information Universe. In fact, the stone represents an ultimate picture of the cross-over between the in vivo and the in vitro Information Universe - a main theme of the book. In vitro being the man-made technology to handle information, and in vivo being the information built into nature, in this case the orbit and the light rays of the sun.First computers (16 bit: 2^16 = 65,536, 2 bytes)When computers emerged in the 1970s, astronomers first adopted them to steer their telescopes. Back then, a maximal effort to understand the mathematics of the problem was needed to squeeze the solution into the small computer memory. Nowadays, with large amounts of computing power and machine learning at their disposal, scientists and computer programmers often do the reverse.Star Peace vs. Star Wars (Item page)King Juan Carlos adored the harmony of galaxies as a source of inspiration for people on earth, in those days when Ronald Reagan was promoting his Star Wars programme. With this adoration in mind, in 1985, he gave an inspiring speech at the Royal inauguration of the international astronomical observatory on La Palma, Canary Islands. 
The inauguration was attended by what was, for those days, an unprecedentedly large crowd of European royals and government officials, despite the great threat of terrorist attacks by ETA. (The next and later spreads on facts vs. fakes elucidate the relevance of this spread in the story line.)Pre-internet Facts and Fakes (Item page)“Edwin Valentijn saved the life of the Dutch Queen Beatrix by catching her just before she fell off a cliff at the inauguration on La Palma”, according to the headlines in Dutch newspapers. Fake news stories are alike in every era and can only be dispelled by tracing links of information to their source, links or associations being a fundamental property of the Information Universe. Later, I discuss the less innocent case of overdrawn attention to terrorist attacks in the past decade.Hard disk (24 bit: 2^24 = 1.6×10^7, 2 MB)Only sixty years ago, a 5 MB hard disk weighed over five tons and had to be loaded onto an aeroplane using a truck. Now, we carry a thousand times more information in our trouser pocket. This demonstrates the amazing advance of information technology over the past decades. (Picture: first IBM hard disk loaded onto a plane.)The telephone (Item page) As a precursor of the Internet, the telephone offered many of the same advantages and dangers, and was heavily discussed at its introduction. Whether telephone or Internet, it all revolves around the communication or copying of information. The telephone, as an example of this, is one of the major discoveries of the 20th century. DNA (32 bit: 2^32 = 4×10^9, 500 MB) – Guest author: Charley Lineweaver The information in the DNA creates life. All base pairs of the human DNA can be stored on a 500 MB drive. How is this information communicated? How does a cell know it has to build part of a liver and not an eye, while they all have the same DNA? Apoptosis and the role of information exchange.Where does biological Information come from? 
(Item page) – Guest author: Charley Lineweaver Charley Lineweaver, expert on evolutionary biology, exoplanetology and astrobiology, will expand on the role of information in the evolution of life.Lifelines (Item page) – Guest author: Morris SwertzWhat is the role of nature versus that of nurture? A key question in modern health research. In the life sciences, this question is now addressed using Big Data, like the astronomers who acquire huge data volumes to address the same question on the nature of galaxies. In Lifelines, a cohort of 165,000 people is studied over a period of 30 years using hospital data, blood samples and DNA scans.DVD (33 bit: 2^33 = 9×10^9, 1 GB)It's amazing how fast the digital image revolution has gone since 1989. Thirty years ago, the Philips lab approached me because they had made a big discovery: it was possible to store many digital images on a CD. They were chasing me for digital images. While NASA had fewer than a thousand, I had 32,000 galaxy images obtained by scanning photographic plates from the European Southern Observatory – the first large digital image collection.Human Brain (36 bit: 2^36 = 7×10^10, 9 GB) – Guest author: Katrin Amunts (Jülich)In the large EU Human Brain Project, the activities of the human brain are simulated in computers. This is a very difficult mission since the transistors in computers consume 100,000 billion times more energy than the synapses of neurons. Our brains consist of 10^11 neurons, corresponding to 9 GB of data.Thinking of Karlheinz Meier, coordinator of the Human Brain Project in Heidelberg, Katrin Amunts will author two spreads on the role of information in the human brain.Neuromorphic computing – Guest author: Katrin AmuntsCurrently, it takes a hundred years of a supercomputer's time to compete with the learning power of only a single day of the human brain. 
“Neuromorphic computing” researchers design electronic systems inspired by the human brain, in order to make computers many times faster and more energy efficient.CT scan (38 bit: 2^38 = 3×10^11, 34 GB) – Guest author: Anders YnnermanIt is now possible to look inside animal and human bodies on touchscreens. Forensic investigations on, for instance, corpses of victims can be done with touch-screen tables. You can look inside, rotate, scroll and zoom animal and human bodies using tens of gigabytes of CT scan data. Prof. Anders Ynnerman explains how he does it.Terabytes (45 bit: 2^45 = 4.4×10^12, 1 TB) - The largest (astronomical) datasetsDark energy and dark matter: two mysterious constituents of our Universe. How do astronomers get and handle the data from the VLT Survey Telescope on a high mountain top in Chile to shed light on these ‘still too dark’ topics? This telescope surveys the sky every hour at night, generating terabytes of astronomical data.Gravity as a lens (Item page) – Guest author: Margot BrouwerWhen light rays are bent by the gravity of a heavy object, this object acts as a lens. This effect can be used to map dark matter, which is invisible but constitutes 80% of the matter in our visible Universe. In 1915, Albert Einstein posited that gravity is equivalent to the curvature of the fabric of space and time itself, leading to the lensing effect.Weak gravitational lensing surveys – Guest author: Margot BrouwerTerabytes of astronomical data are reduced to a few numbers, describing how dark matter behaves and what its true nature is. https://www.youtube.com/watch?v=ZCyYGWqCmFw&t=23sEntering the Petabyte regime (53 bit: 2^53 = 1×10^15, 1 PB)How do we technically acquire and deal with petabytes of data?Dark Matter maps (Item page)A first dark matter map projected on the night sky. An ultimate encounter between the digital world of modern astronomical observations, and nature: the mysterious dark matter mapped on top of the everyday “night” stellar sky. 
A visualization that condenses terabytes of astronomical data into a simple map.Metadata for Peta-data (62 bit: 2^62 = 6×10^17, 600 PB)With pointers, one can connect everything in the Information Universe. Pointers are often inserted in Metadata (data about data) - an ultimate tool for dealing with Big Data. It is possible to create unique pointers to hundreds of petabytes of data using a string of less than 64 bits. This is what makes pointers so powerful and indispensable in current and future stages of the big data era; not only for astronomical research, but also for companies like Google, Amazon and Facebook.Downloading the Universe (Item page)The Universe can be seen as a spreadsheet, certainly in the way we map it on our computers (in vitro), but also in nature (in vivo). Perceiving the Universe as a spreadsheet links bit to It.Meta data (Item page)A visualisation of the enormous complexity of the data models which trace all pointers between data items. (Picture: thrilling still from a full-dome animation of a data model.)Future (astronomical) datasets (Item page)While current telescopes collect astronomical datasets of terabytes, future telescopes such as the LSST and the Euclid satellite will collect petabytes. These enormous amounts of data need a whole new approach to data management. For the Euclid satellite my “Universe as a spreadsheet” approach has been adopted.The Euclid satellite (Item page) – Guest author: Margot BrouwerEuclid is ESA’s new space mission to map the Dark Universe. At a distance of 1.5 million kilometres from Earth, this telescope will observe billions of galaxies. Its goal: to shed light on the nature of Dark Matter and Dark Energy, which make up 95% of our Universe. Dr. 
Margot Brouwer, Dutch scientific communication officer for Euclid, will explain more.The Information Universe (Item page)The resemblance of the overall structure of the real observed Universe (in vivo) to the simulated universe (in vitro), based on the current cosmological model, gave a lot of credit to the latter. When we zoom out on the Universe, we see billions of galaxies forming a web-like structure. Amazingly, astronomers can now compute and simulate these structures with very large supercomputers.The lost boy (Item page)Information is timeless, and knows no boundaries. It crosses over between the in vivo and the in vitro Information Universe. This concept is well illustrated by daily-life stories involving time. At the age of five, a boy loses sight of his older brother on a train in India, and eventually gets lost on the streets of Mumbai. Twenty years later, after being adopted by a family in Australia, he is able to find his natural mother (in vivo) merely by searching on Google Maps (in vitro).Qubits (50 qubit: 2^50 = 1.1×10^15 qubit, 1 Pbit) – Guest author: Lieven VandersypenUsing fundamental particles (quanta, such as electrons) to perform calculations and build computers is one of the most exciting cross-overs between the in vivo and the in vitro Information Universe. Prof. Lieven Vandersypen, who leads a Quantum Computing group at TU Delft in the Netherlands, will explain how this technology will change the way we compute.Quantum entanglement (Item page) – Guest author: Lieven VandersypenThe states of two particles can be intimately linked (entangled), no matter how far they are separated. What Einstein famously dismissed as “spooky action at a distance” can now be established on demand at TU Delft in the Netherlands. Prof. 
Vandersypen will explain how his research group, for the first time ever, both creates and applies this entanglement in the laboratory.Entanglement (Item page) - EVThe Square Kilometre Array (64 bit: 2^64 = 1.3×10^18, 1 EB) – Guest author: TBAThe Square Kilometre Array will collect data at the rate of the global internet traffic of 2013, in its endeavour to answer fundamental questions about the origin and evolution of the Universe, and in its search for extra-terrestrial life.Cryptography (128 bit: 2^128 = 3.4×10^38) – Guest author: Tanja LangeEncrypted messages should not be decipherable by adversaries, be they criminals or hostile countries. Cryptography enables secure communications and is one of the few applications which require 128-bit numbers. A guest author will explain more.Chapter 3: Deep spaceThe Desert (128-256 bit) Theoretical physics has not progressed much in the last decades – some call it a crisis. Likely, an observational breakthrough is out of reach: the highest man-made information density on earth is produced by the high-energy accelerators at CERN. But these accelerators would have to be 10^13-10^15 times more powerful to reach the fundamental unit of information, which is probably at the scale of the Planck length. Unfortunately, there is no way to reach this unit of information with these instruments. This enormous gap in reaching all the domains in the Information Universe is illustrated in a figure and in a very sobering, but instructive, table in the Appendix.Black holes (128-256 bit?) – Guest author: Manus VisserCan information disappear into a black hole? The Information paradox. Stephen Hawking raised this question and started a field in which space and time are described in terms of information. Dr. Manus Visser, expert on gravity and space-time, will explain more.Observing a Black Hole: Event Horizon Telescope – Guest author: Heino FalckeThe first image of a black hole. Prof. 
Heino Falcke, chair of the Event Horizon Telescope Science Council, will explain how information from a world-wide network of telescopes was combined using atomic clocks to create the first ever image of a black hole. (Picture: first image of a black hole.)Cogwheels: a deeper level – Guest author: Gerard 't HooftNobel laureate 't Hooft explains his views on cogwheels, carrying the fundamental information in the Universe.Gravitational waves – Guest author: Chris van den BroeckLinks: The Universe as a spreadsheetLinks, joins, references, URLs, blockchain, associations and even entanglement in physics are all different words for the same building block, forming the connections in the Information Universe.Cosmic Microwave Background – Guest author: Margot BrouwerParticles of light created in the hot and dense state of the Universe after the Big Bang are still flying through the Universe today. Together, these 10^77 photons contain the largest amount of information known in the Universe. This information can still be accessed through telescopes, and brings us invaluable information about the dawn of our Universe.Emergent Gravity – Guest author: Erik VerlindeProf. Erik Verlinde, professor of theoretical physics at the University of Amsterdam, won the Spinoza prize for his new theory explaining gravity. In his theory, all matter, space and time consist of information and are all connected by entanglement. If this theory is correct, the information content of the entire Universe is 2^399. This is the highest power described in this book, and actually, in physics.Chapter 4: It from BitOne big information processing machine – Guest author: Gerard 't Hooft (TBC)'t Hooft: “there is something happening at a different level of nature”.On the origin of physical information. 
– Guest author: Stefano GottardiThe earIn the ear, information is copied a dozen times!The eye – on the visual perception of data and climate change. Links to facts and fakes, and to the System of Science.The System of ScienceHow does this system work? Discussing Hegel's system of science: logic, technology, Nature, life, physics, consciousness.Artificial IntelligenceThe machine learning and database-oriented communities are still living on different planets. I discuss and revisit Tegmark's recent book Life 3.0 by comparing three crosscuts through the Information Universe: i) the classical computer-centric view, ii) the data-centric view, iii) the artificial intelligence view.Information densityThe average information density of the Universe can be compared to that of written text.Black Body radiationOn the information aspects of the third big physical breakthrough of the 20th century (next to general relativity and quantum mechanics).EntropyDiscussing Shannon's work and identifying that “Information only exists in relation to its environment”. Examples will be given.Cosmic information, cosmogenesis and dark energy – by PadmanabhanCosmic information connects the cosmological constant to cosmogenesis.It from BitIs the Universe one big information processing machine?ConsciousnessVery little is known about consciousness, and I refrain from addressing consciousness per se. Instead, a relevant list of about five facts we do know is given. Any view on the relation between consciousness and the Information Universe should at least deal with this list.Somnium – Musician Jacco Gardner performing at the DOTLiveplanetarium at the Eurosonic 2019 showcase music festival – inspired by Kepler's Somnium – directed by EV.The Information UniverseAn overview.Facts and fakesHow is all this related to the current facts-and-fakes issues on the Internet? How do you make sure that what you are reading is accurate and comes from a reliable source? The link between Open Science, FAIR and reliability of data.
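The Game of Life spread in Chapter 1 notes that the game already fit in "just a few lines of code" on early-1970s computers. As a rough independent illustration (not code from the book), a minimal Python sketch of Conway's rules on a small wrap-around grid:

```python
from collections import Counter

# Conway's Game of Life (minimal sketch): a live cell survives with 2 or 3
# live neighbours; a dead cell with exactly 3 live neighbours is born.
def step(live, width, height):
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)   # wrap at the grid edges
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A vertical "blinker" flips to horizontal and back with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker, 5, 5), 5, 5) == blinker
```

From simple local rules like these, the "living, growing, moving and dying objects" the spread describes emerge on their own.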

    2 in stock

    £42.74

  • Fundamentals of Quantum Computing: Theory and

    Springer Nature Switzerland AG Fundamentals of Quantum Computing: Theory and

    15 in stock

    Book SynopsisThis introductory book on quantum computing includes an emphasis on the development of algorithms. Appropriate for university students as well as software developers interested in programming a quantum computer, this practical approach to modern quantum computing takes the reader through the required background and up to the latest developments. Beginning with introductory chapters on the required math and quantum mechanics, Fundamentals of Quantum Computing proceeds to describe four leading qubit modalities and explains the core principles of quantum computing in detail. With step-by-step derivations of the math and source code, some of the well-known quantum algorithms are explained in simple ways so the reader can try them on either IBM Q or Microsoft QDK. The book also includes a chapter on adiabatic quantum computing and modern concepts such as topological quantum computing and surface codes.Features:o Foundational chapters that build the necessary background on math and quantum mechanics.o Examples and illustrations throughout provide a practical approach to quantum programming with end-of-chapter exercises.o Detailed treatment of four leading qubit modalities -- trapped-ion, superconducting transmons, topological qubits, and quantum dots -- teaches how qubits work so that readers can understand how quantum computers work under the hood and devise efficient algorithms and error correction codes. Also introduces protected qubits - 0-π qubits, fluxon parity protected qubits, and charge-parity protected qubits. o Principles of quantum computing, such as the quantum superposition principle, quantum entanglement, quantum teleportation, the no-cloning theorem, quantum parallelism, and quantum interference, are explained in detail. A dedicated chapter on quantum algorithms explores both oracle-based and Quantum Fourier Transform-based algorithms in detail with step-by-step math and working code that runs on IBM QisKit and Microsoft QDK. 
Topics on EPR Paradox, Quantum Key Distribution protocols, Density Matrix formalism, and Stabilizer formalism are intriguing. While focusing on the universal gate model of quantum computing, this book also introduces adiabatic quantum computing and quantum annealing.This book includes a section on fault-tolerant quantum computing to make the discussions complete. The topics on Quantum Error Correction, Surface codes such as Toric code and Planar code, and protected qubits help explain how fault tolerance can be built at the system level.Trade Review“The book represents a new and fresh approach to quantum computing, starting with theoretical physical knowledge that is highlighted by beautiful figures. Then, quantum computing is explained by quantum programming languages and extensive languages. It is recommended to everyone interested in quantum computing. It is easy to follow through a beautiful and clear presentation, programming examples and additional exercises.” (Andreas Wichert, zbMATH 1477.68005, 2022)Table of ContentsPART ONE 1 Foundations of Quantum Mechanics 1.1 Matter 1.2 Atoms, Elementary Particles, and Molecules 1.3 Light and Quantization of Energy 1.4 Electron Configuration 1.5 Wave-Particle Duality and Probabilistic Nature 1.6 Wavefunctions and Probability Amplitudes 1.7 Some exotic states of matter 1.8 Summary 1.9 Practice Problems 1.10 References and further reading 2 Dirac’s bra-ket notation and Hermitian Operators 2.1 Scalars 2.2 Complex Numbers 2.3 Vectors 2.4 Matrices 2.5 Linear Vector Spaces 2.6 Using Dirac’s bra-ket notation 2.7 Expectation Values and Variances 2.8 Eigenstates, Eigenvalues and Eigenfunctions 2.9 Characteristic Polynomial 2.10 Definite Symmetric Matrices 2.11 Tensors 2.12 Statistics and Probability 2.13 Summary 2.14 Practice problems 2.15 References and further reading 3 The Quantum Superposition Principle and Bloch Sphere Representation 3.1 Euclidian Space 3.2 Metric Space 3.3 Hilbert space 3.4 Schrodinger Equation 3.5 Postulates of 
Quantum Mechanics 3.6 Quantum Tunneling 3.7 Stern and Gerlach Experiment 3.8 Bloch sphere representation 3.9 Projective Measurements 3.10 Qudits 3.11 Summary 3.12 Practice Problems 3.13 References and further reading PART TWO 4 Qubit Modalities 4.1 The vocabulary of quantum computing 4.2 Classical Computers – a recap 4.3 Qubits and usability 4.4 Noisy Intermediate Scale Quantum Technology 4.5 Qubit Metrics 4.6 Leading Qubit Modalities 4.7 A note on the dilution refrigerator 4.8 Summary 4.9 Practice Problems 4.10 References and further reading 5 Quantum Circuits and DiVincenzo Criteria 5.1 Setting up the development environment 5.2 Learning Quantum Programming Languages 5.3 Introducing Quantum Circuits 5.4 Quantum Gates 5.5 The Compute Stage 5.6 Quantum Entanglement 5.7 No-Cloning theorem 5.8 Quantum Teleportation 5.9 Superdense coding 5.10 Greenberger–Horne–Zeilinger state (GHZ state) 5.11 Walsh-Hadamard Transform 5.12 Quantum Interference 5.13 Phase kickback 5.14 DiVincenzo’s criteria for quantum computation 5.15 Summary 5.16 Practice Problems 5.17 References and further reading 6 Quantum Communications 6.1 EPR Paradox 6.2 Density Matrix Formalism 6.3 Von Neumann Entropy 6.4 Photons 6.5 Quantum Communication 6.6 The Quantum Channel 6.7 Quantum Communication Protocols 6.8 RSA Security 6.9 Summary 6.10 Practice Problems 6.11 References and further reading 7 Quantum Algorithms 7.1 Quantum Ripple Adder Circuit 7.2 Quantum Fourier Transformation 7.3 Deutsch-Jozsa oracle 7.4 The Bernstein-Vazirani Oracle 7.5 Simon’s algorithm 7.6 Quantum arithmetic using QFT 7.7 Modular exponentiation 7.8 Grover’s search algorithm 7.9 Shor’s algorithm 7.10 A quantum algorithm for k-means 7.11 Quantum Phase Estimation (QPE) 7.12 HHL algorithm for solving linear equations 7.13 Quantum Complexity Theory 7.14 Summary 7.15 Practice Problems 7.16 References and further reading 8 Adiabatic Optimization and Quantum Annealing 8.1 Adiabatic evolution 8.2 Proof of the Adiabatic Theorem 8.3 Adiabatic optimization 8.4 Quantum Annealing 8.5 Summary 8.6 Practice 
Problems 8.7 References and further reading 9 Quantum Error Correction 9.1 Classical Error Correction 9.2 Quantum Error Codes 9.3 Stabilizer formalism 9.4 The path forward – fault-tolerant quantum computing 9.5 Surface codes 9.6 Protected qubits 9.7 Practice Problems 9.8 References and further reading 10 Conclusion 10.1 How many qubits do we need? 10.2 Classical simulation 10.3 Backends today 10.4 Future state 10.5 References
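The superposition and entanglement principles covered in Part Two can be illustrated with a few lines of linear algebra. This is an independent NumPy state-vector sketch (not the book's IBM QisKit or Microsoft QDK code): preparing a Bell state takes only a Hadamard gate and a CNOT.

```python
import numpy as np

# Single- and two-qubit gates as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],                  # controlled-NOT: entangles
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit, then CNOT.
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

# Bell state (|00> + |11>)/sqrt(2): measurement probabilities are 0.5 for
# |00> and |11>, and 0 for |01> and |10> -- measuring one qubit fixes the other.
probs = np.abs(state) ** 2
```

The same circuit, expressed with quantum gates instead of explicit matrices, is a standard first exercise on either IBM Q or Microsoft QDK.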

    15 in stock

    £75.99

  • Algorithm Portfolios: Advances, Applications, and

    Springer Nature Switzerland AG Algorithm Portfolios: Advances, Applications, and

    1 in stock

    Book SynopsisThis book covers algorithm portfolios, multi-method schemes that harness optimization algorithms into a joint framework to solve optimization problems. It is expected to be a primary reference point for researchers and doctoral students in relevant domains who seek a quick exposure to the field. The presentation focuses primarily on the applicability of the methods, and the non-expert reader will find this book a useful starting point for designing and implementing algorithm portfolios. The book familiarizes the reader with algorithm portfolios through current advances, applications, and open problems. Fundamental issues in building effective and efficient algorithm portfolios, such as selection of constituent algorithms, allocation of computational resources, interaction between algorithms, and parallel vs. sequential implementation, are discussed. Several new applications are analyzed and insights on the underlying algorithmic designs are provided. Future directions, new challenges, and open problems in the design of algorithm portfolios and applications are explored to further motivate research in this field.Table of Contents1. Metaheuristic optimization algorithms.- 2. Algorithm portfolios.- 3. Selection of constituent algorithms.- 4. Allocation of computation resources.- 5. Sequential and parallel models.- 6. Recent applications.- 7. Epilogue.- References.
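The portfolio idea described in the synopsis (several constituent algorithms sharing a computational budget, keeping the best result) can be sketched in a few lines. This is a toy sequential portfolio with invented constituent solvers (`random_search`, `hill_climb`), not code from the book:

```python
import random

def portfolio_minimize(f, solvers, budget_per_solver):
    """Run each constituent algorithm under an equal share of the evaluation
    budget and keep the best solution found (a simple sequential portfolio)."""
    best_x, best_val = None, float("inf")
    for solve in solvers:
        x = solve(f, budget_per_solver)
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# Two toy constituent algorithms with very different search behaviour.
def random_search(f, evals):
    return min((random.uniform(-10, 10) for _ in range(evals)), key=f)

def hill_climb(f, evals):
    x = random.uniform(-10, 10)
    for _ in range(evals):
        cand = x + random.gauss(0, 0.5)   # local perturbation
        if f(cand) < f(x):
            x = cand
    return x

random.seed(0)
x, val = portfolio_minimize(lambda t: (t - 3) ** 2,
                            [random_search, hill_climb], 200)
```

Real portfolios refine this in the directions the chapters list: smarter selection of constituents, adaptive budget allocation, and parallel rather than sequential execution.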

    1 in stock

    £49.49

  • Statistical Foundations, Reasoning and Inference:

    Springer Nature Switzerland AG Statistical Foundations, Reasoning and Inference:

    15 in stock

    Book SynopsisThis textbook provides a comprehensive introduction to statistical principles, concepts and methods that are essential in modern statistics and data science. The topics covered include likelihood-based inference, Bayesian statistics, regression, statistical tests and the quantification of uncertainty. Moreover, the book addresses statistical ideas that are useful in modern data analytics, including bootstrapping, modeling of multivariate distributions, missing data analysis, causality as well as principles of experimental design. The textbook includes sufficient material for a two-semester course and is intended for master’s students in data science, statistics and computer science with a rudimentary grasp of probability theory. It will also be useful for data science practitioners who want to strengthen their statistics skills.Table of ContentsIntroduction.- Background in Probability.- Parametric Statistical Models.- Maximum Likelihood Inference.- Bayesian Statistics.- Statistical Decisions.- Regression.- Bootstrapping.- Model Selection and Model Averaging.- Multivariate and Extreme Value Distributions.- Missing and Deficient Data.- Experiments and Causality.
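Bootstrapping, one of the topics this synopsis lists, can be demonstrated generically (this sketch is not taken from the textbook): resample the data with replacement, recompute the statistic on each resample, and read a confidence interval off the percentiles of the recomputed values.

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    reps = sorted(
        stat([random.choice(data) for _ in range(len(data))])  # resample
        for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

random.seed(1)
sample = [random.gauss(10, 2) for _ in range(100)]
low, high = bootstrap_ci(sample, statistics.mean)
# The 95% interval brackets the sample mean and quantifies its uncertainty.
```

The same function works unchanged for the median, a trimmed mean, or any other statistic, which is what makes the bootstrap useful when no closed-form standard error exists.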

    15 in stock

    £94.99

  • Encyclopedia of Cryptography Security and Privacy

    Springer Encyclopedia of Cryptography Security and Privacy

    1 in stock

    Book SynopsisSecurity Policies and Access Control.- Public key encryption, digital signatures.- Number theory, primality tests, discrete log, factorisation.- Public-key cryptography, hardware, physical attacks.- Implementation aspects of cryptographic algorithms.- Hardware attacks.- Multi-party computation, voting schemes, digital signature schemes.- Web security.- DBMS and Application Security.- Biometrics.- Software Security.- Network Security.- Formal Methods and Assurance.- Sensor and Ad Hoc Networks.- DOS.- Privacy-preserving data mining.- Private information retrieval.- Privacy metrics and data protection.- Wireless Security.- Broadcast channel, secret sharing, threshold schemes, subliminal channels.- Risk management and organizational security and privacy.- Usable/user-centric privacy.- Less-constrained biometrics.- Access and Query Privacy.- Cryptocurrencies.- Encryption-Based Access Control Based on Public Key Cryptography.- Cyber-physical systems and infrastructure: security and privacy.- Location privacy and privacy in location-based applications.- Privacy in emerging scenarios.- Privacy and security in social networks.- Economics of security and privacy.- Key management.- Elliptic curve cryptography.- Sequences, Boolean functions, stream ciphers.- Secure multiparty computations.- Human Aspects in Security and Privacy.- Trustworthy Computing, Physical/Hardware Security.- AI approaches for security and privacy.- Privacy and anonymity in communication networks.- Privacy laws and directives.

    1 in stock

    £809.99

  • Tools and Algorithms for the Construction and Analysis of Systems: 27th International Conference, TACAS 2021, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021,  Luxembourg City, Luxembourg, March

    Springer Nature Switzerland AG Tools and Algorithms for the Construction and Analysis of Systems: 27th International Conference, TACAS 2021, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021, Luxembourg City, Luxembourg, March

    15 in stock

    Book Synopsis
    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but was changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The proceedings also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.
    Table of Contents
    Game Theory.- A Game for Linear-time - Branching-time Spectroscopy.- On Satisficing in Quantitative Games.- Quasipolynomial Computation of Nested Fixpoints.- SMT Verification.- A Flexible Proof Format for SAT Solver-Elaborator Communication.- Generating Extended Resolution Proofs with a BDD-Based SAT Solver.- Bounded Model Checking for Hyperproperties.- Counterexample-Guided Prophecy for Model Checking Modulo the Theory of Arrays.- SAT Solving with GPU Accelerated Inprocessing.- FOREST: An Interactive Multi-tree Synthesizer for Regular Expressions.- Probabilities.- Finding Provably Optimal Markov Chains.- Inductive Synthesis for Probabilistic Programs Reaches New Horizons.- Analysis of Markov Jump Processes under Terminal Constraints.- Multi-objective Optimization of Long-run Average and Total Rewards.- Inferring Expected Runtimes of Probabilistic Integer Programs Using Expected Sizes.- Probabilistic and Systematic Coverage of Consecutive Test-Method Pairs for Detecting Order-Dependent Flaky Tests.- Timed Systems.- Timed Automata Relaxation for Reachability.- Iterative Bounded Synthesis for Efficient Cycle Detection in Parametric Timed Automata.- Algebraic Quantitative Semantics for Efficient Online Temporal Monitoring.- Neural Networks.- Synthesizing Context-free Grammars from Recurrent Neural Networks.- Automated and Formal Synthesis of Neural Barrier Certificates for Dynamical Models.- Improving Neural Network Verification through Spurious Region Guided Refinement.- Analysis of Network Communication.- Resilient Capacity-Aware Routing.- Network Traffic Classification by Program Synthesis.

    15 in stock

    £34.99

  • Computer Algebra: An Algorithm-Oriented

    Springer Nature Switzerland AG Computer Algebra: An Algorithm-Oriented

    1 in stock

    Book Synopsis
    This textbook offers an algorithmic introduction to the field of computer algebra. A leading expert in the field, the author guides readers through numerous hands-on tutorials designed to build practical skills and algorithmic thinking. This implementation-oriented approach equips readers with versatile tools that can be used to enhance studies in mathematical theory, applications, or teaching. Presented using Mathematica code, the book is fully supported by downloadable sessions in Mathematica, Maple, and Maxima. Opening with an introduction to computer algebra systems and the basics of programming mathematical algorithms, the book goes on to explore integer arithmetic. A chapter on modular arithmetic completes the number-theoretic foundations, which are then applied to coding theory and cryptography. From here, the focus shifts to polynomial arithmetic and algebraic numbers, with modern algorithms allowing the efficient factorization of polynomials. The final chapters offer extensions into more advanced topics: simplification and normal forms, power series, summation formulas, and integration. Computer Algebra is an indispensable resource for mathematics and computer science students new to the field. Numerous examples illustrate algorithms and their implementation throughout, with online support materials to encourage hands-on exploration. Prerequisites are minimal, with only a knowledge of calculus and linear algebra assumed. In addition to classroom use, the elementary approach and detailed index make this book an ideal reference for algorithms in computer algebra.
    Trade Review
    “Strong interplay between the abstract exposition, which includes the relevant theorems as well as their proofs, and the practical utilization of those concepts in Mathematica is certainly a remarkable feature of this textbook. … Overall, the book is very well written and the approach to provide examples as actual Mathematica sessions is commendable.” (Andreas Maletti, zbMATH 1484.68004, 2022)
    Table of Contents

    1 in stock

    £44.99

  • Data Science for Social Good: Philanthropy and Social Impact in a Complex World

    Springer Nature Switzerland AG Data Science for Social Good: Philanthropy and Social Impact in a Complex World

    15 in stock

    Book Synopsis
    This book is a collection of reflections by thought leaders at first-mover organizations in the exploding field of "Data Science for Social Good", understood as the application of knowledge from computer science, complex systems, and computational social science to challenges such as humanitarian response, public health, and sustainable development. The book provides both an overview of scientific approaches to social impact – identifying a social need, targeting an intervention, measuring impact – and the complementary perspective of funders and philanthropies that are pushing this new sector forward. This book will appeal to students and researchers in the rapidly growing field of data science for social impact, to data scientists at companies whose data could be used to generate more public value, and to decision makers at nonprofits, foundations, and agencies that are designing their own agenda around data.
    Table of Contents
    Introduction.- The Value of Data and Data Collaboratives for Good: A Roadmap for Philanthropies to Facilitate Systems Change through Data.- UN Global Pulse: A UN Innovation Initiative with a Multiplier Effect.- Building the Field of Data for Good.- When Philanthropy Meets Data Science: A Framework for Governance to Achieve Data-Driven Decision-Making for Public Good.- Data for Good: Unlocking Privately-Held Data to the Benefit of the Many.- Building a Funding Data Ecosystem: Grantmaking in the UK.- A Reflection on the Role of Data for Health: COVID-19 and Beyond.

    15 in stock

    £52.24

  • Triple Double: Using Statistics to Settle NBA

    Springer Nature Switzerland AG Triple Double: Using Statistics to Settle NBA

    5 in stock

    Book Synopsis
    This book provides empirical evidence and statistical analyses to uncover answers to some of the most debated questions in the NBA. The sports world lives and breathes debates over who deserves an MVP award and which athletes should be considered all-stars. This book offers statistics-backed perspectives on several of these debates that are specific to the NBA. Was LeBron snubbed of an MVP in the 2010-2011 season? Why has the G.O.A.T. debate turned into LeBron vs. Jordan… Did Kobe get overlooked? How come Klay Thompson didn’t get All-NBA honors in the 2018-2019 season? This book explores these questions and many more with empirical evidence. It is invaluable for any undergraduate or master’s level course in sport analytics, sports marketing, or sports management. It will also be incredibly useful for scouts, recruiters, and general managers in the NBA who would like to use analytics in their work.
    Table of Contents
    Introduction.- 1. Da Real MVP.- 2. A Tribe of Goats.- 3. The Myth of the Superteam.- 4. Hey Now, You're an All-Star...But Are You All-NBA?- 5. Small Ball in a Big Man's Game.- 6. Is the Clutch Gene Real?- 7. Offense Wins Games, But Does Defense Win Championships?- 8. Strategic Implications of the Findings in This Book.- 9. Debates the Future Work Should Consider.

    5 in stock

    £52.24

  • Algorithms on Trees and Graphs: With Python Code

    Springer Nature Switzerland AG Algorithms on Trees and Graphs: With Python Code

    1 in stock

    Book Synopsis
    The study of graph algorithms is a well-established subject in mathematics and computer science. Beyond classical application fields, such as approximation, combinatorial optimization, graphics, and operations research, graph algorithms have recently attracted increased attention from computational molecular biology and computational chemistry. Centered around the fundamental issue of graph isomorphism, this text goes beyond classical graph problems of shortest paths, spanning trees, flows in networks, and matchings in bipartite graphs. Advanced algorithmic results and techniques of practical relevance are presented in a coherent and consolidated way. This book introduces graph algorithms on an intuitive basis followed by a detailed exposition in a literate programming style, with correctness proofs as well as worst-case analyses. Furthermore, full C++ implementations of all algorithms presented are given using the LEDA library of efficient data structures and algorithms.
    Table of Contents
    1. Introduction.- 2. Algorithmic Techniques.- 3. Tree Traversal.- 4. Tree Isomorphism.- 5. Graph Traversal.- 6. Clique, Independent Set, and Vertex Cover.- 7. Graph Isomorphism.
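    The tree-isomorphism topic central to this book lends itself to a short illustration. The sketch below is not taken from the book (the adjacency-list representation and function names are invented here); it implements the classic AHU canonical-form test for rooted trees in plain Python:

    ```python
    def canonical_form(tree, root, parent=None):
        # tree: dict mapping node -> list of neighbours (rooted, undirected).
        children = [c for c in tree[root] if c != parent]
        # Encode each subtree recursively, then sort so the encoding
        # does not depend on the order in which children are stored.
        codes = sorted(canonical_form(tree, c, root) for c in children)
        return "(" + "".join(codes) + ")"

    def rooted_isomorphic(t1, r1, t2, r2):
        # Two rooted trees are isomorphic iff their canonical strings match.
        return canonical_form(t1, r1) == canonical_form(t2, r2)

    # Example: two relabelled copies of the same 4-node rooted tree.
    a = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    b = {0: [2, 3], 2: [0], 3: [0, 1], 1: [3]}
    print(rooted_isomorphic(a, 0, b, 0))  # True
    ```

    Sorting the child encodings is what makes the canonical string label-independent; the same idea extends to unrooted trees by rooting both at their centers.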

    1 in stock

    £59.99

  • On the Epistemology of Data Science: Conceptual

    Springer Nature Switzerland AG On the Epistemology of Data Science: Conceptual

    1 in stock

    Book Synopsis
    This book addresses controversies concerning the epistemological foundations of data science: Is it a genuine science? Or is data science merely some inferior practice that can at best contribute to the scientific enterprise, but cannot stand on its own? The author proposes a coherent conceptual framework with which these questions can be rigorously addressed. Readers will discover a defense of inductivism and consideration of the arguments against it: an epistemology of data science more or less by definition has to be inductivist, given that data science starts with the data. As an alternative to enumerative approaches, the author endorses Federica Russo’s recent call for a variational rationale in inductive methodology. Chapters then address some of the key concepts of an inductivist methodology including causation, probability and analogy, before outlining an inductivist framework. The inductivist framework is shown to be adequate and useful for an analysis of the epistemological foundations of data science. The author points out that many aspects of the variational rationale are present in algorithms commonly used in data science. Introductions to algorithms and brief case studies of successful data science such as machine translation are included. Data science is located with reference to several crucial distinctions regarding different kinds of scientific practices, including between exploratory and theory-driven experimentation, and between phenomenological and theoretical science. Computer scientists, philosophers and data scientists of various disciplines will find this philosophical perspective and conceptual framework of great interest, especially as a starting point for further in-depth analysis of algorithms used in data science.
    Trade Review
    “Readers are taken on a journey where they will discover step-by-step methodologies for data-driven research. Judiciously, each key concept of data science is concisely defined, and examples and the when, why, and how to use them are provided. … I fully recommend it.” (Thierry Edoh, Computing Reviews, February 7, 2023)
    Table of Contents
    Preface.- Chapter 1. Introduction.- Chapter 2. Inductivism.- Chapter 3. Phenomenological Science.- Chapter 4. Variational Induction.- Chapter 5. Causation As Difference Making.- Chapter 6. Evidence.- Chapter 7. Concept Formation.- Chapter 8. Analogy.- Chapter 9. Causal Probability.- Chapter 10. Conclusion.- Index.

    1 in stock

    £85.49

  • Information and Communications Security: 23rd International Conference, ICICS 2021, Chongqing, China, November 19-21, 2021, Proceedings, Part II

    Springer Nature Switzerland AG Information and Communications Security: 23rd International Conference, ICICS 2021, Chongqing, China, November 19-21, 2021, Proceedings, Part II

    15 in stock

    Book Synopsis
    This two-volume set LNCS 12918 - 12919 constitutes the refereed proceedings of the 23rd International Conference on Information and Communications Security, ICICS 2021, held in Chongqing, China, in November 2021. The 49 revised full papers presented in the book were carefully selected from 182 submissions. The papers in Part II are organized in the following thematic blocks: machine learning security; multimedia security; security analysis; post-quantum cryptography; applied cryptography.
    Table of Contents
    Machine Learning Security.- Multimedia Security.- Security Analysis.- Post-Quantum Cryptography.- Applied Cryptography.

    15 in stock

    £61.74

  • Elements of the General Theory of Optimal

    Springer Nature Switzerland AG Elements of the General Theory of Optimal

    1 in stock

    Book Synopsis
    In this monograph, the authors develop a methodology that allows one to construct and substantiate optimal and suboptimal algorithms to solve problems in computational and applied mathematics. Throughout the book, the authors explore well-known and proposed algorithms with a view toward analyzing their quality and the range of their efficiency. The approach is based on several theories: of computation, of optimal algorithms, and of interpolation, interlination, and interflatation of functions, among others. Theoretical principles and practical aspects of testing the quality of algorithms and applied software are a major component of the exposition. The computer technology used to construct T-efficient algorithms for computing ε-solutions to problems of computational and applied mathematics is also explored. This monograph is aimed at scientists, postgraduate students, advanced students, and specialists dealing with issues of developing algorithmic and software support for the solution of problems of computational and applied mathematics.
    Table of Contents
    Preface.- Introduction.- List of symbols and abbreviations.- 1. Elements of the computing theory.- 2. Theories of computational complexity.- 3. Interlination of functions.- 4. Interflatation of functions.- 5. Cubature formulae using interlination functions.- 6. Testing the quality of algorithm programs.- 7. Computer technologies of solving problems of computational and applied mathematics with fixed values of quality characteristics.- Bibliography.- Index.- About the Authors.

    1 in stock

    £104.49

  • A Quantum Computation Workbook

    Springer Nature Switzerland AG A Quantum Computation Workbook

    1 in stock

    Book Synopsis
    Teaching quantum computation and information is notoriously difficult, because it requires covering subjects from various fields of science, organizing these subjects consistently in a unified way despite their tendency to favor their specific languages, and overcoming the subjects’ abstract and theoretical natures, which offer few examples of actual realizations. In this book, we have organized all the subjects required to understand the principles of quantum computation and information processing in a manner suited to physics, mathematics, and engineering courses as early as undergraduate studies. In addition, we provide a supporting package of quantum simulation software from Wolfram Mathematica, specialists in symbolic calculation software. Throughout the book’s main text, demonstrations are provided that use the software package, allowing the students to deepen their understanding of each subject through self-practice. Readers can change the code so as to experiment with their own ideas and contemplate possible applications. The information in this book reflects many years of experience teaching quantum computation and information. The quantum simulation-based demonstrations and the unified organization of the subjects are both time-tested and have received very positive responses from the students who have experienced them.
    Trade Review
    “The book provides an extensive bibliography and index. … this volume is well suited for an advanced graduate or first-year PhD course in quantum mechanics, with ample time available for self-study.” (L.-F. Pau, Computing Reviews, January 30, 2023)
    Table of Contents
    1 The Postulates of Quantum Mechanics.- 2 Virtual Realization of Quantum Computers.- 3 Quantum Computation: Overview.- 4 Quantum Algorithms: Introduction.- 5 Quantum Information: Introduction.- 6 Quantum Error Correction Codes: Introduction.- Appendix A Linear Algebra.- Appendix B Mathematica Application Q3.- References.

    1 in stock

    £44.99

  • Cyber-Physical Systems: Intelligent Models and

    Springer Nature Switzerland AG Cyber-Physical Systems: Intelligent Models and

    15 in stock

    Book Synopsis
    This book is devoted to intelligent models and algorithms as the core components of cyber-physical systems. The complexity of developing and deploying cyber-physical systems requires new approaches to their modelling and design. The book presents results in the field of modelling technologies that leverage artificial intelligence, including artificial general intelligence (AGI) and weak artificial intelligence, and provides scientific, practical, and methodological approaches based on bio-inspired methods, fuzzy models and algorithms, predictive modelling, and computer vision and image processing. The target audience of the book includes practitioners, enterprise representatives, scientists, and PhD and Master's students who perform scientific research or build applications of intelligent models and algorithms in cyber-physical systems for various domains.
    Table of Contents
    Bio-inspired modelling.- Fuzzy models and algorithms.- Predictive modelling.- Computer Vision and Image Processing.

    15 in stock

    £116.99

  • Modeling Reality with Mathematics

    Springer Nature Switzerland AG Modeling Reality with Mathematics

    5 in stock

    Book Synopsis
    Simulating the behavior of a human heart, predicting tomorrow's weather, optimizing the aerodynamics of a sailboat, finding the ideal cooking time for a hamburger: to solve these problems, cardiologists, meteorologists, sportsmen, and engineers can count on math help. This book will lead you to the discovery of a magical world, made up of equations, in which a huge variety of important problems for our life can find useful answers.
    Trade Review
    “By providing tools and current examples on modeling issues ... the book makes a contribution answering the becoming more and more prevalent presence of Mathematics in our daily lives. The examples come essentially from the physical and natural sciences, but the book can be relied on in other areas, including the humanities, aside seminal works on mathematical modeling ... . The journey consists in eight rigorous and enjoying chapters.” (Lisa Morhaim, zbMATH 1519.00001, 2023)
    Table of Contents
    1 The model, aka the magic box.- 2 Weather Forecast Models.- 3 Epidemics: the Mathematics of Contagion.- 4 Mathematical Hearth.- 5 Mathematics in the Wind.- 6 Flying on Sun Power.- 7 The taste for Mathematics.- 8 Conclusions.

    5 in stock

    £17.09

  • Computer Vision: Statistical Models for Marr's

    Springer Nature Switzerland AG Computer Vision: Statistical Models for Marr's

    15 in stock

    Book Synopsis
    As the first book of a three-part series, this book is offered as a tribute to pioneers in vision, such as Béla Julesz, David Marr, King-Sun Fu, Ulf Grenander, and David Mumford. The authors hope to provide foundation and, perhaps more importantly, further inspiration for continued research in vision. This book covers David Marr's paradigm and various underlying statistical models for vision. The mathematical framework herein integrates three regimes of models (low-, mid-, and high-entropy regimes) and provides foundation for research in visual coding, recognition, and cognition. Concepts are first explained for understanding and then supported by findings in psychology and neuroscience, after which they are established by statistical models and associated learning and inference algorithms. A reader will gain a unified, cross-disciplinary view of research in vision and will accrue knowledge spanning from psychology to neuroscience to statistics.
    Table of Contents
    Preface.- About the Authors.- 1 Introduction.- 2 Statistics of Natural Images.- 3 Textures.- 4 Textons.- 5 Gestalt Laws and Perceptual Organizations.- 6 Primal Sketch: Integrating Textures and Textons.- 7 2.1D Sketch and Layered Representation.- 8 2.5D Sketch and Depth Maps.- 9 Learning about Information Projection.- 10 Information Scaling and Regimes of Models.- 11 Deep Images and Models.- 12 A Tale of Three Families: Discriminative, Generative and Descriptive Models.- Bibliography

    15 in stock

    £61.74

  • Emerging Technology Trends in Internet of Things and Computing: First International Conference, TIOTC 2021, Erbil, Iraq, June 6–8, 2021, Revised Selected Papers

    Springer Nature Switzerland AG Emerging Technology Trends in Internet of Things and Computing: First International Conference, TIOTC 2021, Erbil, Iraq, June 6–8, 2021, Revised Selected Papers

    1 in stock

    Book Synopsis
    This volume constitutes selected papers presented at the First International Conference on Emerging Technology Trends in IoT and Computing, TIOTC 2021, held in Erbil, Iraq, in June 2021. The 26 full papers were thoroughly reviewed and selected from 182 submissions. The papers are organized in the following topical sections: Internet of Things (IoT): services and applications; Internet of Things (IoT) in the healthcare industry; IoT in networks, communications and distributed computing; real-world application fields in information science and technology.
    Table of Contents
    Internet of Things (IoT): Services and Applications.- Internet of Things (IoT) in Healthcare Industry.- IoT in Networks, Communications and Distributed Computing.- Real-World Application Fields in Information Science and Technology.

    1 in stock

    £62.99

  • Tools and Algorithms for the Construction and Analysis of Systems: 28th International Conference, TACAS 2022, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022, Munich, Germany, April 2–7, 2022, P

    Springer Nature Switzerland AG Tools and Algorithms for the Construction and Analysis of Systems: 28th International Conference, TACAS 2022, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022, Munich, Germany, April 2–7, 2022, P

    15 in stock

    Book Synopsis
    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.
    Table of Contents
    Probabilistic Systems.- A Probabilistic Logic for Verifying Continuous-time Markov Chains.- Under-Approximating Expected Total Rewards in POMDPs.- Correct Probabilistic Model Checking with Floating-Point Arithmetic.- Correlated Equilibria and Fairness in Concurrent Stochastic Games.- Omega Automata.- A Direct Symbolic Algorithm for Solving Stochastic Rabin Games.- Practical Applications of the Alternating Cycle Decomposition.- Sky Is Not the Limit: Tighter Rank Bounds for Elevator Automata in Büchi Automata Complementation.- On-The-Fly Solving for Symbolic Parity Games.- Equivalence Checking.- Distributed Coalgebraic Partition Refinement.- From Bounded Checking to Verification of Equivalence via Symbolic Up-to Techniques.- Equivalence Checking for Orthocomplemented Bisemilattices in Log-Linear Time.- Monitoring and Analysis.- A Theoretical Analysis of Random Regression Test Prioritization.- Verified First-Order Monitoring with Recursive Rules.- Maximizing Branch Coverage with Constrained Horn Clauses.- Efficient Analysis of Cyclic Redundancy Architectures via Boolean Fault Propagation.- Tools | Optimizations, Repair and Explainability.- Adiar: Binary Decision Diagrams in External Memory.- Forest GUMP: A Tool for Explanation.- Alpinist: an Annotation-Aware GPU Program Optimizer.- Automatic Repair for Network Programs.- 11th Competition on Software Verification | SV-COMP 2022.- Progress on Software Verification: SV-COMP 2022.- AProVE: Non-Termination Witnesses for C Programs (Competition Contribution).- BRICK: Path Enumeration Based Bounded Reachability Checking of C Program (Competition Contribution).- A Prototype for Data Race Detection in CSeq 3 (Competition Contribution).- Dartagnan: SMT-based Violation Witness Validation (Competition Contribution).- Deagle: An SMT-based Verifier for Multi-threaded Programs (Competition Contribution).- The Static Analyzer Frama-C in SV-COMP (Competition Contribution).- GDart: An Ensemble of Tools for Dynamic Symbolic Execution on the Java Virtual Machine (Competition Contribution).- Graves-CPA: A Graph-Attention Verifier Selector (Competition Contribution).- GWIT: A Witness Validator for Java based on GraalVM (Competition Contribution).- The Static Analyzer Infer in SV-COMP (Competition Contribution).- LART: Compiled Abstract Execution (Competition Contribution).- Symbiotic 9: String Analysis and Backward Symbolic Execution with Loop Folding (Competition Contribution).- Symbiotic-Witch: A Klee-Based Violation Witness Checker (Competition Contribution).- Theta: portfolio of CEGAR-based analyses with dynamic algorithm selection.- Ultimate GemCutter and the Axes of Generalization (Competition Contribution).- Wit4Java: A Violation-Witness Validator for Java Verifiers (Competition Contribution).

    15 in stock

    £34.99

  • Fundamentals of Object Databases

    Springer International Publishing AG Fundamentals of Object Databases

    Out of stock

    Book Synopsis
    Object-oriented databases were originally developed as an alternative to relational database technology for the representation, storage, and access of non-traditional data forms that were increasingly found in advanced applications of database technology. After much debate regarding object-oriented versus relational database technology, object-oriented extensions were eventually incorporated into relational technology to create object-relational databases. Both object-oriented databases and object-relational databases, collectively known as object databases, provide inherent support for object features, such as object identity, classes, inheritance hierarchies, and associations between classes using object references. This monograph presents the fundamentals of object databases, with a specific focus on conceptual modeling of object database designs. After an introduction to the fundamental concepts of object-oriented data, the monograph provides a review of object-oriented conceptual modeling techniques using side-by-side Enhanced Entity Relationship diagrams and Unified Modeling Language conceptual class diagrams that feature class hierarchies with specialization constraints and object associations. These object-oriented conceptual models provide the basis for introducing case studies that illustrate the use of object features within the design of object-oriented and object-relational databases. For the object-oriented database perspective, the Object Data Management Group data definition language provides a portable, language-independent specification of an object schema, together with an SQL-like object query language. LINQ (Language INtegrated Query) is presented as a case study of an object query language together with its use in the db4o open-source object-oriented database. For the object-relational perspective, the object-relational features of the SQL standard are presented together with an accompanying case study of the object-relational features of Oracle. For completeness of coverage, an appendix provides a mapping of object-oriented conceptual designs to the relational model and its associated constraints.
    Table of Contents
    List of Figures.- List of Tables.- Introduction to Object Databases.- Object-Oriented Databases.- Object-Relational Databases.
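    As a loose illustration of the declarative object-query style this synopsis describes (a hedged Python analogy only, not ODMG OQL or actual LINQ syntax; the Employee class and sample data are invented here), the same filter/order/project shape can be expressed over in-memory objects:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        dept: str
        salary: int

    # Invented sample data standing in for a persistent object store.
    staff = [
        Employee("Ada", "R&D", 90),
        Employee("Ben", "Sales", 60),
        Employee("Cyd", "R&D", 75),
    ]

    # LINQ-ish query: from e in staff where e.dept == "R&D"
    #                 orderby e.salary descending select e.name
    names = [e.name
             for e in sorted(staff, key=lambda e: -e.salary)
             if e.dept == "R&D"]
    print(names)  # ['Ada', 'Cyd']
    ```

    The point of an object query language is exactly this: queries range over objects and navigate their attributes directly, with no mapping to rows and joins.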

    Out of stock

    £25.19

  • Perspectives on Business Intelligence

    Springer International Publishing AG Perspectives on Business Intelligence

    Out of stock

    Book Synopsis
    In the 1980s, traditional Business Intelligence (BI) systems focused on the delivery of reports that describe the state of business activities in the past, such as for questions like "How did our sales perform during the last quarter?" A decade later, there was a shift to more interactive content that presented how the business was performing at the present time, answering questions like "How are we doing right now?" Today, BI users are looking into the future: "Given what I did before and how I am currently doing this quarter, how will I do next quarter?" Furthermore, fuelled by the demands of Big Data, BI systems are going through a time of incredible change. Predictive analytics, high volume data, unstructured data, social data, mobile, consumable analytics, and data visualization are all examples of demands and capabilities that have become critical within just the past few years, and are growing at an unprecedented pace. This book introduces research problems and solutions on various aspects central to next-generation BI systems. It begins with a chapter on an industry perspective on how BI has evolved, and discusses how game-changing trends have drastically reshaped the landscape of BI. One of the game changers is the shift toward the consumerization of BI tools. As a result, for BI tools to be successfully used by business users (rather than IT departments), the tools need a business model, rather than a data model. One chapter of the book surveys four different types of business modeling. However, even with the existence of a business model for users to express queries, the data that can meet the needs are still captured within a data model. The next chapter on vivification addresses the problem of closing the gap, which is often significant, between the business and the data models. Moreover, Big Data forces BI systems to integrate and consolidate multiple, and often wildly different, data sources. One chapter gives an overview of several integration architectures for dealing with the challenges that need to be overcome. While the book so far focuses on the usual structured relational data, the remaining chapters turn to unstructured data, an ever-increasing and important component of Big Data. One chapter on information extraction describes methods for dealing with the extraction of relations from free text and the web. Finally, BI users need tools to visualize and interpret new and complex types of information in a way that is compelling, intuitive, but accurate. The last chapter gives an overview of information visualization for decision support and text.
    Table of Contents
    Introduction and the Changing Landscape of Business Intelligence.- BI Game Changers: an Industry Viewpoint.- Business Modeling for BI.- Vivification in BI.- Information Integration in BI.- Information Extraction for BI.- Information Visualization for BI.

    Out of stock

    £25.19

  • Data Processing on FPGAs

    Springer International Publishing AG Data Processing on FPGAs

    Out of stock

    Book SynopsisRoughly a decade ago, power consumption and heat dissipation concerns forced the semiconductor industry to radically change its course, shifting from sequential to parallel computing. Unfortunately, improving performance of applications has now become much more difficult than in the good old days of frequency scaling. This is also affecting databases and data processing applications in general, and has led to the popularity of so-called data appliances—specialized data processing engines, where software and hardware are sold together in a closed box. Field-programmable gate arrays (FPGAs) increasingly play an important role in such systems. FPGAs are attractive because the performance gains of specialized hardware can be significant, while power consumption is much less than that of commodity processors. On the other hand, FPGAs are way more flexible than hard-wired circuits (ASICs) and can be integrated into complex systems in many different ways, e.g., directly in the network for a high-frequency trading application. This book gives an introduction to FPGA technology targeted at a database audience. In the first few chapters, we explain in detail the inner workings of FPGAs. Then we discuss techniques and design patterns that help mapping algorithms to FPGA hardware so that the inherent parallelism of these devices can be leveraged in an optimal way. Finally, the book will illustrate a number of concrete examples that exploit different advantages of FPGAs for data processing. 
Table of Contents: Preface.- Introduction.- A Primer in Hardware Design.- FPGAs.- FPGA Programming Models.- Data Stream Processing.- Accelerated DB Operators.- Secure Data Processing.- Conclusions.- Bibliography.- Authors' Biographies.- Index.


    £25.19

  • Information and Influence Propagation in Social Networks

    Springer International Publishing AG Information and Influence Propagation in Social Networks

    Out of stock

Book Synopsis: Research on social networks has exploded over the last decade. To a large extent, this has been fueled by the spectacular growth of social media and online social networking sites, which continue to grow at a very fast pace, as well as by the increasing availability of very large social network datasets for research purposes. A rich body of this research has been devoted to the analysis of the propagation of information, influence, innovations, infections, practices, and customs through networks. Can we build models to explain how these propagations occur? How can we validate our models against available real datasets consisting of a social network and propagation traces that occurred in the past? These are just some of the questions studied by researchers in this area. Information propagation models find applications in viral marketing, outbreak detection, finding key blog posts to read in order to catch important stories, finding leaders or trendsetters, information feed ranking, etc. A number of algorithmic problems arising in these applications have been abstracted and studied extensively by researchers under the rubric of influence maximization. This book starts with a detailed description of well-established diffusion models, including the independent cascade model and the linear threshold model, that have been successful at explaining propagation phenomena. We describe their properties as well as numerous extensions, introducing aspects such as competition, budget, and time-criticality, among many others. We delve deep into the key problem of influence maximization, which selects key individuals to activate in order to influence a large fraction of a network.
Influence maximization in classic diffusion models, including both the independent cascade and the linear threshold models, is computationally intractable, more precisely #P-hard, and we describe several approximation algorithms and scalable heuristics that have been proposed in the literature. Finally, we also deal with key issues that must be tackled to turn this research into practice, such as learning the strength with which individuals in a network influence each other, as well as practical aspects of this research, including the availability of datasets and software tools. We conclude with a discussion of various research problems that remain open, both from a technical perspective and from the viewpoint of transferring the results of research into industry-strength applications.
Table of Contents: Acknowledgments.- Introduction.- Stochastic Diffusion Models.- Influence Maximization.- Extensions to Diffusion Modeling and Influence Maximization.- Learning Propagation Models.- Data and Software for Information/Influence Propagation Research.- Conclusion and Challenges.- Bibliography.- Authors' Biographies.- Index.
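To make the synopsis concrete, here is a minimal Python sketch of the independent cascade model and the classic greedy seed-selection heuristic it mentions. This is an illustration of the general technique, not code from the book; the graph representation, propagation probability, and Monte Carlo run count are all illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, prob=0.1, rng=random):
    """One Monte Carlo run of the independent cascade model.
    graph: dict mapping node -> list of out-neighbours.
    Each newly activated node gets one chance to activate each neighbour."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, prob=0.1, runs=200, seed=0):
    """Greedy influence maximization: repeatedly add the node with the
    largest marginal gain in estimated expected spread."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in graph:
            if v in chosen:
                continue
            spread = sum(simulate_ic(graph, chosen + [v], prob, rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = v, spread
        chosen.append(best)
    return chosen
```

Because the expected spread is monotone and submodular in these models, this greedy procedure carries the well-known (1 - 1/e) approximation guarantee, which is why it is the standard baseline in this literature.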


    £26.99

  • Similarity Joins in Relational Database Systems

    Springer International Publishing AG Similarity Joins in Relational Database Systems

    1 in stock

Book Synopsis: State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects, equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computations. The basic idea is to decompose complex objects into sets of tokens that can be compared efficiently. Token-based distances are used to compute an approximation of the edit distance and prune expensive edit distance calculations. A key observation when computing similarity joins is that many of the object pairs for which the similarity is computed are very different from each other. Filters exploit this property to improve the performance of similarity joins. A filter preprocesses the input data sets and produces a set of candidate pairs; the distance function is evaluated on the candidate pairs only. We describe the essential query processing techniques for filters based on lower and upper bounds. For token equality joins we describe prefix, size, positional, and partitioning filters, which can be used to avoid the computation of small intersections that are not needed since the similarity would be too low.
Table of Contents: Preface.- Acknowledgments.- Introduction.- Data Types.- Edit-Based Distances.- Token-Based Distances.- Query Processing Techniques.- Filters for Token Equality Joins.- Conclusion.- Bibliography.- Authors' Biographies.- Index.
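As a toy illustration of the token-decomposition and prefix-filter ideas the synopsis describes (a simplified sketch, not the book's actual algorithms), one can run a similarity self-join on q-gram token sets in a few lines of Python. The prefix length used here is a conservative simplification of the tight bound.

```python
def qgrams(s, q=2):
    """Decompose a string into its set of q-gram tokens."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def prefix_filter_join(strings, threshold=0.7, q=2):
    """Self-join: return pairs whose q-gram Jaccard similarity >= threshold.
    The prefix filter indexes only the first few tokens of each record
    (in a global token order); two records can reach the threshold only
    if their prefixes share at least one token, so small intersections
    are never computed."""
    token_sets = {s: sorted(qgrams(s, q)) for s in strings}  # global order
    index = {}          # token -> records whose prefix contains it
    candidates = set()
    for s, toks in token_sets.items():
        n = len(toks)
        # conservative prefix length: n - floor(threshold * n) + 1 tokens
        plen = n - int(threshold * n) + 1 if n else 0
        for t in toks[:plen]:
            for other in index.get(t, []):
                candidates.add((other, s))
            index.setdefault(t, []).append(s)
    # verify the surviving candidate pairs with the exact similarity
    return [(a, b) for a, b in candidates
            if jaccard(set(token_sets[a]), set(token_sets[b])) >= threshold]
```

The filter-then-verify shape mirrors the book's framing: the cheap prefix index produces candidate pairs, and the expensive distance function runs only on those.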


    £26.59

  • Veracity of Data

    Springer International Publishing AG Veracity of Data

    Out of stock

Book Synopsis: On the Web, a massive amount of user-generated content is available through various channels (e.g., texts, tweets, Web tables, databases, multimedia-sharing platforms, etc.). Conflicting information, rumors, and erroneous and fake content can easily spread across multiple sources, making it hard to distinguish between what is true and what is not. This book gives an overview of fundamental issues and recent contributions for ascertaining the veracity of data in the era of Big Data. The text is organized into six chapters, focusing on structured data extracted from texts. Chapter 1 introduces the problem of ascertaining the veracity of data in a multi-source and evolving context. Issues related to information extraction are presented in Chapter 2. Current truth discovery computation algorithms are presented in detail in Chapter 3, followed by practical techniques for evaluating data source reputation and authoritativeness in Chapter 4. The theoretical foundations and various approaches for modeling the diffusion of misinformation in networked systems are studied in Chapter 5. Finally, truth discovery computation from extracted data in a dynamic context of misinformation propagation raises interesting challenges that are explored in Chapter 6. This text is intended for a seminar course at the graduate level. It is also intended to serve as a useful resource for researchers and practitioners interested in the study of fact-checking, truth discovery, or rumor spreading.
Table of Contents: Introduction to Data Veracity.- Information Extraction.- Truth Discovery Computation.- Trust Computation.- Misinformation Dynamics.- Transdisciplinary Challenges of Truth Discovery.- Bibliography.- Authors' Biographies.
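To give a flavor of what a truth discovery computation (the topic of Chapter 3) can look like, here is a toy iterative fixpoint of my own construction, not an algorithm from the book: claim confidence and source trust are estimated from each other until they stabilize.

```python
def truth_discovery(claims, iterations=10):
    """Minimal iterative truth discovery: a claim's confidence is the
    summed trust of its supporting sources, normalized per object, and a
    source's trust is the average confidence of the claims it makes.
    claims: iterable of (source, object, value) triples."""
    trust = {s: 1.0 for s, _, _ in claims}      # start with uniform trust
    conf = {}
    for _ in range(iterations):
        # group supporting sources by (object, value) claim
        support = {}
        for s, o, v in claims:
            support.setdefault((o, v), []).append(s)
        conf = {key: sum(trust[s] for s in ss) for key, ss in support.items()}
        # normalize confidences per object so competing values sum to 1
        totals = {}
        for (o, v), c in conf.items():
            totals[o] = totals.get(o, 0.0) + c
        conf = {(o, v): c / totals[o] for (o, v), c in conf.items()}
        # a source's trust is the average confidence of its claims
        by_source = {}
        for s, o, v in claims:
            by_source.setdefault(s, []).append(conf[(o, v)])
        trust = {s: sum(cs) / len(cs) for s, cs in by_source.items()}
    # pick the highest-confidence value per object
    best = {}
    for (o, v), c in conf.items():
        if o not in best or c > best[o][1]:
            best[o] = (v, c)
    return {o: v for o, (v, _) in best.items()}, trust
```

Even this toy version exhibits the characteristic behavior: agreement among sources amplifies their trust, and majority values pull ahead of minority ones over the iterations.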


    £31.49

  • Instant Recovery with Write-Ahead Logging

    Springer International Publishing AG Instant Recovery with Write-Ahead Logging

    Out of stock

Book Synopsis: Traditional theory and practice of write-ahead logging and of database recovery focus on three failure classes: transaction failures (typically due to deadlocks) resolved by transaction rollback; system failures (typically power or software faults) resolved by restart with log analysis, "redo," and "undo" phases; and media failures (typically hardware faults) resolved by restore operations that combine multiple types of backups and log replay. The recent addition of single-page failures and single-page recovery has opened new opportunities far beyond the original aim of immediate, lossless repair of single-page wear-out in novel or traditional storage hardware. In the contexts of system and media failures, efficient single-page recovery enables on-demand incremental "redo" and "undo" as part of system restart or media restore operations. This can give the illusion of practically instantaneous restart and restore: instant restart permits processing new queries and updates seconds after system reboot and instant restore permits resuming queries and updates on empty replacement media as if those were already fully recovered. In the context of node and network failures, instant restart and instant restore combine to enable practically instant failover from a failing database node to one holding merely an out-of-date backup and a log archive, yet without loss of data, updates, or transactional integrity. In addition to these instant recovery techniques, the discussion introduces self-repairing indexes and much faster offline restore operations, which impose no slowdown in backup operations and hardly any slowdown in log archiving operations. The new restore techniques also render differential and incremental backups obsolete, complete backup commands on a database server practically instantly, and even permit taking full up-to-date backups without imposing any load on the database server.
Compared to the first edition of this book, this second edition adds sections on applications of single-page repair, instant restart, single-pass restore, and instant restore. Moreover, it adds sections on instant failover among nodes in a cluster, applications of instant failover, recovery for file systems and data files, and the performance of instant restart and instant restore.
Table of Contents: Preface.- Acknowledgments.- Introduction.- Related Prior Work.- Single-Page Recovery.- Applications of Single-Page Recovery.- Instant Restart after a System Failure.- Applications of Instant Restart.- Single-Pass Restore.- Applications of Single-Pass Restore.- Instant Restore after a Media Failure.- Applications of Instant Restore.- Multiple Page, System, and Media Failures.- Instant Failover.- Applications of Instant Failover.- File Systems and Data Files.- Performance and Scalability.- Conclusions.- References.- Author Biographies.
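The single-page "redo" idea the synopsis builds on can be sketched abstractly: replay only those log records for a page that are newer than the page's last persisted log sequence number. The following Python toy (with invented names, not the book's actual mechanism) captures just that invariant.

```python
from dataclasses import dataclass

@dataclass
class LogRecord:
    lsn: int        # log sequence number, increasing over time
    page_id: int    # page this record applies to
    redo: dict      # key -> value updates to re-apply

def recover_page(page, page_lsn, log):
    """Single-page 'redo': replay, in LSN order, exactly those records
    for this page whose LSN is newer than the page's last persisted LSN.
    Older records are skipped because their effects are already on disk."""
    for rec in sorted(log, key=lambda r: r.lsn):
        if rec.page_id == page["id"] and rec.lsn > page_lsn:
            page["data"].update(rec.redo)
            page_lsn = rec.lsn
    return page, page_lsn
```

Because recovery of one page needs only that page's log records, a system can repair pages on demand while already serving queries, which is the intuition behind the "instant" restart and restore techniques described above.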


    £26.99

  • Databases on Modern Hardware

    Springer International Publishing AG Databases on Modern Hardware

    Out of stock

Book Synopsis: Data management systems enable various influential applications, from high-performance online services (e.g., social networks like Twitter and Facebook or financial markets) to big data analytics (e.g., scientific exploration, sensor networks, business intelligence). As a result, data management systems have been one of the main drivers of innovation in the database and computer architecture communities for several decades. Recent hardware trends require software to take advantage of the abundant parallelism in modern and future hardware. The traditional design of data management systems, however, faces inherent scalability problems due to its tightly coupled components. In addition, it cannot exploit the full capability of the aggressive micro-architectural features of modern processors. As a result, today's most commonly used server types remain largely underutilized, leading to a huge waste of hardware resources and energy. In this book, we shed light on the challenges of running a DBMS on modern multicore hardware. We divide the material into two dimensions of scalability: implicit/vertical and explicit/horizontal. The first part of the book focuses on the vertical dimension: it describes the instruction- and data-level parallelism opportunities in a core, coming from both the hardware and the software side. In addition, it examines the sources of under-utilization in a modern processor and presents insights and hardware/software techniques to better exploit the microarchitectural resources of a processor by improving cache locality at the right level of the memory hierarchy. The second part focuses on the horizontal dimension, i.e., scalability bottlenecks of database applications at the level of multicore and multisocket multicore architectures.
It first presents a systematic way of eliminating such bottlenecks in online transaction processing workloads, which is based on minimizing unbounded communication, and shows several techniques that minimize bottlenecks in major components of database management systems. Then, it demonstrates the data and work sharing opportunities for analytical workloads, and reviews advanced scheduling mechanisms that are aware of nonuniform memory accesses and alleviate bandwidth saturation.
Table of Contents: Introduction.- Exploiting Resources of a Processor Core.- Minimizing Memory Stalls.- Scaling-up OLTP.- Scaling-up OLAP Workloads.- Outlook.- Summary.- Bibliography.- Authors' Biographies.


    £25.19

  • Answering Queries Using Views, Second Edition

    Springer International Publishing AG Answering Queries Using Views, Second Edition

    Out of stock

Book Synopsis: The topic of using views to answer queries has been popular for a few decades now, as it cuts across domains such as query optimization, information integration, data warehousing, website design and, recently, database-as-a-service and data placement in cloud systems. This book assembles foundational work on answering queries using views in a self-contained manner, with an effort to choose material that constitutes the backbone of the research. It presents efficient algorithms and covers the following problems: query containment; rewriting queries using views in various logical languages; equivalent rewritings and maximally contained rewritings; and computing certain answers in the data-integration and data-exchange settings. The query languages considered are fragments of SQL, in particular select-project-join queries, also called conjunctive queries (with or without arithmetic comparisons or negation), and aggregate SQL queries. This second edition includes two new chapters on tree-like data and the corresponding query languages. Chapter 8 presents the data model for XML documents and the XPath query language, and Chapter 9 gives a theoretical presentation of a tree-like data model and query language in which the tuples of a relation share a tree-structured schema for that relation and the query language is a dialect of SQL with evaluation techniques modified to fit the richer schema.
Table of Contents: Preface to the First Edition.- Preface to the Second Edition.- Acknowledgments.- Queries and Views.- Query Containment and Equivalence.- Finding Equivalent Rewritings.- Maximally Contained Rewritings (MCRs).- Answering Queries in Presence of Dependencies.- Answering Queries in Data Exchange.- Answering Queries Using Views.- XPath Queries and Views.- Tree-Structured Records Queried with SQL Dialect.- Bibliographical Notes for Chapters 1-7.- Conclusion for Chapters 1-7.- Bibliography.- Authors' Biographies.
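Query containment, the first problem on the list, has a classic decision procedure for conjunctive queries (Chandra and Merlin): Q1 is contained in Q2 iff there is a homomorphism from Q2 into Q1 that maps head to head. A brute-force Python sketch follows; the tuple-based query encoding is an illustrative assumption, not the book's notation.

```python
from itertools import product

def is_var(t):
    """Convention: lowercase strings are variables, everything else a constant."""
    return isinstance(t, str) and t.islower()

def contained_in(q1, q2):
    """Decide Q1 subset-of Q2 for conjunctive queries by searching for a
    homomorphism from Q2 into Q1. A query is (head_vars, atoms); an atom
    is (predicate, (term, ...))."""
    head1, atoms1 = q1
    head2, atoms2 = q2
    vars2 = sorted({t for _, args in atoms2 for t in args if is_var(t)})
    terms1 = sorted({t for _, args in atoms1 for t in args})
    body1 = {(p, args) for p, args in atoms1}
    # try every mapping of Q2's variables into Q1's terms
    for image in product(terms1, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        mapped_atoms = [(p, tuple(h.get(t, t) for t in args)) for p, args in atoms2]
        mapped_head = tuple(h.get(t, t) for t in head2)
        if mapped_head == tuple(head1) and all(a in body1 for a in mapped_atoms):
            return True
    return False
```

For example, Q1(x) :- R(x,y), R(y,x) is contained in Q2(x) :- R(x,y) (every answer over a symmetric pair is also an answer to the more permissive query), but not vice versa; the homomorphism search confirms exactly this asymmetry.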


    £62.99

  • Fault-Tolerant Distributed Transactions on Blockchain

    Springer International Publishing AG Fault-Tolerant Distributed Transactions on Blockchain

    Out of stock

Book Synopsis: Since the introduction of Bitcoin—the first widespread application driven by blockchain—the interest of the public and private sectors in blockchain has skyrocketed. In recent years, blockchain-based fabrics have been used to address challenges in diverse fields such as trade, food production, property rights, identity management, aid delivery, health care, and fraud prevention. This widespread interest follows from the fundamental concepts on which blockchains are built, which together embed the notion of trust:
1. Blockchains provide data transparency. Data in a blockchain is stored in the form of a ledger, which contains an ordered history of all the transactions. This facilitates oversight and auditing.
2. Blockchains ensure data integrity by using strong cryptographic primitives. This guarantees that transactions accepted by the blockchain are authenticated by their issuers, are immutable, and cannot be repudiated by those issuers. This ensures accountability.
3. Blockchains are decentralized, democratic, and resilient. They use consensus-based replication to decentralize the ledger among many independent participants. Thus, a blockchain can operate fully decentralized and does not require trust in a single authority. Additions to the chain are performed by consensus, in which all participants have a democratic voice in maintaining the integrity of the blockchain. Due to the use of replication and consensus, blockchains are also highly resilient to malicious attacks, even when a significant portion of the participants are malicious. This further increases the opportunity for fairness and equity through democratization.
These fundamental concepts and the technologies behind them—a generic ledger-based data model, cryptographically ensured data integrity, and consensus-based replication—prove to be a powerful and inspiring combination, a catalyst to promote computational trust.
In this book, we present an in-depth study of blockchain, unraveling its revolutionary promise to instill computational trust in society, carefully tailored to a broad audience including students, researchers, and practitioners. We offer a comprehensive overview of the theoretical limitations and practical usability of consensus protocols while examining the diverse landscape of how blockchains are manifested in their permissioned and permissionless forms.
Table of Contents: Preface.- Introduction.- Practical Byzantine Fault-Tolerant Consensus.- Beyond the Design of PBFT.- Toward Scalable Blockchains.- Permissioned Blockchains.- Permissionless Blockchains.- Bibliography.- Authors' Biographies.
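The interplay of the ledger model and cryptographic integrity can be sketched with a minimal hash-chained ledger in Python. This is a toy illustration of the general idea with invented names, not the design of any particular blockchain, and it deliberately omits consensus and signatures.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, excluding its own stored hash field."""
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the previous block's hash, so any
    later tampering with history invalidates every subsequent link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev,
             "transactions": list(transactions)}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def verify_chain(chain):
    """Check both the hash links and each block's own hash."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev or block["hash"] != block_hash(block):
            return False
    return True
```

Tampering with any past transaction changes that block's recomputed hash, so verification fails from that point onward; this is the immutability property that point 2 of the synopsis attributes to cryptographic primitives.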


    £49.49

© 2026 Book Curl
