Pattern Recognition Books

149 products


  • Design Patterns

    Pearson Education (US)

    10 in stock

    Book Synopsis: Dr. Erich Gamma is technical director at the Software Technology Center of Object Technology International in Zurich, Switzerland. Dr. Richard Helm is a member of the Object Technology Practice Group in the IBM Consulting Group in Sydney, Australia. Dr. Ralph Johnson is a faculty member at the University of Illinois at Urbana-Champaign's Computer Science Department. John Vlissides is a member of the research staff at the IBM T. J. Watson Research Center in Hawthorne, New York. He has practiced object-oriented technology for more than a decade as a designer, implementer, researcher, lecturer, and consultant. In addition to co-authoring Design Patterns: Elements of Reusable Object-Oriented Software, he is co-editor of the book Pattern Languages of Program Design 2 (both from Addison-Wesley). He and the other co-authors of Design Patterns are recipients of the 1998 Dr. Dobb's Journal Excellence in Programming Award.

    Table of Contents: 1. Introduction. 2. A Case Study: Designing a Document Editor. 3. Creational Patterns. 4. Structural Patterns. 5. Behavioral Patterns. 6. Conclusion. Appendix A: Glossary. Appendix B: Guide to Notation. Appendix C: Foundation Classes. Bibliography. Index.

    £44.09

  • Linear Algebra and Learning from Data

    Wellesley-Cambridge Press, U.S.

    1 in stock

    Book Synopsis: Linear algebra and the foundations of deep learning, together at last! From Professor Gilbert Strang, acclaimed author of Introduction to Linear Algebra, comes Linear Algebra and Learning from Data, the first textbook that teaches linear algebra together with deep learning and neural nets. This readable yet rigorous textbook contains a complete course in the linear algebra and related mathematics that students need to know to get to grips with learning from data. Included are: the four fundamental subspaces, singular value decompositions, special matrices, large matrix computation techniques, compressed sensing, probability and statistics, optimization, the architecture of neural nets, stochastic gradient descent and backpropagation.

    £59.84
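
    The Strang synopsis above lists the singular value decomposition among its core topics. As a quick illustration (not taken from the book), here is a minimal NumPy sketch of the SVD and the reconstruction identity A = U diag(s) Vᵀ:

```python
import numpy as np

# Minimal SVD sketch (illustrative only; not from the book).
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# numpy returns U, the singular values s (descending), and V transpose.
U, s, Vt = np.linalg.svd(A)

# Multiplying the factors back together recovers the original matrix.
A_rec = U @ np.diag(s) @ Vt
print(np.allclose(A, A_rec))
```

    The same decomposition underlies several of the other listed topics, such as compressed sensing and large matrix computation techniques.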

  • Pattern Recognition and Machine Learning

    Springer

    15 in stock

    Trade Review: From the reviews: "This beautifully produced book is intended for advanced undergraduates, PhD students, and researchers and practitioners, primarily in the machine learning or allied areas...A strong feature is the use of geometric illustration and intuition...This is an impressive and interesting book that might form the basis of several advanced statistics courses. It would be a good choice for a reading group." John Maindonald for the Journal of Statistical Software

    "In this book, aimed at senior undergraduates or beginning graduate students, Bishop provides an authoritative presentation of many of the statistical techniques that have come to be considered part of ‘pattern recognition’ or ‘machine learning’. … This book will serve as an excellent reference. … With its coherent viewpoint, accurate and extensive coverage, and generally good explanations, Bishop’s book is a useful introduction … and a valuable reference for the principle techniques used in these fields." (Radford M. Neal, Technometrics, Vol. 49 (3), August, 2007)

    "This book appears in the Information Science and Statistics Series commissioned by the publishers. … The book appears to have been designed for course teaching, but obviously contains material that readers interested in self-study can use. It is certainly structured for easy use. … For course teachers there is ample backing which includes some 400 exercises. … it does contain important material which can be easily followed without the reader being confined to a pre-determined course of study." (W. R. Howard, Kybernetes, Vol. 36 (2), 2007)

    "Bishop (Microsoft Research, UK) has prepared a marvelous book that provides a comprehensive, 700-page introduction to the fields of pattern recognition and machine learning. Aimed at advanced undergraduates and first-year graduate students, as well as researchers and practitioners, the book assumes knowledge of multivariate calculus and linear algebra … . Summing Up: Highly recommended. Upper-division undergraduates through professionals." (C. Tappert, CHOICE, Vol. 44 (9), May, 2007)

    "The book is structured into 14 main parts and 5 appendices. … The book is aimed at PhD students, researchers and practitioners. It is well-suited for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bio-informatics. Extensive support is provided for course instructors, including more than 400 exercises, lecture slides and a great deal of additional material available at the book’s web site … ." (Ingmar Randvee, Zentralblatt MATH, Vol. 1107 (9), 2007)

    "This new textbook by C. M. Bishop is a brilliant extension of his former book ‘Neural Networks for Pattern Recognition’. It is written for graduate students or scientists doing interdisciplinary work in related fields. … In summary, this textbook is an excellent introduction to classical pattern recognition and machine learning (in the sense of parameter estimation). A large number of very instructive illustrations adds to this value." (H. G. Feichtinger, Monatshefte für Mathematik, Vol. 151 (3), 2007)

    "Author aims this text at advanced undergraduates, beginning graduate students, and researchers new to machine learning and pattern recognition. … Pattern Recognition and Machine Learning provides excellent intuitive descriptions and appropriate-level technical details on modern pattern recognition and machine learning. It can be used to teach a course or for self-study, as well as for a reference. … I strongly recommend it for the intended audience and note that Neal (2007) also has given this text a strong review to complement its strong sales record." (Thomas Burr, Journal of the American Statistical Association, Vol. 103 (482), June, 2008)

    "This accessible monograph seeks to provide a comprehensive introduction to the fields of pattern recognition and machine learning. It presents a unified treatment of well-known statistical pattern recognition techniques. … The book can be used by advanced undergraduates and graduate students … . The illustrative examples and exercises proposed at the end of each chapter are welcome … . The book, which provides several new views, developments and results, is appropriate for both researchers and students who work in machine learning … ." (L. State, ACM Computing Reviews, October, 2008)

    "Chris Bishop’s … technical exposition that is at once lucid and mathematically rigorous. … In more than 700 pages of clear, copiously illustrated text, he develops a common statistical framework that encompasses … machine learning. … it is a textbook, with a wide range of exercises, instructions to tutors on where to go for full solutions, and the color illustrations that have become obligatory in undergraduate texts. … its clarity and comprehensiveness will make it a favorite desktop companion for practicing data analysts." (H. Van Dyke Parunak, ACM Computing Reviews, Vol. 49 (3), March, 2008)

    Table of Contents: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

    £67.49

  • Mathematics for Machine Learning

    Cambridge University Press

    7 in stock

    Book Synopsis: This self-contained textbook introduces all the relevant mathematical concepts needed to understand and use machine learning methods, with a minimum of prerequisites. Topics include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, probability and statistics.

    Trade Review: 'This book provides great coverage of all the basic mathematical concepts for machine learning. I'm looking forward to sharing it with students, colleagues, and anyone interested in building a solid understanding of the fundamentals.' Joelle Pineau, McGill University, Montreal

    'The field of machine learning has grown dramatically in recent years, with an increasingly impressive spectrum of successful applications. This comprehensive text covers the key mathematical concepts that underpin modern machine learning, with a focus on linear algebra, calculus, and probability theory. It will prove valuable both as a tutorial for newcomers to the field, and as a reference text for machine learning researchers and engineers.' Christopher Bishop, Microsoft Research Cambridge

    'This book provides a beautiful exposition of the mathematics underpinning modern machine learning. Highly recommended for anyone wanting a one-stop-shop to acquire a deep understanding of machine learning foundations.' Pieter Abbeel, University of California, Berkeley

    'Really successful are the numerous explanatory illustrations, which help to explain even difficult concepts in a catchy way. Each chapter concludes with many instructive exercises. An outstanding feature of this book is the additional material presented on the website …' Volker H. Schulz, SIAM Review

    Table of Contents: 1. Introduction and motivation; 2. Linear algebra; 3. Analytic geometry; 4. Matrix decompositions; 5. Vector calculus; 6. Probability and distributions; 7. Optimization; 8. When models meet data; 9. Linear regression; 10. Dimensionality reduction with principal component analysis; 11. Density estimation with Gaussian mixture models; 12. Classification with support vector machines.

    £37.99

  • Mining of Massive Datasets

    Cambridge University Press

    1 in stock

    Written by leading authorities in database and Web technologies, this book is essential reading for students and practitioners alike. The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be applied successfully to even the largest datasets. It begins with a discussion of the MapReduce framework, an important tool for parallelizing algorithms automatically. The authors explain the tricks of locality-sensitive hashing and stream-processing algorithms for mining data that arrives too fast for exhaustive processing. Other chapters cover the PageRank idea and related tricks for organizing the Web, the problems of finding frequent itemsets, and clustering. This third edition includes new and extended coverage on decision trees, deep learning, and mining social-network graphs.

    £61.74
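
    The synopsis above mentions locality-sensitive hashing for finding similar items in very large datasets. As a hedged sketch of the underlying idea (MinHash; the code is illustrative and not from the book):

```python
import random

# MinHash sketch (illustrative only): agreement between two signatures
# estimates the Jaccard similarity of the underlying sets, which is the
# basis of locality-sensitive hashing for similar-item search.
random.seed(0)
N_HASHES = 100
# Each salt simulates an independent random hash function via tuple hashing.
SALTS = [random.getrandbits(32) for _ in range(N_HASHES)]

def minhash_signature(items):
    return [min(hash((salt, x)) for x in items) for salt in SALTS]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"the", "quick", "brown", "fox"}
b = {"the", "quick", "brown", "dog"}
true_jaccard = len(a & b) / len(a | b)  # 3 shared out of 5 total = 0.6
estimate = estimate_jaccard(minhash_signature(a), minhash_signature(b))
print(true_jaccard, round(estimate, 2))
```

    With 100 hash functions the estimate typically lands within a few hundredths of the true similarity; real systems then band the signatures so that only likely-similar pairs are compared at all.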

  • Foundations of Data Science

    Cambridge University Press

    2 in stock

    Book Synopsis: This book provides an introduction to the mathematical and algorithmic foundations of data science, including machine learning, high-dimensional geometry, and analysis of large networks. Topics include the counterintuitive nature of data in high dimensions, important linear algebraic techniques such as singular value decomposition, the theory of random walks and Markov chains, the fundamentals of and important algorithms for machine learning, algorithms and analysis for clustering, probabilistic models for large networks, representation learning including topic modelling and non-negative matrix factorization, wavelets and compressed sensing. Important probabilistic techniques are developed including the law of large numbers, tail inequalities, analysis of random projections, generalization guarantees in machine learning, and moment methods for analysis of phase transitions in large random graphs. Additionally, important structural and complexity measures are discussed, such as matrix norms.

    Trade Review: 'This beautifully written text is a scholarly journey through the mathematical and algorithmic foundations of data science. Rigorous but accessible, and with many exercises, it will be a valuable resource for advanced undergraduate and graduate classes.' Peter Bartlett, University of California, Berkeley

    'The rise of the Internet, digital media, and social networks has brought us to the world of data, with vast sources from every corner of society. Data Science - aiming to understand and discover the essences that underlie the complex, multifaceted, and high-dimensional data - has truly become a 'universal discipline', with its multidisciplinary roots, interdisciplinary presence, and societal relevance. This timely and comprehensive book presents - by bringing together from diverse fields of computing - a full spectrum of mathematical, statistical, and algorithmic materials fundamental to data analysis, machine learning, and network modeling. Foundations of Data Science offers an effective roadmap to approach this fascinating discipline and engages more advanced readers with rigorous mathematical/algorithmic theory.' Shang-Hua Teng, University of Southern California

    'A lucid account of mathematical ideas that underlie today's data analysis and machine learning methods. I learnt a lot from it, and I am sure it will become an invaluable reference for many students, researchers and faculty around the world.' Sanjeev Arora, Princeton University, New Jersey

    'It provides a very broad overview of the foundations of data science that should be accessible to well-prepared students with backgrounds in computer science, linear algebra, and probability theory … These are all important topics in the theory of machine learning and it is refreshing to see them introduced together in a textbook at this level.' Brian Borchers, MAA Reviews

    'One plausible measure of [Foundations of Data Science's] impact is the book's own citation metrics. Semantic Scholar (https://www.semanticscholar.org) reports 81 citations with 42 citations related to background or methods; [Foundations of Data Science] appears to be on course to becoming influential.' M. Mounts, Choice

    Table of Contents: 1. Introduction; 2. High-dimensional space; 3. Best-fit subspaces and Singular Value Decomposition (SVD); 4. Random walks and Markov chains; 5. Machine learning; 6. Algorithms for massive data problems: streaming, sketching, and sampling; 7. Clustering; 8. Random graphs; 9. Topic models, non-negative matrix factorization, hidden Markov models, and graphical models; 10. Other topics; 11. Wavelets; 12. Appendix.

    £42.74
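
    Among the probabilistic techniques the synopsis lists is the analysis of random projections. A small hedged sketch (illustrative, not from the book) of the key phenomenon, namely that a random Gaussian projection approximately preserves distances between high-dimensional points:

```python
import numpy as np

# Random projection sketch (illustrative only): project 1000-dimensional
# points down to 200 dimensions with a scaled Gaussian matrix and check
# that a pairwise distance is roughly preserved.
rng = np.random.default_rng(0)
d, k, n = 1000, 200, 20
X = rng.normal(size=(n, d))                # n points in d dimensions
R = rng.normal(size=(d, k)) / np.sqrt(k)   # 1/sqrt(k) keeps norms unbiased
Y = X @ R                                  # projected points

orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(round(proj / orig, 2))               # ratio close to 1
```

    This is the Johnson-Lindenstrauss phenomenon in miniature; the tail inequalities the book develops are what make the "close to 1" claim precise.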

  • Quick Start Guide to Large Language Models

    Pearson Education (US)

    15 in stock

    Book Synopsis: Sinan Ozdemir is currently the founder and CTO of Shiba Technologies. Sinan is a former lecturer of Data Science at Johns Hopkins University and the author of multiple textbooks on data science and machine learning. Additionally, he is the founder of the recently acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. He holds a master's degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco, CA.

    Trade Review: "Ozdemir's book cuts through the noise to help readers understand where the LLM revolution has come from--and where it is going. Ozdemir breaks down complex topics into practical explanations and easy to follow code examples."--Shelia Gulati, former GM at Microsoft and current Managing Director of Tola Capital

    "When it comes to building Large Language Models (LLMs), it can be a daunting task to find comprehensive resources that cover all the essential aspects. However, my search for such a resource recently came to an end when I discovered this book.

    "One of the stand-out features of Sinan is his ability to present complex concepts in a straightforward manner. The author has done an outstanding job of breaking down intricate ideas and algorithms, ensuring that readers can grasp them without feeling overwhelmed. Each topic is carefully explained, building upon examples that serve as steppingstones for better understanding. This approach greatly enhances the learning experience, making even the most intricate aspects of LLM development accessible to readers of varying skill levels.

    "Another strength of this book is the abundance of code resources. The inclusion of practical examples and code snippets is a game-changer for anyone who wants to experiment and apply the concepts they learn. These code resources provide readers with hands-on experience, allowing them to test and refine their understanding. This is an invaluable asset, as it fosters a deeper comprehension of the material and enables readers to truly engage with the content.

    "In conclusion, this book is a rare find for anyone interested in building LLMs. Its exceptional quality of explanation, clear and concise writing style, abundant code resources, and comprehensive coverage of all essential aspects make it an indispensable resource. Whether you are a beginner or an experienced practitioner, this book will undoubtedly elevate your understanding and practical skills in LLM development. I highly recommend Quick Start Guide to Large Language Models to anyone looking to embark on the exciting journey of building LLM applications."--Pedro Marcelino, Machine Learning Engineer, Co-Founder and CEO @overfit.study

    Table of Contents:
    Foreword xv
    Preface xvii
    Acknowledgments xxi
    About the Author xxiii
    Part I: Introduction to Large Language Models 1
    Chapter 1: Overview of Large Language Models 3
    What Are Large Language Models? 4. Popular Modern LLMs 20. Domain-Specific LLMs 22. Applications of LLMs 23. Summary 29.
    Chapter 2: Semantic Search with LLMs 31
    Introduction 31. The Task 32. Solution Overview 34. The Components 35. Putting It All Together 51. The Cost of Closed-Source Components 54. Summary 55.
    Chapter 3: First Steps with Prompt Engineering 57
    Introduction 57. Prompt Engineering 57. Working with Prompts Across Models 65. Building a Q/A Bot with ChatGPT 69. Summary 74.
    Part II: Getting the Most Out of LLMs 75
    Chapter 4: Optimizing LLMs with Customized Fine-Tuning 77
    Introduction 77. Transfer Learning and Fine-Tuning: A Primer 78. A Look at the OpenAI Fine-Tuning API 82. Preparing Custom Examples with the OpenAI CLI 84. Setting Up the OpenAI CLI 87. Our First Fine-Tuned LLM 88. Case Study: Amazon Review Category Classification 93. Summary 95.
    Chapter 5: Advanced Prompt Engineering 97
    Introduction 97. Prompt Injection Attacks 97. Input/Output Validation 99. Batch Prompting 103. Prompt Chaining 104. Chain-of-Thought Prompting 111. Revisiting Few-Shot Learning 113. Testing and Iterative Prompt Development 123. Summary 124.
    Chapter 6: Customizing Embeddings and Model Architectures 125
    Introduction 125. Case Study: Building a Recommendation System 126. Summary 144.
    Part III: Advanced LLM Usage 145
    Chapter 7: Moving Beyond Foundation Models 147
    Introduction 147. Case Study: Visual Q/A 147. Case Study: Reinforcement Learning from Feedback 163. Summary 173.
    Chapter 8: Advanced Open-Source LLM Fine-Tuning 175
    Introduction 175. Example: Anime Genre Multilabel Classification with BERT 176. Example: LaTeX Generation with GPT2 189. Sinan's Attempt at Wise Yet Engaging Responses: SAWYER 193. The Ever-Changing World of Fine-Tuning 206. Summary 207.
    Chapter 9: Moving LLMs into Production 209
    Introduction 209. Deploying Closed-Source LLMs to Production 209. Deploying Open-Source LLMs to Production 210. Summary 225.
    Part IV: Appendices 227
    Appendix A: LLM FAQs 229
    Appendix B: LLM Glossary 233
    Appendix C: LLM Application Archetypes 239
    Index 243

    £34.19

  • Computer Vision

    Elsevier Science

    2 in stock

    Table of Contents: 1. Vision, the Challenge. 2. Images and Imaging Operations. 3. Image Filtering and Morphology. 4. The Role of Thresholding. 5. Edge Detection. 6. Corner, Interest Point and Invariant Feature Detection. 7. Texture Analysis. 8. Binary Shape Analysis. 9. Boundary Pattern Analysis. 10. Line, Circle and Ellipse Detection. 11. The Generalised Hough Transform. 12. Object Segmentation and Shape Models. 13. Basic Classification Concepts. 14. Machine Learning: Probabilistic Methods. 15. Deep Learning Networks. 16. The Three-Dimensional World. 17. Tackling the Perspective n-point Problem. 18. Invariants and perspective. 19. Image transformations and camera calibration. 20. Motion. 21. Face Detection and Recognition: the Impact of Deep Learning. 22. Surveillance. 23. In-Vehicle Vision Systems. 24. Epilogue: Perspectives in Vision. Appendix A: Robust statistics. Appendix B: The Sampling Theorem. Appendix C: The representation of colour. Appendix D: Sampling from distributions.

    £77.39

  • Machine Learning for Biomedical Applications

    Elsevier Science

    1 in stock

    Table of Contents: 1. Programming in Python. 2. Machine Learning Basics. 3. Regression. 4. Classification. 5. Dimensionality reduction. 6. Clustering. 7. Ensemble methods. 8. Feature extraction and selection. 9. Introduction to Deep Learning. 10. Neural Networks. 11. Convolutional Neural Networks.

    £55.05

  • Visualization, Visual Analytics and Virtual Reality in Medicine

    Elsevier Science

    Out of stock

    Table of Contents: 1. Introduction. Part I: Medical Visualization Techniques. 2. Illustrative Medical Visualization. 3. Advanced Vessel Visualization. 4. Multimodal Medical Visualization. 5. Medical Flow Visualization. 6. Medical Animations. Part II: Selected Applications. 7. 3D Visualization for Anatomy Education. 8. Visual Computing for Radiation Treatment Planning. Part III: Visual Analytics in Healthcare. 9. An Introduction to Visual Analytics. 10. Visual Analytics in Public Health. 11. Visual Analytics in Clinical Medicine. Part IV: Virtual Reality in Medicine. 12. Introduction to Virtual Reality. 13. Virtual Reality for Medical Education. 14. Virtual Reality in Treatment and Rehabilitation.

    £103.50

  • Biometrics

    Oxford University Press

    1 in stock

    Book Synopsis: We live in a society which is increasingly interconnected, in which communication between individuals is mostly mediated via some electronic platform, and transactions are often carried out remotely. In such a world, traditional notions of trust and confidence in the identity of those with whom we are interacting, taken for granted in the past, can be much less reliable. Biometrics - the scientific discipline of identifying individuals by means of the measurement of unique personal attributes - provides a reliable means of establishing or confirming an individual's identity. These attributes include facial appearance, fingerprints, iris patterning, the voice, the way we write, or even the way we walk. The new technologies of biometrics have a wide range of practical applications, from securing mobile phones and laptops to establishing identity in bank transactions, travel documents, and national identity cards. This Very Short Introduction considers the capabilities of biometrics-based identity checking, from first principles to the practicalities of using different types of identification data. Michael Fairhurst looks at the basic techniques in use today, ongoing developments in system design, and emerging technologies, all aimed at improving precision in identification, and providing solutions to an increasingly wide range of practical problems. Considering how they may continue to develop in the future, Fairhurst explores the benefits and limitations of these pervasive and powerful technologies, and how they can effectively support our increasingly interconnected society.

    ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly.
Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.

    Table of Contents: 1: Are you who you say you are? 2: Biometrics: where should I start? 3: Making biometrics work. 4: Enhancing biometric processing. 5: An introduction to predictive biometrics. 6: Where are we going? Further reading. Index.

    £9.49

  • Methods and Procedures for the Verification and Validation of Artificial Neural Networks

    Springer-Verlag New York Inc.

    Out of stock

    Book Synopsis: Neural networks are members of a class of software that have the potential to enable intelligent computational systems capable of simulating characteristics of biological thinking and learning. This volume introduces some of the methods and techniques used for the verification and validation of neural networks and adaptive systems.

    Table of Contents: Background of the Verification and Validation of Neural Networks.- Augmentation of Current Verification and Validation Practices.- Risk and Hazard Analysis for Neural Network Systems.- Validation of Neural Networks Via Taxonomic Evaluation.- Stability Properties of Neural Networks.- Neural Network Verification.- Neural Network Visualization Techniques.- Rule Extraction as a Formal Method.- Automated Test Generation for Testing Neural Network Systems.- Run-Time Assessment of Neural Network Control Systems.

    £85.88

  • Signal Processing Methods for Music Transcription

    Springer

    15 in stock

    Table of Contents: Foundations.- to Music Transcription.- An Introduction to Statistical Signal Processing and Spectrum Estimation.- Sparse Adaptive Representations for Musical Signals.- Rhythm and Timbre Analysis.- Beat Tracking and Musical Metre Analysis.- Unpitched Percussion Transcription.- Automatic Classification of Pitched Musical Instrument Sounds.- Multiple Fundamental Frequency Analysis.- Multiple Fundamental Frequency Estimation Based on Generative Models.- Auditory Model-Based Methods for Multiple Fundamental Frequency Estimation.- Unsupervised Learning Methods for Source Separation in Monaural Music Signals.- Entire Systems, Acoustic and Musicological Modelling.- Auditory Scene Analysis in Music Signals.- Music Scene Description.- Singing Transcription.

    £123.49

  • Introduction to Biometrics

    Springer-Verlag New York Inc.

    2 in stock

    Table of Contents: Introduction.- Fingerprint Recognition.- Face Recognition.- Iris Recognition.- Additional Biometric Traits.- Multibiometrics.- Security of Biometric Systems.

    £71.99

  • Statistical Pattern Recognition

    John Wiley & Sons Inc

    2 in stock

    Book Synopsis: Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions. It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields, including the areas of engineering, statistics, computer science and the social sciences. The book has been updated to cover new methods and applications, and includes a wide range of techniques such as Bayesian methods, neural networks, support vector machines, feature selection and feature reduction techniques. Technical descriptions and motivations are provided, and the techniques are illustrated.

    Trade Review: “In the end I must add that this book is so appealing that I often found myself lost in the reading, pausing the overview of the manuscript in order to look more into some presented subject, and not being able to continue until I had finished seeing all about it.” (Zentralblatt MATH, 1 December 2012)

    Table of Contents: Preface xix Notation xxiii 1 Introduction to Statistical Pattern Recognition 1 1.1 Statistical Pattern Recognition 1 1.1.1 Introduction 1 1.1.2 The Basic Model 2 1.2 Stages in a Pattern Recognition Problem 4 1.3 Issues 6 1.4 Approaches to Statistical Pattern Recognition 7 1.5 Elementary Decision Theory 8 1.5.1 Bayes’ Decision Rule for Minimum Error 8 1.5.2 Bayes’ Decision Rule for Minimum Error – Reject Option 12 1.5.3 Bayes’ Decision Rule for Minimum Risk 13 1.5.4 Bayes’ Decision Rule for Minimum Risk – Reject Option 15 1.5.5 Neyman–Pearson Decision Rule 15 1.5.6 Minimax Criterion 18 1.5.7 Discussion 19 1.6 Discriminant Functions 20 1.6.1
Introduction 20 1.6.2 Linear Discriminant Functions 21 1.6.3 Piecewise Linear Discriminant Functions 23 1.6.4 Generalised Linear Discriminant Function 24 1.6.5 Summary 26 1.7 Multiple Regression 27 1.8 Outline of Book 29 1.9 Notes and References 29 Exercises 31 2 Density Estimation – Parametric 33 2.1 Introduction 33 2.2 Estimating the Parameters of the Distributions 34 2.2.1 Estimative Approach 34 2.2.2 Predictive Approach 35 2.3 The Gaussian Classifier 35 2.3.1 Specification 35 2.3.2 Derivation of the Gaussian Classifier Plug-In Estimates 37 2.3.3 Example Application Study 39 2.4 Dealing with Singularities in the Gaussian Classifier 40 2.4.1 Introduction 40 2.4.2 Naïve Bayes 40 2.4.3 Projection onto a Subspace 41 2.4.4 Linear Discriminant Function 41 2.4.5 Regularised Discriminant Analysis 42 2.4.6 Example Application Study 44 2.4.7 Further Developments 45 2.4.8 Summary 46 2.5 Finite Mixture Models 46 2.5.1 Introduction 46 2.5.2 Mixture Models for Discrimination 48 2.5.3 Parameter Estimation for Normal Mixture Models 49 2.5.4 Normal Mixture Model Covariance Matrix Constraints 51 2.5.5 How Many Components?
52 2.5.6 Maximum Likelihood Estimation via EM 55 2.5.7 Example Application Study 60 2.5.8 Further Developments 62 2.5.9 Summary 63 2.6 Application Studies 63 2.7 Summary and Discussion 66 2.8 Recommendations 66 2.9 Notes and References 67 Exercises 67 3 Density Estimation – Bayesian 70 3.1 Introduction 70 3.1.1 Basics 72 3.1.2 Recursive Calculation 72 3.1.3 Proportionality 73 3.2 Analytic Solutions 73 3.2.1 Conjugate Priors 73 3.2.2 Estimating the Mean of a Normal Distribution with Known Variance 75 3.2.3 Estimating the Mean and the Covariance Matrix of a Multivariate Normal Distribution 79 3.2.4 Unknown Prior Class Probabilities 85 3.2.5 Summary 87 3.3 Bayesian Sampling Schemes 87 3.3.1 Introduction 87 3.3.2 Summarisation 87 3.3.3 Sampling Version of the Bayesian Classifier 89 3.3.4 Rejection Sampling 89 3.3.5 Ratio of Uniforms 90 3.3.6 Importance Sampling 92 3.4 Markov Chain Monte Carlo Methods 95 3.4.1 Introduction 95 3.4.2 The Gibbs Sampler 95 3.4.3 Metropolis–Hastings Algorithm 103 3.4.4 Data Augmentation 107 3.4.5 Reversible Jump Markov Chain Monte Carlo 108 3.4.6 Slice Sampling 109 3.4.7 MCMC Example – Estimation of Noisy Sinusoids 111 3.4.8 Summary 115 3.4.9 Notes and References 116 3.5 Bayesian Approaches to Discrimination 116 3.5.1 Labelled Training Data 116 3.5.2 Unlabelled Training Data 117 3.6 Sequential Monte Carlo Samplers 119 3.6.1 Introduction 119 3.6.2 Basic Methodology 121 3.6.3 Summary 125 3.7 Variational Bayes 126 3.7.1 Introduction 126 3.7.2 Description 126 3.7.3 Factorised Variational Approximation 129 3.7.4 Simple Example 131 3.7.5 Use of the Procedure for Model Selection 135 3.7.6 Further Developments and Applications 136 3.7.7 Summary 137 3.8 Approximate Bayesian Computation 137 3.8.1 Introduction 137 3.8.2 ABC Rejection Sampling 138 3.8.3 ABC MCMC Sampling 140 3.8.4 ABC Population Monte Carlo Sampling 141 3.8.5 Model Selection 142 3.8.6 Summary 143 3.9 Example Application Study 144 3.10 Application Studies 145 3.11 Summary and Discussion 
146 3.12 Recommendations 147 3.13 Notes and References 147 Exercises 148 4 Density Estimation – Nonparametric 150 4.1 Introduction 150 4.1.1 Basic Properties of Density Estimators 150 4.2 k-Nearest-Neighbour Method 152 4.2.1 k-Nearest-Neighbour Classifier 152 4.2.2 Derivation 154 4.2.3 Choice of Distance Metric 157 4.2.4 Properties of the Nearest-Neighbour Rule 159 4.2.5 Linear Approximating and Eliminating Search Algorithm 159 4.2.6 Branch and Bound Search Algorithms: kd-Trees 163 4.2.7 Branch and Bound Search Algorithms: Ball-Trees 170 4.2.8 Editing Techniques 174 4.2.9 Example Application Study 177 4.2.10 Further Developments 178 4.2.11 Summary 179 4.3 Histogram Method 180 4.3.1 Data Adaptive Histograms 181 4.3.2 Independence Assumption (Naïve Bayes) 181 4.3.3 Lancaster Models 182 4.3.4 Maximum Weight Dependence Trees 183 4.3.5 Bayesian Networks 186 4.3.6 Example Application Study – Naïve Bayes Text Classification 190 4.3.7 Summary 193 4.4 Kernel Methods 194 4.4.1 Biasedness 197 4.4.2 Multivariate Extension 198 4.4.3 Choice of Smoothing Parameter 199 4.4.4 Choice of Kernel 201 4.4.5 Example Application Study 202 4.4.6 Further Developments 203 4.4.7 Summary 203 4.5 Expansion by Basis Functions 204 4.6 Copulas 207 4.6.1 Introduction 207 4.6.2 Mathematical Basis 207 4.6.3 Copula Functions 208 4.6.4 Estimating Copula Probability Density Functions 209 4.6.5 Simple Example 211 4.6.6 Summary 212 4.7 Application Studies 213 4.7.1 Comparative Studies 216 4.8 Summary and Discussion 216 4.9 Recommendations 217 4.10 Notes and References 217 Exercises 218 5 Linear Discriminant Analysis 221 5.1 Introduction 221 5.2 Two-Class Algorithms 222 5.2.1 General Ideas 222 5.2.2 Perceptron Criterion 223 5.2.3 Fisher’s Criterion 227 5.2.4 Least Mean-Squared-Error Procedures 228 5.2.5 Further Developments 235 5.2.6 Summary 235 5.3 Multiclass Algorithms 236 5.3.1 General Ideas 236 5.3.2 Error-Correction Procedure 237 5.3.3 Fisher’s Criterion – Linear Discriminant Analysis 238 5.3.4 Least 
Mean-Squared-Error Procedures 241 5.3.5 Regularisation 246 5.3.6 Example Application Study 246 5.3.7 Further Developments 247 5.3.8 Summary 248 5.4 Support Vector Machines 249 5.4.1 Introduction 249 5.4.2 Linearly Separable Two-Class Data 249 5.4.3 Linearly Nonseparable Two-Class Data 253 5.4.4 Multiclass SVMs 256 5.4.5 SVMs for Regression 257 5.4.6 Implementation 259 5.4.7 Example Application Study 262 5.4.8 Summary 263 5.5 Logistic Discrimination 263 5.5.1 Two-Class Case 263 5.5.2 Maximum Likelihood Estimation 264 5.5.3 Multiclass Logistic Discrimination 266 5.5.4 Example Application Study 267 5.5.5 Further Developments 267 5.5.6 Summary 268 5.6 Application Studies 268 5.7 Summary and Discussion 268 5.8 Recommendations 269 5.9 Notes and References 270 Exercises 270 6 Nonlinear Discriminant Analysis – Kernel and Projection Methods 274 6.1 Introduction 274 6.2 Radial Basis Functions 276 6.2.1 Introduction 276 6.2.2 Specifying the Model 278 6.2.3 Specifying the Functional Form 278 6.2.4 The Positions of the Centres 279 6.2.5 Smoothing Parameters 281 6.2.6 Calculation of the Weights 282 6.2.7 Model Order Selection 284 6.2.8 Simple RBF 285 6.2.9 Motivation 286 6.2.10 RBF Properties 288 6.2.11 Example Application Study 288 6.2.12 Further Developments 289 6.2.13 Summary 290 6.3 Nonlinear Support Vector Machines 291 6.3.1 Introduction 291 6.3.2 Binary Classification 291 6.3.3 Types of Kernel 292 6.3.4 Model Selection 293 6.3.5 Multiclass SVMs 294 6.3.6 Probability Estimates 294 6.3.7 Nonlinear Regression 296 6.3.8 Example Application Study 296 6.3.9 Further Developments 297 6.3.10 Summary 298 6.4 The Multilayer Perceptron 298 6.4.1 Introduction 298 6.4.2 Specifying the MLP Structure 299 6.4.3 Determining the MLP Weights 300 6.4.4 Modelling Capacity of the MLP 307 6.4.5 Logistic Classification 307 6.4.6 Example Application Study 310 6.4.7 Bayesian MLP Networks 311 6.4.8 Projection Pursuit 313 6.4.9 Summary 313 6.5 Application Studies 314 6.6 Summary and Discussion 316 6.7 
Recommendations 317 6.8 Notes and References 318 Exercises 318 7 Rule and Decision Tree Induction 322 7.1 Introduction 322 7.2 Decision Trees 323 7.2.1 Introduction 323 7.2.2 Decision Tree Construction 326 7.2.3 Selection of the Splitting Rule 327 7.2.4 Terminating the Splitting Procedure 330 7.2.5 Assigning Class Labels to Terminal Nodes 332 7.2.6 Decision Tree Pruning – Worked Example 332 7.2.7 Decision Tree Construction Methods 337 7.2.8 Other Issues 339 7.2.9 Example Application Study 340 7.2.10 Further Developments 341 7.2.11 Summary 342 7.3 Rule Induction 342 7.3.1 Introduction 342 7.3.2 Generating Rules from a Decision Tree 345 7.3.3 Rule Induction Using a Sequential Covering Algorithm 345 7.3.4 Example Application Study 350 7.3.5 Further Developments 351 7.3.6 Summary 351 7.4 Multivariate Adaptive Regression Splines 351 7.4.1 Introduction 351 7.4.2 Recursive Partitioning Model 351 7.4.3 Example Application Study 355 7.4.4 Further Developments 355 7.4.5 Summary 356 7.5 Application Studies 356 7.6 Summary and Discussion 358 7.7 Recommendations 358 7.8 Notes and References 359 Exercises 359 8 Ensemble Methods 361 8.1 Introduction 361 8.2 Characterising a Classifier Combination Scheme 362 8.2.1 Feature Space 363 8.2.2 Level 366 8.2.3 Degree of Training 368 8.2.4 Form of Component Classifiers 368 8.2.5 Structure 369 8.2.6 Optimisation 369 8.3 Data Fusion 370 8.3.1 Architectures 370 8.3.2 Bayesian Approaches 371 8.3.3 Neyman–Pearson Formulation 373 8.3.4 Trainable Rules 374 8.3.5 Fixed Rules 375 8.4 Classifier Combination Methods 376 8.4.1 Product Rule 376 8.4.2 Sum Rule 377 8.4.3 Min, Max and Median Combiners 378 8.4.4 Majority Vote 379 8.4.5 Borda Count 379 8.4.6 Combiners Trained on Class Predictions 380 8.4.7 Stacked Generalisation 382 8.4.8 Mixture of Experts 382 8.4.9 Bagging 385 8.4.10 Boosting 387 8.4.11 Random Forests 389 8.4.12 Model Averaging 390 8.4.13 Summary of Methods 396 8.4.14 Example Application Study 398 8.4.15 Further Developments 399 8.5 
Application Studies 399 8.6 Summary and Discussion 400 8.7 Recommendations 401 8.8 Notes and References 401 Exercises 402 9 Performance Assessment 404 9.1 Introduction 404 9.2 Performance Assessment 405 9.2.1 Performance Measures 405 9.2.2 Discriminability 406 9.2.3 Reliability 413 9.2.4 ROC Curves for Performance Assessment 415 9.2.5 Population and Sensor Drift 419 9.2.6 Example Application Study 421 9.2.7 Further Developments 422 9.2.8 Summary 423 9.3 Comparing Classifier Performance 424 9.3.1 Which Technique is Best? 424 9.3.2 Statistical Tests 425 9.3.3 Comparing Rules When Misclassification Costs are Uncertain 426 9.3.4 Example Application Study 428 9.3.5 Further Developments 429 9.3.6 Summary 429 9.4 Application Studies 429 9.5 Summary and Discussion 430 9.6 Recommendations 430 9.7 Notes and References 430 Exercises 431 10 Feature Selection and Extraction 433 10.1 Introduction 433 10.2 Feature Selection 435 10.2.1 Introduction 435 10.2.2 Characterisation of Feature Selection Approaches 439 10.2.3 Evaluation Measures 440 10.2.4 Search Algorithms for Feature Subset Selection 449 10.2.5 Complete Search – Branch and Bound 450 10.2.6 Sequential Search 454 10.2.7 Random Search 458 10.2.8 Markov Blanket 459 10.2.9 Stability of Feature Selection 460 10.2.10 Example Application Study 462 10.2.11 Further Developments 462 10.2.12 Summary 463 10.3 Linear Feature Extraction 463 10.3.1 Principal Components Analysis 464 10.3.2 Karhunen–Loève Transformation 475 10.3.3 Example Application Study 481 10.3.4 Further Developments 482 10.3.5 Summary 483 10.4 Multidimensional Scaling 484 10.4.1 Classical Scaling 484 10.4.2 Metric MDS 486 10.4.3 Ordinal Scaling 487 10.4.4 Algorithms 490 10.4.5 MDS for Feature Extraction 491 10.4.6 Example Application Study 492 10.4.7 Further Developments 493 10.4.8 Summary 493 10.5 Application Studies 493 10.6 Summary and Discussion 495 10.7 Recommendations 495 10.8 Notes and References 496 Exercises 497 11 Clustering 501 11.1 Introduction 501 11.2
Hierarchical Methods 502 11.2.1 Single-Link Method 503 11.2.2 Complete-Link Method 506 11.2.3 Sum-of-Squares Method 507 11.2.4 General Agglomerative Algorithm 508 11.2.5 Properties of a Hierarchical Classification 508 11.2.6 Example Application Study 509 11.2.7 Summary 509 11.3 Quick Partitions 510 11.4 Mixture Models 511 11.4.1 Model Description 511 11.4.2 Example Application Study 512 11.5 Sum-of-Squares Methods 513 11.5.1 Clustering Criteria 514 11.5.2 Clustering Algorithms 515 11.5.3 Vector Quantisation 520 11.5.4 Example Application Study 530 11.5.5 Further Developments 530 11.5.6 Summary 531 11.6 Spectral Clustering 531 11.6.1 Elementary Graph Theory 531 11.6.2 Similarity Matrices 534 11.6.3 Application to Clustering 534 11.6.4 Spectral Clustering Algorithm 535 11.6.5 Forms of Graph Laplacian 535 11.6.6 Example Application Study 536 11.6.7 Further Developments 538 11.6.8 Summary 538 11.7 Cluster Validity 538 11.7.1 Introduction 538 11.7.2 Statistical Tests 539 11.7.3 Absence of Class Structure 540 11.7.4 Validity of Individual Clusters 541 11.7.5 Hierarchical Clustering 542 11.7.6 Validation of Individual Clusterings 542 11.7.7 Partitions 543 11.7.8 Relative Criteria 543 11.7.9 Choosing the Number of Clusters 545 11.8 Application Studies 546 11.9 Summary and Discussion 549 11.10 Recommendations 551 11.11 Notes and References 552 Exercises 553 12 Complex Networks 555 12.1 Introduction 555 12.1.1 Characteristics 557 12.1.2 Properties 557 12.1.3 Questions to Address 559 12.1.4 Descriptive Features 560 12.1.5 Outline 560 12.2 Mathematics of Networks 561 12.2.1 Graph Matrices 561 12.2.2 Connectivity 562 12.2.3 Distance Measures 562 12.2.4 Weighted Networks 563 12.2.5 Centrality Measures 563 12.2.6 Random Graphs 564 12.3 Community Detection 565 12.3.1 Clustering Methods 565 12.3.2 Girvan–Newman Algorithm 568 12.3.3 Modularity Approaches 570 12.3.4 Local Modularity 571 12.3.5 Clique Percolation 573 12.3.6 Example Application Study 574 12.3.7 Further Developments 575 
12.3.8 Summary 575 12.4 Link Prediction 575 12.4.1 Approaches to Link Prediction 576 12.4.2 Example Application Study 578 12.4.3 Further Developments 578 12.5 Application Studies 579 12.6 Summary and Discussion 579 12.7 Recommendations 580 12.8 Notes and References 580 Exercises 580 13 Additional Topics 581 13.1 Model Selection 581 13.1.1 Separate Training and Test Sets 582 13.1.2 Cross-Validation 582 13.1.3 The Bayesian Viewpoint 583 13.1.4 Akaike’s Information Criterion 583 13.1.5 Minimum Description Length 584 13.2 Missing Data 585 13.3 Outlier Detection and Robust Procedures 586 13.4 Mixed Continuous and Discrete Variables 587 13.5 Structural Risk Minimisation and the Vapnik–Chervonenkis Dimension 588 13.5.1 Bounds on the Expected Risk 588 13.5.2 The VC Dimension 589 References 591 Index 637

    2 in stock

    £97.16

  • Statistical Pattern Recognition

    John Wiley & Sons Inc Statistical Pattern Recognition

    15 in stock

Book Synopsis: Statistical pattern recognition uses statistical techniques to analyse data measurements in order to extract information and make justified decisions. It is a very active area of study and research that has seen many advances in recent years. Trade Review: “In the end I must add that this book is so appealing that I often found myself lost in the reading, pausing the overview of the manuscript in order to look more into some presented subject, and not being able to continue until I had finished seeing all about it.” (Zentralblatt MATH, 1 December 2012) Table of Contents: Preface xix Notation xxiii 1 Introduction to Statistical Pattern Recognition 1 1.1 Statistical Pattern Recognition 1 1.1.1 Introduction 1 1.1.2 The Basic Model 2 1.2 Stages in a Pattern Recognition Problem 4 1.3 Issues 6 1.4 Approaches to Statistical Pattern Recognition 7 1.5 Elementary Decision Theory 8 1.5.1 Bayes’ Decision Rule for Minimum Error 8 1.5.2 Bayes’ Decision Rule for Minimum Error – Reject Option 12 1.5.3 Bayes’ Decision Rule for Minimum Risk 13 1.5.4 Bayes’ Decision Rule for Minimum Risk – Reject Option 15 1.5.5 Neyman–Pearson Decision Rule 15 1.5.6 Minimax Criterion 18 1.5.7 Discussion 19 1.6 Discriminant Functions 20 1.6.1 Introduction 20 1.6.2 Linear Discriminant Functions 21 1.6.3 Piecewise Linear Discriminant Functions 23 1.6.4 Generalised Linear Discriminant Function 24 1.6.5 Summary 26 1.7 Multiple Regression 27 1.8 Outline of Book 29 1.9 Notes and References 29 Exercises 31 2 Density Estimation – Parametric 33 2.1 Introduction 33 2.2 Estimating the Parameters of the Distributions 34 2.2.1 Estimative Approach 34 2.2.2 Predictive Approach 35 2.3 The Gaussian Classifier 35 2.3.1 Specification 35 2.3.2 Derivation of the Gaussian Classifier Plug-In Estimates 37 2.3.3 Example Application Study 39 2.4 Dealing with Singularities in the Gaussian Classifier 40 2.4.1 Introduction 40 2.4.2 Naïve Bayes 40 2.4.3 Projection onto a Subspace 41 2.4.4 Linear
Discriminant Function 41 2.4.5 Regularised Discriminant Analysis 42 2.4.6 Example Application Study 44 2.4.7 Further Developments 45 2.4.8 Summary 46 2.5 Finite Mixture Models 46 2.5.1 Introduction 46 2.5.2 Mixture Models for Discrimination 48 2.5.3 Parameter Estimation for Normal Mixture Models 49 2.5.4 Normal Mixture Model Covariance Matrix Constraints 51 2.5.5 How Many Components? 52 2.5.6 Maximum Likelihood Estimation via EM 55 2.5.7 Example Application Study 60 2.5.8 Further Developments 62 2.5.9 Summary 63 2.6 Application Studies 63 2.7 Summary and Discussion 66 2.8 Recommendations 66 2.9 Notes and References 67 Exercises 67 3 Density Estimation – Bayesian 70 3.1 Introduction 70 3.1.1 Basics 72 3.1.2 Recursive Calculation 72 3.1.3 Proportionality 73 3.2 Analytic Solutions 73 3.2.1 Conjugate Priors 73 3.2.2 Estimating the Mean of a Normal Distribution with Known Variance 75 3.2.3 Estimating the Mean and the Covariance Matrix of a Multivariate Normal Distribution 79 3.2.4 Unknown Prior Class Probabilities 85 3.2.5 Summary 87 3.3 Bayesian Sampling Schemes 87 3.3.1 Introduction 87 3.3.2 Summarisation 87 3.3.3 Sampling Version of the Bayesian Classifier 89 3.3.4 Rejection Sampling 89 3.3.5 Ratio of Uniforms 90 3.3.6 Importance Sampling 92 3.4 Markov Chain Monte Carlo Methods 95 3.4.1 Introduction 95 3.4.2 The Gibbs Sampler 95 3.4.3 Metropolis–Hastings Algorithm 103 3.4.4 Data Augmentation 107 3.4.5 Reversible Jump Markov Chain Monte Carlo 108 3.4.6 Slice Sampling 109 3.4.7 MCMC Example – Estimation of Noisy Sinusoids 111 3.4.8 Summary 115 3.4.9 Notes and References 116 3.5 Bayesian Approaches to Discrimination 116 3.5.1 Labelled Training Data 116 3.5.2 Unlabelled Training Data 117 3.6 Sequential Monte Carlo Samplers 119 3.6.1 Introduction 119 3.6.2 Basic Methodology 121 3.6.3 Summary 125 3.7 Variational Bayes 126 3.7.1 Introduction 126 3.7.2 Description 126 3.7.3 Factorised Variational Approximation 129 3.7.4 Simple Example 131 3.7.5 Use of the Procedure for Model 
Selection 135 3.7.6 Further Developments and Applications 136 3.7.7 Summary 137 3.8 Approximate Bayesian Computation 137 3.8.1 Introduction 137 3.8.2 ABC Rejection Sampling 138 3.8.3 ABC MCMC Sampling 140 3.8.4 ABC Population Monte Carlo Sampling 141 3.8.5 Model Selection 142 3.8.6 Summary 143 3.9 Example Application Study 144 3.10 Application Studies 145 3.11 Summary and Discussion 146 3.12 Recommendations 147 3.13 Notes and References 147 Exercises 148 4 Density Estimation – Nonparametric 150 4.1 Introduction 150 4.1.1 Basic Properties of Density Estimators 150 4.2 k-Nearest-Neighbour Method 152 4.2.1 k-Nearest-Neighbour Classifier 152 4.2.2 Derivation 154 4.2.3 Choice of Distance Metric 157 4.2.4 Properties of the Nearest-Neighbour Rule 159 4.2.5 Linear Approximating and Eliminating Search Algorithm 159 4.2.6 Branch and Bound Search Algorithms: kd-Trees 163 4.2.7 Branch and Bound Search Algorithms: Ball-Trees 170 4.2.8 Editing Techniques 174 4.2.9 Example Application Study 177 4.2.10 Further Developments 178 4.2.11 Summary 179 4.3 Histogram Method 180 4.3.1 Data Adaptive Histograms 181 4.3.2 Independence Assumption (Naïve Bayes) 181 4.3.3 Lancaster Models 182 4.3.4 Maximum Weight Dependence Trees 183 4.3.5 Bayesian Networks 186 4.3.6 Example Application Study – Naïve Bayes Text Classification 190 4.3.7 Summary 193 4.4 Kernel Methods 194 4.4.1 Biasedness 197 4.4.2 Multivariate Extension 198 4.4.3 Choice of Smoothing Parameter 199 4.4.4 Choice of Kernel 201 4.4.5 Example Application Study 202 4.4.6 Further Developments 203 4.4.7 Summary 203 4.5 Expansion by Basis Functions 204 4.6 Copulas 207 4.6.1 Introduction 207 4.6.2 Mathematical Basis 207 4.6.3 Copula Functions 208 4.6.4 Estimating Copula Probability Density Functions 209 4.6.5 Simple Example 211 4.6.6 Summary 212 4.7 Application Studies 213 4.7.1 Comparative Studies 216 4.8 Summary and Discussion 216 4.9 Recommendations 217 4.10 Notes and References 217 Exercises 218 5 Linear Discriminant Analysis 221 5.1
Introduction 221 5.2 Two-Class Algorithms 222 5.2.1 General Ideas 222 5.2.2 Perceptron Criterion 223 5.2.3 Fisher’s Criterion 227 5.2.4 Least Mean-Squared-Error Procedures 228 5.2.5 Further Developments 235 5.2.6 Summary 235 5.3 Multiclass Algorithms 236 5.3.1 General Ideas 236 5.3.2 Error-Correction Procedure 237 5.3.3 Fisher’s Criterion – Linear Discriminant Analysis 238 5.3.4 Least Mean-Squared-Error Procedures 241 5.3.5 Regularisation 246 5.3.6 Example Application Study 246 5.3.7 Further Developments 247 5.3.8 Summary 248 5.4 Support Vector Machines 249 5.4.1 Introduction 249 5.4.2 Linearly Separable Two-Class Data 249 5.4.3 Linearly Nonseparable Two-Class Data 253 5.4.4 Multiclass SVMs 256 5.4.5 SVMs for Regression 257 5.4.6 Implementation 259 5.4.7 Example Application Study 262 5.4.8 Summary 263 5.5 Logistic Discrimination 263 5.5.1 Two-Class Case 263 5.5.2 Maximum Likelihood Estimation 264 5.5.3 Multiclass Logistic Discrimination 266 5.5.4 Example Application Study 267 5.5.5 Further Developments 267 5.5.6 Summary 268 5.6 Application Studies 268 5.7 Summary and Discussion 268 5.8 Recommendations 269 5.9 Notes and References 270 Exercises 270 6 Nonlinear Discriminant Analysis – Kernel and Projection Methods 274 6.1 Introduction 274 6.2 Radial Basis Functions 276 6.2.1 Introduction 276 6.2.2 Specifying the Model 278 6.2.3 Specifying the Functional Form 278 6.2.4 The Positions of the Centres 279 6.2.5 Smoothing Parameters 281 6.2.6 Calculation of the Weights 282 6.2.7 Model Order Selection 284 6.2.8 Simple RBF 285 6.2.9 Motivation 286 6.2.10 RBF Properties 288 6.2.11 Example Application Study 288 6.2.12 Further Developments 289 6.2.13 Summary 290 6.3 Nonlinear Support Vector Machines 291 6.3.1 Introduction 291 6.3.2 Binary Classification 291 6.3.3 Types of Kernel 292 6.3.4 Model Selection 293 6.3.5 Multiclass SVMs 294 6.3.6 Probability Estimates 294 6.3.7 Nonlinear Regression 296 6.3.8 Example Application Study 296 6.3.9 Further Developments 297 6.3.10 Summary 
298 6.4 The Multilayer Perceptron 298 6.4.1 Introduction 298 6.4.2 Specifying the MLP Structure 299 6.4.3 Determining the MLP Weights 300 6.4.4 Modelling Capacity of the MLP 307 6.4.5 Logistic Classification 307 6.4.6 Example Application Study 310 6.4.7 Bayesian MLP Networks 311 6.4.8 Projection Pursuit 313 6.4.9 Summary 313 6.5 Application Studies 314 6.6 Summary and Discussion 316 6.7 Recommendations 317 6.8 Notes and References 318 Exercises 318 7 Rule and Decision Tree Induction 322 7.1 Introduction 322 7.2 Decision Trees 323 7.2.1 Introduction 323 7.2.2 Decision Tree Construction 326 7.2.3 Selection of the Splitting Rule 327 7.2.4 Terminating the Splitting Procedure 330 7.2.5 Assigning Class Labels to Terminal Nodes 332 7.2.6 Decision Tree Pruning – Worked Example 332 7.2.7 Decision Tree Construction Methods 337 7.2.8 Other Issues 339 7.2.9 Example Application Study 340 7.2.10 Further Developments 341 7.2.11 Summary 342 7.3 Rule Induction 342 7.3.1 Introduction 342 7.3.2 Generating Rules from a Decision Tree 345 7.3.3 Rule Induction Using a Sequential Covering Algorithm 345 7.3.4 Example Application Study 350 7.3.5 Further Developments 351 7.3.6 Summary 351 7.4 Multivariate Adaptive Regression Splines 351 7.4.1 Introduction 351 7.4.2 Recursive Partitioning Model 351 7.4.3 Example Application Study 355 7.4.4 Further Developments 355 7.4.5 Summary 356 7.5 Application Studies 356 7.6 Summary and Discussion 358 7.7 Recommendations 358 7.8 Notes and References 359 Exercises 359 8 Ensemble Methods 361 8.1 Introduction 361 8.2 Characterising a Classifier Combination Scheme 362 8.2.1 Feature Space 363 8.2.2 Level 366 8.2.3 Degree of Training 368 8.2.4 Form of Component Classifiers 368 8.2.5 Structure 369 8.2.6 Optimisation 369 8.3 Data Fusion 370 8.3.1 Architectures 370 8.3.2 Bayesian Approaches 371 8.3.3 Neyman–Pearson Formulation 373 8.3.4 Trainable Rules 374 8.3.5 Fixed Rules 375 8.4 Classifier Combination Methods 376 8.4.1 Product Rule 376 8.4.2 Sum Rule 377 8.4.3 
Min, Max and Median Combiners 378 8.4.4 Majority Vote 379 8.4.5 Borda Count 379 8.4.6 Combiners Trained on Class Predictions 380 8.4.7 Stacked Generalisation 382 8.4.8 Mixture of Experts 382 8.4.9 Bagging 385 8.4.10 Boosting 387 8.4.11 Random Forests 389 8.4.12 Model Averaging 390 8.4.13 Summary of Methods 396 8.4.14 Example Application Study 398 8.4.15 Further Developments 399 8.5 Application Studies 399 8.6 Summary and Discussion 400 8.7 Recommendations 401 8.8 Notes and References 401 Exercises 402 9 Performance Assessment 404 9.1 Introduction 404 9.2 Performance Assessment 405 9.2.1 Performance Measures 405 9.2.2 Discriminability 406 9.2.3 Reliability 413 9.2.4 ROC Curves for Performance Assessment 415 9.2.5 Population and Sensor Drift 419 9.2.6 Example Application Study 421 9.2.7 Further Developments 422 9.2.8 Summary 423 9.3 Comparing Classifier Performance 424 9.3.1 Which Technique is Best? 424 9.3.2 Statistical Tests 425 9.3.3 Comparing Rules When Misclassification Costs are Uncertain 426 9.3.4 Example Application Study 428 9.3.5 Further Developments 429 9.3.6 Summary 429 9.4 Application Studies 429 9.5 Summary and Discussion 430 9.6 Recommendations 430 9.7 Notes and References 430 Exercises 431 10 Feature Selection and Extraction 433 10.1 Introduction 433 10.2 Feature Selection 435 10.2.1 Introduction 435 10.2.2 Characterisation of Feature Selection Approaches 439 10.2.3 Evaluation Measures 440 10.2.4 Search Algorithms for Feature Subset Selection 449 10.2.5 Complete Search – Branch and Bound 450 10.2.6 Sequential Search 454 10.2.7 Random Search 458 10.2.8 Markov Blanket 459 10.2.9 Stability of Feature Selection 460 10.2.10 Example Application Study 462 10.2.11 Further Developments 462 10.2.12 Summary 463 10.3 Linear Feature Extraction 463 10.3.1 Principal Components Analysis 464 10.3.2 Karhunen–Loève Transformation 475 10.3.3 Example Application Study 481 10.3.4 Further Developments 482 10.3.5 Summary 483 10.4 Multidimensional Scaling 484 10.4.1
Classical Scaling 484 10.4.2 Metric MDS 486 10.4.3 Ordinal Scaling 487 10.4.4 Algorithms 490 10.4.5 MDS for Feature Extraction 491 10.4.6 Example Application Study 492 10.4.7 Further Developments 493 10.4.8 Summary 493 10.5 Application Studies 493 10.6 Summary and Discussion 495 10.7 Recommendations 495 10.8 Notes and References 496 Exercises 497 11 Clustering 501 11.1 Introduction 501 11.2 Hierarchical Methods 502 11.2.1 Single-Link Method 503 11.2.2 Complete-Link Method 506 11.2.3 Sum-of-Squares Method 507 11.2.4 General Agglomerative Algorithm 508 11.2.5 Properties of a Hierarchical Classification 508 11.2.6 Example Application Study 509 11.2.7 Summary 509 11.3 Quick Partitions 510 11.4 Mixture Models 511 11.4.1 Model Description 511 11.4.2 Example Application Study 512 11.5 Sum-of-Squares Methods 513 11.5.1 Clustering Criteria 514 11.5.2 Clustering Algorithms 515 11.5.3 Vector Quantisation 520 11.5.4 Example Application Study 530 11.5.5 Further Developments 530 11.5.6 Summary 531 11.6 Spectral Clustering 531 11.6.1 Elementary Graph Theory 531 11.6.2 Similarity Matrices 534 11.6.3 Application to Clustering 534 11.6.4 Spectral Clustering Algorithm 535 11.6.5 Forms of Graph Laplacian 535 11.6.6 Example Application Study 536 11.6.7 Further Developments 538 11.6.8 Summary 538 11.7 Cluster Validity 538 11.7.1 Introduction 538 11.7.2 Statistical Tests 539 11.7.3 Absence of Class Structure 540 11.7.4 Validity of Individual Clusters 541 11.7.5 Hierarchical Clustering 542 11.7.6 Validation of Individual Clusterings 542 11.7.7 Partitions 543 11.7.8 Relative Criteria 543 11.7.9 Choosing the Number of Clusters 545 11.8 Application Studies 546 11.9 Summary and Discussion 549 11.10 Recommendations 551 11.11 Notes and References 552 Exercises 553 12 Complex Networks 555 12.1 Introduction 555 12.1.1 Characteristics 557 12.1.2 Properties 557 12.1.3 Questions to Address 559 12.1.4 Descriptive Features 560 12.1.5 Outline 560 12.2 Mathematics of Networks 561 12.2.1 Graph Matrices 
561 12.2.2 Connectivity 562 12.2.3 Distance Measures 562 12.2.4 Weighted Networks 563 12.2.5 Centrality Measures 563 12.2.6 Random Graphs 564 12.3 Community Detection 565 12.3.1 Clustering Methods 565 12.3.2 Girvan–Newman Algorithm 568 12.3.3 Modularity Approaches 570 12.3.4 Local Modularity 571 12.3.5 Clique Percolation 573 12.3.6 Example Application Study 574 12.3.7 Further Developments 575 12.3.8 Summary 575 12.4 Link Prediction 575 12.4.1 Approaches to Link Prediction 576 12.4.2 Example Application Study 578 12.4.3 Further Developments 578 12.5 Application Studies 579 12.6 Summary and Discussion 579 12.7 Recommendations 580 12.8 Notes and References 580 Exercises 580 13 Additional Topics 581 13.1 Model Selection 581 13.1.1 Separate Training and Test Sets 582 13.1.2 Cross-Validation 582 13.1.3 The Bayesian Viewpoint 583 13.1.4 Akaike’s Information Criterion 583 13.1.5 Minimum Description Length 584 13.2 Missing Data 585 13.3 Outlier Detection and Robust Procedures 586 13.4 Mixed Continuous and Discrete Variables 587 13.5 Structural Risk Minimisation and the Vapnik–Chervonenkis Dimension 588 13.5.1 Bounds on the Expected Risk 588 13.5.2 The VC Dimension 589 References 591 Index 637

    15 in stock

    £48.56

  • Pattern Classification

    John Wiley & Sons Inc Pattern Classification

    15 in stock

Book Synopsis: PATTERN CLASSIFICATION – a unified view of statistical and neural approaches. The product of years of research and practical experience in pattern classification, this book offers a theory-based engineering perspective on neural networks and statistical pattern classification. Pattern Classification sheds new light on the relationship between seemingly unrelated approaches to pattern recognition, including statistical methods, polynomial regression, the multilayer perceptron, and radial basis functions. Important topics such as feature selection, reject criteria, classifier performance measurement, and classifier combinations are fully covered, as well as material on techniques that, until now, would have required an extensive literature search to locate. A full program of illustrations, graphs, and examples helps make the operations and general properties of different classification approaches intuitively understandable. Offering a lucid presentation of complex app… Table of Contents: Statistical Decision Theory. Need for Approximations: Fundamental Approaches. Classification Based on Statistical Models Determined by First- and Second-Order Statistical Moments. Classification Based on Mean-Square Functional Approximations. Polynomial Regression. Multilayer Perceptron Regression. Radial Basis Functions. Measurements, Features, and Feature Selection. Reject Criteria and Classifier Performance. Combining Classifiers. Conclusion. STATMOD Program: Description of ftp Package. References. Index.

    15 in stock

    £150.26

  • Geometric Data Analysis An Empirical Approach to

    John Wiley & Sons Inc Geometric Data Analysis An Empirical Approach to

    15 in stock

Book Synopsis: This book addresses the most efficient methods of pattern analysis using wavelet decomposition. Readers will learn to analyze data in order to emphasize the differences between closely related patterns and then categorize them in a way that is useful to system users. Trade Review: "...provides a valuable summary of data reduction." (Technometrics, May 2002) "...effectively describes and summarizes an emerging new field, namely, scientific data modeling and analysis." (Mathematical Reviews, 2003h) Table of Contents: Preface. Acknowledgments. INTRODUCTION. Pattern Analysis as Data Reduction. Vector Spaces and Linear Transformations. OPTIMAL ORTHOGONAL PATTERN REPRESENTATIONS. The Karhunen-Loève Expansion. Additional Theory, Algorithms and Applications. TIME, FREQUENCY AND SCALE ANALYSIS. Fourier Analysis. Wavelet Expansions. ADAPTIVE NONLINEAR MAPPINGS. Radial Basis Functions. Neural Networks. Nonlinear Reduction Architectures. Appendix A Mathematical Preliminaries. References. Index.

    15 in stock

    £107.06

  • Speech Coding Algorithms Foundation and Evolution

    John Wiley & Sons Inc Speech Coding Algorithms Foundation and Evolution

    15 in stock

Book Synopsis: Speech coding has evolved into a highly mature branch of signal processing, with deployment in a plethora of products such as cellular phones, answering machines, communication devices and, more recently, voice over internet protocol (VoIP). Trade Review: “…well equipped with exercises and with procedures which are helpful in implementing the coders…” (Zentralblatt Math, Vol. 1041, No. 16, 2004) Table of Contents: Preface xiii Acronyms xix Notation xxiii 1 Introduction 1 1.1 Overview of Speech Coding 2 1.2 Classification of Speech Coders 8 1.3 Speech Production and Modeling 11 1.4 Some Properties of the Human Auditory System 18 1.5 Speech Coding Standards 22 1.6 About Algorithms 26 1.7 Summary and References 31 2 Signal Processing Techniques 33 2.1 Pitch Period Estimation 33 2.2 All-Pole and All-Zero Filters 45 2.3 Convolution 52 2.4 Summary and References 57 Exercises 57 3 Stochastic Processes and Models 61 3.1 Power Spectral Density 62 3.2 Periodogram 67 3.3 Autoregressive Model 69 3.4 Autocorrelation Estimation 73 3.5 Other Signal Models 85 3.6 Summary and References 86 Exercises 87 4 Linear Prediction 91 4.1 The Problem of Linear Prediction 92 4.2 Linear Prediction Analysis of Nonstationary Signals 96 4.3 Examples of Linear Prediction Analysis of Speech 101 4.4 The Levinson–Durbin Algorithm 107 4.5 The Leroux–Gueguen Algorithm 114 4.6 Long-Term Linear Prediction 120 4.7 Synthesis Filters 127 4.8 Practical Implementation 131 4.9 Moving Average Prediction 137 4.10 Summary and References 138 Exercises 139 5 Scalar Quantization 143 5.1 Introduction 143 5.2 Uniform Quantizer 147 5.3 Optimal Quantizer 149 5.4 Quantizer Design Algorithms 151 5.5 Algorithmic Implementation 155 5.6 Summary and References 158 Exercises 158 6 Pulse Code Modulation and Its Variants 161 6.1 Uniform Quantization 161 6.2 Nonuniform Quantization 166 6.3 Differential Pulse Code Modulation 172 6.4 Adaptive Schemes 175 6.5 Summary and References 180 Exercises 181 7 Vector Quantization
7.1 Introduction; 7.2 Optimal Quantizer; 7.3 Quantizer Design Algorithms; 7.4 Multistage VQ; 7.5 Predictive VQ; 7.6 Other Structured Schemes; 7.7 Summary and References; Exercises; 8. Scalar Quantization of Linear Prediction Coefficient: 8.1 Spectral Distortion; 8.2 Quantization Based on Reflection Coefficient and Log Area Ratio; 8.3 Line Spectral Frequency; 8.4 Quantization Based on Line Spectral Frequency; 8.5 Interpolation of LPC; 8.6 Summary and References; Exercises; 9. Linear Prediction Coding: 9.1 Speech Production Model; 9.2 Structure of the Algorithm; 9.3 Voicing Detector; 9.4 The FS1015 LPC Coder; 9.5 Limitations of the LPC Model; 9.6 Summary and References; Exercises; 10. Regular-Pulse Excitation Coders: 10.1 Multipulse Excitation Model; 10.2 Regular-Pulse-Excited Long-Term Prediction; 10.3 Summary and References; Exercises; 11. Code-Excited Linear Prediction: 11.1 The CELP Speech Production Model; 11.2 The Principle of Analysis-by-Synthesis; 11.3 Encoding and Decoding; 11.4 Excitation Codebook Search; 11.5 Postfilter; 11.6 Summary and References; Exercises; 12. The Federal Standard Version of CELP: 12.1 Improving the Long-Term Predictor; 12.2 The Concept of the Adaptive Codebook; 12.3 Incorporation of the Adaptive Codebook into the CELP Framework; 12.4 Stochastic Codebook Structure; 12.5 Adaptive Codebook Search; 12.6 Stochastic Codebook Search; 12.7 Encoder and Decoder; 12.8 Summary and References; Exercises; 13. Vector Sum Excited Linear Prediction: 13.1 The Core Encoding Structure; 13.2 Search Strategies for Excitation Codebooks; 13.3 Excitation Codebook Searches; 13.4 Gain-Related Procedures; 13.5 Encoder and Decoder; 13.6 Summary and References; Exercises; 14. Low-Delay CELP: 14.1 Strategies to Achieve Low Delay; 14.2 Basic Operational Principles; 14.3 Linear Prediction Analysis; 14.4 Excitation Codebook Search; 14.5 Backward Gain Adaptation; 14.6 Encoder and Decoder; 14.7 Codebook Training; 14.8 Summary and References; Exercises; 15. Vector Quantization of Linear Prediction Coefficient: 15.1 Correlation Among the LSFs; 15.2 Split VQ; 15.3 Multistage VQ; 15.4 Predictive VQ; 15.5 Summary and References; Exercises; 16. Algebraic CELP: 16.1 Algebraic Codebook Structure; 16.2 Adaptive Codebook; 16.3 Encoding and Decoding; 16.4 Algebraic Codebook Search; 16.5 Gain Quantization Using Conjugate VQ; 16.6 Other ACELP Standards; 16.7 Summary and References; Exercises; 17. Mixed Excitation Linear Prediction: 17.1 The MELP Speech Production Model; 17.2 Fourier Magnitudes; 17.3 Shaping Filters; 17.4 Pitch Period and Voicing Strength Estimation; 17.5 Encoder Operations; 17.6 Decoder Operations; 17.7 Summary and References; Exercises; 18. Source-Controlled Variable Bit-Rate CELP: 18.1 Adaptive Rate Decision; 18.2 LP Analysis and LSF-Related Operations; 18.3 Decoding and Encoding; 18.4 Summary and References; Exercises; 19. Speech Quality Assessment: 19.1 The Scope of Quality and Measuring Conditions; 19.2 Objective Quality Measurements for Waveform Coders; 19.3 Subjective Quality Measures; 19.4 Improvements on Objective Quality Measures; Appendix A. Minimum-Phase Property of the Forward Prediction-Error Filter; Appendix B. Some Properties of Line Spectral Frequency; Appendix C. Research Directions in Speech Coding; Appendix D. Linear Combiner for Pattern Classification; Appendix E. CELP: Optimal Long-Term Predictor to Minimize the Weighted Difference; Appendix F. Review of Linear Algebra: Orthogonality, Basis, Linear Independence, and the Gram–Schmidt Algorithm; Bibliography; Index.

    15 in stock

    £164.66

  • Flexible Pattern Matching in Strings

    Cambridge University Press Flexible Pattern Matching in Strings

    15 in stock

    Book SynopsisPresents recently developed algorithms for searching for simple, multiple and extended strings, regular expressions, exact and approximate matches.Trade Review'If you need efficient pattern matching for any kind of string then this is the only book I know that comes even close to providing you [with] the tools for the job.' The Journal of the ACCU'I really enjoyed reading and studying this book. I am convinced it is a must-read, especially chapters 4 through 6, for anyone who is involved in the task of designing algorithms for modern string or sequence matching.' Computing ReviewsTable of Contents1. Introduction; 2. String matching; 3. Multiple string matching; 4. Extended string matching; 5. Regular expression matching; 6. Approximate matching; 7. Conclusion; Bibliography; Index.

    £53.99

  • Kernel Methods for Pattern Analysis

    Cambridge University Press Kernel Methods for Pattern Analysis

    15 in stock

    Book SynopsisThe kernel functions methodology described here provides a powerful and unified framework for disciplines ranging from neural networks and pattern recognition to machine learning and data mining. This book provides practitioners with a large toolkit of algorithms, kernels and solutions ready to be implemented, suitable for standard pattern discovery problems.Trade Review'Kernel methods form an important aspect of modern pattern analysis, and this book gives a lively and timely account of such methods. … if you want to get a good idea of the current research in this field, this book cannot be ignored.' SIAM Review'… the book provides an excellent overview of this growing field. I highly recommend it to those who are interested in pattern analysis and machine learning, and especially to those who want to apply kernel-based methods to text analysis and bioinformatics problems.' Computing Reviews'… I enjoyed reading this book and am happy about its addition to my library as it is a valuable practitioner's reference. I especially liked the presentation of kernel-based pattern analysis algorithms in terse mathematical steps clearly identifying input data, output data, and steps of the process. The accompanying Matlab code or pseudocode is also extremely useful.' IAPR NewsletterTable of ContentsPreface; Part I. Basic Concepts: 1. Pattern analysis; 2. Kernel methods: an overview; 3. Properties of kernels; 4. Detecting stable patterns; Part II. Pattern Analysis Algorithms: 5. Elementary algorithms in feature space; 6. Pattern analysis using eigen-decompositions; 7. Pattern analysis using convex optimisation; 8. Ranking, clustering and data visualisation; Part III. Constructing Kernels: 9. Basic kernels and kernel types; 10. Kernels for text; 11. Kernels for structured data: strings, trees, etc.; 12. 
Kernels from generative models; Appendix A: proofs omitted from the main text; Appendix B: notational conventions; Appendix C: list of pattern analysis methods; Appendix D: list of kernels; References; Index.

    £82.64

  • The Pattern Recognition Basis of Artificial Intelligence

    IEEE Computer Society Press,U.S. The Pattern Recognition Basis of Artificial Intelligence

    15 in stock

    £95.36

  • Mathematical Analysis of Machine Learning

    Cambridge University Press Mathematical Analysis of Machine Learning

    1 in stock

    Book SynopsisThis self-contained textbook introduces students and researchers of AI to the key mathematical concepts and techniques necessary to learn and analyze machine learning algorithms. Readers will gain the technical knowledge needed to understand research papers in theoretical machine learning, without much difficulty.Trade Review'This graduate-level text gives a thorough, rigorous and up-to-date treatment of the main mathematical tools that have been developed for the analysis and design of machine learning methods. It is ideal for a graduate class, and the exercises at the end of each chapter make it suitable for self-study. An excellent addition to the literature from one of the leading researchers in this area, it is sure to become a classic.' Peter Bartlett, University of California, Berkeley'This book showcases the breadth and depth of mathematical ideas in learning theory. The author has masterfully synthesized techniques from the many disciplines that have contributed to this subject, and presented them in an accessible format that will be appreciated by both newcomers and experts alike. Readers will learn the tools-of-the-trade needed to make sense of the research literature and to express new ideas with clarity and precision.' Daniel Hsu, Columbia University'Tong Zhang shares in this book his deep and broad knowledge of machine learning, writing an impressively comprehensive and up-to-date reference text, providing a rigorous and rather advanced treatment of the most important topics and approaches in the mathematical study of machine learning. As an authoritative reference and introduction, his book will be a great asset to the field.' Robert Schapire, Microsoft Research'This book gives a systematic treatment of the modern mathematical techniques that are commonly used in the design and analysis of machine learning algorithms. 
Written by a key contributor to the field, it is a unique resource for graduate students and researchers seeking to gain a deep understanding of the theory of machine learning.' Shai Shalev-Shwartz, Hebrew University of JerusalemTable of Contents1. Introduction; 2. Basic probability inequalities for sums of independent random variables; 3. Uniform convergence and generalization analysis; 4. Empirical covering number analysis and symmetrization; 5. Covering number estimates; 6. Rademacher complexity and concentration inequalities; 7. Algorithmic stability analysis; 8. Model selection; 9. Analysis of kernel methods; 10. Additive and sparse models; 11. Analysis of neural networks; 12. Lower bounds and minimax analysis; 13. Probability inequalities for sequential random variables; 14. Basic concepts of online learning; 15. Online aggregation and second order algorithms; 16. Multi-armed bandits; 17. Contextual bandits; 18. Reinforcement learning; A. Basics of convex analysis; B. f-Divergence of probability measures; References; Author index; Subject index.

    £42.74

  • Hands-On Network Machine Learning with Python

    Cambridge University Press Hands-On Network Machine Learning with Python

    1 in stock

    £47.49

  • Explainable AI for Practitioners

    O'Reilly Media Explainable AI for Practitioners

    7 in stock

    Book SynopsisExplainability methods provide an essential toolkit for better understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability.

    £47.99

  • AI at the Edge

    O'Reilly Media AI at the Edge

    10 in stock

    Book SynopsisThis practical guide gives engineering professionals, including product managers and technology leaders, an end-to-end framework for solving real-world industrial, commercial, and scientific problems with edge AI.

    £47.99

  • Understanding Machine Learning From Theory to Algorithms

    Cambridge University Press Understanding Machine Learning From Theory to Algorithms

    1 in stock

    Book SynopsisMachine learning is one of the fastest growing areas of computer science, with far-reaching applications. This book explains the principles behind the automated learning approach and the considerations underlying its usage. The authors explain the 'hows' and 'whys' of machine-learning algorithms, making the field accessible to both students and practitioners.Trade Review'This elegant book covers both rigorous theory and practical methods of machine learning. This makes it a rather unique resource, ideal for all those who want to understand how to find structure in data.' Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, Germany'This is a timely text on the mathematical foundations of machine learning, providing a treatment that is both deep and broad, not only rigorous but also with intuition and insight. It presents a wide range of classic, fundamental algorithmic and analysis techniques as well as cutting-edge research directions. This is a great book for anyone interested in the mathematical and computational underpinnings of this important and fascinating field.' Avrim Blum, Carnegie Mellon University'This text gives a clear and broadly accessible view of the most important ideas in the area of full information decision problems. Written by two key contributors to the theoretical foundations in this area, it covers the range from theoretical foundations to algorithms, at a level appropriate for an advanced undergraduate course.' Peter L. Bartlett, University of California, BerkeleyTable of Contents1. Introduction; Part I. Foundations: 2. A gentle start; 3. A formal learning model; 4. Learning via uniform convergence; 5. The bias-complexity trade-off; 6. The VC-dimension; 7. Non-uniform learnability; 8. The runtime of learning; Part II. From Theory to Algorithms: 9. Linear predictors; 10. Boosting; 11. Model selection and validation; 12. Convex learning problems; 13. Regularization and stability; 14. 
Stochastic gradient descent; 15. Support vector machines; 16. Kernel methods; 17. Multiclass, ranking, and complex prediction problems; 18. Decision trees; 19. Nearest neighbor; 20. Neural networks; Part III. Additional Learning Models: 21. Online learning; 22. Clustering; 23. Dimensionality reduction; 24. Generative models; 25. Feature selection and generation; Part IV. Advanced Theory: 26. Rademacher complexities; 27. Covering numbers; 28. Proof of the fundamental theorem of learning theory; 29. Multiclass learnability; 30. Compression bounds; 31. PAC-Bayes; Appendix A. Technical lemmas; Appendix B. Measure concentration; Appendix C. Linear algebra.

    £48.44

  • The Cambridge Handbook of Cognitive Linguistics

    Cambridge University Press The Cambridge Handbook of Cognitive Linguistics

    15 in stock

    Book SynopsisThe best survey of cognitive linguistics available, this Handbook provides a thorough explanation of its rich methodology, key results, and interdisciplinary context. With in-depth coverage of the research questions, basic concepts, and various theoretical approaches, the Handbook addresses newly emerging subfields and shows their contribution to the discipline. The Handbook introduces fields of study that have become central to cognitive linguistics, such as conceptual mappings and construction grammar. It explains all the main areas of linguistic analysis traditionally expected in a full linguistics framework, and includes fields of study such as language acquisition, sociolinguistics, diachronic studies, and corpus linguistics. Setting linguistic facts within the context of many other disciplines, the Handbook will be welcomed by researchers and students in a broad range of disciplines, including linguistics, cognitive science, neuroscience, gesture studies, and computational linguistics.Trade ReviewAdvance praise: 'This is the definitive introduction to cognitive linguistics that the mature field deserves, written by the leading practitioners in cognitive approaches to grammar, semantics, conceptual structure, phonology, and everything in-between (and all around). I can't imagine a better introduction for students of language.' Benjamin K. Bergen, University of California, San DiegoTable of ContentsIntroduction Barbara Dancygier; Part I. Language in Cognition and Culture: 1. Opening commentary: language in cognition and culture N. J. Enfield; 2. Relationships between language and cognition Daniel Casasanto; 3. The study of indigenous languages Sally Rice; 4. First language acquisition Laura E. De Ruiter and Anna L. Theakston; 5. Second language acquisition Andrea Tyler; Part II. Language, Body, and Multimodal Communication: 6. Opening commentary: polytropos and communication in the wild Mark Turner; 7. 
Signed languages Sherman Wilcox and Corinne Occhino; 8. Gesture, language, and cognition Kensy Cooperrider and Susan Goldin-Meadow; 9. Multimodality in interaction Kurt Feyaerts, Geert Brône and Bert Oben; 10. Viewpoint Lieven Vandelanotte; 11. Embodied intersubjectivity Jordan Zlatev; 12. Intersubjectivity and grammar Ronny Boogaart and Alex Reuneker; Part III. Aspects of Linguistic Analysis: 13. Opening commentary: linguistic analysis John Newman; 14. Phonology Geoffrey S. Nathan; 15. The construction of words Geert Booij; 16. Lexical semantics John R. Taylor; 17. Cognitive grammar Ronald W. Langacker; 18. From constructions to construction grammars Thomas Hoffmann; 19. Construction grammars Thomas Hoffmann; 20. Cognitive linguistics and pragmatics Kerstin Fischer; 21. Fictive interaction Esther Pascual and Todd Oakley; 22. Diachronic approaches Alexander Bergs; Part IV. Conceptual Mappings: 23. Opening commentary: conceptual mappings Eve Sweetser; 24. Conceptual metaphor Karen Sullivan; 25. Metonymy Jeannette Littlemore; 26. Conceptual blending theory Todd Oakley and Esther Pascual; 27. Embodiment Raymond W. Gibbs, Jr; 28. Corpus linguistics and metaphor Elena Semino; 29. Metaphor, simulation, and fictive motion Teenie Matlock; Part V. Methodological Approaches: 30. Opening commentary: getting the measure of meaning Chris Sinha; 31. The quantitative turn Laura A. Janda; 32. Language and the brain Seana Coulson; 33. Cognitive sociolinguistics Willem B. Hollmann; 34. Computational resources: framenet and constructicon Hans C. Boas; 35. Computational approaches to metaphor: the case of MetaNet Oana A. David; 36. Corpus approaches Stefan Gries; 37. Cognitive linguistics and the study of textual meaning Barbara Dancygier; Part VI. Concepts and Approaches: Space and Time: 38. Linguistic patterns of space and time vocabulary Eve Sweetser and Alice Gaby; 39. Space-time mappings beyond language Alice Gaby and Eve Sweetser; 40. 
Conceptualizing time in terms of space: experimental evidence Tom Gijssels and Daniel Casasanto; 41. Discovering spatiotemporal concepts in discourse Thora Tenbrink.

    £47.99

  • Natural Language Processing

    Cambridge University Press Natural Language Processing

    1 in stock

    Book SynopsisWith a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end of chapter exercises, plus sample code available as an online resource.Trade Review'An amazingly compact, and at the same time comprehensive, introduction and reference to natural language processing (NLP). It describes the NLP basics, then employs this knowledge to solve typical NLP problems. It achieves very high coverage of NLP through a clever abstraction to typical high-level tasks, such as sequence labelling. Finally, it explains the topics in deep learning. The book captivates through its simple elegance, depth, and accessibility to a wide range of readers from undergrads to experienced researchers.' Iryna Gurevych, Technical University of Darmstadt, Germany'An excellent introduction to the field of natural language processing including recent advances in deep learning. 
By organising the material in terms of machine learning techniques - instead of the more traditional division by linguistic levels or applications - the authors are able to discuss different topics within a single coherent framework, with a gradual progression from basic notions to more complex material.' Joakim Nivre, Uppsala University'The book is a valuable tool for both beginning and advanced researchers in the field.' Catalin Stoean, zbMATHTable of ContentsPart I. Basics: 1. Introduction; 2. Counting relative frequencies; 3. Feature vectors; 4. Discriminative linear classifiers; 5. A perspective from information theory; 6. Hidden variables; Part II. Structures: 7. Generative sequence labelling; 8. Discriminative sequence labelling; 9. Sequence segmentation; 10. Predicting tree structures; 11. Transition-based methods for structured prediction; 12. Bayesian models; Part III. Deep Learning: 13. Neural network; 14. Representation learning; 15. Neural structured prediction; 16. Working with two texts; 17. Pre-training and transfer learning; 18. Deep latent variable models; Index.

    £55.09

  • Introduction to Graph Signal Processing

    Cambridge University Press Introduction to Graph Signal Processing

    15 in stock

    Book SynopsisAn intuitive and accessible text explaining the fundamentals and applications of graph signal processing. Requiring only an elementary understanding of linear algebra, it covers both basic and advanced topics, including node domain processing, graph signal frequency, sampling, and graph signal representations, as well as how to choose a graph. Understand the basic insights behind key concepts and learn how graphs can be associated to a range of specific applications across physical, biological and social networks, distributed sensor networks, image and video processing, and machine learning. With numerous exercises and Matlab examples to help put knowledge into practice, and a solutions manual available online for instructors, this unique text is essential reading for graduate and senior undergraduate students taking courses on graph signal processing, signal processing, information processing, and data analysis, as well as researchers and industry professionals.Table of Contents1. Introduction; 2. Node domain processing; 3. Graph signal frequency-Spectral graph theory; 4. Sampling; 5. Graph signal representations; 6. How to choose a graph; 7. Applications; Appendix A. Linear algebra and signal representations; Appendix B. GSP with Matlab: the GraSP toolbox; References; Index.

    £69.99

  • Scaling Up Machine Learning

    Cambridge University Press Scaling Up Machine Learning

    1 in stock

    Book SynopsisIn many practical situations it is impossible to run existing machine learning methods on a single computer, because either the data is too large or the speed and throughput requirements are too demanding. Researchers and practitioners will find here a variety of machine learning methods developed specifically for parallel or distributed systems, covering algorithms, platforms and applications.Trade Review'One of the landmark achievements of our time is the ability to extract value from large volumes of data. Engineering and algorithmic developments on this front have gelled substantially in recent years, and are quickly being reduced to practice in widely available, reusable forms. This book provides a broad and timely snapshot of the state of developments in scalable machine learning, which should be of interest to anyone who wishes to understand and extend the state of the art in analyzing data.' Joseph M. Hellerstein, University of California, Berkeley'This is a book that every machine learning practitioner should keep in their library.' Yoram Singer, Google Inc.'The contributions in this book run the gamut from frameworks for large-scale learning to parallel algorithms to applications, and contributors include many of the top people in this burgeoning subfield. Overall this book is an invaluable resource for anyone interested in the problem of learning from and working with big datasets.' William W. Cohen, Carnegie Mellon University, Pennsylvania'This unique, timely book provides a 360 degrees view and understanding of both conceptual and practical issues that arise when implementing leading machine learning algorithms on a wide range of parallel and high-performance computing platforms. It will serve as an indispensable handbook for the practitioner of large-scale data analytics and a guide to dealing with BIG data and making sound choices for efficient applying learning algorithms to them. 
It can also serve as the basis for an attractive graduate course on parallel/distributed machine learning and data mining.' Joydeep Ghosh, University of TexasTable of Contents1. Scaling up machine learning: introduction Ron Bekkerman, Mikhail Bilenko and John Langford; Part I. Frameworks for Scaling Up Machine Learning: 2. Mapreduce and its application to massively parallel learning of decision tree ensembles Biswanath Panda, Joshua S. Herbach, Sugato Basu and Roberto J. Bayardo; 3. Large-scale machine learning using DryadLINQ Mihai Budiu, Dennis Fetterly, Michael Isard, Frank McSherry and Yuan Yu; 4. IBM parallel machine learning toolbox Edwin Pednault, Elad Yom-Tov and Amol Ghoting; 5. Uniformly fine-grained data parallel computing for machine learning algorithms Meichun Hsu, Ren Wu and Bin Zhang; Part II. Supervised and Unsupervised Learning Algorithms: 6. PSVM: parallel support vector machines with incomplete Cholesky Factorization Edward Chang, Hongjie Bai, Kaihua Zhu, Hao Wang, Jian Li and Zhihuan Qiu; 7. Massive SVM parallelization using hardware accelerators Igor Durdanovic, Eric Cosatto, Hans Peter Graf, Srihari Cadambi, Venkata Jakkula, Srimat Chakradhar and Abhinandan Majumdar; 8. Large-scale learning to rank using boosted decision trees Krysta M. Svore and Christopher J. C. Burges; 9. The transform regression algorithm Ramesh Natarajan and Edwin Pednault; 10. Parallel belief propagation in factor graphs Joseph Gonzalez, Yucheng Low and Carlos Guestrin; 11. Distributed Gibbs sampling for latent variable models Arthur Asuncion, Padhraic Smyth, Max Welling, David Newman, Ian Porteous and Scott Triglia; 12. Large-scale spectral clustering with Mapreduce and MPI Wen-Yen Chen, Yangqiu Song, Hongjie Bai, Chih-Jen Lin and Edward Y. Chang; 13. Parallelizing information-theoretic clustering methods Ron Bekkerman and Martin Scholz; Part III. Alternative Learning Settings: 14. Parallel online learning Daniel Hsu, Nikos Karampatziakis, John Langford and Alex J. 
Smola; 15. Parallel graph-based semi-supervised learning Jeff Bilmes and Amarnag Subramanya; 16. Distributed transfer learning via cooperative matrix factorization Evan Xiang, Nathan Liu and Qiang Yang; 17. Parallel large-scale feature selection Jeremy Kubica, Sameer Singh and Daria Sorokina; Part IV. Applications: 18. Large-scale learning for vision with GPUs Adam Coates, Rajat Raina and Andrew Y. Ng; 19. Large-scale FPGA-based convolutional networks Clement Farabet, Yann LeCun, Koray Kavukcuoglu, Berin Martini, Polina Akselrod, Selcuk Talay and Eugenio Culurciello; 20. Mining tree structured data on multicore systems Shirish Tatikonda and Srinivasan Parthasarathy; 21. Scalable parallelization of automatic speech recognition Jike Chong, Ekaterina Gonina, Kisun You and Kurt Keutzer.

    £42.74

  • Machine Learning Refined

    Cambridge University Press Machine Learning Refined

    3 in stock

    Book SynopsisWith its intuitive yet rigorous approach to machine learning, this text provides students with the fundamental knowledge and practical tools needed to conduct research and build data-driven products. The authors prioritize geometric intuition and algorithmic thinking, and include detail on all the essential mathematical prerequisites, to offer a fresh and accessible way to learn. Practical applications are emphasized, with examples from disciplines including computer vision, natural language processing, economics, neuroscience, recommender systems, physics, and biology. Over 300 color illustrations are included and have been meticulously designed to enable an intuitive grasp of technical concepts, and over 100 in-depth coding exercises (in Python) provide a real understanding of crucial machine learning algorithms. A suite of online resources including sample code, data sets, interactive lecture slides, and a solutions manual are provided online, making this an ideal text both for graduate courses on machine learning and for individual reference and self-study.Trade Review'An excellent book that treats the fundamentals of machine learning from basic principles to practical implementation. The book is suitable as a text for senior-level and first-year graduate courses in engineering and computer science. It is well organized and covers basic concepts and algorithms in mathematical optimization methods, linear learning, and nonlinear learning techniques. The book is nicely illustrated in multiple colors and contains numerous examples and coding exercises using Python.' John G. Proakis, University of California, San Diego'Some machine learning books cover only programming aspects, often relying on outdated software tools; some focus exclusively on neural networks; others, solely on theoretical foundations; and yet more books detail advanced topics for the specialist. This fully revised and expanded text provides a broad and accessible introduction to machine learning for engineering and computer science students. 
The presentation builds on first principles and geometric intuition, while offering real-world examples, commented implementations in Python, and computational exercises. I expect this book to become a key resource for students and researchers.' Osvaldo Simeone, King's College London'This book is great for getting started in machine learning. It builds up the tools of the trade from first principles, provides lots of examples, and explains one thing at a time at a steady pace. The level of detail and runnable code show what's really going on when we run a learning algorithm.' David Duvenaud, University of Toronto'This book covers various essential machine learning methods (e.g., regression, classification, clustering, dimensionality reduction, and deep learning) from a unified mathematical perspective of seeking the optimal model parameters that minimize a cost function. Every method is explained in a comprehensive, intuitive way, and mathematical understanding is aided and enhanced with many geometric illustrations and elegant Python implementations.' Kimiaki Shirahama, Kindai University, Japan'Books featuring machine learning are many, but those which are simple, intuitive, and yet theoretical are extraordinary 'outliers'. This book is a fantastic and easy way to launch yourself into the exciting world of machine learning, grasp its core concepts, and code them up in Python or Matlab. It was my inspiring guide in preparing my 'Machine Learning Blinks' on my BASIRA YouTube channel for both undergraduate and graduate levels.' Islem Rekik, Director of the Brain And SIgnal Research and Analysis (BASIRA) Laboratory'This is a comprehensive textbook on the fundamental concepts of machine learning. In the second edition, the authors provide a very accessible introduction to the main ideas behind machine learning models.' Helena Mihaljević, zbMATHTable of Contents1. Introduction to machine learning; Part I. Mathematical Optimization: 2. Zero order optimization techniques; 3. First order methods; 4. Second order optimization techniques; Part II. Linear Learning: 5. Linear regression; 6. Linear two-class classification; 7. Linear multi-class classification; 8. Linear unsupervised learning; 9. Feature engineering and selection; Part III. Nonlinear Learning: 10. Principles of nonlinear feature engineering; 11. Principles of feature learning; 12. Kernel methods; 13. Fully-connected neural networks; 14. Tree-based learners; Part IV. Appendices: Appendix A. Advanced first and second order optimization methods; Appendix B. Derivatives and automatic differentiation; Appendix C. Linear algebra.

    £55.09

  • The Science of Deep Learning

    Cambridge University Press The Science of Deep Learning

    1 in stock

    Book SynopsisThe Science of Deep Learning emerged from courses taught by the author that have provided thousands of students with training and experience for their academic studies, and prepared them for careers in deep learning, machine learning, and artificial intelligence in top companies in industry and academia. The book begins by covering the foundations of deep learning, followed by key deep learning architectures. Subsequent parts on generative models and reinforcement learning may be used as part of a deep learning course or as part of a course on each topic. The book includes state-of-the-art topics such as Transformers, graph neural networks, variational autoencoders, and deep reinforcement learning, with a broad range of applications. The appendices provide equations for computing gradients in backpropagation and optimization, and best practices in scientific writing and reviewing. The text presents an up-to-date guide to the field built upon clear visualizations using a unified notation.Trade Review'In the avalanche of books on Deep Learning, this one stands out. Iddo Drori has mastered reinforcement learning - in its technical meaning and in his successful, commonsense approach to teaching and understanding.' Gilbert Strang, Massachusetts Institute of Technology'This book covers an impressive breadth of foundational concepts and algorithms behind modern deep learning. By reading this book, readers will quickly but thoroughly learn and appreciate foundations and advances of modern deep learning.' Kyunghyun Cho, New York University'This book offers a fascinating tour of the field of deep learning, which in only ten years has come to revolutionize almost every area of computing. Drori provides concise descriptions of many of the most important developments, combining unified mathematical notation and ample figures to form an essential resource for students and practitioners alike.' 
Jonathan Ventura, Cal Poly'Drori's textbook goes under the hood of deep learning, covering a broad swath of modern techniques in optimization that are useful for efficiently training neural networks. The book also covers regularization methods to avoid overfitting, a common issue when working with deep learning models. Overall, this is an excellent textbook for students and practitioners who want to gain a deeper understanding of deep learning.' Madeleine Udell, Stanford University'This textbook provides an excellent introduction to contemporary methods and models in deep learning. I expect this book to become a key resource in data science education for students and researchers.' Nakul Verma, Columbia University'This new book by Professor Drori brings fresh insights from his experience teaching thousands of students at Columbia, MIT, and NYU during the past several years. The book is a unique resource and opportunity for educators and researchers worldwide to build on his highly successful deep learning course.' Claudio Silva, New York University'Drori's book covers deep learning, from fundamentals to applications. The fundamentals are covered with clear figures and examples, making the underlying algorithms easy to understand for non-specialists. The multidisciplinary applications are thoughtfully selected to illustrate the broad applications of deep neural networks to specialized domains while highlighting the common themes and architectures between them.' Tonio Buonassisi, Professor of Mechanical Engineering, Massachusetts Institute of Technology'Drori's textbook makes the learning curve for deep learning a whole lot easier to climb. It follows a rigid scientific narrative, accompanied by a trove of code examples and visualizations. These enable a truly multi-modal approach to learning that will allow many students to understand the material better and sets them on a path of exploration.' 
Joaquin Vanschoren, Assistant Professor of Machine Learning, Eindhoven University of TechnologyTable of ContentsPreface; Notation; Part I. Foundations: 1. Introduction; 2. Forward and backpropagation; 3. Optimization; 4. Regularization; Part II. Architectures: 5. Convolutional neural networks; 6. Sequence models; 7. Graph neural networks; 8. Transformers; Part III. Generative Models: 9. Generative adversarial networks; 10. Variational autoencoders; Part IV. Reinforcement Learning: 11. Reinforcement learning; 12. Deep reinforcement learning; Part V. Applications: 13. Applications; Appendices; References; Index.

    1 in stock

    £42.74

  • Algorithmic High-Dimensional Robust Statistics

    Cambridge University Press Algorithmic High-Dimensional Robust Statistics

    1 in stock

    Book Synopsis
    This reference text offers a clear, unified treatment for graduate students, academic researchers, and professionals interested in understanding and developing statistical procedures for high-dimensional data that are robust to deviations from idealized modeling assumptions, including robustness to model misspecification and to adversarial outliers in the dataset.

    Trade Review
    'This is a timely book on efficient algorithms for computing robust statistics from noisy data. It presents lucid intuitive descriptions of the algorithms as well as precise statements of results with rigorous proofs - a nice combination indeed. The topic has seen fundamental breakthroughs over the last few years and the authors are among the leading contributors. The reader will get a ringside view of the developments.' Ravi Kannan, Visiting Professor, Indian Institute of Science

    Table of Contents
    1. Introduction to robust statistics; 2. Efficient high-dimensional robust mean estimation; 3. Algorithmic refinements in robust mean estimation; 4. Robust covariance estimation; 5. List-decodable learning; 6. Robust estimation via higher moments; 7. Robust supervised learning; 8. Information-computation tradeoffs in high-dimensional robust statistics; A. Mathematical background; References; Index.

    1 in stock

    £42.74

  • Pattern Recognition in Computational Molecular Biology

    John Wiley & Sons Inc Pattern Recognition in Computational Molecular Biology

    10 in stock

    Book Synopsis
    A comprehensive overview of high-performance pattern recognition techniques and approaches to Computational Molecular Biology. This book surveys the developments of techniques and approaches on pattern recognition related to Computational Molecular Biology. Providing a broad coverage of the field, the authors cover fundamental and technical information on these techniques and approaches, as well as discussing their related problems. The text consists of twenty-nine chapters, organized into seven parts: Pattern Recognition in Sequences, Pattern Recognition in Secondary Structures, Pattern Recognition in Tertiary Structures, Pattern Recognition in Quaternary Structures, Pattern Recognition in Microarrays, Pattern Recognition in Phylogenetic Trees, and Pattern Recognition in Biological Networks. Surveys the development of techniques and approaches on pattern recognition in biomolecular data. Discusses pattern recognition.

    Table of Contents
    LIST OF CONTRIBUTORS xxi PREFACE xxvii I PATTERN RECOGNITION IN SEQUENCES 1 1 COMBINATORIAL HAPLOTYPING PROBLEMS 3 Giuseppe Lancia 1.1 Introduction / 3 1.2 Single Individual Haplotyping / 5 1.2.1 The Minimum Error Correction Model / 8 1.2.2 Probabilistic Approaches and Alternative Models / 10 1.3 Population Haplotyping / 12 1.3.1 Clark's Rule / 14 1.3.2 Pure Parsimony / 15 1.3.3 Perfect Phylogeny / 19 1.3.4 Disease Association / 21 1.3.5 Other Models / 22 References / 23 2 ALGORITHMIC PERSPECTIVES OF THE STRING BARCODING PROBLEMS 28 Sima Behpour and Bhaskar DasGupta 2.1 Introduction / 28 2.2 Summary of Algorithmic Complexity Results for Barcoding Problems / 32 2.2.1 Average Length of Optimal Barcodes / 33 2.3 Entropy-Based Information Content Technique for Designing Approximation Algorithms for String Barcoding Problems / 34 2.4 Techniques for Proving Inapproximability Results for String Barcoding Problems / 36 2.4.1 Reductions from Set Covering Problem / 36 2.4.2 Reduction from Graph-Coloring Problem / 38 2.5 Heuristic
Algorithms for String Barcoding Problems / 39 2.5.1 Entropy-Based Method with a Different Measure for Information Content / 39 2.5.2 Balanced Partitioning Approach / 40 2.6 Conclusion / 40 Acknowledgments / 41 References / 41 3 ALIGNMENT-FREE MEASURES FOR WHOLE-GENOME COMPARISON 43Matteo Comin and Davide Verzotto 3.1 Introduction / 43 3.2 Whole-Genome Sequence Analysis / 44 3.2.1 Background on Whole-Genome Comparison / 44 3.2.2 Alignment-Free Methods / 45 3.2.3 Average Common Subword / 46 3.2.4 Kullback–Leibler Information Divergence / 47 3.3 Underlying Approach / 47 3.3.1 Irredundant Common Subwords / 48 3.3.2 Underlying Subwords / 49 3.3.3 Efficient Computation of Underlying Subwords / 50 3.3.4 Extension to Inversions and Complements / 53 3.3.5 A Distance-Like Measure Based on Underlying Subwords / 53 3.4 Experimental Results / 54 3.4.1 Genome Data sets and Reference Taxonomies / 54 3.4.2 Whole-Genome Phylogeny Reconstruction / 56 3.5 Conclusion / 61 Author’s Contributions / 62 Acknowledgments / 62 References / 62 4 A MAXIMUM LIKELIHOOD FRAMEWORK FOR MULTIPLE SEQUENCE LOCAL ALIGNMENT 65Chengpeng Bi 4.1 Introduction / 65 4.2 Multiple Sequence Local Alignment / 67 4.2.1 Overall Objective Function / 67 4.2.2 Maximum Likelihood Model / 68 4.3 Motif Finding Algorithms / 70 4.3.1 DEM Motif Algorithm / 70 4.3.2 WEM Motif Finding Algorithm / 70 4.3.3 Metropolis Motif Finding Algorithm / 72 4.3.4 Gibbs Motif Finding Algorithm / 73 4.3.5 Pseudo-Gibbs Motif Finding Algorithm / 74 4.4 Time Complexity / 75 4.5 Case Studies / 75 4.5.1 Performance Evaluation / 76 4.5.2 CRP Binding Sites / 76 4.5.3 Multiple Motifs in Helix–Turn–Helix Protein Structure / 78 4.6 Conclusion / 80 References / 81 5 GLOBAL SEQUENCE ALIGNMENT WITH A BOUNDED NUMBER OF GAPS 83Carl Barton, Tomáš Flouri, Costas S. Iliopoulos, and Solon P. 
Pissis 5.1 Introduction / 83 5.2 Definitions and Notation / 85 5.3 Problem Definition / 87 5.4 Algorithms / 88 5.5 Conclusion / 94 References / 95 II PATTERN RECOGNITION IN SECONDARY STRUCTURES 97 6 A SHORT REVIEW ON PROTEIN SECONDARY STRUCTURE PREDICTION METHODS 99Renxiang Yan, Jiangning Song, Weiwen Cai, and Ziding Zhang 6.1 Introduction / 99 6.2 Representative Protein Secondary Structure Prediction Methods / 102 6.2.1 Chou–Fasman / 103 6.2.2 GOR / 104 6.2.3 PHD / 104 6.2.4 PSIPRED / 104 6.2.5 SPINE-X / 105 6.2.6 PSSpred / 105 6.2.7 Meta Methods / 105 6.3 Evaluation of Protein Secondary Structure Prediction Methods / 106 6.3.1 Measures / 106 6.3.2 Benchmark / 106 6.3.3 Performances / 107 6.4 Conclusion / 110 Acknowledgments / 110 References / 111 7 A GENERIC APPROACH TO BIOLOGICAL SEQUENCE SEGMENTATION PROBLEMS: APPLICATION TO PROTEIN SECONDARY STRUCTURE PREDICTION 114Yann Guermeur and Fabien Lauer 7.1 Introduction / 114 7.2 Biological Sequence Segmentation / 115 7.3 MSVMpred / 117 7.3.1 Base Classifiers / 117 7.3.2 Ensemble Methods / 118 7.3.3 Convex Combination / 119 7.4 Postprocessing with A Generative Model / 119 7.5 Dedication to Protein Secondary Structure Prediction / 120 7.5.1 Biological Problem / 121 7.5.2 MSVMpred2 / 121 7.5.3 Hidden Semi-Markov Model / 122 7.5.4 Experimental Results / 122 7.6 Conclusions and Ongoing Research / 125 Acknowledgments / 126 References / 126 8 STRUCTURAL MOTIF IDENTIFICATION AND RETRIEVAL: A GEOMETRICAL APPROACH 129Virginio Cantoni, Marco Ferretti, Mirto Musci, and Nahumi Nugrahaningsih 8.1 Introduction / 129 8.2 A Few Basic Concepts / 130 8.2.1 Hierarchy of Protein Structures / 130 8.2.2 Secondary Structure Elements / 131 8.2.3 Structural Motifs / 132 8.2.4 Available Sources for Protein Data / 134 8.3 State of the Art / 135 8.3.1 Protein Structure Motif Search / 135 8.3.2 Promotif / 136 8.3.3 Secondary-Structure Matching / 137 8.3.4 Multiple Structural Alignment by Secondary Structures / 138 8.4 A Novel Geometrical Approach 
to Motif Retrieval / 138 8.4.1 Secondary Structures Cooccurrences / 138 8.4.2 Cross Motif Search / 143 8.4.3 Complete Cross Motif Search / 146 8.5 Implementation Notes / 149 8.5.1 Optimizations / 149 8.5.2 Parallel Approaches / 150 8.6 Conclusions and Future Work / 151 Acknowledgment / 152 References / 152 9 GENOME-WIDE SEARCH FOR PSEUDOKNOTTED NONCODING RNAs: A COMPARATIVE STUDY 155Meghana Vasavada, Kevin Byron, Yang Song, and Jason T.L. Wang 9.1 Introduction / 155 9.2 Background / 156 9.2.1 Noncoding RNAs and Their Secondary Structures / 156 9.2.2 Pseudoknotted ncRNA Search Tools / 157 9.3 Methodology / 157 9.4 Results and Interpretation / 161 9.5 Conclusion / 162 References / 163 III PATTERN RECOGNITION IN TERTIARY STRUCTURES 165 10 MOTIF DISCOVERY IN PROTEIN 3D-STRUCTURES USING GRAPH MINING TECHNIQUES 167Wajdi Dhifli and Engelbert Mephu Nguifo 10.1 Introduction / 167 10.2 From Protein 3D-Structures to Protein Graphs / 169 10.2.1 Parsing Protein 3D-Structures into Graphs / 169 10.3 Graph Mining / 172 10.4 Subgraph Mining / 173 10.5 Frequent Subgraph Discovery / 173 10.5.1 Problem Definition / 174 10.5.2 Candidates Generation / 176 10.5.3 Frequent Subgraph Discovery Approaches / 177 10.5.4 Variants of Frequent Subgraph Mining: Closed and Maximal Subgraphs / 178 10.6 Feature Selection / 179 10.6.1 Relevance of a Feature / 179 10.7 Feature Selection for Subgraphs / 180 10.7.1 Problem Statement / 180 10.7.2 Mining Top-k Subgraphs / 180 10.7.3 Clustering-Based Subgraph Selection / 181 10.7.4 Sampling-Based Approaches / 181 10.7.5 Approximate Subgraph Mining / 181 10.7.6 Discriminative Subgraph Selection / 182 10.7.7 Other Significant Subgraph Selection Approaches / 182 10.8 Discussion / 183 10.9 Conclusion / 185 Acknowledgments / 185 References / 186 11 FUZZY AND UNCERTAIN LEARNING TECHNIQUES FOR THE ANALYSIS AND PREDICTION OF PROTEIN TERTIARY STRUCTURES 190Chinua Umoja, Xiaxia Yu, and Robert Harrison 11.1 Introduction / 190 11.2 Genetic Algorithms / 192 11.2.1 GA 
Model Selection in Protein Structure Prediction / 196 11.2.2 Common Methodology / 198 11.3 Supervised Machine Learning Algorithm / 201 11.3.1 Artificial Neural Networks / 201 11.3.2 ANNs in Protein Structure Prediction / 202 11.3.3 Support Vector Machines / 203 11.4 Fuzzy Application / 204 11.4.1 Fuzzy Logic / 204 11.4.2 Fuzzy SVMs / 204 11.4.3 Adaptive-Network-Based Fuzzy Inference Systems / 205 11.4.4 Fuzzy Decision Trees / 206 11.5 Conclusion / 207 References / 208 12 PROTEIN INTER-DOMAIN LINKER PREDICTION 212Maad Shatnawi, Paul D. Yoo, and Sami Muhaidat 12.1 Introduction / 212 12.2 Protein Structure Overview / 213 12.3 Technical Challenges and Open Issues / 214 12.4 Prediction Assessment / 215 12.5 Current Approaches / 216 12.5.1 DomCut / 216 12.5.2 Scooby-Domain / 217 12.5.3 FIEFDom / 218 12.5.4 Chatterjee et al. (2009) / 219 12.5.5 Drop / 219 12.6 Domain Boundary Prediction Using Enhanced General Regression Network / 220 12.6.1 Multi-Domain Benchmark Data Set / 220 12.6.2 Compact Domain Profile / 221 12.6.3 The Enhanced Semi-Parametric Model / 222 12.6.4 Training, Testing, and Validation / 225 12.6.5 Experimental Results / 226 12.7 Inter-Domain Linkers Prediction Using Compositional Index and Simulated Annealing / 227 12.7.1 Compositional Index / 228 12.7.2 Detecting the Optimal Set of Threshold Values Using Simulated Annealing / 229 12.7.3 Experimental Results / 230 12.8 Conclusion / 232 References / 233 13 PREDICTION OF PROLINE CIS–TRANS ISOMERIZATION 236Paul D. Yoo, Maad Shatnawi, Sami Muhaidat, Kamal Taha, and Albert Y. 
Zomaya 13.1 Introduction / 236 13.2 Methods / 238 13.2.1 Evolutionary Data Set Construction / 238 13.2.2 Protein Secondary Structure Information / 239 13.2.3 Method I: Intelligent Voting / 239 13.2.4 Method II: Randomized Meta-Learning / 241 13.2.5 Model Validation and Testing / 242 13.2.6 Parameter Tuning / 242 13.3 Model Evaluation and Analysis / 243 13.4 Conclusion / 245 References / 245 IV PATTERN RECOGNITION IN QUATERNARY STRUCTURES 249 14 PREDICTION OF PROTEIN QUATERNARY STRUCTURES 251Akbar Vaseghi, Maryam Faridounnia, Soheila Shokrollahzade, Samad Jahandideh, and Kuo-Chen Chou 14.1 Introduction / 251 14.2 Protein Structure Prediction / 255 14.2.1 Secondary Structure Prediction / 255 14.2.2 Modeling of Tertiary Structure / 256 14.3 Template-Based Predictions / 257 14.3.1 Homology Modeling / 257 14.3.2 Threading Methods / 257 14.3.3 Ab initio Modeling / 257 14.4 Critical Assessment of Protein Structure Prediction / 258 14.5 Quaternary Structure Prediction / 258 14.6 Conclusion / 261 Acknowledgments / 261 References / 261 15 COMPARISON OF PROTEIN QUATERNARY STRUCTURES BY GRAPH APPROACHES 266Sheng-Lung Peng and Yu-Wei Tsay 15.1 Introduction / 266 15.2 Similarity in the Graph Model / 268 15.2.1 Graph Model for Proteins / 270 15.3 Measuring Structural Similarity VIA MCES / 272 15.3.1 Problem Formulation / 273 15.3.2 Constructing P-Graphs / 274 15.3.3 Constructing Line Graphs / 276 15.3.4 Constructing Modular Graphs / 276 15.3.5 Maximum Clique Detection / 277 15.3.6 Experimental Results / 277 15.4 Protein Comparison VIA Graph Spectra / 279 15.4.1 Graph Spectra / 279 15.4.2 Matrix Selection / 281 15.4.3 Graph Cospectrality and Similarity / 283 15.4.4 Cospectral Comparison / 283 15.4.5 Experimental Results / 284 15.5 Conclusion / 287 References / 287 16 STRUCTURAL DOMAINS IN PREDICTION OF BIOLOGICAL PROTEIN–PROTEIN INTERACTIONS 291Mina Maleki, Michael Hall, and Luis Rueda 16.1 Introduction / 291 16.2 Structural Domains / 293 16.3 The Prediction Framework / 293 16.4 
Feature Extraction and Prediction Properties / 294 16.4.1 Physicochemical Properties / 296 16.4.2 Domain-Based Properties / 298 16.5 Feature Selection / 299 16.5.1 Filter Methods / 299 16.5.2 Wrapper Methods / 301 16.6 Classification / 301 16.6.1 Linear Dimensionality Reduction / 301 16.6.2 Support Vector Machines / 303 16.6.3 k-Nearest Neighbor / 303 16.6.4 Naive Bayes / 304 16.7 Evaluation and Analysis / 304 16.8 Results and Discussion / 304 16.8.1 Analysis of the Prediction Properties / 304 16.8.2 Analysis of Structural DDIs / 307 16.9 Conclusion / 309 References / 310 V PATTERN RECOGNITION IN MICROARRAYS 315 17 CONTENT-BASED RETRIEVAL OF MICROARRAY EXPERIMENTS 317 Hasan Oğul 17.1 Introduction / 317 17.2 Information Retrieval: Terminology and Background / 318 17.3 Content-Based Retrieval / 320 17.4 Microarray Data and Databases / 322 17.5 Methods for Retrieving Microarray Experiments / 324 17.6 Similarity Metrics / 327 17.7 Evaluating Retrieval Performance / 329 17.8 Software Tools / 330 17.9 Conclusion and Future Directions / 331 Acknowledgment / 332 References / 332 18 EXTRACTION OF DIFFERENTIALLY EXPRESSED GENES IN MICROARRAY DATA 335 Tiratha Raj Singh, Brigitte Vannier, and Ahmed Moussa 18.1 Introduction / 335 18.2 From Microarray Image to Signal / 336 18.2.1 Signal from Oligo DNA Array Image / 336 18.2.2 Signal from Two-Color cDNA Array / 337 18.3 Microarray Signal Analysis / 337 18.3.1 Absolute Analysis and Replicates in Microarrays / 338 18.3.2 Microarray Normalization / 339 18.4 Algorithms for DE Gene Selection / 339 18.4.1 Within–Between DE Gene (WB-DEG) Selection Algorithm / 340 18.4.2 Comparison of the WB-DEGs with Two Classical DE Gene Selection Methods on Latin Square Data / 341 18.5 Gene Ontology Enrichment and Gene Set Enrichment Analysis / 343 18.6 Conclusion / 345 References / 345 19 CLUSTERING AND CLASSIFICATION TECHNIQUES FOR GENE EXPRESSION PROFILE PATTERN ANALYSIS 347 Emanuel Weitschek, Giulia Fiscon, Valentina Fustaino, Giovanni Felici, and
Paola Bertolazzi 19.1 Introduction / 347 19.2 Transcriptome Analysis / 348 19.3 Microarrays / 349 19.3.1 Applications / 349 19.3.2 Microarray Technology / 350 19.3.3 Microarray Workflow / 350 19.4 RNA-Seq / 351 19.5 Benefits and Drawbacks of RNA-Seq and Microarray Technologies / 353 19.6 Gene Expression Profile Analysis / 356 19.6.1 Data Definition / 356 19.6.2 Data Analysis / 357 19.6.3 Normalization and Background Correction / 357 19.6.4 Genes Clustering / 359 19.6.5 Experiment Classification / 361 19.6.6 Software Tools for Gene Expression Profile Analysis / 362 19.7 Real Case Studies / 364 19.8 Conclusions / 367 References / 368 20 MINING INFORMATIVE PATTERNS IN MICROARRAY DATA 371Li Teng 20.1 Introduction / 371 20.2 Patterns with Similarity / 373 20.2.1 Similarity Measurement / 374 20.2.2 Clustering / 376 20.2.3 Biclustering / 379 20.2.4 Types of Biclusters / 380 20.2.5 Measurement of the Homogeneity / 383 20.2.6 Biclustering Algorithms with Different Searching Schemes / 387 20.3 Conclusion / 391 References / 391 21 ARROW PLOT AND CORRESPONDENCE ANALYSIS MAPS FOR VISUALIZING THE EFFECTS OF BACKGROUND CORRECTION AND NORMALIZATION METHODS ON MICROARRAY DATA 394Carina Silva, Adelaide Freitas, Sara Roque, and Lisete Sousa 21.1 Overview / 394 21.1.1 Background Correction Methods / 395 21.1.2 Normalization Methods / 396 21.1.3 Literature Review / 397 21.2 Arrow Plot / 399 21.2.1 DE Genes Versus Special Genes / 399 21.2.2 Definition and Properties of the ROC Curve / 400 21.2.3 AUC and Degenerate ROC Curves / 401 21.2.4 Overlapping Coefficient / 402 21.2.5 Arrow Plot Construction / 403 21.3 Significance Analysis of Microarrays / 404 21.4 Correspondence Analysis / 405 21.4.1 Basic Principles / 405 21.4.2 Interpretation of CA Maps / 406 21.5 Impact of the Preprocessing Methods / 407 21.5.1 Class Prediction Context / 408 21.5.2 Class Comparison Context / 408 21.6 Conclusions / 412 Acknowledgments / 413 References / 413 VI PATTERN RECOGNITION IN PHYLOGENETIC TREES 417 22 
PATTERN RECOGNITION IN PHYLOGENETICS: TREES AND NETWORKS 419David A. Morrison 22.1 Introduction / 419 22.2 Networks and Trees / 420 22.3 Patterns and Their Processes / 424 22.4 The Types of Patterns / 427 22.5 Fingerprints / 431 22.6 Constructing Networks / 433 22.7 Multi-Labeled Trees / 435 22.8 Conclusion / 436 References / 437 23 DIVERSE CONSIDERATIONS FOR SUCCESSFUL PHYLOGENETIC TREE RECONSTRUCTION: IMPACTS FROM MODEL MISSPECIFICATION, RECOMBINATION, HOMOPLASY, AND PATTERN RECOGNITION 439Diego Mallo, Agustín Sánchez-Cobos, and Miguel Arenas 23.1 Introduction / 440 23.2 Overview on Methods and Frameworks for Phylogenetic Tree Reconstruction / 440 23.2.1 Inferring Gene Trees / 441 23.2.2 Inferring Species Trees / 442 23.3 Influence of Substitution Model Misspecification on Phylogenetic Tree Reconstruction / 445 23.4 Influence of Recombination on Phylogenetic Tree Reconstruction / 446 23.5 Influence of Diverse Evolutionary Processes on Species Tree Reconstruction / 447 23.6 Influence of Homoplasy on Phylogenetic Tree Reconstruction: The Goals of Pattern Recognition / 449 23.7 Concluding Remarks / 449 Acknowledgments / 450 References / 450 24 AUTOMATED PLAUSIBILITY ANALYSIS OF LARGE PHYLOGENIES 457David Dao, Tomáš Flouri, and Alexandros Stamatakis 24.1 Introduction / 457 24.2 Preliminaries / 459 24.3 A Naïve Approach / 462 24.4 Toward a Faster Method / 463 24.5 Improved Algorithm / 467 24.5.1 Preprocessing / 467 24.5.2 Computing Lowest Common Ancestors / 468 24.5.3 Constructing the Induced Tree / 468 24.5.4 Final Remarks / 471 24.6 Implementation / 473 24.6.1 Preprocessing / 473 24.6.2 Reconstruction / 473 24.6.3 Extracting Bipartitions / 474 24.7 Evaluation / 474 24.7.1 Test Data Sets / 474 24.7.2 Experimental Results / 475 24.8 Conclusion / 479 Acknowledgment / 481 References / 481 25 A NEW FAST METHOD FOR DETECTING AND VALIDATING HORIZONTAL GENE TRANSFER EVENTS USING PHYLOGENETIC TREES AND AGGREGATION FUNCTIONS 483Dunarel Badescu, Nadia Tahiri, and Vladimir 
Makarenkov 25.1 Introduction / 483 25.2 Methods / 485 25.2.1 Clustering Using Variability Functions / 485 25.2.2 Other Variants of Clustering Functions Implemented in the Algorithm / 487 25.2.3 Description of the New Algorithm / 488 25.2.4 Time Complexity / 491 25.3 Experimental Study / 491 25.3.1 Implementation / 491 25.3.2 Synthetic Data / 491 25.3.3 Real Prokaryotic (Genomic) Data / 495 25.4 Results and Discussion / 501 25.4.1 Analysis of Synthetic Data / 501 25.4.2 Analysis of Prokaryotic Data / 502 25.5 Conclusion / 502 References / 503 VII PATTERN RECOGNITION IN BIOLOGICAL NETWORKS 505 26 COMPUTATIONAL METHODS FOR MODELING BIOLOGICAL INTERACTION NETWORKS 507Christos Makris and Evangelos Theodoridis 26.1 Introduction / 507 26.2 Measures/Metrics / 508 26.3 Models of Biological Networks / 511 26.4 Reconstructing and Partitioning Biological Networks / 511 26.5 PPI Networks / 513 26.6 Mining PPI Networks—Interaction Prediction / 517 26.7 Conclusions / 519 References / 519 27 BIOLOGICAL NETWORK INFERENCE AT MULTIPLE SCALES: FROM GENE REGULATION TO SPECIES INTERACTIONS 525Andrej Aderhold, V Anne Smith, and Dirk Husmeier 27.1 Introduction / 525 27.2 Molecular Systems / 528 27.3 Ecological Systems / 528 27.4 Models and Evaluation / 529 27.4.1 Notations / 529 27.4.2 Sparse Regression and the LASSO / 530 27.4.3 Bayesian Regression / 530 27.4.4 Evaluation Metric / 531 27.5 Learning Gene Regulation Networks / 532 27.5.1 Nonhomogeneous Bayesian Regression / 533 27.5.2 Gradient Estimation / 534 27.5.3 Simulated Bio-PEPA Data / 534 27.5.4 Real mRNA Expression Profile Data / 535 27.5.5 Method Evaluation and Learned Networks / 536 27.6 Learning Species Interaction Networks / 540 27.6.1 Regression Model of Species interactions / 540 27.6.2 Multiple Global Change-Points / 541 27.6.3 Mondrian Process Change-Points / 542 27.6.4 Synthetic Data / 544 27.6.5 Simulated Population Dynamics / 544 27.6.6 Real World Plant Data / 546 27.6.7 Method Evaluation and Learned Networks / 546 27.7 
Conclusion / 550 References / 550 28 DISCOVERING CAUSAL PATTERNS WITH STRUCTURAL EQUATION MODELING: APPLICATION TO TOLL-LIKE RECEPTOR SIGNALING PATHWAY IN CHRONIC LYMPHOCYTIC LEUKEMIA 555Athina Tsanousa, Stavroula Ntoufa, Nikos Papakonstantinou, Kostas Stamatopoulos, and Lefteris Angelis 28.1 Introduction / 555 28.2 Toll-Like Receptors / 557 28.2.1 Basics / 557 28.2.2 Structure and Signaling of TLRs / 558 28.2.3 TLR Signaling in Chronic Lymphocytic Leukemia / 559 28.3 Structural Equation Modeling / 560 28.3.1 Methodology of SEM Modeling / 560 28.3.2 Assumptions / 561 28.3.3 Estimation Methods / 562 28.3.4 Missing Data / 562 28.3.5 Goodness-of-Fit Indices / 563 28.3.6 Other Indications of a Misspecified Model / 565 28.4 Application / 566 28.5 Conclusion / 580 References / 581 29 ANNOTATING PROTEINS WITH INCOMPLETE LABEL INFORMATION 585Guoxian Yu, Huzefa Rangwala, and Carlotta Domeniconi 29.1 Introduction / 585 29.2 Related Work / 587 29.3 Problem Formulation / 589 29.3.1 The Algorithm / 591 29.4 Experimental Setup / 592 29.4.1 Data sets / 592 29.4.2 Comparative Methods / 593 29.4.3 Experimental Protocol / 594 29.4.4 Evaluation Criteria / 594 29.5 Experimental Analysis / 596 29.5.1 Replenishing Missing Functions / 596 29.5.2 Predicting Unlabeled Proteins / 600 29.5.3 Component Analysis / 604 29.5.4 Run Time Analysis / 604 29.6 Conclusions / 605 Acknowledgments / 606 References / 606 INDEX 609

    10 in stock

    £109.76

  • Pattern Recognition

    John Wiley & Sons Inc Pattern Recognition

    10 in stock

    Book Synopsis
    A new approach to the issue of data quality in pattern recognition. Detailing foundational concepts before introducing more complex methodologies and algorithms, this book is a self-contained manual for advanced data analysis and data mining. Top-down organization presents detailed applications only after methodological issues have been mastered, and step-by-step instructions help ensure successful implementation of new processes. By positioning data quality as a factor to be dealt with rather than overcome, the framework provided serves as a valuable, versatile tool in the analysis arsenal. For decades, practical need has inspired intense theoretical and applied research into pattern recognition for numerous and diverse applications. Throughout, the limiting factor and perpetual problem has been data: its sheer diversity, abundance, and variable quality presents the central challenge to pattern recognition innovation. Pattern Recognition: A Quality of Data Perspective.

    Table of Contents
    PREFACE ix PART 1 FUNDAMENTALS 1 CHAPTER 1 PATTERN RECOGNITION: FEATURE SPACE CONSTRUCTION 3 1.1 Concepts 3 1.2 From Patterns to Features 8 1.3 Features Scaling 17 1.4 Evaluation and Selection of Features 23 1.5 Conclusions 47 Appendix 1.A 48 Appendix 1.B 50 References 50 CHAPTER 2 PATTERN RECOGNITION: CLASSIFIERS 53 2.1 Concepts 53 2.2 Nearest Neighbors Classification Method 55 2.3 Support Vector Machines Classification Algorithm 57 2.4 Decision Trees in Classification Problems 65 2.5 Ensemble Classifiers 78 2.6 Bayes Classifiers 82 2.7 Conclusions 97 References 97 CHAPTER 3 CLASSIFICATION WITH REJECTION PROBLEM FORMULATION AND AN OVERVIEW 101 3.1 Concepts 102 3.2 The Concept of Rejecting Architectures 107 3.3 Native Patterns-Based Rejection 112 3.4 Rejection Option in the Dataset of Native Patterns: A Case Study 118 3.5 Conclusions 129 References 130 CHAPTER 4 EVALUATING PATTERN RECOGNITION PROBLEM 133 4.1 Evaluating Recognition with Rejection: Basic Concepts 133 4.2
Classification with Rejection with No Foreign Patterns 145 4.3 Classification with Rejection: Local Characterization 149 4.4 Conclusions 156 References 156 CHAPTER 5 RECOGNITION WITH REJECTION: EMPIRICAL ANALYSIS 159 5.1 Experimental Results 160 5.2 Geometrical Approach 175 5.3 Conclusions 191 References 192 PART 2 ADVANCED TOPICS: A FRAMEWORK OF GRANULAR COMPUTING 195 CHAPTER 6 CONCEPTS AND NOTIONS OF INFORMATION GRANULES 197 6.1 Information Granularity and Granular Computing 197 6.2 Formal Platforms of Information Granularity 201 6.3 Intervals and Calculus of Intervals 205 6.4 Calculus of Fuzzy Sets 208 6.5 Characterization of Information Granules: Coverage and Specificity 216 6.6 Matching Information Granules 219 6.7 Conclusions 220 References 221 CHAPTER 7 INFORMATION GRANULES: FUNDAMENTAL CONSTRUCTS 223 7.1 The Principle of Justifiable Granularity 223 7.2 Information Granularity as a Design Asset 230 7.3 Single-Step and Multistep Prediction of Temporal Data in Time Series Models 235 7.4 Development of Granular Models of Higher Type 236 7.5 Classification with Granular Patterns 241 7.6 Conclusions 245 References 246 CHAPTER 8 CLUSTERING 247 8.1 Fuzzy C-Means Clustering Method 247 8.2 k-Means Clustering Algorithm 252 8.3 Augmented Fuzzy Clustering with Clusters and Variables Weighting 253 8.4 Knowledge-Based Clustering 254 8.5 Quality of Clustering Results 254 8.6 Information Granules and Interpretation of Clustering Results 256 8.7 Hierarchical Clustering 258 8.8 Information Granules in Privacy Problem: A Concept of Microaggregation 261 8.9 Development of Information Granules of Higher Type 262 8.10 Experimental Studies 264 8.11 Conclusions 272 References 273 CHAPTER 9 QUALITY OF DATA: IMPUTATION AND DATA BALANCING 275 9.1 Data Imputation: Underlying Concepts and Key Problems 275 9.2 Selected Categories of Imputation Methods 276 9.3 Imputation with the Use of Information Granules 278 9.4 Granular Imputation with the Principle of Justifiable Granularity 279 9.5 
Granular Imputation with Fuzzy Clustering 283 9.6 Data Imputation in System Modeling 285 9.7 Imbalanced Data and their Granular Characterization 286 9.8 Conclusions 291 References 291 INDEX 293

    10 in stock

    £97.16

  • Image Segmentation: Principles, Techniques, and Applications

    John Wiley & Sons Inc Image Segmentation: Principles, Techniques, and Applications

    15 in stock

    Book Synopsis
    Image Segmentation summarizes and improves new theory, methods, and applications of current image segmentation approaches, written by leaders in the field. The process of image segmentation divides an image into different regions based on the characteristics of pixels, resulting in a simplified image that can be more efficiently analyzed. Image segmentation has wide applications in numerous fields ranging from industry detection and biomedicine to intelligent transportation and architecture. Image Segmentation: Principles, Techniques, and Applications is an up-to-date collection of recent techniques and methods devoted to the field of computer vision. It covers fundamental concepts, new theories and approaches, and a variety of practical applications, including medical imaging, remote sensing, fuzzy clustering, and the watershed transform. In-depth chapters present innovative methods developed by the authors, such as convolutional neural networks, graph convolutional networks, deformable convolution, and model compression, to assist graduate students and researchers in applying and improving image segmentation in their work. Describes basic principles of image segmentation and related mathematical methods such as clustering, neural networks, and mathematical morphology. Introduces new methods for achieving rapid and accurate image segmentation based on classic image processing and machine learning theory. Presents techniques for improved convolutional neural networks for scene segmentation, object recognition, and change detection, etc. Highlights the effect of image segmentation in various application scenarios such as traffic image analysis, medical image analysis, remote sensing applications, and material analysis, etc. Image Segmentation: Principles, Techniques, and Applications is an essential resource for undergraduate and graduate courses such as image and video processing, computer vision, and digital signal processing, as well as for researchers working in computer vision and image analysis looking to improve their techniques and methods.

    Table of Contents
    Preface; About the Authors; List of Abbreviations; Part One: Principle: 1 Introduction to Image Segmentation; 2 Principles of Clustering; 3 Principles of Mathematical Morphology; 4 Principles of Neural Network; Part Two: Methods: 5 Fast and Robust Image Segmentation Using Clustering; 6 Fast Image Segmentation Using Watershed Transform; 7 Superpixel-based Fast Image Segmentation; Part Three: Application: 8 Image Segmentation for Traffic Scene Analysis; 9 Image Segmentation for Medical Analysis; 10 Image Segmentation for Remote Sensing Analysis; 11 Image Segmentation for Material Analysis.

    15 in stock

    £99.00

  • Machine Learning Evaluation

    Cambridge University Press Machine Learning Evaluation

    2 in stock

    Book Synopsis This accessible, comprehensive guide is aimed at students, practitioners, engineers, and users. The emphasis is on building robust, responsible machine learning products incorporating meaningful metrics, rigorous statistical analysis, fair training sets, and explainability. Implementations in Python and sklearn are available on the book's website.
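
The "meaningful metrics" the synopsis emphasizes reduce to counts from the confusion matrix. As a sketch with invented toy labels (the book's own implementations use sklearn), precision, recall, and F1 for a binary classifier can be computed from first principles:

```python
# Precision, recall and F1 computed by hand on toy labels, to show
# what these evaluation metrics actually count. Labels are invented.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives, how many were found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

sklearn's `precision_score`, `recall_score`, and `f1_score` return the same values; the hand-rolled version just makes the counting explicit.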

    2 in stock

    £56.99

  • Pro Processing for Images and Computer Vision

    Apress Pro Processing for Images and Computer Vision

    1 in stock

    Book Synopsis Tagline: Teaching your computer to see. Table of Contents 1. Getting Started with Processing and OpenCV. 2. Image Sources and Representations. 3. Pixel-Based Manipulation. 4. Geometry and Transformation. 5. Identification of Structure. 6. Understanding Motion. 7. Feature Detection and Matching. 8. Application Deployment and Conclusion.

    1 in stock

    £47.86

  • Practical Machine Learning and Image Processing

    APress Practical Machine Learning and Image Processing

    1 in stock

    Book Synopsis Gain insights into image-processing methodologies and algorithms, using machine learning and neural networks in Python. This book begins with the environment setup, basic image-processing terminology, and the Python concepts that will be useful for implementing the algorithms discussed in the book. You will then cover all the core image-processing algorithms in detail before moving on to the biggest computer vision library: OpenCV. You'll see the OpenCV algorithms and how to use them for image processing. The next section looks at advanced machine learning and deep learning methods for image processing and classification. You'll work with concepts such as pulse coupled neural networks, AdaBoost, XGBoost, and convolutional neural networks for image-specific applications. Later you'll explore how models are made in real time and then deployed using various DevOps tools. All the concepts… Table of Contents Chapter 1: Installation and Environment Setup. Chapter Goal: Making the system ready for image processing and analysis. No of pages: 20. Sub-Topics: 1. Installing Jupyter Notebook. 2. Installing OpenCV and other image analysis dependencies. 3. Installing neural network dependencies. Chapter 2: Introduction to Python and Image Processing. Chapter Goal: Introduction to different concepts of Python and image processing applications. No of pages: 50. Sub-Topics: 1. Essentials of Python. 2. Terminologies related to image analysis. Chapter 3: Advanced Image Processing Using OpenCV. Chapter Goal: Understanding algorithms and their applications using Python. No of pages: 100. Sub-Topics: 1. Operations on images. 2. Image transformations. Chapter 4: Machine Learning Approaches in Image Processing. Chapter Goal: Basic implementation of machine and deep learning models for image processing, before applications in real-time scenarios. No of pages: 100. Sub-Topics: 1. Image classification and segmentation. 2. Applying supervised and unsupervised learning approaches on images using Python. Chapter 5: Real-Time Use Cases. Chapter Goal: Working on five projects using Python, applying all the concepts learned in this book. No of pages: 100. Sub-Topics: 1. Facial detection. 2. Facial recognition. 3. Hand gesture movement recognition. 4. Self-driving cars conceptualization: advanced lane finding. 5. Self-driving cars conceptualization: traffic sign detection. Chapter 6: Appendix A. Chapter Goal: Introduction to advanced concepts. No of pages: 50. Sub-Topics: 1. AdaBoost and XGBoost. 2. Pulse coupled neural networks.
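
To give a flavor of the "operations on images" covered in Chapter 3, here is a minimal sketch of one such operation, a 3x3 box blur. The book itself works with OpenCV arrays; the plain nested lists and the tiny test image here are assumptions made for a self-contained example.

```python
# A 3x3 box blur: each output pixel is the mean of its neighborhood.
# Edge pixels are handled by clamping indices to the image border.
# Plain nested lists stand in for an OpenCV/NumPy image.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighborhood = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            out[y][x] = sum(neighborhood) // len(neighborhood)
    return out

flat = [[90] * 3 for _ in range(3)]   # a uniform patch is unchanged by blurring
print(box_blur(flat))                 # [[90, 90, 90], [90, 90, 90], [90, 90, 90]]
```

In OpenCV the equivalent one-liner is `cv2.blur(img, (3, 3))`; writing the loop out makes the kernel arithmetic visible.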

    1 in stock

    £46.74

  • The Digital Twin 2.0

    ROYAL COLLINS PUB CO The Digital Twin 2.0

    Out of stock

    Book Synopsis

    Out of stock

    £22.95

  • Advanced Analytics with Spark

    O'Reilly Media Advanced Analytics with Spark

    1 in stock

    Book Synopsis In the second edition of this practical book, four Cloudera data scientists present a set of self-contained patterns for performing large-scale data analysis with Spark. The authors bring Spark, statistical methods, and real-world datasets together to teach you how to approach analytics problems by example.

    1 in stock

    £35.99

  • Pattern Recognition and Machine Learning

    Springer-Verlag New York Inc. Pattern Recognition and Machine Learning

    15 in stock

    Book Synopsis Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models. Trade Review From the reviews: "This beautifully produced book is intended for advanced undergraduates, PhD students, and researchers and practitioners, primarily in the machine learning or allied areas... A strong feature is the use of geometric illustration and intuition... This is an impressive and interesting book that might form the basis of several advanced statistics courses. It would be a good choice for a reading group." John Maindonald for the Journal of Statistical Software "In this book, aimed at senior undergraduates or beginning graduate students, Bishop provides an authoritative presentation of many of the statistical techniques that have come to be considered part of ‘pattern recognition’ or ‘machine learning’. … This book will serve as an excellent reference. … With its coherent viewpoint, accurate and extensive coverage, and generally good explanations, Bishop’s book is a useful introduction … and a valuable reference for the principle techniques used in these fields." (Radford M. Neal, Technometrics, Vol. 49 (3), August, 2007) "This book appears in the Information Science and Statistics Series commissioned by the publishers. … The book appears to have been designed for course teaching, but obviously contains material that readers interested in self-study can use. It is certainly structured for easy use. … For course teachers there is ample backing which includes some 400 exercises. … it does contain important material which can be easily followed without the reader being confined to a pre-determined course of study." (W. R. Howard, Kybernetes, Vol. 36 (2), 2007) "Bishop (Microsoft Research, UK) has prepared a marvelous book that provides a comprehensive, 700-page introduction to the fields of pattern recognition and machine learning. Aimed at advanced undergraduates and first-year graduate students, as well as researchers and practitioners, the book assumes knowledge of multivariate calculus and linear algebra … . Summing Up: Highly recommended. Upper-division undergraduates through professionals." (C. Tappert, CHOICE, Vol. 44 (9), May, 2007) "The book is structured into 14 main parts and 5 appendices. … The book is aimed at PhD students, researchers and practitioners. It is well-suited for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bio-informatics. Extensive support is provided for course instructors, including more than 400 exercises, lecture slides and a great deal of additional material available at the book’s web site … ." (Ingmar Randvee, Zentralblatt MATH, Vol. 1107 (9), 2007) "This new textbook by C. M. Bishop is a brilliant extension of his former book ‘Neural Networks for Pattern Recognition’. It is written for graduate students or scientists doing interdisciplinary work in related fields. … In summary, this textbook is an excellent introduction to classical pattern recognition and machine learning (in the sense of parameter estimation). A large number of very instructive illustrations adds to this value." (H. G. Feichtinger, Monatshefte für Mathematik, Vol. 151 (3), 2007) "Author aims this text at advanced undergraduates, beginning graduate students, and researchers new to machine learning and pattern recognition. … Pattern Recognition and Machine Learning provides excellent intuitive descriptions and appropriate-level technical details on modern pattern recognition and machine learning. It can be used to teach a course or for self-study, as well as for a reference. … I strongly recommend it for the intended audience and note that Neal (2007) also has given this text a strong review to complement its strong sales record." (Thomas Burr, Journal of the American Statistical Association, Vol. 103 (482), June, 2008) "This accessible monograph seeks to provide a comprehensive introduction to the fields of pattern recognition and machine learning. It presents a unified treatment of well-known statistical pattern recognition techniques. … The book can be used by advanced undergraduates and graduate students … . The illustrative examples and exercises proposed at the end of each chapter are welcome … . The book, which provides several new views, developments and results, is appropriate for both researchers and students who work in machine learning … ." (L. State, ACM Computing Reviews, October, 2008) "Chris Bishop’s … technical exposition that is at once lucid and mathematically rigorous. … In more than 700 pages of clear, copiously illustrated text, he develops a common statistical framework that encompasses … machine learning. … it is a textbook, with a wide range of exercises, instructions to tutors on where to go for full solutions, and the color illustrations that have become obligatory in undergraduate texts. … its clarity and comprehensiveness will make it a favorite desktop companion for practicing data analysts." (H. Van Dyke Parunak, ACM Computing Reviews, Vol. 49 (3), March, 2008) Table of Contents Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

    15 in stock

    £58.49

  • Advances in Pattern Recognition Research

    Nova Science Publishers Inc Advances in Pattern Recognition Research

    Out of stock

    Book Synopsis

    Out of stock

    £163.19

  • We See It All: Liberty and Justice in an Age of

    PublicAffairs We See It All: Liberty and Justice in an Age of

    10 in stock

    Book Synopsis

    10 in stock

    £21.00

  • Face Recognition: Methods, Applications &

    Nova Science Publishers Inc Face Recognition: Methods, Applications &

    1 in stock

    Book Synopsis

    1 in stock

    £149.99

  • Pattern and Chaos in Art, Science and Everyday

    Intellect Books Pattern and Chaos in Art, Science and Everyday

    Out of stock

    Book Synopsis This collection explores critical and visual practices through the lens of interactions and intersections between pattern and chaos. The dynamic of the inter-relationship between pattern and chaos is such as to challenge disciplinary boundaries, critical frameworks and modes of understanding, perception and communication, often referencing the in-between territory of art and science through experimentation and visual scrutiny. A territory of 'pattern-chaos' or 'chaos-pattern' begins to unfold. Drawing upon fields such as visual culture, sociology, physics, neurobiology, linguistics or critical theory, for example, contributors have experimented with pattern and/or chaos-related forms, processes, materials, sounds and language or have reflected on the work of other artists, scientists and scholars. Diagrams, tessellations, dust, knots, mazes, folds, creases, flux, virus, fire and flow are indicative of processes through which pattern and chaos are addressed. The contributions are organized into clusters of subjects which reflect the interdisciplinary terrain through a robust, yet also experimental, arrangement. These are 'Pattern Dynamics', 'Morph Flux Mutate', 'Decompose Recompose', 'Virus', 'Social Imaginary' and 'Nothings in Particular'. Table of Contents Acknowledgements List of Figures Introduction Sarah Horton and Victoria Mitchell PART 1: PATTERN DYNAMICS Introduction The Anxious Spiral Krzysztof Fijalkowski Representing Kinematics and Dynamics by Pattern-Breaking in Nature, Art and Music Brian Whalley and J. Harry Whalley Drawing Dynamic Patterns: The Protein Maze Gemma Anderson, Jonathan Phillips and John Dupré The Metamorphogram: Pattern as Memory of Experience Alun Kirby Crumpling: An Exploration of Nature Dewi Brunet and Gwenaël Prost, for the CRIMP Collective Somewhere Between Weaving and Painting Geoff Diego Litherland (with Angharad McLaren) Knotting Across Species: Creating Order from Chaos Eleanor Morgan Simplifying Complexity: The Visual Language of Neuroscience Gill Brown PART 2: MORPH, FLUX, MUTATE Introduction Unrepeating-Repeat Danica Maier Pattern Evolution Kate Farley Geomorphology: Mapping the Land, Above and Below Water Glyn Brewerton Flux Katy Hammond Drawing Fire David Griffin Imago Images Robert Hillier The Chaos of Delight: Spatial and Temporal Interruptions Lesley Halliwell PART 3: DECOMPOSE–RECOMPOSE Introduction Foment Catherine Yass Meniscus James Quinn Digital Dadaism Chris Brown Forty-Four Sounds Mark Graver A Type of Chaos Pauline Clancy Fragile Order Charlotte Hodes Shatter Zoë Hillyard The Moments I am Looking For… Judith Stewart Expanded Visuality: Photography as a Patterning Mechanism for the Animated Form Katarina Andjelkovic PART 4: VIRUS Introduction Global Ghost Map Anne Eggebert Embodied and Coded: Drawings as Viral Systems Daksha Patel Viral Experiments Louise Mackenzie Contagious Pattern: The Spread of Appropriated Patterns by Contemporary Artists Andrew Bracey PART 5: SOCIAL IMAGINARY Introduction You’ll Never Walk Alone: A Song of Community and Struggle 1945–2021 Sarah Lowndes Dialectical Reversal in About Two Worlds David Mabb Distance and Disruption: The Organised Disorder of the Body in Illness Catherine Baker Unfolding Thinking: Nanotechnology Meets Fine Art Practice Les Bicknell Instead of the Feeling of Home Townley and Bradby Designing for the Real World: The Importance of Chaos Anthony Hudson Order? Sarah Blair You Guys Are So Stochastic Lucy Ward and Karoline Wiesner Clouds in the Machine Sarah Horton PART 6: NOTHINGS IN PARTICULAR Introduction The Shape of Dust Doris Rohr Mimesis: Nothings in Particular William Prosser Mottled Geometries: The Lure and Allure of the Pattern in the Carpet Victoria Mitchell Ghost Flower 3 Andrea Stokes Dom Sylvester Houédard: Exhibiting Spiritual Architypestractures and Cosmic Dust Nicola Simpson Notes on Contributors Bibliography Index

    Out of stock

    £98.96

  • Change Detection and Image Time Series Analysis

    ISTE Ltd Change Detection and Image Time Series Analysis

    15 in stock

    Book Synopsis Change Detection and Image Time Series Analysis 2 presents supervised machine-learning-based methods for temporal evolution analysis by using image time series associated with Earth observation data. Chapter 1 addresses the fusion of multisensor, multiresolution and multitemporal data. It proposes two supervised solutions that are based on a Markov random field: the first relies on a quad-tree and the second is specifically designed to deal with multimission, multifrequency and multiresolution time series. Chapter 2 provides an overview of pixel-based methods for time series classification, from the earliest shallow learning methods to the most recent deep-learning-based approaches. Chapter 3 focuses on very high spatial resolution data time series and on the use of semantic information for modeling spatio-temporal evolution patterns. Chapter 4 centers on the challenges of dense time series analysis, including pre-processing aspects and a taxonomy of existing methodologies. Finally, since the evaluation of a learning system can be subject to multiple considerations, Chapters 5 and 6 offer extensive evaluations of the methodologies and learning frameworks used to produce change maps, in the context of multiclass and/or multilabel change classification issues. Table of Contents Preface ix Abdourrahmane M. ATTO, Francesca BOVOLO and Lorenzo BRUZZONE List of Notations Chapter 1 Hierarchical Markov Random Fields for High Resolution Land Cover Classification of Multisensor and Multiresolution Image Time Series 1 Ihsen HEDHLI, Gabriele MOSER, Sebastiano B. SERPICO and Josiane ZERUBIA 1.1. Introduction 1 1.1.1. The role of multisensor data in time series classification 1 1.1.2. Multisensor and multiresolution classification 2 1.1.3. Previous work 5 1.2. Methodology 9 1.2.1. Overview of the proposed approaches 9 1.2.2. Hierarchical model associated with the first proposed method 10 1.2.3. Hierarchical model associated with the second proposed method 13 1.2.4. Multisensor hierarchical MPM inference 14 1.2.5. Probability density estimation through finite mixtures 17 1.3. Examples of experimental results 19 1.3.1. Results of the first method 19 1.3.2. Results of the second method 22 1.4. Conclusion 26 1.5. Acknowledgments 26 1.6. References 27 Chapter 2 Pixel-based Classification Techniques for Satellite Image Time Series 33 Charlotte PELLETIER and Silvia VALERO 2.1. Introduction 33 2.2. Basic concepts in supervised remote sensing classification 35 2.2.1. Preparing data before it is fed into classification algorithms 35 2.2.2. Key considerations when training supervised classifiers 39 2.2.3. Performance evaluation of supervised classifiers 41 2.3. Traditional classification algorithms 45 2.3.1. Support vector machines 45 2.3.2. Random forests 51 2.3.3. k-nearest neighbor 56 2.4. Classification strategies based on temporal feature representations 59 2.4.1. Phenology-based classification approaches 60 2.4.2. Dictionary-based classification approaches 61 2.4.3. Shapelet-based classification approaches 62 2.5. Deep learning approaches 63 2.5.1. Introduction to deep learning 64 2.5.2. Convolutional neural networks 68 2.5.3. Recurrent neural networks 71 2.6. References 75 Chapter 3 Semantic Analysis of Satellite Image Time Series 85 Corneliu Octavian DUMITRU and Mihai DATCU 3.1. Introduction 85 3.1.1. Typical SITS examples 89 3.1.2. Irregular acquisitions 90 3.1.3. The chapter structure 96 3.2. Why are semantics needed in SITS? 96 3.3. Similarity metrics 97 3.4. Feature methods 98 3.5. Classification methods 98 3.5.1. Active learning 99 3.5.2. Relevance feedback 100 3.5.3. Compression-based pattern recognition 100 3.5.4. Latent Dirichlet allocation 101 3.6. Conclusion 102 3.7. Acknowledgments 105 3.8. References 105 Chapter 4 Optical Satellite Image Time Series Analysis for Environment Applications: From Classical Methods to Deep Learning and Beyond 109 Matthieu MOLINIER, Jukka MIETTINEN, Dino IENCO, Shi QIU and Zhe ZHU 4.1. Introduction 109 4.2. Annual time series 111 4.2.1. Overview of annual time series methods 111 4.2.2. Examples of annual time series analysis applications for environmental monitoring 112 4.2.3. Towards dense time series analysis 116 4.3. Dense time series analysis using all available data 117 4.3.1. Making dense time series consistent 118 4.3.2. Change detection methods 121 4.3.3. Summary and future developments 125 4.4. Deep learning-based time series analysis approaches 126 4.4.1. Recurrent Neural Network (RNN) for Satellite Image Time Series 129 4.4.2. Convolutional Neural Networks (CNN) for Satellite Image Time Series 131 4.4.3. Hybrid models: Convolutional Recurrent Neural Network (ConvRNN) models for Satellite Image Time Series 134 4.4.4. Synthesis and future developments 136 4.5. Beyond satellite image time series and deep learning: convergence between time series and video approaches 136 4.5.1. Increased image acquisition frequency: from time series to spaceborne time-lapse and videos 137 4.5.2. Deep learning and computer vision as technology enablers 138 4.5.3. Future steps 139 4.6. References 140 Chapter 5 A Review on Multi-temporal Earthquake Damage Assessment Using Satellite Images 155 Gülşen TAŞKIN, Esra ERTEN and Enes Oğuzhan ALATAŞ 5.1. Introduction 155 5.1.1. Research methodology and statistics 159 5.2. Satellite-based earthquake damage assessment 165 5.3. Pre-processing of satellite images before damage assessment 167 5.4. Multi-source image analysis 168 5.5. Contextual feature mining for damage assessment 169 5.5.1. Textural features 170 5.5.2. Filter-based methods 173 5.6. Multi-temporal image analysis for damage assessment 175 5.6.1. Use of machine learning in the damage assessment problem 176 5.6.2. Rapid earthquake damage assessment 180 5.7. Understanding damage following an earthquake using satellite-based SAR 181 5.7.1. SAR fundamental parameters and acquisition vector 185 5.7.2. Coherent methods for damage assessment 188 5.7.3. Incoherent methods for damage assessment 192 5.7.4. Post-earthquake-only SAR data-based damage assessment 195 5.7.5. Combination of coherent and incoherent methods for damage assessment 196 5.7.6. Summary 198 5.8. Use of auxiliary data sources 200 5.9. Damage grades 200 5.10. Conclusion and discussion 203 5.11. References 205 Chapter 6 Multiclass Multilabel Change of State Transfer Learning from Image Time Series 223 Abdourrahmane M. ATTO, Héla HADHRI, Flavien VERNIER and Emmanuel TROUVÉ 6.1. Introduction 223 6.2. Coarse- to fine-grained change of state dataset 225 6.3. Deep transfer learning models for change of state classification 232 6.3.1. Deep learning model library 232 6.3.2. Graph structures for the CNN library 234 6.3.3. Dimensionalities of the learnables for the CNN library 236 6.4. Change of state analysis 237 6.4.1. Transfer learning adaptations for the change of state classification issues 238 6.4.2. Experimental results 239 6.5. Conclusion 243 6.6. Acknowledgments 244 6.7. References 244 List of Authors 247 Index 249 Summary of Volume 1 253
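
Among the pixel-based classifiers surveyed in Chapter 2 is k-nearest neighbor, which labels a pixel by comparing its reflectance time series against labeled training profiles. A minimal 1-NN sketch (the two training profiles, labels, and query values are invented toy data, not from the book) looks like this:

```python
# 1-nearest-neighbor classification of a per-pixel time series, in the
# spirit of the pixel-based techniques surveyed in Chapter 2. The toy
# profiles contrast a seasonal "crop" curve with a flat "bare soil" one.

def squared_distance(a, b):
    """Squared Euclidean distance between two equal-length time series."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_1nn(train_series, train_labels, query):
    """Return the label of the training series closest to the query."""
    dists = [squared_distance(s, query) for s in train_series]
    return train_labels[dists.index(min(dists))]

train_series = [
    [0.2, 0.5, 0.8, 0.5],   # vegetation index peaking mid-season
    [0.2, 0.2, 0.2, 0.2],   # flat profile, no phenology
]
train_labels = ["crop", "bare soil"]

query = [0.25, 0.45, 0.75, 0.50]  # an unseen pixel's yearly profile
print(classify_1nn(train_series, train_labels, query))  # crop
```

Real satellite image time series add the complications the book dwells on (cloud gaps, irregular acquisition dates, millions of pixels), which is why elastic distances and the deep models of Section 2.5 are needed in practice, but the classification template is the same.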

    15 in stock

    £124.15

© 2026 Book Curl
