Machine Learning Books

513 products


  • Machine Learners: Archaeology of a Data Practice

    MIT Press Ltd Machine Learners: Archaeology of a Data Practice

    Out of stock

    Book Synopsis

    If machine learning transforms the nature of knowledge, does it also transform the practice of critical thought? Machine learning—programming computers to learn from data—has spread across scientific disciplines, media, entertainment, and government. Medical research, autonomous vehicles, credit transaction processing, computer gaming, recommendation systems, finance, surveillance, and robotics use machine learning. Machine learning devices (sometimes understood as scientific models, sometimes as operational algorithms) anchor the field of data science. They have also become mundane mechanisms deeply embedded in a variety of systems and gadgets. In contexts from the everyday to the esoteric, machine learning is said to transform the nature of knowledge. In this book, Adrian Mackenzie investigates whether machine learning also transforms the practice of critical thinking. Mackenzie focuses on machine learners—either humans and machines or human-machine r…

    £28.50

  • Algorithms for Optimization

    MIT Press Ltd Algorithms for Optimization

    10 in stock

    £85.50

  • Foundations of Machine Learning

    MIT Press Foundations of Machine Learning

    2 in stock

    £72.00

  • Fundamentals of Machine Learning for Predictive

    1 in stock

    £68.40

  • Deep Learning

    MIT Press Deep Learning

    1 in stock

    £14.39

  • AI Ethics

    MIT Press AI Ethics

    5 in stock

    £14.39

  • Signal Processing and Machine Learning Theory

    Elsevier Science & Technology Signal Processing and Machine Learning Theory

    15 in stock

    Table of Contents

    1. Introduction to Signal Processing and Machine Learning Theory 2. Continuous-Time Signals and Systems 3. Discrete-Time Signals and Systems 4. Random Signals and Stochastic Processes 5. Sampling and Quantization 6. Digital Filter Structures and Their Implementation 7. Multi-rate Signal Processing for Software Radio Architectures 8. Modern Transform Design for Practical Audio/Image/Video Coding Applications 9. Discrete Multi-Scale Transforms in Signal Processing 10. Frames in Signal Processing 11. Parametric Estimation 12. Adaptive Filters 13. Signal Processing over Graphs 14. Tensors for Signal Processing and Machine Learning 15. Non-convex Optimization for Machine Learning 16. Dictionary Learning and Sparse Representation

    £114.30

  • Reinforcement and Systemic Machine Learning for

    John Wiley & Sons Inc Reinforcement and Systemic Machine Learning for

    15 in stock

    Book Synopsis

    * Authors have both industrial and academic experience
    * Case studies are included reflecting the authors' industrial experience
    * Downloadable tutorials are available

    Table of Contents

    Preface xv Acknowledgments xix About the Author xxi 1 Introduction to Reinforcement and Systemic Machine Learning 1 1.1. Introduction 1 1.2. Supervised, Unsupervised, and Semisupervised Machine Learning 2 1.3. Traditional Learning Methods and History of Machine Learning 4 1.4. What Is Machine Learning? 7 1.5. Machine-Learning Problem 8 1.6. Learning Paradigms 9 1.7. Machine-Learning Techniques and Paradigms 12 1.8. What Is Reinforcement Learning? 14 1.9. Reinforcement Function and Environment Function 16 1.10. Need of Reinforcement Learning 17 1.11. Reinforcement Learning and Machine Intelligence 17 1.12. What Is Systemic Learning? 18 1.13. What Is Systemic Machine Learning? 18 1.14. Challenges in Systemic Machine Learning 19 1.15. Reinforcement Machine Learning and Systemic Machine Learning 19 1.16. Case Study: Problem Detection in a Vehicle 20 1.17. Summary 20 2 Fundamentals of Whole-System, Systemic, and Multiperspective Machine Learning 23 2.1. Introduction 23 2.2. What Is Systemic Machine Learning? 27 2.3. Generalized Systemic Machine-Learning Framework 30 2.4. Multiperspective Decision Making and Multiperspective Learning 33 2.5. Dynamic and Interactive Decision Making 43 2.6. The Systemic Learning Framework 47 2.7. System Analysis 52 2.8. Case Study: Need of Systemic Learning in the Hospitality Industry 54 2.9. Summary 55 3 Reinforcement Learning 57 3.1. Introduction 57 3.2. Learning Agents 60 3.3. Returns and Reward Calculations 62 3.4. Reinforcement Learning and Adaptive Control 63 3.5. Dynamic Systems 66 3.6. Reinforcement Learning and Control 68 3.7. Markov Property and Markov Decision Process 68 3.8. Value Functions 69 3.8.1. Action and Value 70 3.9. Learning an Optimal Policy (Model-Based and Model-Free Methods) 70 3.10. Dynamic Programming 71 3.11.
Adaptive Dynamic Programming 71 3.12. Example: Reinforcement Learning for Boxing Trainer 75 3.13. Summary 75 4 Systemic Machine Learning and Model 77 4.1. Introduction 77 4.2. A Framework for Systemic Learning 78 4.3. Capturing the Systemic View 86 4.4. Mathematical Representation of System Interactions 89 4.5. Impact Function 91 4.6. Decision-Impact Analysis 91 4.7. Summary 97 5 Inference and Information Integration 99 5.1. Introduction 99 5.2. Inference Mechanisms and Need 101 5.3. Integration of Context and Inference 107 5.4. Statistical Inference and Induction 111 5.5. Pure Likelihood Approach 112 5.6. Bayesian Paradigm and Inference 113 5.7. Time-Based Inference 114 5.8. Inference to Build a System View 114 5.9. Summary 118 6 Adaptive Learning 119 6.1. Introduction 119 6.2. Adaptive Learning and Adaptive Systems 119 6.3. What Is Adaptive Machine Learning? 123 6.4. Adaptation and Learning Method Selection Based on Scenario 124 6.5. Systemic Learning and Adaptive Learning 127 6.6. Competitive Learning and Adaptive Learning 140 6.7. Examples 146 6.8. Summary 149 7 Multiperspective and Whole-System Learning 151 7.1. Introduction 151 7.2. Multiperspective Context Building 152 7.3. Multiperspective Decision Making and Multiperspective Learning 154 7.4. Whole-System Learning and Multiperspective Approaches 164 7.5. Case Study Based on Multiperspective Approach 167 7.6. Limitations to a Multiperspective Approach 174 7.7. Summary 174 8 Incremental Learning and Knowledge Representation 177 8.1. Introduction 177 8.2. Why Incremental Learning? 178 8.3. Learning from What Is Already Learned. . . 180 8.4. Supervised Incremental Learning 191 8.5. Incremental Unsupervised Learning and Incremental Clustering 191 8.6. Semisupervised Incremental Learning 196 8.7. Incremental and Systemic Learning 199 8.8. Incremental Closeness Value and Learning Method 200 8.9. Learning and Decision-Making Model 205 8.10. Incremental Classification Techniques 206 8.11. 
Case Study: Incremental Document Classification 207 8.12. Summary 208 9 Knowledge Augmentation: A Machine Learning Perspective 209 9.1. Introduction 209 9.2. Brief History and Related Work 211 9.3. Knowledge Augmentation and Knowledge Elicitation 215 9.4. Life Cycle of Knowledge 217 9.5. Incremental Knowledge Representation 222 9.6. Case-Based Learning and Learning with Reference to Knowledge Loss 224 9.7. Knowledge Augmentation: Techniques and Methods 224 9.8. Heuristic Learning 228 9.9. Systemic Machine Learning and Knowledge Augmentation 229 9.10. Knowledge Augmentation in Complex Learning Scenarios 232 9.11. Case Studies 232 9.12. Summary 235 10 Building a Learning System 237 10.1. Introduction 237 10.2. Systemic Learning System 237 10.3. Algorithm Selection 242 10.4. Knowledge Representation 244 10.5. Designing a Learning System 245 10.6. Making System to Behave Intelligently 246 10.7. Example-Based Learning 246 10.8. Holistic Knowledge Framework and Use of Reinforcement Learning 246 10.9. Intelligent Agents—Deployment and Knowledge Acquisition and Reuse 250 10.10. Case-Based Learning: Human Emotion-Detection System 251 10.11. Holistic View in Complex Decision Problem 253 10.12. Knowledge Representation and Data Discovery 255 10.13. Components 258 10.14. Future of Learning Systems and Intelligent Systems 259 10.15. Summary 259 Appendix A: Statistical Learning Methods 261 Appendix B: Markov Processes 271 Index 281

    £98.96

  • Statistical Learning Theory 2 Adaptive and

    John Wiley & Sons Inc Statistical Learning Theory 2 Adaptive and

    15 in stock

    Book Synopsis

    This book is devoted to the statistical theory of learning and generalization, that is, the problem of choosing the desired function on the basis of empirical data. The author presents the whole picture of learning and generalization theory. Learning theory has applications in many fields, such as psychology, education and computer science.

    Table of Contents

    Preface xxi Introduction: The Problem of Induction and Statistical Inference 1 I Theory of Learning and Generalization 1 Two Approaches to the Learning Problem 19 Appendix to Chapter 1: Methods for Solving Ill-Posed Problems 51 2 Estimation of the Probability Measure and Problem of Learning 59 3 Conditions for Consistency of Empirical Risk Minimization Principle 79 4 Bounds on the Risk for Indicator Loss Functions 121 Appendix to Chapter 4: Lower Bounds on the Risk of the ERM Principle 169 5 Bounds on the Risk for Real-Valued Loss Functions 183 6 The Structural Risk Minimization Principle 219 Appendix to Chapter 6: Estimating Functions on the Basis of Indirect Measurements 271 7 Stochastic Ill-Posed Problems 293 8 Estimating the Values of Function at Given Points 339 II Support Vector Estimation of Functions 9 Perceptrons and Their Generalizations 375 10 The Support Vector Method for Estimating Indicator Functions 401 11 The Support Vector Method for Estimating Real-Valued Functions 443 12 SV Machines for Pattern Recognition 493 13 SV Machines for Function Approximations, Regression Estimation, and Signal Processing 521 III Statistical Foundation of Learning Theory 14 Necessary and Sufficient Conditions for Uniform Convergence of Frequencies to Their Probabilities 571 15 Necessary and Sufficient Conditions for Uniform Convergence of Means to Their Expectations 597 16 Necessary and Sufficient Conditions for Uniform One-Sided Convergence of Means to Their Expectations 629 Comments and Bibliographical Remarks 681 References 723 Index 733

    £175.46

  • Computational Learning and Probabilistic

    John Wiley & Sons Inc Computational Learning and Probabilistic

    15 in stock

    Book Synopsis

    Providing a unified coverage of the latest research, applications, methods and techniques, this book is devoted to two interrelated techniques for solving some important problems in machine intelligence and pattern recognition, namely probabilistic reasoning and computational learning. The contributions in this volume describe and explore the current developments in computer science and theoretical statistics which provide computational probabilistic models for manipulating knowledge found in industrial and business data. These methods are very efficient for handling complex problems in medicine, commerce and finance. Part I covers Generalisation Principles and Learning and describes several new inductive principles and techniques used in computational learning. Part II describes Causation and Model Selection, including the graphical probabilistic models that exploit the independence relationships presented in the graphs, and applications of Bayesian networks to multivariate statistic…

    Table of Contents

    Partial table of contents: GENERALISATION PRINCIPLES AND LEARNING. Structure of Statistical Learning Theory (V. Vapnik). MML Inference of Predictive Trees, Graphs and Nets (C. Wallace). Probabilistic Association and Denotation in Machine Learning of Natural Language (P. Suppes & L. Liang). CAUSATION AND MODEL SELECTION. Causation, Action, and Counterfactuals (J. Pearl). Efficient Estimation and Model Selection in Large Graphical Models (D. Wedelin). BAYESIAN BELIEF NETWORKS AND HYBRID SYSTEMS. Bayesian Belief Networks and Patient Treatment (L. Meshalkin & E. Tsybulkin). DECISION-MAKING, OPTIMIZATION AND CLASSIFICATION. Axioms for Dynamic Programming (P. Shenoy). Extreme Values of Functionals Characterizing Stability of Statistical Decisions (A. Nagaev). Index.

    £243.86

  • Machine Learning in Asset Pricing

    Princeton University Press Machine Learning in Asset Pricing

    15 in stock

    Trade Review

    "The book shows the advances Machine Learning offers for academic research. The book certainly makes a difference in the exploding literature on Machine Learning and I highly recommend it to all academics in finance." (Thorsten Hens, Journal of Economics)

    £38.25

  • Graph-Powered Analytics and Machine Learning with

    O'Reilly Media Graph-Powered Analytics and Machine Learning with

    10 in stock

    Book Synopsis

    This practical guide shows data scientists, data engineers, architects, and business analysts how to get started with a graph database using TigerGraph, one of the leading graph database models available.

    £39.74

  • Scaling Machine Learning with Spark

    O'Reilly Media Scaling Machine Learning with Spark

    4 in stock

    Book Synopsis

    With this practical guide, author Adi Polak introduces data and ML practitioners to creative solutions that supersede today's traditional methods. You'll learn a more holistic approach that takes you beyond specific requirements and organizational goals--allowing data and ML practitioners to collaborate and understand each other better.

    £47.99

  • Mastering Financial Pattern Recognition

    O'Reilly Media Mastering Financial Pattern Recognition

    1 in stock

    £47.99

  • Practicing Trustworthy Machine Learning

    O'Reilly Media Practicing Trustworthy Machine Learning

    1 in stock

    Book Synopsis

    With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable.

    £47.99

  • Embedded Analytics

    O'Reilly Media Embedded Analytics

    2 in stock

    Book Synopsis

    The adoption of data analytics has remained remarkably static - perhaps reaching no more than thirty percent of potential users. This book explores the most important techniques for taking that adoption further: embedding analytics into the workflow of our everyday operations.

    £35.99

  • Building Knowledge Graphs

    O'Reilly Media Building Knowledge Graphs

    15 in stock

    Book Synopsis

    Using hands-on examples, this practical book shows data scientists and data practitioners how to build their own custom knowledge graphs. Authors Jesus Barrasa and Jim Webber from Neo4j illustrate patterns commonly used for building knowledge graphs that solve many of today's pressing problems.

    £53.99

  • Data Science: The Hard Parts

    O'Reilly Media Data Science: The Hard Parts

    15 in stock

    Book Synopsis

    This practical guide provides a collection of techniques and best practices that are generally overlooked in most data engineering and data science pedagogy. Taken as a whole, the lessons in this book make the difference between an average data scientist candidate and a qualified data scientist working in the field.

    £39.74

  • Machine Learning Interviews

    O'Reilly Media Machine Learning Interviews

    15 in stock

    Book Synopsis

    In this guide, data science leader Susan Shu Chang shows you how to tackle the ML hiring process.

    £47.99

  • Architecting Data and Machine Learning Platforms

    O'Reilly Media Architecting Data and Machine Learning Platforms

    15 in stock

    Book Synopsis

    This handbook is ideal for learning how to design, build, and modernize cloud native data and machine learning platforms using AWS, Azure, Google Cloud, or multicloud tools like Fivetran, dbt, Snowflake, and Databricks.

    £39.74

  • Model-Based Clustering and Classification for Data

    Cambridge University Press Model-Based Clustering and Classification for Data

    1 in stock

    Book Synopsis

    Cluster analysis finds groups in data automatically. Most methods have been heuristic and leave open such central questions as: how many clusters are there? Which method should I use? How should I handle outliers? Classification assigns new observations to groups given previously classified observations, and also has open questions about parameter tuning, robustness and uncertainty assessment. This book frames cluster analysis and classification in terms of statistical models, thus yielding principled estimation, testing and prediction methods, and sound answers to the central questions. It builds the basic ideas in an accessible but rigorous way, with extensive data examples and R code; describes modern approaches to high-dimensional data and networks; and explains such recent advances as Bayesian regularization, non-Gaussian model-based clustering, cluster merging, variable selection, semi-supervised and robust classification, clustering of functional data, text and images, and co-cl…

    Trade Review

    'Bouveyron, Celeux, Murphy, and Raftery pioneered the theory, computation, and application of modern model-based clustering and discriminant analysis. Here they have produced an exhaustive yet accessible text, covering both the field's state of the art as well as its intellectual development. The authors develop a unified vision of cluster analysis, rooted in the theory and computation of mixture models. Embedded R code points the way for applied readers, while graphical displays develop intuition about both model construction and the critical but often-neglected estimation process. Building on a series of running examples, the authors gradually and methodically extend their core insights into a variety of exciting data structures, including networks and functional data.
This text will serve as a backbone for graduate study as well as an important reference for applied data scientists interested in working with cutting-edge tools in semi- and unsupervised machine learning.' John S. Ahlquist, University of California, San Diego'This book, written by authoritative experts in the field, gives a comprehensive and thorough introduction to model-based clustering and classification. The authors not only explain the statistical theory and methods, but also provide hands-on applications illustrating their use with the open-source statistical software R. The book also covers recent advances made for specific data structures (e.g. network data) or modeling strategies (e.g. variable selection techniques), making it a fantastic resource as an overview of the state of the field today.' Bettina Grün, Johannes Kepler Universität Linz, Austria'Four authors with diverse strengths nicely integrate their specialties to illustrate how clustering and classification methods are implemented in a wide selection of real-world applications. Their inclusion of how to use available software is an added benefit for students. The book covers foundations, challenging aspects, and some essential details of applications of clustering and classification. It is a fun and informative read!' Naisyin Wang, University of Michigan'This is a beautifully written book on a topic of fundamental importance in modern statistical science, by some of the leading researchers in the field. It is particularly effective in being an applied presentation - the reader will learn how to work with real data and at the same time clearly presenting the underlying statistical thinking. Fundamental statistical issues like model and variable selection are clearly covered as well as crucial issues in applied work such as outliers and ordinal data. The R code and graphics are particularly effective. 
The R code is there so you know how to do things, but it is presented in a way that does not disrupt the underlying narrative. This is not easy to do. The graphics are 'sophisticatedly simple' in that they convey complex messages without being too complex. For me, this is a 'must have' book.' Rob McCulloch, Arizona State University'This advanced text explains the underlying concepts clearly and is strong on theory … I congratulate the authors on the theoretical aspects of their book, it's a fine achievement.' Antony Unwin, International Statistical Review'In my opinion, the overall quality of this impactful and intriguing book can be expressed by concluding that it is a perfect fit to the Cambridge Series in Statistical and Probabilistic Mathematics, characterized as a series of high-quality upper-division textbooks and expository monographs containing applications and discussions of new techniques while emphasizing rigorous treatment of theoretical methods.' Zdenek Hlavka, MathSciNet'… this book not only gives the big picture of the analysis of clustering and classification but also explains recent methodological advances. Extensive real-world data examples and R code for many methods are also well summarized. This book is highly recommended to students in data science, as well as researchers and data analysts.' Li-Pang Chen, Biometrical Journal'Model-Based Clustering and Classification for Data Science: With Applications in R, written by leading statisticians in the field, provides academics and practitioners with a solid theoretical and practical foundation on the use of model-based clustering methods … this book will serve as an excellent resource for quantitative practitioners and theoreticians seeking to learn the current state of the field.' C. M. 
Foley, Quarterly Review of Biology'This book frames cluster analysis and classification in terms of statistical models, thus yielding principled estimation, testing and prediction methods, and sound answers to the central questions … Written for advanced undergraduates in data science, as well as researchers and practitioners, it assumes basic knowledge of multivariate calculus, linear algebra, probability and statistics.' Hans-Jürgen Schmidt, zbMATH

    Table of Contents

    1. Introduction; 2. Model-based clustering: basic ideas; 3. Dealing with difficulties; 4. Model-based classification; 5. Semi-supervised clustering and classification; 6. Discrete data clustering; 7. Variable selection; 8. High-dimensional data; 9. Non-Gaussian model-based clustering; 10. Network data; 11. Model-based clustering with covariates; 12. Other topics; List of R packages; Bibliography; Index.

    £66.49

  • Multi-Agent Machine Learning

    John Wiley & Sons Inc Multi-Agent Machine Learning

    15 in stock

    Book Synopsis

    The book begins with a chapter on traditional methods of supervised learning, covering recursive least squares learning, mean square error methods, and stochastic approximation. Chapter 2 covers single agent reinforcement learning. Topics include learning value functions, Markov games, and TD learning with eligibility traces. Chapter 3 discusses two player games including two player matrix games with both pure and mixed strategies. Numerous algorithms and examples are presented. Chapter 4 covers learning in multi-player games, stochastic games, and Markov games, focusing on learning in multi-player grid games: two player grid games, Q-learning, and Nash Q-learning. Chapter 5 discusses differential games, including multi player differential games, actor-critic structure, adaptive fuzzy control and fuzzy inference systems, the evader pursuit game, and the defending a territory games. Chapter 6 discusses new ideas on learning within robotic swarms and the innovative idea of the evoluti…

    Trade Review

    "This is an interesting book both as research reference as well as teaching material for Master and PhD students." (Zentralblatt MATH, 1 April 2015)

    Table of Contents

    Preface ix Chapter 1 A Brief Review of Supervised Learning 1 1.1 Least Squares Estimates 1 1.2 Recursive Least Squares 5 1.3 Least Mean Squares 6 1.4 Stochastic Approximation 10 References 11 Chapter 2 Single-Agent Reinforcement Learning 12 2.1 Introduction 12 2.2 n-Armed Bandit Problem 13 2.3 The Learning Structure 15 2.4 The Value Function 17 2.5 The Optimal Value Functions 18 2.5.1 The Grid World Example 20 2.6 Markov Decision Processes 23 2.7 Learning Value Functions 25 2.8 Policy Iteration 26 2.9 Temporal Difference Learning 28 2.10 TD Learning of the State-Action Function 30 2.11 Q-Learning 32 2.12 Eligibility Traces 33 References 37 Chapter 3 Learning in Two-Player Matrix Games 38 3.1 Matrix Games 38 3.2 Nash Equilibria in Two-Player Matrix Games 42 3.3 Linear Programming in Two-Player
Zero-Sum Matrix Games 43 3.4 The Learning Algorithms 47 3.5 Gradient Ascent Algorithm 47 3.6 WoLF-IGA Algorithm 51 3.7 Policy Hill Climbing (PHC) 52 3.8 WoLF-PHC Algorithm 54 3.9 Decentralized Learning in Matrix Games 57 3.10 Learning Automata 59 3.11 Linear Reward–Inaction Algorithm 59 3.12 Linear Reward–Penalty Algorithm 60 3.13 The Lagging Anchor Algorithm 60 3.14 LR−I Lagging Anchor Algorithm 62 3.14.1 Simulation 68 References 70 Chapter 4 Learning in Multiplayer Stochastic Games 73 4.1 Introduction 73 4.2 Multiplayer Stochastic Games 75 4.3 Minimax-Q Algorithm 79 4.3.1 2×2 Grid Game 80 4.4 Nash Q-Learning 87 4.4.1 The Learning Process 95 4.5 The Simplex Algorithm 96 4.6 The Lemke–Howson Algorithm 100 4.7 Nash-Q Implementation 107 4.8 Friend-or-Foe Q-Learning 111 4.9 Infinite Gradient Ascent 112 4.10 Policy Hill Climbing 114 4.11 WoLF-PHC Algorithm 114 4.12 Guarding a Territory Problem in a Grid World 117 4.12.1 Simulation and Results 119 4.13 Extension of LR−I Lagging Anchor Algorithm to Stochastic Games 125 4.14 The Exponential Moving-Average Q-Learning (EMA Q-Learning) Algorithm 128 4.15 Simulation and Results Comparing EMA Q-Learning to Other Methods 131 4.15.1 Matrix Games 131 4.15.2 Stochastic Games 134 References 141 Chapter 5 Differential Games 144 5.1 Introduction 144 5.2 A Brief Tutorial on Fuzzy Systems 146 5.2.1 Fuzzy Sets and Fuzzy Rules 146 5.2.2 Fuzzy Inference Engine 148 5.2.3 Fuzzifier and Defuzzifier 151 5.2.4 Fuzzy Systems and Examples 152 5.3 Fuzzy Q-Learning 155 5.4 Fuzzy Actor–Critic Learning 159 5.5 Homicidal Chauffeur Differential Game 162 5.6 Fuzzy Controller Structure 165 5.7 Q(λ)-Learning Fuzzy Inference System 166 5.8 Simulation Results for the Homicidal Chauffeur 171 5.9 Learning in the Evader–Pursuer Game with Two Cars 174 5.10 Simulation of the Game of Two Cars 177 5.11 Differential Game of Guarding a Territory 180 5.12 Reward Shaping in the Differential Game of Guarding a Territory 184 5.13 Simulation Results 185 5.13.1 One
Defender Versus One Invader 185 5.13.2 Two Defenders Versus One Invader 191 References 197 Chapter 6 Swarm Intelligence and the Evolution of Personality Traits 200 6.1 Introduction 200 6.2 The Evolution of Swarm Intelligence 200 6.3 Representation of the Environment 201 6.4 Swarm-Based Robotics in Terms of Personalities 203 6.5 Evolution of Personality Traits 206 6.6 Simulation Framework 207 6.7 A Zero-Sum Game Example 208 6.7.1 Convergence 208 6.7.2 Simulation Results 214 6.8 Implementation for Next Sections 216 6.9 Robots Leaving a Room 218 6.10 Tracking a Target 221 6.11 Conclusion 232 References 233 Index 237

    £86.36

  • Financial Signal Processing and Machine Learning

    John Wiley & Sons Inc Financial Signal Processing and Machine Learning

    15 in stock

    Book Synopsis

    The modern financial industry has been required to deal with large and diverse portfolios in a variety of asset classes often with limited market data available.

    Table of Contents

    List of Contributors xiii Preface xv 1 Overview 1 Ali N. Akansu, Sanjeev R. Kulkarni, and Dmitry Malioutov 1.1 Introduction 1 1.2 A Bird’s-Eye View of Finance 2 1.2.1 Trading and Exchanges 4 1.2.2 Technical Themes in the Book 5 1.3 Overview of the Chapters 6 1.3.1 Chapter 2: “Sparse Markowitz Portfolios” by Christine De Mol 6 1.3.2 Chapter 3: “Mean-Reverting Portfolios: Tradeoffs between Sparsity and Volatility” by Marco Cuturi and Alexandre d’Aspremont 7 1.3.3 Chapter 4: “Temporal Causal Modeling” by Prabhanjan Kambadur, Aurélie C. Lozano, and Ronny Luss 7 1.3.4 Chapter 5: “Explicit Kernel and Sparsity of Eigen Subspace for the AR(1) Process” by Mustafa U. Torun, Onur Yilmaz and Ali N. Akansu 7 1.3.5 Chapter 6: “Approaches to High-Dimensional Covariance and Precision Matrix Estimation” by Jianqing Fan, Yuan Liao, and Han Liu 7 1.3.6 Chapter 7: “Stochastic Volatility: Modeling and Asymptotic Approaches to Option Pricing and Portfolio Selection” by Matthew Lorig and Ronnie Sircar 7 1.3.7 Chapter 8: “Statistical Measures of Dependence for Financial Data” by David S. Matteson, Nicholas A. James, and William B.
Nicholson 8 1.3.8 Chapter 9: “Correlated Poisson Processes and Their Applications in Financial Modeling” by Alexander Kreinin 8 1.3.9 Chapter 10: “CVaR Minimizations in Support Vector Machines” by Jun-ya Gotoh and Akiko Takeda 8 1.3.10 Chapter 11: “Regression Models in Risk Management” by Stan Uryasev 8 1.4 Other Topics in Financial Signal Processing and Machine Learning 9 References 9 2 Sparse Markowitz Portfolios 11 Christine De Mol 2.1 Markowitz Portfolios 11 2.2 Portfolio Optimization as an Inverse Problem: The Need for Regularization 13 2.3 Sparse Portfolios 15 2.4 Empirical Validation 17 2.5 Variations on the Theme 18 2.5.1 Portfolio Rebalancing 18 2.5.2 Portfolio Replication or Index Tracking 19 2.5.3 Other Penalties and Portfolio Norms 19 2.6 Optimal Forecast Combination 20 Acknowledgments 21 References 21 3 Mean-Reverting Portfolios 23 Marco Cuturi and Alexandre d’Aspremont 3.1 Introduction 23 3.1.1 Synthetic Mean-Reverting Baskets 24 3.1.2 Mean-Reverting Baskets with Sufficient Volatility and Sparsity 24 3.2 Proxies for Mean Reversion 25 3.2.1 Related Work and Problem Setting 25 3.2.2 Predictability 26 3.2.3 Portmanteau Criterion 27 3.2.4 Crossing Statistics 28 3.3 Optimal Baskets 28 3.3.1 Minimizing Predictability 29 3.3.2 Minimizing the Portmanteau Statistic 29 3.3.3 Minimizing the Crossing Statistic 29 3.4 Semidefinite Relaxations and Sparse Components 30 3.4.1 A Semidefinite Programming Approach to Basket Estimation 30 3.4.2 Predictability 30 3.4.3 Portmanteau 31 3.4.4 Crossing Stats 31 3.5 Numerical Experiments 32 3.5.1 Historical Data 32 3.5.2 Mean-reverting Basket Estimators 33 3.5.3 Jurek and Yang (2007) Trading Strategy 33 3.5.4 Transaction Costs 33 3.5.5 Experimental Setup 36 3.5.6 Results 36 3.6 Conclusion 39 References 39 4 Temporal Causal Modeling 41 Prabhanjan Kambadur, Aurélie C.
Lozano, and Ronny Luss 4.1 Introduction 41 4.2 TCM 46 4.2.1 Granger Causality and Temporal Causal Modeling 46 4.2.2 Grouped Temporal Causal Modeling Method 47 4.2.3 Synthetic Experiments 49 4.3 Causal Strength Modeling 51 4.4 Quantile TCM (Q-TCM) 52 4.4.1 Modifying Group OMP for Quantile Loss 52 4.4.2 Experiments 53 4.5 TCM with Regime Change Identification 55 4.5.1 Model 56 4.5.2 Algorithm 58 4.5.3 Synthetic Experiments 60 4.5.4 Application: Analyzing Stock Returns 62 4.6 Conclusions 63 References 64 5 Explicit Kernel and Sparsity of Eigen Subspace for the AR(1) Process 67 Mustafa U. Torun, Onur Yilmaz, and Ali N. Akansu 5.1 Introduction 67 5.2 Mathematical Definitions 68 5.2.1 Discrete AR(1) Stochastic Signal Model 68 5.2.2 Orthogonal Subspace 69 5.3 Derivation of Explicit KLT Kernel for a Discrete AR(1) Process 72 5.3.1 A Simple Method for Explicit Solution of a Transcendental Equation 73 5.3.2 Continuous Process with Exponential Autocorrelation 74 5.3.3 Eigenanalysis of a Discrete AR(1) Process 76 5.3.4 Fast Derivation of KLT Kernel for an AR(1) Process 79 5.4 Sparsity of Eigen Subspace 82 5.4.1 Overview of Sparsity Methods 83 5.4.2 pdf-Optimized Midtread Quantizer 84 5.4.3 Quantization of Eigen Subspace 86 5.4.4 pdf of Eigenvector 87 5.4.5 Sparse KLT Method 89 5.4.6 Sparsity Performance 91 5.5 Conclusions 97 References 97 6 Approaches to High-Dimensional Covariance and Precision Matrix Estimations 100 Jianqing Fan, Yuan Liao, and Han Liu 6.1 Introduction 100 6.2 Covariance Estimation via Factor Analysis 101 6.2.1 Known Factors 103 6.2.2 Unknown Factors 104 6.2.3 Choosing the Threshold 105 6.2.4 Asymptotic Results 105 6.2.5 A Numerical Illustration 107 6.3 Precision Matrix Estimation and Graphical Models 109 6.3.1 Column-wise Precision Matrix Estimation 110 6.3.2 The Need for Tuning-insensitive Procedures 111 6.3.3 TIGER: A Tuning-insensitive Approach for Optimal Precision Matrix Estimation 112 6.3.4 Computation 114 6.3.5 Theoretical Properties of TIGER 114 
6.3.6 Applications to Modeling Stock Returns 115 6.3.7 Applications to Genomic Network 118 6.4 Financial Applications 119 6.4.1 Estimating Risks of Large Portfolios 119 6.4.2 Large Panel Test of Factor Pricing Models 121 6.5 Statistical Inference in Panel Data Models 126 6.5.1 Efficient Estimation in Pure Factor Models 126 6.5.2 Panel Data Model with Interactive Effects 127 6.5.3 Numerical Illustrations 130 6.6 Conclusions 131 References 131 7 Stochastic Volatility 135 Matthew Lorig and Ronnie Sircar 7.1 Introduction 135 7.1.1 Options and Implied Volatility 136 7.1.2 Volatility Modeling 137 7.2 Asymptotic Regimes and Approximations 141 7.2.1 Contract Asymptotics 142 7.2.2 Model Asymptotics 142 7.2.3 Implied Volatility Asymptotics 143 7.2.4 Tractable Models 145 7.2.5 Model Coefficient Polynomial Expansions 146 7.2.6 Small “Vol of Vol” Expansion 152 7.2.7 Separation of Timescales Approach 152 7.2.8 Comparison of the Expansion Schemes 154 7.3 Merton Problem with Stochastic Volatility: Model Coefficient Polynomial Expansions 155 7.3.1 Models and Dynamic Programming Equation 155 7.3.2 Asymptotic Approximation 157 7.3.3 Power Utility 159 7.4 Conclusions 160 Acknowledgements 160 References 160 8 Statistical Measures of Dependence for Financial Data 162 David S. Matteson, Nicholas A. James, and William B. 
Nicholson 8.1 Introduction 162 8.2 Robust Measures of Correlation and Autocorrelation 164 8.2.1 Transformations and Rank-Based Methods 166 8.2.2 Inference 169 8.2.3 Misspecification Testing 171 8.3 Multivariate Extensions 174 8.3.1 Multivariate Volatility 175 8.3.2 Multivariate Misspecification Testing 176 8.3.3 Granger Causality 176 8.3.4 Nonlinear Granger Causality 177 8.4 Copulas 179 8.4.1 Fitting Copula Models 180 8.4.2 Parametric Copulas 181 8.4.3 Extending beyond Two Random Variables 183 8.4.4 Software 185 8.5 Types of Dependence 185 8.5.1 Positive and Negative Dependence 185 8.5.2 Tail Dependence 187 References 188 9 Correlated Poisson Processes and Their Applications in Financial Modeling 191 Alexander Kreinin 9.1 Introduction 191 9.2 Poisson Processes and Financial Scenarios 193 9.2.1 Integrated Market–Credit Risk Modeling 193 9.2.2 Market Risk and Derivatives Pricing 194 9.2.3 Operational Risk Modeling 194 9.2.4 Correlation of Operational Events 195 9.3 Common Shock Model and Randomization of Intensities 196 9.3.1 Common Shock Model 196 9.3.2 Randomization of Intensities 196 9.4 Simulation of Poisson Processes 197 9.4.1 Forward Simulation 197 9.4.2 Backward Simulation 200 9.5 Extreme Joint Distribution 207 9.5.1 Reduction to Optimization Problem 207 9.5.2 Monotone Distributions 208 9.5.3 Computation of the Joint Distribution 214 9.5.4 On the Fréchet–Hoeffding Theorem 215 9.5.5 Approximation of the Extreme Distributions 217 9.6 Numerical Results 219 9.6.1 Examples of the Support 219 9.6.2 Correlation Boundaries 221 9.7 Backward Simulation of the Poisson-Wiener Process 222 9.8 Concluding Remarks 227 Acknowledgments 228 Appendix A 229 A.1 Proof of Lemmas 9.2 and 9.3 229 A.1.1 Proof of Lemma 9.2 229 A.1.2 Proof of Lemma 9.3 230 References 231 10 CVaR Minimizations in Support Vector Machines 233 Jun-ya Gotoh and Akiko Takeda 10.1 What Is CVaR? 
234 10.1.1 Definition and Interpretations 234 10.1.2 Basic Properties of CVaR 238 10.1.3 Minimization of CVaR 240 10.2 Support Vector Machines 242 10.2.1 Classification 242 10.2.2 Regression 246 10.3 ν-SVMs as CVaR Minimizations 247 10.3.1 ν-SVMs as CVaR Minimizations with Homogeneous Loss 247 10.3.2 ν-SVMs as CVaR Minimizations with Nonhomogeneous Loss 251 10.3.3 Refining the ν-Property 253 10.4 Duality 256 10.4.1 Binary Classification 256 10.4.2 Geometric Interpretation of ν-SVM 257 10.4.3 Geometric Interpretation of the Range of ν for ν-SVC 258 10.4.4 Regression 259 10.4.5 One-class Classification and SVDD 259 10.5 Extensions to Robust Optimization Modelings 259 10.5.1 Distributionally Robust Formulation 259 10.5.2 Measurement-wise Robust Formulation 261 10.6 Literature Review 262 10.6.1 CVaR as a Risk Measure 263 10.6.2 From CVaR Minimization to SVM 263 10.6.3 From SVM to CVaR Minimization 263 10.6.4 Beyond CVaR 263 References 264 11 Regression Models in Risk Management 266 Stan Uryasev 11.1 Introduction 267 11.2 Error and Deviation Measures 268 11.3 Risk Envelopes and Risk Identifiers 271 11.3.1 Examples of Deviation Measures D, Corresponding Risk Envelopes Q, and Sets of Risk Identifiers QD(X) 272 11.4 Error Decomposition in Regression 273 11.5 Least-Squares Linear Regression 275 11.6 Median Regression 277 11.7 Quantile Regression and Mixed Quantile Regression 281 11.8 Special Types of Linear Regression 283 11.9 Robust Regression 284 References, Further Reading, and Bibliography 287 Index 289

    15 in stock

    £79.16

  • Robot Learning by Visual Observation

    John Wiley & Sons Inc Robot Learning by Visual Observation

    15 in stock

    Book Synopsis: This book presents programming by demonstration for robot learning from observations, with a focus on the trajectory level of task abstraction. Discusses methods for optimization of task reproduction, such as reformulation of task planning as a constrained optimization problem. Focuses on regression approaches, such as Gaussian mixture regression, spline regression, and locally weighted regression. Concentrates on the use of vision sensors for capturing motions and actions during task demonstration by a human task expert. Table of Contents: Preface xi List of Abbreviations xv 1 Introduction 1 1.1 Robot Programming Methods 2 1.2 Programming by Demonstration 3 1.3 Historical Overview of Robot PbD 4 1.4 PbD System Architecture 6 1.4.1 Learning Interfaces 8 1.4.1.1 Sensor-Based Techniques 10 1.4.2 Task Representation and Modeling 13 1.4.2.1 Symbolic Level 14 1.4.2.2 Trajectory Level 16 1.4.3 Task Analysis and Planning 18 1.4.3.1 Symbolic Level 18 1.4.3.2 Trajectory Level 19 1.4.4 Program Generation and Task Execution 20 1.5 Applications 21 1.6 Research Challenges 25 1.6.1 Extracting the Teacher’s Intention from Observations 26 1.6.2 Robust Learning from Observations 27 1.6.2.1 Robust Encoding of Demonstrated Motions 27 1.6.2.2 Robust Reproduction of PbD Plans 29 1.6.3 Metrics for Evaluation of Learned Skills 29 1.6.4 Correspondence Problem 30 1.6.5 Role of the Teacher in PbD 31 1.7 Summary 32 References 33 2 Task Perception 43 2.1 Optical Tracking Systems 43 2.2 Vision Cameras 44 2.3 Summary 46 References 46 3 Task Representation 49 3.1 Level of Abstraction 50 3.2 Probabilistic Learning 51 3.3 Data Scaling and Aligning 51 3.3.1 Linear Scaling 52 3.3.2 Dynamic Time Warping (DTW) 52 3.4 Summary 55 References 55 4 Task Modeling 57 4.1 Gaussian Mixture Model (GMM) 57 4.2 Hidden Markov Model (HMM) 59 4.2.1 Evaluation Problem 61 4.2.2 Decoding Problem 62 4.2.3 Training Problem 62 4.2.4 Continuous Observation Data 63 4.3 Conditional Random Fields (CRFs) 64 4.3.1 Linear 
Chain CRF 65 4.3.2 Training and Inference 66 4.4 Dynamic Motion Primitives (DMPs) 68 4.5 Summary 70 References 70 5 Task Planning 73 5.1 Gaussian Mixture Regression 73 5.2 Spline Regression 74 5.2.1 Extraction of Key Points as Trajectories Features 75 5.2.2 HMM-Based Modeling and Generalization 80 5.2.2.1 Related Work 80 5.2.2.2 Modeling 81 5.2.2.3 Generalization 83 5.2.2.4 Experiments 87 5.2.2.5 Comparison with Related Work 100 5.2.3 CRF Modeling and Generalization 107 5.2.3.1 Related Work 107 5.2.3.2 Feature Functions Formation 107 5.2.3.3 Trajectories Encoding and Generalization 109 5.2.3.4 Experiments 111 5.2.3.5 Comparisons with Related Work 115 5.3 Locally Weighted Regression 117 5.4 Gaussian Process Regression 121 5.5 Summary 122 References 123 6 Task Execution 129 6.1 Background and Related Work 129 6.2 Kinematic Robot Control 132 6.3 Vision-Based Trajectory Tracking Control 134 6.3.1 Image-Based Visual Servoing (IBVS) 134 6.3.2 Position-Based Visual Servoing (PBVS) 135 6.3.3 Advanced Visual Servoing Methods 141 6.4 Image-Based Task Planning 141 6.4.1 Image-Based Learning Environment 141 6.4.2 Task Planning 142 6.4.3 Second-Order Conic Optimization 143 6.4.4 Objective Function 144 6.4.5 Constraints 146 6.4.5.1 Image-Space Constraints 146 6.4.5.2 Cartesian Space Constraints 149 6.4.5.3 Robot Manipulator Constraints 150 6.4.6 Optimization Model 152 6.5 Robust Image-Based Tracking Control 156 6.5.1 Simulations 157 6.5.1.1 Simulation 1 158 6.5.1.2 Simulation 2 161 6.5.2 Experiments 162 6.5.2.1 Experiment 1 166 6.5.2.2 Experiment 2 173 6.5.2.3 Experiment 3 173 6.5.3 Robustness Analysis and Comparisons with Other Methods 173 6.6 Discussion 183 6.7 Summary 185 References 185 Index 000

    15 in stock

    £93.56

  • Deep Learning for Physical Scientists

    John Wiley & Sons Inc Deep Learning for Physical Scientists

    1 in stock

    Book Synopsis: Discover the power of machine learning in the physical sciences with this one-stop resource from a leading voice in the field. Deep Learning for Physical Scientists: Accelerating Research with Machine Learning delivers an insightful analysis of the transformative techniques being used in deep learning within the physical sciences. The book offers readers the ability to understand, select, and apply the best deep learning techniques for their individual research problem and interpret the outcome. Designed to teach researchers to think in useful new ways about how to achieve results in their research, the book provides scientists with new avenues to attack problems and avoid common pitfalls. Practical case studies and problems are presented, giving readers an opportunity to put what they have learned into practice, with exemplar coding approaches provided to assist the reader. From modelling basics to feed-forward networks, the book offers… Table of Contents: About the Authors xi Acknowledgements xii 1 Prefix – Learning to “Think Deep” 1 1.1 So What Do I Mean by Changing the Way You Think? 2 2 Setting Up a Python Environment for Deep Learning Projects 5 2.1 Python Overview 5 2.2 Why Use Python for Data Science? 6 2.3 Anaconda Python 7 2.3.1 Why Use Anaconda? 7 2.3.2 Downloading and Installing Anaconda Python 7 2.3.2.1 Installing TensorFlow 9 2.4 Jupyter Notebooks 10 2.4.1 Why Use a Notebook? 
10 2.4.2 Starting a Jupyter Notebook Server 11 2.4.3 Adding Markdown to Notebooks 12 2.4.4 A Simple Plotting Example 14 2.4.5 Summary 16 3 Modelling Basics 17 3.1 Introduction 17 3.2 Start Where You Mean to Go On – Input Definition and Creation 17 3.3 Loss Functions 18 3.3.1 Classification and Regression 19 3.3.2 Regression Loss Functions 19 3.3.2.1 Mean Absolute Error 19 3.3.2.2 Root Mean Squared Error 19 3.3.3 Classification Loss Functions 20 3.3.3.1 Precision 21 3.3.3.2 Recall 21 3.3.3.3 F1 Score 22 3.3.3.4 Confusion Matrix 22 3.3.3.5 (Area Under) Receiver Operator Curve (AU-ROC) 23 3.3.3.6 Cross Entropy 25 3.4 Overfitting and Underfitting 28 3.4.1 Bias–Variance Trade-Off 29 3.5 Regularisation 31 3.5.1 Ridge Regression 31 3.5.2 LASSO Regularisation 33 3.5.3 Elastic Net 34 3.5.4 Bagging and Model Averaging 34 3.6 Evaluating a Model 35 3.6.1 Holdout Testing 35 3.6.2 Cross Validation 36 3.7 The Curse of Dimensionality 37 3.7.1 Normalising Inputs and Targets 37 3.8 Summary 39 Notes 39 4 Feedforward Networks and Multilayered Perceptrons 41 4.1 Introduction 41 4.2 The Single Perceptron 41 4.2.1 Training a Perceptron 41 4.2.2 Activation Functions 42 4.2.3 Back Propagation 43 4.2.3.1 Weight Initialisation 45 4.2.3.2 Learning Rate 46 4.2.4 Key Assumptions 46 4.2.5 Putting It All Together in TensorFlow 47 4.3 Moving to a Deep Network 49 4.4 Vanishing Gradients and Other “Deep” Problems 53 4.4.1 Gradient Clipping 54 4.4.2 Non-saturating Activation Functions 54 4.4.2.1 ReLU 54 4.4.2.2 Leaky ReLU 56 4.4.2.3 ELU 57 4.4.3 More Complex Initialisation Schemes 57 4.4.3.1 Xavier 58 4.4.3.2 He 58 4.4.4 Mini Batching 59 4.5 Improving the Optimisation 60 4.5.1 Bias 60 4.5.2 Momentum 63 4.5.3 Nesterov Momentum 63 4.5.4 (Adaptive) Learning Rates 63 4.5.5 AdaGrad 64 4.5.6 RMSProp 65 4.5.7 Adam 65 4.5.8 Regularisation 66 4.5.9 Early Stopping 66 4.5.10 Dropout 68 4.6 Parallelisation of learning 69 4.6.1 Hogwild! 
69 4.7 High and Low-level Tensorflow APIs 70 4.8 Architecture Implementations 72 4.9 Summary 73 4.10 Papers to Read 73 5 Recurrent Neural Networks 77 5.1 Introduction 77 5.2 Basic Recurrent Neural Networks 77 5.2.1 Training a Basic RNN 78 5.2.2 Putting It All Together in TensorFlow 79 5.2.3 The Problem with Vanilla RNNs 81 5.3 Long Short-Term Memory (LSTM) Networks 82 5.3.1 Forget Gate 82 5.3.2 Input Gate 84 5.3.3 Output Gate 84 5.3.4 Peephole Connections 85 5.3.5 Putting It All Together in TensorFlow 86 5.4 Gated Recurrent Units 87 5.4.1 Putting It All Together in TensorFlow 88 5.5 Using Keras for RNNs 88 5.6 Real World Implementations 89 5.7 Summary 89 5.8 Papers to Read 90 6 Convolutional Neural Networks 93 6.1 Introduction 93 6.2 Fundamental Principles of Convolutional Neural Networks 94 6.2.1 Convolution 94 6.2.2 Pooling 95 6.2.2.1 Why Use Pooling? 95 6.2.2.2 Types of Pooling 96 6.2.3 Stride and Padding 99 6.2.4 Sparse Connectivity 101 6.2.5 Parameter Sharing 101 6.2.6 Convolutional Neural Networks with TensorFlow 102 6.3 Graph Convolutional Networks 103 6.3.1 Graph Convolutional Networks in Practice 104 6.4 Real World Implementations 107 6.5 Summary 108 6.6 Papers to Read 108 7 Auto-Encoders 111 7.1 Introduction 111 7.1.1 Auto-Encoders for Dimensionality Reduction 111 7.2 Getting a Good Start – Stacked Auto-Encoders, Restricted Boltzmann Machines, and Pretraining 115 7.2.1 Restricted Boltzmann Machines 115 7.2.2 Stacking Restricted Boltzmann Machines 118 7.3 Denoising Auto-Encoders 120 7.4 Variational Auto-Encoders 121 7.5 Sequence to Sequence Learning 125 7.6 The Attention Mechanism 126 7.7 Application in Chemistry: Building a Molecular Generator 127 7.8 Summary 132 7.9 Real World Implementations 132 7.10 Papers to Read 132 8 Optimising Models Using Bayesian Optimisation 135 8.1 Introduction 135 8.2 Defining Our Function 135 8.3 Grid and Random Search 136 8.4 Moving Towards an Intelligent Search 137 8.5 Exploration and Exploitation 137 8.6 Greedy Search 138 
8.6.1 Key Fact One – Exploitation Heavy Search is Susceptible to Initial Data Bias 139 8.7 Diversity Search 141 8.8 Bayesian Optimisation 142 8.8.1 Domain Knowledge (or Prior) 142 8.8.2 Gaussian Processes 145 8.8.3 Kernels 146 8.8.3.1 Stationary Kernels 146 8.8.3.2 Noise Kernel 147 8.8.4 Combining Gaussian Process Prediction and Optimisation 149 8.8.4.1 Probability of Improvement 149 8.8.4.2 Expected Improvement 150 8.8.5 Balancing Exploration and Exploitation 151 8.8.6 Upper and Lower Confidence Bound Algorithm 151 8.8.7 Maximum Entropy Sampling 152 8.8.8 Optimising the Acquisition Function 153 8.8.9 Cost Sensitive Bayesian Optimisation 155 8.8.10 Constrained Bayesian Optimisation 158 8.8.11 Parallel Bayesian Optimisation 158 8.8.11.1 qEI 158 8.8.11.2 Constant Liar and Kriging Believer 160 8.8.11.3 Local Penalisation 162 8.8.11.4 Parallel Thompson Sampling 162 8.8.11.5 K-Means Batch Bayesian Optimisation 162 8.9 Summary 163 8.10 Papers to Read 163 Case Study 1 Solubility Prediction Case Study 167 CS 1.1 Step 1 – Import Packages 167 CS 1.2 Step 2 – Importing the Data 168 CS 1.3 Step 3 – Creating the Inputs 168 CS 1.4 Step 4 – Splitting into Training and Testing 168 CS 1.5 Step 5 – Defining Our Model 169 CS 1.6 Step 6 – Running Our Model 169 CS 1.7 Step 7 – Automatically Finding an Optimised Architecture Using Bayesian Optimisation 170 Case Study 2 Time Series Forecasting with LSTMs 173 CS 2.1 Simple LSTM 173 CS 2.2 Sequence-to-Sequence LSTM 177 Case Study 3 Deep Embeddings for Auto-Encoder-Based Featurisation 185 Index 190

    1 in stock

    £55.76

  • Big Data and Machine Learning in Quantitative Investment

    John Wiley & Sons Inc Big Data and Machine Learning in Quantitative Investment

    15 in stock

    Book Synopsis: Get to know the ‘why’ and ‘how’ of machine learning and big data in quantitative investment. Big Data and Machine Learning in Quantitative Investment is not just about demonstrating the maths or the coding. Instead, it’s a book by practitioners for practitioners, covering the questions of why and how of applying machine learning and big data to quantitative finance. The book is split into 13 chapters, each of which is written by a different author on a specific case. The chapters are ordered according to the level of complexity, beginning with the big picture and taxonomy, moving on to practical applications of machine learning, and finally finishing with innovative approaches using deep learning. Gain a solid reason to use machine learning. Frame your question using financial markets’ laws. Know your data. Understand how machine learning is becoming ever more sophisticated. Machine learning and big data are not a magical solution, but appropriately applied, they are extremely effective. Table of Contents: CHAPTER 1 Do Algorithms Dream About Artificial Alphas? 1 By Michael Kollo CHAPTER 2 Taming Big Data 13 By Rado Lipuš and Daryl Smith CHAPTER 3 State of Machine Learning Applications in Investment Management 33 By Ekaterina Sirotyuk CHAPTER 4 Implementing Alternative Data in an Investment Process 51 By Vinesh Jha CHAPTER 5 Using Alternative and Big Data to Trade Macro Assets 75 By Saeed Amen and Iain Clark CHAPTER 6 Big Is Beautiful: How Email Receipt Data Can Help Predict Company Sales 95 By Giuliano De Rossi, Jakub Kolodziej and Gurvinder Brar CHAPTER 7 Ensemble Learning Applied to Quant Equity: Gradient Boosting in a Multifactor Framework 129 By Tony Guida and Guillaume Coqueret CHAPTER 8 A Social Media Analysis of Corporate Culture 149 By Andy Moniz CHAPTER 9 Machine Learning and Event Detection for Trading Energy Futures 169 By Peter Hafez and Francesco Lautizi CHAPTER 10 Natural Language Processing of Financial News 185 By M. 
Berkan Sesen, Yazann Romahi and Victor Li CHAPTER 11 Support Vector Machine-Based Global Tactical Asset Allocation 211 By Joel Guglietta CHAPTER 12 Reinforcement Learning in Finance 225 By Gordon Ritter CHAPTER 13 Deep Learning in Finance: Prediction of Stock Returns with Long Short-Term Memory Networks 251 By Miquel N. Alonso, Gilberto Batres-Estrada and Aymeric Moulin Biography 279

    15 in stock

    £37.80

  • Machine Learning in the AWS Cloud

    John Wiley & Sons Inc Machine Learning in the AWS Cloud

    7 in stock

    Book Synopsis: Put the power of AWS Cloud machine learning services to work in your business and commercial applications! Machine Learning in the AWS Cloud introduces readers to the machine learning (ML) capabilities of the Amazon Web Services ecosystem and provides practical examples to solve real-world regression and classification problems. While readers do not need prior ML experience, they are expected to have some knowledge of Python and a basic knowledge of Amazon Web Services. Part One introduces readers to fundamental machine learning concepts. You will learn about the types of ML systems, how they are used, and challenges you may face with ML solutions. Part Two focuses on machine learning services provided by Amazon Web Services. You'll be introduced to the basics of cloud computing and AWS offerings in the cloud-based machine learning space. Then you'll learn to use Amazon Machine Learning to solve a simpler class of machine learning problems, and Amazon SageMaker to solve more complex problems. Table of Contents: Introduction xxiii Part 1 Fundamentals of Machine Learning 1 Chapter 1 Introduction to Machine Learning 3 What is Machine Learning? 
4 Tools Commonly Used by Data Scientists 4 Common Terminology 5 Real-World Applications of Machine Learning 7 Types of Machine Learning Systems 8 Supervised Learning 8 Unsupervised Learning 9 Semi-Supervised Learning 10 Reinforcement Learning 11 Batch Learning 11 Incremental Learning 12 Instance-based Learning 12 Model-based Learning 12 The Traditional Versus the Machine Learning Approach 13 A Rule-based Decision System 14 A Machine Learning–based System 17 Summary 25 Chapter 2 Data Collection and Preprocessing 27 Machine Learning Datasets 27 Scikit-learn Datasets 27 AWS Public Datasets 30 Kaggle.com Datasets 30 UCI Machine Learning Repository 30 Data Preprocessing Techniques 31 Obtaining an Overview of the Data 31 Handling Missing Values 42 Creating New Features 44 Transforming Numeric Features 46 One-Hot Encoding Categorical Features 47 Summary 50 Chapter 3 Data Visualization with Python 51 Introducing Matplotlib 51 Components of a Plot 54 Figure 55 Axes 55 Axis 56 Axis Labels 56 Grids 57 Title 57 Common Plots 58 Histograms 58 Bar Chart 62 Grouped Bar Chart 63 Stacked Bar Chart 65 Stacked Percentage Bar Chart 67 Pie Charts 69 Box Plot 71 Scatter Plots 73 Summary 78 Chapter 4 Creating Machine Learning Models with Scikit-learn 79 Introducing Scikit-learn 79 Creating a Training and Test Dataset 80 K-Fold Cross Validation 84 Creating Machine Learning Models 86 Linear Regression 86 Support Vector Machines 92 Logistic Regression 101 Decision Trees 109 Summary 114 Chapter 5 Evaluating Machine Learning Models 115 Evaluating Regression Models 115 RMSE Metric 117 R2 Metric 119 Evaluating Classification Models 119 Binary Classification Models 119 Multi-Class Classification Models 126 Choosing Hyperparameter Values 131 Summary 132 Part 2 Machine Learning with Amazon Web Services 133 Chapter 6 Introduction to Amazon Web Services 135 What is Cloud Computing? 
135 Cloud Service Models 136 Cloud Deployment Models 138 The AWS Ecosystem 139 Machine Learning Application Services 140 Machine Learning Platform Services 141 Support Services 142 Sign Up for an AWS Free-Tier Account 142 Step 1: Contact Information 143 Step 2: Payment Information 145 Step 3: Identity Verification 145 Step 4: Support Plan Selection 147 Step 5: Confirmation 148 Summary 148 Chapter 7 AWS Global Infrastructure 151 Regions and Availability Zones 151 Edge Locations 153 Accessing AWS 154 The AWS Management Console 156 Summary 160 Chapter 8 Identity and Access Management 161 Key Concepts 161 Root Account 161 User 162 Identity Federation 162 Group 163 Policy 164 Role 164 Common Tasks 165 Creating a User 167 Modifying Permissions Associated with an Existing Group 172 Creating a Role 173 Securing the Root Account with MFA 176 Setting Up an IAM Password Rotation Policy 179 Summary 180 Chapter 9 Amazon S3 181 Key Concepts 181 Bucket 181 Object Key 182 Object Value 182 Version ID 182 Storage Class 182 Costs 183 Subresources 183 Object Metadata 184 Common Tasks 185 Creating a Bucket 185 Uploading an Object 189 Accessing an Object 191 Changing the Storage Class of an Object 195 Deleting an Object 196 Amazon S3 Bucket Versioning 197 Accessing Amazon S3 Using the AWS CLI 199 Summary 200 Chapter 10 Amazon Cognito 201 Key Concepts 201 Authentication 201 Authorization 201 Identity Provider 202 Client 202 OAuth 2.0 202 OpenID Connect 202 Amazon Cognito User Pool 202 Identity Pool 203 Amazon Cognito Federated Identities 203 Common Tasks 204 Creating a User Pool 204 Retrieving the App Client Secret 213 Creating an Identity Pool 214 User Pools or Identity Pools: Which One Should You Use? 
218 Summary 219 Chapter 11 Amazon DynamoDB 221 Key Concepts 221 Tables 222 Global Tables 222 Items 222 Attributes 222 Primary Keys 222 Secondary Indexes 223 Queries 223 Scans 223 Read Consistency 224 Read/Write Capacity Modes 224 Common Tasks 225 Creating a Table 225 Adding Items to a Table 228 Creating an Index 231 Performing a Scan 233 Performing a Query 235 Summary 236 Chapter 12 AWS Lambda 237 Common Use Cases for Lambda 237 Key Concepts 238 Supported Languages 238 Lambda Functions 238 Programming Model 239 Execution Environment 243 Service Limitations 244 Pricing and Availability 244 Common Tasks 244 Creating a Simple Python Lambda Function Using the AWS Management Console 244 Testing a Lambda Function Using the AWS Management Console 250 Deleting an AWS Lambda Function Using the AWS Management Console 253 Summary 255 Chapter 13 Amazon Comprehend 257 Key Concepts 257 Natural Language Processing 257 Topic Modeling 259 Language Support 259 Pricing and Availability 259 Text Analysis Using the Amazon Comprehend Management Console 260 Interactive Text Analysis with the AWS CLI 262 Entity Detection with the AWS CLI 263 Key Phrase Detection with the AWS CLI 264 Sentiment Analysis with the AWS CLI 265 Using Amazon Comprehend with AWS Lambda 266 Summary 274 Chapter 14 Amazon Lex 275 Key Concepts 275 Bot 275 Client Application 276 Intent 276 Slot 276 Utterance 277 Programming Model 277 Pricing and Availability 278 Creating an Amazon Lex Bot 278 Creating Amazon DynamoDB Tables 278 Creating AWS Lambda Functions 285 Creating the Chatbot 304 Customizing the AccountOverview Intent 308 Customizing the ViewTransactionList Intent 312 Testing the Chatbot 314 Summary 315 Chapter 15 Amazon Machine Learning 317 Key Concepts 317 Datasources 318 ML Model 318 Regularization 319 Training Parameters 319 Descriptive Statistics 320 Pricing and Availability 321 Creating Datasources 321 Creating the Training Datasource 324 Creating the Test Datasource 330 Viewing Data Insights 332 Creating 
an ML Model 337 Making Batch Predictions 341 Creating a Real-Time Prediction Endpoint for Your Machine Learning Model 346 Making Predictions Using the AWS CLI 347 Using Real-Time Prediction Endpoints with Your Applications 349 Summary 350 Chapter 16 Amazon SageMaker 353 Key Concepts 353 Programming Model 354 Amazon SageMaker Notebook Instances 354 Training Jobs 354 Prediction Instances 355 Prediction Endpoint and Endpoint Configuration 355 Amazon SageMaker Batch Transform 355 Data Channels 355 Data Sources and Formats 356 Built-in Algorithms 356 Pricing and Availability 357 Creating an Amazon SageMaker Notebook Instance 357 Preparing Test and Training Data 362 Training a Scikit-learn Model on an Amazon SageMaker Notebook Instance 364 Training a Scikit-learn Model on a Dedicated Training Instance 368 Training a Model Using a Built-in Algorithm on a Dedicated Training Instance 379 Summary 384 Chapter 17 Using Google TensorFlow with Amazon SageMaker 387 Introduction to Google TensorFlow 387 Creating a Linear Regression Model with Google TensorFlow 390 Training and Deploying a DNN Classifier Using the TensorFlow Estimators API and Amazon SageMaker 408 Summary 419 Chapter 18 Amazon Rekognition 421 Key Concepts 421 Object Detection 421 Object Location 422 Scene Detection 422 Activity Detection 422 Facial Recognition 422 Face Collection 422 API Sets 422 Non-Storage and Storage-Based Operations 423 Model Versioning 423 Pricing and Availability 423 Analyzing Images Using the Amazon Rekognition Management Console 423 Interactive Image Analysis with the AWS CLI 428 Using Amazon Rekognition with AWS Lambda 433 Creating the Amazon DynamoDB Table 433 Creating the AWS Lambda Function 435 Summary 444 Appendix A Anaconda and Jupyter Notebook Setup 445 Installing the Anaconda Distribution 445 Creating a Conda Python Environment 447 Installing Python Packages 449 Installing Jupyter Notebook 451 Summary 454 Appendix B AWS Resources Needed to Use This Book 455 Creating an IAM User for 
Development 455 Creating S3 Buckets 458 Appendix C Installing and Configuring the AWS CLI 461 Mac OS Users 461 Installing the AWS CLI 461 Configuring the AWS CLI 462 Windows Users 464 Installing the AWS CLI 464 Configuring the AWS CLI 465 Appendix D Introduction to NumPy and Pandas 467 NumPy 467 Creating NumPy Arrays 467 Modifying Arrays 471 Indexing and Slicing 474 Pandas 475 Creating Series and Dataframes 476 Getting Dataframe Information 478 Selecting Data 481 Index 485

    7 in stock

    £28.49

  • Machine Learning for Future Wireless Communications

    John Wiley & Sons Inc Machine Learning for Future Wireless Communications

    15 in stock

    Book Synopsis: A comprehensive review of the theory, application and research of machine learning for future wireless communications. In one single volume, Machine Learning for Future Wireless Communications provides a comprehensive and highly accessible treatment of the theory, applications and current research developments in machine learning for wireless communications and networks. The development of machine learning for wireless communications has grown explosively and is one of the biggest trends in the related academic, research and industry communities. Machine learning based on deep neural networks is a promising tool for attacking the big challenges in wireless communications and networks imposed by increasing demands for capacity, coverage, latency, efficiency, flexibility, compatibility, quality of experience and silicon convergence. The author, a noted expert on the topic, covers a wide range of topics including system architecture and more. Table of Contents: List of Contributors xv Preface xxi Part I Spectrum Intelligence and Adaptive Resource Management 1 1 Machine Learning for Spectrum Access and Sharing 3 Kobi Cohen 1.1 Introduction 3 1.2 Online Learning Algorithms for Opportunistic Spectrum Access 4 1.3 Learning Algorithms for Channel Allocation 9 1.4 Conclusions 19 Acknowledgments 20 Bibliography 20 2 Reinforcement Learning for Resource Allocation in Cognitive Radio Networks 27 Andres Kwasinski, Wenbo Wang, and Fatemeh Shah Mohammadi 2.1 Use of Q-Learning for Cross-layer Resource Allocation 29 2.2 Deep Q-Learning and Resource Allocation 33 2.3 Cooperative Learning and Resource Allocation 36 2.4 Conclusions 42 Bibliography 43 3 Machine Learning for Spectrum Sharing in Millimeter-Wave Cellular Networks 45 Hadi Ghauch, Hossein Shokri-Ghadikolaei, Gabor Fodor, Carlo Fischione, and Mikael Skoglund 3.1 Background and Motivation 45 3.2 System Model and Problem Formulation 49 3.3 Hybrid Solution Approach 54 3.4 
Conclusions and Discussions 59 Appendix A Appendix for Chapter 3 61 A.1 Overview of Reinforcement Learning 61 Bibliography 61 4 Deep Learning–Based Coverage and Capacity Optimization 63 Andrei Marinescu, Zhiyuan Jiang, Sheng Zhou, Luiz A. DaSilva, and Zhisheng Niu 4.1 Introduction 63 4.2 Related Machine Learning Techniques for Autonomous Network Management 64 4.3 Data-Driven Base-Station Sleeping Operations by Deep Reinforcement Learning 67 4.4 Dynamic Frequency Reuse through a Multi-Agent Neural Network Approach 72 4.5 Conclusions 81 Bibliography 82 5 Machine Learning for Optimal Resource Allocation 85 Marius Pesavento and Florian Bahlke 5.1 Introduction and Motivation 85 5.2 System Model 88 5.3 Resource Minimization Approaches 90 5.4 Numerical Results 96 5.5 Concluding Remarks 99 Bibliography 100 6 Machine Learning in Energy Efficiency Optimization 105 Muhammad Ali Imran, Ana Flávia dos Reis, Glauber Brante, Paulo Valente Klaine, and Richard Demo Souza 6.1 Self-Organizing Wireless Networks 106 6.2 Traffic Prediction and Machine Learning 110 6.3 Cognitive Radio and Machine Learning 111 6.4 Future Trends and Challenges 112 6.5 Conclusions 114 Bibliography 114 7 Deep Learning Based Traffic and Mobility Prediction 119 Honggang Zhang, Yuxiu Hua, Chujie Wang, Rongpeng Li, and Zhifeng Zhao 7.1 Introduction 119 7.2 Related Work 120 7.3 Mathematical Background 122 7.4 ANN-Based Models for Traffic and Mobility Prediction 124 7.5 Conclusion 133 Bibliography 134 8 Machine Learning for Resource-Efficient Data Transfer in Mobile Crowdsensing 137 Benjamin Sliwa, Robert Falkenberg, and Christian Wietfeld 8.1 Mobile Crowdsensing 137 8.2 ML-Based Context-Aware Data Transmission 140 8.3 Methodology for Real-World Performance Evaluation 148 8.4 Results of the Real-World Performance Evaluation 149 8.5 Conclusion 152 Acknowledgments 154 Bibliography 154 Part II Transmission Intelligence and Adaptive Baseband Processing 157 9 Machine Learning–Based Adaptive Modulation and Coding Design 
159 Lin Zhang and Zhiqiang Wu 9.1 Introduction and Motivation 159 9.2 SL-Assisted AMC 162 9.3 RL-Assisted AMC 172 9.4 Further Discussion and Conclusions 178 Bibliography 178 10 Machine Learning–Based Nonlinear MIMO Detector 181 Song-Nam Hong and Seonho Kim 10.1 Introduction 181 10.2 A Multihop MIMO Channel Model 182 10.3 Supervised-Learning-based MIMO Detector 184 10.4 Low-Complexity SL (LCSL) Detector 188 10.5 Numerical Results 191 10.6 Conclusions 193 Bibliography 193 11 Adaptive Learning for Symbol Detection: A Reproducing Kernel Hilbert Space Approach 197 Daniyal Amir Awan, Renato Luis Garrido Cavalcante, Masahiro Yukawa, and Slawomir Stanczak 11.1 Introduction 197 11.2 Preliminaries 198 11.3 System Model 200 11.4 The Proposed Learning Algorithm 203 11.5 Simulation 207 11.6 Conclusion 208 Appendix A Derivation of the Sparsification Metric and the Projections onto the Subspace Spanned by the Nonlinear Dictionary 210 Bibliography 211 12 Machine Learning for Joint Channel Equalization and Signal Detection 213 Lin Zhang and Lie-Liang Yang 12.1 Introduction 213 12.2 Overview of Neural Network-Based Channel Equalization 214 12.3 Principles of Equalization and Detection 219 12.5 Performance of OFDM Systems With Neural Network-Based Equalization 232 12.6 Conclusions and Discussion 236 Bibliography 237 13 Neural Networks for Signal Intelligence: Theory and Practice 243 Jithin Jagannath, Nicholas Polosky, Anu Jagannath, Francesco Restuccia, and Tommaso Melodia 13.1 Introduction 243 13.2 Overview of Artificial Neural Networks 244 13.3 Neural Networks for Signal Intelligence 248 13.4 Neural Networks for Spectrum Sensing 255 13.5 Open Problems 259 13.6 Conclusion 260 Bibliography 260 14 Channel Coding with Deep Learning: An Overview 265 Shugong Xu 14.1 Overview of Channel Coding and Deep Learning 265 14.2 DNNs for Channel Coding 268 14.3 CNNs for Decoding 277 14.4 RNNs for Decoding 279 14.5 Conclusions 283 Bibliography 283 15 Deep Learning Techniques for Decoding Polar Codes 
287Warren J. Gross, Nghia Doan, Elie Ngomseu Mambou, and Seyyed Ali Hashemi 15.1 Motivation and Background 287 15.2 Decoding of Polar Codes: An Overview 289 15.3 DL-Based Decoding for Polar Codes 292 15.4 Conclusions 299 Bibliography 299 16 Neural Network–Based Wireless Channel Prediction 303Wei Jiang, Hans Dieter Schotten, and Ji-ying Xiang 16.1 Introduction 303 16.2 Adaptive Transmission Systems 305 16.3 The Impact of Outdated CSI 307 16.4 Classical Channel Prediction 309 16.5 NN-Based Prediction Schemes 313 16.6 Summary 323 Bibliography 323 Part III Network Intelligence and Adaptive System Optimization 327 17 Machine Learning for Digital Front-End: a Comprehensive Overview 329Pere L. Gilabert, David López-Bueno, Thi Quynh Anh Pham, and Gabriel Montoro 17.1 Motivation and Background 329 17.2 Overview of CFR and DPD 331 17.3 Dimensionality Reduction and ML 341 17.4 Nonlinear Neural Network Approaches 350 17.5 Support Vector Regression Approaches 368 17.6 Further Discussion and Conclusions 373 Bibliography 374 18 Neural Networks for Full-Duplex Radios: Self-Interference Cancellation 383Alexios Balatsoukas-Stimming 18.1 Nonlinear Self-Interference Models 384 18.2 Digital Self-Interference Cancellation 386 18.3 Experimental Results 391 18.4 Conclusions 393 Bibliography 395 19 Machine Learning for Context-Aware Cross-Layer Optimization 397Yang Yang, Zening Liu, Shuang Zhao, Ziyu Shao, and Kunlun Wang 19.1 Introduction 397 19.2 System Model 399 19.3 Problem Formulation and Analytical Framework 402 19.4 Predictive Multi-tier Operations Scheduling (PMOS) Algorithm 409 19.5 A Multi-tier Cost Model for User Scheduling in Fog Computing Networks 413 19.6 Conclusion 420 Bibliography 421 20 Physical-Layer Location Verification by Machine Learning 425Stefano Tomasin, Alessandro Brighente, Francesco Formaggio, and Gabriele Ruvoletto 20.1 IRLV by Wireless Channel Features 427 20.2 ML Classification for IRLV 428 20.3 Learning Phase Convergence 431 20.4 Experimental Results 433 
20.5 Conclusions 437 Bibliography 437 21 Deep Multi-Agent Reinforcement Learning for Cooperative Edge Caching 439M. Cenk Gursoy, Chen Zhong, and Senem Velipasalar 21.1 Introduction 439 21.2 System Model 441 21.3 Problem Formulation 443 21.4 Deep Actor-Critic Framework for Content Caching 446 21.5 Application to the Multi-Cell Network 448 21.6 Application to the Single-Cell Network with D2D Communications 452 21.7 Conclusion 454 Bibliography 455 Index 459

    15 in stock

    £106.16

  • Informatics and Machine Learning

    John Wiley & Sons Inc Informatics and Machine Learning

    15 in stock

    Book Synopsis
    Informatics and Machine Learning Discover a thorough exploration of how to use computational, algorithmic, statistical, and informatics methods to analyze digital data. Informatics and Machine Learning: From Martingales to Metaheuristics delivers an interdisciplinary presentation on how to analyze any data captured in digital form. The book describes how readers can conduct analyses of text, general sequential data, experimental observations over time, stock market and econometric histories, or symbolic data, like genomes. It contains large amounts of sample code to demonstrate the concepts contained within and assist with various levels of project work. The book offers a complete presentation of the mathematical underpinnings of a wide variety of forms of data analysis and provides extensive examples of programming implementations. It is based on two decades' worth of the distinguished author's teaching and industry experience. A thorough introduction…
    Table of Contents
    Preface xv
    1 Introduction 1
    1.1 Data Science: Statistics, Probability, Calculus … Python (or Perl) and Linux 2 1.2 Informatics and Data Analytics 3 1.3 FSA-Based Signal Acquisition and Bioinformatics 4 1.4 Feature Extraction and Language Analytics 7 1.5 Feature Extraction and Gene Structure Identification 8 1.5.1 HMMs for Analysis of Information Encoding Molecules 11 1.5.2 HMMs for Cheminformatics and Generic Signal Analysis 11 1.6 Theoretical Foundations for Learning 13 1.7 Classification and Clustering 13 1.8 Search 14 1.9 Stochastic Sequential Analysis (SSA) Protocol (Deep Learning Without NNs) 15 1.9.1 Stochastic Carrier Wave (SCW) Analysis – Nanoscope Signal Analysis 18 1.9.2 Nanoscope Cheminformatics – A Case Study for Device “Smartening” 19 1.10 Deep Learning Using Neural Nets 20 1.11 Mathematical Specifics and Computational Implementations 21
    2 Probabilistic Reasoning and Bioinformatics 23
    2.1 Python Shell Scripting 23 2.1.1 Sample Size Complications 33 2.2 Counting, the Enumeration Problem, and Statistics 34 2.3 From Counts to Frequencies to Probabilities 35 2.4 Identifying Emergent/Convergent Statistics and Anomalous Statistics 35 2.5 Statistics, Conditional Probability, and Bayes’ Rule 37 2.5.1 The Calculus of Conditional Probabilities: The Cox Derivation 37 2.5.2 Bayes’ Rule 38 2.5.3 Estimation Based on Maximal Conditional Probabilities 38 2.6 Emergent Distributions and Series 39 2.6.1 The Law of Large Numbers (LLN) 39 2.6.2 Distributions 39 2.6.3 Series 42 2.7 Exercises 42
    3 Information Entropy and Statistical Measures 47
    3.1 Shannon Entropy, Relative Entropy, Maxent, Mutual Information 48 3.1.1 The Khinchin Derivation 49 3.1.2 Maximum Entropy Principle 49 3.1.3 Relative Entropy and Its Uniqueness 51 3.1.4 Mutual Information 51 3.1.5 Information Measures Recap 52 3.2 Codon Discovery from Mutual Information Anomaly 58 3.3 ORF Discovery from Long-Tail Distribution Anomaly 66 3.3.1 Ab Initio Learning with smORFs, Holistic Modeling, and Bootstrap Learning 69 3.4 Sequential Processes and Markov Models 72 3.4.1 Markov Chains 73 3.5 Exercises 75
    4 Ad Hoc, Ab Initio, and Bootstrap Signal Acquisition Methods 77
    4.1 Signal Acquisition, or Scanning, at Linear Order Time-Complexity 77 4.2 Genome Analytics: The Gene-Finder 80 4.3 Objective Performance Evaluation: Sensitivity and Specificity 93 4.4 Signal Analytics: The Time-Domain Finite State Automaton (tFSA) 93 4.4.1 tFSA Spike Detector 95 4.4.2 tFSA-Based Channel Signal Acquisition Methods with Stable Baseline 98 4.4.3 tFSA-Based Channel Signal Acquisition Methods Without Stable Baseline 103 4.5 Signal Statistics (Fast): Mean, Variance, and Boxcar Filter 107 4.5.1 Efficient Implementations for Statistical Tools (O(L)) 109 4.6 Signal Spectrum: Nyquist Criterion, Gabor Limit, Power Spectrum 110 4.6.1 Nyquist Sampling Theorem 110 4.6.2 Fourier Transforms, and Other Classic Transforms 110 4.6.3 Power Spectral Density 111 4.6.4 Power-Spectrum-Based Feature Extraction 111 4.6.5 Cross-Power Spectral Density 112 4.6.6 AM/FM/PM Communications Protocol 112 4.7 Exercises 112
    5 Text Analytics 125
    5.1 Words 125 5.1.1 Text Acquisition: Text Scraping and Associative Memory 125 5.1.2 Word Frequency Analysis: Machiavelli’s Polysemy on Fortuna and Virtu 130 5.1.3 Word Frequency Analysis: Coleridge’s Hidden Polysemy on Logos 139 5.1.4 Sentiment Analysis 143 5.2 Phrases – Short (Three Words) 145 5.2.1 Shakespearean Insult Generation – Phrase Generation 147 5.3 Phrases – Long (A Line or Sentence) 150 5.3.1 Iambic Phrase Analysis: Shakespeare 150 5.3.2 Natural Language Processing 152 5.3.3 Sentence and Story Generation: Tarot 152 5.4 Exercises 153
    6 Analysis of Sequential Data Using HMMs 155
    6.1 Hidden Markov Models (HMMs) 155 6.1.1 Background and Role in Stochastic Sequential Analysis (SSA) 155 6.1.2 When to Use a Hidden Markov Model (HMM)? 160 6.1.3 Hidden Markov Models (HMMs) – Standard Formulation and Terms 161 6.2 Graphical Models for Markov Models and Hidden Markov Models 162 6.2.1 Hidden Markov Models 162 6.2.2 Viterbi Path 163 6.2.3 Forward and Backward Probabilities 164 6.2.4 HMM: Maximum Likelihood Discrimination 165 6.2.5 Expectation/Maximization (Baum–Welch) 166 6.3 Standard HMM Weaknesses and Their GHMM Fixes 168 6.4 Generalized HMMs (GHMMs – “Gems”): Minor Viterbi Variants 171 6.4.1 The Generic HMM 171 6.4.2 pMM/SVM 171 6.4.3 EM and Feature Extraction via EVA Projection 172 6.4.4 Feature Extraction via Data Absorption (a.k.a. Emission Inversion) 174 6.4.5 Modified AdaBoost for Feature Selection and Data Fusion 176 6.5 HMM Implementation for Viterbi (in C and Perl) 179 6.6 Exercises 206
    7 Generalized HMMs (GHMMs): Major Viterbi Variants 207
    7.1 GHMMs: Maximal Clique for Viterbi and Baum–Welch 207 7.2 GHMMs: Full Duration Model 216 7.2.1 HMM with Duration (HMMD) 216 7.2.2 Hidden Semi-Markov Models (HSMM) with Side-Information 220 7.2.3 HMM with Binned Duration (HMMBD) 224 7.3 GHMMs: Linear Memory Baum–Welch Algorithm 228 7.4 GHMMs: Distributable Viterbi and Baum–Welch Algorithms 230 7.4.1 Distributed HMM Processing via “Viterbi-Overlap-Chunking” with GPU Speedup 230 7.4.2 Relative Entropy and Viterbi Scoring 231 7.5 Martingales and the Feasibility of Statistical Learning (further details in Appendix) 232 7.6 Exercises 234
    8 Neuromanifolds and the Uniqueness of Relative Entropy 235
    8.1 Overview 235 8.2 Review of Differential Geometry 236 8.2.1 Differential Topology – Natural Manifold 236 8.2.2 Differential Geometry – Natural Geometric Structures 240 8.3 Amari’s Dually Flat Formulation 243 8.3.1 Generalization of Pythagorean Theorem 246 8.3.2 Projection Theorem and Relation Between Divergence and Link Formalism 246 8.4 Neuromanifolds 247 8.5 Exercises 250
    9 Neural Net Learning and Loss Bounds Analysis 253
    9.1 Brief Introduction to Neural Nets (NNs) 254 9.1.1 Single Neuron Discriminator 254 9.1.2 Neural Net with Back-Propagation 258 9.2 Variational Learning Formalism and Use in Loss Bounds Analysis 261 9.2.1 Variational Basis for Update Rule 261 9.2.2 Review and Generalization of GD Loss Bounds Analysis 262 9.2.3 Review of the EG Loss Bounds Analysis 266 9.3 The sinh⁻¹(ω) Link Algorithm (SA) 266 9.3.1 Motivation for the sinh⁻¹(ω) Link Algorithm (SA) 266 9.3.2 Relation of the sinh Link Algorithm to the Binary Exponentiated Gradient Algorithm 268 9.4 The Loss Bounds Analysis for sinh⁻¹(ω) 269 9.4.1 Loss Bounds Analysis Using the Taylor Series Approach 273 9.4.2 Loss Bounds Analysis Using Taylor Series for the sinh Link (SA) Algorithm 275 9.5 Exercises 277
    10 Classification and Clustering 279
    10.1 The SVM Classifier – An Overview 281 10.2 Introduction to Classification and Clustering 282 10.2.1 Sum of Squared Error (SSE) Scoring 286 10.2.2 K-Means Clustering (Unsupervised Learning) 286 10.2.3 k-Nearest Neighbors Classification (Supervised Learning) 292 10.2.4 The Perceptron Recap (See Chapter 9 for Details) 295 10.3 Lagrangian Optimization and Structural Risk Minimization (SRM) 296 10.3.1 Decision Boundary and SRM Construction Using Lagrangian 296 10.3.2 The Theory of Classification 301 10.3.3 The Mathematics of the Feasibility of Learning 303 10.3.4 Lagrangian Optimization 306 10.3.5 The Support Vector Machine (SVM) – Lagrangian with SRM 308 10.3.6 Kernel Construction Using Polarization 310 10.3.7 SVM Binary Classifier Derivation 312 10.4 SVM Binary Classifier Implementation 318 10.4.1 Sequential Minimal Optimization (SMO) 318 10.4.2 Alpha-Selection Variants 320 10.4.3 Chunking on Large Datasets: O(N²) → n·O(N²/n²) = O(N²)/n 320 10.4.4 Support Vector Reduction (SVR) 331 10.4.5 Code Examples (in OO Perl) 335 10.5 Kernel Selection and Tuning Metaheuristics 346 10.5.1 The “Stability” Kernels 346 10.5.2 Derivation of “Stability” Kernels 349 10.5.3 Entropic and Gaussian Kernels Relate to Unique, Minimally Structured, Information Divergence and Geometric Distance Measures 351 10.5.4 Automated Kernel Selection and Tuning 353 10.6 SVM Multiclass from Decision Tree with SVM Binary Classifiers 356 10.7 SVM Multiclass Classifier Derivation (Multiple Decision Surface) 359 10.7.1 Decomposition Method to Solve the Dual 361 10.7.2 SVM Speedup via Differentiating BSVs and SVs 362 10.8 SVM Clustering 364 10.8.1 SVM-External Clustering 365 10.8.2 Single-Convergence SVM-Clustering: Comparative Analysis 368 10.8.3 Stabilized, Single-Convergence Initialized, SVM-External Clustering 375 10.8.4 Stabilized, Multiple-Convergence, SVM-External Clustering 379 10.8.5 SVM-External Clustering – Algorithmic Variants 381 10.9 Exercises 385
    11 Search Metaheuristics 389
    11.1 Trajectory-Based Search Metaheuristics 389 11.1.1 Optimal-Fitness Configuration Trajectories – Fitness Function Known and Sufficiently Regular 390 11.1.2 Optimal-Fitness Configuration Trajectories – Fitness Function Not Known 392 11.1.3 Fitness Configuration Trajectories with Nonoptimal Updates 397 11.2 Population-Based Search Metaheuristics 399 11.2.1 Population with Evolution 400 11.2.2 Population with Group Interaction – Swarm Intelligence 402 11.2.3 Population with Indirect Interaction via Artifact 403 11.3 Exercises 404
    12 Stochastic Sequential Analysis (SSA) 407
    12.1 HMM and FSA-Based Methods for Signal Acquisition and Feature Extraction 408 12.2 The Stochastic Sequential Analysis (SSA) Protocol 410 12.2.1 (Stage 1) Primitive Feature Identification 415 12.2.2 (Stage 2) Feature Identification and Feature Selection 416 12.2.3 (Stage 3) Classification 418 12.2.4 (Stage 4) Clustering 418 12.2.5 (All Stages) Database/Data-Warehouse System Specification 419 12.2.6 (All Stages) Server-Based Data Analysis System Specification 420 12.3 Channel Current Cheminformatics (CCC) Implementation of the Stochastic Sequential Analysis (SSA) Protocol 420 12.4 SCW for Detector Sensitivity Boosting 423 12.4.1 NTD with Multiple Channels (or High Noise) 424 12.4.2 Stochastic Carrier Wave 426 12.5 SSA for Deep Learning 430 12.6 Exercises 431
    13 Deep Learning Tools – TensorFlow 433
    13.1 Neural Nets Review 433 13.1.1 Summary of Single Neuron Discriminator 433 13.1.2 Summary of Neural Net Discriminator and Back-Propagation 433 13.2 TensorFlow from Google 435 13.2.1 Installation/Setup 436 13.2.2 Example: Character Recognition 437 13.2.3 Example: Language Translation 440 13.2.4 TensorBoard and the TensorFlow Profiler 441 13.2.5 Tensor Cores 444 13.3 Exercises 444
    14 Nanopore Detection – A Case Study 445
    14.1 Standard Apparatus 447 14.1.1 Standard Operational and Physiological Buffer Conditions 448 14.1.2 α-Hemolysin Channel Stability – Introduction of Chaotropes 448 14.2 Controlling Nanopore Noise Sources and Choice of Aperture 449 14.3 Length Resolution of Individual DNA Hairpins 451 14.4 Detection of Single Nucleotide Differences (Large Changes in Structure) 454 14.5 Blockade Mechanism for 9bphp 455 14.6 Conformational Kinetics on Model Biomolecules 459 14.7 Channel Current Cheminformatics 460 14.7.1 Power Spectra and Standard EE Signal Analysis 460 14.7.2 Channel Current Cheminformatics for Single-Biomolecule/Mixture Identifications 462 14.7.3 Channel Current Cheminformatics: Feature Extraction by HMM 464 14.7.4 Bandwidth Limitations 465 14.8 Channel-Based Detection Mechanisms 467 14.8.1 Partitioning and Translocation-Based ND Biosensing Methods 467 14.8.2 Transduction Versus Translation 468 14.8.3 Single-Molecule Versus Ensemble 469 14.8.4 Biosensing with High Sensitivity in Presence of Interference 470 14.8.5 Nanopore Transduction Detection Methods 471 14.9 The NTD Nanoscope 474 14.9.1 Nanopore Transduction Detection (NTD) 475 14.9.2 NTD: A Versatile Platform for Biosensing 479 14.9.3 NTD Platform 481 14.9.4 NTD Operation 484 14.9.5 Driven Modulations 487 14.9.6 Driven Modulations with Multichannel Augmentation 490 14.10 NTD Biosensing Methods 495 14.10.1 Model Biosensor Based on Streptavidin and Biotin 495 14.10.2 Model System Based on DNA Annealing 501 14.10.3 Y-Aptamer with Use of Chaotropes to Improve Signal Resolution 506 14.10.4 Pathogen Detection, miRNA Detection, and miRNA Haplotyping 508 14.10.5 SNP Detection 510 14.10.6 Aptamer-Based Detection 512 14.10.7 Antibody-Based Detection 512 14.11 Exercises 516
    Appendix A: Python and Perl System Programming in Linux 519
    A.1 Getting Linux and Python in a Flash (Drive) 519 A.2 Linux and the Command Shell 520 A.3 Perl Review: I/O, Primitives, String Handling, Regex 521
    Appendix B: Physics 529
    B.1 The Calculus of Variations 529
    Appendix C: Math 531
    C.1 Martingales 531 C.2 Hoeffding Inequality 537
    References 541 Index 559

    15 in stock

    £97.16

  • Machine Learning with Dynamics 365 and Power Platform

    John Wiley & Sons Inc Machine Learning with Dynamics 365 and Power Platform

    7 in stock

    Book Synopsis
    Apply cutting-edge AI techniques to your Dynamics 365 environment to create new solutions to old business problems. In Machine Learning with Dynamics 365 and Power Platform: The Ultimate Guide to Apply Predictive Analytics, an accomplished team of digital and data analytics experts delivers a practical and comprehensive discussion of how to integrate AI Builder with Dataverse and Dynamics 365 to create real-world business solutions. It also walks you through how to build powerful machine learning models using Azure Data Lake, Databricks, and Azure Synapse Analytics. The book is filled with clear explanations, visualizations, and working examples that get you up and running in your development of supervised, unsupervised, and reinforcement learning techniques using Microsoft machine learning tools and technologies. These strategies will transform your business verticals, reducing costs and manual processes in finance and operations, retail, telecommunications, …
    Table of Contents
    Foreword vii Preface ix Acknowledgments xi About the Authors xiii
    Chapter 1: Dynamics 365, Power Platform, and Machine Learning 1
    Introduction to Dynamics 365 1 Introduction to Power Platform 6 What Is Machine Learning: How Has It Evolved? 11 Definition of Machine Learning 12
    Chapter 2: Artificial Intelligence and Pre-Built Machine Learning in Dynamics 365 33
    Azure AI Platform 33 Azure Machine Learning Service 41 Knowledge Mining 67
    Chapter 3: ML/AI Features and Their Applications in Dynamics 365 71
    Customer Insights 71 Customer Service Insights 77 Sales Insights 83 Product Insights 95 Virtual Agent for Customer Service in Dynamics 365 96 Artificial Intelligence in Power Apps with AI Builder 99 What Is Mixed Reality? 102
    Chapter 4: Dynamics 365 and Custom ML Models Using Azure ML 107
    Azure Machine Learning 108 Azure Machine Learning Studio 115 Azure Machine Learning Service 146
    Chapter 5: Deep Dive in Machine Learning Custom Models 149
    Azure CLI Extension 149 Visual Studio Code 153
    Chapter 6: Machine Learning with Dynamics 365 Use Cases 161
    ML for Finance 162 Demand Forecasting 190 Connected Store 192 ML for Human Resources Management 195 Machine Learning at the Workplace 200
    Afterword 205 Index 207

    7 in stock

    £28.49

  • Machine Learning and Data Science

    John Wiley & Sons Inc Machine Learning and Data Science

    15 in stock

    Book Synopsis
    MACHINE LEARNING AND DATA SCIENCE Written and edited by a team of experts in the field, this collection of papers reflects the current state of machine learning and data science for industry, government, and academia. Machine learning (ML) and data science (DS) are very active topics with an extensive scope, both in terms of theory and applications. They have been established as an important emergent scientific field and paradigm driving research evolution in such disciplines as statistics, computing science and intelligence science, and practical transformation in such domains as science, engineering, the public sector, business, social science, and lifestyle. Simultaneously, their applications provide important challenges that can often be addressed only with innovative machine learning and data science algorithms. These algorithms encompass the larger areas of artificial intelligence, data analytics, machine learning, pattern…
    Table of Contents
    Preface xiii Book Description xv
    1 Machine Learning: An Introduction to Reinforcement Learning 1
    Sheikh Amir Fayaz, Dr. S Jahangeer Sidiq, Dr. Majid Zaman and Dr. Muheet Ahmed Butt
    1.1 Introduction 2 1.2 Reinforcement Learning Paradigm: Characteristics 11 1.3 Reinforcement Learning Problem 12 1.4 Applications of Reinforcement Learning 15
    2 Data Analysis Using Machine Learning: An Experimental Study on UFC 23
    Prashant Varshney, Charu Gupta, Palak Girdhar, Anand Mohan, Prateek Agrawal and Vishu Madaan
    2.1 Introduction 23 2.2 Proposed Methodology 25 2.3 Experimental Evaluation and Visualization 31 2.4 Conclusion 44
    3 Dawn of Big Data with Hadoop and Machine Learning 47
    Balraj Singh and Harsh Kumar Verma
    3.1 Introduction 48 3.2 Big Data 48 3.3 Machine Learning 53 3.4 Hadoop 55 3.5 Studies Representing Applications of Machine Learning Techniques with Hadoop 57 3.6 Conclusion 61
    4 Industry 4.0: Smart Manufacturing in Industries – The Future 67
    Dr. K. Bhavana Raj
    4.1 Introduction 67
    5 COVID-19 Curve Exploration Using Time Series Data for India 75
    Apeksha Rustagi, Divyata, Deepali Virmani, Ashok Kumar, Charu Gupta, Prateek Agrawal and Vishu Madaan
    5.1 Introduction 76 5.2 Materials and Methods 77 5.3 Conclusion and Future Work 86
    6 A Case Study on Cluster Based Application Mapping Method for Power Optimization in 2D NoC 89
    Aravindhan Alagarsamy and Sundarakannan Mahilmaran
    6.1 Introduction 90 6.2 Concept Graph Theory and NoC 91 6.3 Related Work 94 6.4 Proposed Methodology 97 6.5 Experimental Results and Discussion 100 6.6 Conclusion 105
    7 Healthcare Case Study: COVID-19 Detection, Prevention Measures, and Prediction Using Machine Learning & Deep Learning Algorithms 109
    Devesh Kumar Srivastava, Mansi Chouhan and Amit Kumar Sharma
    7.1 Introduction 110 7.2 Literature Review 111 7.3 Coronavirus (COVID-19) 112 7.4 Proposed Working Model 118 7.5 Experimental Evaluation 130 7.6 Conclusion and Future Work 132
    8 Analysis and Impact of Climatic Conditions on COVID-19 Using Machine Learning 135
    Prasenjit Das, Shaily Jain, Shankar Shambhu and Chetan Sharma
    8.1 Introduction 136 8.2 COVID-19 138 8.3 Experimental Setup 141 8.4 Proposed Methodology 142 8.5 Results and Discussion 143 8.6 Conclusion and Future Work 143
    9 Application of Hadoop in Data Science 147
    Balraj Singh and Harsh K. Verma
    9.1 Introduction 148 9.2 Hadoop Distributed Processing 153 9.3 Using Hadoop with Data Science 160 9.4 Conclusion 164
    10 Networking Technologies and Challenges for Green IoT Applications in Urban Climate 169
    Saikat Samanta, Achyuth Sarkar and Aditi Sharma
    10.1 Introduction 170 10.2 Background 170 10.3 Green Internet of Things 173 10.4 Different Energy-Efficient Implementations of Green IoT 177 10.5 Recycling Principle for Green IoT 178 10.6 Green IoT Architecture of Urban Climate 179 10.7 Challenges of Green IoT in Urban Climate 181 10.8 Discussion & Future Research Directions 181 10.9 Conclusion 182
    11 Analysis of Human Activity Recognition Algorithms Using Trimmed Video Datasets 185
    Disha G. Deotale, Madhushi Verma, P. Suresh, Divya Srivastava, Manish Kumar and Sunil Kumar Jangir
    11.1 Introduction 186 11.2 Contributions in the Field of Activity Recognition from Video Sequences 190 11.3 Conclusion 212
    12 Solving Direction Sense Based Reasoning Problems Using Natural Language Processing 215
    Vishu Madaan, Komal Sood, Prateek Agrawal, Ashok Kumar, Charu Gupta, Anand Sharma and Awadhesh Kumar Shukla
    12.1 Introduction 216 12.2 Methodology 217 12.3 Description of Position 222 12.4 Results and Discussion 224 12.5 Graphical User Interface 225
    13 Drowsiness Detection Using Digital Image Processing 231
    G. Ramesh Babu, Chinthagada Naveen Kumar and Maradana Harish
    13.1 Introduction 231 13.2 Literature Review 232 13.3 Proposed System 233 13.4 The Dataset 234 13.5 Working Principle 235 13.6 Convolutional Neural Networks 239 13.6.1 CNN Design for Decisive State of the Eye 239 13.7 Performance Evaluation 240 13.8 Conclusion 242 References 242
    Index 245

    15 in stock

    £153.90

  • Data Analytics in Bioinformatics

    John Wiley & Sons Inc Data Analytics in Bioinformatics

    15 in stock

    Book Synopsis

    15 in stock

    £164.66

  • Advanced Analytics and Deep Learning Models

    John Wiley & Sons Inc Advanced Analytics and Deep Learning Models

    15 in stock

    Book Synopsis
    Advanced Analytics and Deep Learning Models The book provides readers with an in-depth understanding of the concepts and technologies behind analytics and deep learning in many useful real-world applications, such as e-healthcare, transportation, agriculture, and the stock market. Advanced analytics is a mixture of machine learning, artificial intelligence, graphs, text mining, data mining, and semantic analysis: an approach to data analysis that goes beyond traditional business intelligence to semi-autonomous and autonomous analysis of data using different techniques and tools. Deep learning and data analysis are both central to data science. Almost all private and public organizations collect large amounts of domain-specific data, and many companies, small and large, are exploring this data for existing and future technology. Deep learning also exploits large amounts of unsupervised data, making it beneficial and effective for big data. …
    Table of Contents
    Preface xix
    Part 1: Introduction to Computer Vision 1
    1 Artificial Intelligence in Language Learning: Practices and Prospects 3
    Khushboo Kuddus
    1.1 Introduction 4 1.2 Evolution of CALL 5 1.3 Defining Artificial Intelligence 7 1.4 Historical Overview of AI in Education and Language Learning 7 1.5 Implication of Artificial Intelligence in Education 8 1.5.1 Machine Translation 9 1.5.2 Chatbots 9 1.5.3 Automatic Speech Recognition Tools 9 1.5.4 Autocorrect/Automatic Text Evaluator 11 1.5.5 Vocabulary Training Applications 12 1.5.6 Google Docs Speech Recognition 12 1.5.7 Language Muse™ Activity Palette 13 1.6 Artificial Intelligence Tools Enhance the Teaching and Learning Processes 13 1.6.1 Autonomous Learning 13 1.6.2 Produce Smart Content 13 1.6.3 Task Automation 13 1.6.4 Access to Education for Students with Physical Disabilities 14 1.7 Conclusion 14 References 15
    2 Real Estate Price Prediction Using Machine Learning Algorithms 19
    Palak Furia and Anand Khandare
    2.1 Introduction 20 2.2 Literature Review 20 2.3 Proposed Work 21 2.3.1 Methodology 21 2.3.2 Work Flow 22 2.3.3 The Dataset 22 2.3.4 Data Handling 23 2.3.4.1 Missing Values and Data Cleaning 23 2.3.4.2 Feature Engineering 24 2.3.4.3 Removing Outliers 25 2.4 Algorithms 27 2.4.1 Linear Regression 27 2.4.2 LASSO Regression 27 2.4.3 Decision Tree 28 2.4.4 Support Vector Machine 28 2.4.5 Random Forest Regressor 28 2.4.6 XGBoost 29 2.5 Evaluation Metrics 29 2.6 Result of Prediction 30 References 31
    3 Multi-Criteria–Based Entertainment Recommender System Using Clustering Approach 33
    Chandramouli Das, Abhaya Kumar Sahoo and Chittaranjan Pradhan
    3.1 Introduction 34 3.2 Work Related to Multi-Criteria Recommender Systems 35 3.3 Working Principle 38 3.3.1 Modeling Phase 39 3.3.2 Prediction Phase 39 3.3.3 Recommendation Phase 40 3.3.4 Content-Based Approach 40 3.3.5 Collaborative Filtering Approach 41 3.3.6 Knowledge-Based Filtering Approach 41 3.4 Comparison Among Different Methods 42 3.4.1 MCRS Exploiting Aspect-Based Sentiment Analysis 42 3.4.1.1 Discussion and Result 43 3.4.2 User Preference Learning in Multi-Criteria Recommendation Using Stacked Autoencoders by Tallapally et al. 46 3.4.2.1 Dataset and Evaluation Matrix 46 3.4.2.2 Training Setting 49 3.4.2.3 Result 49 3.4.3 Situation-Aware Multi-Criteria Recommender System: Using Criteria Preferences as Contexts by Zheng 49 3.4.3.1 Evaluation Setting 50 3.4.3.2 Experimental Result 50 3.4.4 Utility-Based Multi-Criteria Recommender Systems by Zheng 51 3.4.4.1 Experimental Dataset 51 3.4.4.2 Experimental Result 52 3.4.5 Multi-Criteria Clustering Approach by Wasid and Ali 53 3.4.5.1 Experimental Evaluation 53 3.4.5.2 Result and Analysis 53 3.5 Advantages of Multi-Criteria Recommender System 54 3.5.1 Revenue 57 3.5.2 Customer Satisfaction 57 3.5.3 Personalization 57 3.5.4 Discovery 58 3.5.5 Provide Reports 58 3.6 Challenges of Multi-Criteria Recommender System 58 3.6.1 Cold Start Problem 58 3.6.2 Sparsity Problem 59 3.6.3 Scalability 59 3.6.4 Over Specialization Problem 59 3.6.5 Diversity 59 3.6.6 Serendipity 59 3.6.7 Privacy 60 3.6.8 Shilling Attacks 60 3.6.9 Gray Sheep 60 3.7 Conclusion 60 References 61
    4 Adoption of Machine/Deep Learning in Cloud With a Case Study on Discernment of Cervical Cancer 65
    Jyothi A. P., S. Usha and Archana H. R.
    4.1 Introduction 66 4.2 Background Study 69 4.3 Overview of Machine Learning/Deep Learning 72 4.4 Connection Between Machine Learning/Deep Learning and Cloud Computing 74 4.5 Machine Learning/Deep Learning Algorithm 74 4.5.1 Supervised Learning 74 4.5.2 Unsupervised Learning 77 4.5.3 Reinforcement or Semi-Supervised Learning 77 4.5.3.1 Outline of ML Algorithms 77 4.6 A Project Implementation on Discernment of Cervical Cancer by Using Machine/Deep Learning in Cloud 93 4.6.1 Proposed Work 94 4.6.1.1 MRI Dataset 94 4.6.1.2 Preprocessing 95 4.6.1.3 Feature Extraction 96 4.6.2 Design Methodology and Implementation 97 4.6.3 Results 100 4.7 Applications 101 4.7.1 Cognitive Cloud 102 4.7.2 Chatbots and Smart Personal Assistants 103 4.7.3 IoT Cloud 103 4.7.4 Business Intelligence 103 4.7.5 AI-as-a-Service 104 4.8 Advantages of Adoption of Cloud in Machine Learning/Deep Learning 104 4.9 Conclusion 105 References 106
    5 Machine Learning and Internet of Things–Based Models for Healthcare Monitoring 111
    Shruti Kute, Amit Kumar Tyagi, Aswathy S.U. and Shaveta Malik
    5.1 Introduction 112 5.2 Literature Survey 113 5.3 Interpretable Machine Learning in Healthcare 114 5.4 Opportunities in Machine Learning for Healthcare 116 5.5 Why Combine IoT and ML? 119 5.5.1 ML-IoT Models for Healthcare Monitoring 119 5.6 Applications of Machine Learning in Medical and Pharma 121 5.7 Challenges and Future Research Direction 122 5.8 Conclusion 123 References 123
    6 Machine Learning–Based Disease Diagnosis and Prediction for E-Healthcare System 127
    Shruti Suhas Kute, Shreyas Madhav A. V., Shabnam Kumari and Aswathy S. U.
    6.1 Introduction 128 6.2 Literature Survey 129 6.3 Machine Learning Applications in Biomedical Imaging 132 6.4 Brain Tumor Classification Using Machine Learning and IoT 134 6.5 Early Detection of Dementia Disease Using Machine Learning and IoT-Based Applications 135 6.6 IoT and Machine Learning-Based Disease Prediction and Diagnosis System for EHRs 137 6.7 Machine Learning Applications for Real-Time Monitoring of Arrhythmia Patients Using IoT 140 6.8 IoT and Machine Learning–Based System for Medical Data Mining 141 6.9 Conclusion and Future Works 143 References 144
    Part 2: Introduction to Deep Learning and its Models 149
    7 Deep Learning Methods for Data Science 151
    K. Indira, Kusumika Krori Dutta, S. Poornima and Sunny Arokia Swamy Bellary
    7.1 Introduction 152 7.2 Convolutional Neural Network 152 7.2.1 Architecture 154 7.2.2 Implementation of CNN 154 7.2.3 Simulation Results 157 7.2.4 Merits and Demerits 158 7.2.5 Applications 159 7.3 Recurrent Neural Network 159 7.3.1 Architecture 160 7.3.2 Types of Recurrent Neural Networks 161 7.3.2.1 Simple Recurrent Neural Networks 161 7.3.2.2 Long Short-Term Memory Networks 162 7.3.2.3 Gated Recurrent Units (GRUs) 164 7.3.3 Merits and Demerits 167 7.3.3.1 Merits 167 7.3.3.2 Demerits 167 7.3.4 Applications 167 7.4 Denoising Autoencoder 168 7.4.1 Architecture 169 7.4.2 Merits and Demerits 169 7.4.3 Applications 170 7.5 Recursive Neural Network (RCNN) 170 7.5.1 Architecture 170 7.5.2 Merits and Demerits 172 7.5.3 Applications 172 7.6 Deep Reinforcement Learning 173 7.6.1 Architecture 174 7.6.2 Merits and Demerits 174 7.6.3 Applications 174 7.7 Deep Belief Networks (DBNs) 175 7.7.1 Architecture 176 7.7.2 Merits and Demerits 176 7.7.3 Applications 176 7.8 Conclusion 177 References 177
    8 A Proposed LSTM-Based Neuromarketing Model for Consumer Emotional State Evaluation Using EEG 181
    Rupali Gill and Jaiteg Singh
    8.1 Introduction 182 8.2 Background and Motivation 183 8.2.1 Emotion Model 183 8.2.2 Neuromarketing and BCI 184 8.2.3 EEG Signal 185 8.3 Related Work 185 8.3.1 Machine Learning 186 8.3.2 Deep Learning 191 8.3.2.1 Fast Feed Neural Networks 193 8.3.2.2 Recurrent Neural Networks 193 8.3.2.3 Convolutional Neural Networks 194 8.4 Methodology of Proposed System 195 8.4.1 DEAP Dataset 196 8.4.2 Analyzing the Dataset 196 8.4.3 Long Short-Term Memory 197 8.4.4 Experimental Setup 197 8.4.5 Data Set Collection 197 8.5 Results and Discussions 198 8.5.1 LSTM Model Training and Accuracy 198 8.6 Conclusion 199 References 199
    9 An Extensive Survey of Applications of Advanced Deep Learning Algorithms on Detection of Neurodegenerative Diseases and the Tackling Procedure in Their Treatment Protocol 207
    Vignesh Baalaji S., Vergin Raja Sarobin M., L. Jani Anbarasi, Graceline Jasmine S. and Rukmani P.
    9.1 Introduction 208 9.2 Story of Alzheimer’s Disease 208 9.3 Datasets 210 9.3.1 ADNI 210 9.3.2 OASIS 210 9.4 Story of Parkinson’s Disease 211 9.5 A Review on Learning Algorithms 212 9.5.1 Convolutional Neural Network (CNN) 212 9.5.2 Restricted Boltzmann Machine 213 9.5.3 Siamese Neural Networks 213 9.5.4 Residual Network (ResNet) 214 9.5.5 U-Net 214 9.5.6 LSTM 214 9.5.7 Support Vector Machine 215 9.6 A Review on Methodologies 215 9.6.1 Prediction of Alzheimer’s Disease 215 9.6.2 Prediction of Parkinson’s Disease 221 9.6.3 Detection of Attacks on Deep Brain Stimulation 223 9.7 Results and Discussion 224 9.8 Conclusion 224 References 227
    10 Emerging Innovations in the Near Future Using Deep Learning Techniques 231
    Akshara Pramod, Harsh Sankar Naicker and Amit Kumar Tyagi
    10.1 Introduction 232 10.2 Related Work 234 10.3 Motivation 235 10.4 Future With Deep Learning/Emerging Innovations in Near Future With Deep Learning 236 10.4.1 Deep Learning for Image Classification and Processing 237 10.4.2 Deep Learning for Medical Image Recognition 237 10.4.3 Computational Intelligence for Facial Recognition 238 10.4.4 Deep Learning for Clinical and Health Informatics 238 10.4.5 Fuzzy Logic for Medical Applications 239 10.4.6 Other Intelligent-Based Methods for Biomedical and Healthcare 239 10.4.7 Other Applications 239 10.5 Open Issues and Future Research Directions 244 10.5.1 Joint Representation Learning From User and Item Content Information 244 10.5.2 Explainable Recommendation With Deep Learning 245 10.5.3 Going Deeper for Recommendation 245 10.5.4 Machine Reasoning for Recommendation 246 10.5.5 Cross Domain Recommendation With Deep Neural Networks 246 10.5.6 Deep Multi-Task Learning for Recommendation 247 10.5.7 Scalability of Deep Neural Networks for Recommendation 247 10.5.8 Urge for a Better and Unified Evaluation 248 10.6 Deep Learning: Opportunities and Challenges 249 10.7 Argument with Machine Learning and Other Available Techniques 250 10.8 Conclusion With Future Work 251 Acknowledgement 252 References 252
    11 Optimization Techniques in Deep Learning Scenarios: An Empirical Comparison 255
    Ajeet K. Jain, PVRD Prasad Rao and K. Venkatesh Sharma
    11.1 Introduction 256 11.1.1 Background and Related Work 256 11.2 Optimization and Role of Optimizer in DL 258 11.2.1 Deep Network Architecture 259 11.2.2 Proper Initialization 260 11.2.3 Representation, Optimization, and Generalization 261 11.2.4 Optimization Issues 261 11.2.5 Stochastic GD Optimization 262 11.2.6 Stochastic Gradient Descent with Momentum 263 11.2.7 SGD With Nesterov Momentum 264 11.3 Various Optimizers in DL Practitioner Scenario 265 11.3.1 AdaGrad Optimizer 265 11.3.2 RMSProp 267 11.3.3 Adam 267 11.3.4 AdaMax 269 11.3.5 AMSGrad 269 11.4 Recent Optimizers in the Pipeline 270 11.4.1 EVE 270 11.4.2 RAdam 271 11.4.3 MAS (Mixing ADAM and SGD) 271 11.4.4 Lottery Ticket Hypothesis 272 11.5 Experiment and Results 273 11.5.1 Web Resource 273 11.5.2 Resource 277 11.6 Discussion and Conclusion 278 References 279
    Part 3: Introduction to Advanced Analytics 283
    12 Big Data Platforms 285
    Sharmila Gaikwad and Jignesh Patil
    12.1 Visualization in Big Data 286 12.1.1 Introduction to Big Data 286 12.1.2 Techniques of Visualization
287 12.1.3 Case Study on Data Visualization 302 12.2 Security in Big Data 305 12.2.1 Introduction of Data Breach 305 12.2.2 Data Security Challenges 306 12.2.3 Data Breaches 307 12.2.4 Data Security Achieved 307 12.2.5 Findings: Case Study of Data Breach 309 12.3 Conclusion 309 References 309 13 Smart City Governance Using Big Data Technologies 311K. Raghava Rao and D. Sateesh Kumar 13.1 Objective 312 13.2 Introduction 312 13.3 Literature Survey 314 13.4 Smart Governance Status 314 13.4.1 International 314 13.4.2 National 316 13.5 Methodology and Implementation Approach 318 13.5.1 Data Generation 319 13.5.2 Data Acquisition 319 13.5.3 Data Analytics 319 13.6 Outcome of the Smart Governance 322 13.7 Conclusion 323 References 323 14 Big Data Analytics With Cloud, Fog, and Edge Computing 325Deepti Goyal, Amit Kumar Tyagi and Aswathy S. U. 14.1 Introduction to Cloud, Fog, and Edge Computing 326 14.2 Evolution of Computing Terms and Its Related Works 330 14.3 Motivation 332 14.4 Importance of Cloud, Fog, and Edge Computing in Various Applications 333 14.5 Requirement and Importance of Analytics (General) in Cloud, Fog, and Edge Computing 334 14.6 Existing Tools for Making a Reliable Communication and Discussion of a Use Case (with Respect to Cloud, Fog, and Edge Computing) 335 14.6.1 CloudSim 335 14.6.2 SPECI 336 14.6.3 Green Cloud 336 14.6.4 OCT (Open Cloud Testbed) 337 14.6.5 Open Cirrus 337 14.6.6 GroudSim 338 14.6.7 Network CloudSim 338 14.7 Tools Available for Advanced Analytics (for Big Data Stored in Cloud, Fog, and Edge Computing Environment) 338 14.7.1 Microsoft HDInsight 338 14.7.2 Skytree 339 14.7.3 Splice Machine 339 14.7.4 Spark 339 14.7.5 Apache SAMOA 339 14.7.6 Elastic Search 339 14.7.7 R-Programming 339 14.8 Importance of Big Data Analytics for Cyber-Security and Privacy for Cloud-IoT Systems 340 14.8.1 Risk Management 340 14.8.2 Predictive Models 340 14.8.3 Secure With Penetration Testing 340 14.8.4 Bottom Line 341 14.8.5 Others: Internet of 
Things-Based Intelligent Applications 341 14.9 An Use Case with Real World Applications (with Respect to Big Data Analytics) Related to Cloud, Fog, and Edge Computing 341 14.10 Issues and Challenges Faced by Big Data Analytics (in Cloud, Fog, and Edge Computing Environments) 342 14.10.1 Cloud Issues 343 14.11 Opportunities for the Future in Cloud, Fog, and Edge Computing Environments (or Research Gaps) 344 14.12 Conclusion 345 References 346 15 Big Data in Healthcare: Applications and Challenges 351V. Shyamala Susan, K. Juliana Gnana Selvi and Ir. Bambang Sugiyono Agus Purwono 15.1 Introduction 352 15.1.1 Big Data in Healthcare 352 15.1.2 The 5V’s Healthcare Big Data Characteristics 353 15.1.2.1 Volume 353 15.1.2.2 Velocity 353 15.1.2.3 Variety 353 15.1.2.4 Veracity 353 15.1.2.5 Value 353 15.1.3 Various Varieties of Big Data Analytical (BDA) in Healthcare 353 15.1.4 Application of Big Data Analytics in Healthcare 354 15.1.5 Benefits of Big Data in the Health Industry 355 15.2 Analytical Techniques for Big Data in Healthcare 356 15.2.1 Platforms and Tools for Healthcare Data 357 15.3 Challenges 357 15.3.1 Storage Challenges 357 15.3.2 Cleaning 358 15.3.3 Data Quality 358 15.3.4 Data Security 358 15.3.5 Missing or Incomplete Data 358 15.3.6 Information Sharing 358 15.3.7 Overcoming the Big Data Talent and Cost Limitations 359 15.3.8 Financial Obstructions 359 15.3.9 Volume 359 15.3.10 Technology Adoption 360 15.4 What is the Eventual Fate of Big Data in Healthcare Services? 360 15.5 Conclusion 361 References 361 16 The Fog/Edge Computing: Challenges, Serious Concerns, and the Road Ahead 365Varsha. R., Siddharth M. 
Nair and Amit Kumar Tyagi 16.1 Introduction 366 16.1.1 Organization of the Work 368 16.2 Motivation 368 16.3 Background 369 16.4 Fog and Edge Computing–Based Applications 371 16.5 Machine Learning and Internet of Things–Based Cloud, Fog, and Edge Computing Applications 374 16.6 Threats Mitigated in Fog and Edge Computing–Based Applications 376 16.7 Critical Challenges and Serious Concerns Toward Fog/Edge Computing and Its Applications 378 16.8 Possible Countermeasures 381 16.9 Opportunities for 21st Century Toward Fog and Edge Computing 383 16.9.1 5G and Edge Computing as Vehicles for Transformation of Mobility in Smart Cities 383 16.9.2 Artificial Intelligence for Cloud Computing and Edge Computing 384 16.10 Conclusion 387 References 387 Index 391

    15 in stock

    £153.90

  • Data Mining and Machine Learning Applications

    John Wiley & Sons Inc Data Mining and Machine Learning Applications

    15 in stock

    Book Synopsis
    DATA MINING AND MACHINE LEARNING APPLICATIONS. The book elaborates in detail on the current needs of data mining and machine learning and promotes mutual understanding among researchers in different disciplines, thus facilitating research development and collaboration. Data, the latest currency of today's world, is the new gold. In this new form of gold, the most beautiful jewels are data analytics and machine learning. Data mining and machine learning are considered interdisciplinary fields: data mining is a subset of data analytics, and machine learning involves the use of algorithms that automatically improve through experience with data. Massive datasets can be classified and clustered to obtain accurate results; the most common techniques used include classification and clustering methods. Accuracy and error rates are calculated for regression, classification, and clustering to find actual results through algorithms like support vector machines and neural networks with forward

    Table of Contents
    Preface
    1 Introduction to Data Mining (Santosh R. Durugkar, Rohit Raja, Kapil Kumar Nagwanshi and Sandeep Kumar): 1.1 Introduction; 1.1.1 Data Mining; 1.2 Knowledge Discovery in Database (KDD); 1.2.1 Importance of Data Mining; 1.2.2 Applications of Data Mining; 1.2.3 Databases; 1.3 Issues in Data Mining; 1.4 Data Mining Algorithms; 1.5 Data Warehouse; 1.6 Data Mining Techniques; 1.7 Data Mining Tools; 1.7.1 Python for Data Mining; 1.7.2 KNIME; 1.7.3 Rapid Miner; References
    2 Classification and Mining Behavior of Data (Srinivas Konda, Kavitarani Balmuri and Kishore Kumar Mamidala): 2.1 Introduction; 2.2 Main Characteristics of Mining Behavioral Data; 2.2.1 Mining Dynamic/Streaming Data; 2.2.2 Mining Graph & Network Data; 2.2.3 Mining Heterogeneous/Multi-Source Information; 2.2.3.1 Multi-Source and Multidimensional Information; 2.2.3.2 Multi-Relational Data; 2.2.3.3 Background and Connected Data; 2.2.3.4 Complex Data, Sequences, and Events; 2.2.3.5 Data Protection and Morals; 2.2.4 Mining High Dimensional Data; 2.2.5 Mining Imbalanced Data; 2.2.5.1 The Class Imbalance Issue; 2.2.6 Mining Multimedia Data; 2.2.6.1 Common Applications of Multimedia Data Mining; 2.2.6.2 Multimedia Data Mining Utilizations; 2.2.6.3 Multimedia Database Management; 2.2.7 Mining Scientific Data; 2.2.8 Mining Sequential Data; 2.2.9 Mining Social Networks; 2.2.9.1 Social-Media Data Mining Reasons; 2.2.10 Mining Spatial and Temporal Data; 2.2.10.1 Utilizations of Spatial and Temporal Data Mining; 2.3 Research Method; 2.4 Results; 2.5 Discussion; 2.6 Conclusion; References
    3 A Comparative Overview of Hybrid Recommender Systems: Review, Challenges, and Prospects (Rakhi Seth and Aakanksha Sharaff): 3.1 Introduction; 3.2 Related Work on Different Recommender Systems; 3.2.1 Challenges in RS; 3.2.2 Research Questions and Architecture of This Paper; 3.2.3 Background; 3.2.3.1 The Architecture of Hybrid Approach; 3.2.4 Analysis; 3.2.4.1 Evaluation Measures; 3.2.5 Materials and Methods; 3.2.6 Comparative Analysis With Traditional Recommender System; 3.2.7 Practical Implications; 3.2.8 Conclusion & Future Work; References
    4 Stream Mining: Introduction, Tools & Techniques and Applications (Naresh Kumar Nagwani): 4.1 Introduction; 4.2 Data Reduction: Sampling and Sketching; 4.2.1 Sampling; 4.2.2 Sketching; 4.3 Concept Drift; 4.4 Stream Mining Operations; 4.4.1 Clustering; 4.4.2 Classification; 4.4.3 Outlier Detection; 4.4.4 Frequent Itemsets Mining; 4.5 Tools & Techniques; 4.5.1 Implementation in Java; 4.5.2 Implementation in Python; 4.5.3 Implementation in R; 4.6 Applications; 4.6.1 Stock Prediction in Share Market; 4.6.2 Weather Forecasting System; 4.6.3 Finding Trending News and Events; 4.6.4 Analyzing User Behavior in Electronic Commerce Site (Click Stream); 4.6.5 Pollution Control Systems; 4.7 Conclusion; References
    5 Data Mining Tools and Techniques: Clustering Analysis (Rohit Miri, Amit Kumar Dewangan, S.R. Tandan, Priya Bhatnagar and Hiral Raja): 5.1 Introduction; 5.2 Data Mining Task; 5.2.1 Data Summarization; 5.2.2 Data Clustering; 5.2.3 Classification of Data; 5.2.4 Data Regression; 5.2.5 Data Association; 5.3 Data Mining Algorithms and Methodologies; 5.3.1 Data Classification Algorithm; 5.3.2 Prediction; 5.3.3 Association Rule; 5.3.4 Neural Network; 5.3.4.1 Data Clustering Algorithm; 5.3.5 In-Depth Study of Gathering Techniques; 5.3.6 Data Partitioning Method; 5.3.7 Hierarchical Method; 5.3.8 Framework-Based Method; 5.3.9 Model-Based Method; 5.3.10 Thickness-Based Method; 5.4 Clustering the Nearest Neighbor; 5.4.1 Fuzzy Clustering; 5.4.2 K-Means Algorithm; 5.5 Data Mining Applications; 5.6 Materials and Strategies for Document Clustering; 5.6.1 Features Generation; 5.7 Discussion and Results; 5.7.1 Discussion; 5.7.2 Conclusion; References
    6 Data Mining Implementation Process (Kamal K. Mehta, Rajesh Tiwari and Nishant Behar): 6.1 Introduction; 6.2 Data Mining Historical Trends; 6.3 Processes of Data Analysis; 6.3.1 Data Attack; 6.3.2 Data Mixing; 6.3.3 Data Collection; 6.3.4 Data Conversion; 6.3.4.1 Data Mining; 6.3.4.2 Design Evaluation; 6.3.4.3 Data Illustration; 6.3.4.4 Implementation of Data Mining in the Cross-Industry Standard Process; 6.3.5 Business Understanding; 6.3.6 Data Understanding; 6.3.7 Data Preparation; 6.3.8 Modeling; 6.3.9 Evaluation; 6.3.10 Deployment; 6.3.11 Contemporary Developments; 6.3.12 An Assortment of Data Mining; 6.3.12.1 Using Computational & Connectivity Tools; 6.3.12.2 Web Mining; 6.3.12.3 Comparative Statement; 6.3.13 Advantages of Data Mining; 6.3.14 Drawbacks of Data Mining; 6.3.15 Data Mining Applications; 6.3.16 Methodology; 6.3.17 Results; 6.3.18 Conclusion and Future Scope; References
    7 Predictive Analytics in IT Service Management (ITSM) (Sharon Christa I.L. and Suma V.): 7.1 Introduction; 7.2 Analytics: An Overview; 7.2.1 Predictive Analytics; 7.3 Significance of Predictive Analytics in ITSM; 7.4 Ticket Analytics: A Case Study; 7.4.1 Input Parameters; 7.4.2 Predictive Modeling; 7.4.3 Random Forest Model; 7.4.4 Performance of the Predictive Model; 7.5 Conclusion; References
    8 Modified Cross-Sell Model for Telecom Service Providers Using Data Mining Techniques (K. Ramya Laxmi, Sumit Srivastava, K. Madhuravani, S. Pallavi and Omprakash Dewangan): 8.1 Introduction; 8.2 Literature Review; 8.3 Methodology and Implementation; 8.3.1 Selection of the Independent Variables; 8.4 Data Partitioning; 8.4.1 Interpreting the Results of Logistic Regression Model; 8.5 Conclusions; References
    9 Inductive Learning Including Decision Tree and Rule Induction Learning (Raj Kumar Patra, A. Mahendar and G. Madhukar): 9.1 Introduction; 9.2 The Inductive Learning Algorithm (ILA); 9.3 Proposed Algorithms; 9.4 Divide & Conquer Algorithm; 9.4.1 Decision Tree; 9.5 Decision Tree Algorithms; 9.5.1 ID3 Algorithm; 9.5.2 Separate and Conquer Algorithm; 9.5.3 RULE EXTRACTOR-1; 9.5.4 Inductive Learning Applications; 9.5.4.1 Education; 9.5.4.2 Making Credit Decisions; 9.5.5 Multidimensional Databases and OLAP; 9.5.6 Fuzzy Choice Trees; 9.5.7 Fuzzy Choice Tree Development From a Multidimensional Database; 9.5.8 Execution and Results; 9.6 Conclusion and Future Work; References
    10 Data Mining for Cyber-Physical Systems (M. Varaprasad Rao, D. Anji Reddy, Anusha Ampavathi and Shaik Munawar): 10.1 Introduction; 10.1.1 Models of Cyber-Physical System; 10.1.2 Statistical Model-Based Methodologies; 10.1.3 Spatial-and-Transient Closeness-Based Methodologies; 10.2 Feature Recovering Methodologies; 10.3 CPS vs. IT Systems; 10.4 Collections, Sources, and Generations of Big Data for CPS; 10.4.1 Establishing Conscious Computation and Information Systems; 10.5 Spatial Prediction; 10.5.1 Global Optimization; 10.5.2 Big Data Analysis CPS; 10.5.3 Analysis of Cloud Data; 10.5.4 Analysis of Multi-Cloud Data; 10.6 Clustering of Big Data; 10.7 NoSQL; 10.8 Cyber Security and Privacy Big Data; 10.8.1 Protection of Big Computing and Storage; 10.8.2 Big Data Analytics Protection; 10.8.3 Big Data CPS Applications; 10.9 Smart Grids; 10.10 Military Applications; 10.11 City Management; 10.12 Clinical Applications; 10.13 Calamity Events; 10.14 Data Streams Clustering by Sensors; 10.15 The Flocking Model; 10.16 Calculation Depiction; 10.17 Initialization; 10.18 Representative Maintenance and Clustering; 10.19 Results; 10.20 Conclusion; References
    11 Developing Decision Making and Risk Mitigation: Using CRISP-Data Mining (Vivek Parganiha, Soorya Prakash Shukla and Lokesh Kumar Sharma): 11.1 Introduction; 11.2 Background; 11.3 Methodology of CRISP-DM; 11.4 Stage One: Determine Business Objectives; 11.4.1 What Are the Ideal Yields of the Venture?; 11.4.2 Evaluate the Current Circumstance; 11.4.3 Realizes Data Mining Goals; 11.5 Stage Two: Data Sympathetic; 11.5.1 Portray Data; 11.5.2 Investigate Facts; 11.5.3 Confirm Data Quality; 11.5.4 Data Excellence Description; 11.6 Stage Three: Data Preparation; 11.6.1 Select Your Data; 11.6.2 The Data Is Processed; 11.6.3 Data Needed to Build; 11.6.4 Combine Information; 11.7 Stage Four: Modeling; 11.7.1 Select Displaying Strategy; 11.7.2 Produce an Investigation Plan; 11.7.3 Fabricate Ideal; 11.7.4 Evaluation Model; 11.8 Stage Five: Evaluation; 11.8.1 Assess Your Outcomes; 11.8.2 Survey Measure; 11.8.3 Decide on the Subsequent Stages; 11.9 Stage Six: Deployment; 11.9.1 Plan Arrangement; 11.9.2 Plan Observing and Support; 11.9.3 Produce the Last Report; 11.9.4 Audit Venture; 11.10 Data on ERP Systems; 11.11 Usage of CRISP-DM Methodology; 11.12 Modeling; 11.12.1 Association Rule Mining (ARM) or Association Analysis; 11.12.2 Classification Algorithms; 11.12.3 Regression Algorithms; 11.12.4 Clustering Algorithms; 11.13 Assessment; 11.14 Distribution; 11.15 Results and Discussion; 11.16 Conclusion; References
    12 Human–Machine Interaction and Visual Data Mining (Upasana Sinha, Akanksha Gupta, Samera Khan, Shilpa Rani and Swati Jain): 12.1 Introduction; 12.2 Related Researches; 12.2.1 Data Mining; 12.2.2 Data Visualization; 12.2.3 Visual Learning; 12.3 Visual Genes; 12.4 Visual Hypotheses; 12.5 Visual Strength and Conditioning; 12.6 Visual Optimization; 12.7 The Vis 09 Model; 12.8 Graphic Monitoring and Contact With Human–Computer; 12.9 Mining HCI Information Using Inductive Deduction Viewpoint; 12.10 Visual Data Mining Methodology; 12.11 Machine Learning Algorithms for Hand Gesture Recognition; 12.12 Learning; 12.13 Detection; 12.14 Recognition; 12.15 Proposed Methodology for Hand Gesture Recognition; 12.16 Result; 12.17 Conclusion; References
    13 MSDTrA: A Boosting Based-Transfer Learning Approach for Class Imbalanced Skin Lesion Dataset for Melanoma Detection (Lokesh Singh, Rekh Ram Janghel and Satya Prakash Sahu): 13.1 Introduction; 13.2 Literature Survey; 13.3 Methods and Material; 13.3.1 Proposed Methodology: Multi Source Dynamic TrAdaBoost Algorithm; 13.4 Experimental Results; 13.5 Libraries Used; 13.6 Comparing Algorithms Based on Decision Boundaries; 13.7 Evaluating Results; 13.8 Conclusion; References
    14 New Algorithms and Technologies for Data Mining (Padma Bonde, Latika Pinjarkar, Korhan Cengiz, Aditi Shukla and Maguluri Sudeep Joel): 14.1 Introduction; 14.2 Machine Learning Algorithms; 14.3 Supervised Learning; 14.4 Unsupervised Learning; 14.5 Semi-Supervised Learning; 14.6 Regression Algorithms; 14.7 Case-Based Algorithms; 14.8 Regularization Algorithms; 14.9 Decision Tree Algorithms; 14.10 Bayesian Algorithms; 14.11 Clustering Algorithms; 14.12 Association Rule Learning Algorithms; 14.13 Artificial Neural Network Algorithms; 14.14 Deep Learning Algorithms; 14.15 Dimensionality Reduction Algorithms; 14.16 Ensemble Algorithms; 14.17 Other Machine Learning Algorithms; 14.18 Data Mining Assignments; 14.19 Data Mining Models; 14.20 Non-Parametric & Parametric Models; 14.21 Flexible vs. Restrictive Methods; 14.22 Unsupervised vs. Supervised Learning; 14.23 Data Mining Methods; 14.24 Proposed Algorithm; 14.24.1 Organization Formation Procedure; 14.25 The Regret of Learning Phase; 14.26 Conclusion; References
    15 Classification of EEG Signals for Detection of Epileptic Seizure Using Restricted Boltzmann Machine Classifier (Sudesh Kumar, Rekh Ram Janghel and Satya Prakash Sahu): 15.1 Introduction; 15.2 Related Work; 15.3 Material and Methods; 15.3.1 Dataset Description; 15.3.2 Proposed Methodology; 15.3.3 Normalization; 15.3.4 Preprocessing Using PCA; 15.3.5 Restricted Boltzmann Machine (RBM); 15.3.6 Stochastic Binary Units (Bernoulli Variables); 15.3.7 Training; 15.3.7.1 Gibbs Sampling; 15.3.7.2 Contrastive Divergence (CD); 15.4 Experimental Framework; 15.5 Experimental Results and Discussion; 15.5.1 Performance Measurement Criteria; 15.5.2 Experimental Results; 15.6 Discussion; 15.7 Conclusion; References
    16 An Enhanced Security of Women and Children Using Machine Learning and Data Mining Techniques (Nanda R. Wagh and Sanjay R. Sutar): 16.1 Introduction; 16.2 Related Work; 16.2.1 WoSApp; 16.2.2 Abhaya; 16.2.3 Women Empowerment; 16.2.4 Nirbhaya; 16.2.5 Glympse; 16.2.6 Fightback; 16.2.7 Versatile-Based; 16.2.8 RFID; 16.2.9 Self-Preservation Framework for Women With Area Following and SMS Alarming Through GSM Network; 16.2.10 Safe: A Women Security Framework; 16.2.11 Intelligent Safety System for Women Security; 16.2.12 A Mobile-Based Women Safety Application; 16.2.13 Self-Salvation: The Women's Security Module; 16.3 Issue and Solution; 16.3.1 Inspiration; 16.3.2 Issue Statement and Choice of Solution; 16.4 Selection of Data; 16.5 Pre-Preparation Data; 16.5.1 Simulation; 16.5.2 Assessment; 16.5.3 Forecast; 16.6 Application Development; 16.6.1 Methodology; 16.6.2 AI Model; 16.6.3 Innovations Used: The Proposed Application Has Utilized After Technologies; 16.7 Use Case for the Application; 16.7.1 Application Icon; 16.7.2 Enlistment Form; 16.7.3 Login Form; 16.7.4 Misconduct Place Detector; 16.7.5 Help Button; 16.8 Conclusion; References
    17 Conclusion and Future Direction in Data Mining and Machine Learning (Santosh R. Durugkar, Rohit Raja, Kapil Kumar Nagwanshi and Ramakant Chandrakar): 17.1 Introduction; 17.2 Machine Learning; 17.2.1 Neural Network; 17.2.2 Deep Learning; 17.2.3 Three Activities for Object Recognition; 17.3 Conclusion; References
    Index

    15 in stock

    £169.16

  • Multimedia Streaming in SDN/NFV and 5G Networks

    John Wiley & Sons Inc Multimedia Streaming in SDN/NFV and 5G Networks

    Out of stock

    Book Synopsis
    Multimedia Streaming in SDN/NFV and 5G Networks: a comprehensive overview of Quality of Experience control and management of multimedia services in future networks. In Multimedia Streaming in SDN/NFV and 5G Networks, renowned researchers deliver a high-level exploration of Quality of Experience (QoE) control and management solutions for multimedia services in future softwarized and virtualized 5G networks. The book offers coverage of network softwarization and virtualization technologies, including SDN, NFV, MEC, and Fog/Cloud Computing, as critical elements for the management of multimedia services in future networks, like 5G and 6G networks and beyond. Providing a thorough examination of end-to-end QoE control and management solutions in softwarized and virtualized networks, the book concludes with discussions of probable future challenges and research directions in emerging multimedia services and applications, 5G network management and orchestration, net

    Table of Contents
    Biography of Authors; Abstract; List of Figures; List of Tables; Preface; Acknowledgments; List of Acronyms
    1 5G Networks: 1.1 History of Mobile Communication Systems; 1.2 5G: Vision and Motivation; 1.3 5G Service Quality and Business Requirements; 1.3.1 High User Experienced Data Rate and Ultra-Low Latency; 1.3.2 Transparency, Consistency, User's QoE Personalization, and Service Differentiation; 1.3.3 Enhanced Security, Mobility, and Services Availability; 1.3.4 Seamless User Experience, Longer Battery Life, and Context Aware Networking; 1.3.5 Energy and Cost Efficiency, Network Scalability, and Flexibility; 1.4 5G Services, Applications, and Use Cases; 1.5 5G Standardization Activities; 1.6 Conclusion; Bibliography
    2 5G Network Management for Big Data Streaming Using Machine Learning: 2.1 Machine Learning for Multimedia Networking; 2.1.1 Data Collection; 2.1.2 Feature Engineering; 2.1.3 Ground Truth Quality Perception: Model Optimization and Training; 2.1.4 Performance Metrics, Testing, and Model Validation; 2.2 Machine Learning Paradigms; 2.3 Multimedia Big Data Streaming; 2.4 Deep Learning for IoT Big Data and Streaming Analytics; 2.5 Intelligent QoE-based Big Data Strategies and Multimedia Streaming in Future Softwarized Networks; 2.6 Optimization of Data Management with ML in Softwarized 5G Networks; 2.6.1 A Multimodal Learning Framework for Video QoE Prediction; 2.6.2 Supervised-Learning-based QoE Prediction of Video Streaming in Future Networks; 2.6.3 OTT Service Providers; 2.6.4 ISP/MNOs; 2.6.5 Information Flow; 2.7 Conclusion; Bibliography
    3 Quality of Experience Management of Multimedia Streaming Services: 3.1 Quality of Experience (QoE): Concepts and Definition; 3.1.1 Quality of Experience Influence Factors; 3.2 QoE Modeling and Assessment: Metrics and Models; 3.2.1 Subjective Quality Models; 3.2.2 Objective Quality Models; 3.2.3 Data-driven Quality Models; 3.3 QoE Measurement, Assessment, and Management; 3.4 QoE Optimization and Control of Multimedia Streaming Services; 3.4.1 Customer Experience Management and QoE Monitoring; 3.5 Conclusion; Bibliography
    4 Multimedia Streaming Services Over the Internet: 4.1 Internet-Based Video Streaming: An Overview; 4.1.1 A Brief History of Internet-Based Video Streaming; 4.2 HTTP Adaptive Streaming (HAS) Framework; 4.3 Server and Network-Assisted DASH (SAND); 4.4 Multimedia Delivery Chain and Service Management Issues; 4.5 Conclusion; Bibliography
    5 QoE Management of Multimedia Services Using Machine Learning in SDN/NFV 5G Networks: 5.1 QoE-Centric Routing Mechanisms for Improving Video Quality; 5.2 MPTCP/SR and QUIC Approaches for QoE Optimization; 5.3 Server and Network-Assisted Optimization Approaches Using SDN/NFV; 5.4 QoE-Centric Fairness and Personalized QoE Control; 5.4.1 QoE-driven Optimization Approaches for Video Streaming Using Machine Learning; 5.5 Conclusion; Bibliography
    6 Network Softwarization and Virtualization in Future Networks: The Promise of SDN, NFV, MEC, and Fog/Cloud Computing: 6.1 Network Softwarization: Concepts and Use Cases; 6.2 Network Softwarization and Virtualization Technologies for Future Communication Platforms; 6.2.1 Software-Defined Networking (SDN); 6.2.2 Standardization Activities of SDN; 6.2.3 Traffic Management Applications for Stateful SDN Data Plane; 6.2.4 Network Function Virtualization (NFV); 6.2.5 NFV Management and Orchestration (NFV MANO) Framework; 6.2.6 NFV Use Cases, Application Scenarios, and Implementation; 6.2.7 NFV Standardization Activities; 6.2.8 Network Hypervisors, Containers, and Virtual Machines; 6.2.9 Multiaccess Edge Computing (MEC): From Clouds to Edges; 6.3 Conclusion; Bibliography
    7 Management of Multimedia Services in Emerging Architectures Using Big Data Analytics: MEC, ICN, and Fog/Cloud Computing: 7.1 QoE-Aware/Driven Adaptive Streaming Over MEC Architectures; 7.2 QoE-aware Self-Driving 3D Network Architecture; 7.3 QoE-driven/aware Management Architecture Using ICN; 7.4 QoE-aware Adaptive Streaming over Cloud/Fog Computing; 7.5 Conclusion; Bibliography
    8 Emerging Applications and Services in Future 5G Networks: 8.1 QoE in Immersive AR/VR and Mulsemedia Applications; 8.2 QoE in Cloud Gaming Video Streaming Applications; 8.3 QoE in Light Field Applications; 8.4 Holographic and Future Media Communications; 8.5 Human-Centric Services and 3D Volumetric Video Streaming; 8.6 New Video Compression Standards Toward 6G Networks; 8.7 Conclusion; Bibliography
    9 5G Network Slicing Management Architectures and Implementations for Multimedia: 9.1 5G Network Slicing Architectures and Implementations; 9.1.1 Collaborative 5G Network Slicing Research Projects; 9.1.2 Open Source Orchestrators, Proof of Concepts, and Standardization Efforts; 9.2 5G Network Slicing Orchestration and Management in Multi-domain; 9.2.1 Reference Architecture for Data-Driven Network Management in 6G and Beyond Networks; 9.3 Conclusion; Bibliography
    10 QoE Management of Multimedia Service Challenges in 5G Networks: 10.1 QoE Management and Orchestration in Future Networks; 10.2 Immersive Media Experience in Future Softwarized and Beyond Networks; 10.3 Development of New Subjective Assessment Methodologies for Emerging Services and Applications; 10.4 QoE-aware Network Sharing and Slicing in Future Softwarized Networks; 10.5 QoE Measurement, Modeling, and Testing Issues Over 6G Networks; 10.6 QoE-Centric Business Models in Future Softwarized Network; 10.7 Novel QoE-driven Virtualized Multimedia 3D Services Delivery Schemes Over 6G Networks; 10.8 Novel 3D Cloud/Edge Models for Multimedia Applications and Elastic 3D Service Customization; 10.9 Security, Privacy, Trust; 10.10 Conclusion; Bibliography
    11 Multimedia QoE-Driven Services Delivery Toward 6G and Beyond Networks: 11.1 The Roads Toward 6G and Beyond Networks; 11.1.1 Holographic and Future Media Communications; 11.1.2 Human-Centric Services and 3D Volumetric Video Streaming; 11.1.3 Potential Features and Technologies in 6G Networks; 11.2 6G Innovative Network Architectures: From Network Softwarization to Intelligentization; 11.2.1 Network Management, Automation, and Orchestration; 11.2.2 Pervasive AI, ML, and Big Data Analytics; 11.2.3 New Protocol Stack and 3D Network Architectures; 11.3 6G Standardization Activities; 11.4 Conclusion; Bibliography
    12 Multimedia Streaming Services Delivery in 2030 and Beyond Networks: 12.1 The Future of the Video Streaming Industry: Market Growth and Trends Toward 2030; 12.2 Future 2030 Communication Network and Computing Scenarios; 12.3 New Paradigms of Internetworking for 2030 and Beyond Networks; 12.3.1 Next-Generation Human–Machine Interaction Network: A Human-centric Hyper-real Experience; 12.3.2 Multimedia Streaming Blockchain-enabled Resource Management and Sharing for 2030 and Beyond Networks; 12.4 A General QoE Provisioning Ecosystem for 2030 and Beyond Networks; 12.4.1 Customer QoE Quantification; 12.4.2 QoE Analysis, Assessment, Measurement, and Visualization; 12.4.3 5G Network Performance and QoE Optimization; 12.4.4 Multistakeholders Collaboration for Video Content Delivery and Network Infrastructures; 12.5 Conclusion; Bibliography
    Index

    Out of stock

    £92.70

  • Fundamentals and Methods of Machine and Deep Learning

    John Wiley & Sons Inc Fundamentals and Methods of Machine and Deep Learning

    Out of stock

    Book SynopsisFUNDAMENTALS AND METHODS OF MACHINE AND DEEP LEARNING The book provides a practical approach by explaining the concepts of machine learning and deep learning algorithms, evaluation of methodology advances, and algorithm demonstrations with applications. Over the past two decades, the field of machine learning and its subfield deep learning have played a main role in software applications development. Also, in recent research studies, they are regarded as one of the disruptive technologies that will transform our future life, business, and the global economy. The recent explosion of digital data in a wide variety of domains, including science, engineering, Internet of Things, biomedical, healthcare, and many business sectors, has declared the era of big data, which cannot be analysed by classical statistics but by the more modern, robust machine learning and deep learning techniques. Since machine learning learns from data rather than by programming hard-coded decisiTable of ContentsPreface xix 1 Supervised Machine Learning: Algorithms and Applications 1Shruthi H. Shetty, Sumiksha Shetty, Chandra Singh and Ashwath Rao 1.1 History 2 1.2 Introduction 2 1.3 Supervised Learning 4 1.4 Linear Regression (LR) 5 1.4.1 Learning Model 6 1.4.2 Predictions With Linear Regression 7 1.5 Logistic Regression 8 1.6 Support Vector Machine (SVM) 9 1.7 Decision Tree 11 1.8 Machine Learning Applications in Daily Life 12 1.8.1 Traffic Alerts (Maps) 12 1.8.2 Social Media (Facebook) 13 1.8.3 Transportation and Commuting (Uber) 13 1.8.4 Products Recommendations 13 1.8.5 Virtual Personal Assistants 13 1.8.6 Self-Driving Cars 14 1.8.7 Google Translate 14 1.8.8 Online Video Streaming (Netflix) 14 1.8.9 Fraud Detection 14 1.9 Conclusion 15 References 15 2 Zonotic Diseases Detection Using Ensemble Machine Learning Algorithms 17Bhargavi K. 
2.1 Introduction 18 2.2 Bayes Optimal Classifier 19 2.3 Bootstrap Aggregating (Bagging) 21 2.4 Bayesian Model Averaging (BMA) 22 2.5 Bayesian Classifier Combination (BCC) 24 2.6 Bucket of Models 26 2.7 Stacking 27 2.8 Efficiency Analysis 29 2.9 Conclusion 30 References 30 3 Model Evaluation 33Ravi Shekhar Tiwari 3.1 Introduction 34 3.2 Model Evaluation 34 3.2.1 Assumptions 36 3.2.2 Residual 36 3.2.3 Error Sum of Squares (Sse) 37 3.2.4 Regression Sum of Squares (Ssr) 37 3.2.5 Total Sum of Squares (Ssto) 37 3.3 Metric Used in Regression Model 38 3.3.1 Mean Absolute Error (Mae) 38 3.3.2 Mean Square Error (Mse) 39 3.3.3 Root Mean Square Error (Rmse) 41 3.3.4 Root Mean Square Logarithm Error (Rmsle) 42 3.3.5 R-Square (R2) 45 3.3.5.1 Problem With R-Square (R2) 46 3.3.6 Adjusted R-Square (R2) 46 3.3.7 Variance 47 3.3.8 AIC 48 3.3.9 BIC 49 3.3.10 ACP, Press, and R2-Predicted 49 3.3.11 Solved Examples 51 3.4 Confusion Metrics 52 3.4.1 How to Interpret the Confusion Metric? 53 3.4.2 Accuracy 55 3.4.2.1 Why Do We Need the Other Metric Along With Accuracy? 
56 3.4.3 True Positive Rate (TPR) 56 3.4.4 False Negative Rate (FNR) 57 3.4.5 True Negative Rate (TNR) 57 3.4.6 False Positive Rate (FPR) 58 3.4.7 Precision 58 3.4.8 Recall 59 3.4.9 Recall-Precision Trade-Off 60 3.4.10 F1-Score 61 3.4.11 F-Beta Score 61 3.4.12 Thresholding 63 3.4.13 AUC - ROC 64 3.4.14 AUC - PRC 65 3.4.15 Derived Metric From Recall, Precision, and F1-Score 67 3.4.16 Solved Examples 68 3.5 Correlation 70 3.5.1 Pearson Correlation 70 3.5.2 Spearman Correlation 71 3.5.3 Kendall’s Rank Correlation 73 3.5.4 Distance Correlation 74 3.5.5 Biweight Mid-Correlation 75 3.5.6 Gamma Correlation 76 3.5.7 Point Biserial Correlation 77 3.5.8 Biserial Correlation 78 3.5.9 Partial Correlation 78 3.6 Natural Language Processing (NLP) 78 3.6.1 N-Gram 79 3.6.2 BELU Score 79 3.6.2.1 BELU Score With N-Gram 80 3.6.3 Cosine Similarity 81 3.6.4 Jaccard Index 83 3.6.5 ROUGE 84 3.6.6 NIST 85 3.6.7 SQUAD 85 3.6.8 MACRO 86 3.7 Additional Metrics 86 3.7.1 Mean Reciprocal Rank (MRR) 86 3.7.2 Cohen Kappa 87 3.7.3 Gini Coefficient 87 3.7.4 Scale-Dependent Errors 87 3.7.5 Percentage Errors 88 3.7.6 Scale-Free Errors 88 3.8 Summary of Metric Derived from Confusion Metric 89 3.9 Metric Usage 90 3.10 Pro and Cons of Metrics 94 3.11 Conclusion 95 References 96 4 Analysis of M-SEIR and LSTM Models for the Prediction of COVID-19 Using RMSLE 101Archith S., Yukta C., Archana H.R. and Surendra H.H. 4.1 Introduction 101 4.2 Survey of Models 103 4.2.1 SEIR Model 103 4.2.2 Modified SEIR Model 103 4.2.3 Long Short-Term Memory (LSTM) 104 4.3 Methodology 106 4.3.1 Modified SEIR 106 4.3.2 LSTM Model 108 4.3.2.1 Data Pre-Processing 108 4.3.2.2 Data Shaping 109 4.3.2.3 Model Design 109 4.4 Experimental Results 111 4.4.1 Modified SEIR Model 111 4.4.2 LSTM Model 113 4.5 Conclusion 116 4.6 Future Work 116 References 118 5 The Significance of Feature Selection Techniques in Machine Learning 121N. Bharathi, B.S. Rishiikeshwer, T. Aswin Shriram, B. Santhi and G.R. 
Brindha 5.1 Introduction 122 5.2 Significance of Pre-Processing 122 5.3 Machine Learning System 123 5.3.1 Missing Values 123 5.3.2 Outliers 123 5.3.3 Model Selection 124 5.4 Feature Extraction Methods 124 5.4.1 Dimension Reduction 125 5.4.1.1 Attribute Subset Selection 126 5.4.2 Wavelet Transforms 127 5.4.3 Principal Components Analysis 127 5.4.4 Clustering 128 5.5 Feature Selection 128 5.5.1 Filter Methods 129 5.5.2 Wrapper Methods 129 5.5.3 Embedded Methods 130 5.6 Merits and Demerits of Feature Selection 131 5.7 Conclusion 131 References 132 6 Use of Machine Learning and Deep Learning in Healthcare—A Review on Disease Prediction System 135Radha R. and Gopalakrishnan R. 6.1 Introduction to Healthcare System 136 6.2 Causes for the Failure of the Healthcare System 137 6.3 Artificial Intelligence and Healthcare System for Predicting Diseases 138 6.3.1 Monitoring and Collection of Data 140 6.3.2 Storing, Retrieval, and Processing of Data 141 6.4 Facts Responsible for Delay in Predicting the Defects 142 6.5 Pre-Treatment Analysis and Monitoring 143 6.6 Post-Treatment Analysis and Monitoring 145 6.7 Application of ML and DL 145 6.7.1 ML and DL for Active Aid 145 6.7.1.1 Bladder Volume Prediction 147 6.7.1.2 Epileptic Seizure Prediction 148 6.8 Challenges and Future of Healthcare Systems Based on ML and DL 148 6.9 Conclusion 149 References 150 7 Detection of Diabetic Retinopathy Using Ensemble Learning Techniques 153Anirban Dutta, Parul Agarwal, Anushka Mittal, Shishir Khandelwal and Shikha Mehta 7.1 Introduction 153 7.2 Related Work 155 7.3 Methodology 155 7.3.1 Data Pre-Processing 155 7.3.2 Feature Extraction 161 7.3.2.1 Exudates 161 7.3.2.2 Blood Vessels 161 7.3.2.3 Microaneurysms 162 7.3.2.4 Hemorrhages 162 7.3.3 Learning 163 7.3.3.1 Support Vector Machines 163 7.3.3.2 K-Nearest Neighbors 163 7.3.3.3 Random Forest 164 7.3.3.4 AdaBoost 164 7.3.3.5 Voting Technique 164 7.4 Proposed Models 165 7.4.1 AdaNaive 165 7.4.2 AdaSVM 166 7.4.3 AdaForest 166 7.5 Experimental 
Results and Analysis 167 7.5.1 Dataset 167 7.5.2 Software and Hardware 167 7.5.3 Results 168 7.6 Conclusion 173 References 174 8 Machine Learning and Deep Learning for Medical Analysis—A Case Study on Heart Disease Data 177Swetha A.M., Santhi B. and Brindha G.R. 8.1 Introduction 178 8.2 Related Works 179 8.3 Data Pre-Processing 181 8.3.1 Data Imbalance 181 8.4 Feature Selection 182 8.4.1 Extra Tree Classifier 182 8.4.2 Pearson Correlation 183 8.4.3 Forward Stepwise Selection 183 8.4.4 Chi-Square Test 184 8.5 ML Classifiers Techniques 184 8.5.1 Supervised Machine Learning Models 185 8.5.1.1 Logistic Regression 185 8.5.1.2 SVM 186 8.5.1.3 Naive Bayes 186 8.5.1.4 Decision Tree 186 8.5.1.5 K-Nearest Neighbors (KNN) 187 8.5.2 Ensemble Machine Learning Model 187 8.5.2.1 Random Forest 187 8.5.2.2 AdaBoost 188 8.5.2.3 Bagging 188 8.5.3 Neural Network Models 189 8.5.3.1 Artificial Neural Network (ANN) 189 8.5.3.2 Convolutional Neural Network (CNN) 189 8.6 Hyperparameter Tuning 190 8.6.1 Cross-Validation 190 8.7 Dataset Description 190 8.7.1 Data Pre-Processing 193 8.7.2 Feature Selection 195 8.7.3 Model Selection 196 8.7.4 Model Evaluation 197 8.8 Experiments and Results 197 8.8.1 Study 1: Survival Prediction Using All Clinical Features 198 8.8.2 Study 2: Survival Prediction Using Age, Ejection Fraction and Serum Creatinine 198 8.8.3 Study 3: Survival Prediction Using Time, Ejection Fraction, and Serum Creatinine 199 8.8.4 Comparison Between Study 1, Study 2, and Study 3 203 8.8.5 Comparative Study on Different Sizes of Data 204 8.9 Analysis 206 8.10 Conclusion 206 References 207 9 A Novel Convolutional Neural Network Model to Predict Software Defects 211Kumar Rajnish, Vandana Bhattacharjee and Mansi Gupta 9.1 Introduction 212 9.2 Related Works 213 9.2.1 Software Defect Prediction Based on Deep Learning 213 9.2.2 Software Defect Prediction Based on Deep Features 214 9.2.3 Deep Learning in Software Engineering 214 9.3 Theoretical Background 215 9.3.1 Software Defect 
Prediction 215 9.3.2 Convolutional Neural Network 216 9.4 Experimental Setup 218 9.4.1 Data Set Description 218 9.4.2 Building Novel Convolutional Neural Network (NCNN) Model 219 9.4.3 Evaluation Parameters 222 9.4.4 Results and Analysis 224 9.5 Conclusion and Future Scope 230 References 233 10 Predictive Analysis on Online Television Videos Using Machine Learning Algorithms 237Rebecca Jeyavadhanam B., Ramalingam V.V., Sugumaran V. and Rajkumar D. 10.1 Introduction 238 10.1.1 Overview of Video Analytics 241 10.1.2 Machine Learning Algorithms 242 10.1.2.1 Decision Tree C4.5 243 10.1.2.2 J48 Graft 243 10.1.2.3 Logistic Model Tree 244 10.1.2.4 Best First Tree 244 10.1.2.5 Reduced Error Pruning Tree 244 10.1.2.6 Random Forest 244 10.2 Proposed Framework 245 10.2.1 Data Collection 246 10.2.2 Feature Extraction 246 10.2.2.1 Block Intensity Comparison Code 247 10.2.2.2 Key Frame Rate 248 10.3 Feature Selection 249 10.4 Classification 250 10.5 Online Incremental Learning 251 10.6 Results and Discussion 253 10.7 Conclusion 255 References 256 11 A Combinational Deep Learning Approach to Visually Evoked EEG-Based Image Classification 259Nandini Kumari, Shamama Anwar and Vandana Bhattacharjee 11.1 Introduction 260 11.2 Literature Review 262 11.3 Methodology 264 11.3.1 Dataset Acquisition 264 11.3.2 Pre-Processing and Spectrogram Generation 265 11.3.3 Classification of EEG Spectrogram Images With Proposed CNN Model 266 11.3.4 Classification of EEG Spectrogram Images With Proposed Combinational CNN+LSTM Model 268 11.4 Result and Discussion 270 11.5 Conclusion 272 References 273 12 Application of Machine Learning Algorithms With Balancing Techniques for Credit Card Fraud Detection: A Comparative Analysis 277Shiksha 12.1 Introduction 278 12.2 Methods and Techniques 280 12.2.1 Research Approach 280 12.2.2 Dataset Description 282 12.2.3 Data Preparation 283 12.2.4 Correlation Between Features 284 12.2.5 Splitting the Dataset 285 12.2.6 Balancing Data 285 12.2.6.1 Oversampling of 
Minority Class 286 12.2.6.2 Under-Sampling of Majority Class 286 12.2.6.3 Synthetic Minority Over Sampling Technique 286 12.2.6.4 Class Weight 287 12.2.7 Machine Learning Algorithms (Models) 288 12.2.7.1 Logistic Regression 288 12.2.7.2 Support Vector Machine 288 12.2.7.3 Decision Tree 290 12.2.7.4 Random Forest 292 12.2.8 Tuning of Hyperparameters 294 12.2.9 Performance Evaluation of the Models 294 12.3 Results and Discussion 298 12.3.1 Results Using Balancing Techniques 299 12.3.2 Result Summary 299 12.4 Conclusions 305 12.4.1 Future Recommendations 305 References 306 13 Crack Detection in Civil Structures Using Deep Learning 311Bijimalla Shiva Vamshi Krishna, Rishiikeshwer B.S., J. Sanjay Raju, N. Bharathi, C. Venkatasubramanian and G.R. Brindha 13.1 Introduction 312 13.2 Related Work 312 13.3 Infrared Thermal Imaging Detection Method 314 13.4 Crack Detection Using CNN 314 13.4.1 Model Creation 316 13.4.2 Activation Functions (AF) 317 13.4.3 Optimizers 322 13.4.4 Transfer Learning 322 13.5 Results and Discussion 322 13.6 Conclusion 323 References 323 14 Measuring Urban Sprawl Using Machine Learning 327Keerti Kulkarni and P. A. Vijaya 14.1 Introduction 327 14.2 Literature Survey 328 14.3 Remotely Sensed Images 329 14.4 Feature Selection 331 14.4.1 Distance-Based Metric 331 14.5 Classification Using Machine Learning Algorithms 332 14.5.1 Parametric vs. Non-Parametric Algorithms 332 14.5.2 Maximum Likelihood Classifier 332 14.5.3 k-Nearest Neighbor Classifiers 334 14.5.4 Evaluation of the Classifiers 334 14.5.4.1 Precision 334 14.5.4.2 Recall 335 14.5.4.3 Accuracy 335 14.5.4.4 F1-Score 335 14.6 Results 335 14.7 Discussion and Conclusion 338 Acknowledgements 338 References 338 15 Application of Deep Learning Algorithms in Medical Image Processing: A Survey 341Santhi B., Swetha A.M. and Ashutosh A.M. 
15.1 Introduction 342 15.2 Overview of Deep Learning Algorithms 343 15.2.1 Supervised Deep Neural Networks 343 15.2.1.1 Convolutional Neural Network 343 15.2.1.2 Transfer Learning 344 15.2.1.3 Recurrent Neural Network 344 15.2.2 Unsupervised Learning 345 15.2.2.1 Autoencoders 345 15.2.2.2 GANs 345 15.3 Overview of Medical Images 346 15.3.1 MRI Scans 346 15.3.2 CT Scans 347 15.3.3 X-Ray Scans 347 15.3.4 PET Scans 347 15.4 Scheme of Medical Image Processing 348 15.4.1 Formation of Image 348 15.4.2 Image Enhancement 349 15.4.3 Image Analysis 349 15.4.4 Image Visualization 349 15.5 Anatomy-Wise Medical Image Processing With Deep Learning 349 15.5.1 Brain Tumor 352 15.5.2 Lung Nodule Cancer Detection 357 15.5.3 Breast Cancer Segmentation and Detection 362 15.5.4 Heart Disease Prediction 364 15.5.5 COVID-19 Prediction 370 15.6 Conclusion 372 References 372 16 Simulation of Self-Driving Cars Using Deep Learning 379Rahul M. K., Praveen L. Uppunda, Vinayaka Raju S., Sumukh B. and C. Gururaj 16.1 Introduction 380 16.2 Methodology 380 16.2.1 Behavioral Cloning 380 16.2.2 End-to-End Learning 380 16.3 Hardware Platform 381 16.4 Related Work 382 16.5 Pre-Processing 382 16.5.1 Lane Feature Extraction 382 16.5.1.1 Canny Edge Detector 383 16.5.1.2 Hough Transform 383 16.5.1.3 Raw Image Without Pre-Processing 384 16.6 Model 384 16.6.1 CNN Architecture 385 16.6.2 Multilayer Perceptron Model 385 16.6.3 Regression vs. Classification 385 16.6.3.1 Regression 386 16.6.3.2 Classification 386 16.7 Experiments 387 16.8 Results 387 16.9 Conclusion 394 References 394 17 Assistive Technologies for Visual, Hearing, and Speech Impairments: Machine Learning and Deep Learning Solutions 397Shahira K. C., Sruthi C. J. and Lijiya A. 
17.1 Introduction 397 17.2 Visual Impairment 398 17.2.1 Conventional Assistive Technology for the VIP 399 17.2.1.1 Way Finding 399 17.2.1.2 Reading Assistance 402 17.2.2 The Significance of Computer Vision and Deep Learning in AT of VIP 403 17.2.2.1 Navigational Aids 403 17.2.2.2 Scene Understanding 405 17.2.2.3 Reading Assistance 406 17.2.2.4 Wearables 408 17.3 Verbal and Hearing Impairment 410 17.3.1 Assistive Listening Devices 410 17.3.2 Alerting Devices 411 17.3.3 Augmentative and Alternative Communication Devices 411 17.3.3.1 Sign Language Recognition 412 17.3.4 Significance of Machine Learning and Deep Learning in Assistive Communication Technology 417 17.4 Conclusion and Future Scope 418 References 418 18 Case Studies: Deep Learning in Remote Sensing 425Emily Jenifer A. and Sudha N. 18.1 Introduction 426 18.2 Need for Deep Learning in Remote Sensing 427 18.3 Deep Neural Networks for Interpreting Earth Observation Data 427 18.3.1 Convolutional Neural Network 427 18.3.2 Autoencoder 428 18.3.3 Restricted Boltzmann Machine and Deep Belief Network 429 18.3.4 Generative Adversarial Network 430 18.3.5 Recurrent Neural Network 431 18.4 Hybrid Architectures for Multi-Sensor Data Processing 432 18.5 Conclusion 434 References 434 Index 439

    Out of stock

    £168.26

  • Deep Learning for Targeted Treatments

    John Wiley & Sons Inc Deep Learning for Targeted Treatments

    15 in stock

    Book SynopsisDEEP LEARNING FOR TARGETED TREATMENTS The book provides the direction for future research in deep learning in terms of its role in targeted treatment, biological systems, site-specific drug delivery, risk assessment in therapy, etc. Deep Learning for Targeted Treatments describes the importance of the deep learning framework for patient care, disease imaging/detection, and health management. Since deep learning can and does play a major role in a patient's healthcare management by controlling drug delivery to targeted tissues or organs, the main focus of the book is to leverage the various prospects of the DL framework for targeted therapy of various diseases. In terms of its industrial significance, this general-purpose automatic learning procedure is being widely implemented in pharmaceutical healthcare. Audience The book will be immensely interesting and useful to researchers and those working in the areas of clinical research, disease management, ph…

    Table of Contents
    Preface xvii Acknowledgement xix 1 Deep Learning and Site-Specific Drug Delivery: The Future and Intelligent Decision Support for Pharmaceutical Manufacturing Science 1 Dhanalekshmi Unnikrishnan Meenakshi, Selvasudha Nandakumar, Arul Prakash Francis, Pushpa Sweety, Shivkanya Fuloria, Neeraj Kumar Fuloria, Vetriselvan Subramaniyan and Shah Alam Khan 1.1 Introduction 2 1.2 Drug Discovery, Screening and Repurposing 5 1.3 DL and Pharmaceutical Formulation Strategy 11 1.3.1 DL in Dose and Formulation Prediction 11 1.3.2 DL in Dissolution and Release Studies 15 1.3.3 DL in the Manufacturing Process 16 1.4 Deep Learning Models for Nanoparticle-Based Drug Delivery 19 1.4.1 Nanoparticles With High Drug Delivery Capacities Using Perturbation Theory 20 1.4.2 Artificial Intelligence and Drug Delivery Algorithms 21 1.4.3 Nanoinformatics 22 1.5 Model Prediction for Site-Specific Drug Delivery 23 1.5.1 Prediction of Mode and a Site-Specific Action 23 1.5.2 Precision Medicine 26 1.6 Future Scope and 
Challenges 27 1.7 Conclusion 29 References 30 2 Role of Deep Learning, Blockchain and Internet of Things in Patient Care 39 Akanksha Sharma, Rishabha Malviya and Sonali Sundram 2.1 Introduction 40 2.2 IoT and WBAN in Healthcare Systems 42 2.2.1 IoT in Healthcare 42 2.2.2 WBAN 44 2.2.2.1 Key Features of Medical Networks in the Wireless Body Area 44 2.2.2.2 Data Transmission & Storage Health 45 2.2.2.3 Privacy and Security Concerns in Big Data 45 2.3 Blockchain Technology in Healthcare 46 2.3.1 Importance of Blockchain 46 2.3.2 Role of Blockchain in Healthcare 47 2.3.3 Benefits of Blockchain in Healthcare Applications 48 2.3.4 Elements of Blockchain 49 2.3.5 Situation Awareness and Healthcare Decision Support with Combined Machine Learning and Semantic Modeling 51 2.3.6 Mobile Health and Remote Monitoring 53 2.3.7 Different Mobile Health Application with Description of Usage in Area of Application 54 2.3.8 Patient-Centered Blockchain Mode 55 2.3.9 Electronic Medical Record 57 2.3.9.1 The Most Significant Barriers to Adoption Are 60 2.3.9.2 Concern Regarding Negative Unintended Consequences of Technology 60 2.4 Deep Learning in Healthcare 62 2.4.1 Deep Learning Models 63 2.4.1.1 Recurrent Neural Networks (RNN) 63 2.4.1.2 Convolutional Neural Networks (CNN) 64 2.4.1.3 Deep Belief Network (DBN) 65 2.4.1.4 Contrasts Between Models 66 2.4.1.5 Use of Deep Learning in Healthcare 66 2.5 Conclusion 70 2.6 Acknowledgments 70 References 70 3 Deep Learning on Site-Specific Drug Delivery System 77 Prem Shankar Mishra, Rakhi Mishra and Rupa Mazumder 3.1 Introduction 78 3.2 Deep Learning 81 3.2.1 Types of Algorithms Used in Deep Learning 81 3.2.1.1 Convolutional Neural Networks (CNNs) 82 3.2.1.2 Long Short-Term Memory Networks (LSTMs) 83 3.2.1.3 Recurrent Neural Networks 83 3.2.1.4 Generative Adversarial Networks (GANs) 84 3.2.1.5 Radial Basis Function Networks 84 3.2.1.6 Multilayer Perceptron 85 3.2.1.7 Self-Organizing Maps 85 3.2.1.8 Deep Belief Networks 85 3.3 Machine Learning 
and Deep Learning Comparison 86 3.4 Applications of Deep Learning in Drug Delivery System 87 3.5 Conclusion 90 References 90 4 Deep Learning Advancements in Target Delivery 101 Sudhanshu Mishra, Palak Gupta, Smriti Ojha, Vijay Sharma, Vicky Anthony and Disha Sharma 4.1 Introduction: Deep Learning and Targeted Drug Delivery 102 4.2 Different Models/Approaches of Deep Learning and Targeting Drug 104 4.3 QSAR Model 105 4.3.1 Model of Deep Long-Term Short-Term Memory 105 4.3.2 RNN Model 107 4.3.3 CNN Model 108 4.4 Deep Learning Process Applications in Pharmaceutical 109 4.5 Techniques for Predicting Pharmacotherapy 109 4.6 Approach to Diagnosis 110 4.7 Application 113 4.7.1 Deep Learning in Drug Discovery 114 4.7.2 Medical Imaging and Deep Learning Process 115 4.7.3 Deep Learning in Diagnostic and Screening 116 4.7.4 Clinical Trials Using Deep Learning Models 116 4.7.5 Learning for Personalized Medicine 117 4.8 Conclusion 121 Acknowledgment 122 References 122 5 Deep Learning and Precision Medicine: Lessons to Learn for the Preeminent Treatment for Malignant Tumors 127 Selvasudha Nandakumar, Shah Alam Khan, Poovi Ganesan, Pushpa Sweety, Arul Prakash Francis, Mahendran Sekar, Rukkumani Rajagopalan and Dhanalekshmi Unnikrishnan Meenakshi 5.1 Introduction 128 5.2 Role of DL in Gene Identification, Unique Genomic Analysis, and Precise Cancer Diagnosis 132 5.2.1 Gene Identification and Genome Data 133 5.2.2 Image Diagnosis 135 5.2.3 Radiomics, Radiogenomics, and Digital Biopsy 137 5.2.4 Medical Image Analysis in Mammography 138 5.2.5 Magnetic Resonance Imaging 139 5.2.6 CT Imaging 140 5.3 DL in Next-Generation Sequencing, Biomarkers, and Clinical Validation 141 5.3.1 Next-Generation Sequencing 141 5.3.2 Biomarkers and Clinical Validation 142 5.4 DL and Translational Oncology 144 5.4.1 Prediction 144 5.4.2 Segmentation 146 5.4.3 Knowledge Graphs and Cancer Drug Repurposing 147 5.4.4 Automated Treatment Planning 149 5.4.5 Clinical Benefits 150 5.5 DL in Clinical Trials—A 
Necessary Paradigm Shift 152 5.6 Challenges and Limitations 155 5.7 Conclusion 157 References 157 6 Personalized Therapy Using Deep Learning Advances 171 Nishant Gaur, Rashmi Dharwadkar and Jinsu Thomas 6.1 Introduction 172 6.2 Deep Learning 174 6.2.1 Convolutional Neural Networks 175 6.2.2 Autoencoders 180 6.2.3 Deep Belief Network (DBN) 182 6.2.4 Deep Reinforcement Learning 184 6.2.5 Generative Adversarial Network 186 6.2.6 Long Short-Term Memory Networks 188 References 191 7 Tele-Health Monitoring Using Artificial Intelligence Deep Learning Framework 199 Swati Verma, Rishabha Malviya, Md Aftab Alam and Bhuneshwar Dutta Tripathi 7.1 Introduction 200 7.2 Artificial Intelligence 200 7.2.1 Types of Artificial Intelligence 201 7.2.1.1 Machine Intelligence 201 7.2.1.2 Types of Machine Intelligence 203 7.2.2 Applications of Artificial Intelligence 204 7.2.2.1 Role in Healthcare Diagnostics 205 7.2.2.2 AI in Telehealth 205 7.2.2.3 Role in Structural Health Monitoring 205 7.2.2.4 Role in Remote Medicare Management 206 7.2.2.5 Predictive Analysis Using Big Data 207 7.2.2.6 AI’s Role in Virtual Monitoring of Patients 208 7.2.2.7 Functions of Devices 208 7.2.2.8 Clinical Outcomes Through Remote Patient Monitoring 210 7.2.2.9 Clinical Decision Support 211 7.2.3 Utilization of Artificial Intelligence in Telemedicine 211 7.2.3.1 Artificial Intelligence–Assisted Telemedicine 212 7.2.3.2 Telehealth and New Care Models 213 7.2.3.3 Strategy of Telecare Domain 214 7.2.3.4 Role of AI-Assisted Telemedicine in Various Domains 216 7.3 AI-Enabled Telehealth: Social and Ethical Considerations 218 7.4 Conclusion 219 References 220 8 Deep Learning Framework for Cancer Diagnosis and Treatment 229 Shiv Bahadur and Prashant Kumar 8.1 Deep Learning: An Emerging Field for Cancer Management 230 8.2 Deep Learning Framework in Diagnosis and Treatment of Cancer 232 8.3 Applications of Deep Learning in Cancer Diagnosis 233 8.3.1 Medical Imaging Through Artificial Intelligence 234 8.3.2 Biomarkers 
Identification in the Diagnosis of Cancer Through Deep Learning 234 8.3.3 Digital Pathology Through Deep Learning 235 8.3.4 Application of Artificial Intelligence in Surgery 236 8.3.5 Histopathological Images Using Deep Learning 237 8.3.6 MRI and Ultrasound Images Through Deep Learning 237 8.4 Clinical Applications of Deep Learning in the Management of Cancer 238 8.5 Ethical Considerations in Deep Learning–Based Robotic Therapy 239 8.6 Conclusion 240 Acknowledgments 240 References 241 9 Applications of Deep Learning in Radiation Therapy 247 Akanksha Sharma, Ashish Verma, Rishabha Malviya and Shalini Yadav 9.1 Introduction 248 9.2 History of Radiotherapy 250 9.3 Principal of Radiotherapy 251 9.4 Deep Learning 251 9.5 Radiation Therapy Techniques 254 9.5.1 External Beam Radiation Therapy 257 9.5.2 Three-Dimensional Conformal Radiation Therapy (3D-CRT) 259 9.5.3 Intensity Modulated Radiation Therapy (IMRT) 260 9.5.4 Image-Guided Radiation Therapy (IGRT) 261 9.5.5 Intraoperative Radiation Therapy (IORT) 263 9.5.6 Brachytherapy 265 9.5.7 Stereotactic Radiosurgery (SRS) 268 9.6 Different Role of Deep Learning with Corresponding Role of Medical Physicist 269 9.6.1 Deep Learning in Patient Assessment 269 9.6.1.1 Radiotherapy Results Prediction 269 9.6.1.2 Respiratory Signal Prediction 271 9.6.2 Simulation Computed Tomography 271 9.6.3 Targets and Organs-at-Risk Segmentation 273 9.6.4 Treatment Planning 274 9.6.4.1 Beam Angle Optimization 274 9.6.4.2 Dose Prediction 276 9.6.5 Other Role of Deep Learning in Corresponds with Medical Physicists 277 9.7 Conclusion 280 References 281 10 Application of Deep Learning in Radiation Therapy 289 Shilpa Rawat, Shilpa Singh, Md. 
Aftab Alam and Rishabha Malviya 10.1 Introduction 290 10.2 Radiotherapy 291 10.3 Principle of Deep Learning and Machine Learning 293 10.3.1 Deep Neural Networks (DNN) 294 10.3.2 Convolutional Neural Network 295 10.4 Role of AI and Deep Learning in Radiation Therapy 295 10.5 Platforms for Deep Learning and Tools for Radiotherapy 297 10.6 Radiation Therapy Implementation in Deep Learning 300 10.6.1 Deep Learning and Imaging Techniques 301 10.6.2 Image Segmentation 301 10.6.3 Lesion Segmentation 302 10.6.4 Computer-Aided Diagnosis 302 10.6.5 Computer-Aided Detection 303 10.6.6 Quality Assurance 304 10.6.7 Treatment Planning 305 10.6.8 Treatment Delivery 305 10.6.9 Response to Treatment 306 10.7 Prediction of Outcomes 307 10.7.1 Toxicity 309 10.7.2 Survival and the Ability to Respond 310 10.8 Deep Learning in Conjunction With Radiomics 312 10.9 Planning for Treatment 314 10.9.1 Optimization of Beam Angle 315 10.9.2 Prediction of Dose 315 10.10 Deep Learning’s Challenges and Future Potential 316 10.11 Conclusion 317 References 318 11 Deep Learning Framework for Cancer 333 Pratishtha 11.1 Introduction 334 11.2 Brief History of Deep Learning 335 11.3 Types of Deep Learning Methods 336 11.4 Applications of Deep Learning 339 11.4.1 Toxicity Detection for Different Chemical Structures 339 11.4.2 Mitosis Detection 340 11.4.3 Radiology or Medical Imaging 341 11.4.4 Hallucination 342 11.4.5 Next-Generation Sequencing (NGS) 342 11.4.6 Drug Discovery 343 11.4.7 Sequence or Video Generation 343 11.4.8 Other Applications 343 11.5 Cancer 343 11.5.1 Factors 344 11.5.1.1 Heredity 345 11.5.1.2 Ionizing Radiation 345 11.5.1.3 Chemical Substances 345 11.5.1.4 Dietary Factors 345 11.5.1.5 Estrogen 346 11.5.1.6 Viruses 346 11.5.1.7 Stress 347 11.5.1.8 Age 347 11.5.2 Signs and Symptoms of Cancer 347 11.5.3 Types of Cancer Treatment Available 348 11.5.3.1 Surgery 348 11.5.3.2 Radiation Therapy 348 11.5.3.3 Chemotherapy 348 11.5.3.4 Immunotherapy 348 11.5.3.5 Targeted Therapy 349 11.5.3.6 
Hormone Therapy 349 11.5.3.7 Stem Cell Transplant 349 11.5.3.8 Precision Medicine 349 11.5.4 Types of Cancer 349 11.5.4.1 Carcinoma 349 11.5.4.2 Sarcoma 349 11.5.4.3 Leukemia 350 11.5.4.4 Lymphoma and Myeloma 350 11.5.4.5 Central Nervous System (CNS) Cancers 350 11.5.5 The Development of Cancer (Pathogenesis) 350 11.6 Role of Deep Learning in Various Types of Cancer 350 11.6.1 Skin Cancer 351 11.6.1.1 Common Symptoms of Melanoma 351 11.6.1.2 Types of Skin Cancer 352 11.6.1.3 Prevention 353 11.6.1.4 Treatment 353 11.6.2 Deep Learning in Skin Cancer 354 11.6.3 Pancreatic Cancer 354 11.6.3.1 Symptoms of Pancreatic Cancer 355 11.6.3.2 Causes or Risk Factors of Pancreatic Cancer 355 11.6.3.3 Treatments of Pancreatic Cancer 355 11.6.4 Deep Learning in Pancreatic Cancer 355 11.6.5 Tobacco-Driven Lung Cancer 357 11.6.5.1 Symptoms of Lung Cancer 357 11.6.5.2 Causes or Risk Factors of Lung Cancer 358 11.6.5.3 Treatments Available for Lung Cancer 358 11.6.5.4 Deep Learning in Lung Cancer 358 11.6.6 Breast Cancer 359 11.6.6.1 Symptoms of Breast Cancer 360 11.6.6.2 Causes or Risk Factors of Breast Cancer 360 11.6.6.3 Treatments Available for Breast Cancer 361 11.6.7 Deep Learning in Breast Cancer 361 11.6.8 Prostate Cancer 362 11.6.9 Deep Learning in Prostate Cancer 362 11.7 Future Aspects of Deep Learning in Cancer 363 11.8 Conclusion 363 References 363 12 Cardiovascular Disease Prediction Using Deep Neural Network for Older People 369 Nagarjuna Telagam, B.Venkata Kranti and Nikhil Chandra Devarasetti 12.1 Introduction 370 12.2 Proposed System Model 375 12.2.1 Decision Tree Algorithm 375 12.2.1.1 Confusion Matrix 376 12.3 Random Forest Algorithm 381 12.4 Variable Importance for Random Forests 383 12.5 The Proposed Method Using a Deep Learning Model 384 12.5.1 Prevention of Overfitting 386 12.5.2 Batch Normalization 386 12.5.3 Dropout Technique 386 12.6 Results and Discussions 386 12.6.1 Linear Regression 386 12.6.2 Decision Tree Classifier 388 12.6.3 Voting Classifier 
389 12.6.4 Bagging Classifier 389 12.6.5 Naïve Bayes 390 12.6.6 Logistic Regression 390 12.6.7 Extra Trees Classifier 391 12.6.8 K-Nearest Neighbor [KNN] Algorithm 391 12.6.9 Adaboost Classifier 392 12.6.10 Light Gradient Boost Classifier 393 12.6.11 Gradient Boosting Classifier 393 12.6.12 Stochastic Gradient Descent Algorithm 393 12.6.13 Linear Support Vector Classifier 394 12.6.14 Support Vector Machines 394 12.6.15 Gaussian Process Classification 395 12.6.16 Random Forest Classifier 395 12.7 Evaluation Metrics 396 12.8 Conclusion 401 References 402 13 Machine Learning: The Capabilities and Efficiency of Computers in Life Sciences 407 Shalini Yadav, Saurav Yadav, Shobhit Prakash Srivastava, Saurabh Kumar Gupta and Sudhanshu Mishra 13.1 Introduction 408 13.2 Supervised Learning 410 13.2.1 Workflow of Supervised Learning 410 13.2.2 Decision Tree 410 13.2.3 Support Vector Machine (SVM) 411 13.2.4 Naive Bayes 413 13.3 Deep Learning: A New Era of Machine Learning 414 13.4 Deep Learning in Artificial Intelligence (AI) 416 13.5 Using ML to Enhance Preventive and Treatment Insights 417 13.6 Different Additional Emergent Machine Learning Uses 418 13.6.1 Education 418 13.6.2 Pharmaceuticals 419 13.6.3 Manufacturing 419 13.7 Machine Learning 419 13.7.1 Neuroscience Research Advancements 420 13.7.2 Finding Patterns in Astronomical Data 420 13.8 Ethical and Social Issues Raised 421 13.8.1 Reliability and Safety 421 13.8.2 Transparency and Accountability 421 13.8.3 Data Privacy and Security 421 13.8.4 Malicious Use of AI 422 13.8.5 Effects on Healthcare Professionals 422 13.9 Future of Machine Learning in Healthcare 422 13.9.1 A Better Patient Journey 422 13.9.2 New Ways to Deliver Care 424 13.10 Challenges and Hesitations 424 13.10.1 Not Overlord Assistant Intelligent 424 13.10.2 Issues with Unlabeled Data 425 13.11 Concluding Thoughts 425 Acknowledgments 426 References 426 Index 431

    15 in stock

    £153.00

  • Deep Learning

    John Wiley & Sons Inc Deep Learning

    1 in stock

    Book SynopsisAn engaging and accessible introduction to deep learning perfect for students and professionals. In Deep Learning: A Practical Introduction, a team of distinguished researchers delivers a book complete with coverage of the theoretical and practical elements of deep learning. The book includes extensive examples, end-of-chapter exercises, homework, exam material, and a GitHub repository containing code and data for all provided examples. Combining contemporary deep learning theory with state-of-the-art tools, the chapters are structured to maximize accessibility for both beginning and intermediate students. The authors have included coverage of TensorFlow, Keras, and PyTorch. Readers will also find: thorough introductions to deep learning and deep learning tools; comprehensive explorations of convolutional neural networks, including discussions of their elements, operation, training, and architectures; practical discussions of recurrent neural networks and non-supervised approaches to deep learning; and fulsome treatments of generative adversarial networks as well as deep Bayesian neural networks. Perfect for undergraduate and graduate students studying computer vision, computer science, artificial intelligence, and neural networks, Deep Learning: A Practical Introduction will also benefit practitioners and researchers in the fields of deep learning and machine learning in general.

    1 in stock

    £67.50

  • Autonomous Learning Systems

    John Wiley & Sons Inc Autonomous Learning Systems

    Out of stock

    Book SynopsisAutonomous Learning Systems is the result of over a decade of focused research and studies in this emerging area which spans a number of well-known and well-established disciplines that include machine learning, system identification, data mining, fuzzy logic, neural networks, neuro-fuzzy systems, control theory and pattern recognition. The evolution of these systems has been both industry-driven, with an increasing demand from sectors such as defence and security, aerospace and advanced process industries, bio-medicine and intelligent transportation, as well as research-driven: there is a strong trend of innovation of all of the above well-established research disciplines that is linked to their on-line and real-time application; their adaptability and flexibility. Providing an introduction to the key technologies, detailed technical explanations of the methodology, and an illustration of the practical relevance of the approach with a wide range of applications, this b…

    Trade Review
    “Overall, this book presents a valuable framework for further investigation and development for researchers and software developers. Summing Up: Recommended. 
Graduate students and above.” (Choice, 1 October 2013)Table of ContentsForewords xi Preface xix About the Author xxiii 1 Introduction 1 1.1 Autonomous Systems 3 1.2 The Role of Machine Learning in Autonomous Systems 4 1.3 System Identification – an Abstract Model of the Real World 6 1.4 Online versus Offline Identification 9 1.5 Adaptive and Evolving Systems 10 1.6 Evolving or Evolutionary Systems 11 1.7 Supervised versus Unsupervised Learning 13 1.8 Structure of the Book 14 PART I FUNDAMENTALS 2 Fundamentals of Probability Theory 19 2.1 Randomness and Determinism 20 2.2 Frequentistic versus Belief-Based Approach 22 2.3 Probability Densities and Moments 23 2.4 Density Estimation – Kernel-Based Approach 26 2.5 Recursive Density Estimation (RDE) 28 2.6 Detecting Novelties/Anomalies/Outliers using RDE 32 2.7 Conclusions 36 3 Fundamentals of Machine Learning and Pattern Recognition 37 3.1 Preprocessing 37 3.2 Clustering 42 3.3 Classification 56 3.4 Conclusions 58 4 Fundamentals of Fuzzy Systems Theory 61 4.1 Fuzzy Sets 61 4.2 Fuzzy Systems, Fuzzy Rules 64 4.3 Fuzzy Systems with Nonparametric Antecedents (AnYa) 69 4.4 FRB (Offline) Classifiers 73 4.5 Neurofuzzy Systems 75 4.6 State Space Perspective 79 4.7 Conclusions 81 PART II METHODOLOGY OF AUTONOMOUS LEARNING SYSTEMS 5 Evolving System Structure from Streaming Data 85 5.1 Defining System Structure Based on Prior Knowledge 85 5.2 Data Space Partitioning 86 5.3 Normalisation and Standardisation of Streaming Data in an Evolving Environment 96 5.4 Autonomous Monitoring of the Structure Quality 98 5.5 Short- and Long-Term Focal Points and Submodels 104 5.6 Simplification and Interpretability Issues 105 5.7 Conclusions 107 6 Autonomous Learning Parameters of the Local Submodels 109 6.1 Learning Parameters of Local Submodels 110 6.2 Global versus Local Learning 111 6.3 Evolving Systems Structure Recursively 113 6.4 Learning Modes 116 6.5 Robustness to Outliers in Autonomous Learning 118 6.6 Conclusions 118 7 Autonomous 
Predictors, Estimators, Filters, Inferential Sensors 121 7.1 Predictors, Estimators, Filters – Problem Formulation 121 7.2 Nonlinear Regression 123 7.3 Time Series 124 7.4 Autonomous Learning Sensors 125 7.5 Conclusions 131 8 Autonomous Learning Classifiers 133 8.1 Classifying Data Streams 133 8.2 Why Adapt the Classifier Structure? 134 8.3 Architecture of Autonomous Classifiers of the Family AutoClassify 135 8.4 Learning AutoClassify from Streaming Data 139 8.5 Analysis of AutoClassify 140 8.6 Conclusions 140 9 Autonomous Learning Controllers 143 9.1 Indirect Adaptive Control Scheme 144 9.2 Evolving Inverse Plant Model from Online Streaming Data 145 9.3 Evolving Fuzzy Controller Structure from Online Streaming Data 147 9.4 Examples of Using AutoControl 148 9.5 Conclusions 153 10 Collaborative Autonomous Learning Systems 155 10.1 Distributed Intelligence Scenarios 155 10.2 Autonomous Collaborative Learning 157 10.3 Collaborative Autonomous Clustering, AutoCluster by a Team of ALSs 158 10.4 Collaborative Autonomous Predictors, Estimators, Filters and AutoSense by a Team of ALSs 159 10.5 Collaborative Autonomous Classifiers AutoClassify by a Team of ALSs 160 10.6 Superposition of Local Submodels 161 10.7 Conclusions 161 PART III APPLICATIONS OF ALS 11 Autonomous Learning Sensors for Chemical and Petrochemical Industries 165 11.1 Case Study 1: Quality of the Products in an Oil Refinery 165 11.2 Case Study 2: Polypropylene Manufacturing 172 11.3 Conclusions 178 12 Autonomous Learning Systems in Mobile Robotics 179 12.1 The Mobile Robot Pioneer 3DX 179 12.2 Autonomous Classifier for Landmark Recognition 180 12.3 Autonomous Leader Follower 193 12.4 Results Analysis 196 13 Autonomous Novelty Detection and Object Tracking in Video Streams 197 13.1 Problem Definition 197 13.2 Background Subtraction and KDE for Detecting Visual Novelties 198 13.3 Detecting Visual Novelties with the RDE Method 203 13.4 Object Identification in Image Frames Using RDE 204 13.5 Real-time 
Tracking in Video Streams Using ALS 206 13.6 Conclusions 209 14 Modelling Evolving User Behaviour with ALS 211 14.1 User Behaviour as an Evolving Phenomenon 211 14.2 Designing the User Behaviour Profile 212 14.3 Applying AutoClassify0 for Modelling Evolving User Behaviour 215 14.4 Case Studies 216 14.5 Conclusions 221 15 Epilogue 223 15.1 Conclusions 223 15.2 Open Problems 227 15.3 Future Directions 227 APPENDICES Appendix A Mathematical Foundations 231 Appendix B Pseudocode of the Basic Algorithms 235 References 245 Glossary 259 Index 263

    Out of stock

    £100.76

  • Machine and Deep Learning Using MATLAB

    John Wiley & Sons Inc Machine and Deep Learning Using MATLAB

    15 in stock

    Book Synopsis
    An in-depth resource covering machine and deep learning methods using MATLAB tools and algorithms, providing insights into algorithmic decision-making processes. Machine and Deep Learning Using MATLAB introduces early-career professionals to the power of MATLAB for exploring machine and deep learning applications, explaining the relevant MATLAB tool or app and how it is used for a given method or collection of methods. Each tool's properties, in terms of input and output arguments, are explained; its limitations or applicability are indicated via accompanying text or a table; and a complete running example is shown with all needed MATLAB command-prompt code. The text also presents the results, in the form of figures or tables, in parallel with the given MATLAB code, and the MATLAB code can later be used as a template for solving new cases or datasets. Throughout, the text features worked examples in each chapter for self-study.

    Table of Contents
    Preface; About the Companion Website
    1 Unsupervised Machine Learning (ML) Techniques: Introduction; Selection of the Right Algorithm in ML; Classical Multidimensional Scaling of Predictors Data; Principal Component Analysis (PCA); k-Means Clustering; Distance Metrics: Locations of Cluster Centroids; Replications; Gaussian Mixture Model (GMM) Clustering; Optimum Number of GMM Clusters; Observations and Clusters Visualization; Evaluating Cluster Quality; Silhouette Plots; Hierarchical Clustering; Step 1 – Determine Hierarchical Structure; Step 2 – Divide Hierarchical Tree into Clusters; PCA and Clustering: Wine Quality; Feature Selection Using Laplacian (fsulaplacian) for Unsupervised Learning; CHW 1.1 The Iris Flower Features Data; CHW 1.2 The Ionosphere Data Features; CHW 1.3 The Small Car Data; CHW 1.4 Seeds Features Data
    2 ML Supervised Learning: Classification Models: Fitting Data Using Different Classification Models; Customizing a Model; Creating Training and Test Datasets; Predicting the Response; Evaluating the Classification Model; KNN Model for All Categorical or All Numeric Data Type; KNN Model: Heart Disease Numeric Data; Viewing the Fitting Model Properties; The Fitting Model: Number of Neighbors and Weighting Factor; The Cost Penalty of the Fitting Model; KNN Model: Red Wine Data; Using MATLAB Classification Learner; Binary Decision Tree Model for Multiclass Classification of All Data Types; Classification Tree Model: Heart Disease Numeric Data Types; Classification Tree Model: Heart Disease All Predictor Data Types; Naive Bayes Classification Model for All Data Types; Fitting Heart Disease Numeric Data to Naive Bayes Model; Fitting Heart Disease All Data Types to Naive Bayes Model; Discriminant Analysis (DA) Classifier for Numeric Predictors Only; Discriminant Analysis (DA): Heart Disease Numeric Predictors; Support Vector Machine (SVM) Classification Model for All Data Types; Properties of SVM Model; SVM Classification Model: Heart Disease Numeric Data Types; SVM Classification Model: Heart Disease All Data Types; Multiclass Support Vector Machine (fitcecoc) Model; Multiclass Support Vector Machines Model: Red Wine Data; Binary Linear Classifier (fitclinear) to High-Dimensional Data; CHW 2.1 Mushroom Edibility Data; CHW 2.2 1994 Adult Census Income Data; CHW 2.3 White Wine Classification; CHW 2.4 Cardiac Arrhythmia Data; CHW 2.5 Breast Cancer Diagnosis
    3 Methods of Improving ML Predictive Models: Accuracy and Robustness of Predictive Models; Evaluating a Model: Cross-Validation; Cross-Validation Tune-up Parameters; Partition with K-Fold: Heart Disease Data Classification; Reducing Predictors: Feature Transformation and Selection; Factor Analysis; Feature Transformation and Factor Analysis: Heart Disease Data; Feature Selection; Feature Selection Using predictorImportance Function: Heart Disease Data; Sequential Feature Selection (SFS): sequentialfs Function with Model Error Handler; Accommodating Categorical Data: Creating Dummy Variables; Feature Selection with Categorical Heart Disease Data; Ensemble Learning; Creating Ensembles: Heart Disease Data; Ensemble Learning: Wine Quality Classification; Improving fitcensemble Predictive Model: Abalone Age Prediction; Improving fitctree Predictive Model with Feature Selection (FS): Credit Ratings Data; Improving fitctree Predictive Model with Feature Transformation (FT): Credit Ratings Data; Using MATLAB Regression Learner; Feature Selection and Feature Transformation Using Regression Learner App; Feature Selection Using Neighborhood Component Analysis (NCA) for Regression: Big Car Data; CHW 3.1 The Ionosphere Data; CHW 3.2 Sonar Dataset; CHW 3.3 White Wine Classification; CHW 3.4 Small Car Data (Regression Case)
    4 Methods of ML Linear Regression: Introduction; Linear Regression Models; Fitting Linear Regression Models Using fitlm Function; How to Organize the Data?; Results Visualization: Big Car Data; Fitting Linear Regression Models Using fitglm Function; Nonparametric Regression Models; fitrtree Nonparametric Regression Model: Big Car Data; Support Vector Machine, fitrsvm, Nonparametric Regression Model: Big Car Data; Nonparametric Regression Model: Gaussian Process Regression (GPR); Regularized Parametric Linear Regression; Ridge Linear Regression: The Penalty Term; Fitting Ridge Regression Models; Predicting Response Using Ridge Regression Models; Determining Ridge Regression Parameter, λ; The Ridge Regression Model: Big Car Data; The Ridge Regression Model with Optimum λ: Big Car Data; Regularized Parametric Linear Regression Model: Lasso; Stepwise Parametric Linear Regression; Fitting Stepwise Linear Regression; How to Specify stepwiselm Model?; Stepwise Linear Regression Model: Big Car Data; CHW 4.1 Boston House Price; CHW 4.2 The Forest Fires Data; CHW 4.3 The Parkinson’s Disease Telemonitoring Data; CHW 4.4 The Car Fuel Economy Data
    5 Neural Networks: Introduction; Feed-Forward Neural Networks; Feed-Forward Neural Network Classification; Feed-Forward Neural Network Regression; Numeric Data: Dummy Variables; Neural Network Pattern Recognition (nprtool) Application; Command-Based Feed-Forward Neural Network Classification: Heart Data; Neural Network Regression (nftool); Command-Based Feed-Forward Neural Network Regression: Big Car Data; Training the Neural Network Regression Model Using fitrnet Function: Big Car Data; Finding the Optimum Regularization Strength for Neural Network Using Cross-Validation: Big Car Data; Custom Hyperparameter Optimization in Neural Network Regression: Big Car Data; CHW 5.1 Mushroom Edibility Data; CHW 5.2 1994 Adult Census Income Data; CHW 5.3 Breast Cancer Diagnosis; CHW 5.4 Small Car Data (Regression Case); CHW 5.5 Boston House Price
    6 Pretrained Neural Networks: Transfer Learning: Deep Learning: Image Networks; Data Stores in MATLAB; Image and Augmented Image Datastores; Accessing an Image File; Retraining: Transfer Learning for Image Recognition; Convolutional Neural Network (CNN) Layers: Channels and Activations; Convolution 2-D Layer Features via Activations; Extraction and Visualization of Activations; A 2-D (or 2-D Grouped) Convolutional Layer; Features Extraction for Machine Learning; Image Features in Pretrained Convolutional Neural Networks (CNNs); Classification with Machine Learning; Feature Extraction for Machine Learning: Flowers; Pattern Recognition Network Generation; Machine Learning Feature Extraction: Spectrograms; Network Object Prediction Explainers; Occlusion Sensitivity; imageLIME Features Explainer; gradCAM Features Explainer; HCW 6.1 CNN Retraining for Round Worms Alive or Dead Prediction; HCW 6.2 CNN Retraining for Food Images Prediction; HCW 6.3 CNN Retraining for Merchandise Data Prediction; HCW 6.4 CNN Retraining for Musical Instrument Spectrograms Prediction; HCW 6.5 CNN Retraining for Fruit/Vegetable Varieties Prediction
    7 A Convolutional Neural Network (CNN) Architecture and Training: A Simple CNN Architecture: The Land Satellite Images; Displaying Satellite Images; Training Options; Mini Batches; Learning Rates; Gradient Clipping; Algorithms; Training a CNN for Landcover Dataset; Layers and Filters; Filters in Convolution Layers; Viewing Filters: AlexNet Filters; Validation Data; Using shuffle Function; Improving Network Performance; Training Algorithm Options; Training Data; Architecture; Image Augmentation: The Flowers Dataset; Directed Acyclic Graphs Networks; Deep Network Designer (DND); Semantic Segmentation; Analyze Training Data for Semantic Segmentation; Create a Semantic Segmentation Network; Train and Test the Semantic Segmentation Network; HCW 7.1 CNN Creation for Round Worms Alive or Dead Prediction; HCW 7.2 CNN Creation for Food Images Prediction; HCW 7.3 CNN Creation for Merchandise Data Prediction; HCW 7.4 CNN Creation for Musical Instrument Spectrograms Prediction; HCW 7.5 CNN Creation for Chest X-ray Prediction; HCW 7.6 Semantic Segmentation Network for CamVid Dataset
    8 Regression Classification: Object Detection: Preparing Data for Regression; Modification of CNN Architecture from Classification to Regression; Root-Mean-Square Error; AlexNet-Like CNN for Regression: Hand-Written Synthetic Digit Images; A New CNN for Regression: Hand-Written Synthetic Digit Images; Deep Network Designer (DND) for Regression; Loading Image Data; Generating Training Data; Creating a Network Architecture; Importing Data; Training the Network; Test Network; YOLO Object Detectors; Object Detection Using YOLO v4; COCO-Based Creation of a Pretrained YOLO v4 Object Detector; Fine-Tuning of a Pretrained YOLO v4 Object Detector; Evaluating an Object Detector; Object Detection Using R-CNN Algorithms; R-CNN; Fast R-CNN; Faster R-CNN; Transfer Learning (Re-Training); R-CNN Creation and Training; Fast R-CNN Creation and Training; Faster R-CNN Creation and Training; evaluateDetectionPrecision Function for Precision Metric; evaluateDetectionMissRate for Miss Rate Metric; HCW 8.1 Testing yolov4ObjectDetector and fasterRCNN Object Detector; HCW 8.2 Creation of Two CNN-based yolov4ObjectDetectors; HCW 8.3 Creation of GoogleNet-Based Fast R-CNN Object Detector; HCW 8.4 Creation of a GoogleNet-Based Faster R-CNN Object Detector; HCW 8.5 Calculation of Average Precision and Miss Rate Using GoogleNet-Based Faster R-CNN Object Detector; HCW 8.6 Calculation of Average Precision and Miss Rate Using GoogleNet-Based yolov4 Object Detector; HCW 8.7 Faster RCNN-based Car Objects Prediction and Calculation of Average Precision for Training and Test Data
    9 Recurrent Neural Network (RNN): Long Short-Term Memory (LSTM) and BiLSTM Network; Train LSTM RNN Network for Sequence Classification; Improving LSTM RNN Performance; Sequence Length; Classifying Categorical Sequences; Sequence-to-Sequence Regression Using Deep Learning: Turbo Fan Data; Classify Text Data Using Deep Learning: Factory Equipment Failure Text Analysis – 1; Classify Text Data Using Deep Learning: Factory Equipment Failure Text Analysis – 2; Word-by-Word Text Generation Using Deep Learning – 1; Word-by-Word Text Generation Using Deep Learning – 2; Train Network for Time Series Forecasting Using Deep Network Designer (DND); Train Network with Numeric Features; HCW 9.1 Text Classification: Factory Equipment Failure Text Analysis; HCW 9.2 Text Classification: Sentiment Labeled Sentences Data Set; HCW 9.3 Text Classification: Netflix Titles Data Set; HCW 9.4 Text Regression: Video Game Titles Data Set; HCW 9.5 Multivariate Classification: Mill Data Set; HCW 9.6 Word-by-Word Text Generation Using Deep Learning
    10 Image/Video-Based Apps: Image Labeler (IL) App; Creating ROI Labels; Creating Scene Labels; Label Ground Truth; Export Labeled Ground Truth; Video Labeler (VL) App: Ground Truth Data Creation, Training, and Prediction; Ground Truth Labeler (GTL) App; Running/Walking Classification with Video Clips using LSTM; Experiment Manager (EM) App; Image Batch Processor (IBP) App; HCW 10.1 Cat Dog Video Labeling, Training, and Prediction – 1; HCW 10.2 Cat Dog Video Labeling, Training, and Prediction – 2; HCW 10.3 EM Hyperparameters of CNN Retraining for Merchandise Data Prediction; HCW 10.4 EM Hyperparameters of CNN Retraining for Round Worms Alive or Dead Prediction; HCW 10.5 EM Hyperparameters of CNN Retraining for Food Images Prediction
    Appendix A Useful MATLAB Functions: A.1 Data Transfer from an External Source into MATLAB; A.2 Data Import Wizard; A.3 Table Operations; A.4 Table Statistical Analysis; A.5 Access to Table Variables (Column Titles); A.6 Merging Tables with Mixed Columns and Rows; A.7 Data Plotting; A.8 Data Normalization; A.9 How to Scale Numeric Data Columns to Vary Between 0 and 1; A.10 Random Split of a Matrix into a Training and Test Set; A.11 Removal of NaN Values from a Matrix; A.12 How to Calculate the Percent of Truly Judged Class Type Cases for a Binary Class Response; A.13 Error Function m-file; A.14 Conversion of Categorical into Numeric Dummy Matrix; A.15 evaluateFit2 Function; A.16 showActivationsForChannel Function; A.17 upsampLowRes Function; A.18A preprocessData Function; A.18B preprocessData2 Function; A.19 processTurboFanDataTrain Function; A.20 processTurboFanDataTest Function; A.21 preprocessText Function; A.22 documentGenerationDatastore Function; A.23 subset Function for an Image Data Store Partition
    Index

    15 in stock

    £126.90

  • Metaheuristics for Machine Learning

    John Wiley & Sons Inc Metaheuristics for Machine Learning

    Out of stock

    Book Synopsis
    The book unlocks the power of nature-inspired optimization in machine learning and presents a comprehensive guide to cutting-edge algorithms, interdisciplinary insights, and real-world applications. The field of metaheuristic optimization algorithms is experiencing rapid growth, both in academic research and in industrial applications. These nature-inspired algorithms, which draw on phenomena like evolution, swarm behavior, and neural systems, have shown remarkable efficiency in solving complex optimization problems. With advancements in machine learning and artificial intelligence, the application of metaheuristic optimization techniques has expanded, demonstrating significant potential in optimizing machine learning models, hyperparameter tuning, and feature selection, among other use cases. In the industrial landscape, these techniques are becoming indispensable for solving real-world problems in sectors ranging from healthcare to cybersecurity and s…

    Out of stock

    £140.40

  • Can We Trust AI

    Johns Hopkins University Press Can We Trust AI

    5 in stock

    Book Synopsis
    Artificial intelligence is part of our daily lives. How can we address its limitations and guide its use for the benefit of communities worldwide? Artificial intelligence (AI) has evolved from an experimental computer algorithm used by academic researchers to a commercially reliable method of sifting through large sets of data to detect patterns not readily apparent through more rudimentary search tools. As a result, AI-based programs are helping doctors make more informed decisions about patient care, city planners align roads and highways to reduce traffic congestion more efficiently, and merchants scan financial transactions to quickly flag suspicious purchases. But as AI applications grow, concerns have increased too, including worries about applications that amplify existing biases in business practices and about the safety of self-driving vehicles. In Can We Trust AI?, Dr. Rama Chellappa, a researcher and innovator with 40 years in the field, recounts the evolution of AI …

    Trade Review
    Drawing on interviews with researchers pushing the boundaries of AI for the world's benefit and working to make its applications safer and more just, Can We Trust AI? responds with a qualified affirmative. —Inside Higher Ed

    In Can We Trust AI?, Chellappa explores both the promise and peril of AI. For readers searching for an understanding of how AI came to be...Chellappa situates AI in an historical context that is thorough, and thoroughly fascinating. Most refreshing is his current assessment of AI that dispels the hype of AI's world takeover....Chellappa gracefully moves among AI's past, present, and future. —Technical Communication

    Table of Contents
    Preface
    Chapter 1. The Birth and Growth of AI
    Chapter 2. Saving Lives with Artificial Intelligence
    Chapter 3. The Complexities and Contributions of Facial Recognition
    Chapter 4. The Promise of Autonomous Vehicles
    Chapter 5. AI's Futurescape
    Acknowledgments
    Glossary
    Notes
    Index

    5 in stock

    £13.30

  • Machine Learning for Social and Behavioral

    Guilford Publications Machine Learning for Social and Behavioral

    1 in stock

    Book Synopsis
    Today's social and behavioral researchers increasingly need to know: What do I do with all this data? This book provides the skills needed to analyze and report large, complex data sets using machine learning tools, and to understand published machine learning articles. Techniques are demonstrated using actual data (Big Five Inventory, early childhood learning, and more), with a focus on the interplay of statistical algorithm, data, and theory. The identification of heterogeneity, measurement error, regularization, and decision trees is also emphasized. The book covers basic principles as well as a range of methods for analyzing univariate and multivariate data (factor analysis, structural equation models, and mixed-effects models). Analysis of text and social network data is also addressed. End-of-chapter Computational Time and Resources sections include discussions of key R packages; the companion website provides R programming scripts and data for the book's examples.

    Trade Review
    "Current, highly informative, and useful, this is a 'go-to' book for social science graduate students, faculty, and practitioners seeking a strong introduction to machine learning. Unlike typical, more technical machine learning books, this one is unique in providing the strong psychological measurement guidance required to apply these techniques most appropriately. It walks the reader through general principles of machine learning, regression- and tree-based predictive models, text- and network-based methods of clustering, and--most innovatively--machine learning–based psychometric approaches (CFA and SEM)." --Fred Oswald, PhD, Professor and Herbert S. Autrey Chair in Social Sciences, Department of Psychological Sciences, Rice University

    "This book is very timely. Social scientists need to be educated about the pros and cons of machine learning methods and about how, when, and why these methods can be applied to their research topics. The book describes key techniques in enough detail to enable readers to subsequently digest more specialized journal articles or software applications, but not in so much detail as to lose momentum." --Sonya K. Sterba, PhD, Department of Psychology and Human Development, Vanderbilt University

    "Jacobucci, Grimm, and Zhang's ambitious book takes the reader on an in-depth tour of machine learning methods. Its strength is that the authors link machine learning to more traditional topics of regression, structural equation modeling, factor analysis, and network analysis methods. This book should be required reading for the new generation of psychology graduate students who are interested in more advanced quantitative methods." --James W. Pennebaker, PhD, Regents Centennial Professor of Liberal Arts and Professor of Psychology, The University of Texas at Austin

    "A 'must read' for social scientists who want to familiarize themselves with machine learning but don't know where to start. Understanding the practices and principles of machine learning is fundamental to modern data analysis. Many social scientists will be surprised by how well their traditional statistical training has prepared them to grasp the material in the book." --Alexander Christensen, PhD, Department of Psychology and Human Development, Vanderbilt University

    Table of Contents
    I. Fundamental Concepts
    1. Introduction: Why the Term Machine Learning?; Why Do We Need Machine Learning?; How Is This Book Different?; Definitions; Software; Datasets
    2. The Principles of Machine Learning Research: Overview; Principle #1: Machine Learning Is Not Just Lazy Induction; Principle #2: Orienting Our Goals Relative to Prediction, Explanation, and Description; Principle #3: Labeling a Study as Exploratory or Confirmatory Is Too Simplistic; Principle #4: Report Everything; Summary
    3. The Practices of Machine Learning: Comparing Algorithms and Models; Model Fit; Bias-Variance Tradeoff; Resampling; Classification; Conclusion
    II. Algorithms for Univariate Outcomes
    4. Regularized Regression: Linear Regression; Logistic Regression; Regularization; Rationale for Regularization; Alternative Forms of Regularization; Bayesian Regression; Summary
    5. Decision Trees: Introduction; Decision Tree Algorithms; Miscellaneous Topics
    6. Ensembles: Bagging; Random Forests; Gradient Boosting; Interpretation; Empirical Example; Important Notes; Summary
    III. Algorithms for Multivariate Outcomes
    7. Machine Learning and Measurement: Defining Measurement Error; Impact of Measurement Error; Assessing Measurement Error; Weighting; Alternative Methods; Summary
    8. Machine Learning and Structural Equation Modeling: Latent Variables as Predictors; Predicting Latent Variables; Using Latent Variables as Outcomes and Predictors; Can Regularization Improve Generalizability in SEM?; Nonlinear Relationships and Latent Variables; Summary
    9. Machine Learning with Mixed-Effects Models: Mixed-Effects Models; Machine Learning with Clustered Data; Regularization with Mixed-Effects Models; Illustrative Example; Additional Strategies for Mining Longitudinal Data; Summary
    10. Searching for Groups: Finite Mixture Model; Structural Equation Model Trees; Summary
    IV. Alternative Data Types
    11. Introduction to Text Mining: Key Terminology; Data; Basic Text Mining; Text Data Preprocessing; Basic Analysis of the Teaching Comment Data; Sentiment Analysis; Topic Models; Summary
    12. Introduction to Social Network Analysis: Network Visualization; Network Statistics; Basic Network Analysis; Network Modeling; Summary
    References

    1 in stock

    £74.09

  • Machine Learning for Social and Behavioral

    Guilford Publications Machine Learning for Social and Behavioral

    1 in stock


    1 in stock

    £49.39

  • Automated Deep Learning Using Neural Network Intelligence

    Apress Automated Deep Learning Using Neural Network Intelligence

    1 in stock

    Book Synopsis
    Optimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. This book includes practical examples illustrating automated deep learning approaches and provides techniques to facilitate your deep learning model development. The first chapters of this book cover the basics of NNI toolkit usage and methods for solving hyper-parameter optimization tasks. You will understand the black-box function maximization problem using NNI, and know how to prepare a TensorFlow or PyTorch model for hyper-parameter tuning, launch an experiment, and interpret the results. The book dives into optimization tuners and the search algorithms they are based on: evolution search, annealing search, and the Bayesian optimization approach. Neural architecture search is covered, and you will learn how to develop deep learning models from scratch. Multi-trial and one-shot approaches to automatic neural network search are also covered.

    Table of Contents
    Chapter 1: Introduction to Neural Network Intelligence
        1.1 Installation
        1.2 Trial, search space, experiment
        1.3 Finding maxima of a multivariate function
        1.4 Interacting with NNI
    Chapter 2: Hyper-Parameter Tuning
        2.1 Preparing a model for hyper-parameter tuning
        2.2 Running an experiment
        2.3 Interpreting results
        2.4 Debugging
    Chapter 3: Hyper-Parameter Tuners
    Chapter 4: Neural Architecture Search: Multi-trial
        4.1 Constructing a search space
        4.2 Running architecture search
        4.3 Exploration strategies
        4.4 Comparing exploration strategies
    Chapter 5: Neural Architecture Search: One-shot
        5.1 What is one-shot NAS?
        5.2 ENAS
        5.3 DARTS
    Chapter 6: Model Compression
        6.1 What is model compression?
        6.2 Compressing your model
        6.3 Pruning
        6.4 Quantization
    Chapter 7: Advanced NNI
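The tuning workflow this synopsis describes (sample a trial configuration from a search space, evaluate the black-box objective, keep the best result) can be sketched without the NNI toolkit itself. Below is a minimal random-search stand-in: `SEARCH_SPACE`, `objective`, and `run_experiment` are all invented for illustration and are not NNI's API; in an actual NNI trial the loop is driven by the dispatcher, and the trial script calls `nni.get_next_parameter()` and `nni.report_final_result()` instead.

```python
import math
import random

# Hypothetical search space, loosely mirroring NNI's loguniform/choice
# entries; the names and ranges are invented for illustration.
SEARCH_SPACE = {
    "lr": (1e-4, 1e-1),          # sampled log-uniformly
    "batch_size": [16, 32, 64],  # sampled from discrete choices
}

def sample_params(rng):
    """Draw one trial configuration from the search space."""
    lo, hi = SEARCH_SPACE["lr"]
    lr = math.exp(rng.uniform(math.log(lo), math.log(hi)))
    return {"lr": lr, "batch_size": rng.choice(SEARCH_SPACE["batch_size"])}

def objective(params):
    """Stand-in for the black box being maximized.

    In a real trial this is where the TensorFlow/PyTorch model would be
    trained and its validation metric reported back to the tuner.
    """
    # Toy surface: best near lr = 0.01, mild preference for larger batches.
    return -abs(params["lr"] - 0.01) + params["batch_size"] / 1000.0

def run_experiment(n_trials=50, seed=0):
    """Random-search 'tuner': run trials and keep the best score."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = sample_params(rng)
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

if __name__ == "__main__":
    params, score = run_experiment()
    print("best params:", params, "score:", score)
```

The tuners the book covers (evolution, annealing, Bayesian optimization) replace the uniform sampling here with smarter proposal strategies, but the trial contract is the same: receive parameters, evaluate, report a score.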


    £46.74

  • Self-Service AI mit Power BI

    Springer-Verlag Berlin and Heidelberg GmbH & Co. KG Self-Service AI mit Power BI

    1 in stock

    Book Synopsis
    Intermediate to advanced user level.

    Table of Contents
    1. Asking questions in natural language
    2. The Insights feature
    3. Discovering key influencers
    4. Drill-down and decomposing hierarchies
    5. Adding smart visualizations
    6. Experimenting with scenarios
    7. Characterizing a dataset
    8. Creating columns from examples
    9. Running R and Python visualizations
    10. Transforming data with R and Python
    11. Running machine learning models in the Azure cloud


    £26.59

  • Deep Learning

    O'Reilly Media Deep Learning

    1 in stock

    Book Synopsis
    How can machine learning, especially deep neural networks, make a real difference in your organization? This hands-on guide not only provides the most practical information available on the subject, but also helps you get started building efficient deep learning networks.


    £38.39

  • Thoughtful Machine Learning with Python

    O'Reilly Media Thoughtful Machine Learning with Python

    1 in stock

    Book SynopsisGain the confidence you need to apply machine learning in your daily work. With this practical guide, author Matthew Kirk shows you how to integrate and test machine learning algorithms in your code, without the academic subtext.


    £25.88

© 2026 Book Curl

