Machine Learning Books
John Wiley & Sons Inc AWS Certified Machine Learning Study Guide
Table of Contents
Introduction xvii Assessment Test xxix Answers to Assessment Test xxxv Part I Introduction 1 Chapter 1 AWS AI ML Stack 3 Amazon Rekognition 4 Image and Video Operations 6 Amazon Textract 10 Sync and Async APIs 11 Amazon Transcribe 13 Transcribe Features 13 Transcribe Medical 14 Amazon Translate 15 Amazon Translate Features 16 Amazon Polly 17 Amazon Lex 19 Lex Concepts 19 Amazon Kendra 21 How Kendra Works 22 Amazon Personalize 23 Amazon Forecast 27 Forecasting Metrics 30 Amazon Comprehend 32 Amazon CodeGuru 33 Amazon Augmented AI 34 Amazon SageMaker 35 Analyzing and Preprocessing Data 36 Training 39 Model Inference 40 AWS Machine Learning Devices 42 Summary 43 Exam Essentials 43 Review Questions 44 Chapter 2 Supporting Services from the AWS Stack 49 Storage 50 Amazon S3 50 Amazon EFS 52 Amazon FSx for Lustre 52 Data Versioning 53 Amazon VPC 54 AWS Lambda 56 AWS Step Functions 59 AWS RoboMaker 60 Summary 62 Exam Essentials 62 Review Questions 63 Part II Phases of Machine Learning Workloads 67 Chapter 3 Business Understanding 69 Phases of ML Workloads 70 Business Problem Identification 71 Summary 72 Exam Essentials 73 Review Questions 74 Chapter 4 Framing a Machine Learning Problem 77 ML Problem Framing 78 Recommended Practices 80 Summary 81 Exam Essentials 81 Review Questions 82 Chapter 5 Data Collection 85 Basic Data Concepts 86 Data Repositories 88 Data Migration to AWS 89 Batch Data Collection 89 Streaming Data Collection 92 Summary 96 Exam Essentials 96 Review Questions 98 Chapter 6 Data Preparation 101 Data Preparation Tools 102 SageMaker Ground Truth 102 Amazon EMR 104 Amazon SageMaker Processing 105 AWS Glue 105 Amazon Athena 107 Redshift Spectrum 107 Summary 107 Exam Essentials 107 Review Questions 109 Chapter 7 Feature Engineering 113 Feature Engineering Concepts 114 Feature Engineering for Tabular Data 114 Feature Engineering for Unstructured and Time Series Data 119 Feature Engineering Tools on AWS 120 Summary 121 Exam Essentials 121 Review Questions 123 Chapter 8 Model Training 127 Common ML Algorithms 128 Supervised Machine Learning 129 Textual Data 138 Image Analysis 141 Unsupervised Machine Learning 142 Reinforcement Learning 146 Local Training and Testing 147 Remote Training 149 Distributed Training 150 Monitoring Training Jobs 154 Amazon CloudWatch 155 AWS CloudTrail 155 Amazon EventBridge 158 Debugging Training Jobs 158 Hyperparameter Optimization 159 Summary 162 Exam Essentials 162 Review Questions 164 Chapter 9 Model Evaluation 167 Experiment Management 168 Metrics and Visualization 169 Metrics in AWS AI/ML Services 173 Summary 174 Exam Essentials 175 Review Questions 176 Chapter 10 Model Deployment and Inference 181 Deployment for AI Services 182 Deployment for Amazon SageMaker 184 SageMaker Hosting: Under the Hood 184 Advanced Deployment Topics 187 Autoscaling Endpoints 187 Deployment Strategies 188 Testing Strategies 190 Summary 191 Exam Essentials 191 Review Questions 192 Chapter 11 Application Integration 195 Integration with On-Premises Systems 196 Integration with Cloud Systems 198 Integration with Front-End Systems 200 Summary 200 Exam Essentials 201 Review Questions 202 Part III Machine Learning Well-Architected Lens 205 Chapter 12 Operational Excellence Pillar for ML 207 Operational Excellence on AWS 208 Everything as Code 209 Continuous Integration and Continuous Delivery 210 Continuous Monitoring 213 Continuous Improvement 214 Summary 215 Exam Essentials 215 Review Questions 217 Chapter 13 Security Pillar 221 Security and AWS 222
Data Protection 223 Isolation of Compute 224 Fine-Grained Access Controls 225 Audit and Logging 226 Compliance Scope 227 Secure SageMaker Environments 228 Authentication and Authorization 228 Data Protection 231 Network Isolation 232 Logging and Monitoring 233 Compliance Scope 235 AI Services Security 235 Summary 236 Exam Essentials 236 Review Questions 238 Chapter 14 Reliability Pillar 241 Reliability on AWS 242 Change Management for ML 242 Failure Management for ML 245 Summary 246 Exam Essentials 246 Review Questions 247 Chapter 15 Performance Efficiency Pillar for ML 251 Performance Efficiency for ML on AWS 252 Selection 253 Review 254 Monitoring 255 Trade-offs 256 Summary 257 Exam Essentials 257 Review Questions 258 Chapter 16 Cost Optimization Pillar for ML 261 Common Design Principles 262 Cost Optimization for ML Workloads 263 Design Principles 263 Common Cost Optimization Strategies 264 Summary 266 Exam Essentials 266 Review Questions 267 Chapter 17 Recent Updates in the AWS AI/ML Stack 271 New Services and Features Related to AI Services 272 New Services 272 New Features of Existing Services 275 New Features Related to Amazon SageMaker 279 Amazon SageMaker Studio 279 Amazon SageMaker Data Wrangler 279 Amazon SageMaker Feature Store 280 Amazon SageMaker Clarify 281 Amazon SageMaker Autopilot 282 Amazon SageMaker JumpStart 283 Amazon SageMaker Debugger 283 Amazon SageMaker Distributed Training Libraries 284 Amazon SageMaker Pipelines and Projects 284 Amazon SageMaker Model Monitor 284 Amazon SageMaker Edge Manager 285 Amazon SageMaker Asynchronous Inference 285 Summary 285 Exam Essentials 285 Appendix Answers to the Review Questions 287 Chapter 1: AWS AI ML Stack 288 Chapter 2: Supporting Services from the AWS Stack 289 Chapter 3: Business Understanding 290 Chapter 4: Framing a Machine Learning Problem 291 Chapter 5: Data Collection 291 Chapter 6: Data Preparation 292 Chapter 7: Feature Engineering 293 Chapter 8: Model Training 294 Chapter 9: Model Evaluation 295 Chapter 10: Model Deployment and Inference 295 Chapter 11: Application Integration 296 Chapter 12: Operational Excellence Pillar for ML 297 Chapter 13: Security Pillar 298 Chapter 14: Reliability Pillar 298 Chapter 15: Performance Efficiency Pillar for ML 299 Chapter 16: Cost Optimization Pillar for ML 300 Index 303
£35.62
John Wiley & Sons Inc Not with a Bug, But with a Sticker
Table of Contents
Foreword xv Introduction xix Chapter 1: Do You Want to Be Part of the Future? 1 Business at the Speed of AI 2 Follow Me, Follow Me 4 In AI, We Overtrust 6 Area 52 Ramblings 10 I'll Do It 12 Adversarial Attacks Are Happening 16 ML Systems Don't Jiggle-Jiggle; They Fold 19 Never Tell Me the Odds 22 AI's Achilles' Heel 25 Chapter 2: Salt, Tape, and Split-Second Phantoms 29 Challenge Accepted 30 When Expectation Meets Reality 35 Color Me Blind 39 Translation Fails 42 Attacking AI Systems via Fails 44 Autonomous Trap 001 48 Common Corruption 51 Chapter 3: Subtle, Specific, and Ever-Present 55 Intriguing Properties of Neural Networks 57 They Are Everywhere 60 Research Disciplines Collide 62 Blame Canada 66 The Intelligent Wiggle-Jiggle 71 Bargain-Bin Models Will Do 75 For Whom the Adversarial Example Bell Tolls 79 Chapter 4: Here's Something I Found on the Web 85 Bad Data = Big Problem 87 Your AI Is Powered by Ghost Workers 88 Your AI Is Powered by Vampire Novels 91 Don't Believe Everything You Read on the Internet 94 Poisoning the Well 96 The Higher You Climb, the Harder You Fall 104 Chapter 5: Can You Keep a Secret? 107 Why Is Defending Against Adversarial Attacks Hard? 108 Masking Is Important 111 Because It Is Possible 115 Masking Alone Is Not Good Enough 118 An Average Concerned Citizen 119 Security by Obscurity Has Limited Benefit 124 The Opportunity Is Great; the Threat Is Real; the Approach Must Be Bold 125 Swiss Cheese 130 Chapter 6: Sailing for Adventure on the Deep Blue Sea 133 Why Be Securin' AI Systems So Blasted Hard? An Economics Perspective, Me Hearties! 136 Tis a Sign, Me Mateys 141 Here Be the Most Crucial AI Law Ye've Nary Heard Tell Of! 144 Lies, Accursed Lies, and Explanations! 146 No Free Grub 148 Whatcha Measure Be Whatcha Get! 151 Who Be Reapin' the Benefits? 153 Cargo Cult Science 155 Chapter 7: The Big One 159 This Looks Futuristic 161 By All Means, Move at a Glacial Pace; You Know How That Thrills Me 163 Waiting for the Big One 166 Software, All the Way Down 169 The Aftermath 172 Race to AI Safety 173 Happy Story 176 In Medias Res 178 Big-Picture Questions 181 Acknowledgments 185 Index 189
£18.69
Apress Data-Driven SEO with Python
Book Synopsis
Solve SEO problems using data science. This hands-on book is packed with Python code and data science techniques to help you generate data-driven recommendations and automate the SEO workload. This book is a practical, modern introduction to data science in the SEO context using Python. With social media, mobile, changing search engine algorithms, and the ever-increasing expectations of users for superb web experiences, too much data is generated for an SEO professional to make sense of in spreadsheets. For any modern-day SEO professional to succeed, it is essential to find an alternative approach, and data science equips SEOs to grasp the issue at hand and solve it. From machine learning to Natural Language Processing (NLP) techniques, Data-Driven SEO with Python provides tried and tested techniques with full explanations for solving both everyday and complex SEO problems. This book is ideal for SEO professionals who want to take their industry skills to the next level.
Table of Contents
Data-Driven SEO with Python Chapter 1: Meeting the Challenges of SEO with Data 1.1 Agents of change in SEO 1.2 The Pillars of SEO Strategy 1.3 Installing Python 1.4 Using Python for SEO Chapter 2: Keyword Research 2.1 Data Sources 2.2 Google Search Console 2.4 Google Trends 2.5 Google Suggest 2.6 Competitor Analytics 2.7 SERPs Chapter 3: Technical 3.1 Improving CTRs 3.2 Allocate keywords to pages based on the copy 3.3 Allocating parent nodes to the orphaned URLs 3.4 Improve interlinking based on copy 3.5 Automate Technical Audits Chapter 4: Content & UX 4.1 Content that best satisfies the user query 4.2 Splitting and merging URLs 4.3 Content Strategy: Planning landing page content Chapter 5: Authority 5.1 A little SEO history 5.1 The source of authority 5.2 Finding good links Chapter 6: Competitors 6.1 Defining the problem 6.2 Data Strategy 6.3 Data Sources 6.4 Selecting Your Competitors 6.5 Get Features 6.6 Explore, Clean and Transform 6.7 Modelling The SERPs 6.8 Evaluating your Model 6.9 Activation Chapter 7: Experiments 7.1 How experiments fit into the SEO process 7.2 Generating Hypotheses 7.3 Experiment Design 7.4 Running your experiment 7.5 Experiment Evaluation Chapter 8: Dashboards 8.1 Use a Data Layer 8.2 Extract, Transform and Load (ETL) 8.3 Transform 8.4 Querying the Data Warehouse (DW) 8.5 Visualization 8.6 Making Future Forecasts Chapter 9: Site Migrations and Relaunches 9.1 Data sources 9.2 Establishing the Impact 9.3 Segmenting the URLs 9.4 Legacy Site URLs 9.5 Priority 9.6 Roadmap Chapter 10: Google Updates 10.1 Data sources 10.2 Winners and Losers 10.3 Quantifying the Impact 10.4 Search Intent 10.5 Unique URLs 10.6 Recommendations Chapter 11: The Future of SEO 11.1 Automation 11.2 Your journey to SEO science 11.3 Suggested resources Appendix: Code Glossary Index
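Much of the workflow the book describes boils down to pulling raw search metrics into pandas and deriving KPIs such as click-through rate. As a flavour of that approach (a minimal sketch of ours, not code from the book), here is how CTR per query might be computed from a hypothetical Google Search Console export; the column names query, clicks, and impressions are assumptions:

```python
# Minimal sketch (not from the book): CTR per query from a hypothetical
# Google Search Console export with columns: query, clicks, impressions.
import pandas as pd

data = pd.DataFrame({
    "query": ["buy shoes", "shoe repair", "buy shoes", "running shoes"],
    "clicks": [12, 3, 8, 25],
    "impressions": [400, 90, 310, 500],
})

# Aggregate per query, then derive CTR; sort ascending to surface weak performers.
summary = data.groupby("query", as_index=False)[["clicks", "impressions"]].sum()
summary["ctr"] = summary["clicks"] / summary["impressions"]
print(summary.sort_values("ctr"))
```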
£29.69
O'Reilly Media Learning Spark
Book Synopsis
Updated to emphasize new features in Spark 2.4, this second edition shows data engineers and scientists why structure and unification in Spark matters. Specifically, this book explains how to perform simple and complex data analytics and employ machine-learning algorithms.
£47.99
O'Reilly Media Fundamentals of Deep Learning
Book Synopsis
This updated second edition describes the intuition behind deep learning innovations without jargon or complexity. By the end of this book, Python-proficient programmers, software engineering professionals, and computer science majors will be able to re-implement these breakthroughs on their own.
£47.99
O'Reilly Media Introducing MLOps
Book Synopsis
This book introduces the key concepts of MLOps to help data scientists and application engineers not only operationalize ML models to drive real business change but also maintain and improve those models over time.
£39.74
O'Reilly Media Probabilistic Machine Learning for Finance and Investing
Book Synopsis
By moving away from flawed statistical methodologies, you'll move toward an intuitive view of probability as a mathematically rigorous statistical framework that quantifies uncertainty holistically and successfully. This book shows you how.
£47.99
O'Reilly Media Building Recommendation Systems in Python and JAX
Book Synopsis
In this practical book, authors Bryan Bischof and Hector Yee illustrate the core concepts and examples to help you create a RecSys for any industry or scale. You'll learn the math, ideas, and implementation details you need to succeed.
£47.99
Manning Publications Deep Reinforcement Learning in Action
Book Synopsis
Humans learn best from feedback—we are encouraged to take actions that lead to positive results while deterred by decisions with negative consequences. This reinforcement process can be applied to computer programs, allowing them to solve more complex problems than classical programming can. Deep Reinforcement Learning in Action teaches you the fundamental concepts and terminology of deep reinforcement learning, along with the practical skills and techniques you'll need to implement it in your own projects.
Key features:
• Structuring problems as Markov Decision Processes
• Popular algorithms such as Deep Q-Networks, Policy Gradient methods, and Evolutionary Algorithms, and the intuitions that drive them
• Applying reinforcement learning algorithms to real-world problems
Audience: You'll need intermediate Python skills and a basic understanding of deep learning.
About the technology: Deep reinforcement learning is a form of machine learning in which AI agents learn optimal behavior from their own raw sensory input. The system perceives the environment, interprets the results of its past decisions, and uses this information to optimize its behavior for maximum long-term return. Deep reinforcement learning famously contributed to the success of AlphaGo, but that's not all it can do! Alexander Zai is a Machine Learning Engineer at Amazon AI, working on MXNet, which powers a suite of AWS machine learning products. Brandon Brown is a Machine Learning and Data Analysis blogger at outlace.com, committed to providing clear teaching on difficult topics for newcomers.
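At the heart of the Deep Q-Networks mentioned above is the Q-learning update rule. The following is a minimal illustrative sketch of tabular Q-learning on a toy chain environment; it is our own example under simple assumptions, not code from the book:

```python
# Minimal tabular Q-learning sketch (illustrative, not the book's code).
# Toy chain environment: states 0..4, move left/right, reward at state 4.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([max(row) for row in Q])      # value estimates rise toward the goal state
```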
£35.99
Manning Publications Inside Deep Learning: Math, Algorithms, Models
Book Synopsis"If you want to learn some of the deeper explanations of deep learning and PyTorch then read this book!" - Tiklu Ganguly Journey through the theory and practice of modern deep learning, and apply innovative techniques to solve everyday data problems. In Inside Deep Learning, you will learn how to: Implement deep learning with PyTorchSelect the right deep learning componentsTrain and evaluate a deep learning modelFine tune deep learning models to maximize performanceUnderstand deep learning terminologyAdapt existing PyTorch code to solve new problems Inside Deep Learning is an accessible guide to implementing deep learning with the PyTorch framework. It demystifies complex deep learning concepts and teaches you to understand the vocabulary of deep learning so you can keep pace in a rapidly evolving field. No detail is skipped—you'll dive into math, theory, and practical applications. Everything is clearly explained in plain English. about the technologyDeep learning isn't just for big tech companies and academics. Anyone who needs to find meaningful insights and patterns in their data can benefit from these practical techniques! The unique ability for your systems to learn by example makes deep learning widely applicable across industries and use-cases, from filtering out spam to driving cars. about the bookInside Deep Learning is a fast-paced beginners' guide to solving common technical problems with deep learning. Written for everyday developers, there are no complex mathematical proofs or unnecessary academic theory. You'll learn how deep learning works through plain language, annotated code and equations as you work through dozens of instantly useful PyTorch examples. As you go, you'll build a French-English translator that works on the same principles as professional machine translation and discover cutting-edge techniques just emerging from the latest research. Best of all, every deep learning solution in this book can run in less than fifteen minutes using free GPU hardware! about the readerFor Python programmers with basic machine learning skills. about the authorEdward Raff is a Chief Scientist at Booz Allen Hamilton, and the author of the JSAT machine learning library. His research includes deep learning, malware detection, reproducibility in ML, fairness/bias, and high performance computing. He is also a visiting professor at the University of Maryland, Baltimore County and teaches deep learning in the Data Science department. Dr Raff has over 40 peer reviewed publications, three best paper awards, and has presented at numerous major conferences.Trade Review“Afantastic book with a colourful and intuitive way of describing how deep learning works.” Richard Vaughan “Amazing at what it does. It's a book for people who not only want to use deep learning, but also understand it!” Adam Slysz “A remarkably clear explanation of practical deep learning showing readers how to quickly and systematically apply deep learning techniques tosolve their everyday data problems.” Jeff Neumann “If you want to learn some of the deeper explanations of deep learning and PyTorch then read this book!” Tiklu Ganguly “A must read if you don't understand how Deep Learning works under the hood.” Abdul Basit Hafeez
£35.99
Manning Publications Evolutionary Deep Learning
Book Synopsis
Discover one-of-a-kind AI strategies never before seen outside of academic papers! Learn how the principles of evolutionary computation overcome deep learning's common pitfalls and deliver adaptable model upgrades without constant manual adjustment. In Evolutionary Deep Learning you will learn how to:
• Solve complex design and analysis problems with evolutionary computation
• Tune deep learning hyperparameters with evolutionary computation (EC), genetic algorithms, and particle swarm optimization
• Use unsupervised learning with a deep learning autoencoder to regenerate sample data
• Understand the basics of reinforcement learning and the Q-learning equation
• Apply Q-learning to deep learning to produce deep reinforcement learning
• Optimize the loss function and network architecture of unsupervised autoencoders
• Make an evolutionary agent that can play an OpenAI Gym game
Evolutionary Deep Learning is a guide to improving your deep learning models with AutoML enhancements based on the principles of biological evolution. This exciting new approach utilizes lesser-known AI approaches to boost performance without hours of data annotation or model hyperparameter tuning.
about the technology
Evolutionary deep learning merges the biology-simulating practices of evolutionary computation (EC) with the neural networks of deep learning. This unique approach can automate entire DL systems and help uncover new strategies and architectures. It gives new and aspiring AI engineers a set of optimization tools that can reliably improve output without demanding an endless churn of new data.
about the reader
For data scientists who know Python.
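As a taste of the evolutionary-computation side, here is a minimal genetic algorithm sketch using tournament selection and mutation to evolve a bitstring toward all ones. It is our own illustration under simple assumptions, not code from the book:

```python
# Minimal genetic algorithm sketch (illustrative, not the book's code):
# evolve bitstrings toward all ones using tournament selection and mutation.
import random

LENGTH, POP, GENS = 20, 30, 100

def fitness(bits):                 # fitness = number of ones
    return sum(bits)

def tournament(pop, k=3):          # best of k randomly sampled individuals
    return max(random.sample(pop, k), key=fitness)

def mutate(bits, rate=0.05):       # flip each bit with small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENS):
    pop = [mutate(tournament(pop)) for _ in range(POP)]

print(fitness(max(pop, key=fitness)), "of", LENGTH)
```

A fuller GA would add crossover between two selected parents; this sketch keeps only selection and mutation to stay short.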
£41.39
Manning Publications Time Series Forecasting in Python
Book Synopsis
Build predictive models from time-based patterns in your data. Master statistical models, including new deep learning approaches, for time series forecasting. In Time Series Forecasting in Python you will learn how to:
• Recognize a time series forecasting problem and build a performant predictive model
• Create univariate forecasting models that account for seasonal effects and external variables
• Build multivariate forecasting models to predict many time series at once
• Leverage large datasets by using deep learning for forecasting time series
• Automate the forecasting process
DESCRIPTION
Time Series Forecasting in Python teaches you to build powerful predictive models from time-based data. Every model you create is relevant, useful, and easy to implement with Python. You'll explore interesting real-world datasets like Google's daily stock price and economic data for the USA, quickly progressing from the basics to developing large-scale models that use deep learning tools like TensorFlow.
about the technology
Time series forecasting reveals hidden trends and makes predictions about the future from your data. This powerful technique has proven incredibly valuable across multiple fields—from tracking business metrics, to healthcare and the sciences. Modern Python libraries and powerful deep learning tools have opened up new methods and utilities for making practical time series forecasts.
about the book
Time Series Forecasting in Python teaches you to apply time series forecasting and get immediate, meaningful predictions. You'll learn both traditional statistical and new deep learning models for time series forecasting, all fully illustrated with Python source code. Test your skills with hands-on projects for forecasting air travel, volume of drug prescriptions, and the earnings of Johnson & Johnson. By the time you're done, you'll be ready to build accurate and insightful forecasting models with tools from the Python ecosystem.
Table of Contents
Part 1: Time Waits for No One. 1 Understanding Time Series Forecasting 2 A Naïve Prediction of the Future 3 Going on a Random Walk Part 2: Forecasting with Statistical Models. 4 Modeling a Moving Average Process 5 Modeling an Autoregressive Process 6 Modeling Complex Time Series 7 Forecasting Non-Stationary Time Series 8 Accounting for Seasonality 9 Adding External Variables to Our Model 10 Forecasting Multiple Time Series 11 Capstone: Forecasting the Number of Antidiabetic Drug Prescriptions in Australia Part 3: Large-Scale Forecasting with Deep Learning. 12 Introducing Deep Learning for Time Series Forecasting 13 Data Windowing and Creating Baselines for Deep Learning 14 Baby Steps with Deep Learning 15 Remembering the Past with LSTM 16 Filtering Our Time Series with CNN 17 Using Predictions to Make More Predictions 18 Capstone: Forecasting the Electric Power Consumption of a Household Part 4: Automating Forecasting at Scale. 19 Automating Time Series Forecasting with Prophet 20 Capstone: Forecasting the Monthly Average Retail Price of Steak in Canada 21 Going Above and Beyond Appendix A: Installation Instructions
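For a flavour of the autoregressive modeling covered in Part 2, here is a minimal NumPy sketch (our own, not from the book) that fits an AR(1) model by least squares and produces a one-step-ahead forecast:

```python
# Minimal AR(1) forecasting sketch in NumPy (illustrative, not the book's code).
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: y_t = 0.8 * y_{t-1} + noise.
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Estimate the AR coefficient by least squares on lagged pairs.
phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# One-step-ahead forecast from the last observation.
print(f"estimated phi = {phi:.3f}, forecast = {phi * y[-1]:.3f}")
```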
£41.39
Manning Publications Bayesian Optimization in Action
Book Synopsis
Apply advanced techniques for optimising machine learning processes. For machine learning practitioners confident in maths and statistics, Bayesian Optimization in Action shows you how to optimise hyperparameter tuning, A/B testing, and other aspects of the machine learning process by applying cutting-edge Bayesian techniques. Using clear language, it helps you pinpoint the best configuration for your machine-learning models with speed and accuracy. With a range of illustrations and concrete examples, this book proves that Bayesian optimisation doesn't have to be difficult! Key features include:
• Train Gaussian processes on both sparse and large data sets
• Combine Gaussian processes with deep neural networks to make them flexible and expressive
• Find the most successful strategies for hyperparameter tuning
• Navigate a search space and identify high-performing regions
• Apply Bayesian optimisation to practical use cases such as cost-constrained, multi-objective, and preference optimisation
• Use PyTorch, GPyTorch, and BoTorch to implement Bayesian optimisation
You will get in-depth insights into how Bayesian optimisation works and learn how to implement it with cutting-edge Python libraries. The book's easy-to-reuse code samples will let you hit the ground running by plugging them straight into your own projects!
About the technology
Experimenting in science and engineering can be costly and time-consuming, especially without a reliable way to narrow down your choices. Bayesian optimisation helps you identify optimal configurations to pursue in a search space. It uses a Gaussian process and machine learning techniques to model an objective function and quantify the uncertainty of predictions. Whether you're tuning machine learning models, recommending products to customers, or engaging in research, Bayesian optimisation can help you make better decisions faster.
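To illustrate the Gaussian-process machinery the book builds on, here is a bare-bones NumPy sketch of GP posterior prediction with an RBF kernel. This is our own simplified example under stated assumptions; the book itself works with GPyTorch and BoTorch:

```python
# Bare-bones Gaussian process posterior (RBF kernel) in NumPy.
# Illustrative sketch only; the book uses GPyTorch/BoTorch in practice.
import numpy as np

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between two 1-D input arrays.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

X = np.array([0.0, 1.0, 2.5])          # observed inputs
y = np.sin(X)                          # observed values
Xs = np.linspace(0, 3, 7)              # test inputs

K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
Ks = rbf(Xs, X)

# Posterior mean and variance of the GP at the test points.
mean = Ks @ np.linalg.solve(K, y)
var = rbf(Xs, Xs).diagonal() - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))

print(mean.round(3))
print(var.round(3))                    # uncertainty shrinks near observed points
```

That posterior uncertainty is exactly what Bayesian optimisation exploits when deciding which configuration to try next.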
£34.49
Harvard Business Review Press HBR's 10 Must Reads on AI
Book Synopsis
The next generation of AI is here—use it to lead your business forward. If you read nothing else on artificial intelligence and machine learning, read these 10 articles. We've combed through hundreds of Harvard Business Review articles and selected the most important ones to help you understand the future direction of AI, bring your AI initiatives to scale, and use AI to transform your organization. This book will inspire you to:
• Create a new AI strategy
• Learn to work with intelligent robots
• Get more from your marketing AI
• Be ready for ethical and regulatory challenges
• Understand how generative AI is game changing
• Stop tinkering with AI and go all in
This collection of articles includes "Competing in the Age of AI," by Marco Iansiti and Karim R. Lakhani; "How to Win with Machine Learning," by Ajay Agrawal, Joshua Gans, and Avi Goldfarb; "Developing a Digital Mindset," by Tsedal Neeley and Paul Leonardi; "Learning to Work with Intelligent Machines," by Matt Beane; "Getting AI to Scale," by Tim Fountaine, Brian McCarthy, and Tamim Saleh; "Why You Aren't Getting More from Your Marketing AI," by Eva Ascarza, Michael Ross, and Bruce G. S. Hardie; "The Pitfalls of Pricing Algorithms," by Marco Bertini and Oded Koenigsberg; "A Smarter Strategy for Using Robots," by Ben Armstrong and Julie Shah; "Why You Need an AI Ethics Committee," by Reid Blackman; "Robots Need Us More Than We Need Them," by H. James Wilson and Paul R. Daugherty; "Stop Tinkering with AI," by Thomas H. Davenport and Nitin Mittal; and "ChatGPT Is a Tipping Point for AI," by Ethan Mollick.
HBR's 10 Must Reads paperback series is the definitive collection of books for new and experienced leaders alike. Leaders looking for the inspiration that big ideas provide, both to accelerate their own growth and that of their companies, should look no further. HBR's 10 Must Reads series focuses on the core topics that every ambitious manager needs to know: leadership, strategy, change, managing people, and managing yourself. Harvard Business Review has sorted through hundreds of articles and selected only the most essential reading on each topic. Each title includes timeless advice that will be relevant regardless of an ever-changing business environment.
£16.14
The Pragmatic Programmers Genetic Algorithms and Machine Learning for Programmers
Book Synopsis
Self-driving cars, natural language recognition, and online recommendation engines are all possible thanks to machine learning. Now you can create your own genetic algorithms, nature-inspired swarms, Monte Carlo simulations, cellular automata, and clusters. Learn how to test your ML code and dive into even more advanced topics. If you are a beginner-to-intermediate programmer keen to understand machine learning, this book is for you. Discover machine learning algorithms using a handful of self-contained recipes. Build a repertoire of algorithms, discovering terms and approaches that apply generally. Bake intelligence into your algorithms, guiding them to discover good solutions to problems. In this book, you will:
• Use heuristics and design fitness functions
• Build genetic algorithms
• Make nature-inspired swarms with ants, bees, and particles
• Create Monte Carlo simulations
• Investigate cellular automata
• Find minima and maxima, using hill climbing and simulated annealing
• Try selection methods, including tournament and roulette wheels
• Learn about heuristics, fitness functions, metrics, and clusters
• Test your code and get inspired to try new problems
• Work through scenarios to code your way out of a paper bag, an important skill for any competent programmer
• See how the algorithms explore and learn by creating visualizations of each problem
• Get inspired to design your own machine learning projects and become familiar with the jargon
What You Need: Code in C++ (>= C++11), Python (2.x or 3.x), and JavaScript (using the HTML5 canvas). The book also uses matplotlib and some open source libraries, including SFML, Catch, and Cosmic-Ray. These plotting and testing libraries are not required, but their use will give you a fuller experience. Armed with just a text editor and a compiler/interpreter for your language of choice, you can still code along from the general algorithm descriptions.
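For example, roulette-wheel selection, one of the selection methods listed above, can be sketched in a few lines of Python (our own illustration, not code from the book):

```python
# Roulette-wheel (fitness-proportionate) selection sketch.
# Illustrative only; the book covers this alongside tournament selection.
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point rounding

pop = ["a", "b", "c", "d"]
fit = [1.0, 2.0, 3.0, 4.0]
counts = {p: 0 for p in pop}
for _ in range(10000):
    counts[roulette_select(pop, fit)] += 1
print(counts)  # roughly proportional to fitness: 10%, 20%, 30%, 40%
```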
£35.14
Springer International Publishing AG Deep Learning: Foundations and Concepts
Book Synopsis
This book offers a comprehensive introduction to the central ideas that underpin deep learning. It is intended both for newcomers to machine learning and for those already experienced in the field. Covering key concepts relating to contemporary architectures and techniques, this essential book equips readers with a robust foundation for potential future specialization. The field of deep learning is undergoing rapid evolution, and therefore this book focusses on ideas that are likely to endure the test of time. The book is organized into numerous bite-sized chapters, each exploring a distinct topic, and the narrative follows a linear progression, with each chapter building upon content from its predecessors. This structure is well-suited to teaching a two-semester undergraduate or postgraduate machine learning course, while remaining equally relevant to those engaged in active research or in self-study. A full understanding of machine learning requires some mathematical background and so the book includes a self-contained introduction to probability theory. However, the focus of the book is on conveying a clear understanding of ideas, with emphasis on the real-world practical value of techniques rather than on abstract theory. Complex concepts are therefore presented from multiple complementary perspectives including textual descriptions, diagrams, mathematical formulae, and pseudo-code.
Chris Bishop is a Technical Fellow at Microsoft and is the Director of Microsoft Research AI4Science. He is a Fellow of Darwin College Cambridge, a Fellow of the Royal Academy of Engineering, and a Fellow of the Royal Society. Hugh Bishop is an Applied Scientist at Wayve, a deep learning autonomous driving company in London, where he designs and trains deep neural networks. He completed his MPhil in Machine Learning and Machine Intelligence at Cambridge University.
"Chris Bishop wrote a terrific textbook on neural networks in 1995 and has a deep knowledge of the field and its core ideas. His many years of experience in explaining neural networks have made him extremely skillful at presenting complicated ideas in the simplest possible way and it is a delight to see these skills applied to the revolutionary new developments in the field." -- Geoffrey Hinton
"With the recent explosion of deep learning and AI as a research topic, and the quickly growing importance of AI applications, a modern textbook on the topic was badly needed. The 'New Bishop' masterfully fills the gap, covering algorithms for supervised and unsupervised learning, modern deep learning architecture families, as well as how to apply all of this to various application areas." -- Yann LeCun
"This excellent and very educational book will bring the reader up to date with the main concepts and advances in deep learning with a solid anchoring in probability. These concepts are powering current industrial AI systems and are likely to form the basis of further advances towards artificial general intelligence." -- Yoshua Bengio
Table of Contents
Preface 3 1 The Deep Learning Revolution 19 1.1 The Impact of Deep Learning 20 1.1.1 Medical diagnosis 20 1.1.2 Protein structure 21 1.1.3 Image synthesis 22 1.1.4 Large language models 23 1.2 A Tutorial Example 24 1.2.1 Synthetic data 24 1.2.2 Linear models 26 1.2.3 Error function 26 1.2.4 Model complexity 27 1.2.5 Regularization 30 1.2.6 Model selection 32 1.3 A Brief History of Machine Learning 34 1.3.1 Single-layer networks 35 1.3.2 Backpropagation 36 1.3.3 Deep networks 38 2 Probabilities 41 2.1 The Rules of Probability 43 2.1.1 A medical screening example 43 2.1.2 The sum and product rules 44 2.1.3 Bayes' theorem 46 2.1.4 Medical screening revisited 48 2.1.5 Prior and posterior probabilities 49 2.1.6 Independent variables 49 2.2 Probability Densities 50 2.2.1 Example distributions 51 2.2.2 Expectations and covariances 52 2.3 The Gaussian Distribution 54 2.3.1 Mean and variance 55 2.3.2 Likelihood function 55 2.3.3 Bias of maximum likelihood 57 2.3.4 Linear regression 58 2.4 Transformation of Densities 60 2.4.1 Multivariate distributions 62 2.5 Information Theory 64 2.5.1 Entropy 64 2.5.2 Physics perspective 65 2.5.3 Differential entropy 67 2.5.4 Maximum entropy 68 2.5.5 Kullback–Leibler divergence 69 2.5.6 Conditional entropy 71 2.5.7 Mutual information 72 2.6 Bayesian Probabilities 72 2.6.1 Model parameters 73 2.6.2 Regularization 74 2.6.3 Bayesian machine learning 75 Exercises 76 3 Standard Distributions 83 3.1 Discrete Variables 84 3.1.1 Bernoulli distribution 84 3.1.2 Binomial distribution 85 3.1.3 Multinomial distribution 86 3.2 The Multivariate Gaussian 88 3.2.1 Geometry of the Gaussian 89 3.2.2 Moments 92 3.2.3 Limitations 93 3.2.4 Conditional distribution 94 3.2.5 Marginal distribution 97 3.2.6 Bayes' theorem 99 3.2.7 Maximum likelihood 102 3.2.8 Sequential estimation 103 3.2.9 Mixtures of Gaussians 104 3.3 Periodic Variables 107 3.3.1 Von Mises distribution 107 3.4 The Exponential Family 112 3.4.1 Sufficient statistics 115 3.5 Nonparametric Methods 116 3.5.1 Histograms 116 3.5.2 Kernel densities 118 3.5.3 Nearest-neighbours 121 Exercises 123 4 Single-layer Networks: Regression 129 4.1 Linear Regression 130 4.1.1 Basis functions 130 4.1.2 Likelihood function 132 4.1.3 Maximum likelihood 133 4.1.4 Geometry of least squares 135 4.1.5 Sequential learning 135 4.1.6 Regularized least squares 136 4.1.7 Multiple outputs 137 4.2 Decision theory 138 4.3 The Bias–Variance Trade-off 141 Exercises 146 5 Single-layer Networks: Classification 149 5.1 Discriminant Functions 150 5.1.1 Two classes 150 5.1.2 Multiple classes 152 5.1.3 1-of-K coding 153 5.1.4 Least squares for classification 154 5.2 Decision Theory 156 5.2.1 Misclassification rate 157 5.2.2 Expected loss 158 5.2.3 The reject option 160 5.2.4 Inference and decision 161 5.2.5 Classifier accuracy 165 5.2.6 ROC curve 166 5.3 Generative Classifiers 168 5.3.1 Continuous inputs 170 5.3.2 Maximum likelihood solution 171 5.3.3 Discrete features 174 5.3.4 Exponential family 174 5.4 Discriminative Classifiers 175 5.4.1 Activation functions 176 5.4.2 Fixed basis functions 176 5.4.3 Logistic regression 177 5.4.4 Multi-class logistic regression 179 5.4.5 Probit regression 181 5.4.6 Canonical link functions 182 Exercises 184 6 Deep Neural Networks 189 6.1 Limitations of Fixed Basis Functions 190 6.1.1 The curse of dimensionality 190 6.1.2 High-dimensional spaces 193 6.1.3 Data manifolds 194 6.1.4 Data-dependent basis functions 196 6.2 Multilayer Networks 198 6.2.1 Parameter matrices 199 6.2.2 Universal approximation 199 6.2.3 Hidden unit activation functions 200 6.2.4 Weight-space symmetries 203 6.3 Deep Networks 204 6.3.1 Hierarchical representations 205 6.3.2 Distributed representations 205 6.3.3 Representation learning 206 6.3.4 Transfer learning 207 6.3.5 Contrastive learning 209 6.3.6 General network architectures 211 6.3.7 Tensors 212 6.4 Error Functions 212 6.4.1 Regression 212 6.4.2 Binary classification 214 6.4.3 Multiclass classification 215 6.5 Mixture Density Networks 216 6.5.1 Robot kinematics example 216 6.5.2 Conditional mixture distributions 217 6.5.3 Gradient optimization 219 6.5.4 Predictive distribution 220 Exercises 222 7 Gradient Descent 227 7.1 Error Surfaces 228 7.1.1 Local quadratic approximation 229 7.2 Gradient Descent Optimization 231 7.2.1 Use of gradient information 232 7.2.2 Batch gradient descent 232 7.2.3 Stochastic gradient descent 232 7.2.4 Mini-batches 234 7.2.5 Parameter initialization 234 7.3 Convergence 236 7.3.1 Momentum 238 7.3.2 Learning rate schedule 240 7.3.3 RMSProp and Adam 241 7.4 Normalization 242 7.4.1 Data normalization 244 7.4.2 Batch normalization 245 7.4.3 Layer normalization 247 Exercises 248 8 Backpropagation 251 8.1 Evaluation of Gradients 252 8.1.1 Single-layer networks 252 8.1.2 General feed-forward networks 253 8.1.3 A simple example 256 8.1.4 Numerical differentiation 257 8.1.5 The Jacobian matrix 258 8.1.6 The Hessian matrix 260 8.2 Automatic Differentiation 262 8.2.1 Forward-mode automatic differentiation 264 8.2.2 Reverse-mode automatic differentiation 267 Exercises 268 9 Regularization 271 9.1 Inductive Bias 272 9.1.1 Inverse problems 272 9.1.2 No free lunch theorem 273 9.1.3 Symmetry and invariance 274 9.1.4 Equivariance 277 9.2 Weight Decay 278 9.2.1 Consistent regularizers 280 9.2.2 Generalized weight decay 282 9.3 Learning Curves 284 9.3.1 Early stopping 284 9.3.2 Double descent 286 9.4 Parameter Sharing 288 9.4.1 Soft weight sharing 289 9.5 Residual Connections 292 9.6 Model Averaging 295 9.6.1 Dropout 297 Exercises 299 10 Convolutional Networks 305 10.1 Computer Vision 306 10.1.1 Image data 307 10.2 Convolutional Filters 308 10.2.1 Feature detectors 308 10.2.2 Translation equivariance 309 10.2.3 Padding 312 10.2.4 Strided convolutions 312 10.2.5 Multi-dimensional convolutions 313 10.2.6 Pooling 314 10.2.7 Multilayer convolutions 316 10.2.8 Example network architectures 317 10.3 Visualizing Trained CNNs 320 10.3.1 Visual cortex 320 10.3.2 Visualizing trained filters 321 10.3.3 Saliency maps 323 10.3.4 Adversarial attacks 324 10.3.5 Synthetic images 326 10.4 Object Detection 326 10.4.1 Bounding boxes 327 10.4.2 Intersection-over-union 328 10.4.3 Sliding windows 329 10.4.4 Detection across scales 331 10.4.5 Non-max suppression 332 10.4.6 Fast region CNNs 332 10.5 Image Segmentation 333 10.5.1 Convolutional segmentation 333 10.5.2 Up-sampling 334 10.5.3 Fully convolutional networks 336 10.5.4 The U-net architecture 337 10.6 Style Transfer 338 Exercises 340 11 Structured Distributions 343 11.1 Graphical Models 344 11.1.1 Directed graphs 344 11.1.2 Factorization 345 11.1.3 Discrete variables 347 11.1.4 Gaussian variables 350 11.1.5 Binary classifier 352 11.1.6 Parameters and observations 352 11.1.7 Bayes' theorem 354 11.2 Conditional Independence 355 11.2.1 Three example graphs 356 11.2.2 Explaining away 359 11.2.3 D-separation 361 11.2.4 Naive Bayes 362 11.2.5 Generative models 364 11.2.6 Markov blanket 365 11.2.7 Graphs as filters 366 11.3 Sequence Models 367 11.3.1 Hidden variables 370 Exercises 371 12 Transformers 375 12.1 Attention 376 12.1.1 Transformer processing 378 12.1.2 Attention coefficients 379 12.1.3 Self-attention 380 12.1.4 Network parameters 381 12.1.5 Scaled self-attention 384 12.1.6 Multi-head attention 384 12.1.7 Transformer layers 386 12.1.8 Computational complexity 388 12.1.9 Positional encoding 389 12.2 Natural Language 392 12.2.1 Word embedding 393 12.2.2 Tokenization 395 12.2.3 Bag of words 396 12.2.4 Autoregressive models 397 12.2.5 Recurrent neural networks 398 12.2.6 Backpropagation through time 399 12.3 Transformer Language Models 400 12.3.1 Decoder transformers 401 12.3.2 Sampling strategies 404 12.3.3 Encoder transformers 406 12.3.4 Sequence-to-sequence transformers 408 12.3.5 Large language models 408 12.4 Multimodal Transformers 412 12.4.1 Vision transformers 413 12.4.2 Generative image transformers 414 12.4.3 Audio data 417 12.4.4 Text-to-speech 418 12.4.5 Vision and language transformers 420 Exercises 421 13 Graph Neural Networks 425 13.1 Machine Learning on Graphs 427 13.1.1 Graph properties 428 13.1.2 Adjacency matrix 428 13.1.3 Permutation equivariance 429 13.2 Neural Message-Passing 430 13.2.1 Convolutional filters 431 13.2.2 Graph convolutional networks 432 13.2.3 Aggregation operators 434 13.2.4 Update operators 436 13.2.5 Node classification 437 13.2.6 Edge classification 438 13.2.7 Graph classification 438 13.3 General Graph Networks 438 13.3.1 Graph attention networks 439 13.3.2 Edge embeddings 439 13.3.3 Graph embeddings 440 13.3.4 Over-smoothing 440 13.3.5 Regularization 441 13.3.6 Geometric deep learning 442 Exercises 443 14 Sampling 447 14.1 Basic Sampling Algorithms 448 14.1.1 Expectations 448 14.1.2 Standard distributions 449 14.1.3 Rejection sampling 451 14.1.4 Adaptive rejection sampling 453 14.1.5 Importance sampling 455 14.1.6 Sampling-importance-resampling 457 14.2 Markov Chain Monte Carlo 458 14.2.1 The Metropolis algorithm 459 14.2.2 Markov chains 460 14.2.3 The Metropolis–Hastings algorithm 463 14.2.4 Gibbs sampling 464 14.2.5 Ancestral sampling 468 14.3 Langevin Sampling 469 14.3.1 Energy-based models 470 14.3.2 Maximizing the likelihood 471 14.3.3 Langevin dynamics 472 Exercises 474 15 Discrete Latent Variables 477 15.1 K-means Clustering 478 15.1.1 Image segmentation 482 15.2 Mixtures of Gaussians 484 15.2.1 Likelihood function 486 15.2.2 Maximum likelihood 488 15.3 Expectation–Maximization Algorithm 492 15.3.1 Gaussian mixtures 496 15.3.2 Relation to K-means 498 15.3.3 Mixtures of Bernoulli distributions 499 15.4 Evidence Lower Bound 503 15.4.1 EM revisited 504 15.4.2 Independent and identically distributed data 506 15.4.3 Parameter priors 507 15.4.4 Generalized EM 507 15.4.5 Sequential EM 508 Exercises 508 16 Continuous Latent Variables 513 16.1 Principal Component Analysis 515 16.1.1 Maximum variance formulation 515 16.1.2 Minimum-error formulation 517 16.1.3 Data compression 519 16.1.4 Data whitening 520 16.1.5 High-dimensional data 522 16.2 Probabilistic Latent Variables 524 16.2.1 Generative model 524 16.2.2 Likelihood function 525 16.2.3 Maximum likelihood 527 16.2.4 Factor analysis 531 16.2.5 Independent component analysis 532 16.2.6 Kalman filters 533 16.3 Evidence Lower Bound 534 16.3.1 Expectation maximization 536 16.3.2 EM for PCA 537 16.3.3 EM for factor analysis 538 16.4 Nonlinear Latent Variable Models 540 16.4.1 Nonlinear manifolds 540 16.4.2 Likelihood function 542 16.4.3 Discrete data 544 16.4.4 Four approaches to generative modelling 545 Exercises 545 17 Generative Adversarial Networks 551 17.1 Adversarial Training 552 17.1.1 Loss function 553 17.1.2 GAN training in practice 554 17.2 Image GANs 557 17.2.1 CycleGAN 557 Exercises 562 18 Normalizing Flows 565 18.1 Coupling Flows 567 18.2 Autoregressive Flows 570 18.3 Continuous Flows 572 18.3.1 Neural differential equations 572 18.3.2 Neural ODE backpropagation 573 18.3.3 Neural ODE flows 575 Exercises 577 19 Autoencoders 581 19.1 Deterministic Autoencoders 582 19.1.1 Linear autoencoders 582 19.1.2 Deep autoencoders 583 19.1.3 Sparse autoencoders 584 19.1.4 Denoising autoencoders 585 19.1.5 Masked autoencoders 585 19.2 Variational Autoencoders 587 19.2.1 Amortized inference 590 19.2.2 The reparameterization trick 592 Exercises 596 20 Diffusion Models 599 20.1 Forward Encoder 600 20.1.1 Diffusion kernel 601 20.1.2 Conditional distribution 602 20.2 Reverse Decoder 603 20.2.1 Training the decoder 605 20.2.2 Evidence lower bound 606 20.2.3 Rewriting the ELBO 607 20.2.4 Predicting the noise 609 20.2.5 Generating new samples 610 20.3 Score Matching 612 20.3.1 Score loss function 613 20.3.2 Modified score loss 614 20.3.3 Noise variance 615 20.3.4 Stochastic differential equations 616 20.4 Guided Diffusion 617 20.4.1 Classifier guidance 618 20.4.2 Classifier-free guidance 618 Exercises 621 Appendix A Linear Algebra 627 A.1 Matrix Identities 627 A.2 Traces and Determinants 628 A.3 Matrix Derivatives 629 A.4 Eigenvectors
. . . . . . . . . . . . . . . . . . . . . . . . . 630 Appendix B Calculus of Variations 635 Appendix C Lagrange Multipliers 639 Bibliography 643 Index 659
£62.99
Cambridge University Press Inference and Learning from Data
Book SynopsisThis extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. The first volume, Foundations, establishes core topics in inference and learning, and prepares readers for studying their practical application. The second volume, Inference, introduces readers to cutting-edge techniques for inferring unknown variables and quantities. The final volume, Learning, provides a rigorous introduction to state-of-the-art learning methods. A consistent structure and pedagogy are employed throughout all three volumes to reinforce student understanding, with over 1280 end-of-chapter problems (including solutions for instructors), over 600 figures, over 470 solved examples, datasets and downloadable Matlab code. Unique in its scale and depth, this textbook sequence i
£199.50
McGraw-Hill Education Fuzzy Logic Applications in Artificial
Book SynopsisFuzzy logic principles, practices, and real-world applications. This hands-on guide offers clear explanations of fuzzy logic along with practical applications and real-world examples. Written by an award-winning engineer, Fuzzy Logic: Applications in Artificial Intelligence, Big Data, and Machine Learning is aimed at improving competence and motivation in students and professionals alike. Inside, you will discover how to apply fuzzy logic in the context of pervasive digitization and big data across emerging technologies which require a very different man-machine relationship than the ones previously used in engineering, science, economics, and social sciences. Applications covered include intelligent energy systems with demand response, smart homes, electrification of transportation, supply chain efficiencies, smart cities, e-commerce, education, healthcare, and decarbonization. Serves as a classroom guide and as an on-the-job resource.
£72.89
APress Microsoft Conversational AI Platform for
Book SynopsisIntermediate-Advanced user levelTable of Contents Chapter 1: Introduction to the Microsoft Conversational AI Platform Chapter 2: Introduction to the Microsoft Bot Framework Chapter 3: Introduction to Azure Cognitive Services Chapter 4: Design Principles of a Chatbot Chapter 5: Building a Chatbot Chapter 6: Testing a Chatbot Chapter 7: Publishing a Chatbot Chapter 8: Connecting a Chatbot with Channels
£37.49
APress Hands-on Azure Cognitive Services
Book SynopsisIntermediate-Advanced user levelTable of ContentsChapter 1: The Power of Cognitive Services Chapter Goal: This first chapter sets up the values, reasons, and impacts you can achieve through Microsoft Azure Cognitive Services. It provides an overview of the features and capabilities. The chapter also introduces you to our case study and structures that we’ll use throughout the rest of the book. No of pages: 14 Sub - Topics 1. Overview of Azure Cognitive Services 2. Understanding the Use Cases 3. Exploring the Cognitive Services APIs: Vision, Speech, Language, Search, and Decision 4. Overview of Machine Learning 5. The COVID-19 SmartApp Scenario Chapter 2: The Azure Portal for Cognitive Services Chapter Goal: The aim of this chapter is to get started with Microsoft Cognitive Services by exploring the Azure Portal. This chapter will explore the Azure Portal for Cognitive Services and some of its common features. Finally, the chapter will take you inside the Azure Marketplace for Bot Service, Cognitive Services, and Machine Learning. No of pages: 18 Sub - Topics 1. Getting started with Azure Portal and Microsoft Cognitive Services 2. Azure Marketplace – an overview of AI + Machine Learning 3. Getting started with Azure Bot Service 4. Understanding software development kits (SDKs) – to get started with a favorite programming language [Ref. https://docs.microsoft.com/en-us/azure/cognitive-services/] 5. Setting up your Visual Studio template Chapter 3: Vision – Identify and Analyze Images and Videos Chapter Goal: This chapter will provide insight into Computer Vision with full hands-on examples, where we build an application to analyze an image. There are two features currently in preview that this chapter will also cover: Form Recognizer and Ink Recognizer. No of pages: 24 Sub - Topics 1. Understanding the Vision API with Computer Vision 2. Analyzing images 3. Identifying a face 4. Understanding the working behavior of vision APIs for video analysis 5. Recognizing forms, tables, and ink 6. Summary of the Vision API Chapter 4: Language – Gain an Understanding of Unstructured Text and Models Chapter Goal: This chapter will provide insight into NLP (natural language processing) by evaluating user sentiments. The chapter will also touch on preview features – including Immersive Reader. No of pages: 20 Sub - Topics 1. Creating and understanding language models 2. Training language models 3. Translating text to create your own translator application 4. Using QnA Maker to host conversational discussions about your data 5. Using Immersive Reader to understand text via audio and visual cues 6. Summary of the Language API Chapter 5: Speech – Talk to Your Application Chapter Goal: This chapter will provide insight into speech services by translating text to speech and vice versa, enabling a speaker, and translating into multiple languages. The chapter will also touch on a preview feature – Speaker Recognition. The Bing speech feature will not be covered as it is retiring soon. No of pages: 18 Sub - Topics 1. Understanding speech and speech services 2. Converting speech into text and vice versa 3. Translating speech in real time in your application 4. Identifying the speaker from speech using Speaker Recognition 5. Customizing speech 6. Summary of the Speech API Chapter 6: Decision – Make Smarter Decisions in Your Applications Chapter Goal: This chapter will provide insight into decision services by adding a content moderation facility to the application. The chapter will also touch on a preview feature – Anomaly Detector. No of pages: 17 Sub - Topics 1. Understanding the decision service and decision APIs 2. Creating an automated Content Moderator application 3. Creating personalized experiences with the Personalizer 4. Identifying future problems with the Anomaly Detector 5. Summary of the Decision API Chapter 7: Search – Add Search Capabilities to Your Application Chapter Goal: This chapter will provide insight into the Bing Search APIs by adding various search functionalities to the application. No of pages: 18 Sub - Topics 1. Understanding search and the Bing Search APIs 2. Creating a smart application by adding Bing Search 3. Assisting users with auto-suggestions 4. Summary of the Search API Chapter 8: Deploy and Host Services Using Containers Chapter Goal: This chapter will provide complete insight into Cognitive Services containers. In this chapter, we will highlight the key features by creating an application, which will be deployed using Docker. No of pages: 22 Sub - Topics 1. Getting started with Cognitive Services containers 2. Understanding deployment and how to deploy and run a container on an Azure container instance 3. Understanding Docker Compose and using it to deploy multiple containers 4. Understanding Azure Kubernetes Service and how to deploy an application to Azure Kubernetes Service Chapter 9: Azure Bot Service Chapter Goal: This chapter will provide insight into Azure Bot Service by creating the COVID-19 Bot. No of pages: 24 Sub - Topics 1. Understanding Azure Bot services 2. Creating a COVID-19 Bot using Azure Bot Service 3. Using the Azure Bot Builder SDK. Reference: https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-sdk-quickstart?view=azure-bot-service-4.0 Chapter 10: Azure Machine Learning Chapter Goal: This chapter will lead the reader to fully understand Azure Machine Learning and how to use it. You can train your application to learn without being explicitly programmed. We will include forecasts and predictions. The chapter will cover a preview feature – the Azure Machine Learning designer. No of pages: 22 Sub - Topics 1. Building models with no code, using the Azure Machine Learning designer 2. Publishing to Jupyter notebooks 3. Building ML models in Python or R 4. The ML Visual Studio Code extension 5. Commanding the ML CLI 6. Summary of ML
£48.74
APress Synthetic Data for Deep Learning
Book SynopsisData is the indispensable fuel that drives the decision making of everything from governments, to major corporations, to sports teams. Its value is almost beyond measure. But what if that data is either unavailable or problematic to access? That's where synthetic data comes in. This book will show you how to generate synthetic data and use it to maximum effect. Synthetic Data for Deep Learning begins by tracing the need for and development of synthetic data before delving into the role it plays in machine learning and computer vision. You'll gain insight into how synthetic data can be used to study the benefits of autonomous driving systems and to make accurate predictions about real-world data. You'll work through practical examples of synthetic data generation using Python and R, placing its purpose and methods in a real-world context. Generative Adversarial Networks (GANs) are also covered in detail, explaining how they work and their potential applications. Table of Contents Chapter 1: Introduction to Data 40 pages Chapter Goal: The book section entitled "Data" aims to provide readers with information on the history, definition, and future of data storage, as well as the role that synthetic data can play in the field of computer vision. 1.1. The History of Data 1.3. Definitions of Synthetic Data 1.4. The Lifecycle of Data 1.5. The Future of Data Storage 1.6. Synthetic Data and Metaverse 1.7. Computer Vision 1.8. Generating an Artificial Neural Network Using Package “nnet” in R 1.9. Understanding of Visual Scenes 1.10. Segmentation Problem 1.11. Accuracy Problems 1.12. Generative Pre-trained Transformer 3 (GPT-3) Chapter 2: Synthetic Data 40 pages Chapter Goal: The purpose of this chapter is to provide information about synthetic data and how it can be used to benefit autonomous driving systems. Synthetic data is a term used to describe data that has been generated by a computer. 2.1. Synthetic Data 2.2. A Brief History of Synthetic Data 2.3. Types of Synthetic Data 2.4. Benefits and Challenges of Synthetic Data 2.5. Generating Synthetic Data in a Simple Way 2.6. An Example of Biased Synthetic Data Generation 2.7. Domain Transfer 2.8. Domain Adaptation 2.9. Domain Randomization 2.10. Using Video Games to Create Synthetic Data 2.11. Synthetic Data and Autonomous Driving System 2.11.1. Perception 2.11.2. Localization 2.11.3. Prediction 2.11.4. Decision Making 2.12. Simulation in Autonomous Vehicle Companies 2.13. How to Make Automatic Data Labeling? 2.14. Is Real-World Experience Unavoidable? 2.15. Data for Learning Medical Images 2.16. Reinforcement Learning 2.17. Self-Supervised Learning Chapter 3: Synthetic Data Generation with R 55 pages Chapter Goal: The purpose of this book section is to provide information about the content and purpose of synthetic data generation with R. Synthetic data is generated data that is used to mimic real data. There are many reasons why one might want to generate synthetic data. For example, synthetic data can be used to test data-driven models when real data is not available. Synthetic data can also be used to protect the privacy of individuals in data sets. 3.1. Basic Functions Used in Generating Synthetic Data 3.1.1. Creating a Value Vector from a Known Univariate Distribution 3.1.2. Vector Generation from a Multi-level Categorical Variable 3.1.3. Multivariate 3.1.4. Multivariate (with correlation) 3.2. Multivariate Imputation via the mice Package in R 3.2.1. Example of MICE 3.3. Augmented Data 3.4. Image Augmentation Using the torch Package 3.5. Generating Synthetic Data with the "conjurer" Package in R 3.5.1. Create a Customer 3.5.2. Create a Product 3.5.3. Creating Transactions 3.5.4. Generating Synthetic Data 3.6. Generating Synthetic Data with the "synthpop" Package in R 3.7. Copula 3.7.1. t Copula 3.7.2. Normal Copula 3.7.3. Gaussian Copula Chapter 4: GANs 15 pages Chapter Goal: This book chapter aims to provide information on the content and purpose of GANs. GANs are a type of artificial intelligence used to generate new data that resembles the training data. This is done by training a generator network to produce such data, while a discriminator network learns to distinguish the generated data from the training data. 4.1. GANs 4.2. CTGAN 4.3. SurfelGAN 4.4. Cycle GANs 4.5. SinGAN 4.6. DCGAN 4.7. medGAN 4.8. WGAN 4.9. seqGAN 4.10. Conditional GAN Chapter 5: Synthetic Data Generation with Python 40 pages Chapter Goal: The purpose of this chapter is to provide information about the methods of synthetic data generation with Python. Python is a widely used high-level programming language that is known for its ease of use and readability. It has a large standard library that covers a wide range of programming tasks. 5.1. Data Generation with a Known Distribution 5.2. Synthetic Data Generation in Regression Problems 5.3. Gaussian Noise Applied to a Regression Model 5.4. Friedman Functions and Symbolic Regression 5.5. Synthetic Data Generation for Classification and Clustering Problems 5.6. Clustering Problems 5.7. Generating Tabular Synthetic Data by Applying GANs
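For readers who want a taste of what Chapter 5 covers, a minimal Python sketch of synthetic data generation for a regression problem might look like the following; this is our illustration using scikit-learn, not code from the book, and all parameter values are arbitrary:

    # Minimal sketch: synthetic regression data with added Gaussian noise (illustrative only)
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression

    # Generate 500 samples with 5 features and built-in noise
    X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=42)

    # Add extra Gaussian noise to the targets to stress-test the model
    rng = np.random.default_rng(0)
    y_noisy = y + rng.normal(loc=0.0, scale=25.0, size=y.shape)

    model = LinearRegression().fit(X, y_noisy)
    print("R^2 on noisy synthetic data:", model.score(X, y_noisy))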
£37.49
APress Productionizing AI
Book SynopsisChapter 1: Introduction to AI & the AI Ecosystem.- Chapter 2: AI Best Practice & DataOps.- Chapter 3: Data Ingestion for AI.- Chapter 4: Machine Learning on Cloud.- Chapter 5: Neural Networks and Deep Learning.- Chapter 6: The Employer's Dream: AutoML, AutoAI and the rise of NoLo UIs.- Chapter 7: AI Full Stack: Application Development.- Chapter 8: AI Case Studies.- Chapter 9: Deploying an AI Solution (Productionizing & Containerization).- Chapter 10: Natural Language Processing.- Postscript.Table of Contents Chapter 1: Introduction to AI & the AI Ecosystem Chapter Goal: Embracing the hype and the pitfalls, introduces the reader to current and emerging trends in AI and how many businesses and organisations are struggling to get machine and deep learning operationalized. No of pages: 30 Sub - Topics: 1. The AI ecosystem 2. Applications of AI 3. AI pipelines 4. Machine learning 5. Neural networks & deep learning 6. Productionizing AI Chapter 2: AI Best Practice & DataOps Chapter Goal: Help the reader understand the wider context for AI, key stakeholders, the importance of collaboration, adaptability and re-use as well as DataOps best practice in delivering high-performance solutions. No of pages: 20 Sub - Topics: 1. Introduction to DataOps and MLOps 2. Agile development 3. Collaboration and adaptability 4. Code repositories 5. Data pipeline orchestration 6. CI / CD 7. Testing, performance evaluation & monitoring Chapter 3: Data Ingestion for AI Chapter Goal: Inform on best practice and the right (cloud) data architectures and orchestration requirements to ensure the successful delivery of an AI project. No of pages: 20 Sub - Topics: 1. Introduction to data ingestion 2. Data stores for AI 3. Data lakes, warehousing & streaming 4. Data pipeline orchestration Chapter 4: Machine Learning on Cloud Chapter Goal: Top-down ML model building from design thinking, through high-level process, data wrangling, unsupervised clustering techniques, supervised classification, regression and time series approaches before interpreting results and algorithmic performance. No of pages: 20 Sub - Topics: 1. ML fundamentals 2. EDA & data wrangling 3. Supervised & unsupervised machine learning 4. Python implementation 5. Unsupervised clustering, pattern & anomaly detection 6. Supervised classification & regression case studies: churn & retention modelling, risk engines, social media sentiment analysis 7. Time series forecasting and comparison with fbprophet Chapter 5: Neural Networks and Deep Learning Chapter Goal: Help the reader establish the right artificial neural network architecture, data orchestration and infrastructure for deep learning with TensorFlow, Keras and PyTorch on Cloud. No of pages: 40 Sub - Topics: 1. An introduction to deep learning 2. Stochastic processes for deep learning 3. Artificial neural networks 4. Deep learning tools & frameworks 5. Implementing a deep learning model 6. Tuning a deep learning model 7. Advanced topics in deep learning Chapter 6: The Employer’s Dream: AutoML, AutoAI and the rise of NoLo UIs Chapter Goal: Building on acquired ML and DL skills, learn to leverage the growing ecosystem of AutoML, AutoAI and No/Low code user interfaces. No of pages: 20 Sub - Topics: 1. AutoML 2. Optimizing the AI pipeline 3. Python-based libraries for automation 4. Case studies in Insurance, HR, FinTech & Trading, Cybersecurity and Healthcare 5. Tools for AutoAI: IBM Cloud Pak for Data, Azure Machine Learning, Google Teachable Machines Chapter 7: AI Full Stack: Application Development Chapter Goal: Starting from key business/organizational needs for AI, identify the correct solution and technologies to develop and deliver “Full Stack AI”. No of pages: 20 Sub - Topics: 1. Introduction to AI application development 2. Software for AI development 3. Key business applications of AI: • ML Apps • NLP Apps • DL Apps 4. Designing & building an AI application Chapter 8: AI Case Studies Chapter Goal: A comprehensive (multi-sector, multi-functional) look at the main AI use cases in 2022. No of pages: 20 Sub - Topics: 1. Industry case studies 2. Telco solutions 3. Retail solutions 4. Banking & financial services / fintech solutions 5. Oil & gas / energy & utilities solutions 6. Supply chain solutions 7. HR solutions 8. Healthcare solutions 9. Other case studies Chapter 9: Deploying an AI Solution (Productionizing & Containerization) Chapter Goal: A practical look at “joining the dots” with full-stack deployment of Enterprise AI on Cloud. No of pages: 20 Sub - Topics: 1. Productionizing an AI application 2. AutoML / AutoAI 3. Storage & Compute 4. Containerization 5. The final frontier…
£41.24
APress Time Series Algorithms Recipes
Book Synopsis Chapter 1: Getting Started with Time Series.- Chapter 2: Statistical Univariate Modelling.- Chapter 3: Statistical Multivariate Modelling.- Chapter 4: Machine Learning Regression-Based Forecasting.- Chapter 5: Forecasting Using Deep Learning. Table of Contents Chapter 1: Getting Started with Time Series Chapter Goal: Exploring and analyzing the time series data, and preprocessing it, which includes feature engineering for model building. No of pages: 25 Sub - Topics 1 Reading time series data 2 Data cleaning 3 EDA 4 Trend 5 Noise 6 Seasonality 7 Cyclicity 8 Feature Engineering 9 Stationarity Chapter 2: Statistical Univariate Modelling Chapter Goal: The fundamentals of time series forecasting with the use of statistical modelling methods like AR, MA, ARMA, ARIMA, etc. No of pages: 25 Sub - Topics 1 AR 2 MA 3 ARMA 4 ARIMA 5 SARIMA 6 AUTO ARIMA 7 FBProphet Chapter 3: Statistical Multivariate Modelling Chapter Goal: Implementing multivariate modelling techniques like Holt-Winters and SARIMAX. No of pages: 25 Sub - Topics: 1 Holt-Winters 2 ARIMAX 3 SARIMAX Chapter 4: Machine Learning Regression-Based Forecasting Chapter Goal: Building and comparing multiple classical ML regression algorithms for time series forecasting. No of pages: 25 Sub - Topics: 1 Random Forest 2 Decision Tree 3 LightGBM 4 XGBoost 5 SVM Chapter 5: Forecasting Using Deep Learning Chapter Goal: Implementing advanced concepts like deep learning for time series forecasting from scratch. No of pages: 25 Sub - Topics: 1 LSTM 2 ANN 3 MLP
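To give a flavor of the statistical recipes in Chapter 2, here is a minimal ARIMA sketch using statsmodels; it is our own illustration on a synthetic series, not the book's code, and the order (1, 1, 1) is an arbitrary choice:

    # Minimal ARIMA forecasting sketch (illustrative; not taken from the book)
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic random-walk series for demonstration
    rng = np.random.default_rng(1)
    series = pd.Series(np.cumsum(rng.normal(size=200)) + 50.0)

    # Fit ARIMA(p=1, d=1, q=1) and forecast the next 10 steps
    fitted = ARIMA(series, order=(1, 1, 1)).fit()
    print(fitted.forecast(steps=10))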
£22.49
APress Precision Health and Artificial Intelligence
Book SynopsisBeginning user levelTable of Contents Chapter 1: Introduction to Precision Health and Artificial Intelligence Chapter Goal: An introduction to precision health, the concepts of AI-wearables, health data and health tech and how they transform the health industry. No of pages: 15 Chapter 2: Foundations of Precision Health Chapter Goal: A deep dive into precision health including key principles and processes. No of pages: 25 Chapter 3: Data Chapter Goal: Data has been the beginning of many great products, services or ventures in health tech — explore types of data, and how they can be used. No of pages: 25 Sub - Topics: 1. Little and big data 2. Types of data 3. Wearables and IoT, genomics 4. Using data to enable precision health Chapter 4: Artificial Intelligence in Precision Health Chapter Goal: Concepts and ideas in artificial intelligence (AI) and machine learning -- including statistical approaches, visualization, human-computer interactions and evaluating health AI. No of pages: 25 Sub - Topics: 1. Statistical approaches 2. Visualization 3. Human-computer interaction 4. Evaluations of AI Chapter 5: Ethics and Regulatory Chapter Goal: An in-depth study of legal, ethical, and regulatory concepts in precision health. No of pages: 35 Sub - Topics: 1. Ethics 2. Legal 3. Regulatory concerns Chapter 6: Case Studies: The Application of Artificial Intelligence in Precision Healthcare and Medicine Chapter Goal: Applications of AI techniques and software tools. This will primarily involve exploring recent examples of AI and Machine Learning tools being specifically used to aid in clinical practice. No of pages: 25 Sub - Topics: 1. Best case examples of AI to aid clinical practice
£37.49
APress Exploring the Power of ChatGPT
Book SynopsisLearn how to use the large-scale natural language processing model developed by OpenAI: ChatGPT. This book explains how ChatGPT uses machine learning to autonomously generate text based on user input and explores the significant implications for human communication and interaction. Author Eric Sarrion examines various aspects of ChatGPT, including its internal workings, use in computer projects, and impact on employment and society. He also addresses long-term perspectives for ChatGPT, including possible future advancements, adoption challenges, and considerations for ethical and responsible use. The book starts with an introduction to ChatGPT covering its versions, application areas, how it works with neural networks, NLP, and its advantages and limitations. Next, you'll be introduced to applications and training development projects using ChatGPT, as well as best practices for it. You'll then explore the ethical implications of ChatGPT, such as potential biases and risks, regulation. Table of Contents Part 1: Introduction to ChatGPT 1 - What Is ChatGPT? Describes what ChatGPT is, its history... • 1.1 Definition of ChatGPT • 1.2 ChatGPT History • 1.3 Versions of ChatGPT • 1.4 Application areas of ChatGPT 2 - How Does ChatGPT Work? Describes how it works inside • 2.1 Neural networks • 2.2 Natural language processing techniques used by ChatGPT • 2.3 The data used to train ChatGPT • 2.4 The advantages and limitations of ChatGPT 3 - Applications of ChatGPT Describes what you can do with ChatGPT • 3.1 Chatbots and virtual assistants • 3.2 Machine translation apps • 3.3 Content writing apps • 3.4 Applications in information retrieval Part 2: How to Train and Use ChatGPT 4 - ChatGPT Training Describes how to build the models used by ChatGPT • 4.1 Data collection and preparation • 4.2 ChatGPT training settings • 4.3 Training tools available • 4.4 Techniques to improve ChatGPT performance 5 - Using ChatGPT in Development Projects Describes how to use ChatGPT in a web page with an API • 5.1 Libraries and frameworks for ChatGPT • 5.2 Examples of projects using ChatGPT • 5.3 Techniques to integrate ChatGPT into applications • 5.4 Use ChatGPT with the OpenAI API • 5.5 Use ChatGPT with a voice interface • 5.6 Methods to evaluate the performance of ChatGPT 6 - Best Practices for Using ChatGPT Describes how to optimize ChatGPT • 6.1 Strategies to ensure the quality of input data • 6.2 Techniques to avoid bias in data • 6.3 Methods to optimize ChatGPT performance • 6.4 ChatGPT maintenance tips Part 3: The Ethical Implications of ChatGPT 7 - Potential Biases and Risks of ChatGPT Describes biases and risks of ChatGPT • 7.1 Sources of bias in the data • 7.2 The risks of discrimination and stigmatization • 7.3 The limits of ChatGPT transparency • 7.4 Consequences for privacy and data security 8 - The Implications of ChatGPT on Employment and Society Describes impacts on employment and society • 8.1 The impacts on employment in various sectors • 8.2 The implications for education and vocational training • 8.3 Consequences for social and cultural norms • 8.4 Political and legal responses to the changes brought about by ChatGPT 9 - Regulations and Standards for Using ChatGPT Describes responsible use of ChatGPT • 9.1 Existing regulations for consumer protection • 9.2 Standards for Responsible Use of ChatGPT • 9.3 ChatGPT governance initiatives • 9.4 Considerations for Legal and Ethical Responsibility of ChatGPT Part 4: Future Prospects of ChatGPT 10 - Future Developments of ChatGPT Describes future developments • 10.1 Advances in Machine Learning and Natural Language Processing Research • 10.2 ChatGPT performance and efficiency improvements • 10.3 Advances in applications and areas of use of ChatGPT • 10.4 Developments in the competition and the ChatGPT market 11 - The Long-Term Outlook for ChatGPT Describes the long-term outlook • 11.1 The implications for artificial intelligence and cognition • 11.2 Merging possibilities between ChatGPT and other emerging technologies • 11.3 The challenges of adopting and accepting ChatGPT • 11.4 Issues for regulation and governance of ChatGPT Part 5: Examples of Using ChatGPT 12 - Using ChatGPT for Text Content Creation 13 - Using ChatGPT for Software Programming 14 - Using ChatGPT for Text Translation 15 - Using ChatGPT for Artistic Content Creation 16 - Using ChatGPT for Innovation and Creativity 17 - Conclusion Gives a conclusion of the book • 17.1 Summaries of the key elements covered in the book • 17.2 Final thoughts on the impact and implications of ChatGPT • 17.3 Suggestions for future research and development on ChatGPT • 17.4 Considerations for the ethical and responsible use of ChatGPT in the future • 17.5 In conclusion
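As a hint of what section 5.4 involves, a minimal HTTP call to the OpenAI chat completions endpoint could be sketched as follows; this is our illustration rather than the author's code, and the model name is a placeholder that may need updating:

    # Minimal chat completion call over HTTP (illustrative; not from the book)
    import os
    import requests

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # placeholder model name
            "messages": [{"role": "user", "content": "Summarize this book in one sentence."}],
        },
        timeout=30,
    )
    print(response.json()["choices"][0]["message"]["content"])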
£26.39
Manning Publications Machine Learning Systems: Designs that scale
Book SynopsisMachine learning applications autonomously reason about data at massive scale. It’s important that they remain responsive in the face of failure and changes in load. But machine learning systems are different from other applications when it comes to testing, building, deploying, and monitoring. Reactive Machine Learning Systems teaches readers how to implement reactive design solutions in their machine learning systems to make them as reliable as a well-built web app. Using Scala and powerful frameworks such as Spark, MLlib, and Akka, they’ll learn to quickly and reliably move from a single machine to a massive cluster. Key Features: · Example-rich guide · Step-by-step guide · Move from single-machine to massive cluster Readers should have intermediate skills in Java or Scala. No previous machine learning experience is required. About the Technology: Machine learning systems are different from other applications when it comes to testing, building, deploying, and monitoring. To make machine learning systems reactive, you need to understand both reactive design patterns and modern data architecture patterns.
£32.39
Manning Publications Machine Learning for Business: Using Amazon
Book Synopsis Imagine predicting which customers are thinking about switching to a competitor or flagging potential process failures before they happen. Think about the benefits of automating tedious business processes and back-office tasks. Consider the competitive advantage of making decisions when you know the most likely future events. Machine learning can deliver these and other advantages to your business, and it’s never been easier to get started! Machine Learning for Business teaches you how to make your company more automated, productive, and competitive by mastering practical, implementable machine learning techniques and tools. Thanks to the authors’ down-to-earth style, you’ll easily grok why process automation is so important and why machine learning is key to its success. In this hands-on guide, you’ll work through seven end-to-end automation scenarios covering business processes in accounts payable, billing, payroll, customer support, and other common tasks. Using Amazon SageMaker (no installation required!), you’ll build and deploy machine learning applications as you practice takeaway skills you’ll use over and over. By the time you’re finished, you’ll confidently identify machine learning opportunities in your company and implement automated applications that can sharpen your competitive edge! Key Features Identifying processes suited to machine learning Using machine learning to automate back office processes Seven everyday business process projects Using open source and cloud-based tools Case studies for machine learning decision making For technically-inclined business professionals or business developers. No previous experience with automation tools or programming is necessary. Doug Hudgeon runs a business automation consultancy, putting his considerable experience helping companies set up automation and machine learning teams to good use. In 2000, Doug launched one of Australia’s first electronic invoicing automation companies. Richard Nichol has over 20 years of experience as a data scientist and software engineer. He currently specializes in maximizing the value of data through AI and machine learning techniques.
£26.99
Manning Publications Ensemble Methods for Machine Learning
Book SynopsisMany machine learning problems are too complex to be resolved by a single model or algorithm. Ensemble machine learning trains a group of diverse machine learning models to work together to solve a problem. By aggregating their output, these ensemble models can flexibly deliver rich and accurate results. Ensemble Methods for Machine Learning is a guide to ensemble methods with proven records in data science competitions and real-world applications. Learning from hands-on case studies, you'll develop an under-the-hood understanding of foundational ensemble learning algorithms to deliver accurate, performant models. About the Technology Ensemble machine learning lets you make robust predictions without needing the huge datasets and processing power demanded by deep learning. It sets multiple models to work on solving a problem, combining their results for better performance than a single model working alone. This "wisdom of crowds" approach distils information from several models into a set of highly accurate results. Trade Review "The definitive and complete guide on ensemble learning. A must read!" Al Krinker "The examples are clear and easy to reproduce, the writing is engaging and clear, and the reader is not bogged down by details which might be unimportant for beginners in the field!" Or Golan "This book is a great tutorial on ensemble methods!" Stephen Warnett "The code examples as well as the case studies at the end of each chapter open many possibilities of using these techniques on your data/projects." Joaquin Beltran
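To make the "wisdom of crowds" idea concrete, here is a minimal voting-ensemble sketch in scikit-learn; it is our illustration of the general technique, not an example from the book:

    # Minimal voting-ensemble sketch (illustrative; not from the book)
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Three diverse base models; a majority vote aggregates their predictions
    ensemble = VotingClassifier(estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier()),
        ("knn", KNeighborsClassifier()),
    ])
    ensemble.fit(X_train, y_train)
    print("Ensemble accuracy:", ensemble.score(X_test, y_test))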
£41.39
Manning Publications Automated Machine Learning in Action
Book SynopsisOptimize every stage of your machine learning pipelines with powerful automation components and cutting-edge tools like AutoKeras and KerasTuner. Automated Machine Learning in Action, filled with hands-on examples and written in an accessible style, reveals how premade machine learning components can automate time-consuming ML tasks. Automated Machine Learning in Action teaches you to automate selecting the best machine learning models or data preparation methods for your own machine learning tasks, so your pipelines tune themselves without needing constant input. You'll quickly run through machine learning basics that open up AutoML to non-data scientists, before putting AutoML into practice for image classification, supervised learning, and more. Automated machine learning (AutoML) automates complex and time-consuming stages in a machine learning pipeline with prepackaged optimal solutions. This frees up data scientists from data processing and manual tuning, and lets domain experts easily apply machine learning models to their projects. Trade Review “Automating automation itself is a new concept and this book does justice to it in terms of explaining the concepts, sharing real-world advancements, use cases and research related to the topic.” Satej Kumar Sahu “A book with a lot of promise, covering a topic that's likely to become hot in the next year or so. Read this now, and get ahead of the curve!” Richard Vaughan “A nice introduction to AutoML, its ambitions, and challenges both in theory and in practice.” Alain Couniot “Helps you to clearly understand the process of Machine Learning automation. The examples are clear, concise, and applicable to the real world.” Walter Alexander Mata López “The author's friendly style makes novices feel ready to try out AutoML tools.” Gaurav Kumar Leekha “A great book to take your machine learning skills to the next level.” Harsh Raval “An impressive effort by the authors to break down a complex ML topic into understandable chunks.” Venkatesh Rajagopal
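As a rough idea of the workflow the book automates, a minimal AutoKeras sketch might look like this; the usage shown is our assumption of a typical call pattern, not the authors' code, and the toy data is invented:

    # Minimal AutoKeras sketch (illustrative; assumes autokeras is installed)
    import numpy as np
    import autokeras as ak

    # Toy tabular data: 200 samples, 4 features, binary labels
    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 4))
    y = (x[:, 0] + x[:, 1] > 0).astype(int)

    # The search tries a few candidate models and keeps the best one
    clf = ak.StructuredDataClassifier(max_trials=3, overwrite=True)
    clf.fit(x, y, epochs=5)
    print(clf.predict(x[:5]))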
£34.19
Manning Publications Deep Learning Design Patterns
Book SynopsisDeep learning has revealed ways to create algorithms for applications that we never dreamed were possible. For software developers, the challenge lies in taking cutting-edge technologies from R&D labs through to production. Deep Learning Design Patterns is here to help. In it, you'll find deep learning models presented in a unique new way: as extendable design patterns you can easily plug-and-play into your software projects. Written by Google deep learning expert Andrew Ferlitsch, it's filled with the latest deep learning insights and best practices from his work with Google Cloud AI. Each valuable technique is presented in a way that's easy to understand and filled with accessible diagrams and code samples. about the technology You don't need to design your deep learning applications from scratch! By viewing cutting-edge deep learning models as design patterns, developers can speed up their creation of AI models and improve model understandability for both themselves and other users. about the book Deep Learning Design Patterns distills models from the latest research papers into practical design patterns applicable to enterprise AI projects. Using diagrams, code samples, and easy-to-understand language, Google Cloud AI expert Andrew Ferlitsch shares insights from state-of-the-art neural networks. You'll learn how to integrate design patterns into deep learning systems from some amazing examples, including a real-estate program that can evaluate house prices just from uploaded photos and a speaking AI capable of delivering live sports broadcasting. Building on your existing deep learning knowledge, you'll quickly learn to incorporate the very latest models and techniques into your apps as idiomatic, composable, and reusable design patterns. what's inside Internal functioning of modern convolutional neural networks Procedural reuse design pattern for CNN architectures Models for mobile and IoT devices Composable design pattern for automatic learning methods Assembling large-scale model deployments Complete code samples and example notebooks Accompanying YouTube videos about the reader For machine learning engineers familiar with Python and deep learning. about the author Andrew Ferlitsch is an expert on computer vision and deep learning at Google Cloud AI Developer Relations. He was formerly a principal research scientist for 20 years at Sharp Corporation of Japan, where he amassed 115 US patents and worked on emerging technologies in telepresence, augmented reality, digital signage, and autonomous vehicles. In his present role, he reaches out to developer communities, corporations and universities, teaching deep learning and evangelizing Google's AI technologies.
£43.19
Manning Publications Engineering Deep Learning Systems
Book SynopsisDesign systems optimized for deep learning models. Written for software engineers, this book teaches you how to implement a maintainable platform for developing deep learning models. In Engineering Deep Learning Systems you will learn how to: Transfer your software development skills to deep learning systems Recognize and solve common engineering challenges for deep learning systems Understand the deep learning development cycle Automate training for models in TensorFlow and PyTorch Optimize dataset management, training, model serving and hyperparameter tuning Pick the right open-source project for your platform Engineering Deep Learning Systems is a practical guide for software engineers and data scientists who are designing and building platforms for deep learning. It's full of hands-on examples that will help you transfer your software development skills to implementing deep learning platforms. You'll learn how to build automated and scalable services for core tasks like dataset management, model training/serving, and hyperparameter tuning. This book is the perfect way to step into an exciting—and lucrative—career as a deep learning engineer. about the technology Behind every deep learning researcher is a team of engineers bringing their models to production. To build these systems, you need to understand how a deep learning system's platform differs from other distributed systems. By mastering the core ideas in this book, you'll be able to support deep learning systems in a way that's fast, repeatable, and reliable.
£34.49
Nova Science Publishers Inc Internet of Things and Machine Learning in
Book SynopsisAgriculture is one of the most fundamental human activities. It has kept humans happier and healthier and helped birth modern society as we know it. As farming has expanded, however, the usage of resources such as land, fertilizer, and water has grown exponentially. Environmental pressures from modern farming techniques have stressed our natural landscapes. Still, by some estimates, worldwide food production will need to increase 70% by 2050 to keep up with global demand. With global populations rising, it falls to technology to make farming processes more efficient and keep up with the growing demand. Fortunately, Machine Learning (ML) and the Internet of Things (IoT) can play a very promising role in the agricultural industry. Some examples include: an AI-powered drone to monitor the field, an IoT-designed automated crop watering system, sensors embedded in the field to monitor temperature and humidity, etc. The agriculture industry is the largest in the world, but when it comes to innovation there is a lot more to explore. IoT devices can be used to analyze the status of crops. For instance, with soil sensors, farmers can detect any irregular conditions such as high acidity and efficiently tackle these issues to improve their yield. In this book, we will point out the challenges facing the agro-industry that can be addressed by ML and IoT and explore the impacts of these technologies in the agriculture sector.Table of ContentsPreface; Smart Farming Enabling Technologies: A Systematic Review; Internet of Things Platform for Smart Farming; Internet of Things for Smart Farming; A Comprehensive Review on Intelligent Systems for Mitigating Pests and Diseases in Agriculture; Plant Disease Detection Using Image Sensors: A Step Towards Precision Agriculture; Recent Trends in Agriculture Using IoT, Challenges and Opportunities; Early Detection of Infection/Disease in Agriculture; Application of Agriculture Using IoT: Future Prospective for Smart Cities Management 5.0; The Internet of Things (IoT) for Sustainable Agriculture; IoT Based Data Collection and Data Analytics Decision Making for Precision Farming; Index.
£113.59
Nova Science Publishers Inc Green Computing and Its Applications
Book Synopsis
£163.19
O'Reilly Media Deep Learning at Scale
Book Synopsis
£47.99
Cambridge University Press Exponential Families in Theory and Practice
Book SynopsisDuring the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models including logistic and Poisson regression is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis. The book co Trade Review 'This book provides a unique perspective on exponential families, bringing together theory and methods into a unified whole. No other text covers the range of topics in this text. If you want to understand the 'why' as well as the 'how' of exponential families, then this book should be on your bookshelf.' Larry Wasserman, Carnegie Mellon University 'I am excited to see the publication of this monograph on exponential families by my friend and colleague Brad Efron. I learned some of this material during my Ph.D. studies at Stanford from the maestro himself, as well as the geometry of curved exponential families, Hoeffding's lemma, the Lindsey method, and the list goes on. They have lived with me my entire career and informed our work on GAMs and sparse GLMs. Generations of Stanford students have shared this privilege, and now generations in the future will be able to enjoy the unique Efron style.' Trevor Hastie, Stanford University 'Exponential families can be magical in simplifying both theoretical and applied statistical analyses. Brad Efron's wonderful book exposes their secrets, from R. A. Fisher's early magic to Efron's own bootstrap: an essential text for understanding how data of all sizes can be approached scientifically.' Stephen Stigler, University of Chicago 'This book provides an original and accessible study of statistical inference in the class of models called exponential families. The mathematical properties and flexibility of this class makes the models very useful for statistical practice – they underpin the class of generalized linear models, for example. Writing with his characteristic elegance and clarity, Efron shows how exponential families underpin, and provide insight into, many modern topics in statistical science, including bootstrap inference, empirical Bayes methodology, high-dimensional inference, analysis of survival data, missing data, and more.' Nancy Reid, University of Toronto 'In this book, Brad Efron illuminates the exponential family as a practical, extendible, and crucial ingredient in all manners of data analysis, be they Bayesian, frequentist, or machine learning. He shows us how to shape, understand, and employ these distributions in both algorithms and analysis. The book is crisp, insightful, and indispensable.' David Blei, Columbia University Table of Contents 1. One-parameter exponential families; 2. Multiparameter exponential families; 3. Generalized linear models; 4. Curved exponential families, EB, missing data, and the EM algorithm; 5.
Bootstrap confidence intervals; Bibliography; Index.
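For orientation, the one-parameter exponential family at the heart of the book can be written in standard notation (not necessarily the notation the book itself uses) as

    f_\eta(x) = e^{\eta x - \psi(\eta)} \, f_0(x),

where \eta is the natural parameter and \psi(\eta) the normalizer. Logistic regression then arises as the generalized linear model in which y \sim \mathrm{Bernoulli}(p) has natural parameter

    \eta = \log \frac{p}{1-p} = x^\top \beta, \qquad \text{so that} \qquad p = \frac{1}{1 + e^{-x^\top \beta}}.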
£28.49
APress Python Data Analytics
Book SynopsisTable of ContentsPython Data Analytics 1. An Introduction to Data Analysis 2. Introduction to the Python's World 3. The NumPy Library 4. The pandas Library – An Introduction 5. pandas: Reading and Writing Data 6. pandas in Depth: Data Manipulation 7. Data Visualization with matplotlib 8. Machine Learning with scikit-learn 9. Deep Learning with TensorFlow 10. An Example – Meteorological Data 11. Embedding the JavaScript D3 Library in IPython Notebook 12. Recognizing Handwritten Digits 13. Textual Data Analysis with NLTK 14. Image Analysis and Computer Vision with OpenCV Appendix A Appendix B
£46.74
APress Beginning Data Science in R 4
Book SynopsisDiscover best practices for data analysis and software development in R and start on the path to becoming a fully-fledged data scientist. Updated for the R 4.0 release, this book teaches you techniques for both data manipulation and visualization and shows you the best way for developing new software packages for R. Beginning Data Science in R 4, Second Edition details how data science is a combination of statistics, computational science, and machine learning. You'll see how to efficiently structure and mine data to extract useful patterns and build mathematical models. This requires computational methods and programming, and R is an ideal programming language for this. Modern data analysis requires computational skills and usually a minimum of programming. After reading and using this book, you'll have what you need to get started with R programming with data science applications. Source code will be available to support your next projects as well. Source code is available at github.c Table of Contents 1. Introduction to R programming. 2. Reproducible analysis. 3. Data manipulation. 4. Visualizing and exploring data. 5. Working with large data sets. 6. Supervised learning. 7. Unsupervised learning. 8. More R programming. 9. Advanced R programming. 10. Object oriented programming. 11. Building an R package. 12. Testing and checking. 13. Version control. 14. Profiling and optimizing.
£37.99
Manning Publications Machine Learning Algorithms in Depth
Book SynopsisDevelop a mathematical intuition around machine learning algorithms to improve model performance and effectively troubleshoot complex ML problems. For intermediate machine learning practitioners familiar with linear algebra, probability, and basic calculus. Machine Learning Algorithms in Depth dives into the design and underlying principles of some of the most exciting machine learning (ML) algorithms in the world today. With a particular emphasis on probability-based algorithms, you will learn the fundamentals of Bayesian inference and deep learning. You will also explore the core data structures and algorithmic paradigms for machine learning. You will explore practical implementations of dozens of ML algorithms, including: Monte Carlo Stock Price Simulation, Image Denoising using Mean-Field Variational Inference, EM algorithm for Hidden Markov Models, Imbalanced Learning, Active Learning and Ensemble Learning, Bayesian Optimisation for Hyperparameter Tuning, Dirichlet Process K-Means for Clustering Applications, Stock Clusters based on Inverse Covariance Estimation, Energy Minimisation using Simulated Annealing, Image Search based on ResNet Convolutional Neural Network, and Anomaly Detection in Time-Series using Variational Autoencoders. Each algorithm is fully explored with both math and practical implementations so you can see how they work and how to put them into action. About the technology Fully understanding how machine learning algorithms function is essential for any serious ML engineer. This vital knowledge lets you modify algorithms to your specific needs, understand the trade-offs when picking an algorithm for a project, and better interpret and explain your results to your stakeholders. This unique guide will take you from relying on one-size-fits-all ML libraries to developing your own algorithms to solve your business needs.
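As one concrete example from that list, a Monte Carlo stock price simulation under geometric Brownian motion can be sketched in a few lines of Python; this is a generic illustration with assumed drift and volatility, not the book's implementation:

    # Monte Carlo stock price simulation via geometric Brownian motion (illustrative)
    import numpy as np

    s0, mu, sigma = 100.0, 0.05, 0.2     # assumed initial price, drift, volatility
    dt, steps, paths = 1 / 252, 252, 10000

    rng = np.random.default_rng(0)
    z = rng.standard_normal((paths, steps))
    # Exact GBM update: S_{t+1} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    prices = s0 * np.exp(np.cumsum(increments, axis=1))

    print("Mean terminal price:", prices[:, -1].mean())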
£54.89
MIT Press Ltd Probabilistic Graphical Models
Book Synopsis
£100.80
Emerald Publishing Limited Learning in Humans and Machines
Book SynopsisDiscusses the analysis, comparison and integration of computational approaches to learning and research on human learning. This book aims to provide the reader with an overview of the prolific research on learning throughout the disciplines. It also highlights the important research issues and methodologies.Trade ReviewEphraim Nissan, University of Greenwich The title of this book accurately describes its editors' ambition: outstretching both arms wide open to get hold of as diverse foci as learning in humans, versus what the discipline of machine learning (ML) within artificial intelligence (AI) actually amounts to in the main...Used properly...this volume can be a trove. A trove of leads to lead you outside the grasp of its compass. To the extent that the book can do that for the reader, it has fulfilled its purpose. No other single book, to my knowledge, would do the same for us on this global subject. Pragmatics & Cognition A certain unity (in this publication's) approach, focusing on the analysis of phenomena in their complexity and developing a "flexible" vision of learning, integrating the role of context, goals and previous knowledge, gives an undeniable coherence to this work. L'Annee PsychologiqueTable of ContentsChapter headings: Towards an Interdisciplinary Learning Science (P. Reimann, H. Spada). A Cognitive Psychological Approach to Learning (S. Vosniadou). Learning to Do and Learning to Understand: A Lesson and a Challenge for Cognitive Modeling (S. Ohlsson). Machine Learning: Case Studies of an Interdisciplinary Approach (W. Emde). Mental and Physical Artifacts in Cognitive Practices (R. Saljo). Learning Theory and Instructional Science (E. De Corte). Knowledge Representation Changes in Humans and Machines (L. Saitta and Task Force 1). Multi-Objective Learning with Multiple Representations (M. Van Someren, P. Reimann). Order Effects in Incremental Learning (P. Langley). Situated Learning and Transfer (H. Gruber et al.). The Evolution of Research on Collaborative Learning (P. Dillenbourg et al.). A Developmental Case Study on Sequential Learning: The Day-Night Cycle (K. Morik, S. Vosniadou). Subject index. Author index.
£87.39
MIT Press Ltd Deep Learning
Book Synopsis
£80.75
MIT Press Ltd Algorithms for Optimization
Book Synopsis
£85.50
MIT Press Foundations of Machine Learning
Book Synopsis
£72.00
MIT Press Ltd Fundamentals of Machine Learning for Predictive
Book Synopsis
£68.40
MIT Press Deep Learning
Book Synopsis
£14.39
MIT Press AI Ethics
Book Synopsis
£14.39
Elsevier Science & Technology Signal Processing and Machine Learning Theory
Book SynopsisTable of Contents1. Introduction to Signal Processing and Machine Learning Theory 2. Continuous-Time Signals and Systems 3. Discrete-Time Signals and Systems 4. Random Signals and Stochastic Processes 5. Sampling and Quantization 6. Digital Filter Structures and Their Implementation 7. Multi-rate Signal Processing for Software Radio Architectures 8. Modern Transform Design for Practical Audio/Image/Video Coding Applications 9. Discrete Multi-Scale Transforms in Signal Processing 10. Frames in Signal Processing 11. Parametric Estimation 12. Adaptive Filters 13. Signal Processing over Graphs 14. Tensors for Signal Processing and Machine Learning 15. Non-convex Optimization for Machine Learning 16. Dictionary Learning and Sparse Representation
£114.30
John Wiley & Sons Inc Reinforcement and Systemic Machine Learning for
Book Synopsis* Authors have both industrial and academic experience * Case studies are included reflecting the authors' industrial experience * Downloadable tutorials are available. Table of ContentsPreface xv Acknowledgments xix About the Author xxi 1 Introduction to Reinforcement and Systemic Machine Learning 1 1.1. Introduction 1 1.2. Supervised, Unsupervised, and Semisupervised Machine Learning 2 1.3. Traditional Learning Methods and History of Machine Learning 4 1.4. What Is Machine Learning? 7 1.5. Machine-Learning Problem 8 1.6. Learning Paradigms 9 1.7. Machine-Learning Techniques and Paradigms 12 1.8. What Is Reinforcement Learning? 14 1.9. Reinforcement Function and Environment Function 16 1.10. Need of Reinforcement Learning 17 1.11. Reinforcement Learning and Machine Intelligence 17 1.12. What Is Systemic Learning? 18 1.13. What Is Systemic Machine Learning? 18 1.14. Challenges in Systemic Machine Learning 19 1.15. Reinforcement Machine Learning and Systemic Machine Learning 19 1.16. Case Study: Problem Detection in a Vehicle 20 1.17. Summary 20 2 Fundamentals of Whole-System, Systemic, and Multiperspective Machine Learning 23 2.1. Introduction 23 2.2. What Is Systemic Machine Learning? 27 2.3. Generalized Systemic Machine-Learning Framework 30 2.4. Multiperspective Decision Making and Multiperspective Learning 33 2.5. Dynamic and Interactive Decision Making 43 2.6. The Systemic Learning Framework 47 2.7. System Analysis 52 2.8. Case Study: Need of Systemic Learning in the Hospitality Industry 54 2.9. Summary 55 3 Reinforcement Learning 57 3.1. Introduction 57 3.2. Learning Agents 60 3.3. Returns and Reward Calculations 62 3.4. Reinforcement Learning and Adaptive Control 63 3.5. Dynamic Systems 66 3.6. Reinforcement Learning and Control 68 3.7. Markov Property and Markov Decision Process 68 3.8. Value Functions 69 3.8.1. Action and Value 70 3.9. Learning an Optimal Policy (Model-Based and Model-Free Methods) 70 3.10. Dynamic Programming 71 3.11. Adaptive Dynamic Programming 71 3.12. Example: Reinforcement Learning for Boxing Trainer 75 3.13. Summary 75 4 Systemic Machine Learning and Model 77 4.1. Introduction 77 4.2. A Framework for Systemic Learning 78 4.3. Capturing the Systemic View 86 4.4. Mathematical Representation of System Interactions 89 4.5. Impact Function 91 4.6. Decision-Impact Analysis 91 4.7. Summary 97 5 Inference and Information Integration 99 5.1. Introduction 99 5.2. Inference Mechanisms and Need 101 5.3. Integration of Context and Inference 107 5.4. Statistical Inference and Induction 111 5.5. Pure Likelihood Approach 112 5.6. Bayesian Paradigm and Inference 113 5.7. Time-Based Inference 114 5.8. Inference to Build a System View 114 5.9. Summary 118 6 Adaptive Learning 119 6.1. Introduction 119 6.2. Adaptive Learning and Adaptive Systems 119 6.3. What Is Adaptive Machine Learning? 123 6.4. Adaptation and Learning Method Selection Based on Scenario 124 6.5. Systemic Learning and Adaptive Learning 127 6.6. Competitive Learning and Adaptive Learning 140 6.7. Examples 146 6.8. Summary 149 7 Multiperspective and Whole-System Learning 151 7.1. Introduction 151 7.2. Multiperspective Context Building 152 7.3. Multiperspective Decision Making and Multiperspective Learning 154 7.4. Whole-System Learning and Multiperspective Approaches 164 7.5. Case Study Based on Multiperspective Approach 167 7.6. Limitations to a Multiperspective Approach 174 7.7. Summary 174 8 Incremental Learning and Knowledge Representation 177 8.1. Introduction 177 8.2. Why Incremental Learning?
178 8.3. Learning from What Is Already Learned 180 8.4. Supervised Incremental Learning 191 8.5. Incremental Unsupervised Learning and Incremental Clustering 191 8.6. Semisupervised Incremental Learning 196 8.7. Incremental and Systemic Learning 199 8.8. Incremental Closeness Value and Learning Method 200 8.9. Learning and Decision-Making Model 205 8.10. Incremental Classification Techniques 206 8.11. Case Study: Incremental Document Classification 207 8.12. Summary 208 9 Knowledge Augmentation: A Machine Learning Perspective 209 9.1. Introduction 209 9.2. Brief History and Related Work 211 9.3. Knowledge Augmentation and Knowledge Elicitation 215 9.4. Life Cycle of Knowledge 217 9.5. Incremental Knowledge Representation 222 9.6. Case-Based Learning and Learning with Reference to Knowledge Loss 224 9.7. Knowledge Augmentation: Techniques and Methods 224 9.8. Heuristic Learning 228 9.9. Systemic Machine Learning and Knowledge Augmentation 229 9.10. Knowledge Augmentation in Complex Learning Scenarios 232 9.11. Case Studies 232 9.12. Summary 235 10 Building a Learning System 237 10.1. Introduction 237 10.2. Systemic Learning System 237 10.3. Algorithm Selection 242 10.4. Knowledge Representation 244 10.5. Designing a Learning System 245 10.6. Making System to Behave Intelligently 246 10.7. Example-Based Learning 246 10.8. Holistic Knowledge Framework and Use of Reinforcement Learning 246 10.9. Intelligent Agents—Deployment and Knowledge Acquisition and Reuse 250 10.10. Case-Based Learning: Human Emotion-Detection System 251 10.11. Holistic View in Complex Decision Problem 253 10.12. Knowledge Representation and Data Discovery 255 10.13. Components 258 10.14. Future of Learning Systems and Intelligent Systems 259 10.15. Summary 259 Appendix A: Statistical Learning Methods 261 Appendix B: Markov Processes 271 Index 281
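To ground the reinforcement learning material in Chapter 3, the canonical tabular Q-learning update can be sketched in Python as follows; this is a generic illustration, not code from the book, and the problem sizes are invented:

    # Generic tabular Q-learning update rule (illustrative sketch, not from the book)
    import numpy as np

    n_states, n_actions = 10, 4            # assumed problem size
    alpha, gamma = 0.1, 0.99               # learning rate and discount factor
    Q = np.zeros((n_states, n_actions))

    def q_update(state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
        td_target = reward + gamma * Q[next_state].max()
        Q[state, action] += alpha * (td_target - Q[state, action])

    # Dummy transition to show the call shape
    q_update(state=0, action=2, reward=1.0, next_state=3)
    print(Q[0])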
£98.96