Description

Book Synopsis
Machine learning methods are now an important tool for scientists, researchers, engineers, and students in a wide range of areas. This book is written for people who want to adopt and use the main tools of machine learning but are not necessarily going to become machine learning researchers. Intended for final-year undergraduate or first-year graduate students in computer science programs in machine learning, this textbook is a machine learning toolkit. Applied Machine Learning covers many topics for people who want to use machine learning to get things done, with a strong emphasis on using existing tools and packages rather than writing one's own code.
A companion to the author's Probability and Statistics for Computer Science, this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use).

Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:
• classification using standard machinery (naive Bayes; nearest neighbor; SVM)
• clustering and vector quantization (largely as in PSCS)
• PCA (largely as in PSCS)
• variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
• linear regression (largely as in PSCS)
• generalized linear models, including logistic regression
• model selection with the Lasso and elastic net
• robustness and M-estimators
• Markov chains and HMMs (largely as in PSCS)
• EM in fairly gory detail; long experience teaching this suggests that one detailed example is required, which students hate, but once they’ve been through it, the next one is easy
• simple graphical models (in the variational inference section)
• classification with neural networks, with a particular emphasis on image classification
• autoencoding with neural networks
• structure learning

Table of Contents
1. Learning to Classify
2. SVM’s and Random Forests
3. A Little Learning Theory
4. High-dimensional Data
5. Principal Component Analysis
6. Low Rank Approximations
7. Canonical Correlation Analysis
8. Clustering
9. Clustering using Probability Models
10. Regression
11. Regression: Choosing and Managing Models
12. Boosting
13. Hidden Markov Models
14. Learning Sequence Models Discriminatively
15. Mean Field Inference
16. Simple Neural Networks
17. Simple Image Classifiers
18. Classifying Images and Detecting Objects
19. Small Codes for Big Signals
Index

Applied Machine Learning


£62.99

Includes FREE delivery

RRP £69.99 – you save £7.00 (10%)


Paperback / softback by David Forsyth

1 in stock



    Publisher: Springer Nature Switzerland AG
    Publication Date: 14/08/2020
ISBN13: 978-3030181161
    ISBN10: 3030181162



    © 2026 Book Curl
