Description

Book Synopsis
This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and estimates of that dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students.
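As an illustration of the kind of statistical result developed in Part I (a standard bound in the realisable PAC model, given here for orientation rather than quoted from the book): for a binary function class of VC dimension d, a sample of size

m(\epsilon, \delta) = O\!\left( \frac{1}{\epsilon} \left( d \log\frac{1}{\epsilon} + \log\frac{1}{\delta} \right) \right)

suffices for any learner that outputs a hypothesis consistent with the sample to achieve error at most \epsilon with probability at least 1 - \delta. In Parts II and III, scale-sensitive dimensions (the pseudo-dimension and the fat-shattering dimension) play the analogous role for large margin classification and real-valued prediction.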

Trade Review
'The book is a useful and readable monograph. For beginners it is a nice introduction to the subject, for experts a valuable reference.' Zentralblatt MATH

Table of Contents
1. Introduction
Part I. Pattern Recognition with Binary-output Neural Networks: 2. The pattern recognition problem; 3. The growth function and VC-dimension; 4. General upper bounds on sample complexity; 5. General lower bounds; 6. The VC-dimension of linear threshold networks; 7. Bounding the VC-dimension using geometric techniques; 8. VC-dimension bounds for neural networks
Part II. Pattern Recognition with Real-output Neural Networks: 9. Classification with real values; 10. Covering numbers and uniform convergence; 11. The pseudo-dimension and fat-shattering dimension; 12. Bounding covering numbers with dimensions; 13. The sample complexity of classification learning; 14. The dimensions of neural networks; 15. Model selection
Part III. Learning Real-Valued Functions: 16. Learning classes of real functions; 17. Uniform convergence results for real function classes; 18. Bounding covering numbers; 19. The sample complexity of learning function classes; 20. Convex classes; 21. Other learning problems
Part IV. Algorithmics: 22. Efficient learning; 23. Learning as optimisation; 24. The Boolean perceptron; 25. Hardness results for feed-forward networks; 26. Constructive learning algorithms for two-layered networks.

Neural Network Learning: Theoretical Foundations


£47.99

Includes FREE delivery


Paperback by Martin Anthony and Peter L. Bartlett




    Publisher: Cambridge University Press
    Publication Date: 20 August 2009
    ISBN-13: 9780521118620
    ISBN-10: 052111862X
