Description
Book Synopsis
Optimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. This book includes practical examples illustrating automated deep learning approaches and provides techniques to facilitate your deep learning model development.
The first chapters of the book cover the basics of the NNI toolkit and methods for solving hyper-parameter optimization tasks. You will understand the black-box function maximization problem using NNI and learn how to prepare a TensorFlow or PyTorch model for hyper-parameter tuning, launch an experiment, and interpret the results. The book then dives into optimization tuners and the search algorithms they are based on: Evolution search, Annealing search, and the Bayesian Optimization approach. Neural Architecture Search is covered, and you will learn how to develop deep learning models from scratch. Multi-trial and one-shot approaches to automatic neural architecture search are also presented.
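As an illustration of this workflow (a minimal sketch, not code from the book), an NNI trial script for a toy black-box maximization problem could look like the following. The objective function, parameter names, and search space ranges are illustrative assumptions; the experiment itself would still be launched with an NNI configuration that points to this script and its search space.

    # trial.py -- minimal NNI trial sketch (assumes NNI is installed: pip install nni).
    # The experiment's search space is assumed to define "x" and "y", e.g.
    #   {"x": {"_type": "uniform", "_value": [-10, 10]},
    #    "y": {"_type": "uniform", "_value": [-10, 10]}}
    import nni

    def black_box(x, y):
        # Toy objective with a single maximum at (0, 0); in a real experiment this
        # would be a model's validation metric for the sampled hyper-parameters.
        return -(x ** 2 + y ** 2)

    if __name__ == "__main__":
        # The tuner samples one point from the search space for this trial.
        params = nni.get_next_parameter()
        score = black_box(params["x"], params["y"])
        # Reporting the result lets the tuner (e.g. TPE, Anneal, Evolution)
        # decide which point to try next.
        nni.report_final_result(score)

Each launched trial receives one sampled point, and the tuner uses the reported scores to steer subsequent trials toward the maximum.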
Table of Contents
Chapter 1: Introduction to Neural Network Intelligence
  1.1 Installation
  1.2 Trial, search space, experiment
  1.3 Finding maxima of multivariate function
  1.4 Interacting with NNI
Chapter 2: Hyper-Parameter Tuning
  2.1 Preparing a model for hyper-parameter tuning
  2.2 Running experiment
  2.3 Interpreting results
  2.4 Debugging
Chapter 3: Hyper-Parameter Tuners
Chapter 4: Neural Architecture Search: Multi-trial
  4.1 Constructing a search space
  4.2 Running architecture search
  4.3 Exploration strategies
  4.4 Comparing exploration strategies
Chapter 5: Neural Architecture Search: One-shot
  5.1 What is one-shot NAS?
  5.2 ENAS
  5.3 DARTS
Chapter 6: Model Compression
  6.1 What is model compression?
  6.2 Compressing your model
  6.3 Pruning
  6.4 Quantization
Chapter 7: Advanced NNI