Description
Deploying a machine learning model into a fully realized production system usually requires painstaking work by an operations team creating and managing custom servers.
Cloud Native Machine Learning helps you bridge that gap by using the pre-built services provided by cloud platforms like Azure and AWS to assemble your ML system’s infrastructure. Following a real-world use case for calculating taxi fares, you’ll learn how to get a serverless ML pipeline up and running using AWS services. Clear and detailed tutorials show you how to develop reliable, flexible, and scalable machine learning systems without time-consuming management tasks or the costly overheads of physical hardware.

About the technology

Your new machine learning model is ready to put into production, and suddenly all your time is taken up by setting up your server infrastructure. Serverless machine learning offers a productivity-boosting alternative. It eliminates the time-consuming operations tasks from your machine learning lifecycle, letting out-of-the-box cloud services take over launching, running, and managing your ML systems. With the serverless capabilities of major cloud vendors handling your infrastructure, you’re free to focus on tuning and improving your models.

About the book
Cloud Native Machine Learning is a guide to bringing your experimental machine learning code to production using serverless capabilities from major cloud providers. You’ll start with best practices for your datasets, learning to bring VACUUM data-quality principles to your projects and to ensure that your datasets can be reproducibly sampled. Next, you’ll learn to implement machine learning models with PyTorch, discovering how to scale up your models in the cloud and how to use PyTorch Lightning for distributed ML training. Finally, you’ll tune and engineer your serverless machine learning pipeline for scalability, elasticity, and ease of monitoring with the built-in notification tools of your cloud platform. When you’re done, you’ll have the tools to easily bridge the gap between ML models and a fully functioning production system.
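To give a taste of what reproducible sampling can look like in practice, here is a minimal sketch (not code from the book): instead of calling a random number generator, it hashes a record identifier to decide which partition a record belongs to, so the same record always lands in the same split on every run and every machine. The `trip_id` values and the 10% test fraction are illustrative assumptions.

```python
import hashlib

def assign_split(record_id: str, test_fraction: float = 0.1) -> str:
    """Deterministically assign a record to 'train' or 'test' based on its ID.

    Hashing the ID (rather than drawing a random number) makes the split
    reproducible: a given record is always placed in the same partition.
    """
    digest = hashlib.md5(record_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100          # map the hash to a bucket in 0..99
    return "test" if bucket < test_fraction * 100 else "train"

# Illustrative usage with made-up taxi trip IDs.
for trip_id in ["trip-0001", "trip-0002", "trip-0003"]:
    print(trip_id, assign_split(trip_id))
```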
What's inside

- Extracting, transforming, and loading datasets
- Querying datasets with SQL
- Understanding automatic differentiation in PyTorch (see the sketch after this list)
- Deploying trained models and pipelines as a service endpoint
- Monitoring and managing your pipeline’s life cycle
- Measuring performance improvements
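To give a flavor of the automatic differentiation item above, here is a minimal sketch (not code from the book) that uses PyTorch’s autograd to compute the gradient of a simple scalar expression:

```python
import torch

# Track operations on x so PyTorch can differentiate through them.
x = torch.tensor(3.0, requires_grad=True)

# A simple scalar function: y = x^2 + 2x + 1
y = x ** 2 + 2 * x + 1

# Backpropagate: populates x.grad with dy/dx evaluated at x = 3.
y.backward()

print(x.grad)  # tensor(8.) because dy/dx = 2x + 2 = 8 at x = 3
```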
About the reader

For data professionals with intermediate Python skills and basic familiarity with machine learning. No cloud experience required.

About the author
Carl Osipov has spent over 15 years working on big data processing and machine learning in multi-core, distributed systems such as service-oriented architecture and cloud computing platforms. While at IBM, Carl helped IBM Software Group shape its strategy around the use of Docker and other container-based technologies for serverless computing on IBM Cloud and Amazon Web Services. At Google, Carl learned from the world’s foremost experts in machine learning and helped manage the company’s efforts to democratize artificial intelligence. You can learn more about Carl on his blog, Clouds With Carl.