MLOps Engineering on AWS

Course Description

This course builds upon and extends the DevOps practices predominant in software development to build, train, and deploy machine learning (ML) models. The course stresses the importance of data, models, and code to successful ML deployments. It demonstrates the use of automation, processes, tooling, and teamwork in addressing the challenges related to handoffs between data scientists, software developers, data engineers, and operations. The course also discusses the use of tools and processes to monitor and take action when model predictions in production begin to drift from agreed-upon key performance indicators.

The instructor will encourage participants to develop an MLOps action plan for their organization through daily reflection on lesson and lab content and through conversations with instructors and peers.

 

Prerequisites

Required

Suggested

Target Audience

This course is intended for any of the following roles with accountability for productionizing machine learning models in the AWS Cloud:

  • Developers/operations with responsibility for operationalizing ML models
  • DevOps Engineers
  • ML engineers

Course Objectives

In this course, you will learn to:

  • Explain machine learning operations
  • Comprehend the key differences between DevOps and MLOps
  • Describe the machine learning workflow
  • Discuss the importance of communications in MLOps
  • Explain end-to-end options for the automation of ML workflows
  • List key Amazon SageMaker features for MLOps automation
  • Develop an automated ML process that builds, trains, tests, and deploys models
  • Develop an automated ML process that retrains the model based on changes to the model code (see the sketch after this list)
  • Recognize elements and essential steps in the deployment process
  • Explain items that might be included in a model package, and their use in training or inference
  • Identify Amazon SageMaker options for selecting models for deployment, using built-in algorithms, ML frameworks, or bring-your-own models
  • Distinguish scaling in machine learning from scaling in other applications
  • Determine when to use various approaches to inference
  • Discuss deployment benefits, strategies, challenges, and typical use cases
  • Explain the challenges when deploying machine learning to edge devices
  • Identify important Amazon SageMaker features that are relevant to deployment and inference
  • Explain why monitoring is important
  • Detect drift in the underlying input data
  • Demonstrate how to monitor ML models for bias
  • Describe how to monitor model resource consumption and latency
  • Discuss how to incorporate human-in-the-loop reviews of model results in production
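
To make the automation objectives above concrete, the following is a minimal sketch of an automated train step defined with the Amazon SageMaker Pipelines SDK. It is illustrative only: the container image URI, S3 paths, and IAM role ARN are hypothetical placeholders, and a production pipeline would add data processing, evaluation, approval, and deployment steps.

```python
# Minimal sketch of an automated training step in a SageMaker Pipeline.
# The image URI, S3 paths, and IAM role below are hypothetical placeholders.
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_context import PipelineSession
from sagemaker.workflow.steps import TrainingStep

session = PipelineSession()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"

# Estimator describing how the model is trained (container, compute, output location)
estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/example-training:latest",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts",
    sagemaker_session=session,
)

# Under a PipelineSession, fit() does not run training immediately;
# it returns the arguments for a pipeline step.
train_step = TrainingStep(
    name="TrainModel",
    step_args=estimator.fit(
        inputs={"train": TrainingInput(s3_data="s3://example-bucket/data/train")}
    ),
)

# Define and register the pipeline; a CI/CD trigger (for example, on a model
# code change) would call start() to kick off a new run.
pipeline = Pipeline(name="example-mlops-pipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)
pipeline.start()
```

A CI/CD service such as AWS CodeBuild (used in one of the labs below) is one way to invoke pipeline.start() whenever the model code changes.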

Content Outline

Course Introduction

  • Machine learning operations
  • Goals of MLOps
  • Communication
  • From DevOps to MLOps
  • ML workflow
  • Scope
  • MLOps view of the ML workflow
  • MLOps cases
  • Intro to train, build, and evaluate machine learning models
  • MLOps security
  • Automating
  • Apache Airflow
  • Kubernetes integration for MLOps
  • Amazon SageMaker for MLOps
  • Lab: Bring your own algorithm to an MLOps pipeline
  • Demonstration: Amazon SageMaker
  • Intro to train, build, and evaluate machine learning models
  • Lab: Code and serve your ML model with AWS CodeBuild
  • Activity: MLOps Action Plan Workbook
  • Introduction to deployment operations
  • Model packaging
  • Inference
  • Lab: Deploy your model to production
  • SageMaker production variants
  • Deployment strategies
  • Deploying to the edge
  • Lab: Conduct A/B testing
  • Activity: MLOps Action Plan Workbook
  • Lab: Troubleshoot your pipeline
  • The importance of monitoring
  • Monitoring by design
  • Lab: Monitor your ML model
  • Human-in-the-loop
  • Amazon SageMaker Model Monitor
  • Demonstration: Amazon SageMaker Pipelines, Model Monitor, model registry, and Feature Store
  • Solving the Problem(s)
  • Activity: MLOps Action Plan Workbook
  • Course review
  • Activity: MLOps Action Plan Workbook
  • Wrap-up

FAQs

MLOps engineers (or ML engineers) enable automated deployment of models to production systems. The degree of automation varies by organization. MLOps engineers take a data scientist's model and make it accessible to the software that uses it.
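
As an illustration of that handoff, the following is a minimal sketch of how an MLOps engineer might expose a trained model as a real-time endpoint with the SageMaker Python SDK. The model artifact path, IAM role ARN, inference script, and endpoint name are hypothetical placeholders, not values from this course.

```python
# Minimal sketch: deploy a trained scikit-learn model as a SageMaker real-time endpoint.
# The S3 path, IAM role ARN, and entry-point script below are hypothetical placeholders.
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data="s3://example-bucket/models/churn/model.tar.gz",  # artifact handed off by the data scientist
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # execution role with S3/ECR access
    entry_point="inference.py",   # script defining how the model is loaded and invoked for serving
    framework_version="1.2-1",    # scikit-learn serving container version
)

# Provision a managed endpoint that application software can call for predictions
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="churn-model-prod",
)

print(predictor.endpoint_name)  # the endpoint the consuming application will invoke
```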

Amazon EC2 (Elastic Compute Cloud) is a service that lets customers run application workloads on resizable virtual servers in the AWS Cloud.

AWS security services help you protect your data, monitor security-related activity, and receive automated responses to potential issues.

Radiant Techlearning believes in a practical and creative approach to training and development, which distinguishes it from other training platforms. Its training courses are delivered by experts with a wide range of experience in their domains.

Radiant's team of experts is available at support@radianttechlearning.com to answer your technical queries even after the training program ends.

Yes, Radiant provides up-to-date, high-value, and relevant real-time projects and case studies in each training program.

Technical issues are unpredictable and may occur on our side as well. Participants should ensure they have access to the required system configuration and a good internet connection.

Radiant Techlearning offers training programs on weekdays, weekends, and a combination of weekdays and weekends. We give you complete freedom to choose the schedule that suits your needs.
