ML Training & Deployment with AWS SageMaker + EC2 Docker
1. System Overview & Architecture
End-to-End ML Flow (Train → Version → Deploy → Inference)
Dev / Prod Environment Strategy
2. SageMaker Jupyter Notebook Setup
Project & Folder Structure
Environment & Dependency Management
Config Management (ENV)
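A minimal sketch of ENV-driven config loading; all variable names (`MODEL_VERSION`, `S3_BUCKET`, `INFER_INTERVAL_MIN`) are illustrative, not prescribed by this guide:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    # Field names are illustrative; adapt them to your project.
    model_version: str
    s3_bucket: str
    infer_interval_min: int

def load_config() -> Config:
    """Read settings from environment variables, with safe defaults."""
    return Config(
        model_version=os.environ.get("MODEL_VERSION", "v1"),
        s3_bucket=os.environ.get("S3_BUCKET", "my-ml-bucket"),
        infer_interval_min=int(os.environ.get("INFER_INTERVAL_MIN", "15")),
    )
```

Keeping every tunable behind an environment variable lets the same code run unchanged in the SageMaker notebook and later inside the Docker containers.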
3. Data Ingestion & Feature Engineering
Data Loading (API / S3)
Time-series Alignment
Feature Selection
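A sketch of the time-series alignment step using pandas, assuming irregular sensor readings that need to land on a fixed 1-minute grid (the data and column names are toy examples):

```python
import pandas as pd

# Toy sensor readings at irregular timestamps (illustrative data)
raw = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 00:00:10", "2024-01-01 00:01:40",
                          "2024-01-01 00:02:05", "2024-01-01 00:04:55"]),
    "temp": [20.1, 20.4, 20.3, 21.0],
})

# Align to a fixed 1-minute grid: index on the timestamp,
# resample into 1-minute bins, then interpolate the gaps in time.
aligned = (raw.set_index("ts")
              .resample("1min").mean()
              .interpolate(method="time"))
```

The same pattern applies whether the source is an API response or a CSV pulled from S3, as long as it ends up in a timestamp-indexed DataFrame.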
4. Anomaly Model Training
Model Selection (AutoEncoder / One-Class SVM / Isolation Forest)
Training Pipeline
Parameter Tuning
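As one concrete instance of the options above, an Isolation Forest can be trained on "normal" data only; the `contamination` parameter (the expected anomaly fraction) is the main tuning knob. The data here is synthetic for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X_train = rng.normal(0, 1, size=(500, 4))          # "normal" operating data

# contamination = expected anomaly fraction; key parameter to tune
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
model.fit(X_train)

X_new = np.vstack([rng.normal(0, 1, size=(5, 4)),  # normal-looking points
                   np.full((1, 4), 8.0)])          # obvious outlier
preds = model.predict(X_new)                       # 1 = normal, -1 = anomaly
```

`model.decision_function(X_new)` additionally returns a continuous score, which is what a configurable inference threshold would act on later.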
5. Classification Model Training
Labeling Strategy
Train / Validation / Test Split
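The split above can be sketched as two chained `train_test_split` calls, stratified so the (typically imbalanced) label ratio is preserved in every subset. The 80/20 numbers are illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)
y = np.array([0] * 800 + [1] * 200)   # imbalanced labels, as is typical

# First carve off the test set, then split the remainder into train/val.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, stratify=y_tmp, random_state=0)  # 0.25 * 0.8 = 0.2
```

This yields a 60/20/20 train/validation/test split with the same class proportions in each part.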
6. Model Evaluation
Confusion Matrix
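A minimal confusion-matrix example with scikit-learn; rows are true classes, columns are predicted classes:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# Rows = true class, columns = predicted class
cm = confusion_matrix(y_true, y_pred)
```

From this matrix, per-class precision and recall follow directly (column sums and row sums respectively).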
7. Experiment Tracking & Model Versioning
MLflow Experiment Design
Parameter & Metric Logging
Artifact Management
Model Version Control (v1, v2, v3…)
8. Dockerized Inference – Anomaly Model
Docker Image Structure
Environment Variable Design
Inference Script Integration
Configurable Inference Parameters
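One possible Dockerfile layout matching the items above; the paths, script names, and ENV variables are assumptions, not a prescribed structure:

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY inference/ ./inference/
COPY models/ ./models/

# Inference parameters stay configurable at run time via -e / --env-file
ENV MODEL_PATH=/app/models/anomaly_v1.pkl \
    ANOMALY_THRESHOLD=0.8 \
    INFER_INTERVAL_MIN=15

CMD ["python", "inference/run_anomaly.py"]
```

Baking defaults into ENV while allowing `docker run -e` overrides keeps a single image usable across dev and prod.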
9. Anomaly Inference Scheduling
Inference Interval Configuration (minutes)
Time Window Control (start / end time)
Schedule Control via Dockerfile / ENV
Container Restart Strategy
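The scheduling items above can be sketched as a simple in-container loop driven by ENV values; the variable names are illustrative:

```python
import os
import time
from datetime import datetime, time as dtime

# ENV names are illustrative; defaults keep the container runnable without them.
INTERVAL_MIN = int(os.environ.get("INFER_INTERVAL_MIN", "15"))
WINDOW_START = dtime.fromisoformat(os.environ.get("INFER_START", "00:00"))
WINDOW_END = dtime.fromisoformat(os.environ.get("INFER_END", "23:59"))

def in_window(now: datetime) -> bool:
    """True if `now` falls inside the configured inference time window."""
    return WINDOW_START <= now.time() <= WINDOW_END

def run_forever(infer_fn):
    """Invoke inference every INTERVAL_MIN minutes while inside the window."""
    while True:
        if in_window(datetime.now()):
            infer_fn()
        time.sleep(INTERVAL_MIN * 60)
```

For the restart strategy, pairing this loop with `docker run --restart unless-stopped` lets Docker bring the scheduler back after crashes or host reboots.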
10. Dockerized Inference – Classification Model
REST API Structure
Health Check Endpoint
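A real service would likely use FastAPI or Flask; as a dependency-free sketch of the health-check idea, the standard library's `http.server` is enough. The port and route are assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Liveness probe: respond 200 with a small JSON body
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

# To run standalone (blocks forever):
# HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

The same handler would gain a `do_POST` for the actual classification endpoint; the `/health` route is what Docker health checks or a load balancer would poll.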
11. Deployment on EC2
EC2 Environment Preparation
Docker Build & Run
Container Naming & Version Tagging
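A typical build-and-run sequence on the EC2 host; the image name, tag, and ENV values are illustrative placeholders:

```bash
# Build with an explicit version tag (image/tag names are illustrative)
docker build -t anomaly-inference:v1.2.0 .

# Run detached; the restart policy survives crashes and EC2 reboots
docker run -d \
  --name anomaly-inference-v1 \
  --restart unless-stopped \
  -e INFER_INTERVAL_MIN=15 \
  -e MODEL_VERSION=v1.2.0 \
  anomaly-inference:v1.2.0
```

Encoding the model version in both the image tag and the container name makes it easy to run old and new versions side by side during a rollout.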
12. Logging & Monitoring
Anomaly Event Logging
Container Log Inspection
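One way to structure anomaly event logging so it is easy to inspect from the container: emit JSON lines to stdout, which `docker logs` captures. The field names are illustrative:

```python
import json
import logging
import sys
from datetime import datetime, timezone

logger = logging.getLogger("anomaly")
handler = logging.StreamHandler(sys.stdout)   # stdout so `docker logs` captures it
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_anomaly_event(sensor_id: str, score: float, threshold: float) -> dict:
    """Emit one structured anomaly event as a JSON line."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": "anomaly",
        "sensor_id": sensor_id,
        "score": round(score, 4),
        "threshold": threshold,
    }
    logger.info(json.dumps(event))
    return event
```

On the host, `docker logs -f <container-name>` then streams these events live, and the JSON-per-line format stays grep- and parse-friendly.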