DevOps Proficiency

Docker & Containerization

I containerize all my AI/ML workflows for reproducibility. If it works in my container, it works everywhere.

What I Containerize

ML Training Environments

PyTorch + CUDA + all dependencies locked. No more 'it worked on my machine' for training runs.
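A minimal sketch of the pattern, assuming a pinned upstream PyTorch image; the tag, requirements.txt, and train.py are placeholders, not my production setup:

```dockerfile
# The tag pins PyTorch, CUDA, and cuDNN in one shot.
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

WORKDIR /workspace

# Exact pinned versions in the lockfile keep training runs reproducible.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Code goes in last, so editing it never invalidates the dependency layers.
COPY train.py .

CMD ["python", "train.py"]
```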

vLLM Inference Servers

Containerized model serving with GPU passthrough. Easy to deploy and scale.
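One way to sketch it, assuming the upstream vllm/vllm-openai image with its documented entrypoint; the version tag and model name are examples:

```dockerfile
# Pin the upstream vLLM server image instead of tracking :latest.
FROM vllm/vllm-openai:v0.4.2

# The image's entrypoint is the OpenAI-compatible API server;
# CMD just supplies its arguments.
CMD ["--model", "mistralai/Mistral-7B-Instruct-v0.2", "--port", "8000"]

# GPU passthrough happens at run time (NVIDIA Container Toolkit required):
#   docker run --gpus all --ipc=host -p 8000:8000 \
#     -v ~/.cache/huggingface:/root/.cache/huggingface my-vllm
```

The --ipc=host flag matters here: vLLM's workers need more shared memory than Docker's 64 MB default.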

ComfyUI Workflows

Full ComfyUI setup with custom nodes and models. Artists can spin up identical environments.
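A sketch of the ComfyUI image, assuming the public GitHub repo and default port; in practice I'd pin a specific commit and bake in the team's custom nodes:

```dockerfile
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        git python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Clone ComfyUI (pin a commit for real reproducibility) and install deps.
RUN git clone https://github.com/comfyanonymous/ComfyUI /opt/ComfyUI
WORKDIR /opt/ComfyUI
RUN pip3 install --no-cache-dir -r requirements.txt

# Model checkpoints are mounted, not baked in -- they run to tens of gigabytes:
#   docker run --gpus all -p 8188:8188 \
#     -v /data/models:/opt/ComfyUI/models my-comfyui
EXPOSE 8188
CMD ["python3", "main.py", "--listen", "0.0.0.0"]
```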

Development Environments

Consistent dev setups for Python/Node projects. New team members productive in minutes.
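A stripped-down Python variant, assuming a bind-mounted source tree; the image name and paths are illustrative:

```dockerfile
# One shared dev image: same Python, same tooling, for everyone.
FROM python:3.11-slim

RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source is bind-mounted rather than copied, so host edits show up live:
#   docker run -it -v "$PWD":/app my-dev-env
CMD ["bash"]
```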

GPU Container Setup

I work extensively with NVIDIA GPU containers for ML workloads:

nvidia-docker runtime · CUDA base images · Multi-GPU passthrough · GPU memory management · cuDNN configuration
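A representative setup; the CUDA tag is an example, and the `--gpus` flag via the NVIDIA Container Toolkit is the modern successor to the old nvidia-docker wrapper:

```dockerfile
# CUDA + cuDNN base; use a -devel tag instead if you compile kernels in-image.
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04

# Which GPUs the container sees; overridable at run time.
ENV NVIDIA_VISIBLE_DEVICES=all

# Passthrough itself is a run-time concern:
#   docker run --gpus all my-gpu-image                # every GPU
#   docker run --gpus '"device=0,1"' my-gpu-image     # just GPUs 0 and 1
#   docker run --gpus all --shm-size=8g my-gpu-image  # bigger /dev/shm for data loaders
CMD ["nvidia-smi"]
```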

My Standard Practices

Multi-stage builds

Keep runtime images small. Build dependencies stay in the build stage.
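The shape of it, with illustrative names:

```dockerfile
# Build stage: compilers, headers, and pip build machinery live here only.
FROM python:3.11 AS build
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: just the installed packages and the app.
FROM python:3.11-slim
COPY --from=build /install /usr/local
COPY app/ /app/
CMD ["python", "/app/main.py"]
```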

Layer caching

Requirements first, code last. Rebuilds are fast.
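In Dockerfile terms (file names illustrative):

```dockerfile
# Dependencies change rarely -- install them first so the layer stays cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source changes constantly -- copy it last so only this layer rebuilds.
COPY . .
```

With BuildKit, a pip cache mount (`RUN --mount=type=cache,target=/root/.cache/pip ...`) also speeds up the rebuilds that do happen.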

Non-root users

Security by default, especially for production.
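The usual few lines, sketched here for a Python app:

```dockerfile
# Create an unprivileged user and give it the app directory.
RUN useradd --create-home --uid 1000 appuser
WORKDIR /app
COPY --chown=appuser:appuser . .

# Everything from here on -- including the running container -- is non-root.
USER appuser
CMD ["python", "main.py"]
```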

Health checks

Containers that can tell you when they're unhealthy.
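For an HTTP service this is one line; the /health endpoint and port are examples, and curl has to exist in the image:

```dockerfile
# Three failed probes mark the container unhealthy; the start period
# gives a model server time to load weights before probing begins.
HEALTHCHECK --interval=30s --timeout=5s --start-period=60s --retries=3 \
    CMD curl -fsS http://localhost:8000/health || exit 1
```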

Technology Stack

Docker · Docker Compose · nvidia-docker · CUDA · BuildKit · Multi-stage builds · GPU Passthrough

Expertise by Sumit Chatterjee

Industrial Light & Magic, Sydney