Machine Learning Operations (MLOps): Deploy at Scale
Alex Cattle
on 10 September 2019
Tags: artificial intelligence , devops , Kubeflow , kubernetes , machine learning , Ubuntu

Artificial Intelligence and Machine Learning adoption in the enterprise is exploding from Silicon Valley to Wall Street, with use cases ranging from analysing customer behaviour and purchase cycles to diagnosing medical conditions.
Following on from our ‘Getting started with AI’ webinar, this webinar dives into what success looks like when training and deploying machine learning models at scale. The key topics are:
- Automated workflow orchestration
- ML pipeline development
- Kubernetes / Kubeflow integration
- On-device machine learning, edge inference and model federation
- On-prem to cloud, on-demand extensibility
- Scale-out model serving and inference
This webinar details recent advancements in each of these areas and provides actionable insights that viewers can apply to their own AI/ML efforts.
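To make the pipeline and Kubeflow topics above a little more concrete, here is a minimal sketch of how a two-step training workflow can be defined and compiled with the Kubeflow Pipelines (KFP) Python SDK. This is an illustrative example rather than material from the webinar: it assumes the KFP v2 SDK is installed, and the component, pipeline and file names are made up for the sketch.

```python
# Minimal sketch of a two-step Kubeflow pipeline, assuming the KFP v2
# Python SDK (pip install kfp). All names here are illustrative.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def preprocess(raw_rows: int) -> int:
    # Placeholder preprocessing step: pretend 10% of rows are filtered out.
    return int(raw_rows * 0.9)


@dsl.component(base_image="python:3.11")
def train(clean_rows: int) -> str:
    # Placeholder training step: report how much data was used.
    return f"model trained on {clean_rows} rows"


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(raw_rows: int = 100000):
    # Each component becomes a containerised step; KFP wires the data
    # dependency from preprocess to train and orchestrates execution.
    cleaned = preprocess(raw_rows=raw_rows)
    train(clean_rows=cleaned.output)


if __name__ == "__main__":
    # Compile to the intermediate YAML that a Kubeflow Pipelines
    # instance running on Kubernetes can execute.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

The compiled YAML can then be uploaded to a Kubeflow Pipelines deployment on Kubernetes, where each step runs in its own container and the workflow is orchestrated automatically.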
Enterprise AI, simplified

AI doesn’t have to be difficult. Accelerate innovation with an end-to-end stack that delivers all the open source tooling you need for the entire AI/ML lifecycle.