Machine Learning Operations (MLOps): Deploy at Scale
Alex Cattle
on 10 September 2019
Tags: artificial intelligence , devops , Kubeflow , kubernetes , machine learning , Ubuntu

Artificial Intelligence and Machine Learning adoption in the enterprise is exploding from Silicon Valley to Wall Street, with use cases ranging from analysing customer behaviour and purchase cycles to diagnosing medical conditions.
Following on from our webinar ‘Getting started with AI’, this webinar will dive into what success looks like when training and deploying machine learning models at scale. The key topics are:
- Automatic Workflow Orchestration
- ML Pipeline development
- Kubernetes / Kubeflow Integration
- On-device Machine Learning, Edge Inference and Model Federation
- On-prem to cloud, on-demand extensibility
- Scale-out model serving and inference
This webinar will detail recent advancements in these areas and provide actionable insights that viewers can apply to their own AI/ML efforts.
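To make the workflow-orchestration idea above concrete, here is a toy sketch of an ML pipeline run as a dependency graph: each step (ingest, preprocess, train, serve) runs only after the steps it depends on have finished. This is plain Python, not Kubeflow's API; Kubeflow Pipelines expresses the same pattern with containerized steps scheduled on Kubernetes, and the step names here are purely illustrative.

```python
# Toy sketch of automatic workflow orchestration: run ML pipeline
# steps in topological (dependency) order. Step names are illustrative.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(steps, deps):
    """Execute each step after all of its dependencies have completed.

    steps: mapping of step name -> callable taking the results so far
    deps:  mapping of step name -> set of step names it depends on
    """
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = steps[name](results)
    return order, results

# A linear train-and-serve pipeline; each step consumes its predecessor's output.
steps = {
    "ingest":     lambda r: "raw-data",
    "preprocess": lambda r: r["ingest"] + " -> features",
    "train":      lambda r: r["preprocess"] + " -> model",
    "serve":      lambda r: r["train"] + " -> endpoint",
}
deps = {
    "preprocess": {"ingest"},
    "train":      {"preprocess"},
    "serve":      {"train"},
}

order, results = run_pipeline(steps, deps)
```

In a real Kubeflow deployment each step would be a container image and the scheduler, retries, and artifact passing are handled by the platform rather than by an in-process loop like this one.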
Enterprise AI, simplified
AI doesn’t have to be difficult. Accelerate innovation with an end-to-end stack that delivers all the open source tooling you need for the entire AI/ML lifecycle.