Welcome to kedro-mlflow’s documentation!
- Introduction
- Installation
- Getting Started
- Advanced machine learning versioning
  - Configure mlflow
  - Version parameters
  - Version datasets
    - What is artifact tracking?
    - How to version data in a kedro project?
    - Frequently asked questions
      - Can I pass extra parameters to the MlflowArtifactDataSet for finer control?
      - Can I use the MlflowArtifactDataSet in interactive mode?
      - How do I upload an artifact to a non-local destination (e.g. an S3 or blob storage)?
      - Can I log an artifact in a specific run?
      - Can I create a remote folder/subfolders architecture to organize the artifacts?
  - Version models
  - Version metrics
- Opening the User Interface
- An mlops framework for efficient deployment
  - Why we need an mlops framework for the development lifecycle
    - Machine learning deployment is hard because it comes with a lot of constraints and no adequate tooling
    - Deployment issues addressed by kedro-mlflow and their solutions
      - Out of scope
      - Issue 1: The training process is poorly reproducible
      - Issue 2: The data scientist and stakeholders focus on training
      - Issue 3: Inference and training are entirely decoupled
      - Issue 4: Data scientists do not handle business objects
    - Overcoming these problems: support an organisational solution with an efficient tool
      - The architecture of a machine learning project
      - An efficient tool for model serving and training / inference synchronization
      - A step by step example
- Python objects