Welcome to kedro-mlflow’s documentation!
- Introduction
- Installation
- Install the plugin
- Set up your kedro project
- Migration guide between versions
- Getting Started
- Experimentation tracking
- Configure mlflow
- Version parameters
- Version datasets
- What is artifact tracking?
- How to version data in a kedro project?
- Frequently asked questions
- Can I pass extra parameters to the MlflowArtifactDataSet for finer control?
- Can I use the MlflowArtifactDataSet in interactive mode?
- How do I upload an artifact to a non-local destination (e.g. S3 or blob storage)?
- Can I log an artifact in a specific run?
- Can I reload an artifact from an existing run to use it in another run?
- Can I create a remote folder/subfolders architecture to organize the artifacts?
- Version models
- Version metrics
- Open the User Interface
- Pipeline serving
- An MLOps framework for continuous model serving
- Why we need an MLOps framework for the development lifecycle
- Machine learning deployment is hard because it comes with a lot of constraints and no adequate tooling
- Deployment issues addressed by kedro-mlflow and their solutions
- Out of scope
- Issue 1: The training process is poorly reproducible
- Issue 2: The data scientist and stakeholders focus on training
- Issue 3: Inference and training are entirely decoupled
- Issue 4: Data scientists do not handle business objects
- Overcoming these problems: supporting an organisational solution with an efficient tool
- The architecture of a machine learning project
- An efficient tool for model serving and training / inference synchronization
- Interactive use
- Python objects