Metadata-Version: 2.4
Name: finetuning-scheduler
Version: 2.9.1
Summary: A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.
Author-email: Daniel Dale <danny.dale@gmail.com>
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/speediedan/finetuning-scheduler
Project-URL: Bug Tracker, https://github.com/speediedan/finetuning-scheduler/issues
Project-URL: Documentation, https://finetuning-scheduler.readthedocs.io/en/stable/
Project-URL: Source Code, https://github.com/speediedan/finetuning-scheduler
Keywords: deep learning,pytorch,AI,machine learning,pytorch-lightning,lightning,fine-tuning,finetuning
Classifier: Environment :: Console
Classifier: Natural Language :: English
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.5.0
Requires-Dist: lightning<2.5.7,>=2.5.0
Provides-Extra: examples
Requires-Dist: datasets>=4.0.0; extra == "examples"
Requires-Dist: evaluate; extra == "examples"
Requires-Dist: transformers>=4.18.0; extra == "examples"
Requires-Dist: scikit-learn; extra == "examples"
Requires-Dist: sentencepiece; extra == "examples"
Requires-Dist: tensorboardX>=2.2; extra == "examples"
Requires-Dist: tabulate; extra == "examples"
Requires-Dist: psutil; extra == "examples"
Requires-Dist: jsonargparse[signatures]<4.42.0,>=4.27.7; extra == "examples"
Requires-Dist: omegaconf>=2.1.0; extra == "examples"
Requires-Dist: hydra-core>=1.1.0; extra == "examples"
Provides-Extra: extra
Requires-Dist: rich>=12.3.0; extra == "extra"
Requires-Dist: jsonargparse[signatures]<4.42.0,>=4.27.7; extra == "extra"
Requires-Dist: omegaconf>=2.1.0; extra == "extra"
Requires-Dist: hydra-core>=1.1.0; extra == "extra"
Provides-Extra: test
Requires-Dist: coverage>=6.4; extra == "test"
Requires-Dist: pytest>=6.0; extra == "test"
Requires-Dist: pytest-rerunfailures>=10.2; extra == "test"
Requires-Dist: twine==3.2; extra == "test"
Requires-Dist: mypy>=0.920; extra == "test"
Requires-Dist: pre-commit>=1.0; extra == "test"
Requires-Dist: protobuf<=3.20.1; extra == "test"
Requires-Dist: mlflow>=1.0.0; extra == "test"
Provides-Extra: ipynb
Requires-Dist: ipython[notebook]; extra == "ipynb"
Requires-Dist: jupytext>=1.10; extra == "ipynb"
Requires-Dist: nbval>=0.9.6; extra == "ipynb"
Provides-Extra: cli
Requires-Dist: jsonargparse[signatures]<4.42.0,>=4.27.7; extra == "cli"
Requires-Dist: omegaconf>=2.1.0; extra == "cli"
Requires-Dist: hydra-core>=1.1.0; extra == "cli"
Provides-Extra: dev
Requires-Dist: rich>=12.3.0; extra == "dev"
Requires-Dist: jsonargparse[signatures]<4.42.0,>=4.27.7; extra == "dev"
Requires-Dist: omegaconf>=2.1.0; extra == "dev"
Requires-Dist: hydra-core>=1.1.0; extra == "dev"
Requires-Dist: coverage>=6.4; extra == "dev"
Requires-Dist: pytest>=6.0; extra == "dev"
Requires-Dist: pytest-rerunfailures>=10.2; extra == "dev"
Requires-Dist: twine==3.2; extra == "dev"
Requires-Dist: mypy>=0.920; extra == "dev"
Requires-Dist: pre-commit>=1.0; extra == "dev"
Requires-Dist: protobuf<=3.20.1; extra == "dev"
Requires-Dist: mlflow>=1.0.0; extra == "dev"
Requires-Dist: ipython[notebook]; extra == "dev"
Requires-Dist: jupytext>=1.10; extra == "dev"
Requires-Dist: nbval>=0.9.6; extra == "dev"
Provides-Extra: all
Requires-Dist: rich>=12.3.0; extra == "all"
Requires-Dist: jsonargparse[signatures]<4.42.0,>=4.27.7; extra == "all"
Requires-Dist: omegaconf>=2.1.0; extra == "all"
Requires-Dist: hydra-core>=1.1.0; extra == "all"
Requires-Dist: coverage>=6.4; extra == "all"
Requires-Dist: pytest>=6.0; extra == "all"
Requires-Dist: pytest-rerunfailures>=10.2; extra == "all"
Requires-Dist: twine==3.2; extra == "all"
Requires-Dist: mypy>=0.920; extra == "all"
Requires-Dist: pre-commit>=1.0; extra == "all"
Requires-Dist: protobuf<=3.20.1; extra == "all"
Requires-Dist: mlflow>=1.0.0; extra == "all"
Requires-Dist: ipython[notebook]; extra == "all"
Requires-Dist: jupytext>=1.10; extra == "all"
Requires-Dist: nbval>=0.9.6; extra == "all"
Requires-Dist: datasets>=4.0.0; extra == "all"
Requires-Dist: evaluate; extra == "all"
Requires-Dist: transformers>=4.18.0; extra == "all"
Requires-Dist: scikit-learn; extra == "all"
Requires-Dist: sentencepiece; extra == "all"
Requires-Dist: tensorboardX>=2.2; extra == "all"
Requires-Dist: tabulate; extra == "all"
Requires-Dist: psutil; extra == "all"
Requires-Dist: jsonargparse[signatures]<4.42.0,>=4.27.7; extra == "all"
Requires-Dist: omegaconf>=2.1.0; extra == "all"
Requires-Dist: hydra-core>=1.1.0; extra == "all"
Dynamic: description
Dynamic: description-content-type
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: summary

<div align="center">

<img src="https://github.com/speediedan/finetuning-scheduler/raw/v2.9.1/docs/source/_static/images/logos/logo_fts.png" width="401px">

**A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.**

______________________________________________________________________

<p align="center">
  <a href="https://finetuning-scheduler.readthedocs.io/en/stable/">Docs</a> •
  <a href="#setup">Setup</a> •
  <a href="#examples">Examples</a> •
  <a href="#community">Community</a>
</p>

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/finetuning-scheduler)](https://pypi.org/project/finetuning-scheduler/)
[![PyPI Status](https://badge.fury.io/py/finetuning-scheduler.svg)](https://badge.fury.io/py/finetuning-scheduler)\
[![codecov](https://codecov.io/gh/speediedan/finetuning-scheduler/release/2.9.1/graph/badge.svg?flag=gpu)](https://codecov.io/gh/speediedan/finetuning-scheduler)
[![ReadTheDocs](https://readthedocs.org/projects/finetuning-scheduler/badge/?version=latest)](https://finetuning-scheduler.readthedocs.io/en/stable/)
[![DOI](https://zenodo.org/badge/455666112.svg)](https://zenodo.org/badge/latestdoi/455666112)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/speediedan/finetuning-scheduler/blob/master/LICENSE)

</div>

______________________________________________________________________

<img width="300px" src="https://github.com/speediedan/finetuning-scheduler/raw/v2.9.1/docs/source/_static/images/fts/fts_explicit_loss_anim.gif" alt="FinetuningScheduler explicit loss animation" align="right"/>

[FinetuningScheduler](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html#finetuning_scheduler.fts.FinetuningScheduler) is simple to use yet powerful, offering a number of features that facilitate model research and exploration:

- easy specification of flexible fine-tuning schedules with explicit or regex-based parameter selection
  - implicit schedules for initial/naive model exploration
  - explicit schedules for performance tuning, fine-grained behavioral experimentation and computational efficiency
- automatic restoration of best per-phase checkpoints driven by iterative application of early-stopping criteria to each fine-tuning phase
- composition of early-stopping and manually-set epoch-driven fine-tuning phase transitions
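
An explicit schedule is just a YAML file mapping phase indices to parameter groups, which the callback walks through in order. The sketch below shows the general shape; the module names are illustrative placeholders, not from a real model, and a file like this can be supplied to the callback (e.g. via its `ft_schedule` argument):

```yaml
0:
  params: # phase 0: train only the task head
  - model.classifier.bias
  - model.classifier.weight
1:
  params: # phase 1: unfreeze the pooler (regex-style patterns are supported)
  - model.pooler.dense.*
2:
  params: # phase 2: unfreeze the remaining parameters
  - model.*
  max_transition_epoch: 9 # optionally force a transition by a given epoch
```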

______________________________________________________________________

## Setup

### Step 0: Install from PyPI

```bash
pip install finetuning-scheduler
```

<!--  -->

### Step 1: Import the FinetuningScheduler callback and start fine-tuning!

```python
import lightning as L
from finetuning_scheduler import FinetuningScheduler

trainer = L.Trainer(callbacks=[FinetuningScheduler()])
```

Get started with [the Fine-Tuning Scheduler introduction](https://finetuning-scheduler.readthedocs.io/en/stable/index.html), which includes a [CLI-based example](https://finetuning-scheduler.readthedocs.io/en/stable/index.html#example-scheduled-fine-tuning-for-superglue), or follow the [notebook-based](https://pytorch-lightning.readthedocs.io/en/stable/notebooks/lightning_examples/finetuning-scheduler.html) Fine-Tuning Scheduler tutorial.

______________________________________________________________________

### Installation Using the Standalone `pytorch-lightning` Package

*applicable to FTS versions >= `2.0.0`*

Now that the core Lightning package is `lightning` rather than `pytorch-lightning`, Fine-Tuning Scheduler (FTS) by default depends upon the `lightning` package rather than the standalone `pytorch-lightning`. If you would like to continue to use FTS with the standalone `pytorch-lightning` package instead, you can still do so as follows:

Install a given FTS release (for example v2.0.0) using standalone `pytorch-lightning`:

```bash
export FTS_VERSION=2.0.0
export PACKAGE_NAME=pytorch
wget https://github.com/speediedan/finetuning-scheduler/releases/download/v${FTS_VERSION}/finetuning-scheduler-${FTS_VERSION}.tar.gz
pip install finetuning-scheduler-${FTS_VERSION}.tar.gz
```

### Dynamic Versioning

As of version `2.9.0`, FTS supports dynamic versioning both at installation time and via a post-installation CLI. Initially, the dynamic versioning system allows toggling between Lightning unified and standalone imports. The two conversion operations are individually idempotent and mutually reversible.

#### Toggling Between Unified and Standalone Lightning Imports

FTS provides a simple CLI tool for toggling between unified and standalone Lightning imports after installation:

```bash
# Toggle from unified to standalone Lightning imports
toggle-lightning-mode --mode standalone

# Toggle from standalone to unified Lightning imports (default)
toggle-lightning-mode --mode unified
```

> **Note:** If you have the standalone package (`pytorch-lightning`) installed but not the unified package (`lightning`), toggling to unified mode will be prevented; install the `lightning` package before toggling.

This can be useful when:

- You need to adapt existing code to work with a different Lightning package
- You're switching between projects using different Lightning import styles
- You want to test compatibility with both import styles
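
Before toggling, it can help to confirm which Lightning distributions are importable in the current environment. The stdlib-only helper below is an illustrative sketch of such a check, not part of the finetuning-scheduler API:

```python
import importlib.util


def installed_lightning_packages() -> set[str]:
    """Report which Lightning distributions are importable here.

    Illustrative helper only -- not part of the finetuning-scheduler API.
    Returns a subset of {"unified", "standalone"}.
    """
    candidates = {"lightning": "unified", "pytorch_lightning": "standalone"}
    return {
        mode
        for module, mode in candidates.items()
        if importlib.util.find_spec(module) is not None
    }


# An environment with only `pytorch-lightning` installed would report
# {"standalone"}; toggling to unified mode there should be expected to fail.
print(installed_lightning_packages())
```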

______________________________________________________________________

## Examples

### Scheduled Fine-Tuning For SuperGLUE

- [Notebook-based Tutorial](https://pytorch-lightning.readthedocs.io/en/stable/notebooks/lightning_examples/finetuning-scheduler.html)
- [CLI-based Tutorial](https://finetuning-scheduler.readthedocs.io/en/stable/#example-scheduled-fine-tuning-for-superglue)
- [FSDP Scheduled Fine-Tuning](https://finetuning-scheduler.readthedocs.io/en/stable/advanced/fsdp_scheduled_fine_tuning.html)
- [LR Scheduler Reinitialization](https://finetuning-scheduler.readthedocs.io/en/stable/advanced/lr_scheduler_reinitialization.html) (advanced)
- [Optimizer Reinitialization](https://finetuning-scheduler.readthedocs.io/en/stable/advanced/optimizer_reinitialization.html) (advanced)

______________________________________________________________________

## Continuous Integration

Fine-Tuning Scheduler is rigorously tested on multiple CPU and GPU configurations and against all major Python and PyTorch versions.

**Versioning Policy (Updated in 2.9)**: Starting with the 2.9 minor release, Fine-Tuning Scheduler is pivoting from tight Lightning version alignment to **core PyTorch version alignment**. This change:

- Provides greater flexibility to integrate the latest PyTorch functionality increasingly important in research
- Reduces maintenance burden while continuing to support the stable Lightning API and robust integration
- Officially supports **at least the latest 4 PyTorch minor releases** (e.g., when PyTorch 2.9 is released, FTS supports >= 2.6)

This versioning approach is motivated by Lightning's evolving release cadence (see [Lightning Issue #21073](https://github.com/Lightning-AI/pytorch-lightning/issues/21073) and [PR #21107](https://github.com/Lightning-AI/pytorch-lightning/pull/21107)) and allows FTS to adopt new PyTorch capabilities more rapidly while maintaining clear deprecation policies.

See the [versioning documentation](https://finetuning-scheduler.readthedocs.io/en/stable/versioning.html) for complete details on compatibility policies and migration guidance.

**Prior Versioning (\< 2.9)**: Each Fine-Tuning Scheduler minor release (major.minor.patch) was paired with a Lightning minor release (e.g., Fine-Tuning Scheduler 2.0 depends upon Lightning 2.0). To ensure maximum stability, the latest Lightning patch release fully tested with Fine-Tuning Scheduler was set as a maximum dependency in Fine-Tuning Scheduler's requirements.txt (e.g., \<= 1.7.1).

<details>
  <summary>Current build statuses for Fine-Tuning Scheduler </summary>

| System / (PyTorch/Python ver) |                                                                                                        2.5.1/3.9                                                                                                         |                                                                                                              2.9.0/3.9, 2.9.0/3.12                                                                                                               |
| :---------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
|      Linux \[GPUs\*\*\]       |                                                                                                            -                                                                                                             | [![Build Status](https://dev.azure.com/speediedan/finetuning-scheduler/_apis/build/status/Multi-GPU%20&%20Example%20Tests?branchName=refs%2Ftags%2F2.9.1)](https://dev.azure.com/speediedan/finetuning-scheduler/_build/latest?definitionId=2&branchName=refs%2Ftags%2F2.9.1) |
|     Linux (Ubuntu 22.04)      | [![Test](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml/badge.svg?tag=2.9.1)](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) |             [![Test](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml/badge.svg?tag=2.9.1)](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml)             |
|           OSX (14)            | [![Test](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml/badge.svg?tag=2.9.1)](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) |             [![Test](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml/badge.svg?tag=2.9.1)](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml)             |
|        Windows (2022)         | [![Test](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml/badge.svg?tag=2.9.1)](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) |             [![Test](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml/badge.svg?tag=2.9.1)](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml)             |

- \*\* tests run on one RTX 4090 and one RTX 2070

</details>

## Community

Fine-Tuning Scheduler is developed and maintained by the community in close communication with the [Lightning team](https://pytorch-lightning.readthedocs.io/en/stable/governance.html). Thanks to everyone in the community for their tireless effort building and improving the immensely useful core Lightning project.

PRs welcome! Please see the [contributing guidelines](https://finetuning-scheduler.readthedocs.io/en/stable/generated/CONTRIBUTING.html) (which are essentially the same as Lightning's).

______________________________________________________________________

## Citing Fine-Tuning Scheduler

Please cite:

```tex
@misc{Dan_Dale_2022_6463952,
    author       = {Dan Dale},
    title        = {{Fine-Tuning Scheduler}},
    month        = feb,
    year         = 2022,
    doi          = {10.5281/zenodo.6463952},
    publisher    = {Zenodo},
    url          = {https://zenodo.org/record/6463952}
}
```

Feel free to star the repo as well if you find it useful or interesting. Thanks 😊!
