Metadata-Version: 2.4
Name: training-hub
Version: 0.1.0a2
Summary: An algorithm-focused interface for common language model training, continual learning, and reinforcement learning techniques
License: Apache-2.0
Project-URL: homepage, https://ai-innovation.team/
Project-URL: source, https://github.com/Red-Hat-AI-Innovation-Team/training_hub
Project-URL: issues, https://github.com/Red-Hat-AI-Innovation-Team/training_hub/issues
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: setuptools>=80.0
Requires-Dist: packaging>=24.2
Requires-Dist: wheel>=0.43
Requires-Dist: instructlab-training>=0.11.1
Requires-Dist: rhai-innovation-mini-trainer>=0.1.0
Requires-Dist: torch>=2.6.0
Requires-Dist: numba>=0.50
Requires-Dist: datasets>=2.15.0
Requires-Dist: numpy<2.0.0,>=1.26.4
Requires-Dist: rich>=14.1.0
Requires-Dist: peft>=0.15
Requires-Dist: pydantic>=2.7.0
Requires-Dist: aiofiles>=23.2.1
Requires-Dist: accelerate>=0.34.2
Requires-Dist: sympy>=1.0
Requires-Dist: networkx>=3.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: fsspec>=2025.0
Requires-Dist: pandas>=2.2
Requires-Dist: multiprocess>=0.70.16
Requires-Dist: aiohttp>=3.12
Requires-Dist: pyparsing>=3.0
Requires-Dist: regex>=2025.0
Requires-Dist: llvmlite>=0.42
Requires-Dist: filelock>=3.0
Requires-Dist: psutil>=6.0
Requires-Dist: urllib3>=2.4
Requires-Dist: frozenlist>=1.7
Requires-Dist: xxhash>=3.0
Requires-Dist: requests>=2.32.5
Requires-Dist: attr>=0.3.2
Requires-Dist: filelock>=3.19.1
Requires-Dist: mpmath>=1.3.0
Provides-Extra: cuda
Requires-Dist: instructlab-training[cuda]>=0.11.1; extra == "cuda"
Requires-Dist: rhai-innovation-mini-trainer[cuda]>=0.1.0; extra == "cuda"
Requires-Dist: flash-attn>=2.8; extra == "cuda"
Requires-Dist: einops>=0.8; extra == "cuda"
Provides-Extra: gpt-oss
Requires-Dist: transformers>=4.55.0; extra == "gpt-oss"
Requires-Dist: flash-attn>=2.8; extra == "gpt-oss"
Requires-Dist: einops>=0.8; extra == "gpt-oss"
Requires-Dist: datasets>=4.0.0; extra == "gpt-oss"
Requires-Dist: bitsandbytes==0.47.0; extra == "gpt-oss"
Requires-Dist: pytest>=8.0; extra == "gpt-oss"
Requires-Dist: kernels>=0.9.0; extra == "gpt-oss"
Dynamic: license-file

# training_hub
An algorithm-focused interface for common LLM training, continual learning, and reinforcement learning techniques.

## Support Matrix

| Algorithm | InstructLab-Training | PEFT | VERL | Status |
|-----------|---------------------|------|------|--------|
| **Supervised Fine-tuning (SFT)** | ✅ | - | - | Implemented |
| Continual Learning (OSFT) | 🔄 | 🔄 | - | Planned |
| Direct Preference Optimization (DPO) | - | - | 🔄 | Planned |
| Low-Rank Adaptation (LoRA) | 🔄 | 🔄 | - | Planned |
| Group Relative Policy Optimization (GRPO) | - | - | 🔄 | Planned |

**Legend:**
- ✅ Implemented and tested
- 🔄 Planned for future implementation  
- \- Not applicable or not planned

## Implemented Algorithms

### [Supervised Fine-tuning (SFT)](examples/sft_usage.md)
Fine-tune language models on supervised datasets with support for:
- Single-node and multi-node distributed training
- Configurable training parameters (epochs, batch size, learning rate, etc.)
- InstructLab-Training backend integration

```python
from training_hub import sft

result = sft(
    model_path="/path/to/model",
    data_path="/path/to/data",
    ckpt_output_dir="/path/to/checkpoints",
    num_epochs=3,
    learning_rate=1e-5
)
```
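The same `sft()` entry point is described as supporting multi-node distributed training. A hedged sketch of what a multi-node configuration might look like is below; the distributed parameter names (`nnodes`, `node_rank`, `nproc_per_node`, `rdzv_endpoint`) follow common `torchrun` conventions and are assumptions here, so consult [examples/sft_usage.md](examples/sft_usage.md) for the exact supported arguments.

```python
# Hypothetical multi-node SFT configuration. Run once per node,
# varying node_rank from 0 to nnodes - 1. Parameter names for the
# distributed settings are assumptions based on torchrun conventions.
train_args = {
    "model_path": "/path/to/model",
    "data_path": "/path/to/data",
    "ckpt_output_dir": "/path/to/checkpoints",
    "num_epochs": 3,
    "learning_rate": 1e-5,
    # distributed settings (assumed names; see examples/sft_usage.md)
    "nnodes": 2,
    "node_rank": 0,            # set per node
    "nproc_per_node": 8,       # GPUs per node
    "rdzv_endpoint": "head-node:29500",
}

# from training_hub import sft
# result = sft(**train_args)
```

Each node would invoke the same script with its own `node_rank`, while all nodes point at the same rendezvous endpoint.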

## Installation

### Basic Installation
```bash
pip install training-hub
```

### Development Installation
```bash
git clone https://github.com/Red-Hat-AI-Innovation-Team/training_hub
cd training_hub
pip install -e .
```

### CUDA Support
For GPU training with CUDA support:
```bash
pip install "training-hub[cuda]"
# or for development
pip install -e ".[cuda]"
```

**Note:** If you encounter build issues with flash-attn, install the base package first:
```bash
# Install base package first (provides torch, packaging, and wheel needed to build flash-attn)
pip install training-hub
# Then install with CUDA extras
pip install "training-hub[cuda]"

# For development installation:
pip install -e .
pip install -e ".[cuda]"
```

**For uv users:** You may need the `--no-build-isolation` flag:
```bash
uv pip install training-hub
uv pip install "training-hub[cuda]" --no-build-isolation

# For development:
uv pip install -e .
uv pip install -e ".[cuda]" --no-build-isolation
```

## Getting Started

For comprehensive tutorials, examples, and documentation, see the [examples directory](examples/).
