Releases: lightly-ai/lightly-train
v0.15.0
[0.15.0] - 2026-04-16
New Distillation Method and Custom Teacher Models: We release the new DistillationV3 method, which achieves better generalization across fine-tuning tasks and works better with DINOv3 teacher models. The new method also supports using custom teacher models.
Added
- Add `distillationv3` method for both dense and global feature distillation.
- Add support for custom teacher models with `distillationv3`.
- Add support for the new EUPE models from Meta for all distillation, pretraining, and fine-tuning tasks. For example, use `dinov3/vits16-eupe` instead of `dinov3/vits16` to load the EUPE pretrained ViT-S/16 model. See the documentation for all supported models.
- Add `Mosaic` augmentation for LTDETR object detection training.
- Add `CopyBlend` augmentation for LTDETR object detection training.
- Add `MixUp` augmentation for LTDETR object detection training.
- Add logging of completed epochs to the console and the loggers.
- Add support for the COCO object detection dataset format.
- Add support for specifying semantic segmentation classes from a JSON file.
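As a rough sketch of how the new default method might be invoked (the `model` string and paths below are placeholders, and the exact `pretrain` signature should be checked against the documentation):

```python
import lightly_train

# Minimal sketch: pretrain a student with the new distillationv3 method.
# The model string and paths are placeholders, not verified values; see
# the LightlyTrain docs for the exact signature and supported models.
lightly_train.pretrain(
    out="out/distillv3_run",       # output directory for logs/checkpoints
    data="path/to/images",         # folder of unlabeled training images
    model="torchvision/resnet50",  # student model (placeholder choice)
    method="distillationv3",       # the new default distillation method
)
```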
Changed
- The default distillation method is now `distillationv3` (previously `distillationv2`), with a DINOv3 teacher instead of a DINOv2 teacher. The previous default is still available with `method="distillationv2"`.
- Make `ScaleJitter` in LTDETR step-aware. You can now stop the augmentation after a given step by passing a `step_stop` argument:

```python
transform_args={"scale_jitter": {"step_stop": 10000}}
```
Removed
- Remove `StopPolicy`; use `ActivationPolicy` instead for more fine-grained control over step-aware augmentations.
What's Changed
- Distill from any model by @liopeer in #660
- Distill from custom models by @liopeer in #668
- Upper bound for rfdetr dependency by @liopeer in #669
- update EMA by @yutong-xiang-97 in #670
- Parse labels in YoloObjectDetectionDatasetArgs.list_image_info by @simonschoelly in #674
- Fix CUDA OOM by reducing feature map size by @liopeer in #676
- Add step-aware LTDETR scale jitter control by @yutong-xiang-97 in #675
- Add Step-aware MixUp Augmentation by @yutong-xiang-97 in #679
- Log Epochs in Console and Loggers by @yutong-xiang-97 in #680
- added version to tracker.py by @mrpositron in #682
- Add Step-aware CopyBlend Augmentation by @yutong-xiang-97 in #684
- Add coco object detection dataset by @simonschoelly in #677
- Bump min RF-DETR version to 1.4.1 by @simonschoelly in #688
- Add docs for COCO object detection dataset by @simonschoelly in #683
- Add mosaic transform and args - Part 1 by @yutong-xiang-97 in #690
- Distillationv3: Add convolutional detection heuristic by @liopeer in #678
- bump nbdev version by @yutong-xiang-97 in #695
- Apply Mosaic transform in ObjectDetectionTransform by @yutong-xiang-97 in #694
- Add EUPE support by @guarin in #696
- Refactor StopPolicy by @yutong-xiang-97 in #697
- Fix epoch calculation by @yutong-xiang-97 in #698
- Allow specifying classes from a json file for semantic segmentation by @simonschoelly in #699
- Document distillationv3 + make it default by @liopeer in #691
- Document EUPE Models by @guarin in #700
- Resolve default start and stop step based on total number of steps automatically. by @yutong-xiang-97 in #702
- Make ready for release of 0.15.0 by @liopeer in #704
New Contributors
- @mrpositron made their first contribution in #682
Full Changelog: v0.14.3...v0.15.0
v0.14.3
Added
- Add support for DINOv2 panoptic segmentation inference and fine-tuning.
- Add support for `metric_args` in all fine-tuning commands to allow configuring the metrics used for validation and best model checkpointing. See the documentation for details.
- Add an option to freeze the backbone for all EoMT models during training with the `model_args={"backbone_freeze": True}` argument.
- Add `YOLOOrientedObjectDetectionDataset` for loading YOLO oriented object detection datasets with (cx, cy, w, h, angle) bounding boxes.
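The on-disk row format for oriented boxes is not spelled out here; assuming one `class cx cy w h angle` row per object (a common YOLO-OBB convention, not confirmed by this changelog), a minimal parser sketch could look like:

```python
from dataclasses import dataclass


@dataclass
class OrientedBox:
    class_id: int
    cx: float     # box center x (normalized)
    cy: float     # box center y (normalized)
    w: float      # box width (normalized)
    h: float      # box height (normalized)
    angle: float  # rotation angle


def parse_obb_line(line: str) -> OrientedBox:
    """Parse one 'class cx cy w h angle' label row (assumed format)."""
    parts = line.split()
    if len(parts) != 6:
        raise ValueError(f"expected 6 fields, got {len(parts)}: {line!r}")
    return OrientedBox(int(parts[0]), *map(float, parts[1:]))


box = parse_obb_line("3 0.5 0.5 0.2 0.1 0.785")
```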
Changed
- PicoDet switched to O2O NMS-free inference/export, updated the L preset to `picodet/l-640`, and improved ONNX/TensorRT export robustness.
Removed
- It is no longer possible to set `seed=None`. An integer seed must now be provided for reproducibility. This fixes a bug where recent PyTorch Lightning versions (>=2.2) no longer generate random seeds when `seed=None` is set.
- LTDETR no longer supports the `detector_weight_decay` and `backbone_weight_decay` arguments. Use the general `weight_decay` argument instead.
Fixed
- Fix incorrect model name format in the `export_model()` log example for the DINOv2 and DINOv3 packages. The example now shows the correct format (without prefix) that works with `get_model()`.
- Fix the wrong `ScaleJitter` size config for LTDETR.
- Fix a bug when loading DINOv3 LTDETR checkpoints that were not pretrained on COCO, which resulted in backbone weights not being loaded.
What's Changed
- Gradient Accumulation for Task Training by @liopeer in #627
- Add metric_args by @guarin in #629
- Ignore grad acc warning by @guarin in #632
- Oriented Object Detection Tranforms, collation and types. by @gabrielfruet in #616
- Skip slow multihead test on windows by @guarin in #638
- Simplify fine-tuning learning rate logging by @guarin in #633
- Add DINOv2 EoMT Panoptic Segmentation by @guarin in #630
- Improve metrics naming by @guarin in #639
- Move collate functions to respective transforms by @guarin in #641
- Picodet improvements is by @IgorSusmelj in #599
- Add EoMT backbone freezing by @guarin in #636
- Remove cmake install in github actions by @guarin in #648
- Add torch.compile support for image classification by @guarin in #640
- Improve user id handling by @guarin in #644
- Update seed behavior by @guarin in #651
- Fix STA learning rate by @guarin in #646
- Refactor scale jitter by @guarin in #643
- Remove extra config validation by @guarin in #652
- Skip slow windows tests by @guarin in #654
- Unify DINOv3 fine-tuning backbone loading by @guarin in #642
- Remove detector weight decay by @guarin in #647
- Allow nonfinite clip gradient by @guarin in #653
- Add LightlyStudio link to README by @michal-lightly in #657
- Raise error if `best_metrics.metrics.watch_metric != last_metrics.watch_metric` by @yutong-xiang-97 in #659
- Fix ScaleJitter Config by @yutong-xiang-97 in #658
- Fix mypy issues by @yutong-xiang-97 in #661
- No longer test supergradients by @guarin in #649
- Log ci container by @guarin in #655
- Add YOLOOrientedObjectDetectionDataset for oriented object detection by @gabrielfruet in #650
- fix: correct model name format in export_model log example by @gabrielfruet in #656
- Update metrics_args documentation by @guarin in #663
- Accumulate train loss with gradient_accumulation_steps by @guarin in #662
- Improve loss running window handling by @guarin in #664
- Disable torch.compile by default by @guarin in #666
- Release v0.14.3 by @guarin in #667
New Contributors
- @michal-lightly made their first contribution in #657
Full Changelog: v0.14.2...v0.14.3
v0.14.2
[0.14.2] - 2026-02-24
New Classification Support: You can now train image classification models with LightlyTrain! See the classification documentation for more information.
Added
- Add classification support.
- Add support for frozen backbone training in LTDETR and PicoDet object detection models. Set `model_args={"backbone_freeze": True}` in `train_object_detection` to freeze the backbone and reduce VRAM usage.
- Add LTDETR support for DINOv3 ViT-B/L and DINOv2 ViT-L/B/G models. Pretrained weights are not yet available for these models.
- Add support for fine-tuning DINOv2 models for instance segmentation with the `train_instance_segmentation` command. See the instance segmentation documentation for more information.
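A hedged sketch of frozen-backbone object detection training, using only argument names mentioned above (the dataset and model values are placeholders, not verified identifiers):

```python
import lightly_train

# Sketch: object detection training with a frozen backbone to reduce
# VRAM usage. Dataset/model values are placeholders; consult the docs
# for the exact dataset format and supported model names.
lightly_train.train_object_detection(
    out="out/od_frozen",
    data="path/to/dataset.yaml",           # placeholder dataset config
    model="dinov3/vits16",                 # placeholder model string
    model_args={"backbone_freeze": True},  # freeze backbone, train head only
)
```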
Fixed
- Filter invalid bounding boxes in instance segmentation.
- Fix incorrect logging of training times.
What's Changed
- Update UV by @guarin in #606
- Relax tests by @guarin in #607
- Add Image Classification Implementation by @guarin in #608
- Implement functions for reading YOLO OBB labels. by @gabrielfruet in #609
- Filter invalid bounding boxes in instance segmentation by @guarin in #610
- Add dinov2 eomt instance segmentation by @guarin in #611
- Add more ltdetr configs by @guarin in #612
- Add image classification multihead by @guarin in #613
- Add semantic segmentation multihead by @guarin in #614
- Fix pywinpty CI error by @guarin in #621
- Update GPU/Data time logging by @guarin in #617
- Document classification by @guarin in #620
- Raise error if backbone weights file doesn't exist by @guarin in #622
- Update license logs by @guarin in #623
- Object Detection backbone freezing option by @gabrielfruet in #618
- Fix timer tests by @guarin in #625
- Release 0.14.2 by @guarin in #626
New Contributors
- @gabrielfruet made their first contribution in #609
Full Changelog: v0.14.1...v0.14.2
v0.14.1
Added
- Add Python 3.13 support.
- Add random rotation transforms for all fine-tuning tasks.
- Add DistillationV3 preview tailored for ViT models.
- Skip weight decay for bias/norm/token/etc. layers.
- Add automatic fine-tuning learning rate scaling based on batch size for all tasks.
- Log fine-tuning training time breakdown.
Changed
- Missing object detection and instance segmentation label files are now treated as images without objects instead of being skipped. This can be configured by setting the `skip_if_label_file_missing` flag in the `data` argument of the `train_object_detection` and `train_instance_segmentation` functions, respectively.
- DINO now updates the teacher temperature and last-layer freezing based on the number of training steps instead of epochs.
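The new missing-label-file behavior can be illustrated with a small stand-in (this is not the library's implementation; `load_boxes` and the flag handling are illustrative only):

```python
from pathlib import Path


def load_boxes(image_path: Path, skip_if_label_file_missing: bool = False):
    """Illustrative stand-in for the new missing-label-file behavior.

    Returns the label rows, an empty list for an image treated as having
    no objects, or None when the image should be skipped entirely.
    """
    label_path = image_path.with_suffix(".txt")
    if not label_path.exists():
        # New default: treat the image as having no objects.
        # Opt-in old behavior: skip the image.
        return None if skip_if_label_file_missing else []
    return label_path.read_text().splitlines()


# A path with no label file on disk:
missing = Path("does_not_exist/img_0001.jpg")
assert load_boxes(missing) == []  # kept as an image without objects
assert load_boxes(missing, skip_if_label_file_missing=True) is None  # skipped
```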
Fixed
- Fix missing libxcb1 dependency in Dockerfile causing cv2 import errors.
- Fix issue when fine-tuning panoptic segmentation models with a different number of
classes than the pretrained model.
What's Changed
- Add random rotation fine-tune transforms by @guarin in #577
- Thomas trn 1752 classification dataset by @stegmuel in #571
- fix: ensure boolean mask dtype in YOLOObjectDetectionDataset by @ucsk in #580
- Document training settings by @guarin in #581
- Add distillationv3 by @stegmuel in #582
- Update contributing guide by @guarin in #592
- Document pretrain settings by @guarin in #586
- DINO: Use by step schedules by default by @guarin in #588
- Fix docker build by @guarin in #595
- Add LTDETR predict docstring by @guarin in #594
- Add python 3.13 support by @guarin in #593
- Scale fine-tune lr by batch size by @guarin in #589
- Handle missing detection/segmentation label files by @guarin in #587
- No weight decay for bias/norm/token/etc. by @guarin in #597
- No weight decay for norm/bias/tokens/etc. for LTDETR by @guarin in #598
- CI: Remove python 3.12 by @guarin in #603
- Cleanup state dict hooks by @guarin in #600
- LTDETR keep weights hook by @guarin in #602
- Fine-tune log training times by @guarin in #604
- Handle panoptic fine-tune with different num classes by @guarin in #601
- Release 0.14.1 by @guarin in #605
Full Changelog: v0.14.0...v0.14.1
v0.14.0
New PicoDet Models
We release a preview of PicoDet object detection models for low-power embedded devices!
New Tiny Models
We release tiny DINOv3 based models for instance segmentation, panoptic segmentation, and semantic segmentation!
New ONNX and TensorRT FP16 Export
You can now export all supported models to ONNX and TensorRT in FP16 precision for faster inference! Object detection, instance segmentation, panoptic segmentation, and semantic segmentation are supported!
Added
- Add PicoDet family of object detection models for low-power embedded devices.
- Add tiny semantic segmentation models.
- Add tiny instance segmentation models.
- Add tiny panoptic segmentation models.
- Add FP16/FP32 ONNX and TensorRT export for object detection models.
- Add FP16/FP32 ONNX and TensorRT export for instance segmentation models.
- Add FP16/FP32 ONNX and TensorRT export for panoptic segmentation models.
- Add FP16/FP32 ONNX and TensorRT export for semantic segmentation models.
- Add example jupyter notebooks for ONNX and TensorRT export.
- Add Slicing Aided Hyper Inference (SAHI) for object detection to improve small-object recall at inference.
- Add pretraining support for Ultralytics RT-DETR models.
- Add support for different patch sizes in EoMT and semantic segmentation.
- Add classwise metrics support for object detection models.
- Add `ignore_classes` support for object detection models.
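SAHI runs detection on overlapping image slices and merges the results, which recovers small objects that are lost at full-image resolution. A sketch of the slice-coordinate computation (not LightlyTrain's implementation; slice size and overlap defaults are illustrative):

```python
def slice_coords(width, height, slice_size=640, overlap=0.2):
    """Return (x0, y0, x1, y1) slice windows covering the image.

    Adjacent slices overlap by `overlap` * slice_size; edge slices are
    shifted inward so every window is exactly slice_size wide and tall
    (assuming the image is at least slice_size in each dimension).
    """
    stride = int(slice_size * (1 - overlap))
    coords = []
    for y in range(0, max(height - slice_size, 0) + stride, stride):
        for x in range(0, max(width - slice_size, 0) + stride, stride):
            x1 = min(x + slice_size, width)
            y1 = min(y + slice_size, height)
            coords.append((max(x1 - slice_size, 0), max(y1 - slice_size, 0), x1, y1))
    return coords


# A 1280x720 image with 640px slices and 20% overlap needs 3x2 = 6 slices.
tiles = slice_coords(1280, 720)
```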
Changed
- Change default DINOv3 EoMT semantic segmentation image size from 518x518 to 512x512.
- New checkpoints for the COCO pretrained DINOv3 EoMT semantic segmentation models.
Fixed
- Fix numerical stability issues in LTDETR's matcher.
- Fix issue when resuming from interrupted runs with custom model args.
What's Changed
- Auto-wrap markdown by @guarin in #525
- Improve teacher loading logging by @guarin in #527
- Thomas trn 1715 ltdetr tensorrt export by @stegmuel in #522
- Object detection - SAHI by @stegmuel in #508
- Set dinov3 eomt semseg default image size to 512 by @guarin in #533
- Add pretrain to API docs by @guarin in #534
- Tiny eomt semseg models by @stegmuel in #536
- Move semantic segmentation onnx export to model by @guarin in #528
- Update EoMT coco semseg models by @guarin in #537
- Fix potential NaNs in matchs by @stegmuel in #538
- Update coco sem. seg. results by @stegmuel in #539
- Add semantic segmentation tensorrt export by @guarin in #535
- Enable RTDETR by @XevenQC in #529
- Add instance segmentation onnx export by @guarin in #540
- Support ultralytics models from local path by @guarin in #543
- Add instance segmentation tensorrt export by @guarin in #541
- Add panoptic segmentation onnx/tensorrt export by @guarin in #542
- Thomas trn 1744 eomt tiny by @stegmuel in #532
- Handle zero ema warmup steps by @guarin in #550
- Thomas trn 1745 tiny eomt inst models by @stegmuel in #552
- fix best model export by @stegmuel in #554
- Add task images by @guarin in #556
- Update LTDETR train arg names by @guarin in #545
- fix broken instance seg. table by @stegmuel in #559
- make vit tiny available for pan. seg. by @stegmuel in #557
- Add FP16 support for EoMT models by @guarin in #561
- change type before transforms by @stegmuel in #563
- Fix validation aliases by @guarin in #568
- Fix LTDETR ONNX FP16 export by @guarin in #562
- Add LTDETR classwise metrics and ignore classes by @guarin in #569
- Add picodet model by @IgorSusmelj in #531
- Add picodet docs by @IgorSusmelj in #574
- get model_init_args from checkpoint by @stegmuel in #575
- Prepeare 0.14.0 release by @guarin in #572
Full Changelog: v0.13.2...v0.14.0
v0.13.2
Added
- Support for pretraining RF-DETR 1.3 models.
Changed
- Export only EMA weights for object detection models. This reduces the exported model
size by 2x.
Fixed
- Fix pretrained ViT-small panoptic segmentation model checkpoint.
What's Changed
- Support new RF-DETR 1.3 models (Small, Nano, Medium, SegPreview) by @masakljun in #503
- Update DINOv3 ViT-S Panoptic COCO Checkpoint by @guarin in #519
- Update events for predictions by @IgorSusmelj in #521
- Export only EMA model by @guarin in #516
- Add DINOv3 fine-tuning tests by @guarin in #517
- Release 0.13.2 by @guarin in #524
Full Changelog: v0.13.1...v0.13.2
v0.13.1
v0.13.0
New DINOv3 Tiny Object Detection Models: We release tiny DINOv3 models pretrained on
COCO for object detection!
New DINOv3 Panoptic Segmentation: You can now run inference and fine-tune DINOv3 models
for panoptic segmentation!
Added
- New COCO pretrained tiny LTDETR models `vitt16` and `vitt16plus`.
- Support for DINOv3 panoptic segmentation inference and fine-tuning.
- Quick start guide for object detection.
- Possibility to load backbone weights in LTDETR.
- ONNX export for LTDETR.
- Add Weights & Biases logging support for all fine-tuning tasks.
- Log best validation metrics at the end of training.
Changed
- Rename `lightly_train.train()` to `lightly_train.pretrain()`. The old name is still available as an alias for backward compatibility but will be removed in a future release.
- Restructured the documentation to better reflect the different workflows supported by LightlyTrain.
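Migrating to the new name is a one-line rename; a sketch with placeholder arguments:

```python
import lightly_train

# Old name (still works as a deprecated alias, will be removed):
#   lightly_train.train(out="out/run", data="data/images",
#                       model="torchvision/resnet50")
# New name:
lightly_train.pretrain(
    out="out/run",                 # placeholder output directory
    data="data/images",            # placeholder image folder
    model="torchvision/resnet50",  # placeholder model string
)
```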
Fixed
- Fix bug in `model.predict()` for object detection models.
- Fix bug in object detection transforms when using images with dtype float32.
- Fix bug when running pretraining on an MPS device.
- Fix bug when resuming training with a recent PyTorch version.
- Fix bug when resuming a crashed run that was initialized from a pretrained COCO model.
What's Changed
- add log of best result by @masakljun in #475
- Thomas trn 1628 small od models by @stegmuel in #430
- Object detection support backbone weights by @stegmuel in #483
- Fix resuming from coco init by @stegmuel in #485
- Add panoptic segmentation by @guarin in #488
- Load teacher on correct device by @guarin in #490
- Thomas trn 1653 ltdetr onnx export by @stegmuel in #487
- Remove RSNA/Neurips banner by @stegmuel in #494
- Fix resuming error with recent torch versions by @guarin in #492
- Fix learning rate monitor float64 MPS issue by @guarin in #496
- Remove teacher eval mode warning by @guarin in #495
- Masa trn 1695 load ema weights only by @masakljun in #484
- Use latency everywhere by @stegmuel in #498
- Add links to vit tiny checkpoints by @stegmuel in #497
- Fix pytest out of disk space by @guarin in #502
- Cleanup transforms dtypes and asserts by @guarin in #499
- Revert docs CSS changes by @guarin in #504
- Restructure docs by @guarin in #489
- Update distillation quickstart by @guarin in #506
- Release v0.13.0 by @guarin in #507
New Contributors
- @masakljun made their first contribution in #475
Full Changelog: v0.12.4...v0.13.0
v0.12.4
v0.12.3
Added
- Add support for specifying data configs in YAML format.
Changed
- Improve the layout of the object detection training logs.
Deprecated
- Deprecate the `reuse_class_head` argument in the `train` command. The model will now automatically reuse the classification head only when the number of classes in the data config matches that in the checkpoint; otherwise, the classification head will be re-initialized.
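The automatic rule reduces to a class-count comparison; as an illustrative stand-in (not the library's code):

```python
def should_reuse_class_head(num_classes_data: int, num_classes_ckpt: int) -> bool:
    """Reuse the checkpoint's classification head only when the class
    counts match; otherwise the head is re-initialized from scratch."""
    return num_classes_data == num_classes_ckpt


assert should_reuse_class_head(80, 80) is True   # e.g. COCO ckpt -> COCO data
assert should_reuse_class_head(20, 80) is False  # class counts differ: re-init
```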
Fixed
- Fix `image_size` not being a tuple when training from a pretrained model.
- Fix a bug when fine-tuning a model with `resume_interrupted=True`.
- Fix `num_classes` not being updated when loading an object detection checkpoint with a different number of classes.
What's Changed
- Fix and speed up unit tests by @guarin in #450
- Add events for training detection and segmentation models by @IgorSusmelj in #447
- Fix dist training issues by @yutong-xiang-97 in #446
- Fix resume interrupted for fine-tuning by @guarin in #448
- Support yaml data config by @yutong-xiang-97 in #455
- Improve Metrics Logging by @yutong-xiang-97 in #458
- Internal class id mapping in Object Detection dataset by @yutong-xiang-97 in #462
- Update docs/readme by @guarin in #454
- Fix url of SAT493m url by @stegmuel in #465
- Deprecate reuse class head and use hooks for segmentation by @yutong-xiang-97 in #466
- Add RSNA/NeurIPS README banner by @liopeer in #468
- Fix README title by @liopeer in #469
- Fix OD `num_classes` issue by @yutong-xiang-97 in #467
- Release v0.12.3 by @guarin in #472
Full Changelog: v0.12.2...v0.12.3