Releases: lightly-ai/lightly-train

v0.15.0

16 Apr 16:00
ed6e427

[0.15.0] - 2026-04-16

New Distillation Method and Custom Teacher Models: We release the new DistillationV3
method, which achieves better generalization across fine-tuning tasks and works better
with DINOv3 teacher models. The new method also supports custom teacher models.

Added

  • Add distillationv3 method for dense as well as global feature distillation.
  • Add support for custom teacher models with distillationv3.
  • Add support for the new EUPE models from Meta for all distillation, pretraining,
    and fine-tuning tasks. For example, use dinov3/vits16-eupe instead of
    dinov3/vits16 to load the EUPE pretrained ViT-S/16 model. See the documentation
    for all supported models.
  • Add Mosaic augmentation for LTDETR object detection training.
  • Add CopyBlend augmentation for LTDETR object detection training.
  • Add MixUp augmentation for LTDETR object detection training.
  • Add logging of completed epochs to the console and the loggers.
  • Add support for COCO object detection dataset format.
  • Add support for specifying semantic segmentation classes from a JSON file.
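
As a toy illustration of the difference between dense (per-patch) and global feature distillation, the sketch below computes a mean-squared-error loss between student and teacher features. The function names and the loss choice are purely illustrative and are not LightlyTrain's actual API or loss:

```python
# Toy sketch: dense distillation matches the student to the teacher per patch,
# global distillation pools patch features into one vector per model first.
# Illustrative only; not LightlyTrain's implementation.

def dense_distillation_loss(student_feats, teacher_feats):
    """Mean squared error over per-patch feature vectors (lists of lists)."""
    assert len(student_feats) == len(teacher_feats)
    total, count = 0.0, 0
    for s_patch, t_patch in zip(student_feats, teacher_feats):
        for s, t in zip(s_patch, t_patch):
            total += (s - t) ** 2
            count += 1
    return total / count


def global_distillation_loss(student_feats, teacher_feats):
    """Average-pool patches into a single global vector, then compare."""
    def pool(feats):
        return [sum(dim) / len(feats) for dim in zip(*feats)]
    return dense_distillation_loss([pool(student_feats)], [pool(teacher_feats)])
```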

Changed

  • The default distillation method is now v3 (previously v2), with a DINOv3 teacher
    instead of a DINOv2 teacher. The previous default is still available with
    method="distillationv2".
  • Make ScaleJitter in LTDETR step-aware. The augmentation can now be stopped after a
    given step with the step_stop argument, for example
    transform_args={"scale_jitter": {"step_stop": 10000}}.
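
The step-aware behavior can be pictured as a small gating function: once the global training step reaches step_stop, the augmentation becomes a no-op. This is a minimal sketch of the idea, not the library's implementation:

```python
import random


def scale_jitter_factor(step, step_stop=None, scale_range=(0.5, 1.5)):
    """Return a random scale factor, or 1.0 (no-op) once step >= step_stop.

    Illustrative sketch of a step-aware augmentation; the real ScaleJitter
    transform operates on images and has more parameters.
    """
    if step_stop is not None and step >= step_stop:
        return 1.0  # augmentation disabled from this step on
    return random.uniform(*scale_range)
```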

Removed

  • Remove StopPolicy and use ActivationPolicy instead for more fine-grained control
    over the step-aware augmentations.

Full Changelog: v0.14.3...v0.15.0

v0.14.3

26 Mar 08:52
8828a08

Added

  • Add support for DINOv2 panoptic segmentation inference and fine-tuning.
  • Add support for metric_args in all fine-tuning commands to allow configuring the metrics used for validation and best model checkpointing. See the documentation for details.
  • Add option to freeze the backbone for all EoMT models during training with the model_args={"backbone_freeze": True} argument.
  • Add YOLOOrientedObjectDetectionDataset for loading YOLO oriented object detection datasets with (cx, cy, w, h, angle) bounding boxes.
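
The (cx, cy, w, h, angle) oriented-box format can be converted to four corner points with a standard 2D rotation. A self-contained sketch, assuming the angle is given in radians (the dataset's actual angle convention may differ):

```python
import math


def obb_to_corners(cx, cy, w, h, angle):
    """Convert an oriented box (cx, cy, w, h, angle in radians) to its
    four corner points, rotated around the box center."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate the offset, then translate by the center.
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners
```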

Changed

  • PicoDet switched to one-to-one (O2O) NMS-free inference/export, the L preset was
    updated to picodet/l-640, and ONNX/TensorRT export robustness was improved.

Removed

  • It is no longer possible to set seed=None. Instead, an integer seed must be provided for reproducibility. This fixes a bug where recent PyTorch Lightning versions (>=2.2) no longer generate random seeds when seed=None is set.
  • LTDETR no longer supports the detector_weight_decay and backbone_weight_decay arguments. Instead use the general weight_decay argument.

Fixed

  • Fix incorrect model name format in export_model() log example for DINOv2 and DINOv3 packages. The example now shows the correct format (without prefix) that works with get_model().
  • Fix the incorrect ScaleJitter size configuration in LTDETR.
  • Fix a bug when loading DINOv3 LTDETR checkpoints that were not pretrained on COCO which resulted in backbone weights not being loaded.

Full Changelog: v0.14.2...v0.14.3

v0.14.2

24 Feb 15:18
3c71f04

[0.14.2] - 2026-02-24

New Classification Support: You can now train image classification models with LightlyTrain! See the classification documentation for more information.

Added

  • Add classification support.
  • Add support for frozen backbone training in LTDETR and PicoDet object detection
    models. Set model_args={"backbone_freeze": True} in train_object_detection to
    freeze the backbone and reduce VRAM usage.
  • Add LTDETR support for DINOv3 ViT-B/L and DINOv2 ViT-L/B/G models. Pretrained weights
    are not yet available for these models.
  • Add support for fine-tuning DINOv2 models for instance segmentation with the
    train_instance_segmentation command. See the
    instance segmentation documentation
    for more information.

Fixed

  • Filter invalid bounding boxes in instance segmentation.
  • Fix incorrect logging of training times.

Full Changelog: v0.14.1...v0.14.2

v0.14.1

09 Feb 09:29
523462b

Added

  • Add Python 3.13 support.
  • Add random rotation transforms for all fine-tuning tasks.
  • Add DistillationV3 preview tailored for ViT models.
  • Skip weight decay for bias, norm, token, and similar layers.
  • Add automatic fine-tuning learning rate scaling based on batch size for all tasks.
  • Log fine-tuning training time breakdown.
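
Automatic learning-rate scaling typically follows the linear rule: the learning rate grows proportionally with the batch size relative to a reference batch size. The exact rule and reference value used by LightlyTrain are not stated here, so the sketch below only illustrates the common linear variant with an assumed reference of 256:

```python
def scale_learning_rate(base_lr, batch_size, base_batch_size=256):
    """Linear LR scaling: lr = base_lr * batch_size / base_batch_size.

    The reference batch size of 256 is an assumption for illustration,
    not necessarily what LightlyTrain uses.
    """
    return base_lr * batch_size / base_batch_size
```

Doubling the batch size doubles the learning rate, keeping the per-example update magnitude roughly constant.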

Changed

  • Missing object detection and instance segmentation label files are now treated as
    images without objects instead of being skipped. This can be configured by setting the
    skip_if_label_file_missing flag in the data argument of the
    train_object_detection and train_instance_segmentation functions respectively.
  • DINO now updates the teacher temperature and last layer freezing based on the number
    of training steps instead of epochs.

Fixed

  • Fix missing libxcb1 dependency in Dockerfile causing cv2 import errors.
  • Fix issue when fine-tuning panoptic segmentation models with a different number of
    classes than the pretrained model.

New Contributors

  • @ucsk made their first contribution in #580

Full Changelog: v0.14.0...v0.14.1

v0.14.0

20 Jan 08:50
bac8c81

New PicoDet Models
We release a preview of PicoDet object detection models for low-power embedded devices!

New Tiny Models
We release tiny DINOv3 based models for instance segmentation, panoptic segmentation, and semantic segmentation!

New ONNX and TensorRT FP16 Export
You can now export all supported models to ONNX and TensorRT in FP16 precision for faster inference! Object detection, instance segmentation, panoptic segmentation, and semantic segmentation are supported!

Added

  • Add PicoDet family of object detection models for low-power embedded devices.
  • Add tiny semantic segmentation models.
  • Add tiny instance segmentation models.
  • Add tiny panoptic segmentation models.
  • Add FP16/FP32 ONNX and TensorRT export for object detection models.
  • Add FP16/FP32 ONNX and TensorRT export for instance segmentation models.
  • Add FP16/FP32 ONNX and TensorRT export for panoptic segmentation models.
  • Add FP16/FP32 ONNX and TensorRT export for semantic segmentation models.
  • Add example jupyter notebooks for ONNX and TensorRT export.
  • Add Slicing Aided Hyper Inference (SAHI) for object detection to improve
    small-object recall at inference.
  • Add pretraining support for Ultralytics RT-DETR models.
  • Add support for different patch size in EoMT and semantic segmentation.
  • Add classwise metrics support for object detection models.
  • Add ignore_classes support for object detection models.
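
The idea behind SAHI is to run the detector on overlapping slices of a large image so that small objects occupy more pixels per slice, then merge the per-slice detections. A minimal sketch of the slice-coordinate computation (tile and overlap values are arbitrary examples, not LightlyTrain defaults):

```python
def tile_starts(length, tile, stride):
    """Start offsets of tiles along one axis, with a final edge-aligned tile
    so the whole extent is covered."""
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] + tile < length:
        starts.append(length - tile)
    return starts


def slice_image_coords(width, height, tile=640, overlap=128):
    """Return (x1, y1, x2, y2) boxes of overlapping tiles covering an image.

    Assumes the image is at least as large as one tile in each dimension.
    """
    stride = tile - overlap
    return [(x, y, x + tile, y + tile)
            for y in tile_starts(height, tile, stride)
            for x in tile_starts(width, tile, stride)]
```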

Changed

  • Change default DINOv3 EoMT semantic segmentation image size from 518x518 to 512x512.
  • New checkpoints for the COCO pretrained DINOv3 EoMT semantic segmentation models.

Fixed

  • Fix numerical stability issues in LTDETR's matcher.
  • Fix issue when resuming from interrupted runs with custom model args.

Full Changelog: v0.13.2...v0.14.0

v0.13.2

29 Dec 11:00
1915f19

Added

  • Support for pretraining RF-DETR 1.3 models.

Changed

  • Export only EMA weights for object detection models. This reduces the exported model
    size by 2x.

Fixed

  • Fix pretrained ViT-small panoptic segmentation model checkpoint.

Full Changelog: v0.13.1...v0.13.2

v0.13.1

18 Dec 10:40
1b973a9

Fixed

  • Fix bug in ONNX export for object detection models.

Full Changelog: v0.13.0...v0.13.1

v0.13.0

15 Dec 13:09
4eb84f9

New DINOv3 Tiny Object Detection Models: We release tiny DINOv3 models pretrained on
COCO for object detection!

New DINOv3 Panoptic Segmentation: You can now run inference and fine-tune DINOv3 models
for panoptic segmentation!

Changed

  • Rename lightly_train.train() to lightly_train.pretrain(). The old name is still
    available as an alias for backward compatibility but will be removed in a future release.
  • Restructured the documentation to better reflect the different workflows supported
    by LightlyTrain.

Fixed

  • Fix bug in model.predict() for object detection models.
  • Fix bug in object detection transforms when using images with dtype float32.
  • Fix bug when running pretraining on an MPS device.
  • Fix bug when resuming training with a recent PyTorch version.
  • Fix bug when resuming a crashed run that was initialized from a pretrained COCO model.

Full Changelog: v0.12.4...v0.13.0

v0.12.4

27 Nov 08:46
10a3528

Fixed

  • Fix bug in model.predict() for object detection models.

Full Changelog: v0.12.3...v0.12.4

v0.12.3

26 Nov 13:48
e022717

Added

  • Add support for specifying data configs in YAML format.

Changed

  • Improve the layout of the object detection training logs.

Deprecated

  • Deprecate reuse_class_head argument in train command. The model will now
    automatically reuse the classification head only when the number of classes in the
    data config matches that in the checkpoint. Otherwise, the classification head will
    be re-initialized.
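
The automatic behavior that replaces reuse_class_head boils down to a simple rule; the function below is an illustration of that decision, not library code:

```python
def resolve_class_head(num_classes_data, num_classes_checkpoint):
    """Reuse the checkpoint's classification head only when the class counts
    match; otherwise the head is re-initialized. Illustrative sketch."""
    if num_classes_data == num_classes_checkpoint:
        return "reuse"
    return "reinitialize"
```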

Fixed

  • Fix image_size not being converted to a tuple when training from a pretrained
    model.
  • Fix a bug when fine-tuning a model with resume_interrupted=True.
  • Fix num_classes not being updated when loading an object detection checkpoint
    with a different number of classes.

Full Changelog: v0.12.2...v0.12.3