Conversation

wbruna (Contributor) commented Jan 3, 2026

Models available as multiple file types were likely converted from one another, so we should prioritize native, safer and more efficient formats.

leejet (Owner) commented Jan 4, 2026

I’m not entirely sure whether this is necessary here. Generally, GGUF files include the corresponding quantization format in their filenames, so they differ from the original version.

wbruna (Contributor, Author) commented Jan 4, 2026

My main concern is with the current .pt priority. I agree it doesn't matter much for .gguf versus .safetensors; I don't see a strong reason to prefer .safetensors over .gguf either, but I can adapt the PR if there is one.

leejet merged commit c5602a6 into leejet:master on Jan 5, 2026 (10 checks passed).