fix(deps): update dependency transformers to v5 (#317)
dreadnode-renovate-bot[bot] wants to merge 1 commit into main
This PR contains the following updates:

| datasource | package | from | to | change |
| ---------- | ------------ | ------ | ----- | ------ |
| pypi | transformers | 4.57.1 | 5.1.0 | `>=4.41.0,<5.0.0` → `>=5.1.0,<5.2.0` |
Release Notes
huggingface/transformers (transformers)
v5.1.0: EXAONE-MoE, PP-DocLayoutV3, Youtu-LLM, GLM-OCR (Compare Source)
New Model additions
EXAONE-MoE
K-EXAONE is a large-scale multilingual language model developed by LG AI Research. Built using a Mixture-of-Experts architecture, K-EXAONE features 236 billion total parameters, with 23 billion active during inference. Performance evaluations across various benchmarks demonstrate that K-EXAONE excels in reasoning, agentic capabilities, general knowledge, multilingual understanding, and long-context processing.
PP-DocLayoutV3
PP-DocLayoutV3 is a unified and high-efficiency model designed for comprehensive layout analysis. It addresses the challenges of complex physical distortions—such as skewing, curving, and adverse lighting—by integrating instance segmentation and reading order prediction into a single, end-to-end framework.
Youtu-LLM
Youtu-LLM is a new, small, yet powerful LLM: it contains only 1.96B parameters, supports 128k long context, and has native agentic capabilities. On general evaluations, Youtu-LLM significantly outperforms SOTA LLMs of similar size in terms of commonsense, STEM, coding, and long-context capabilities; in agent-related testing, Youtu-LLM surpasses larger models and is truly capable of completing multiple end-to-end agent tasks.
GlmOcr
GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It introduces Multi-Token Prediction (MTP) loss and stable full-task reinforcement learning to improve training efficiency, recognition accuracy, and generalization. The model integrates the CogViT visual encoder pre-trained on large-scale image–text data, a lightweight cross-modal connector with efficient token downsampling, and a GLM-0.5B language decoder. Combined with a two-stage pipeline of layout analysis and parallel recognition based on PP-DocLayout-V3, GLM-OCR delivers robust and high-quality OCR performance across diverse document layouts.
Breaking changes
🚨 T5Gemma2 model structure (#43633) - Makes sure that the attn implementation is set on all sub-configs. `config.encoder.text_config` was not getting its attn set because we weren't passing it to `PreTrainedModel.__init__`. We can't change the model structure without breaking, so I manually re-added a call to `self.adjust_attn_implementation` in the modeling code.
🚨 Generation cache preparation (#43679) - Refactors cache initialization in generation to ensure sliding window configurations are now properly respected. Previously, some models (like Afmoe) created caches without passing the model config, causing sliding window limits to be ignored. This is breaking because models with sliding window attention will now enforce their window size limits during generation, which may change generation behavior or require adjusting sequence lengths in existing code.
🚨 Delete duplicate code in backbone utils (#43323) - This PR cleans up backbone utilities. Specifically, we currently have 5 different config attributes to decide which backbone to load, most of which seem redundant and can be merged into one.
After this PR, we'll have only one `config.backbone_config` as a single source of truth. The models will load the backbone with `from_config` and load pretrained weights only if the checkpoint has any weights saved. The overall idea is the same as in other composite models. A few config arguments are removed as a result.
🚨 Refactor DETR to updated standards (#41549) - standardizes the DETR model to be closer to other vision models in the library.
🚨 Fix floating-point precision in JanusImageProcessor resize (#43187) - replaces an `int()` with `round()`; expect slight numerical differences.
🚨 Remove deprecated AnnotionFormat (#42983) - removes a misnamed class in favour of `AnnotationFormat`.

Bugfixes and improvements
- [feat] Allow loading `T5Gemma2Encoder` with `AutoModel` (#43559) by @tomaarsen
- `image_sizes` input param (#43678) by @kaixuanliu
- [Attn] Fixup interface usage after refactor (#43706) by @vasqu
- `num_frames` in ASR pipeline (#43546) by @jiqing-feng
- `PreTrainedTokenizerBase` (#43675) by @tarekziade
- `FP8Expert` for DeepSeek R1 (#43616) by @yiliu30
- [HunYuan] Fix RoPE init (#43411) by @vasqu
- [Sam] Fixup training flags (#43567) by @vasqu
- `process_bad_commit_report.py`: avoid items to appear in `null` author in the report (#43662) by @ydshieh
- `KeyError` in `check_bad_commit.py` (#43655) by @ydshieh
- `tied_weight_keys` in-place (#43619) by @zucchini-nlp
- [Rope] Revert #43410 and make inheritance implicit again (#43620) by @vasqu
- `make_batched_video` with 5D arrays (#43486) by @zucchini-nlp
- `utils/fetch_hub_objects_for_ci.py`: avoid too many requests and/or timeout (#43584) by @ydshieh
- `MistralConverter.extract_vocab_merges_from_model` (#43557) by @tarekziade
- `templates` folder (#43536) by @Cyrilvallez
- [Modular] Allow to add new bases that are not present in the inherited class (#43556) by @vasqu
- `pad_token_id` (#43453) by @Sai-Suraj-27
- [RoPE] Make explicit inheritance (#43410) by @vasqu
- `ShieldGemma2IntegrationTest::test_model` (#43343) by @sywangyi
- `SamHQModelIntegrationTest::test_inference_mask_generation_batched_points_batched_images` for XPU (#43511) by @sywangyi
- `super()` (#43280) by @zucchini-nlp
- `pytest-random-order` for reproducible test randomization (#43483) by @tarekziade
- `markuplm` & `perception_lm` integration tests (#43464) by @Sai-Suraj-27

Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @tarekziade
  - `PreTrainedTokenizerBase` (#43675)
  - `MistralConverter.extract_vocab_merges_from_model` (#43557)
  - `pytest-random-order` for reproducible test randomization (#43483)
- @vasqu
  - [Attn] Fixup interface usage after refactor (#43706)
  - [HunYuan] Fix RoPE init (#43411)
  - [Sam] Fixup training flags (#43567)
  - [Rope] Revert #43410 and make inheritance implicit again (#43620)
  - [Modular] Allow to add new bases that are not present in the inherited class (#43556)
  - [RoPE] Make explicit inheritance (#43410)
- @ydshieh
  - `process_bad_commit_report.py`: avoid items to appear in `null` author in the report (#43662)
  - `KeyError` in `check_bad_commit.py` (#43655)
  - `utils/fetch_hub_objects_for_ci.py`: avoid too many requests and/or timeout (#43584)

v5.0.0: Transformers v5 (Compare Source)
Transformers v5 release notes
We have a migration guide, continuously updated and available on the `main` branch; please check it out in case you're facing issues: migration guide.

Highlights
We are excited to announce the initial release of Transformers v5. This is the first major release in five years, and it is significant: 1200 commits have been pushed to `main` since the latest minor release. This release removes a lot of long-overdue deprecations, introduces several refactors that significantly simplify our APIs and internals, and comes with a large number of bug fixes.

We give an overview of our focus for this release in the following blogpost. In these release notes, we'll focus directly on the refactors and new APIs coming with v5.
This release is the full V5 release. It sets in motion something bigger: going forward, starting with v5, we'll now release minor releases every week, rather than every 5 weeks. Expect v5.1 to follow next week, then v5.2 the week that follows, etc.
We're moving forward with this change to ensure you have access to models as soon as they're supported in the library, rather than a few weeks after.
In order to install this release, please do so with the following:
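For example (an illustrative command rather than the project's verbatim instructions; pin whichever 5.x release you need):

```bash
pip install "transformers==5.1.0"
```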
For us to deliver the best package possible, it is imperative that we have feedback on how the toolkit is currently working for you. Please try it out, and open an issue in case you're facing something inconsistent/a bug.
Transformers version 5 is a community endeavor, and we couldn't have shipped such a massive release without the help of the entire community.
Significant API changes
Dynamic weight loading
We introduce a new weight loading API in `transformers`, which significantly improves on the previous API. This weight loading API is designed to apply operations to the checkpoints loaded by transformers.
Instead of loading the checkpoint exactly as it is serialized within the model, these operations can reshape, merge,
and split the layers according to how they're defined in this new API. These operations are often a necessity when
working with quantization or parallelism algorithms.
This new API is centered around the new `WeightConverter` class. The weight converter is designed to apply a list of operations on the source keys, resulting in target keys. A common operation done on the attention layers is to fuse the query, key, value layers. Doing so with this API would amount to defining the following conversion:
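As a rough, hypothetical sketch (the import path, argument names and `Concatenate` signature below are assumptions rather than the verbatim v5 API; see #41580 for the real definitions), such a q/k/v fusion could look like this:

```python
# Hypothetical sketch only: names and signatures are assumed, not the exact v5 API.
from transformers.core_model_loading import WeightConverter, Concatenate  # assumed import path

# Map the three separate projections stored in the checkpoint onto the single
# fused projection that the model definition expects.
qkv_fusion = WeightConverter(
    source_keys=["self_attn.q_proj.weight", "self_attn.k_proj.weight", "self_attn.v_proj.weight"],
    target_keys="self_attn.qkv_proj.weight",
    operations=[Concatenate(dim=0)],  # concatenate along the output dimension
)
```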
In this situation, we apply the `Concatenate` operation, which accepts a list of layers as input and returns a single layer.
This allows us to define a mapping from architecture to a list of weight conversions. Applying those weight conversions
can apply arbitrary transformations to the layers themselves. This significantly simplified the
`from_pretrained` method and helped us remove a lot of technical debt that we accumulated over the past few years.
This results in several improvements:
Linked PR: #41580
Tokenization
Just as we moved towards a single backend library for model definition, we want our tokenizers, and the `Tokenizer` object, to be a lot more intuitive. With v5, tokenizer definition is much simpler; one can now initialize an empty `LlamaTokenizer` and train it directly on your corpus.

Defining a new tokenizer object should be as simple as this:
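As an illustrative sketch only (the v5 base class and its constructor keyword are assumptions based on the description in these notes, not the verbatim transformers API), a tokenizer class built on the `tokenizers` backend could look roughly like this:

```python
# Illustrative sketch: TokenizersBackend and its constructor kwarg are assumed
# from the description in these release notes, not copied from the v5 source.
from tokenizers import Tokenizer, decoders, models, pre_tokenizers

from transformers import TokenizersBackend  # assumed import location


class Llama5Tokenizer(TokenizersBackend):
    """Hypothetical tokenizer whose class definition fully describes how it is built."""

    def __init__(self, vocab=None, merges=None, **kwargs):
        # A byte-level BPE tokenizer: the class itself declares the model,
        # pre-tokenizer and decoder, so an argument-less init yields an empty,
        # trainable tokenizer.
        backend = Tokenizer(models.BPE(vocab=vocab or {}, merges=merges or []))
        backend.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
        backend.decoder = decoders.ByteLevel()
        super().__init__(tokenizer_object=backend, **kwargs)  # kwarg name assumed
```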
Once the tokenizer is defined as above, you can load it with `Llama5Tokenizer()`. Doing this returns you an empty, trainable tokenizer that follows the definition of the authors of `Llama5` (it does not exist yet 😉).

The above is the main motivation towards refactoring tokenization: we want tokenizers to behave similarly to models: trained or empty, and with exactly what is defined in their class definition.
Backend Architecture Changes: moving away from the slow/fast tokenizer separation
Up to now, transformers maintained two parallel implementations for many tokenizers:
- Slow tokenizers (`tokenization_<model>.py`) - Python-based implementations, often using SentencePiece as the backend.
- Fast tokenizers (`tokenization_<model>_fast.py`) - Rust-based implementations using the 🤗 tokenizers library.

In v5, we consolidate to a single tokenizer file per model: `tokenization_<model>.py`. This file will use the most appropriate backend available:

- A backend using the `sentencepiece` library. It inherits from `PythonBackend`.
- A backend using 🤗 `tokenizers`. Basically allows adding tokens.
- A backend using MistralCommon's tokenization library. (Previously known as the `MistralCommonTokenizer`)

The `AutoTokenizer` automatically selects the appropriate backend based on available files and dependencies. This is transparent: you continue to use `AutoTokenizer.from_pretrained()` as before. This allows transformers to be future-proof and modular to easily support future backends.

Defining a tokenizer outside of the existing backends
We enable users and tokenizer builders to define their own tokenizers from top to bottom. Tokenizers are usually defined using a backend such as `tokenizers`, `sentencepiece` or `mistral-common`, but we offer the possibility to design the tokenizer at a higher level, without relying on those backends.

To do so, you can import the `PythonBackend` (which was previously known as `PreTrainedTokenizer`). This class encapsulates all the logic related to added tokens, encoding, and decoding; a short sketch of this appears after the list below.

If you want something even higher up the stack, then `PreTrainedTokenizerBase` is what `PythonBackend` inherits from. It contains the very basic tokenizer API features:

- `encode`
- `decode`
- `vocab_size`
- `get_vocab`
- `convert_tokens_to_ids`
- `convert_ids_to_tokens`
- `from_pretrained`
- `save_pretrained`
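As a rough illustration of building directly on `PythonBackend` (the hook names below mirror the previous `PreTrainedTokenizer` API and are assumptions about the v5 class, not verbatim API):

```python
# Hypothetical sketch: a from-scratch whitespace tokenizer on top of PythonBackend.
# The import location and the _tokenize / _convert_* hooks are assumed to carry
# over from the former PreTrainedTokenizer; the real v5 API may differ.
from transformers import PythonBackend  # assumed import path


class SimpleWhitespaceTokenizer(PythonBackend):
    def __init__(self, vocab, unk_token="<unk>", **kwargs):
        self._vocab = dict(vocab)
        self._ids_to_tokens = {i: t for t, i in self._vocab.items()}
        super().__init__(unk_token=unk_token, **kwargs)

    @property
    def vocab_size(self):
        return len(self._vocab)

    def get_vocab(self):
        return dict(self._vocab)

    def _tokenize(self, text):
        # Higher-level design: no sentencepiece / tokenizers / mistral-common
        # backend, just plain Python splitting.
        return text.split()

    def _convert_token_to_id(self, token):
        return self._vocab.get(token, self._vocab[str(self.unk_token)])

    def _convert_id_to_token(self, index):
        return self._ids_to_tokens.get(index, str(self.unk_token))
```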
API Changes

1. Direct tokenizer initialization with vocab and merges
Starting with v5, we now enable initializing blank, untrained `tokenizers`-backed tokenizers. This tokenizer will therefore follow the definition of the `LlamaTokenizer` as defined in its class definition. It can then be trained on a corpus as can be seen in the `tokenizers` documentation.

These tokenizers can also be initialized from vocab and merges (if necessary), like the previous "slow" tokenizers:
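For instance, a minimal sketch of that initialization (the argument names follow the wording above; the exact v5 signature may differ):

```python
# Sketch only: the vocab/merges argument names are taken from the description
# above and may not match the final v5 signature exactly.
from transformers import LlamaTokenizer

vocab = {"<unk>": 0, "<s>": 1, "</s>": 2, "hello": 3, "world": 4}
merges = []  # BPE merge rules, empty here for brevity

tokenizer = LlamaTokenizer(vocab=vocab, merges=merges)
```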
This tokenizer will behave as a Llama-like tokenizer, with an updated vocabulary. This allows comparing different tokenizer classes with the same vocab; therefore enabling the comparison of different pre-tokenizers, normalizers, etc.
A `vocab_file` (as in, a path towards a file containing the vocabulary) cannot be used to initialize the `LlamaTokenizer`, as loading from files is reserved to the `from_pretrained` method.

2. Simplified decoding API
The `batch_decode` and `decode` methods have been unified to reflect the behavior of the `encode` method. Both single and batch decoding now use the same `decode` method. See an example of the new behavior below:
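A rough sketch of the behaviour (the checkpoint name and printed outputs are illustrative, not copied from the release notes):

```python
# Sketch of the unified decode() described above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

single_ids = tok.encode("Hello world")                      # list[int]
batch_ids = tok(["Hello world", "Goodbye"])["input_ids"]    # list[list[int]]

print(tok.decode(single_ids))  # "Hello world"
print(tok.decode(batch_ids))   # ["Hello world", "Goodbye"] -- the same method now handles batches
```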
We expect `encode` and `decode` to behave as two sides of the same coin: `encode`, process, `decode` should work.

3. Unified encoding API
The `encode_plus` method is deprecated in favor of the single `__call__` method.
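In practice that just means calling the tokenizer object directly; a quick sketch:

```python
# Sketch: tokenizer.__call__ replaces encode_plus.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# Before (deprecated): enc = tok.encode_plus("Hello world", return_tensors="pt")
# After: the single __call__ handles single texts, pairs and batches alike.
enc = tok("Hello world", return_tensors="pt")
print(enc["input_ids"].shape)
```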
4. `apply_chat_template` returns `BatchEncoding`

Previously, `apply_chat_template` returned `input_ids` for backward compatibility. Starting with v5, it now consistently returns a `BatchEncoding` dict like other tokenizer methods.
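A short sketch of the new return type (the checkpoint name is illustrative):

```python
# Sketch of the v5 behaviour described above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

messages = [{"role": "user", "content": "Hello!"}]
enc = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=True)

# v5: `enc` is a BatchEncoding dict (a v4 call would have returned bare input_ids
# unless return_dict=True was passed).
print(list(enc.keys()))  # expect at least "input_ids"
```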
5. Removed legacy configuration file saving

We simplify the serialization of tokenization attributes:
- `special_tokens_map.json` - special tokens are now stored in `tokenizer_config.json`.
- `added_tokens.json` - added tokens are now stored in `tokenizer.json`.
- `added_tokens_decoder` is only stored when there is no `tokenizer.json`.

When loading older tokenizers, these files are still read for backward compatibility, but new saves use the consolidated format. We're gradually moving towards consolidating attributes into fewer files so that other libraries and implementations may depend on them more reliably.
6. Model-Specific Changes
Several models that had identical tokenizers now import from their base implementation:
These modules will eventually be removed altogether.
Removed T5-specific workarounds
The internal `_eventually_correct_t5_max_length` method has been removed. T5 tokenizers now handle max length consistently with other models.

Testing Changes
A few testing changes specific to tokenizers have been applied:
Shared tests for core methods (`add_tokens`, `encode`, `decode`) are now centralized and automatically applied across all tokenizers. This reduces test duplication and ensures consistent behavior.

For legacy implementations, the original BERT Python tokenizer code (including `WhitespaceTokenizer`, `BasicTokenizer`, etc.) is preserved in `bert_legacy.py` for reference purposes.

7. Deprecated / Modified Features
Special Tokens Structure:
- `SpecialTokensMixin`: Merged into `PreTrainedTokenizerBase` to simplify the tokenizer architecture.
- `special_tokens_map`: Now only stores named special token attributes (e.g., `bos_token`, `eos_token`). Use `extra_special_tokens` for additional special tokens (formerly `additional_special_tokens`); see the sketch below.
- `all_special_tokens`: Includes both named and extra tokens.
- `special_tokens_map_extended` and `all_special_tokens_extended`: Removed. Access `AddedToken` objects directly from `_special_tokens_map` or `_extra_special_tokens` if needed.
- `additional_special_tokens`: Still accepted for backward compatibility but is automatically converted to `extra_special_tokens`.
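A small usage sketch of `extra_special_tokens` (the token name here is made up for illustration):

```python
# Sketch: register additional special tokens via extra_special_tokens, which
# exposes each entry as a named attribute on the tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "gpt2",
    extra_special_tokens={"system_token": "<|system|>"},  # hypothetical extra token
)

print(tok.system_token)        # "<|system|>"
print(tok.all_special_tokens)  # includes both named and extra special tokens
```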
Deprecated Methods:

- `sanitize_special_tokens()`: Already deprecated in v4, removed in v5.
- `prepare_seq2seq_batch()`: Deprecated; use `__call__()` with the `text_target` parameter instead (see the sketch below).
- `BatchEncoding.words()`: Deprecated; use `word_ids()` instead.
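For the seq2seq case, the replacement pattern looks roughly like this (the checkpoint name is illustrative):

```python
# Sketch: use __call__ with text_target instead of prepare_seq2seq_batch /
# as_target_tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google-t5/t5-small")

batch = tok(
    ["translate English to German: Hello"],  # encoder inputs
    text_target=["Hallo"],                   # decoder targets, tokenized with target handling
    return_tensors="pt",
    padding=True,
)
print(batch["input_ids"].shape, batch["labels"].shape)
```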
Removed Methods:

- `create_token_type_ids_from_sequences()`: Removed from the base class. Subclasses that need custom token type ID creation should implement this method directly.
- `prepare_for_model()`, `build_inputs_with_special_tokens()`, `truncate_sequences()`: Moved from `tokenization_utils_base.py` to `tokenization_python.py` for `PythonBackend` tokenizers. `TokenizersBackend` provides model-ready input via `tokenize()` and `encode()`, so these methods are no longer needed in the base class.
- `_switch_to_input_mode()`, `_switch_to_target_mode()`, `as_target_tokenizer()`: Removed from the base class. Use `__call__()` with the `text_target` parameter instead.