
v0.22.0

@sachinprasadhs sachinprasadhs released this 14 Aug 18:01

Summary:

New Models:

We've integrated a range of cutting-edge models, each designed to tackle specific challenges in their respective domains:

  • Gemma 3 270M: Released the Gemma 3 270M-parameter model, including an instruction-tuned variant. This 18-layer, text-only model is designed for
    hyper-efficient AI, particularly task-specific fine-tuning.

  • Qwen3: A powerful, large-scale multilingual language model, excelling in various natural language processing tasks, from text generation to complex reasoning.

  • DeiT: Data-efficient Image Transformers (DeiT), specifically designed to train Vision Transformers effectively with less data, making high-performance visual models more accessible.

  • HGNetV2: An advanced version of the Hybrid-Grouped Network, known for its efficient architecture in computer vision tasks, particularly optimized for performance on diverse hardware.

  • DINOV2: A state-of-the-art Self-Supervised Vision Transformer, enabling the learning of robust visual representations without relying on explicit labels, ideal for foundation models.

  • ESM & ESM2: Evolutionary Scale Modeling (ESM & ESM2), powerful protein language models used for understanding protein sequences and structures, with ESM2 offering improved capabilities for bioinformatics research.

Improvements & Enhancements

This update also includes several key improvements to enhance the platform's stability, compatibility, and flexibility:

  • Python 3.10 Minimum Support: Updated the minimum supported Python version to 3.10, ensuring compatibility with the latest libraries and features.
  • Gemma Conversion (Keras to SafeTensors): Added a new conversion script to effortlessly convert Gemma models from Keras format to Hugging Face's Safetensor format.
  • Gemma3 Conversion Script: Added conversion script for Gemma3 models, streamlining their integration into the Hugging Face ecosystem.
  • ViT Non-Square Image Support: Enhanced the Vision Transformer (ViT) model to accept non-square input images, providing greater flexibility for various computer vision applications.
  • LLM Left Padding Method: Added support for left padding in our LLM padding methods, offering more control and compatibility for specific model architectures and inference requirements.
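Left padding matters for batched autoregressive generation: since a decoder appends new tokens on the right, left-padding keeps every sequence's real tokens adjacent to the generated ones, so a batch of unequal-length prompts can be decoded in lockstep. The sketch below illustrates the idea in plain Python; it is not KerasHub's implementation (the library's actual padding lives in its tokenizer/preprocessor layers), and `pad_batch` is a hypothetical helper name.

```python
# Illustrative sketch of left vs. right padding for batched LLM inference.
# `pad_batch` is a hypothetical helper, not a KerasHub API.

PAD_ID = 0  # assumed padding token id


def pad_batch(sequences, pad_id=PAD_ID, side="right"):
    """Pad variable-length token-id lists into a rectangular batch.

    side="right" appends padding after the tokens (common for training);
    side="left" prepends it, keeping prompts flush against the position
    where generation will append new tokens.
    """
    max_len = max(len(seq) for seq in sequences)
    padded = []
    for seq in sequences:
        pad = [pad_id] * (max_len - len(seq))
        padded.append(pad + seq if side == "left" else seq + pad)
    return padded


batch = [[5, 6, 7], [8, 9]]
print(pad_batch(batch, side="right"))  # [[5, 6, 7], [8, 9, 0]]
print(pad_batch(batch, side="left"))   # [[5, 6, 7], [0, 8, 9]]
```

With left padding, the last column of the batch always holds each sequence's final real token, which is exactly what the next generation step conditions on.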

What's Changed

Complete list of all the changes included in this release.

New Contributors

Full Changelog: v0.21.1...v0.22.0

For detailed documentation and usage examples/guides, please refer to our updated guides at https://keras.io/keras_hub/