End-to-end fine-tuning of Hugging Face models using LoRA, QLoRA, quantization, and PEFT techniques, optimized for low-memory training and efficient model deployment. A minimal setup sketch is shown below.
Topics: nlp, machine-learning, natural-language-processing, deep-learning, transformers, pytorch, lora, quantization, model-training, fine-tuning, peft, gpu-optimization, huggingface, gradient-checkpointing, huggingface-datasets, qlora, bitsandbytes, parameter-efficient-fine-tuning, low-memory-training, fp16-training
Language: Jupyter Notebook · Updated Jun 19, 2025
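The following Python sketch illustrates the kind of QLoRA setup this repository describes: a 4-bit quantized base model loaded with bitsandbytes, a LoRA adapter attached via PEFT, gradient checkpointing, and fp16 training arguments. The base model name, target modules, and hyperparameters are illustrative placeholders, not taken from the repository's notebooks.

```python
# Sketch of a low-memory QLoRA fine-tuning setup (assumed configuration,
# not the repository's exact notebook code).
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the base model in 4-bit NF4 with fp16 compute (QLoRA-style quantization).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Enable gradient checkpointing and prepare the quantized model for training.
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

# Attach a small LoRA adapter so only a fraction of parameters are trainable.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# fp16 mixed-precision training arguments (values are illustrative).
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    num_train_epochs=1,
    logging_steps=10,
)
```

Training would then proceed with a standard `Trainer` (or an SFT-style trainer) over a tokenized dataset; because only the LoRA adapter weights receive gradients and the base model stays quantized, memory usage remains low enough for a single consumer GPU.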