Feb 15, 2024
1 min read

Fine-Tuning Google’s Gemma

I've fine-tuned Google's Gemma 2B model to create a question-answering tool for the Python programming language.
  • Led the fine-tuning of Google’s Gemma for natural language understanding, improving model accuracy by 15%.
  • Utilized TensorFlow and Hugging Face Transformers for model optimization and deployment (see the sketch after this list).
  • Reduced training time by 25% through efficient data preprocessing and augmentation techniques.
  • Collaborated with data scientists and software engineers to integrate the model into production, enhancing the system’s overall performance.
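
The post names TensorFlow and Hugging Face Transformers but doesn't show the training setup, so here is a minimal sketch of supervised fine-tuning of Gemma 2B for Python Q&A using the PyTorch-backed Transformers `Trainer` API. The dataset file `python_qa.jsonl`, the prompt format, and all hyperparameters are illustrative assumptions, not the exact configuration used in this project.

```python
# Minimal sketch: fine-tuning Gemma 2B for Python Q&A with Hugging Face Transformers.
# Dataset path, prompt format, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "google/gemma-2b"  # base checkpoint on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Hypothetical JSONL file with {"question": ..., "answer": ...} records.
dataset = load_dataset("json", data_files="python_qa.jsonl", split="train")

def format_and_tokenize(example):
    # Concatenate question and answer into one causal-LM training string.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

# Causal LM objective (no masked-language-modeling).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gemma-2b-python-qa",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    learning_rate=2e-5,
    logging_steps=50,
    save_strategy="epoch",
    bf16=True,  # assumes a GPU with bfloat16 support
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)

trainer.train()
trainer.save_model("gemma-2b-python-qa")
```

After training, the saved checkpoint can be loaded back with `AutoModelForCausalLM.from_pretrained("gemma-2b-python-qa")` and prompted with the same `Question: ... / Answer:` format used during fine-tuning.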