Posts by Tags

Form

Next-Gen Form Filling with Gemini 1.5 Pro

7 minute read

Published:

Filling out forms can be boring and time-consuming. This often leads to user frustration and incomplete submissions. However, conversational AI, like the Gemini 1.5 Pro language model, is changing how we interact with forms.
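
At its core, the post is about prompting Gemini 1.5 Pro to pull structured field values out of a free-form user message instead of making the user fill boxes one by one. A minimal sketch of that idea with the google-generativeai SDK (the field names, prompt, and example message below are illustrative, not the post's exact code):

```python
# Minimal sketch: ask Gemini 1.5 Pro to turn a conversational message into form fields.
# The field names, prompt wording, and example message are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

user_message = "Hi, I'm Sara Ahmed, you can reach me at sara@example.com, and I'd like the premium plan."
prompt = (
    "Extract the form fields `name`, `email`, and `plan` from the message below. "
    "Reply with JSON only, using null for anything not mentioned.\n\n" + user_message
)

response = model.generate_content(prompt)
print(response.text)  # e.g. {"name": "Sara Ahmed", "email": "sara@example.com", "plan": "premium"}
```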

Gemini

Next-Gen Form Filling with Gemini 1.5 Pro

7 minute read

Published:

Filling out forms can be boring and time-consuming. This often leads to user frustration and incomplete submissions. However, conversational AI, like the Gemini 1.5 Pro language model, is changing how we interact with forms.

Gemini-1.5-Pro

Next-Gen Form Filling with Gemini 1.5 Pro

7 minute read

Published:

Filling out forms can be boring and time-consuming. This often leads to user frustration and incomplete submissions. However, conversational AI, like the Gemini 1.5 Pro language model, is changing how we interact with forms.

Gemma

Retrieval Augmented Generation (RAG) using Gemma to Explain Basic Data Science Concepts

11 minute read

Published:

The world of data science can be daunting for newcomers, filled with complex terminology and intricate concepts. But what if you had an AI assistant by your side, ready to explain these concepts in simple terms and guide you through the learning process? This is where the power of Retrieval Augmented Generation (RAG) comes into play. In this blog post, we'll embark on a hands-on journey, building an AI-powered explainer for data science concepts using the RAG approach and the Gemma language model.
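
The core RAG loop the post builds (embed a small knowledge base, retrieve the most relevant note, then let Gemma explain it) can be sketched roughly as follows; the embedding model, example documents, and prompt are illustrative assumptions rather than the post's exact code:

```python
# Rough RAG sketch: retrieve the closest note, then have Gemma explain it in context.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Tiny in-memory "knowledge base" of data science notes (placeholders).
docs = [
    "Overfitting happens when a model memorises the training data and fails to generalise.",
    "A p-value measures how surprising the observed data would be if the null hypothesis were true.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_emb = embedder.encode(docs, convert_to_tensor=True)

question = "What is overfitting?"
q_emb = embedder.encode(question, convert_to_tensor=True)
best = int(util.cos_sim(q_emb, doc_emb).argmax())  # index of the most similar note

generator = pipeline("text-generation", model="google/gemma-1.1-2b-it")
prompt = f"Context: {docs[best]}\n\nExplain to a beginner: {question}"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```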

Gemma-1.1-2b-it

Retrieval Augmented Generation (RAG) using Gemma to Explain Basic Data Science Concepts

11 minute read

Published:

The world of data science can be daunting for newcomers, filled with complex terminology and intricate concepts. But what if you had an AI assistant by your side, ready to explain these concepts in simple terms and guide you through the learning process? This is where the power of Retrieval Augmented Generation (RAG) comes into play. In this blog post, we'll embark on a hands-on journey, building an AI-powered explainer for data science concepts using the RAG approach and the Gemma language model.

Gemma2

Fine-Tune Gemma 2 2b with Keras and LoRA (Part 3)

3 minute read

Published:

Continuing our educational series on handling Arabic with large language models, in this part we will explore how to fine-tune the Gemma2-9b model on an Arabic dataset using Keras, KerasNLP, and the LoRA technique. We will cover how to set up the environment, load the model, make the necessary modifications, and train it with model parallelism to distribute the model's parameters across multiple accelerators.
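
In rough outline, that recipe follows the standard KerasNLP pattern for distributed Gemma fine-tuning: shard the model across accelerators, enable LoRA so only small adapter weights are trained, then compile and fit. The preset name, mesh shape, and hyperparameters below are illustrative assumptions, not the post's exact code:

```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # the Keras 3 distribution API is assumed to run on the JAX backend here

import keras
import keras_nlp

# Shard model weights across the available accelerators (assumes 8 devices).
devices = keras.distribution.list_devices()
device_mesh = keras.distribution.DeviceMesh((1, 8), ["batch", "model"], devices=devices)
layout_map = keras_nlp.models.GemmaBackbone.get_layout_map(device_mesh)
# Recent Keras versions take only the layout map; older ones also expect the device mesh.
keras.distribution.set_distribution(
    keras.distribution.ModelParallel(layout_map=layout_map, batch_dim_name="batch")
)

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_9b_en")  # assumed preset name
gemma_lm.backbone.enable_lora(rank=4)            # train only the small LoRA adapters
gemma_lm.preprocessor.sequence_length = 512

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(arabic_dataset, epochs=1, batch_size=1)  # arabic_dataset: your prompt/response pairs
```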

Fine-Tune Gemma 2 2b Using Transformers and qLoRA (Part 2)

4 minute read

Published:

As part of our tutorial series on handling Arabic with large language models, in this second part we will explore specialized methods for fine-tuning the Gemma-2b model on an Arabic dataset to enhance its performance. We will use the Transformers library and the QLoRA (Quantized Low-Rank Adaptation) technique to reduce memory usage.
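
The QLoRA setup boils down to two steps: load the frozen base model in 4-bit precision (the "Q") and attach small trainable LoRA adapters on top. Here is a minimal sketch with Transformers, bitsandbytes, and PEFT; the checkpoint id, target modules, and hyperparameters are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2-2b"  # assumed checkpoint; the tutorial's may differ

# Load the base model in 4-bit to cut memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Attach small trainable LoRA adapters; the 4-bit base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters is trainable
```

From here, the adapted model can be trained on the Arabic dataset with a standard Hugging Face Trainer.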

Build a RAG Application Using Gemma 2 (Part 1)

3 minute read

Published:

When working on a project that requires handling the Arabic language, you might wonder whether to use Retrieval-Augmented Generation (RAG) or to fine-tune a model on a new Arabic dataset. In this two-part tutorial series, we will explore both options: using RAG and fine-tuning a model with Arabic data, specifically from Wikipedia. Throughout the project, we will focus on open-source models, using Gemma 2 Instruct and an open-source embedding model. We will also leverage the LangChain framework to streamline building the RAG application and fine-tuning the model. Let's dive into the practical implementation.
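
A minimal sketch of the RAG side of that pipeline, assuming a FAISS index, a multilingual open-source embedding model, and Gemma 2 Instruct served through a local Transformers pipeline (all illustrative choices, not necessarily the tutorial's exact stack):

```python
# Rough LangChain RAG sketch: index Arabic passages, retrieve, then answer with Gemma 2 Instruct.
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings, HuggingFacePipeline
from transformers import pipeline

# 1. Embed the Arabic Wikipedia passages and index them (placeholders below).
passages = ["...Arabic Wikipedia passage 1...", "...Arabic Wikipedia passage 2..."]
embeddings = HuggingFaceEmbeddings(model_name="intfloat/multilingual-e5-small")  # assumed model
store = FAISS.from_texts(passages, embeddings)

# 2. Wrap Gemma 2 Instruct as a LangChain LLM.
llm = HuggingFacePipeline(
    pipeline=pipeline("text-generation", model="google/gemma-2-9b-it", max_new_tokens=256)
)

# 3. Retrieve the closest passages, then generate an answer grounded in them.
question = "..."  # the user's question, e.g. in Arabic
context = "\n".join(doc.page_content for doc in store.similarity_search(question, k=2))
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```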

GenAI

Generative AI

Fine-Tune Gemma 2 2b with Keras and LoRA (Part 3)

3 minute read

Published:

Continuing our educational series on handling Arabic with large language models, in this part we will explore how to fine-tune the Gemma2-9b model on an Arabic dataset using Keras, KerasNLP, and the LoRA technique. We will cover how to set up the environment, load the model, make the necessary modifications, and train it with model parallelism to distribute the model's parameters across multiple accelerators.

Fine-Tune Gemma 2 2b Using Transformers and qLoRA (Part 2)

4 minute read

Published:

As part of our tutorial series on handling Arabic with large language models, in this second part we will explore specialized methods for fine-tuning the Gemma-2b model on an Arabic dataset to enhance its performance. We will use the Transformers library and the QLoRA (Quantized Low-Rank Adaptation) technique to reduce memory usage.

Build a RAG Application Using Gemma 2 (Part 1)

3 minute read

Published:

When working on a project that requires handling the Arabic language, you might wonder whether to use Retrieval-Augmented Generation (RAG) or to fine-tune a model on a new Arabic dataset. In this two-part tutorial series, we will explore both options: using RAG and fine-tuning a model with Arabic data, specifically from Wikipedia. Throughout the project, we will focus on open-source models, using Gemma 2 Instruct and an open-source embedding model. We will also leverage the LangChain framework to streamline building the RAG application and fine-tuning the model. Let's dive into the practical implementation.

Next-Gen Form Filling with Gemini 1.5 Pro

7 minute read

Published:

Filling out forms can be boring and time-consuming. This often leads to user frustration and incomplete submissions. However, conversational AI, like the Gemini 1.5 Pro language model, is changing how we interact with forms.

Retrieval Augmented Generation (RAG) using Gemma to Explain Basic Data Science Concepts

11 minute read

Published:

The world of data science can be daunting for newcomers, filled with complex terminology and intricate concepts. But what if you had an AI assistant by your side, ready to explain these concepts in simple terms and guide you through the learning process? This is where the power of Retrieval Augmented Generation (RAG) comes into play. In this blog post, we'll embark on a hands-on journey, building an AI-powered explainer for data science concepts using the RAG approach and the Gemma language model.

LLM

Fine-Tune Gemma 2 2b with Keras and LoRA (Part 3)

3 minute read

Published:

Continuing our educational series on handling Arabic with large language models, in this part we will explore how to fine-tune the Gemma2-9b model on an Arabic dataset using Keras, KerasNLP, and the LoRA technique. We will cover how to set up the environment, load the model, make the necessary modifications, and train it with model parallelism to distribute the model's parameters across multiple accelerators.

Fine-Tune Gemma 2 2b Using Transformers and qLoRA (Part 2)

4 minute read

Published:

As part of our tutorial series on handling Arabic with large language models, in this second part we will explore specialized methods for fine-tuning the Gemma-2b model on an Arabic dataset to enhance its performance. We will use the Transformers library and the QLoRA (Quantized Low-Rank Adaptation) technique to reduce memory usage.

Build a RAG Application Using Gemma 2 (Part 1)

3 minute read

Published:

When working on a project that requires handling the Arabic language, you might wonder whether to use Retrieval-Augmented Generation (RAG) or to fine-tune a model on a new Arabic dataset. In this two-part tutorial series, we will explore both options: using RAG and fine-tuning a model with Arabic data, specifically from Wikipedia. Throughout the project, we will focus on open-source models, using Gemma 2 Instruct and an open-source embedding model. We will also leverage the LangChain framework to streamline building the RAG application and fine-tuning the model. Let's dive into the practical implementation.

Next-Gen Form Filling with Gemini 1.5 Pro

7 minute read

Published:

Filling out forms can be boring and time-consuming. This often leads to user frustration and incomplete submissions. However, conversational AI, like the Gemini 1.5 Pro language model, is changing how we interact with forms.

Retrieval Augmented Generation (RAG) using Gemma to Explain Basic Data Science Concepts

11 minute read

Published:

The world of data science can be daunting for newcomers, filled with complex terminology and intricate concepts. But what if you had an AI assistant by your side, ready to explain these concepts in simple terms and guide you through the learning process? This is where the power of Retrieval Augmented Generation (RAG) comes into play. In this blog post, we'll embark on a hands-on journey, building an AI-powered explainer for data science concepts using the RAG approach and the Gemma language model.

MoE

Papers

RAG

Retrieval Augmented Generation (RAG) using Gemma to Explain Basic Data Science Concepts

11 minute read

Published:

The world of data science can be daunting for newcomers, filled with complex terminology and intricate concepts. But what if you had an AI assistant by your side, ready to explain these concepts in simple terms and guide you through the learning process? This is where the power of Retrieval Augmented Generation (RAG) comes into play. In this blog post, we'll embark on a hands-on journey, building an AI-powered explainer for data science concepts using the RAG approach and the Gemma language model.