Exploring Gen AI — A Guide to Model Parameters for Better Content Creation (Part 1)
We are already familiar with Firebase Genkit and how to create a basic Firebase Genkit app:
Build your First AI application with Firebase Genkit 🚀 (Part 1)
https://shorturl.at/891Js
Build your First AI application with Firebase Genkit 🚀 (Part 2)
https://shorturl.at/zPMCf
But to leverage more of its power, we need to understand how to generate content from AI models.
The heart of any generative AI application is the model. The most prominent examples of generative models are LLMs and image generation models. These models take an input called a prompt (most commonly text, an image, or a combination of both) and from it produce output: text, an image, or even audio or video.
The important thing to notice is that LLMs generate text that appears as though it could have been written by a human being, and image generation models can produce images that are very close to real photographs or artwork created by humans.
Common Workflow
- As an app developer building anything with Gen AI, you typically don’t interact with generative AI models directly, but rather through services available as web APIs.
- Although these services often have similar functionality, they all provide them through different and incompatible APIs.
- If you want to make use of multiple model services, you have to use each of their proprietary SDKs, potentially incompatible with each other.
- If you want to upgrade from one model to the newest and most capable one, you might have to build that integration all over again.
Doesn’t it seem like a headache? 😖😣😩😫
No worries! 🥳
Genkit addresses this challenge by providing a single interface that abstracts away the details of accessing potentially any generative AI model service, with several pre-built implementations already available.
Building your AI-powered app around Genkit simplifies the process of making your first generative AI call and makes it equally easy to combine multiple models or swap one model for another as new models emerge.
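To see why such an abstraction helps, here is a minimal, hypothetical sketch of the idea behind a unified model interface. The provider classes and their responses below are invented purely for illustration; this is not Genkit’s actual internal design:

```typescript
// Hypothetical sketch of a unified model interface.
// The provider names and response shapes are invented for illustration;
// they are NOT Genkit's real internals.

interface TextModel {
  generate(prompt: string): Promise<string>;
}

// Imagine each vendor SDK exposes a different, incompatible API;
// a thin adapter class hides those differences behind TextModel.
class ProviderA implements TextModel {
  async generate(prompt: string): Promise<string> {
    return `A says: ${prompt}`; // a real adapter would call provider A's SDK here
  }
}

class ProviderB implements TextModel {
  async generate(prompt: string): Promise<string> {
    return `B says: ${prompt}`; // a real adapter would call provider B's SDK here
  }
}

// App code depends only on TextModel, so swapping models is one line.
async function run(model: TextModel): Promise<string> {
  return model.generate('Hello!');
}

run(new ProviderA()).then(console.log); // prints "A says: Hello!"
run(new ProviderB()).then(console.log); // prints "B says: Hello!"
```

Because the app depends only on the shared interface, switching to a newer model means changing which adapter you pass in, not rewriting the integration.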
When generating output with an AI model, what output you get and how it turns out in terms of length and creativity is controlled by model parameters.
These fall into two groups:
- Parameters that control length
- Parameters that control creativity
Parameters that control length
- maxOutputTokens
- Tokens are the basic building blocks that LLMs use to process and generate text.
- When you hit Generate after giving a prompt, the first step is tokenization — your prompt string is split into a sequence of tokens.
- This parameter sets a limit on how many tokens the model is allowed to generate in its response.
- Think of it as setting a word limit, but instead of words, you’re setting a token limit.
- Example Prompt: “Explain photosynthesis.”
Output: “Plants convert sunlight.” (with maxOutputTokens=3 )
- Example Prompt: “Explain photosynthesis.”
Output: “Photosynthesis is the process by which plants convert sunlight into energy, using carbon dioxide and water to produce oxygen and glucose.” (with maxOutputTokens=50)
- stopSequences
- A stop sequence is a specific sequence of characters that tells the model, “Stop generating text when you reach this point.”
- You give the model a prompt to generate text and specify a sequence of characters that should end the response.
- As soon as the model generates that sequence, it stops outputting text, even if it hasn’t reached the maxOutputTokens limit.
- Simple example: imagine you’re dictating a message, but someone tells you, “Stop speaking as soon as you say the word ‘done.’”
- You might start saying, “I went to the store and bought some groceries, done.” The moment you say “done,” you stop talking.
- Similarly, the LLM stops generating output when it hits the stop sequence you define.
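The semantics of these two parameters can be simulated with plain string logic. The sketch below is only an illustration of the behavior — it treats each word as one token, which real subword tokenizers do not, and it trims text at the stop sequence the way most APIs effectively do:

```typescript
// Illustrative simulation of maxOutputTokens and stopSequences semantics.
// Real models operate on subword tokens; here each word stands in for a token.

function applyOutputLimits(
  generated: string,
  maxOutputTokens: number,
  stopSequences: string[] = []
): string {
  // 1. Truncate at the first stop sequence, if the model emits one.
  let text = generated;
  for (const stop of stopSequences) {
    const i = text.indexOf(stop);
    if (i !== -1) text = text.slice(0, i);
  }
  // 2. Enforce the token limit (words as stand-in tokens).
  const tokens = text.trim().split(/\s+/);
  return tokens.slice(0, maxOutputTokens).join(' ');
}

const reply = 'Plants convert sunlight into energy. Thank you. Anything else?';

console.log(applyOutputLimits(reply, 3));
// → "Plants convert sunlight"
console.log(applyOutputLimits(reply, 50, ['Thank you.']));
// → "Plants convert sunlight into energy."
```

Note how the stop sequence ends the output well before the 50-token limit is reached, mirroring the dictation example above.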
We already have this code with us:
// import the Genkit and Google AI plugin libraries
import { gemini15Flash, googleAI } from '@genkit-ai/googleai';
import { genkit } from 'genkit';
import dotenv from 'dotenv';

// Load the API key from environment variables
dotenv.config();

// configure a Genkit instance
const ai = genkit({
  plugins: [googleAI({ apiKey: process.env.GOOGLE_GENAI_API_KEY })], // Pass the API key
  model: gemini15Flash, // Set the model
});

(async () => {
  // make a generation request
  const { text } = await ai.generate('Hello, Gemini!');
  console.log(text);
})();
To adjust the parameters, update the code as shown below:
// import the Genkit and Google AI plugin libraries
import { gemini15Flash, googleAI } from '@genkit-ai/googleai';
import { genkit } from 'genkit';
import dotenv from 'dotenv';

// Load the API key from environment variables
dotenv.config();

// configure a Genkit instance
const ai = genkit({
  plugins: [googleAI({ apiKey: process.env.GOOGLE_GENAI_API_KEY })], // Pass the API key
  model: gemini15Flash, // Set the model
});

(async () => {
  // make a generation request
  const { text } = await ai.generate({
    prompt: 'Explain photosynthesis.',
    config: {
      maxOutputTokens: 50,
      stopSequences: ['Thank you.'], // Stops when the model generates "Thank you."
    },
  });
  console.log(text);
})();
What’s Next?
Parameters controlling the creativity of the model during content creation.
Stay tuned and keep watch for Part 2. Happy learning!
Feel free to clap 👏 and keep me motivated 👏 to come up with the next parts.