Ratnesh Verma • 24 days ago
Add Groq as Free Model Option in PO || Groq LPU vs GitHub/Gemini Free Models
Quick observation that might help a lot of participants hitting rate-limit walls. PO currently offers two free model providers:
GitHub Models: GPT-4.1, but it hits rate limits fast during active testing
Google AI Studio: Gemini free tier, only 20 requests/day on flash models, exhausted within one testing session
Both are great for getting started but fall apart quickly when you're actually building and testing agents seriously.
The problem:
When you're debugging an MCP agent, you can easily make 20-30 test calls in a session. The GitHub free tier and the Gemini free tier both hit their limits within an hour of real work. Then you're stuck waiting or paying.
Suggestion — add Groq:
Groq offers completely free API access with much more generous limits:
llama-3.3-70b-versatile → 100K tokens/day, 1K requests/day
llama-4-scout-17b → 500K tokens/day, 1K requests/day
No credit card required
Sign up at console.groq.com in 2 minutes
For reference, GitHub Models gives 8K tokens/min and Gemini gives 20 requests/day free. Groq gives 100K-500K tokens per day free. That's a completely different ballpark for hackathon development.
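For anyone who wants to try Groq in the meantime, here's a minimal stdlib-only sketch. It assumes Groq's OpenAI-compatible chat-completions endpoint and the model name quoted above, and reads the API key from a `GROQ_API_KEY` environment variable (all assumptions on my part, so double-check against the Groq docs):

```python
import json
import os
import urllib.request

# Assumed endpoint: Groq exposes an OpenAI-compatible chat completions API.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_groq_request(prompt: str,
                       model: str = "llama-3.3-70b-versatile") -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for Groq."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # GROQ_API_KEY comes from your console.groq.com account.
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Uncomment to actually send (requires GROQ_API_KEY to be set):
# with urllib.request.urlopen(build_groq_request("ping")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the same request shape should work if PO later adds Groq behind its existing OpenAI-style client code.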
Could PO consider adding Groq as a third free model option in Configuration → Models? It would significantly improve the developer experience for participants who don't want to spend money during the hackathon.

3 comments
Pawan Jindal Manager • 24 days ago
Hi Ratnesh - thank you so much for the feedback. We will definitely look into adding this. I will share an update here early next week after reviewing with the team.
Pawan Jindal Manager • 22 days ago
Hello Ratnesh - we had a team discussion on this. Unfortunately, we will not be able to support this for the hackathon. As there is no official .NET SDK, we will need to create our own, which will require more time. We will definitely be looking to support this in the future.
Ratnesh Verma • 21 days ago
Hi Pawan, thanks for looking into it and sharing the update! Totally understand, no worries at all. Good to know it's on the roadmap for the future! Thanks again for the quick response.