Use Llama with Groq in Retool to get fast responses

January 1, 2000

Groq is fast

It uses a different inference approach to run large language models, and its response times are an order of magnitude lower than those of the usual providers.

We believe that response time will play a crucial role in shaping new types of interaction between humans and software.

For this reason, we continuously experiment with the available LLM alternatives to find the right balance between quality of results and performance.

Setting up the resource

In Retool, you need to create a new REST API resource:

configure the base URL,

then add the Groq API token in the Authorization header.
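For reference, Groq exposes an OpenAI-compatible endpoint, so the configuration comes down to two values; a sketch (fill in your own key):

```
Base URL:  https://api.groq.com/openai/v1
Header:    Authorization: Bearer <your Groq API key>
```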

Preparing the body
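A minimal sketch of the JSON body, following the OpenAI chat-completions format that Groq implements; the model id below is only an example, use whichever Llama variant is available to you on Groq:

```ts
// JSON body for POST /chat/completions, written here as a TypeScript
// object literal. "llama-3.1-8b-instant" is an example model id.
const body = {
  "model": "llama-3.1-8b-instant",
  "temperature": 0.2,
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Say hello in one short sentence." }
  ]
};
```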

Testing if it works
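One quick way to check the setup is to replay the same request outside Retool, for example with a small Node script; GROQ_API_KEY and the model id are placeholders:

```ts
// Sanity check: same URL, header and body the Retool resource uses.
// Requires Node 18+ for the built-in fetch; GROQ_API_KEY is a placeholder env var.
const apiKey = process.env.GROQ_API_KEY;

const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama-3.1-8b-instant", // example model id
    messages: [{ role: "user", content: "Reply with the single word: ok" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content); // should print something like "ok"
```

Inside Retool, running the query from the query editor and inspecting the returned JSON gives you the same confirmation.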

A simple example:

A real-time text corrector
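As a sketch of how the corrector could be wired up: the REST query's body constrains the model to correction-only behaviour, and the user message is bound to a text input through Retool's template syntax (textInput1 and query1 are placeholder component and query names):

```ts
// Body of the Retool REST query behind the corrector.
// {{ textInput1.value }} is Retool's binding syntax; textInput1 is a placeholder.
const correctorBody = {
  "model": "llama-3.1-8b-instant",   // example model id
  "temperature": 0,                  // keep corrections deterministic
  "messages": [
    {
      "role": "system",
      "content": "Correct the spelling and grammar of the user's text. Return only the corrected text, nothing else."
    },
    { "role": "user", "content": "{{ textInput1.value }}" }
  ]
};
```

With the query set to run as the input changes (a short debounce helps), a text component bound to {{ query1.data.choices[0].message.content }} shows the corrected version almost immediately, which is where Groq's latency really pays off.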
