Language Processing Units™


Proprietary large language models (LLMs) such as GPT-4 and open-source models such as Llama 2 are trained on parallel processors such as Nvidia's graphics processing units (GPUs). Similar processing architectures are used during inference. Here, inference refers to the stage at which a user prompts the trained model to generate text.

While parallel processors used in training are well suited to optimizing billions or even trillions of parameters, a different architecture is required for fast inference. We have all experienced the slow, word-by-word responses of OpenAI's ChatGPT, Microsoft's Copilot, and now Google's Gemini interfaces.

Along comes the company Groq with its Language Processing Unit (LPU). Groq claims to make inference 10-100 times faster. Try this for yourself at the Groq home page. At the time of writing, Groq allows testers on its site to choose between Mixtral 8x7B-32K and Llama 2 70B-4K.
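
To get a rough sense of that speed-up, the sketch below times a single completion against Groq's OpenAI-compatible API and reports tokens per second. It is a minimal illustration, not an official benchmark: it assumes the openai Python package (v1+), a GROQ_API_KEY environment variable, and the endpoint and model name available at the time of writing, so check Groq's current documentation before running it.

```python
# Minimal sketch: time one completion against Groq's OpenAI-compatible endpoint.
# Assumes the `openai` Python package (v1+) and a GROQ_API_KEY environment
# variable; the base URL and model name below may change over time.
import os
import time

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
)

start = time.perf_counter()
response = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # one of the models offered at the time of writing
    messages=[{"role": "user", "content": "Explain what an LPU is in two sentences."}],
)
elapsed = time.perf_counter() - start

tokens = response.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.2f}s, roughly {tokens / elapsed:.0f} tokens/s")
```

The same script pointed at another provider's endpoint gives a simple side-by-side comparison of generation speed.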

A quicker response time greatly enhances the usability of an LLM: the interaction feels more natural and conversational. This CNN video gives us a quick glimpse.

We will certainly see faster inference in the future. Perhaps we will even see such chips embedded in our own computers. LLMs are very large, though, so we will also need bigger storage and more memory.