Plugin directory

The following plugins are available for LLM. They are installed with the llm install command; see the documentation on installing plugins for full instructions.
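A minimal sketch of the install workflow, driving the CLI from Python via subprocess (llm-gpt4all is just an example; substitute any plugin from this page):

```python
# A minimal sketch: installing a plugin, then listing what is installed.
import subprocess

# `llm install` pip-installs the plugin into LLM's own environment
subprocess.run(["llm", "install", "llm-gpt4all"], check=True)

# `llm plugins` prints the currently installed plugins as JSON
subprocess.run(["llm", "plugins"], check=True)
```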

Local models

These plugins all help you run LLMs directly on your own computer (a usage sketch follows the list):

  • llm-mlc can run local models released by the MLC project, including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.

  • llm-llama-cpp uses llama.cpp to run models published in the GGML format.

  • llm-gpt4all adds support for various models released by the GPT4All project that are optimized to run locally on your own machine. These models include versions of Vicuna, Orca, Falcon, and MPT; the GPT4All website maintains the full list of supported models.

  • llm-mpt30b adds support for the MPT-30B local model.
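Once one of these plugins is installed, its models become available to both the CLI and the Python API. A minimal sketch using the Python API; the model ID below is an assumption (run llm models to see the IDs your installed plugins actually register):

```python
# A minimal sketch, assuming llm-gpt4all is installed.
# "orca-mini-3b" is an assumed model ID; `llm models` lists the real ones.
import llm

model = llm.get_model("orca-mini-3b")
response = model.prompt("Name three uses for a llama")
print(response.text())  # the model runs locally; no API key is required
```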

Remote APIs

These plugins can be used to interact with remotely hosted models via their APIs.

If a model host provides an OpenAI-compatible API, you can also configure LLM to talk to it without needing an extra plugin; a sketch of that configuration follows.
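One way to do this is by adding an entry to the extra-openai-models.yaml file in LLM's configuration directory. This is a hedged sketch: the model_id, model_name, and api_base values are illustrative assumptions for a hypothetical locally hosted server.

```yaml
# extra-openai-models.yaml — registers an OpenAI-compatible endpoint with LLM.
# All values below are illustrative assumptions.
- model_id: my-local-model              # the name you will use with: llm -m my-local-model
  model_name: mistral-7b-instruct       # the model name the remote API expects
  api_base: "http://localhost:8080/v1"  # base URL of the OpenAI-compatible server
```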

Embedding models

Embedding models generate embedding vectors for text, which can then be stored and searched.
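A minimal sketch of generating an embedding through the Python API. "ada-002" is the alias registered by LLM's default OpenAI support; embedding plugins register additional model IDs (run llm embed-models to see them):

```python
# A minimal sketch: turning a string into an embedding vector.
# "ada-002" assumes OpenAI support with a configured API key;
# embedding plugins register additional model IDs.
import llm

model = llm.get_embedding_model("ada-002")
vector = model.embed("A short sentence to embed")
print(len(vector), vector[:3])  # dimensionality and the first few floats
```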

Extra commands

  • llm-python adds a llm python command for running a Python interpreter in the same virtual environment as LLM. This is useful for debugging, and it also provides a convenient way to interact with the LLM Python API if you installed LLM using Homebrew or pipx (see the sketch after this list).

  • llm-cluster adds a llm cluster command for calculating clusters for a collection of embeddings. Calculated clusters can then be passed to a Large Language Model to generate a summary description.
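As an illustration of the llm python workflow mentioned above, here is the kind of session it enables. Because the interpreter runs inside LLM's own virtual environment, the llm package imports cleanly even for Homebrew or pipx installs; the model ID below is an assumption:

```python
# A sketch of a session inside `llm python`: the llm package is importable
# because the interpreter shares LLM's virtual environment.
import llm

print(llm.__version__)  # confirm which LLM installation this is
model = llm.get_model("gpt-3.5-turbo")  # assumes an OpenAI API key is configured
print(model.prompt("Say hello").text())
```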

Just for fun