LLMForwarder

LLMForwarder is a Python package for forwarding chat requests to OpenAI-compatible API endpoints. It allows users to customize request handling through an external function, making it ideal for injecting custom context or pre-processing user messages before they are forwarded to the model API.
Primary use case
Adding private context to existing chat applications, for example while using continue (https://www.continue.dev/) with either an external or a local LLM and injecting extra context for the code. The context injection is handled by an external Python function that the user provides. This can be the injection of static text or of dynamic text based on the chat, such as extending the original tool with RAG capabilities on private data.
Sample of static text injection
def dummy_rag(messages):
    # Static context that gets appended to the most recent user message.
    static_context = "use the following context to answer the question.\nContext:\nThe code is 556677\n"
    # Walk the chat history backwards to find the last user message.
    for i in range(len(messages) - 1, -1, -1):
        if messages[i]['role'] == 'user':
            messages[i]['content'] = messages[i]['content'] + static_context
            break
    return messages
Note that the injected function receives the whole chat; it is up to the user to process it. In the example above we append additional static text to the last user message.
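The same hook can also build the context dynamically from the chat. A minimal sketch, assuming a hypothetical retrieve_snippets helper that stands in for a real retrieval backend (vector store, keyword index, etc.) over your private data:

def rag_inject(messages):
    # Walk the chat history backwards to find the most recent user message.
    for i in range(len(messages) - 1, -1, -1):
        if messages[i]['role'] == 'user':
            # retrieve_snippets is a hypothetical helper; plug in any real
            # retrieval backend that returns a list of relevant text snippets.
            snippets = retrieve_snippets(messages[i]['content'])
            context = "\nuse the following context to answer the question.\nContext:\n" + "\n".join(snippets) + "\n"
            messages[i]['content'] = messages[i]['content'] + context
            break
    return messages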
Usage of the service
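A minimal sketch of wiring the forwarder to the injection function above. The import path, class name, and parameter names here are assumptions for illustration; check the package documentation for the exact API:

from llmforwarder import LLMForwarder  # hypothetical import path

forwarder = LLMForwarder(
    openai_url="http://localhost:11434/v1",  # upstream OpenAI-compatible endpoint (assumed parameter)
    address="127.0.0.1",                     # address to listen on (assumed parameter)
    port=8000,                               # port to listen on (assumed parameter)
    prompt_handler=dummy_rag,                # the injection function defined above
)
forwarder.run()  # hypothetical entry point: start serving and forwarding requests

The chat application is then pointed at the forwarder's address instead of the original API endpoint.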
The LLM then successfully answers with the code, even though the original prompt did not contain that information.
Features
- Configurable server address and port
- OpenAI API compatible (see the client sketch after this list)
- Customizable prompt-handling function, allowing easy injection of context or prompt modifications
- Easy setup and usage for OpenAI-compatible models
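Because the forwarder exposes an OpenAI-compatible API, any standard OpenAI client can talk to it directly. For example, with the official openai Python client (the forwarder address and port follow the assumptions in the usage sketch above, and the model name is a placeholder for whatever the upstream endpoint serves):

from openai import OpenAI

# Point the client at the forwarder instead of the real API endpoint.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="dummy")

response = client.chat.completions.create(
    model="llama3",  # placeholder: use a model your upstream endpoint serves
    messages=[{"role": "user", "content": "What is the code?"}],
)
print(response.choices[0].message.content)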
Installation
To install LLMForwarder, use pip:
pip install llm-forwarder