Built-in local LLM support

Currently, you can use local LLMs via LM Studio or Ollama. While this works and is relatively straightforward, some people would prefer a more streamlined “batteries included” approach.
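
For context, the current route looks roughly like this (a minimal sketch, not the app’s actual code): you run Ollama yourself and the app talks to its local HTTP API, which listens on port 11434 by default. LM Studio exposes a similar local server. The model name and prompt below are just examples:

```python
# Minimal sketch: querying a locally running Ollama server.
# Assumes Ollama is installed and the model has already been pulled.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to Ollama's local /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response rather than a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Summarise this note in one sentence: ..."))
```

The friction is everything outside the code: installing Ollama, pulling a model, and keeping the server running, which is exactly what a built-in option would remove.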

The design would be that you download an optional extra model via the settings (we would recommend a specific model so it’s one click), and then it would “just work”.
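
A hypothetical sketch of what that could look like under the hood, assuming the app embedded llama.cpp via the llama-cpp-python bindings. The model URL, cache path, and model file are all illustrative placeholders, not anything the app actually ships:

```python
# Hypothetical "one-click download, then it just works" flow.
# MODEL_URL and the cache directory are placeholders for illustration.
from pathlib import Path
import urllib.request

from llama_cpp import Llama  # pip install llama-cpp-python

MODEL_URL = "https://example.com/models/recommended-3b-q4.gguf"  # placeholder
MODEL_PATH = Path.home() / ".myapp" / "models" / "recommended-3b-q4.gguf"

def ensure_model() -> Path:
    """One-click step: fetch the recommended model if it isn't cached yet."""
    if not MODEL_PATH.exists():
        MODEL_PATH.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    return MODEL_PATH

# After the one-time download, inference runs in-process:
# no external server to install or keep running.
llm = Llama(model_path=str(ensure_model()), n_ctx=2048)
result = llm("Summarise this note in one sentence: ...", max_tokens=64)
print(result["choices"][0]["text"])
```

The key design choice is that the download happens once, via settings, and everything after that is invisible to the user.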

The compromise is that it would take up rather a lot of your computer’s memory for a supposedly lightweight “simple” task.
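
For a rough sense of scale (a back-of-envelope estimate, not a figure from the post): even a small 3B-parameter model quantized to 4 bits needs about 3 × 10⁹ × 0.5 bytes ≈ 1.5 GB of RAM for the weights alone, before the context cache and runtime overhead.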

Status: In Review
Board: 💡 Feature Request
Date: 11 months ago
Author: joethephish
