Currently, the only way LLMs are used is to convert natural-language prompts into Terminal commands and then to summarise the output of those commands.
However, if we added support for arbitrary API calls (assuming the user is using their own API key), we could enable things like “transcribe this audio” via OpenAI’s Whisper API. Behind the scenes it would make a curl call, using the API key the user has already supplied in Substage’s settings.
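For instance, the curl call it issues behind the scenes might look something like the sketch below (the audio filename and the $OPENAI_API_KEY placeholder are illustrative; the endpoint and whisper-1 model name are OpenAI's standard transcription API):

# Sketch: send an audio file to OpenAI's transcription endpoint,
# authenticating with the key stored in Substage's settings
# ($OPENAI_API_KEY stands in for it here).
curl -s https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F model="whisper-1" \
  -F file="@recording.m4a"

The JSON response contains the transcript, which Substage could then summarise or hand back to the user, just as it already does with Terminal output.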
joethephish · Feature Request · In Review · 2 months ago