
LLM (conditional) conversation history awareness

This could be a bad idea, but how about LLM memory/history? That is, you could chain requests without having to specify everything each time. Pressing Enter would work as it does now, but something like Option+Enter / Ctrl+Enter would let you chain requests within the same 'chat'.

For example, suppose File A and File B exist:

- Selecting File A and requesting "make into MP4, 1080p", then selecting File B, pressing Enter, and requesting "this too" results in "No request provided".
- Selecting File A and requesting "make into MP4, 1080p", then selecting File B, pressing a dedicated hotkey (or a Continue button next to the Return button in the app), and requesting "this too" results in File B also being converted to a 1080p MP4 video.

I'm not sure how this would work with online models and API keys, but for local ones, the model could still be unloaded as it works right now, while the request history is briefly kept in a cache until you press Enter (which resets to the default). An additional option that defaults to keeping conversational data (unless a dedicated hotkey resets it to the default) could also be interesting.
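To make the proposed behavior concrete, here is a minimal sketch of the chaining logic described above. All names (`RequestSession`, `submit`, the `chain` flag) are hypothetical, not the app's actual API; it just models "plain Enter resets the cached history, the dedicated hotkey keeps it":

```python
# Hypothetical sketch of the proposed request chaining (assumed names/behavior).

class RequestSession:
    def __init__(self):
        self.history = []  # cached conversation turns kept between chained requests

    def submit(self, selected_file, prompt, chain=False):
        """chain=True models the dedicated hotkey / Continue button."""
        if not chain:
            self.history.clear()  # plain Enter: reset to default, start a fresh chat
        if not prompt and not self.history:
            return "No request provided"
        self.history.append((selected_file, prompt))
        # The full cached history would be sent along to the (re)loaded local model,
        # so "this too" can be resolved against the earlier "make into MP4, 1080p".
        return f"processing {selected_file} with {len(self.history)} turn(s) of context"

session = RequestSession()
session.submit("FileA.mov", "make into MP4, 1080p")           # normal request
session.submit("FileB.mov", "this too", chain=True)           # chained: context kept
```

In this sketch, submitting "this too" without `chain=True` would clear the cache first and the model would see no prior context, matching the "No request provided"-style failure described above.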

lionheo9 13 days ago