We propose a new approach to LLM usage that momentarily reconstructs the context.