Prompts
Brilliant's prompts are designed to work with our powerful templating system, making LLM interactions as complex as you need — or as simple as you want.
Every time you run a prompt, templated keys like {{myKey}} are substituted with the key's corresponding value. Special keys like {{input: Set a programming language to use}}, {{file:package.json}}, or {{arg: AddDebugLogs}} gather input dynamically from the environment or by prompting the user for input. In Brilliant, you have complete control over what information gets included in your LLM's context: it's simply the result of replacing template keys with their values.
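Conceptually, the substitution step works like a simple find-and-replace over the template. The sketch below is illustrative only (it is not Brilliant's actual implementation, and the key names are hypothetical): plain keys are looked up in a mapping, and unrecognized keys are left untouched, mirroring how special keys like {{file:...}} are resolved by their own handlers.

```python
import re

def render_template(template: str, values: dict) -> str:
    """Replace each {{key}} with its value from the mapping.
    Keys with no value (e.g. special keys handled elsewhere)
    are left untouched."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return values.get(key, match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = "Write a {{language}} function that {{task}}."
context = {"language": "Python", "task": "reverses a string"}
print(render_template(prompt, context))
# Write a Python function that reverses a string.
```

The final string, with every key replaced, is exactly what is sent as the LLM's context.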
To create a prompt, click Create new prompt within Brilliant's Prompts tab.
Prompt Information
- Title: The name of your prompt as it will appear in Brilliant's Prompts tab.
- Category: The location within your prompt list where your new prompt will be saved. Categories are used to organize and arrange your existing prompts. Select an existing category via the dropdown menu or create a new category via the New Category option.
- Description: A description of what your new prompt does or what its purpose is. The description will appear at the top of the prompt overview when hovering over the prompt's edit button.
Prompt Templates
Brilliant allows you to directly set user and system level prompts which are automatically templatized and fed into your specified LLM. The system prompt should define how you want the model to behave throughout the interaction, while the user prompt specifies the tasks you want to complete in your request. User prompts may be overridden by subsequent commands given within Brilliant's conversation window, while system prompts will remain consistent throughout the conversation.
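For example, a prompt definition might pair the two roles like this (an illustrative sketch, not a prescribed format):

```
System prompt: You are a senior Python reviewer. Respond only with
labelled code blocks and keep explanations brief.

User prompt: Refactor the following function for readability:
{{input: Paste the function to refactor}}
```

Here the system prompt fixes the model's behavior for the whole conversation, while the user prompt carries the specific task and can be refined by later messages.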
Actions and Workflows
Brilliant Actions define the way in which your LLM requests interact with your codebase. By linking together multiple actions within your prompts, you can create robust agentic workflows driven by a single click. To add an action to your prompt, click the Add New Insertion button.
Mapping Actions to Code Blocks
Within Brilliant, all LLM code responses are returned as individual code blocks. In addition to organizing your LLM response output, each code block can be mapped to one or more actions to be performed by Brilliant via the Prompt Response Block Number field. While this mapping is straightforward for simple prompts that return a single code block and perform a single action, for multi-block responses we recommend explicitly specifying each desired code block within your prompt template.
For example, if you wanted to create a workflow that generates a code file and then runs it, you might include the following in your prompt:
Your response should include two separate, labelled code blocks:
Code Block 1: The function code.
Code Block 2: A terminal command that runs the generated code.
With the Write to File and Run in Terminal insertion methods' Prompt Response Block Number set to 1 and 2 respectively, Brilliant will create a new file, populate it with the LLM's code output, and then run the generated command in your terminal.
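The two-block workflow above can be pictured as a small dispatch from block numbers to actions. This is a hypothetical sketch of the idea, not Brilliant's internals; the function names and file paths are illustrative:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# A throwaway working directory standing in for your workspace.
workdir = Path(tempfile.mkdtemp())

def write_to_file(code: str, filename: str) -> None:
    """Stand-in for Write to File: save a response block to disk."""
    (workdir / filename).write_text(code)

def run_in_terminal(command: str) -> str:
    """Stand-in for Run in Terminal: execute a response block as a
    shell command and capture its output."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, cwd=workdir)
    return result.stdout

# The two labelled code blocks the prompt asked the LLM to return.
response_blocks = {1: "print('hello from generated code')",
                   2: f'"{sys.executable}" generated.py'}

write_to_file(response_blocks[1], "generated.py")   # block 1 -> Write to File
output = run_in_terminal(response_blocks[2])        # block 2 -> Run in Terminal
print(output.strip())
# hello from generated code
```

In Brilliant itself this mapping is configured through the Prompt Response Block Number field rather than written by hand.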
Insertion Methods
- Replace Selection: Your prompt's output automatically replaces the highlighted section of your codebase.
- Insert Before Selection: Your prompt's output is added immediately before the highlighted section of your codebase.
- Insert After Selection: Your prompt's output is added immediately after the highlighted section of your codebase.
- Insert in New Tmp File: Your prompt's output is written to a virtual file within the VS Code editor window. It will not generate a backing file.
- Run in Terminal: Runs your prompt’s output as a terminal command in your current or a new terminal. This can be an especially useful tool in developing agentic workflows by automatically executing scripts, installing dependencies, running previously generated code, etc.
- Write to File: Your prompt's output is written to a new file in your specified working directory.
- Use in New Prompt: Your prompt's output is included and run in a subsequent prompt.
- Set Contents to a Workflow Arg: Sets a value for future {{arg:ArgName}} substitutions within your workflow. You may use it to provide parameters for subsequent actions whose values aren't known until after the LLM responds.
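The Set Contents to a Workflow Arg method behaves like a deferred variable assignment: an LLM response block is stored under a name, and later actions read it back via {{arg:ArgName}}. A rough sketch of the idea (hypothetical names, not Brilliant's API):

```python
import re

# Workflow args start empty and are filled in as actions run.
workflow_args: dict = {}

def set_workflow_arg(name: str, llm_output: str) -> None:
    """Mimics 'Set Contents to a Workflow Arg': store a response
    block for later {{arg:name}} substitutions."""
    workflow_args[name] = llm_output

def resolve_args(template: str) -> str:
    """Replace {{arg:Name}} keys with previously stored values;
    unknown args are left untouched."""
    return re.sub(r"\{\{arg:(\w+)\}\}",
                  lambda m: workflow_args.get(m.group(1), m.group(0)),
                  template)

# The LLM decides the file name; a later action uses it.
set_workflow_arg("OutputFileName", "report_generator.py")
print(resolve_args("python {{arg:OutputFileName}}"))
# python report_generator.py
```

This is what makes it possible to parameterize later actions with values the LLM only produces at run time.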
Conversations
Upon running any prompt, Brilliant will open a conversation. You can continue to iterate on the prompt's initial request via the Say something box. Your entire conversation will automatically save and can be located in the Prompts tab when no active conversations are open.
ContextData
ContextData allows you to reference specific instructions, settings, parameters, or data by mapping a template key like {{myContextDataKey}} to a static value like "My ContextData Value". Any newly run prompt referencing {{myContextDataKey}} will have that key substituted with "My ContextData Value".
ContextData can be used to modify a prompt's inputs or settings without updating the prompt definition itself, to share context across prompts without copying it everywhere, or simply to make it easier to pass large amounts of data into prompts by referencing {{myContextDataKey}} in a prompt's input. ContextData elements can even be chained together to implement branching or other complex logic: for example, {{myBranchedCondition{{myBranchingKey}}Key}} resolves to {{myBranchedConditionBranchAKey}} or {{myBranchedConditionBranchBKey}}, each with a distinct value.
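One way to picture the chaining behavior is repeated substitution until no keys change: the inner key is resolved first, which produces a new outer key to resolve on the next pass. This is an illustrative sketch with hypothetical keys, not Brilliant's actual resolver:

```python
import re

def resolve(template: str, data: dict, max_passes: int = 10) -> str:
    """Repeatedly substitute innermost {{key}} references, so a key
    produced by one pass (e.g. {{myBranchedConditionBranchAKey}})
    can itself be resolved on the next pass."""
    pattern = re.compile(r"\{\{(\w+)\}\}")
    for _ in range(max_passes):
        new = pattern.sub(lambda m: data.get(m.group(1), m.group(0)),
                          template)
        if new == template:      # nothing left to substitute
            break
        template = new
    return template

data = {
    "myBranchingKey": "BranchA",
    "myBranchedConditionBranchAKey": "Use verbose logging.",
}
# Inner pass:  {{myBranchingKey}} -> BranchA
# Outer pass:  {{myBranchedConditionBranchAKey}} -> final value
print(resolve("{{myBranchedCondition{{myBranchingKey}}Key}}", data))
# Use verbose logging.
```

Changing only myBranchingKey to "BranchB" would route the same template to an entirely different value.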
You can view, edit, and create ContextData elements from the ContextData tab within Brilliant. Alternatively, you can easily turn any text within Brilliant into a ContextData element by highlighting the relevant text, right-clicking, and selecting Create Brilliant placeholder.
Include Files as Context
Easily include a subset of context files directly in your prompts using the {{file:relative/path.txt}} ContextData element. Since each prompt is evaluated independently, you have complete control over how and when specific context files are included in your requests. Use the {{file:relative/path.txt}} ContextData element within other ContextData elements, prompt templates, or even in the chat box.
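The file key behaves like an inline include: the key is replaced by the file's contents before the prompt is sent. A minimal sketch of that behavior, assuming paths are resolved against a workspace root (the function and file names are hypothetical):

```python
import re
import tempfile
from pathlib import Path

def expand_file_keys(template: str, root: Path) -> str:
    """Replace each {{file:relative/path}} key with that file's
    contents, resolved against a workspace root."""
    def read(match: re.Match) -> str:
        return (root / match.group(1)).read_text()
    return re.sub(r"\{\{file:([^}]+)\}\}", read, template)

# Demo with a temporary workspace containing one context file.
root = Path(tempfile.mkdtemp())
(root / "notes.txt").write_text("Prefer tabs over spaces.")
print(expand_file_keys("Style guide: {{file:notes.txt}}", root))
# Style guide: Prefer tabs over spaces.
```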
Terminal Integration
With Brilliant, you can seamlessly generate and run terminal commands directly within your integrated terminal. You can use the Run in Terminal insertion method to automatically run generated terminal commands, or alternatively, you can use the Run command in Current Terminal button within your conversation.
Accretional Proxy
By default, your LLM requests will be served by our Proxy. If you have an API key for another LLM provider and don't wish to go through the Accretional Proxy, you can use the settings icon within Brilliant to switch LLM providers and add your API key.