The D-AI PromptIDE is an integrated development environment for prompt engineering and interpretability research. It accelerates prompt engineering through an SDK that enables the implementation of complex prompting techniques, and through rich analytics that visualize the network's outputs. We use it extensively in our continuous development of Cotton™.

We developed the PromptIDE to give engineers and researchers in the community transparent access to Cotton-1, the model that powers Cotton™. The IDE is designed to empower users to explore the capabilities of our large language models (LLMs) efficiently. At its core is a Python code editor which, combined with a new SDK, facilitates the implementation of complex prompting techniques. As users execute prompts in the IDE, they are presented with valuable analytics, such as precise tokenization, sampling probabilities, alternative tokens, and aggregated attention masks.

The IDE also includes several quality-of-life features to enhance the user experience. It automatically saves all prompts and includes built-in versioning, so users can track and revisit their previous work. The analytics generated from running a prompt can be stored permanently, enabling users to compare the outputs of different prompting techniques. Additionally, users can upload small files, such as CSV files, and read them easily using a single Python function from the SDK. With the SDK's concurrency features, even relatively large files can be processed swiftly.

We aim to build a vibrant community around the PromptIDE. With just a click of a button, users can share any prompt publicly. They have the option to share either a single version of the prompt or the entire prompt tree. Additionally, users can choose to include any stored analytics when sharing their prompts, fostering collaboration and allowing others to explore different approaches and insights.

The PromptIDE is available to members of our early access program. Below, you'll find a walkthrough of the IDE's main features.

Thank you,
the D-AI Team

Code editor & SDK

At the core of the PromptIDE is a powerful code editor and a Python SDK, designed to streamline the development of complex prompting techniques. The SDK introduces a new programming paradigm where all Python functions are executed within an implicit context, represented as a sequence of tokens. Users can manually append tokens to the context using the prompt() function or leverage the model's capabilities to generate tokens automatically based on the current context using the sample() function.
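
As a minimal sketch of this paradigm (the question, max_len, and stop_tokens values here are illustrative, and sample() is assumed to return the generated text):

# Append a fixed string of tokens to the implicit context.
await prompt("Q: What is the capital of France?\nA:")
# Ask the model to extend the context until a stop token is reached.
answer = await sample(max_len=16, stop_tokens=["\n"])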

The PromptIDE utilizes an in-browser Python interpreter that operates in a separate web worker. This allows for efficient execution of Python code locally without relying on external servers. Multiple web workers can be spun up simultaneously, enabling the execution of numerous prompts in parallel. This parallelism enhances the development process, allowing users to quickly iterate through different prompting techniques, compare results, and perform more complex analyses without any noticeable delays.

Complex prompting techniques can be implemented using multiple contexts within the same program. If a function is annotated with the @prompt_fn decorator, it is executed in its own fresh context. The function can perform operations independently of its parent context and pass results back to the caller via the return statement. This programming paradigm enables recursive and iterative prompts with arbitrarily nested sub-contexts.
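
As an illustrative sketch, a @prompt_fn-annotated function that summarizes text in its own fresh context might look as follows (the function and its prompt are hypothetical; sample() is assumed to return the generated text):

@prompt_fn
async def summarize(text):
    # Runs in a fresh context, independent of the caller's tokens.
    await prompt(f"Summarize the following in one sentence:\n{text}\nSummary:")
    # The sampled result flows back to the caller via return.
    return await sample(max_len=64, stop_tokens=["\n"])

summary = await summarize(article_text)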

Concurrency

The SDK uses Python coroutines, so multiple @prompt_fn-annotated functions can be processed concurrently. This can significantly reduce the time to completion, especially when working with CSV files.
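
For example, reusing the summarize() function sketched above, one coroutine can be launched per CSV row (asyncio.gather is standard Python, not an SDK API; rows is assumed to hold the parsed lines of an uploaded file):

import asyncio

# Launch one fresh-context summarization per row and await them all;
# the results come back in row order.
summaries = await asyncio.gather(*(summarize(row) for row in rows))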

User inputs

Prompts can be made interactive through the user_input() function, which blocks execution until the user has entered a string into a textbox in the UI. The user_input() function returns the string entered by the user, which can then, for example, be added to the context via the prompt() function. Using these APIs, a chatbot can be implemented in just four lines of code:

await prompt(PREAMBLE)  # PREAMBLE is defined elsewhere; it seeds the context
while text := await user_input("Write a message"):
    # Append the user's message to the context.
    await prompt(f"<|separator|>\n\nHuman: {text}<|separator|>\n\nAssistant:")
    # Generate the assistant's reply, stopping at the separator token.
    await sample(max_len=1024, stop_tokens=["<|separator|>"], return_attention=True)

Files

Developers can upload small files to the PromptIDE (up to 5 MiB per file and at most 50 MiB in total) and use their uploaded files in a prompt. The read_file() function returns any uploaded file as a byte array. When combined with the concurrency feature mentioned above, this can be used to implement batch-processing prompts that evaluate a prompting technique on a variety of problems. The screenshot below shows a prompt that calculates the MMLU evaluation score.
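
Putting the pieces together, a batch-evaluation prompt might be sketched as follows (the file name and column layout are hypothetical, read_file() is assumed to be awaitable like the other SDK calls, and sample() is assumed to return the generated text):

import asyncio
import csv
import io

@prompt_fn
async def answer(question):
    # Each question is answered in its own fresh context.
    await prompt(f"Question: {question}\nAnswer:")
    return await sample(max_len=32, stop_tokens=["\n"])

raw = await read_file("questions.csv")  # assumed awaitable; returns bytes
rows = list(csv.reader(io.StringIO(raw.decode("utf-8"))))
# Evaluate all questions concurrently; column 0 is assumed to hold the question.
answers = await asyncio.gather(*(answer(row[0]) for row in rows))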

[Screenshot: a prompt that calculates the MMLU evaluation score]

Analytics

While executing a prompt, users see detailed per-token analytics that help them better understand the model's output. The completion window shows the precise tokenization of the context alongside the numeric identifier of each token. When clicking on a token, users also see the top-K tokens after top-P thresholding and the aggregated attention mask at that token.
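
For reference, top-P (nucleus) thresholding keeps the smallest set of highest-probability tokens whose cumulative probability reaches P; the IDE then displays the top-K tokens of that set. A sketch under the standard definition (not the IDE's internal implementation):

def top_p_tokens(probs, p=0.9):
    # probs maps token -> probability. Sort descending and keep the smallest
    # prefix whose cumulative mass reaches p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for token, prob in ranked:
        kept.append(token)
        mass += prob
        if mass >= p:
            break
    return kept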

[Screenshots: sampling probabilities in the PromptIDE]

When the user_input() function is called, a textbox appears in the window while the prompt is running, into which users can enter their response. The screenshot below shows the result of executing the chatbot code snippet listed above.

[Screenshot: the chatbot snippet from above running with a user-input textbox]

Finally, the context can also be rendered in markdown to improve legibility when the token visualization features are not required.

[Screenshot: the context rendered as markdown]