Before I knew it I was tinkering with loading animations, creating a shader background, and making calls to multiple LLM providers.
I used this as an opportunity to explore how it would feel to use an LLM to suggest fully-formed questions from just a few keywords or a phrase.
design foundation
I used Radix UI, creating a simple grayscale theme with their color utility. Since this exploration was driven by a specific UI pattern, I really just needed to come up with interesting content for each bottom sheet.
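For reference, a grayscale setup with the Radix Themes Theme component might look roughly like this (the prop values are illustrative, not pulled from PrettyQuery's source):

import '@radix-ui/themes/styles.css';
import type { ReactNode } from 'react';
import { Theme } from '@radix-ui/themes';

// Wrap the app in a grayscale Radix theme; these prop values are placeholders.
export function App({ children }: { children: ReactNode }) {
  return (
    <Theme accentColor="gray" grayColor="gray" appearance="inherit" radius="large">
      {children}
    </Theme>
  );
}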
My LLM API usage to date had been limited to OpenAI and Anthropic, so I decided to dedicate each bottom sheet to a different LLM provider. This would make for an interesting proof of concept and a way to test out the Vaul library.
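A hedged sketch of that per-provider pattern with Vaul's drawer primitives (the component and prop names here are mine, not PrettyQuery's):

import { Drawer } from 'vaul';

// One bottom sheet per LLM provider; tapping the trigger reveals the explanation.
function ProviderSheet({ name, explanation }: { name: string; explanation: string }) {
  return (
    <Drawer.Root>
      <Drawer.Trigger>{name}</Drawer.Trigger>
      <Drawer.Portal>
        <Drawer.Overlay className="overlay" />
        <Drawer.Content className="sheet">
          <Drawer.Title>{name}</Drawer.Title>
          <p>{explanation}</p>
        </Drawer.Content>
      </Drawer.Portal>
    </Drawer.Root>
  );
}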
You are a question-answering expert. Your task is to provide a concise answer followed by a deeper explanation for a given question.
Here is the question you need to answer:
${basePrompt}
1. First, provide a concise answer to the question. If it is a yes or no question, begin your answer with "Yes" or "No". This answer should be brief and to the point.
2. Next, provide a deeper explanation of your answer. This explanation should elaborate on the concise answer, offering more context, details, or reasoning behind your response.
Format your response as follows:
Concise Answer:
[Your concise answer here]
Explanation:
[Your deeper explanation here]
I structured the prompt sent to each model provider to elicit two parts: a concise answer and an explanation.
This gave me more latitude with the design of the bottom sheet's content, as I could lean into typographic hierarchy to create contrast between the concise answer and the explanation.
It also created the opportunity for progressive disclosure. Users could compare direct answers from all models on one screen, then open the bottom sheet to view the explanations.
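Splitting the response into those two parts is simple string work. A minimal sketch, assuming the model follows the format faithfully (the helper name is mine):

// Split a response of the form "Concise Answer: ... Explanation: ..." into
// its two parts, falling back to the raw text if the labels are missing.
function parseAnswer(raw: string): { concise: string; explanation: string } {
  const match = raw.match(/Concise Answer:\s*([\s\S]*?)\s*Explanation:\s*([\s\S]*)/);
  if (!match) return { concise: raw.trim(), explanation: '' };
  return { concise: match[1].trim(), explanation: match[2].trim() };
}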
In conversation with Claude, we came up with the idea of using Anthropic's Haiku model to generate natural-language questions from keyword inputs.
I used the Anthropic console's prompt refinement tool. This was my first time using a few-shot prompt, providing an example keyword and example suggestions.
Claude also helped with the debounce implementation, ensuring the API call for autosuggestions happened after a user's last keystroke in the search field.
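The shape of that debounce, roughly: a small React hook, with names that are mine rather than the project's.

import { useEffect, useState } from 'react';

// Expose the value only after the user has stopped typing for `delay` ms,
// so the suggestion API isn't called on every keystroke.
function useDebouncedValue<T>(value: T, delay = 300): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(id);
  }, [value, delay]);
  return debounced;
}

An effect keyed on the debounced keyword, rather than the raw input, then makes the suggestion request.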
<examples>
<example>
<KEYWORD>
coffee
</KEYWORD>
<ideal_output>
What are the health benefits of drinking coffee?
How do you make the perfect cup of coffee at home?
What are the most popular coffee drinks around the world?
Why does coffee make you feel more awake?
Where do the best coffee beans come from?
</ideal_output>
</example>
</examples>
You are an AI assistant tasked with generating auto-suggest questions for a search input field. Your goal is to create natural language questions based on a given keyword or phrase that a user has entered. These questions should be relevant, diverse, and helpful in guiding the user's search intent.
Here are the guidelines for generating auto-suggest questions:
1. Analyze the given keyword or phrase to understand potential user intents.
2. Generate questions that explore different aspects or interpretations of the keyword.
3. Ensure questions are in natural language format, as if a person is asking them.
4. Vary the types of questions (e.g., "what", "how", "why", "when", "where") when appropriate.
5. Avoid repetitive or overly similar questions.
6. If the keyword is ambiguous, generate questions for multiple possible interpretations.
The keyword or phrase entered by the user is:
<keyword>
${keyword}
</keyword>
Based on this keyword, generate 5 auto-suggest questions. Output your suggestions in the following format:
<auto_suggest_questions>
[First question]
[Second question]
[Third question]
[Fourth question]
[Fifth question]
</auto_suggest_questions>
If the keyword is very short (1-2 characters) or highly ambiguous, generate broader questions that could help clarify the user's intent.
Remember to focus on creating natural, conversational questions that a real person might ask. Avoid overly technical or robotic-sounding language.
Now, based on the provided keyword, generate 5 auto-suggest questions following these instructions.
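Wiring that prompt to Haiku and pulling the questions back out is a short round trip. A sketch with the Anthropic TypeScript SDK, where buildPrompt stands in for the template above and the parsing is my own:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Send the auto-suggest prompt, then extract the five questions from the
// <auto_suggest_questions> tags in the model's reply.
async function suggestQuestions(keyword: string): Promise<string[]> {
  const message = await client.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 300,
    messages: [{ role: 'user', content: buildPrompt(keyword) }],
  });
  const text = message.content[0].type === 'text' ? message.content[0].text : '';
  const body = text.match(/<auto_suggest_questions>([\s\S]*?)<\/auto_suggest_questions>/)?.[1] ?? '';
  return body.split('\n').map((line) => line.trim()).filter(Boolean);
}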
shaders as intelligence
I've become enchanted with shaders and WebGL. I admire those who can write these advanced motion artifacts by hand. I must rely on Claude.
Something about the perpetual motion, the fluidity of a shader, represents to me the idea of intelligence. Claude helped me write a simple background shader for PrettyQuery that morphs between the simple charcoal and grays of the color palette. It also transitions between light and dark mode alongside the rest of the interface.
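I won't pretend to reproduce the real shader, but the core move is small: blend between two ends of the gray palette with a slowly drifting field, and crossfade the whole thing with the theme state. A toy fragment shader in that spirit, stored as a TypeScript string the way you'd hand it to WebGL (everything here is illustrative):

// uTime drives the perpetual motion; uDark crossfades light and dark palettes.
const fragmentShader = /* glsl */ `
  precision mediump float;
  uniform float uTime;
  uniform float uDark; // 0.0 = light mode, 1.0 = dark mode
  varying vec2 vUv;

  void main() {
    // A slow, drifting wave field stands in for real noise.
    float n = 0.5 + 0.5 * sin(vUv.x * 4.0 + uTime * 0.2)
                        * cos(vUv.y * 3.0 - uTime * 0.15);
    vec3 light = mix(vec3(0.92), vec3(0.75), n); // grays
    vec3 dark  = mix(vec3(0.10), vec3(0.22), n); // charcoals
    gl_FragColor = vec4(mix(light, dark, uDark), 1.0);
  }
`;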
reflection
Another journey of curiosity, PrettyQuery started as a desire to learn about a specific component library and evolved into something much more interesting.
When I completed it, it was the fastest high-fidelity prototype I had ever executed in code. It came together in just a couple of weeks, in the evenings after my 9-to-5 design role.
I feel truly at home—and like I have so much to contribute—at this convergence of intellectual curiosity, design, code, and artificial intelligence.