Building complex UI queries in plain English with AI

Just ask 'Show me logs from yesterday' and AI finds them. No more clicking filters - type what you want, like you're texting a friend. Simple log search that just works.

Written by

Oguzhan Olguncu

Imagine this: You're tracking down an issue in your logs. The traditional approach feels like following a recipe - select the request status, choose the method, navigate to the date-time picker, set it to two hours ago, and keep adding more criteria. Click after click, filter after filter.

Now imagine simply typing: "I need requests with GET methods, success status that happened 2 hours ago." That's it. No more menu diving, no more juggling multiple filters - just tell the system what you want, like you're asking a colleague.

Implementation Journey

When this feature was first discussed, we thought implementation would be challenging. However, if you're already using zod, it's surprisingly straightforward. OpenAI provides a zodResponseFormat helper to generate structured outputs, making the integration super easy.

Query Parameter Structure

At the heart of our implementation is a query parameter design that bridges natural language and code. We used a syntax that's both powerful and intuitive:

operator:value,operator:value (e.g., "is:200,is:404")

Example -> status=is:200,is:400
           path=startsWith:foo,endsWith:bar

This pattern allows for incredible flexibility - you can chain multiple conditions while maintaining readability. To handle these parameters in our Unkey dashboard, we implemented a custom parser for nuqs:

// `parseAsInteger` and the `Parser` type come from nuqs; `FilterUrlValue`,
// `FilterOperator`, and `parseAsRelativeTime` are defined elsewhere in our app.
import { parseAsInteger, type Parser } from "nuqs";

export const parseAsFilterValueArray: Parser<FilterUrlValue[]> = {
  parse: (str: string | null) => {
    if (!str) {
      return [];
    }
    try {
      // Format: operator:value,operator:value (e.g., "is:200,is:404")
      return str.split(",").map((item) => {
        const [operator, val] = item.split(/:(.+)/);
        if (!["is", "contains", "startsWith", "endsWith"].includes(operator)) {
          throw new Error("Invalid operator");
        }
        return {
          operator: operator as FilterOperator,
          value: val,
        };
      });
    } catch {
      return [];
    }
  },
  // In our app we pass a valid type but for brevity it's omitted
  serialize: (value: any[]) => {
    if (!value?.length) {
      return "";
    }
    return value.map((v) => `${v.operator}:${v.value}`).join(",");
  },
};

export const queryParamsPayload = {
  requestId: parseAsFilterValueArray,
  host: parseAsFilterValueArray,
  methods: parseAsFilterValueArray,
  paths: parseAsFilterValueArray,
  status: parseAsFilterValueArray,
  startTime: parseAsInteger,
  endTime: parseAsInteger,
  since: parseAsRelativeTime,
} as const;

Our parser handles edge cases gracefully - from null inputs to invalid operators - while maintaining a clean, predictable output format. The type-safe payload configuration ensures consistency across different parameter types.
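To give a sense of how this payload is consumed, here is a minimal sketch of reading and updating those filters from a component with nuqs' useQueryStates hook. The component name and import path are just for illustration; queryParamsPayload is the object defined above:

"use client";

import { useQueryStates } from "nuqs";
// queryParamsPayload is the object defined above; adjust the path to wherever it lives
import { queryParamsPayload } from "./query-params";

export function LogsFilters() {
  // filters.status, filters.paths, etc. are typed via parseAsFilterValueArray
  const [filters, setFilters] = useQueryStates(queryParamsPayload);

  // Updating state serializes back through the parser,
  // producing a URL like ?status=is:200,is:404
  const showSuccessAndNotFound = () =>
    setFilters({
      status: [
        { operator: "is", value: "200" },
        { operator: "is", value: "404" },
      ],
    });

  return <button onClick={showSuccessAndNotFound}>Show 200s and 404s</button>;
}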

Defining the Schema

With our parameter structure in place, we needed a way to ensure the AI's responses would map perfectly to our system. Enter Zod - our schema validation powerhouse:

import { z } from "zod";

export const filterOutputSchema = z.object({
  filters: z.array(
    z.object({
      field: z.enum([
        "host",
        "requestId",
        "methods",
        "paths",
        "status",
        "startTime",
        "endTime",
        "since",
      ]),
      filters: z.array(
        z.object({
          operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
          value: z.union([z.string(), z.number()]),
        })
      ),
    })
  ),
});

This schema acts as a contract between natural language and our application's expectations. It ensures that every AI response will be structured in a way our system can understand and process. The nested array structure allows for complex queries while maintaining strict type safety.
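As a quick illustration of the contract in action, the following sketch validates a hypothetical response for the query "show me 500s on /api/users". The payload is made up for the example, not an actual model output:

// Import the schema defined above; adjust the path to wherever it lives
import { filterOutputSchema } from "./schema";

// A hypothetical structured response for "show me 500s on /api/users"
const candidate = {
  filters: [
    { field: "status", filters: [{ operator: "is", value: 500 }] },
    { field: "paths", filters: [{ operator: "is", value: "/api/users" }] },
  ],
};

const result = filterOutputSchema.safeParse(candidate);
if (!result.success) {
  // Anything that doesn't match the contract is rejected before it touches the UI
  console.error(result.error.flatten());
}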

System Prompt and OpenAI Integration

The magic happens in how we instruct the AI. Our system prompt is carefully crafted to ensure consistent, reliable outputs:

You are an expert at converting natural language queries into filters. For queries with multiple conditions, output all relevant filters. We will process them in sequence to build the complete filter. For status codes, always return one for each variant like 200,400 or 500 instead of 200,201, etc... - the application will handle status code grouping internally. Always use this ${usersReferenceMS} timestamp when dealing with time related queries.

Query: "path should start with /api/oz and method should be POST"
Result: [
  {
    "field": "paths",
    "filters": [
      {
        "operator": "startsWith",
        "value": "/api/oz"
      }
    ]
  },
  {
    "field": "methods",
    "filters": [
      {
        "operator": "is",
        "value": "POST"
      }
    ]
  }
]

Our actual prompt contains many examples covering each search variation; they are omitted here for brevity. For the best results, make your prompt as detailed as possible.
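As an illustration of the level of detail involved, a time-based query could be covered with an example along these lines (illustrative only, not copied from our actual prompt; the "30m" relative-time value is an assumption):

Query: "show me 500s from the last 30 minutes"
Result: [
  {
    "field": "status",
    "filters": [{ "operator": "is", "value": 500 }]
  },
  {
    "field": "since",
    "filters": [{ "operator": "is", "value": "30m" }]
  }
]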

OpenAI Configuration

Tuning the AI's behavior is crucial for reliable results. Here's our optimized configuration:

import { zodResponseFormat } from "openai/helpers/zod";

// `openai` is an OpenAI client instance constructed elsewhere
const completion = await openai.beta.chat.completions.parse({
  model: "gpt-4o-mini",
  temperature: 0.2, // Lower temperature for more deterministic outputs
  top_p: 0.1, // Focus on highest probability tokens
  frequency_penalty: 0.5, // Maintain natural language variety
  presence_penalty: 0.5, // Encourage diverse responses
  n: 1, // Single, confident response
  messages: [
    {
      role: "system",
      content: systemPrompt,
    },
    {
      role: "user",
      content: userQuery,
    },
  ],
  response_format: zodResponseFormat(filterOutputSchema, "searchQuery"),
});

The low temperature and top_p values ensure predictable outputs, while the penalty parameters help maintain natural-sounding responses.
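To connect this back to the URL syntax from earlier, here is a rough sketch of how the parsed completion could be turned into operator:value strings on the server side. The shape of the return value is illustrative, not our exact route code:

// Illustrative only: turn the structured filters into the operator:value syntax
// the dashboard's nuqs parsers understand (e.g. status=is:200,is:404).
const parsed = completion.choices[0].message.parsed; // typed by filterOutputSchema

const queryParams: Record<string, string> = {};
for (const { field, filters } of parsed?.filters ?? []) {
  queryParams[field] = filters.map((f) => `${f.operator}:${f.value}`).join(",");
}

// e.g. { status: "is:400", paths: "startsWith:/api/oz" }
return queryParams;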

Process Flow

Here's how the entire process works:

User
  |
  | "Show me failed requests from last hour"
  v
Frontend
  |
  | {query: "show me failed requests from last hour"}
  v
tRPC Route
  |
  | {model, messages with system prompt, schema}
  v
OpenAI
  |
  | {structured JSON matching our schema}
  v
tRPC Route
  |
  | status=is:400, since=1h
  v
Frontend
  |
  | /logs?status=is:400&since=1h
  v
URL
  |
  | trigger fetch with new params
  v
Logs tRPC Query
  |
  | return filtered logs
  v
Frontend
  |
  | display filtered results
  v
User

Important Considerations

Before implementing this feature in your own application, here are some crucial factors to consider:

  • Integrating LLMs into your application requires robust error handling. The OpenAI API can experience downtime or rate limiting, so implement fallback mechanisms or show the user a meaningful error message (see the sketch after this list).
  • Each query consumes OpenAI API tokens, so heavier use of AI search translates directly into higher costs.
  • Implement rate limiting: without it, users can abuse your AI-powered search and drive up your bill.
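As a rough illustration of the first and third points, here is a minimal sketch of guarding the OpenAI call. checkRateLimit is a hypothetical helper standing in for whatever rate limiter you use, and the error messages are just examples:

// Hypothetical guard around the OpenAI call; checkRateLimit is a stand-in
// for your own rate limiter (per user or per workspace).
async function searchLogs(userId: string, userQuery: string) {
  const allowed = await checkRateLimit(userId); // hypothetical helper
  if (!allowed) {
    return { error: "Too many searches, please try again in a minute." };
  }

  try {
    const completion = await openai.beta.chat.completions.parse({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userQuery },
      ],
      response_format: zodResponseFormat(filterOutputSchema, "searchQuery"),
    });
    return { filters: completion.choices[0].message.parsed };
  } catch {
    // Fall back gracefully instead of breaking the search UI
    return { error: "AI search is unavailable right now, use the manual filters instead." };
  }
}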

Conclusion

While traditional filter-based UIs work well, the ability to express search criteria in plain English makes log exploration more intuitive and efficient.

The integration with OpenAI's structured output feature and zod makes the implementation surprisingly straightforward. The key to success lies in:

  • Crafting a clear system prompt
  • Defining a robust schema for your use case
  • Implementing proper error handling and fallbacks

Remember that while AI-powered features can enhance your application, they should complement rather than completely replace traditional interfaces. This hybrid approach ensures the best experience for all users while maintaining reliability and accessibility.
