# Handling Tool Calls
When your LLM returns `tool_use` blocks, you need to route them to the correct Configure SDK method and return the results. This guide covers the complete tool-routing pattern.
## Two Categories of Tools
Configure tools fall into two categories:
| Category | Description | Example |
|---|---|---|
| Backend tools | Call the Configure API to read/write data or query external services | `search_emails`, `remember`, `get_profile` |
| UI tools | Render client-side components — no backend call needed | `show_ui_component` |
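The split can be sketched as follows. Note that `ToolUseBlock` and `UI_TOOL_NAMES` here are illustrative stand-ins defined locally for the sketch; in practice the SDK's `isUITool()` (shown in the next section) performs this check for you.

```typescript
// Minimal shape of an Anthropic tool_use content block.
interface ToolUseBlock {
  type: 'tool_use';
  id: string;
  name: string;
  input: Record<string, unknown>;
}

// Illustrative only: the real list lives inside the SDK's isUITool().
const UI_TOOL_NAMES = new Set(['show_ui_component']);

// Partition tool calls: UI tools render client-side, the rest go to the backend.
function splitToolCalls(blocks: ToolUseBlock[]) {
  const ui: ToolUseBlock[] = [];
  const backend: ToolUseBlock[] = [];
  for (const block of blocks) {
    (UI_TOOL_NAMES.has(block.name) ? ui : backend).push(block);
  }
  return { ui, backend };
}
```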
## Checking for UI Tools
Before routing a tool call to the backend, check if it is a UI tool. UI tools are handled entirely in your frontend code.
```typescript
import { isUITool, parseUIToolCall } from 'configure';

for (const block of response.content) {
  if (block.type === 'tool_use') {
    if (isUITool(block.name)) {
      const parsed = parseUIToolCall(block.name, block.input);
      // parsed = { component: 'connection_list', props: { connectors: ['gmail'] } }
      renderComponent(parsed.component, parsed.props);
    } else {
      // Route to the backend (token and userId come from your auth layer)
      const result = await handleToolCall(block.name, block.input, token, userId);
    }
  }
}
```

## Backend Tool Routing
Here is the complete `switch` statement covering every Configure tool name and its corresponding SDK method.
```typescript
import { ConfigureClient } from 'configure';

const client = new ConfigureClient('sk_your_api_key');

async function handleToolCall(
  toolName: string,
  input: Record<string, any>,
  token: string,
  userId: string,
): Promise<unknown> {
  switch (toolName) {
    // ---- Profile tools ----
    case 'get_profile':
      return client.profile.get(token, userId, {
        sections: input.sections,
      });
    case 'get_memories':
      return client.profile.getMemories(token, userId, {
        agent: input.agent,
        from: input.from,
        to: input.to,
      });
    case 'remember':
      return client.profile.remember(token, userId, input.fact);
    case 'ingest':
      if (input.text) {
        return client.profile.ingest(token, userId, {
          text: input.text,
          memoryCriteria: input.memory_criteria,
          sync: input.sync,
        });
      }
      return client.profile.ingest(token, userId, input.messages, {
        memoryCriteria: input.memory_criteria,
        sync: input.sync,
      });
    case 'profile_read':
      return client.profile.read(token, userId, input.path);
    case 'profile_ls':
      return client.profile.ls(token, userId, input.path);

    // ---- Document tools ----
    case 'get_documents': {
      const docNames = input.documents || ['user.md', 'soul.md', 'preferences.md', 'context.md'];
      const documents: Record<string, any> = {};
      for (const doc of docNames) {
        documents[doc] = await client.profile.read(token, userId, `/documents/${doc}`);
      }
      return { documents };
    }
    case 'generate_documents':
      return client.profile.generateDocuments(token, userId, input.documents);

    // ---- Search tools ----
    case 'search_emails':
      return client.tools.searchEmails(token, userId, input.query, {
        maxResults: input.max_results,
      });
    case 'get_calendar':
      return client.tools.getCalendar(token, userId, input.range);
    case 'search_files':
      return client.tools.searchFiles(token, userId, input.query, {
        maxResults: input.max_results,
      });
    case 'search_notes':
      return client.tools.searchNotes(token, userId, input.query, {
        maxResults: input.max_results,
      });
    case 'search_web':
      return client.tools.searchWeb(token, userId, input.query, {
        maxResults: input.max_results,
      });

    // ---- Action tools ----
    case 'create_calendar_event':
      return client.tools.createCalendarEvent(token, userId, {
        title: input.title,
        startTime: input.start_time,
        endTime: input.end_time,
        description: input.description,
        location: input.location,
      });
    case 'send_email':
      return client.tools.sendEmail(token, userId, {
        to: input.to,
        subject: input.subject,
        body: input.body,
      });

    // ---- Self tools (agent storage — no user token needed) ----
    case 'self_get_profile':
      return client.self.getProfile({ sections: input.sections });
    case 'self_get_memories':
      return client.self.getMemories({
        from: input.from,
        to: input.to,
      });
    case 'self_remember':
      return client.self.remember(input.fact);
    case 'self_read':
      return client.self.read(input.path);
    case 'self_write':
      return client.self.write(input.path, input.content, {
        type: input.type,
        mode: input.mode,
      });
    case 'self_ls':
      return client.self.ls(input.path);

    default:
      return { error: `Unknown tool: ${toolName}` };
  }
}
```

## Complete Tool Loop
Here is a full example using the Anthropic SDK with Configure tools. This handles multi-turn conversations where the LLM may call multiple tools before producing a final response.
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { ConfigureClient, CONFIGURE_TOOLS, isUITool } from 'configure';

const anthropic = new Anthropic();
const configure = new ConfigureClient('sk_your_api_key');

async function chat(
  userMessage: string,
  token: string,
  userId: string,
) {
  // 1. Get the user's profile and build the system prompt
  const profile = await configure.profile.get(token, userId);
  const systemPrompt = `You are TravelBot, a travel planning assistant.\n\n${profile.format()}`;

  const messages: Anthropic.MessageParam[] = [
    { role: 'user', content: userMessage },
  ];

  // 2. Loop until the LLM produces a final text response
  while (true) {
    const response = await anthropic.messages.create({
      model: 'claude-haiku-4-5-20251001',
      max_tokens: 4096,
      system: systemPrompt,
      messages,
      tools: CONFIGURE_TOOLS as Anthropic.Tool[],
    });

    // 3. If no tool use, return the text response
    if (response.stop_reason !== 'tool_use') {
      const text = response.content
        .filter((b): b is Anthropic.TextBlock => b.type === 'text')
        .map((b) => b.text)
        .join('');

      // 4. Fire-and-forget memory extraction: deliberately not awaited,
      //    so the response returns without waiting on extraction
      configure.profile
        .ingest(token, userId, [
          { role: 'user', content: userMessage },
          { role: 'assistant', content: text },
        ])
        .catch(console.error);

      return text;
    }

    // 5. Process tool calls
    const toolResults: Anthropic.MessageParam = {
      role: 'user',
      content: [],
    };
    for (const block of response.content) {
      if (block.type !== 'tool_use') continue;
      const input = block.input as Record<string, any>;
      let result: unknown;
      if (isUITool(block.name)) {
        // UI tools return a confirmation string
        result = `Displayed ${input.component_type} component to user`;
      } else {
        result = await handleToolCall(block.name, input, token, userId);
      }
      (toolResults.content as Anthropic.ToolResultBlockParam[]).push({
        type: 'tool_result',
        tool_use_id: block.id,
        content: JSON.stringify(result),
      });
    }

    // 6. Add assistant response and tool results, then loop
    messages.push({ role: 'assistant', content: response.content });
    messages.push(toolResults);
  }
}
```

## Memory Extraction with ingest
After each LLM turn, call `profile.ingest()` with the user message and assistant response. This fires a background extraction request that saves memories without blocking your response. Pass `memoryCriteria` to control what gets extracted.
```typescript
// After each conversation turn: deliberately not awaited, so the
// extraction runs in the background and does not block the caller
configure.profile.ingest(token, userId, [
  { role: 'user', content: 'I prefer flying JAL and my budget is $5000' },
  { role: 'assistant', content: 'Noted! I will keep those preferences in mind for your trip planning.' },
], { memoryCriteria: 'travel preferences, budget, dietary restrictions' }).catch(console.error);
```

## Headless Agents
For CLI tools, backend services, or any agent that cannot render UI, simply use `CONFIGURE_TOOLS` without spreading in `UI_TOOLS`:
```typescript
import { CONFIGURE_TOOLS } from 'configure';

// CONFIGURE_TOOLS does not include show_ui_component by default.
// Only add UI_TOOLS if your agent has a frontend that can render components.
const tools = CONFIGURE_TOOLS;
```

`CONFIGURE_TOOLS` does not include `show_ui_component`. To add UI tools, spread `UI_TOOLS` alongside: `[...CONFIGURE_TOOLS, ...UI_TOOLS]`.
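Whether your agent is headless or UI-capable, backend tool calls can fail at runtime (network errors, revoked tokens), and an uncaught exception kills the whole tool loop. One pattern is to wrap each call so failures become an error-flagged tool result the model can recover from. This is a minimal sketch: `safeToolCall` is a hypothetical helper (not part of the Configure SDK), while the `tool_result` shape with `is_error` follows Anthropic's Messages API.

```typescript
// Minimal shape of an Anthropic tool_result content block.
interface ToolResultBlock {
  type: 'tool_result';
  tool_use_id: string;
  content: string;
  is_error?: boolean;
}

// Hypothetical wrapper: run a backend tool call and convert any thrown
// error into an is_error tool result instead of crashing the loop.
async function safeToolCall(
  toolUseId: string,
  call: () => Promise<unknown>,
): Promise<ToolResultBlock> {
  try {
    const result = await call();
    return {
      type: 'tool_result',
      tool_use_id: toolUseId,
      content: JSON.stringify(result),
    };
  } catch (err) {
    return {
      type: 'tool_result',
      tool_use_id: toolUseId,
      content: err instanceof Error ? err.message : String(err),
      is_error: true,
    };
  }
}
```

With this in place, the tool loop pushes whatever `safeToolCall` returns, and the model sees the error text on the next turn instead of the conversation dying.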