Chat is the best way to get cited, detailed answers to your internal questions. For most queries, Butler will use sources as context; simple queries, such as follow-ons, will not. There are four filters and toggles:
Sources
Pick which apps Butler will search for relevant sources
By default, searches will be performed on all data from connected apps
Model
The LLM used to provide the response
AI Only toggle
Forces Butler to answer without fetching data from connectors, allowing the chat to be used like a regular LLM
Web Search toggle
Sets Model to Perplexity Sonar to enable web search
Tips for asking better questions:
Use relevant keywords: Include specific terms related to your query. For example, instead of “how do I deploy”, try “what is our AWS deployment process for staging”.
Be specific: Frame questions as complete sentences rather than keywords. Example: “What were the key decisions from the Q1 product roadmap meeting?” instead of “roadmap updates”.
Specify source: Mention which apps or documents to search. Example: “Find the security guidelines document in Notion about API authentication”.
Request format: Clarify the type of response you want, whether it’s a summary, a link, or a detailed explanation.
Think of Butler as a real person: the more context you provide, the better it will perform.