Chat is the best way to get cited, detailed answers to your internal questions. For most queries, Butler uses sources as context; simple queries such as follow-ups do not. There are four filters and toggles:
  1. Sources
    • Pick which apps Butler will search for relevant sources
    • By default, searches will be performed on all data from connected apps
  2. Model
    • The LLM used to provide the response
  3. AI Only toggle
    • Forces Butler to answer without fetching data from connectors, so the chat behaves like a regular LLM (no app context used)
  4. Web Search toggle
    • Sets the model to Perplexity Sonar to enable web search

All available models are listed below:
Model                Context Window
GPT-4o               128k tokens
GPT-4o Mini          128k tokens
Claude 3.5 Sonnet    200k tokens
Claude 3.0 Opus      200k tokens
Perplexity Sonar     32k tokens
DeepSeek Chat        64k tokens
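A model's context window caps how much text it can consider at once, so long documents may rule out the smaller-window models. As a rough sketch (not a Butler API; the helper names and the ~4-characters-per-token heuristic are assumptions), here is how you might pick which models from the table can fit a given prompt:

```python
# Context windows from the table above, in tokens.
MODELS = {
    "GPT-4o": 128_000,
    "GPT-4o Mini": 128_000,
    "Claude 3.5 Sonnet": 200_000,
    "Claude 3.0 Opus": 200_000,
    "Perplexity Sonar": 32_000,
    "DeepSeek Chat": 64_000,
}

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English prose.
    return max(1, len(text) // 4)

def models_that_fit(text: str, reply_budget: int = 4_000) -> list[str]:
    # Keep models whose window covers the prompt plus room for the reply.
    needed = estimate_tokens(text) + reply_budget
    return [name for name, window in MODELS.items() if window >= needed]
```

For a short question every model fits; paste in a 200,000-character document (~50k tokens) and the 32k-token Perplexity Sonar drops out of the candidate list.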

Optimization

Tips for getting better responses:
  1. Use relevant keywords: Include specific terms related to your query. For example, instead of “how do I deploy”, try “what is our AWS deployment process for staging”.
  2. Be specific: Frame questions as complete sentences rather than keywords. Example: “What were the key decisions from the Q1 product roadmap meeting?” instead of “roadmap updates”.
  3. Specify source: Mention which apps or documents to search. Example: “Find the security guidelines document in Notion about API authentication”.
  4. Request format: Clarify the type of response you want, whether a summary, a link, or a detailed explanation.
Think of Butler as a real person: the more context you provide, the better it will perform.