Chat
Chat is the best way to get cited, detailed answers to your internal questions. For most queries, Butler uses your connected sources as context; simple queries, such as follow-up questions, skip source retrieval.
There are four filters and toggles:
- Sources
  - Pick which apps Butler will search for relevant sources
  - By default, searches are performed on all data from connected apps
- Model
  - The LLM used to generate the response
- AI Only toggle
  - Lets the chat be used like a regular LLM (no app context)
- Web Search toggle
  - Sets the Model to Perplexity Sonar to enable web search
All available models are listed below:
| Model | Context Window |
| --- | --- |
| GPT-4o | 128k tokens |
| GPT-4o Mini | 128k tokens |
| Claude 3.5 Sonnet | 200k tokens |
| Claude 3.0 Opus | 200k tokens |
| Perplexity Sonar | 32k tokens |
| DeepSeek Chat | 64k tokens |
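Context windows cap how much text a model can consider at once, so long documents may not fit the smaller models. As a rough illustration (the limits come from the table above; the 4-characters-per-token ratio is a common heuristic, not an exact tokenizer, and the `fits_in_context` helper is hypothetical, not part of Butler):

```python
# Rough check of whether a prompt fits a model's context window.
# Limits are taken from the model table; token counts are estimated
# at ~4 characters per token, which is only a heuristic.

CONTEXT_WINDOWS = {
    "GPT-4o": 128_000,
    "GPT-4o Mini": 128_000,
    "Claude 3.5 Sonnet": 200_000,
    "Claude 3.0 Opus": 200_000,
    "Perplexity Sonar": 32_000,
    "DeepSeek Chat": 64_000,
}

def estimated_tokens(text: str) -> int:
    """Estimate token count at roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, model: str, reserve: int = 4_000) -> bool:
    """Return True if `text` likely fits, leaving `reserve` tokens for the reply."""
    return estimated_tokens(text) + reserve <= CONTEXT_WINDOWS[model]

# A ~50,000-character document fits Perplexity Sonar's 32k-token window...
print(fits_in_context("word " * 10_000, "Perplexity Sonar"))
# ...but a ~1,000,000-character one does not.
print(fits_in_context("word " * 200_000, "Perplexity Sonar"))
```

If a query needs more context than the selected model allows, switching to a larger-window model such as Claude 3.5 Sonnet is the simplest workaround.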
Optimization
Tips for getting better responses:
- Use relevant keywords: include specific terms related to your query. For example, instead of “how do I deploy”, try “what is our AWS deployment process for staging”.
- Be specific: frame questions as complete sentences rather than keywords. Example: “What were the key decisions from the Q1 product roadmap meeting?” instead of “roadmap updates”.
- Specify the source: mention which apps or documents to search. Example: “Find the security guidelines document in Notion about API authentication”.
- Request a format: clarify the type of response you want, whether it’s a summary, link, or detailed explanation.
Think of Butler as a real person - the more context you provide, the better it will perform.