Chat App
Frequently Asked Questions
General
What is the Chat App?
The Chat App is FlowState's multi-provider LLM chat interface. It enables you to have real-time streaming conversations with AI models from Anthropic (Claude), OpenAI (GPT), and local models via LM Studio. All conversations are persistently stored with organization and workspace scoping.
Who should use this app?
The Chat App is designed for:
| User Type | Primary Use Cases |
|---|---|
| Developers | Code assistance, debugging, API exploration, tool execution via MCP |
| Knowledge Workers | Document drafting, research, Q&A with persistent conversation history |
| Teams | Shared workspace conversations with organizational context |
| Power Users | Local model integration via LM Studio, custom system prompts |
How is this different from using ChatGPT or Claude directly?
The Chat App offers several advantages:
- Multi-Provider Support: Switch between Anthropic, OpenAI, and local models without changing apps
- Persistent History: Conversations are stored in your FlowState database, not just browser sessions
- Organization Scoping: Conversations are organized by workspace and team
- MCP Tool Integration: AI can execute tools via FlowState's MCP Gateway
- Local-First: Works offline with RxDB's local-first architecture
Features
How do I switch between AI providers?
To switch providers:
- Open any conversation
- Click the model dropdown in the header
- Select your desired provider (Anthropic, OpenAI, or LM Studio)
- Choose a specific model from that provider
The new model will be used for subsequent messages in that conversation.
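As a rough sketch of the idea (type and function names here are illustrative, not the app's actual internals), a conversation tracks its active provider/model pair, and changing the dropdown only affects messages sent afterward:

```typescript
// Illustrative sketch only: the real Chat App internals may differ.
type Provider = "anthropic" | "openai" | "lmstudio";

interface Conversation {
  id: string;
  provider: Provider;
  model: string;
  messages: { role: "user" | "assistant"; content: string; model: string }[];
}

// Switching the model updates the conversation's settings;
// messages already sent keep the model they were generated with.
function switchModel(conv: Conversation, provider: Provider, model: string): Conversation {
  return { ...conv, provider, model };
}

function sendMessage(conv: Conversation, content: string): Conversation {
  return {
    ...conv,
    messages: [...conv.messages, { role: "user", content, model: conv.model }],
  };
}

let conv: Conversation = { id: "c1", provider: "anthropic", model: "claude-sonnet-4-5", messages: [] };
conv = sendMessage(conv, "Hello");
conv = switchModel(conv, "openai", "gpt-4o");
conv = sendMessage(conv, "Continue");
console.log(conv.messages.map(m => m.model).join(","));
```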
Can I use custom system prompts?
Yes, each conversation can have a custom system prompt. System prompts help guide the AI's behavior and tone. The system prompt indicator in the header shows when a custom prompt is active.
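As a simplified sketch of how a per-conversation system prompt typically reaches each provider (request shapes abbreviated, not the app's exact code): OpenAI-style chat APIs take the prompt as a leading `system` message, while Anthropic's Messages API takes it as a top-level `system` field.

```typescript
// Simplified request builders; real requests carry more parameters.
interface ChatMessage { role: "system" | "user" | "assistant"; content: string }

// OpenAI-style APIs: system prompt goes first in the messages array.
function toOpenAIRequest(systemPrompt: string | null, history: ChatMessage[]) {
  const messages = systemPrompt
    ? [{ role: "system" as const, content: systemPrompt }, ...history]
    : history;
  return { model: "gpt-4o", messages };
}

// Anthropic Messages API: system prompt is a top-level `system` field.
function toAnthropicRequest(systemPrompt: string | null, history: ChatMessage[]) {
  return { model: "claude-sonnet-4-5", system: systemPrompt ?? undefined, messages: history };
}

const prompt = "You are a concise technical writer.";
const turns: ChatMessage[] = [{ role: "user", content: "Summarize this doc" }];
const openaiReq = toOpenAIRequest(prompt, turns);
const anthropicReq = toAnthropicRequest(prompt, turns);
console.log(openaiReq.messages.length, anthropicReq.messages.length);
```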
What models are available?
Anthropic Claude:
- Claude Sonnet 4.5 (balanced performance)
- Claude Opus 4.1 (most capable)
- Claude Haiku 4.5 (fastest)
- Previous-generation models (Claude 4 family)
OpenAI GPT:
- GPT-4o (latest flagship)
- GPT-4 Turbo (fast and capable)
- GPT-3.5 Turbo (fast and economical)
LM Studio:
- Any model you have loaded locally
How do MCP tools work?
When the FlowState MCP Gateway is configured:
- Available tools are automatically discovered when the app loads
- The AI can invoke these tools during conversations
- Tool calls appear with expandable sections showing arguments and results
- Status indicators show pending, success, or error states
Tools enable the AI to perform actions like querying databases, creating records, or interacting with external services.
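The status lifecycle above can be sketched as a small state machine: a tool call starts as pending, and the gateway invocation moves it to success or error. (This is an illustration of the described behavior, not the app's real code; `invoke` stands in for whatever transport the MCP Gateway uses.)

```typescript
// Illustrative tool-call lifecycle matching the status indicators above.
type ToolStatus = "pending" | "success" | "error";

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
  status: ToolStatus;
  result?: unknown;
  error?: string;
}

// Run a pending tool call and record the outcome on the call itself,
// so the UI can render the expandable arguments/result section.
async function runToolCall(
  call: ToolCall,
  invoke: (name: string, args: Record<string, unknown>) => Promise<unknown>,
): Promise<ToolCall> {
  try {
    const result = await invoke(call.name, call.args);
    return { ...call, status: "success", result };
  } catch (e) {
    return { ...call, status: "error", error: String(e) };
  }
}
```

A call that resolves becomes `success` with its result attached; one that throws becomes `error` with the message preserved for display.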
Is markdown supported in messages?
Yes, the Chat App provides full markdown support including:
- Headers, bold, italic, strikethrough
- Code blocks with syntax highlighting for many languages
- Lists (ordered and unordered)
- Tables
- Links and images
- GitHub Flavored Markdown (GFM) extensions
Account and Access
How do I get access to this app?
The Chat App is included with your FlowState workspace. Ensure you:
- Have an active FlowState account
- Are a member of a workspace
- Have the necessary permissions to access apps
What permissions do I need?
You need the following permissions:
- Database access: To read and write conversations and messages
- API access: To communicate with LLM providers (via configured API keys)
Are my conversations private?
Conversations are scoped to your organization and workspace. This means:
- Only members of your organization can see organization conversations
- Workspace conversations are visible to workspace members
- API keys are managed at the organization level, not shared in conversations
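Conceptually, the scoping rules above amount to filtering every read by the caller's organization and workspace. A minimal sketch, with illustrative field names (the app's actual schema may differ):

```typescript
// Sketch of organization/workspace scoping on reads; field names illustrative.
interface ConversationDoc {
  id: string;
  organizationId: string;
  workspaceId: string;
  title: string;
}

// Only conversations matching the caller's org and workspace are visible.
function visibleConversations(
  all: ConversationDoc[],
  organizationId: string,
  workspaceId: string,
): ConversationDoc[] {
  return all.filter(c => c.organizationId === organizationId && c.workspaceId === workspaceId);
}

const docs: ConversationDoc[] = [
  { id: "a", organizationId: "org1", workspaceId: "ws1", title: "Roadmap" },
  { id: "b", organizationId: "org2", workspaceId: "ws9", title: "Other org" },
];
const visible = visibleConversations(docs, "org1", "ws1");
console.log(visible.map(c => c.id));
```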
Technical
Which browsers are supported?
We support the latest versions of:
- Chrome (recommended)
- Firefox
- Safari
- Edge
Is my data secure?
Yes, security measures include:
- All data is encrypted in transit (HTTPS)
- RxDB supports encryption at rest
- API keys are stored securely in FlowState configuration, not in conversation data
- Organization isolation ensures data separation between teams
Do I need to be online to use the Chat App?
For sending new messages to cloud providers (Anthropic, OpenAI), yes - you need connectivity to reach them. However:
- Your conversation history is stored locally in RxDB
- You can view past conversations offline
- When using LM Studio with local models, you can chat without internet
- Data syncs when you reconnect
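The reconnect-and-sync behavior can be pictured as an outbox: writes made offline queue up locally and flush when connectivity returns. (This is a minimal sketch of the local-first pattern; the app itself relies on RxDB replication rather than this exact class.)

```typescript
// Minimal outbox sketch of the offline behavior described above.
interface Outgoing { conversationId: string; content: string }

class Outbox {
  private queue: Outgoing[] = [];
  constructor(private online: boolean, private send: (m: Outgoing) => void) {}

  // Online: send immediately. Offline: queue locally.
  enqueue(m: Outgoing) {
    if (this.online) this.send(m);
    else this.queue.push(m);
  }

  // On reconnect, flush everything queued while offline.
  setOnline(online: boolean) {
    this.online = online;
    if (online) {
      for (const m of this.queue.splice(0)) this.send(m);
    }
  }

  pending(): number { return this.queue.length; }
}

const sent: Outgoing[] = [];
const outbox = new Outbox(false, m => sent.push(m));
outbox.enqueue({ conversationId: "c1", content: "drafted while offline" });
outbox.setOnline(true);
console.log(sent.length, outbox.pending());
```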
What are the message/token limits?
Limits depend on your chosen provider and model:
| Provider | Model | Context Window |
|---|---|---|
| Anthropic | Claude Sonnet 4.5 | 200K tokens |
| Anthropic | Claude Opus 4.1 | 200K tokens |
| OpenAI | GPT-4o | 128K tokens |
| OpenAI | GPT-4 Turbo | 128K tokens |
| LM Studio | Varies | Depends on loaded model |
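In practice, staying inside a context window means trimming the oldest turns once a conversation grows too long. A rough sketch (the ~4 characters/token estimate is a crude heuristic; real clients should use the provider's tokenizer):

```typescript
// Rough sketch of keeping a conversation inside a model's context window.
interface Msg { role: string; content: string }

// Crude heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Drop the oldest messages until the history fits the token budget.
function fitToContext(messages: Msg[], maxTokens: number): Msg[] {
  const result = [...messages];
  let total = result.reduce((n, m) => n + estimateTokens(m.content), 0);
  while (result.length > 1 && total > maxTokens) {
    total -= estimateTokens(result[0].content);
    result.shift();
  }
  return result;
}

const msgs: Msg[] = [
  { role: "user", content: "x".repeat(400) },      // ~100 tokens
  { role: "assistant", content: "y".repeat(400) }, // ~100 tokens
  { role: "user", content: "z".repeat(400) },      // ~100 tokens
];
const fitted = fitToContext(msgs, 250);
console.log(fitted.length);
```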
How do I use LM Studio?
To use local models via LM Studio:
- Download and install LM Studio
- Download a model (e.g., Llama, Mistral, Phi)
- Load the model in LM Studio
- Start the local server (default: http://localhost:1234)
- Select "LM Studio" as your provider in the Chat App
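Under the hood, LM Studio's local server exposes an OpenAI-compatible API, so a chat request is a plain POST to its `/v1/chat/completions` endpoint. A sketch of building that request (the model identifier depends on what you have loaded; `buildLmStudioRequest` is an illustrative helper, not part of the app):

```typescript
// Sketch of a request to LM Studio's OpenAI-compatible local server.
const LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions";

function buildLmStudioRequest(model: string, prompt: string) {
  return {
    url: LM_STUDIO_URL,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,          // identifier of the model loaded in LM Studio
        messages: [{ role: "user", content: prompt }],
        stream: true,   // the Chat App streams responses token by token
      }),
    },
  };
}

const req = buildLmStudioRequest("local-model", "Hello");
// To actually send it: await fetch(req.url, req.init)
console.log(req.url);
```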
Billing and Usage
Do conversations count toward API usage?
Yes, messages sent to AI providers count toward your API usage with each provider:
- Anthropic charges per token for Claude models
- OpenAI charges per token for GPT models
- LM Studio uses local resources (no API charges)
How can I reduce API costs?
To optimize costs:
- Use faster/cheaper models when possible (Haiku, GPT-3.5 Turbo)
- Keep prompts concise
- Use LM Studio for development and testing
- Review token usage displayed in message metadata
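To see why model choice dominates cost, a back-of-envelope calculation helps. The per-million-token rates below are placeholders, not real provider pricing - always check each provider's pricing page:

```typescript
// Back-of-envelope cost estimate. Rates are HYPOTHETICAL placeholders,
// not actual Anthropic or OpenAI pricing.
const RATES_PER_MTOK: Record<string, { input: number; output: number }> = {
  "fast-model": { input: 1, output: 5 },      // hypothetical USD per 1M tokens
  "flagship-model": { input: 3, output: 15 }, // hypothetical USD per 1M tokens
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const r = RATES_PER_MTOK[model];
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

// Same token volume, very different cost:
const fastCost = estimateCostUSD("fast-model", 100_000, 20_000);
const flagshipCost = estimateCostUSD("flagship-model", 100_000, 20_000);
console.log(fastCost, flagshipCost);
```

With these illustrative rates, the cheaper model costs a third as much for identical traffic, which is why routing routine work to faster models is the single biggest lever.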
Still Have Questions?
If your question isn't answered here:
- Check Troubleshooting for common issues
- Review the Features documentation for detailed capabilities
- Contact support for additional help