AI Provider System
MIFY provides unified access to 108+ AI providers through a layered architecture.
How It Works
Your Workflow → MIFY selects a provider based on pack/tier → AI SDK adapter normalizes the request → Provider receives the call → Response is normalized back to MIFY format

You write your workflow once. MIFY handles the provider differences: authentication, request format, response parsing, error handling, and cost tracking.
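The normalize-call-normalize flow above can be sketched roughly as follows. All function names and payload shapes here are illustrative stand-ins, not MIFY's actual internal API:

```python
# Rough sketch of the request/response normalization flow described above.
# Names and payload shapes are hypothetical; MIFY's internals are not public.

def normalize_request(prompt: str) -> dict:
    """Adapter step: convert a workflow call into a provider-neutral payload."""
    return {"messages": [{"role": "user", "content": prompt}]}

def call_provider(payload: dict) -> dict:
    """Stand-in for the actual provider call (OpenAI, Anthropic, etc.).
    Returns a stubbed OpenAI-style response for illustration."""
    return {"choices": [{"message": {"content": "stubbed reply"}}]}

def normalize_response(raw: dict) -> dict:
    """Adapter step: map the provider-specific response back to one shape."""
    return {"text": raw["choices"][0]["message"]["content"]}

def run_node(prompt: str) -> dict:
    # The workflow only ever sees the normalized shapes on either end.
    return normalize_response(call_provider(normalize_request(prompt)))

print(run_node("Hello")["text"])  # stubbed reply
```

The point of the layering is that swapping providers changes only the middle step; the workflow-facing request and response shapes stay fixed.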
Provider Tiers
Provider packs group models by optimization target:
| Tier | Optimized For | Example Models |
|---|---|---|
| Fast | Low latency | GPT-4o-mini, Claude Haiku, Gemini Flash |
| Balanced | Cost/quality | GPT-4o, Claude Sonnet, Gemini Pro |
| Accurate | Best quality | GPT-4, Claude Opus, Gemini Ultra |
Select a tier when running workflows, or configure a default.
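A minimal sketch of tier selection with a configurable default, using example models from the table above (the mapping itself is illustrative, not MIFY's actual pack configuration):

```python
# Illustrative tier-to-model mapping; the real pack config lives in MIFY.
TIER_MODELS = {
    "fast": "gpt-4o-mini",      # low latency
    "balanced": "gpt-4o",       # cost/quality trade-off
    "accurate": "claude-opus",  # best quality
}

DEFAULT_TIER = "balanced"  # the configured default tier

def resolve_model(tier=None):
    """Pick a model for the requested tier, falling back to the default."""
    return TIER_MODELS[tier or DEFAULT_TIER]

print(resolve_model("fast"))  # gpt-4o-mini
print(resolve_model())        # gpt-4o (the configured default)
```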
Local Models (Ollama)
Run AI models locally with zero API costs:
- Install Ollama: `curl -fsSL https://ollama.com/install.sh | sh`
- Pull a model: `ollama pull phi3:mini`
- In MIFY, select Ollama as your provider (no API key needed)
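Once installed, Ollama serves a REST API on `localhost:11434`; this is what any client (MIFY included) talks to. The sketch below builds, but does not send, a request to Ollama's `/api/generate` endpoint so you can see the shape of a local call:

```python
# Sketch: what a call to a locally running Ollama server looks like.
# The endpoint and payload fields are Ollama's public REST API.
import json
import urllib.request

def ollama_generate(prompt, model="phi3:mini"):
    """Build (but do not send) a request to Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = ollama_generate("Why is the sky blue?")
# With Ollama running, send it via: urllib.request.urlopen(req)
print(req.full_url)
```

Note there is no API key anywhere in the request; the server is local, which is also why usage is free.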
Cloudflare Workers AI
Run models at the edge with Cloudflare:
- Chat (LLaMA, Phi)
- Embeddings (BGE)
- Image Generation (Stable Diffusion XL)
- Vision, Speech Recognition, Text-to-Speech
- Translation, Classification, Object Detection
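Workers AI models are invoked through Cloudflare's REST API at `/accounts/{account_id}/ai/run/{model}`. The sketch below builds, but does not send, such a request; the account ID, token, and model choice are placeholders you would replace with your own:

```python
# Sketch of a Workers AI call. The endpoint shape is Cloudflare's public
# REST API; account ID, token, and model name are placeholders.
import json
import urllib.request

ACCOUNT_ID = "your-account-id"  # placeholder
API_TOKEN = "your-api-token"    # placeholder

def workers_ai_request(model, prompt):
    """Build (but do not send) a request to Cloudflare's /ai/run endpoint."""
    url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{model}"
    payload = {"messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = workers_ai_request("@cf/meta/llama-3-8b-instruct", "Hello")
print(req.full_url)
```

Embeddings, image generation, and the other task types listed above use the same endpoint pattern with a different model slug and payload.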
Cost Tracking
MIFY tracks AI costs per workflow run:
- Token usage per node
- Cost estimation based on provider pricing
- Usage dashboard at `/settings/usage`
- Admin usage overview at `/admin/usage`
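Per-run cost estimation is, at its core, token counts multiplied by per-token prices and summed over nodes. A minimal sketch (the prices below are illustrative placeholders, not MIFY's actual pricing table):

```python
# Sketch of per-run cost estimation from token counts.
# Prices are illustrative placeholders, not MIFY's pricing data.
PRICING = {  # USD per 1,000,000 tokens: (input, output)
    "gpt-4o-mini": (0.15, 0.60),
}

def node_cost(model, prompt_tokens, completion_tokens):
    """Estimated cost of a single node's AI call."""
    price_in, price_out = PRICING[model]
    return (prompt_tokens * price_in + completion_tokens * price_out) / 1_000_000

def run_cost(nodes):
    """Sum estimated cost over every node in a workflow run."""
    return sum(node_cost(*node) for node in nodes)

nodes = [("gpt-4o-mini", 1200, 300), ("gpt-4o-mini", 800, 150)]
print(f"${run_cost(nodes):.6f}")  # $0.000570
```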
BYOK (Bring Your Own Key)
Add your own API keys for any provider:
- Go to Settings → Credentials
- Select the provider
- Enter your API key
- The key is encrypted at rest and used for your workflows only