AI Provider System

MIFY provides unified access to 108+ AI providers through a layered architecture.

Your Workflow
→ MIFY selects provider based on pack/tier
→ AI SDK adapter normalizes the request
→ Provider receives the call
→ Response normalized back to MIFY format

You write your workflow once. MIFY handles provider differences — authentication, request format, response parsing, error handling, and cost tracking.
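The adapter step above can be sketched as a small normalization function: one unified request shape mapped to provider-specific payloads. The field mappings below are illustrative (the Gemini `contents`/`parts` shape is that provider's documented format, but none of this is MIFY's actual schema):

```python
# A minimal sketch of the adapter layer. Provider names and field
# mappings are illustrative, not MIFY's internal schema.

def normalize_request(provider: str, prompt: str, max_tokens: int) -> dict:
    """Translate a unified (prompt, max_tokens) request into a provider payload."""
    if provider == "openai":
        # Chat-completions style: a list of role/content messages
        return {
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        }
    if provider == "google":
        # Gemini style: nested contents/parts plus a generationConfig block
        return {
            "contents": [{"parts": [{"text": prompt}]}],
            "generationConfig": {"maxOutputTokens": max_tokens},
        }
    raise ValueError(f"unsupported provider: {provider}")
```

The response path works the same way in reverse: each provider's response shape is parsed back into one common result format before it reaches your workflow.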

Provider packs group models by optimization target:

Tier       Optimized For   Example Models
Fast       Low latency     GPT-4o-mini, Claude Haiku, Gemini Flash
Balanced   Cost/quality    GPT-4o, Claude Sonnet, Gemini Pro
Accurate   Best quality    GPT-4, Claude Opus, Gemini Ultra

Select a tier when running workflows, or configure a default.

Run AI models locally with zero API costs:

  1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
  2. Pull a model: ollama pull phi3:mini
  3. In MIFY, select Ollama as your provider — no API key needed
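Once Ollama is running, any HTTP client can reach it locally. A minimal sketch against Ollama's `/api/generate` endpoint (the endpoint and payload fields are Ollama's documented API; the helper names are ours):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_ollama_request(model: str, prompt: str) -> dict:
    """Payload for /api/generate; stream=False returns one complete response."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    data = json.dumps(build_ollama_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because everything runs on localhost, there is no API key and no per-token charge; the only costs are local compute and memory.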

Run models at the edge with Cloudflare:

  • Chat (LLaMA, Phi)
  • Embeddings (BGE)
  • Image Generation (Stable Diffusion XL)
  • Vision, Speech Recognition, Text-to-Speech
  • Translation, Classification, Object Detection
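These models are also reachable from outside a Worker via Cloudflare's REST API for Workers AI. A sketch that builds the request pieces; the account ID and token are placeholders you supply from your Cloudflare dashboard:

```python
# Builds (url, headers, body) for Cloudflare's Workers AI REST endpoint:
#   POST /client/v4/accounts/{account_id}/ai/run/{model}
# The account ID and token below are placeholders, not real credentials.

CF_API = "https://api.cloudflare.com/client/v4"

def workers_ai_request(account_id: str, api_token: str, model: str, prompt: str):
    """Assemble the URL, auth headers, and JSON body for one model run."""
    url = f"{CF_API}/accounts/{account_id}/ai/run/{model}"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = {"prompt": prompt}
    return url, headers, body
```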

MIFY tracks AI costs per workflow run:

  • Token usage per node
  • Cost estimation based on provider pricing
  • Usage dashboard at /settings/usage
  • Admin usage overview at /admin/usage
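Once token counts are recorded per node, cost estimation is simple arithmetic: tokens times the per-million price for each direction. The prices below are placeholders, not MIFY's pricing table; real provider prices vary and change over time:

```python
# Illustrative pricing in USD per 1M tokens, as (input, output) pairs.
# These numbers are placeholders for the sketch, not real provider rates.

PRICES = {
    "gpt-4o-mini": (0.15, 0.60),
    "claude-haiku": (0.25, 1.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one node run from its token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```

Summing this estimate across all nodes in a run gives the per-workflow figure shown on the usage dashboard.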

Add your own API keys for any provider:

  1. Go to Settings → Credentials
  2. Select the provider
  3. Enter your API key
  4. The key is encrypted at rest and used only for your own workflows