AI API Providers

The AI API Providers page is the technical core of PrxmptStudix. It allows you to manage the connections to the Large Language Models (LLMs) that power your experiments and internal app features.

1. Provider Management

PrxmptStudix supports both built-in "Preset" providers and fully "Custom" providers.

Preset Providers: Presets are pre-configured for popular services (OpenAI, Anthropic, Gemini, Groq, DeepSeek, etc.). They include the correct API host and path by default, requiring only your API Key to function.

Custom Providers: Add a provider that isn’t in the preset list.

  • API Compatibility: Choose between OpenAI, Anthropic, or Gemini compatibility modes.

  • Base URL & Path: Define exactly where the application should send requests (e.g., http://localhost:11434 for Ollama).
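Joining the Base URL and Path is simple string composition. The sketch below (a hypothetical helper, not PrxmptStudix's actual code) shows how a custom provider's endpoint resolves, using Ollama's OpenAI-compatible endpoint as the example:

```python
def build_endpoint(base_url: str, path: str) -> str:
    """Join a provider base URL and API path, normalizing slashes.
    Hypothetical helper for illustration only."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")

# Ollama exposes an OpenAI-compatible endpoint on port 11434:
print(build_endpoint("http://localhost:11434", "/v1/chat/completions"))
# http://localhost:11434/v1/chat/completions
```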


2. API Configuration

Authentication

  • API Keys: Securely enter your provider credentials. Use the Eye Icon to toggle visibility.

  • Connection Test: Use the Test button to verify your credentials and network connectivity before running experiments.
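How the API key is sent depends on the provider's convention. As a rough sketch (the function name and mapping are illustrative, though the header names match the providers' published APIs), OpenAI-compatible services expect a Bearer token, while Anthropic uses an `x-api-key` header plus a version header:

```python
def auth_headers(api_key: str, mode: str) -> dict:
    """Build request headers for a given API compatibility mode.
    Illustrative sketch; header conventions per provider docs."""
    if mode == "anthropic":
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    # OpenAI-compatible services (including most custom providers):
    return {"Authorization": f"Bearer {api_key}"}
```

A connection test then amounts to sending a cheap request (e.g., listing models) with these headers and checking for a 200 response.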

API Modes

  • OpenAI Compatible: Works with any service following the /v1/chat/completions standard.

  • Anthropic/Gemini: Specialized modes for native provider SDK features.

  • Ollama: Dedicated mode for local model management without requiring an API key.
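Each mode implies a different default request path. The table below is an assumption about reasonable defaults (PrxmptStudix's internals may differ), but the paths themselves are the ones documented by each provider:

```python
# Default request paths per API mode (per each provider's public docs;
# whether PrxmptStudix uses exactly these defaults is an assumption):
DEFAULT_PATHS = {
    "openai":    "/v1/chat/completions",
    "anthropic": "/v1/messages",
    "gemini":    "/v1beta/models/{model}:generateContent",
    "ollama":    "/api/chat",  # local native API, no key required
}

print(DEFAULT_PATHS["gemini"].format(model="gemini-1.5-pro"))
# /v1beta/models/gemini-1.5-pro:generateContent
```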


3. Model Management

Each provider maintains its own list of available models.

Synchronizing Models

  • Fetch Models: Click Fetch Models to query the provider's API and automatically discover every model available to your account.

  • Model Enrichment: Once models are fetched, PrxmptStudix uses a built-in database to automatically populate context windows, token limits, and pricing information for known models. You are free to modify these values as needed.
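Conceptually, fetch-and-enrich is a two-step process: parse the provider's model list, then look each ID up in a metadata table. The sketch below assumes an OpenAI-style `GET /v1/models` response shape; the `KNOWN_MODELS` table and its figures are illustrative placeholders, not PrxmptStudix's real database:

```python
import json

# Abbreviated response shaped like OpenAI's GET /v1/models:
raw = '{"object": "list", "data": [{"id": "gpt-4o", "object": "model"}]}'

# Hypothetical enrichment table keyed by model ID (illustrative values):
KNOWN_MODELS = {
    "gpt-4o": {"context_window": 128000, "input_per_mtok": 2.50, "output_per_mtok": 10.00},
}

model_ids = [m["id"] for m in json.loads(raw)["data"]]
enriched = {mid: KNOWN_MODELS.get(mid, {}) for mid in model_ids}
# Unknown models get an empty dict, ready for manual editing.
```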

Manual Configuration

You can manually add or edit model details at any time:

  • IDs vs. Nicknames: Assign a friendly name (e.g., "Main Storywriter") to a specific model ID (e.g., gpt-4o-2024-05-13) for easier recognition in the Lab.

  • Capabilities: Flag models as supporting Vision, Tool Use, or Image Generation.

  • Pricing: Manually enter Input/Output costs per 1 million tokens to ensure accurate experiment cost estimates.
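The per-million-token pricing feeds directly into cost estimates. A minimal sketch of the arithmetic (the function name is hypothetical; the dollar figures are just example inputs):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_per_mtok: float, output_per_mtok: float) -> float:
    """Estimated cost in dollars, given prices per 1 million tokens."""
    return (input_tokens * input_per_mtok
            + output_tokens * output_per_mtok) / 1_000_000

# 200k input + 50k output tokens at $2.50 / $10.00 per million tokens:
print(estimate_cost(200_000, 50_000, 2.50, 10.00))  # 1.0
```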


4. Best Practices

  1. Always Test First: Before launching a large experiment, use the Test button on both the Provider and the specific Model to ensure your account has sufficient credits and the API version is current.

  2. Use Nicknames: When testing many variations of the same model (e.g., different O1 reasoning levels), use nicknames to distinguish them in your result charts.

  3. Local Fetching: If using Ollama, ensure the Ollama service is running on your machine before clicking Fetch Models.


© 2024–2026 Hikoky. All Rights Reserved.