Setting Up AI Models
Before creating AI services, you first need to configure your AI model provider. APIPark supports over 100 AI models, including OpenAI, Anthropic, AWS Bedrock, Google Gemini, and more. Once a provider is configured, you can select different models to create AI services and manage all authorization information and cost statistics for those services within APIPark.
AI Model Load Balancing
AI model load balancing is an intelligent scheduling mechanism designed to ensure the high availability and stability of AI services. When the primary AI provider fails, load balancing automatically switches requests to a backup provider. This avoids service interruptions caused by provider outages, keeps AI applications running continuously, and improves the user experience.
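The failover behavior described above can be sketched as a simple priority loop: try each provider in order and fall back when one fails. This is a minimal illustration, not APIPark's actual implementation; the provider callables here are hypothetical stand-ins for real AI provider clients.

```python
def call_with_failover(providers, payload):
    """Try each provider callable in priority order; fall back on failure.

    `providers` is an ordered list, highest priority first.
    """
    last_error = None
    for call in providers:
        try:
            return call(payload)
        except Exception as exc:  # outage, timeout, quota error, ...
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

# Illustrative stand-ins for a primary and a backup AI provider.
def primary(payload):
    raise ConnectionError("primary provider is down")

def backup(payload):
    return {"provider": "backup", "echo": payload}

result = call_with_failover([primary, backup], {"prompt": "hi"})
```

Because the primary raises, the request is transparently served by the backup provider, which is the behavior load balancing automates for you.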
One-click LLM Deployment
APIPark's one-click LLM deployment capability lets users deploy mainstream open-source large language models (such as DeepSeek, LLaMA, ChatGLM, and Qwen) through a visual interface with a single click, automatically completing model optimization, service deployment, and gateway configuration initialization. Developers do not need to manage the underlying infrastructure: they can deploy an open-source model locally within minutes and expose it as an API that follows the OpenAI request and response format. The resulting API can be integrated into existing business systems, significantly lowering the barrier to AI adoption and helping enterprises quickly build intelligent services.
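Because the deployed model speaks the OpenAI request/response format, any OpenAI-style client can call it. The sketch below builds a standard chat-completions request body; the gateway URL and model name are illustrative assumptions, not values defined by APIPark, and would come from your own deployment.

```python
import json

# Hypothetical gateway endpoint; replace with your APIPark deployment's URL.
GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model, user_message):
    """Build a request body in the OpenAI chat-completions format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

body = json.dumps(build_chat_request("deepseek-r1", "Hello!"))
# POST `body` to GATEWAY_URL with any HTTP client, e.g.:
#   curl -X POST $GATEWAY_URL -H 'Content-Type: application/json' -d "$body"
```

Existing code written against OpenAI's API can switch to a locally deployed model by changing only the base URL and model name.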
APIKEY Resource Pool
The APIKEY resource pool is a feature that centrally manages and allocates APIKEYs, providing strong support for the stable operation of AI services. In the resource pool, users can view and manage APIKEYs from various vendors, including their status (such as normal, exceeded, or expired) and call priority. Through drag-and-drop operations, users can easily adjust the priority order of APIKEYs to meet different business needs. When an APIKEY exceeds its quota or expires, the system automatically activates other APIKEYs by priority to keep AI services continuously available.
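The selection rule above amounts to: pick the highest-priority key whose status is still normal. Here is a minimal sketch of that rule; the key names, status values, and `pick_key` helper are illustrative assumptions, not APIPark internals.

```python
from dataclasses import dataclass

@dataclass
class ApiKey:
    key: str
    priority: int           # lower value = tried first
    status: str = "normal"  # e.g. "normal", "exceeded", "expired"

def pick_key(pool):
    """Return the highest-priority APIKEY still in 'normal' status, or None."""
    usable = [k for k in pool if k.status == "normal"]
    return min(usable, key=lambda k: k.priority, default=None)

pool = [
    ApiKey("sk-vendor-a", priority=1, status="exceeded"),  # skipped: over quota
    ApiKey("sk-vendor-b", priority=2),                     # selected
    ApiKey("sk-vendor-c", priority=3),
]
selected = pick_key(pool)
```

Reordering keys via drag-and-drop corresponds to changing the `priority` values here: the pool always falls back to the next usable key in that order.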
AI API Management
AI API Management centrally displays and manages the APIs called from various AI vendors. From this list, users can view detailed call information and token consumption for every AI API.