📄️ Integrating LLM Providers
Before creating AI services, you first need to configure an AI model provider. APIPark supports over 100 AI models, including OpenAI, Anthropic, AWS Bedrock, and Google Gemini. After configuring a provider, you can choose among its models to create AI services, and manage all authorization information and cost statistics for those services inside APIPark.
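Once a provider is configured, an AI service published through APIPark can be consumed like any OpenAI-compatible endpoint. The sketch below is a minimal illustration only; the gateway address, path, API key, and model name are placeholder assumptions, not values from this page.

```python
import requests

# Hypothetical gateway address and credential -- replace with the
# values from your own APIPark deployment.
GATEWAY_URL = "http://localhost:8080/openapi/v1/chat/completions"
API_KEY = "your-apipark-api-key"

resp = requests.post(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o",  # a model offered by the configured provider
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```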
📄️ One-click LLM Deployment
APIPark's one-click LLM deployment capability lets users deploy mainstream open-source large language models (such as DeepSeek, LLaMA, ChatGLM, and Qwen) through a visual interface with a single click, automatically completing model optimization, service deployment, and gateway configuration initialization. Developers do not need to deal with the underlying architecture: they can deploy an open-source model locally within minutes and expose it as an API that complies with the OpenAI request and response format. The resulting API can be integrated into existing business systems, significantly lowering the barrier to AI application adoption and helping enterprises quickly build intelligent service capabilities.
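Because a deployed model is exposed in the OpenAI request/response format, any OpenAI-compatible client can talk to it unchanged. A minimal sketch, assuming the `openai` Python package and a placeholder local endpoint, key, and model name:

```python
from openai import OpenAI

# Hypothetical local endpoint for a model deployed via APIPark;
# the URL, key, and model name below are placeholder assumptions.
client = OpenAI(
    base_url="http://localhost:8080/openapi/v1",
    api_key="your-apipark-api-key",
)

reply = client.chat.completions.create(
    model="deepseek-r1",  # e.g. a one-click-deployed open-source model
    messages=[{"role": "user", "content": "Summarize what APIPark does."}],
)
print(reply.choices[0].message.content)
```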
📄️ APIKEY Resource Pool
The APIKEY resource pool centrally manages and allocates API keys, providing strong support for the stable operation of AI services. In the resource pool, users can view and manage API keys from various vendors, including their status (normal, quota exceeded, expired, etc.) and calling priority. Through drag-and-drop, users can easily adjust the priority order of API keys to meet different business needs. When an API key hits its quota or expires, the system automatically switches to other API keys based on priority, ensuring the continuous availability of AI services.
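The selection behavior can be pictured as a priority-ordered scan that skips unusable keys. The following is only an illustrative sketch of the behavior described above, not APIPark's internal implementation:

```python
from dataclasses import dataclass

@dataclass
class ApiKey:
    value: str
    priority: int  # lower number = tried first (the drag-and-drop order)
    status: str    # "normal", "exceeded", or "expired"

def pick_key(pool: list[ApiKey]) -> ApiKey:
    """Return the highest-priority key that is still usable."""
    for key in sorted(pool, key=lambda k: k.priority):
        if key.status == "normal":
            return key
    raise RuntimeError("no usable API key in the pool")

pool = [
    ApiKey("sk-aaa", priority=1, status="exceeded"),  # quota used up
    ApiKey("sk-bbb", priority=2, status="normal"),    # picked instead
]
print(pick_key(pool).value)  # -> sk-bbb
```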
📄️ AI Model Disaster Recovery
AI model disaster recovery is an intelligent scheduling mechanism designed to ensure high availability and stability of AI services. When the primary AI provider experiences a failure, load balancing can automatically switch requests to a backup AI provider. This effectively prevents service interruption caused by provider issues, ensures continuous operation of AI applications, and enhances user experience.
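Conceptually, the failover behaves like a try-primary-then-backup loop around the upstream call. A minimal sketch under assumed provider endpoints (not APIPark's actual scheduler):

```python
import requests

# Hypothetical provider endpoints, listed in failover order.
PROVIDERS = [
    "https://primary-provider.example/v1/chat/completions",
    "https://backup-provider.example/v1/chat/completions",
]

def chat(payload: dict) -> dict:
    """Send the request to the first provider that responds successfully."""
    last_error = None
    for url in PROVIDERS:
        try:
            resp = requests.post(url, json=payload, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            last_error = exc  # provider failed; fall through to the backup
    raise RuntimeError("all providers failed") from last_error
```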
📄️ AI API Management
AI API Management centrally displays and manages the APIs called from various AI vendors. Through this list, users can view detailed call information and token consumption for all AI APIs.
📄️ Model Alias Mapping
Through the unified AI interface provided by APIPark, global model-parameter routing is supported: on any connected AI endpoint, you can call the target model directly by passing the parameter `model=<supplier ID>/<model name>`. APIPark automatically completes channel routing, authentication parameter passing, and response format standardization.
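Switching models or suppliers is then just a matter of changing the `model` string. In the sketch below, the supplier IDs, endpoint, and key are illustrative placeholders, not fixed values:

```python
import requests

# Same unified endpoint; only the model parameter changes.
# "openai" and "anthropic" stand in for whatever supplier IDs
# you have configured -- they are assumptions, not fixed values.
for model in ("openai/gpt-4o", "anthropic/claude-3-5-sonnet"):
    resp = requests.post(
        "http://localhost:8080/openapi/v1/chat/completions",
        headers={"Authorization": "Bearer your-apipark-api-key"},
        json={"model": model,
              "messages": [{"role": "user", "content": "ping"}]},
        timeout=30,
    )
    print(model, "->", resp.status_code)
```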
📄️ Integrating MCP (Model Context Protocol)
In late 2024, Anthropic introduced the Model Context Protocol (MCP). As an emerging open protocol, MCP establishes a bidirectional communication channel between LLMs and external applications, akin to an AI “USB-C” connection, allowing models to discover, understand, and securely invoke various external tools or APIs.