Changelog
V1.6 Beta
New LLM Provider Integrations
Added pre-configured support for:
- Volcano Engine
- Alibaba Bailian Cloud
- Hugging Face
- Ollama
- LM Studio
- Xinference
This expands service capabilities across multiple cloud platforms and open-source frameworks.
Custom Channel Integration
- Now supports custom API channel integration for any provider that strictly adheres to OpenAI-compatible interfaces
- Seamlessly integrate third-party LLM services into the APIPark ecosystem
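For illustration, "OpenAI-compatible" here means the provider accepts the standard chat-completions request shape. The gateway URL and model name below are placeholders, not actual APIPark endpoints; this sketch only shows the payload a custom channel is expected to handle at `POST {base_url}/chat/completions`:

```python
import json

# Placeholder gateway address -- substitute your own APIPark deployment URL.
BASE_URL = "https://your-apipark-gateway.example.com/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the minimal OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("my-custom-model", "Hello")
print(json.dumps(payload))
```

Any provider that parses this payload and returns the matching completion schema can, per the note above, be wired in as a custom channel.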
Enhanced Model Customization
- Custom model options available across all channels
- Flexible configuration for model selection and parameter tuning
Model Parameter Value Redirection
- Added model value mapping in service configurations
- Allows using simplified alias names instead of original model identifiers
- Example: Map "gpt-lite" → "azure-gpt-4-0125-preview"
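The redirection behaviour can be sketched as a simple lookup. This is illustrative only; APIPark configures the mapping in the service settings rather than in code, and the alias names below are hypothetical:

```python
# Client-facing alias -> provider-side model identifier (hypothetical values).
MODEL_ALIASES = {
    "gpt-lite": "azure-gpt-4-0125-preview",
}

def resolve_model(requested: str) -> str:
    """Return the provider-side model ID for a client-facing alias."""
    # Unmapped names pass through unchanged.
    return MODEL_ALIASES.get(requested, requested)

print(resolve_model("gpt-lite"))  # → azure-gpt-4-0125-preview
print(resolve_model("llama3"))    # → llama3 (no mapping, passed through)
```

Clients keep calling the simplified alias even if the underlying provider model is later swapped out.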
V1.5 Beta
- Added one-click deployment capability for open-source LLMs. Supports deploying the world's most popular open-source large-scale models via APIPark, including simplified and full-featured versions of models like DeepSeek-R1 and DeepSeek-V3.
- Optimized the AI model deployment configuration page experience by migrating load balancing capabilities to a new standalone menu page and upgrading it to support model-level load balancing. This allows users to more flexibly define failover strategies between AI models.
- Continuously improved the AI model interface invocation process. When creating an AI service, the system now automatically initializes service access authorization, shortening the user configuration process and enhancing the overall user experience.
V1.4 Beta
- Added support for AI model load balancing, enabling smooth failover when the original AI provider is inaccessible, ensuring your customers are not affected by the provider's issues.
- Introduced support for an AI API key resource pool, allowing multiple API keys for the same AI provider to be entered; the system automatically rotates through available keys, working around per-key restrictions imposed by the provider.
- Added support for token consumption statistics of AI APIs, allowing you to view the number of tokens consumed when calling various AI services' APIs over a specified time range.
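The two resilience features above can be sketched together: a per-provider key pool rotated round-robin, with failover to the next provider when one is unreachable. This is a minimal illustration of the idea, not APIPark internals; the provider names, keys, and the `call_provider` stub are all hypothetical:

```python
import itertools

# Hypothetical key pools: each provider cycles through its own keys.
KEY_POOLS = {
    "provider-a": itertools.cycle(["key-a1", "key-a2"]),
    "provider-b": itertools.cycle(["key-b1"]),
}
PROVIDER_ORDER = ["provider-a", "provider-b"]

def call_provider(provider: str, api_key: str, prompt: str) -> str:
    # Stub standing in for a real HTTP call; provider-a always fails here
    # so the example demonstrates failover.
    if provider == "provider-a":
        raise ConnectionError(f"{provider} unreachable")
    return f"{provider} answered using {api_key}"

def chat_with_failover(prompt: str) -> str:
    for provider in PROVIDER_ORDER:
        api_key = next(KEY_POOLS[provider])  # rotate keys in the pool
        try:
            return call_provider(provider, api_key, prompt)
        except ConnectionError:
            continue  # smooth failover: try the next provider
    raise RuntimeError("all providers failed")

print(chat_with_failover("Hello"))  # → provider-b answered using key-b1
```

Callers see a single successful response even though the first provider was down, which is the customer-facing guarantee the changelog entry describes.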