ML Model - Container Resource
The EnOS Enterprise Analytics Platform's MI Hub provides distribution centers for smart assets, serving model users and data scientists, and offers model developers a complete model registration workflow and hosting services.
After a model is developed in an authoring lab or in a third-party system, the model developer can deploy it to a production environment to run prediction tasks. To meet model update requirements, users can deploy multiple models to the production environment at the same time through canary or blue/green deployment (see the sketch below). The MI Hub also provides model developers with effective version management tools, so that models can be shared with end users and other collaborators in a safe and controlled environment, and smart assets explored or created by other developers can be reused.
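The canary and blue/green strategies mentioned above differ in how prediction traffic reaches the new model version. The following Python sketch illustrates the canary idea only in general terms; it is not the MI Hub API, and the endpoint URLs, `route_request`, and `CANARY_WEIGHT` are hypothetical names used for illustration.

```python
import random

# Hypothetical endpoints for two deployed model versions (not actual MI Hub URLs).
STABLE_ENDPOINT = "https://models.example.com/demand-forecast/v1"
CANARY_ENDPOINT = "https://models.example.com/demand-forecast/v2"

# Fraction of prediction traffic routed to the new (canary) version.
CANARY_WEIGHT = 0.1


def route_request() -> str:
    """Pick a model endpoint for one prediction request.

    A canary deployment sends a small share of traffic to the new version and
    raises CANARY_WEIGHT gradually; a blue/green deployment instead switches
    all traffic at once from the old (blue) to the new (green) environment.
    """
    return CANARY_ENDPOINT if random.random() < CANARY_WEIGHT else STABLE_ENDPOINT


if __name__ == "__main__":
    sample = [route_request() for _ in range(1000)]
    print("canary share:", sample.count(CANARY_ENDPOINT) / len(sample))
```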
Resource Application Scenario
Before using the MI Hub to deploy machine learning models, you need to apply for the ML Model - Container resource.
Resource Specification
When deploying a model, you can select the CPU, RAM, and storage specifications according to your actual business requirements and apply for the corresponding resources to support production use.
Specification | Description
---|---
CPU | Available options are 2 to 64 vCores
RAM | Available options are 4 to 128 GB
Storage | Available options are 5 to 320 GB
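The ranges in the table can be treated as bounds when preparing a resource request. The Python sketch below is only illustrative: the `ResourceRequest` class and the hard-coded limits mirror the table above rather than any actual EnOS API, and you should confirm the allowed ranges against your own environment.

```python
from dataclasses import dataclass

# Allowed ranges taken from the specification table above (illustrative only).
CPU_RANGE_VCORE = (2, 64)
RAM_RANGE_GB = (4, 128)
STORAGE_RANGE_GB = (5, 320)


@dataclass
class ResourceRequest:
    """A hypothetical ML Model - Container resource request."""

    cpu_vcore: int
    ram_gb: int
    storage_gb: int

    def validate(self) -> None:
        """Raise ValueError if any value falls outside the documented range."""
        checks = [
            ("CPU (vCores)", self.cpu_vcore, CPU_RANGE_VCORE),
            ("RAM (GB)", self.ram_gb, RAM_RANGE_GB),
            ("Storage (GB)", self.storage_gb, STORAGE_RANGE_GB),
        ]
        for name, value, (low, high) in checks:
            if not low <= value <= high:
                raise ValueError(f"{name} must be between {low} and {high}, got {value}")


if __name__ == "__main__":
    # A request within all documented ranges passes validation.
    ResourceRequest(cpu_vcore=4, ram_gb=16, storage_gb=50).validate()
```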