Call Model Service


After completing the online deployment of a model version, you can call the deployed model service through the model service interface. The AI Hub supports calling model services through the REST API and gRPC.

Call Model Service via HTTP


The following sample shows how to call the REST API of a model service via the HTTP protocol.

Get Model Service Interface URL


The pattern of the model service interface URL is:


{URI-scheme}://{domainName}/eap/metrics/{namespace}/v0.2/{namespace}/{modelName}/{modelName-instanceName-deployment}/api/v1.0/predictions


Where:

  • URI-scheme: the protocol; only HTTP is currently supported.

  • domainName: the gateway address of the service, in the format eap.{enos-environment}.{abc}.com, where enos-environment is the name of the deployment environment in EnOS.

  • namespace: the name of the container resource of the deployment service, as shown in the figure below.

  • modelName: the name of the model.

  • instanceName: the name of the model version deployment instance.
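The URL can be assembled from these components in code. The following sketch is illustrative only: build_prediction_url is a hypothetical helper, and the argument values are the placeholder and sample values used elsewhere in this document.

```python
def build_prediction_url(domain_name, namespace, model_name, instance_name):
    """Assemble the model service prediction URL from its components."""
    # The deployment segment follows the {modelName}-{instanceName}-deployment pattern.
    deployment = f"{model_name}-{instance_name}-deployment"
    return (
        f"http://{domain_name}/eap/metrics/{namespace}/v0.2/"
        f"{namespace}/{model_name}/{deployment}/api/v1.0/predictions"
    )

url = build_prediction_url(
    domain_name="eap.{enos-environment}.{abc}.com",  # gateway address placeholder
    namespace="mmc-dw4pdv-o15960025444321",          # sample namespace
    model_name="demo01",                             # sample model name
    instance_name="instance",                        # sample instance name
)
print(url)
```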

Request Sample


Taking a call to a predictive model service as an example, the request format is as follows:

url: http://{domainName}/eap/metrics/mmc-dw4pdv-o15960025444321/v0.2/mmc-dw4pdv-o15960025444321/demo01/demo01-instance-deployment/api/v1.0/predictions

method: POST

requestBody:
{
  "data": {
    "names": [
      "AGE",
      "AGE1",
      "AGE1",
      "AGE1"
    ],
    "ndarray": [
      [
        6,
        3,
        2,
        2
      ]
    ]
  }
}
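The request above can be sent from Python. The sketch below only builds and serializes the sample payload; {domainName} is the gateway address placeholder from the URL pattern, and the actual HTTP call (shown in comments, assuming the third-party `requests` library) is not executed here.

```python
import json

# Payload matching the request sample above; names and values are the
# document's example data.
payload = {
    "data": {
        "names": ["AGE", "AGE1", "AGE1", "AGE1"],
        "ndarray": [[6, 3, 2, 2]],
    }
}

url = ("http://{domainName}/eap/metrics/mmc-dw4pdv-o15960025444321/v0.2/"
       "mmc-dw4pdv-o15960025444321/demo01/demo01-instance-deployment/"
       "api/v1.0/predictions")

body = json.dumps(payload)
print(body)

# With the `requests` library installed, the POST call would look like:
# import requests
# resp = requests.post(url, data=body, headers={"Content-Type": "application/json"})
# print(resp.json())
```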

Feedback Interface Request Sample


The feedback interface request format is shown as follows:

url: http://{domainName}/eap/metrics/mmc-dw4pdv-o15960025444321/v0.2/mmc-dw4pdv-o15960025444321/demo01/demo01-instance-deployment/api/v1.0/feedback

method: POST

requestBody:
{
  "request": {
    "data": {
      "names": [
        "AGE",
        "RACE",
        "DCAPS",
        "VOL"
      ],
      "ndarray": [
        [
          1,
          1,
          1,
          100
        ]
      ]
    }
  },
  "response": {
    "meta": {
      "routing": {
        "eg-router": 1
      }
    },
  "data": {
    "names": [
      "t:0",
      "t:1",
      "t:2"
    ],
    "ndarray": [
      [
        0.027491291429107768,
        0.00240284367849394,
        1.0586489239828885E-4
      ]
    ]
  }
  },
  "reward": 1
}


Where:

  • request: the input test data.

  • response: the returned prediction data.

  • data: the actual prediction results.

  • routing: multi-armed bandit (slot machine) deployment parameter, which specifies the model version to use for prediction (according to the model version deployment order).

  • reward: multi-armed bandit (slot machine) parameter, which is the reward value fed back to the router.
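A feedback body combines the original prediction request, the model's response, and a reward. The sketch below assembles one from the sample data above; build_feedback_body is a hypothetical helper, not part of the AI Hub API.

```python
def build_feedback_body(request_data, response_body, reward):
    """Combine the original request, the model's response, and a reward value."""
    return {
        "request": request_data,
        "response": response_body,
        "reward": reward,
    }

# Field names and values follow the feedback request sample above.
feedback = build_feedback_body(
    request_data={"data": {"names": ["AGE", "RACE", "DCAPS", "VOL"],
                           "ndarray": [[1, 1, 1, 100]]}},
    response_body={"meta": {"routing": {"eg-router": 1}},
                   "data": {"names": ["t:0", "t:1", "t:2"],
                            "ndarray": [[0.0274912914, 0.0024028436, 1.0586489e-4]]}},
    reward=1,
)
print(feedback["reward"])
```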

Call Model Service via Seldon Client


Based on the framework services provided by Seldon, the AI Hub supports calling the model service API via REST (internal or external) and gRPC after the Seldon Client is installed.


The following sample shows how to call the model service API via Seldon Client.

Install Seldon Client


Currently, the Seldon Client supports installation only via pip under Python. With access to the public network, it can be installed via Notebook or a Python editor. Sample:

pip install seldon_core


After installation, import the Seldon Client before calling the model service. Sample:

from seldon_core.seldon_client import SeldonClient
import seldon_core.seldon_client
import time

Get Model Service Interface URL


For how to get the model service interface URL, see the instructions in Call Model Service via HTTP.

Call Service via REST (Internal)


The following sample shows how to call the model service from within EnOS:

from seldon_core.seldon_client import SeldonClient
import seldon_core.seldon_client
import time

if __name__ == '__main__':
    sc = SeldonClient(deployment_name="demo01-instance-deployment", # set deployment name
                      namespace="mmc-rd9vj2-o15960025444321", # set namespace where deployment is running
                      gateway_endpoint="istio-ingressgateway.istio-system.svc.cluster.local:80",
                      gateway="istio")

    success_cnt = 0
    failure_cnt = 0
    start_time = time.time()
    range_num = 1
    for i in range(range_num):
        res = sc.predict(transport="rest",
                         json_data=[[1, 25, 167, 1.205, 265.133, 24.771, 860.392, 1.181,
                                     41.64, 1.329, 281.878, 18.26, 852.903, 1.389, 80.508, 13360]])
        if res.success:
            print(res)
            success_cnt = success_cnt + 1
        else:
            print(res)
            failure_cnt = failure_cnt + 1
    end_time = time.time()
    qps = range_num / (end_time - start_time)

    print(success_cnt)
    print(failure_cnt)
    print(qps)

Call Service via REST (External)


If you need to call the model service from outside EnOS, you need to expose the service as an externally accessible API through EnOS API Management, and then call the model service via REST (external).


The following sample shows how to call the model service from outside EnOS:

from seldon_core.seldon_client import SeldonClient
from seldon_core.seldon_client import microservice_api_rest_seldon_message
import seldon_core.seldon_client
import time


if __name__ == '__main__':
    sc = SeldonClient(gateway_endpoint="eap.{enos-environment}.{abc}.com",
                      gateway="istio")

    success_cnt = 0
    failure_cnt = 0
    start_time = time.time()
    range_num = 1
    for i in range(range_num):
        res = sc.predict(gateway_prefix="/eap/metrics/mmc-dw4pdv-o15960025444321/v0.2/mmc-rd9vj2-o15960025444321/demo01/demo01-instance-deployment/",
                         transport="rest",
                         json_data=[[1, 25, 167, 1.205, 265.133, 24.771, 860.392, 1.181,
                                     41.64, 1.329, 281.878, 18.26, 852.903, 1.389, 80.508, 13360]])
        if res.success:
            print(res)
            success_cnt = success_cnt + 1
        else:
            print(res)
            failure_cnt = failure_cnt + 1
    end_time = time.time()
    qps = range_num / (end_time - start_time)

    print(success_cnt)
    print(failure_cnt)
    print(qps)

Call Service via gRPC


The following sample shows how to call the model service via gRPC:

from seldon_core.seldon_client import SeldonClient
from seldon_core.seldon_client import microservice_api_rest_seldon_message
import seldon_core.seldon_client
import time

if __name__ == '__main__':
    sc = SeldonClient(deployment_name="demo01-instance-deployment", # set deployment name
                      namespace="mmc-rd9vj2-o15960025444321", # set namespace where deployment is running
                      gateway_endpoint="istio-ingressgateway.istio-system.svc.cluster.local:80",
                      gateway="istio")

    success_cnt = 0
    failure_cnt = 0
    start_time = time.time()
    range_num = 1
    for i in range(range_num):
        res = sc.predict(transport="grpc",
                         json_data=[[1, 25, 167, 1.205, 265.133, 24.771, 860.392, 1.181,
                                     41.64, 1.329, 281.878, 18.26, 852.903, 1.389, 80.508, 13360]])
        if res.success:
            print(res)
            success_cnt = success_cnt + 1
        else:
            print(res)
            failure_cnt = failure_cnt + 1
    end_time = time.time()
    qps = range_num / (end_time - start_time)

    print(success_cnt)
    print(failure_cnt)
    print(qps)