202101~06¶
This section introduces the fixes and enhancements released from January to June 2021 in Version 2.2.
Core Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| Zookeeper logs are easily overwritten by log rolling. | Collected Zookeeper logs to ES. | None. |
| Kafka Eagle bug: the consumer offset is reported as -1. | Added a snapshot mechanism to quickly restore consumption data from historical snapshots. | None. |
|  |  | During the update process, the latest logs may not be available. |
| All 2.2.0 online environments have only one default read-only account, k8smon, whose password is the same as the username, leading to a weak-password security problem. | Changed the Grafana password. | None. |
Resource Management¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| The RMS server fails when calling the Kafka topic creation interface. | Fixed the problem; calls to the Kafka topic creation interface no longer fail. | None. |
| The dynamically calculated upper limits for the stream computing design state and the batch processing container are too high and need to be adjusted to a static value of 4 CU. | Modified the rm-service.resource.static.limit configuration: adjusted the static upper limit of the stream computing design state from 100 to 4, and the static upper limit of the batch processing container design state from 5000 to 4 (see the sketch after this table). | With the upper limit adjustment, the amount of resources that users can apply for is limited. |
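The release note only gives the new cap value, not the enforcement code. Purely as an illustration, the sketch below shows a static upper-limit check of the kind the new configuration implies; the 4 CU value comes from the note, while the function and its name are hypothetical and not rm-service code.

```python
# Illustrative only: a static design-state cap like the one implied by the new
# rm-service.resource.static.limit value. The 4 CU limit is from the release
# note; the function itself is a hypothetical sketch, not rm-service code.
STATIC_LIMIT_CU = 4

def validate_design_state_request(requested_cu: float) -> float:
    """Reject design-state resource requests that exceed the static cap."""
    if requested_cu > STATIC_LIMIT_CU:
        raise ValueError(
            f"Requested {requested_cu} CU exceeds the static limit of {STATIC_LIMIT_CU} CU"
        )
    return requested_cu
```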
IoT Hub¶
Alert Management Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
|  |  | None. |
|  |  | None. |
| Alerts cannot parse negative time zones. | Fixed the error in parsing negative time zones in alerts (see the sketch after this table). | None. |
|  |  | None. |
|  |  | None. |
| Noticeable performance loss caused by the “get” operation for alias in the Update History Alert Tags API. | Optimized the performance of the Update History Alert Tags API. | None. |
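For context on the negative time zone fix, here is a minimal Python sketch of the kind of value involved: a timestamp with a negative UTC offset. The timestamp itself is an assumed example, not data from the alert service.

```python
from datetime import datetime

# Assumed example timestamp with a negative UTC offset (UTC-05:00), the kind of
# value the alert service previously failed to parse.
ts = "2021-03-01T08:30:00-05:00"
parsed = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S%z")
print(parsed.utcoffset())  # prints: -1 day, 19:00:00 (i.e. an offset of -5 hours)
```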
Device Integration Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| The SFTP Client node does not support filenames that contain “*”. | The SFTP Client node now supports filenames that contain “*”. | None. |
| Switch node: an error occurs when publishing a flow in which the node has one branch with multiple labels and another branch with the Others label. | Fixed the error; publishing no longer reports an error. | None. |
| Unable to handle CSV fields enclosed in double quotation marks; the double quotation marks are treated as text content. | Fixed the issue so that CSV fields with double quotation marks are handled correctly (see the sketch after this table). | None. |
| The MQTT Sub node executes repeatedly after receiving a message. | Fixed the repeated execution problem. The MQTT Sub node now executes only once. | None. |
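To illustrate the expected behaviour after the CSV fix, the minimal Python sketch below parses a double-quoted field containing a comma as a single value. The sample data is an assumption for illustration, not taken from the Device Integration Service.

```python
import csv
import io

# Assumed sample row: the second field is double-quoted and contains a comma.
raw = 'deviceId,remark\nd-001,"temperature, high"\n'
rows = list(csv.reader(io.StringIO(raw)))
print(rows[1])  # ['d-001', 'temperature, high'] -- the quoted field stays intact
```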
Enterprise Data Platform¶
Stream Processing Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
|  |  | When publishing, the Lag monitoring of stream operation and maintenance temporarily pauses the display of data. |
| The cluster task flow has a CheckPoint write failure problem, which increases driver memory usage when data is backlogged in the task. |  | When publishing, the cluster task flow is restarted and interrupted for about 10 minutes. |
| In version 0.3.0 of the Normalizer operator, RPC is called frequently in some scenarios, resulting in poor performance of the system and of advanced streams that use this operator. | Optimized the performance of the Normalizer operator in the 0.3.0 operator version (see the sketch after this table). | When publishing, streams that use the 0.3.0 operator are restarted and interrupted for about 10 minutes. |
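The release note does not describe how the Normalizer optimisation was implemented. As a general illustration of reducing per-record RPC calls, the hedged sketch below caches repeated lookups; the `_rpc_lookup` stub is a hypothetical stand-in, not the operator's actual code.

```python
from functools import lru_cache

def _rpc_lookup(point_id: str) -> dict:
    # Hypothetical stand-in for the remote call previously issued per record.
    return {"pointId": point_id}

@lru_cache(maxsize=1024)
def resolve_point_meta(point_id: str) -> dict:
    # Repeated lookups for the same point hit the cache instead of the RPC.
    return _rpc_lookup(point_id)
```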
Time Series Data Management¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| When a storage policy is updated, the TTL of the underlying table is updated every time by default. When the corresponding table contains a large amount of data, the update takes a long time, causing the retry mechanism to perform the update repeatedly, which affects normal data write performance. | Optimized the performance of the Normalizer operator in the 0.3.0 version of the operator. | When publishing, the storage policy service is unavailable. |
|  |  | When publishing, the data cleaning service is unavailable. |
Data Federation Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| data-query-proxy repeatedly adds data sources. | Fixed the problem of repeatedly adding data sources. | None. |
| There was no way to access HDFS across OUs to filter the big data accounts that users authorized to themselves. | Added a function for accessing HDFS and Hive across OUs. | None. |
| The address and domain name that APIM uses to verify tokens have changed, and different environments must remain compatible with the previous address. | Enhanced the token verification function to be compatible with both new and old tokens (see the sketch after this table). | None. |
| A download file size field is required to verify the callbackUrl function. | Added the download file size to the response when obtaining the download status. | None. |
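A minimal sketch of backward-compatible token verification: try a newer verification address first and fall back to the previous one. Both URLs and the request shape are assumptions for illustration, not the actual APIM interface.

```python
import requests

NEW_VERIFY_URL = "https://apim.example.com/v2/token/verify"      # assumed
OLD_VERIFY_URL = "https://apim-legacy.example.com/token/verify"  # assumed

def verify_token(token: str) -> bool:
    """Try the new verification address first, then fall back to the old one."""
    for url in (NEW_VERIFY_URL, OLD_VERIFY_URL):
        try:
            resp = requests.post(url, json={"token": token}, timeout=5)
            if resp.status_code == 200:
                return True
        except requests.RequestException:
            continue  # address unreachable; try the next one
    return False
```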
Data Synchronization Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| A security issue disabled some commands, affecting the function of creating Hive tables in script development. | Fixed the problem where blacklisted Hive script keywords affected normal functions during script development. | During the upgrade process, the IDE page may report an interface error for a short time (not more than 10 minutes). |
|  |  | During the upgrade process, the IDE page may report an interface error for a short time (not more than 10 minutes). |
Batch Processing Service¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| Script development does not support Shell, and the number of lines displayed in Hive query results is too small, which is inconvenient. | Script development now supports Shell scripts. | During the upgrade process, the IDE page may report an interface error for a short time (not more than 10 minutes). |
| Need to support dynamically adding Hive partitions / special characters in FTP data source account passwords. | The data synchronization front end now supports partitioning by column values / rewrote the utility classes in the dependency package to avoid the special-character bug. | None. |
|  | Table structure verification is performed before writing to the Hive table, with support for adding columns after task creation / fixed the bug of reading S3 in multi-threaded situations. | None. |
| Optimize the data preview speed when a wildcard matches a large number of small files. | Preview results are returned immediately after the first file matching the wildcard is found (see the sketch after this table). | None. |
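A minimal sketch of the new preview behaviour described above: scanning stops at the first file that matches the wildcard instead of collecting every match first. The directory-listing approach is an assumption for illustration, not the service's actual implementation.

```python
import fnmatch
import os
from typing import Optional

def first_match(directory: str, pattern: str) -> Optional[str]:
    """Return the first file matching the wildcard so it can be previewed immediately."""
    for name in sorted(os.listdir(directory)):
        if fnmatch.fnmatch(name, pattern):
            return os.path.join(directory, name)
    return None
```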
Enterprise Analytics Platform¶
MI Lab¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| The Mount Hadoop PVC option is not selected by default when creating a notebook instance. | Optimized the workspace configuration: the Mount Hadoop PVC option is selected by default, and a message indicating the potential risk is prompted if you choose not to mount the PVC. | None. |
| If the Data IDE account of the OU is not a standard account (not in the format of data_ouid), and Mount Hadoop PVC is selected when creating a Notebook, the Notebook might fail to start due to a missing keytab file. | Added a keytab mount path for specifying a keytab file. | None. |
MI Hub¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| Due to the resource and permission restrictions of the underlying image repository, the number of registered models and model versions is limited: at most 30 models can be registered, and at most 50 versions can be staged for a model. After reaching the limit, users need to contact the operation team to remove invalid models from the image repository; after the resources are released, users can register new models and stage new model versions. | Removed the limit on the number of online models/model versions. | None. |
| Unable to monitor model performance. | Added the model monitoring operator to monitor the performance of a model over a certain period of time. | None. |
| Due to security control, only administrators can create data source connections, and only the creator can test, modify, and delete data source connections. |  | None. |
| Unable to select a folder under a selected Git branch. | You can select a branch/tag, file, or folder after selecting a Git file, source, and project with Seldon Core import. | None. |
| The policy algorithm package is not supported in the computing architecture. | Created the base image for the power transaction algorithm, including pyscipopt, cvxpy, cplex, and ortools. Also added the Optimize or Trading image management package when staging a version. | None. |
| Need to change the default number of retries in the advanced settings and the resource lower limit. | The CPU limit was changed from 0.5 core to 0.1 core, and the memory limit from 1 G to 0.5 G. | None. |
| Unable to customize a version name when staging a model version. | The naming rule of a model version now supports naming the version according to the system timestamp when the version is published. | None. |
| Tag is not supported when using Git Source to select the image file for model deployment. | Git Source supports getting files from a Git branch or tag. | None. |
| Unable to specify a version when exporting a version on the Version Details page. | You can specify a version to export. | None. |
| All pods restart when you delete an instance whose name exceeds 52 characters. | A message is prompted when an instance name exceeds 52 characters (see the sketch after this table). | None. |
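A minimal sketch of the new guard on instance names. The 52-character limit is taken from the release note; the function itself is illustrative rather than MI Hub code.

```python
MAX_INSTANCE_NAME_LEN = 52  # limit stated in the release note

def check_instance_name(name: str) -> None:
    """Raise a prompt-style error if the instance name exceeds the limit."""
    if len(name) > MAX_INSTANCE_NAME_LEN:
        raise ValueError(
            f"Instance name '{name}' exceeds {MAX_INSTANCE_NAME_LEN} characters"
        )
```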
MI Pipelines¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| When OnExit is added to the workflow, “Internal Server Error” occurs when searching operators in the DAG graph. | Fixed the issue; the error no longer occurs. | None. |
Application Enablement Platform¶
Business Process Management¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| Business process management does not support batch approvals. |  | None. |
EnOS HMI¶
| Issue | Fix / Enhancement | Impact (if any) |
| --- | --- | --- |
| When EnOS HMI calls the EnOS asset API, device and site data could not be displayed due to a paging parameter error. | Fixed the batch authorization function. | None. |