Unit 2: Initializing the OU


Congratulations! Now that you have logged in to the EnOS Management Console, you can initialize your OU to make it ready for your project: set up the OU profile and security settings, manage users and permissions, get the environment information, plan resources, and purchase applications.

Managing the OU Profile and Security Settings

After logging in to the EnOS Management Console with the administrator account, you can go to the IAM > Organization Profile page to edit the basic information of your OU and change your user name as needed. For more information, see Managing the Organization Information and Organization Accounts.


To change the security settings for your organization, go to the IAM > Security Setting page and update the password policy, login IP restrictions, and session expiration time as needed. For more information, see Setting the Security Options.

Managing Users and Permissions

To enable collaboration among multiple users, you can create three types of user accounts: ordinary users (internal users within the OU), external users (imported from another OU), and LDAP users. To secure your assets and data, EnOS enforces access control for users at the following levels (see the illustrative sketch after this list):

  • The service level - a user needs to request the proper access permissions from the OU administrator to be able to read, write, or control objects in a service.

  • The asset level - a user can access only the assets that have been authorized to them.
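
The following is a minimal sketch of how these two checks combine conceptually. It is an illustration only, not EnOS code: the names (User, ServicePermission-style fields, is_access_allowed) are hypothetical, and the real checks are performed by EnOS IAM on the server side.

    # Illustrative model of two-level access control (hypothetical, not the EnOS API).
    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        # Service-level permissions granted by the OU administrator,
        # for example {"asset-service": {"read", "write"}}.
        service_permissions: dict = field(default_factory=dict)
        # Asset IDs that have been authorized to this user.
        authorized_assets: set = field(default_factory=set)

    def is_access_allowed(user: User, service: str, action: str, asset_id: str) -> bool:
        """Access requires BOTH a service-level permission AND asset-level authorization."""
        has_service_permission = action in user.service_permissions.get(service, set())
        has_asset_authorization = asset_id in user.authorized_assets
        return has_service_permission and has_asset_authorization

    # Example: the user can read in the asset service, but only for assets granted to them.
    alice = User("alice",
                 service_permissions={"asset-service": {"read"}},
                 authorized_assets={"asset-001"})
    print(is_access_allowed(alice, "asset-service", "read", "asset-001"))   # True
    print(is_access_allowed(alice, "asset-service", "read", "asset-002"))   # False (asset not authorized)
    print(is_access_allowed(alice, "asset-service", "write", "asset-001"))  # False (no write permission)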


To create user accounts and assign access permissions to users, go to the IAM > User page. For information about how to manage users and user permissions, see Creating and Managing Users.

Getting Environment Information

To connect your devices to EnOS through MQTT, consume the subscribed asset data, or invoke API services, you will need to get the EnOS environment information.


The EnOS environment information varies with the cloud region and instance where EnOS is deployed. Log in to the EnOS Management Console and go to Help > Environment Information in the upper right corner to get the environment information.
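
As a quick sanity check before configuring devices or SDKs, you can verify that the MQTT broker address listed on the Environment Information page is reachable from your network. The sketch below is illustrative only: the host name and port are placeholders to be replaced with the values shown on the page, and it tests TCP connectivity only; the actual device login is handled by the EnOS Device SDKs.

    # Reachability check for the MQTT broker address from the Environment Information page.
    # The host and port below are placeholders; copy the values shown in your environment.
    import socket

    MQTT_BROKER_HOST = "mqtt.example-enos-environment.com"  # placeholder host
    MQTT_BROKER_PORT = 11883                                 # placeholder port

    def check_broker_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to the broker endpoint can be opened."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as error:
            print(f"Cannot reach {host}:{port}: {error}")
            return False

    if check_broker_reachable(MQTT_BROKER_HOST, MQTT_BROKER_PORT):
        print("MQTT broker endpoint is reachable; proceed with device connection.")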

Planning Resources

Before you start your project on EnOS, you first need to evaluate the volume of your devices, the amount of data to be uploaded to EnOS, and the computing scale you need. This evaluation helps you determine the computing and storage resources you will need to request to support your project.
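
As an example of this kind of estimation, the back-of-the-envelope calculation below derives a rough streaming load and raw storage volume from the number of devices, measurement points per device, and reporting interval. The inputs and formulas are illustrative assumptions for planning discussions only, not EnOS sizing rules; confirm actual specifications against the Resource Specification documentation.

    # Illustrative capacity estimation (assumed inputs and formulas, not EnOS sizing rules).
    DEVICE_COUNT = 2000              # number of connected devices (assumption)
    POINTS_PER_DEVICE = 20           # measurement points per device (assumption)
    REPORT_INTERVAL_SECONDS = 60     # each point reports once per minute (assumption)
    BYTES_PER_RECORD = 50            # rough size of one stored data record (assumption)
    RETENTION_DAYS = 365             # how long raw data is kept (assumption)

    # Streaming load: data points arriving per second across all devices.
    points_per_second = DEVICE_COUNT * POINTS_PER_DEVICE / REPORT_INTERVAL_SECONDS

    # Raw storage volume over the retention period, in gigabytes.
    records_per_day = DEVICE_COUNT * POINTS_PER_DEVICE * (86400 / REPORT_INTERVAL_SECONDS)
    storage_gb = records_per_day * RETENTION_DAYS * BYTES_PER_RECORD / 1024**3

    print(f"Estimated streaming load: {points_per_second:.0f} data points per second")
    print(f"Estimated raw storage over {RETENTION_DAYS} days: {storage_gb:.1f} GB")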


Based on your business requirements, you can plan for the following resources.

  • Device File Storage: IoT Hub enables storing the measurement point data of connected devices with the device file storage resource. A storage space of 0.1 TB is allocated by default when an OU is created. By estimating the number of connected devices and the device file size, you can decide how much device file storage you need to request.

  • Data Federation: Before creating data federation channels, you need to request the Data Federation resource. Different resource specifications correspond to different data querying and writing capabilities. Each resource can be associated with only one channel at a time.

  • Time Series Database: To store time series data that is ingested from devices for processing or application development, you need to request the Time Series Database resource, which includes write capacity and storage space.

  • Stream Processing: The specification of the Stream Data Processing resource is defined by the calculation capacity of the streaming engine, that is, the number of data points that can be processed every second. By estimating the number of connected devices and measurement points, you can decide how much computing resource you need to request.

  • Data Archiving: Before archiving asset data from either the real-time message channel or the offline message channel to the target storage, you need to request the Data Archiving resource.

  • Batch Processing - Queue: Before using the Data Sandbox notebook to run offline data analytics tasks, you need to request the Batch Processing - Queue resource. If the tasks require higher CPU usage, choose the Computing-Intensive specification; if the tasks require higher memory usage, choose the Memory-Intensive specification.

  • Batch Processing - Container: To run big data analysis tasks (such as Python and Shell task nodes) using the batch processing service, you need to request the Batch Processing - Container resource.

  • Data Warehouse Storage: A subject-oriented and integrated data store for Hive tables that are created in the data warehouse. Select the appropriate storage size according to your actual business requirements (10 to 1,000 GB).

  • File Storage HDFS: HDFS is used for big data analysis and storage scenarios. Data stored in File Storage HDFS can be accessed by creating Hive external tables. Select the appropriate storage size according to your actual business requirements (10 to 1,000 GB).

  • Data Sandbox: Before using the Data Sandbox notebook to develop scripts for data processing and analysis, you need to request the Data Explorer sandbox resource. Currently, you can request 3 Data Explorer sandbox instances for different data analysis scenarios.

  • ML Model - Container: The MI Hub of the Enterprise Analytics Platform provides an ML model distribution center for users and data scientists, supporting a full model registration process and hosting services for model developers. Before using MI Hub to deploy ML models, you need to request the ML Model - Container resource.


For details about resource management, see Resource Specification.

Next Unit

Connecting and Managing Devices