Feature List Overview
The qData Data Platform is built on the core principles of "Standards First, Asset Visualization, Closed-Loop Governance, Open Services, and Intelligent Enablement."
It covers the full lifecycle of data management—from ingestion, modeling, and governance to service delivery—helping enterprises build a unified, trustworthy, usable, and shareable data foundation.
Currently, qData provides 12 core functional modules: System Management, Basic Management, Data Collection, Data Standards, Data Assets, Data Governance, Data Quality, Data Security, Data Services, Data Resource Portal, Data Visualization, and Artificial Intelligence.
| No. | Module | Submodule | Function Description | Open Source | Commercial | Differences in Open Source |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | System Management | User Management | Supports full lifecycle management of user accounts (create, edit, delete, enable/disable, search), password reset, and role assignment, enabling unified identity management and meeting organizational user governance requirements. | ✅ (Included) | ✅ | / |
| | | Role Management | Provides a role-based access control (RBAC) mechanism, supports custom roles and fine-grained permission configuration, and enables flexible control over functional and data permissions (see the RBAC sketch after the table). | | | |
| | | Menu Management | Supports visual configuration of system menus and function points in a multi-level tree structure, bound to roles for dynamic control over the interface and its permissions. | | | |
| | | Department Management | Supports hierarchical configuration and maintenance of the organizational structure, building an enterprise-level department tree as the organizational foundation for permission allocation and task ownership. | | | |
| | | Position Management | Supports position definition and user binding, enabling integrated "person-role-permission" management and improving alignment between personnel responsibilities and system permissions. | | | |
| | | Dictionary Management | Provides system-level data dictionary management, supporting unified maintenance of shared codes such as statuses, types, and categories to keep data standards consistent and front-end display uniform. | | | |
| | | Parameter Settings | Supports centralized configuration and dynamic adjustment of system runtime parameters, improving system flexibility and maintainability. | | | |
| | | Notifications & Announcements | Supports publishing, editing, and managing system announcements, operations notifications, and business alerts, enabling targeted push and organization-wide delivery of important information. | | | |
| | | Log Management | Centrally records user operation logs and system runtime logs; supports multi-dimensional search, download, and audit by time, user, operation type, and more, meeting compliance and troubleshooting requirements. | | | |
| 2 | Basic Management | Theme Management | Provides thematic classification for data assets, supporting organization by business domain (e.g., customer, finance, supply chain) to improve asset discoverability and management logic; one of the core dimensions of data asset management. | ✅ | ✅ | / |
| | | Application Management | Provides unified management of integrated applications, supporting application registration, authentication, invocation, and monitoring to keep cross-system data services secure, standardized, and controllable. | | | |
| | | Category Management | Provides multi-dimensional category management, supporting unified classification of data assets, logical models, data elements, tasks, data development, jobs, and APIs; tree structures and multi-level management enable flexible organization and rapid search, improving the standardization and clarity of asset management. 1. Data Asset Categories: supports add, modify, delete, and query operations, and allows binding data assets to categories in the asset menu; tree-structured, multi-level classification improves classification and search efficiency. 2. Logical Model Categories: supports category management and binding of logical models, with tree-based hierarchical display for unified organization and cross-project search. 3. Data Element Categories: provides category creation, maintenance, and binding for data elements, with multi-level classification for centralized management and rapid location of standards. 4. Task Categories: supports categorized management and binding of integration tasks, with a hierarchical display that helps users manage different tasks clearly and facilitates cross-task scheduling and reuse. 5. Data Development Categories: provides category management and binding for data development tasks, organizing development assets in a tree hierarchy to improve manageability. 6. Job Categories: supports maintenance and binding of job categories, with a hierarchical display that helps operations staff manage jobs intuitively, improving scheduling and monitoring efficiency. 7. API Categories: provides functions to add, modify, and bind API categories, organizing services in a tree hierarchy for unified management and standardized invocation. | | | |
| 3 | Data Collection | Data Source Management | Provides access to and management of multiple databases and message queues, meeting diverse access needs across relational databases, big data platforms, and real-time message streams, and laying the foundation for subsequent data processing, analysis, and governance. | ✅ | ✅ | / |
| | | | 1. Relational Database Types: supports access to and management of mainstream relational databases including MySQL, PostgreSQL, Oracle, SQL Server, Dameng (DM8), and KingbaseES, all configurable and manageable on one platform. | 🟡 (Partially Included) | ✅ | Open source version supports: MySQL, Oracle, Dameng (DM8). |
| | | | 2. Big Data Database Types: supports connection to and management of big data databases such as Hive, Phoenix (on HBase), Doris, and ClickHouse, suitable for metadata collection and data access preparation in big data environments, improving big data asset usability. | ❌ (Not Included) | ✅ | / |
| | | | 3. Message Queue Types: supports Kafka and other mainstream message queues as data sources for real-time stream access and configuration management, ensuring continuity of real-time data processing and analysis. | ✅ | ✅ | / |
| | | | 4. File Types: supports data source access for FTP, Alibaba Cloud OSS, HDFS, and more. | ❌ | ✅ | / |
| | | Connection Test | Validates the connectivity and availability of integrated data sources, services, and external systems, so connectivity issues surface immediately after configuration, reducing access-failure risk and improving operational efficiency (see the connection-test sketch after the table). | ✅ | ✅ | / |
| 4 | Data Standards | Logical Model | Provides visual logical modeling and management, unifying data structure design and bridging standard definition with physical implementation to improve modeling efficiency and consistency; a core tool for data standardization. 1. Logical Model Management: supports creation, modification, query, and deletion of logical models, with flexible table-structure and field configuration to meet diverse modeling needs. | ✅ | ✅ | / |
| | | | 2. Table Structure Import and Modeling: supports extracting table structures directly from databases such as MySQL, PostgreSQL, Oracle, SQL Server, Dameng (DM8), KingbaseES, Hive, Doris, and ClickHouse, then adjusting and saving them to accelerate logical model construction and reuse. | 🟡 | ✅ | Open source version supports: MySQL, Oracle, Dameng (DM8). |
| | | | 3. Field Standardization: supports associating logical model fields with standard data elements, unifying naming, types, and formats at the field level, promoting standards adoption, and supporting automated inspection and cleaning. 4. Materialization and Deployment: supports materializing logical models into physical tables and deploying them to various databases (e.g., MySQL, Oracle, Dameng), integrating standard models with physical data and keeping design consistent with implementation. | ✅ | ✅ | / |
| | | Dictionary Table Management | Provides unified definition and maintenance of dictionary tables, supporting adding, modifying, deleting, and querying code items to keep dictionary definitions consistent and value ranges standardized, improving data management standardization and usability. 1. Dictionary Table Maintenance: supports adding, editing, and deleting dictionary tables, covering basic information such as name, type, and format, ensuring clear and complete definitions. 2. Dictionary Item Management: provides add, modify, delete, and query functions for dictionary items, with batch maintenance and quick search for flexible management of item content. 3. Standardization Support: unified dictionary tables standardize value ranges across business systems, avoiding inconsistent definitions across systems and improving the accuracy of data sharing and integration. | ✅ | ✅ | / |
| | | Data Element Management | Provides unified definition and management of data elements, supporting adding, modifying, deleting, and querying elements and clarifying standard information such as field name, type, length, and format, ensuring data consistency and reusability across systems and scenarios. 1. Data Element Maintenance: supports adding, editing, deleting, and querying data elements, covering field name, type, length, and format, ensuring complete and standardized definitions. 2. Data Element Binding: supports associating data elements with asset fields or logical model fields, keeping actual data consistent with standard definitions and improving controllability. 3. Rule Association: allows binding cleaning or inspection rules to data elements, deeply integrating standards with quality control. 4. Standard Reuse: provides cross-project, cross-system reuse of data elements, avoiding redundant definitions and improving data management efficiency and consistency. | ✅ | ✅ | / |
| | | Logical Materialization | Quickly turns logical models into physical tables, integrating model design with data implementation and improving modeling efficiency and consistency (see the materialization sketch after the table). 1. Model Materialization: supports generating physical table structures directly from logical models, including table names, field definitions, and constraints, ensuring accurate implementation of logical designs. | ✅ | ✅ | / |
| | | | 2. Multi-Database Deployment: provides materialization support for multiple databases such as MySQL, PostgreSQL, Oracle, SQL Server, Dameng (DM8), KingbaseES, Hive, Doris, and ClickHouse, meeting application needs in different environments. | 🟡 | ✅ | Open source version supports: MySQL, Oracle, Dameng (DM8). |
| | | | 3. Standardized Management: unifies logical modeling, materialization deployment, and data standards under one management process, reducing redundant table creation and improving development efficiency and data consistency. | ✅ | ✅ | / |
| | | Materialization Records | Records and tracks the logical model materialization process, helping users understand execution, version changes, and deployment history, improving process transparency and controllability. 1. Materialization Execution Records: automatically saves the details of each materialization run, including operator, execution time, target database, and result status, for subsequent audit and problem location. 2. Version Change Tracking: records version changes during materialization so users can trace each version of materialized content, keeping model iteration controllable. 3. Deployment History Management: provides query and management of materialization deployment history, helping users quickly understand how physical tables were generated and changed. 4. Exception Handling Support: automatically records error information when materialization fails, combined with log output to assist troubleshooting and repair. | ✅ | ✅ | / |
| 5 | Data Assets | Data Discovery | Provides data discovery, structure analysis, change tracking, and task scheduling, helping enterprises fully understand the status and evolution of data assets and improving metadata management transparency and controllability. 1. Data Discovery Tasks: supports adding, modifying, deleting, querying, and bringing tasks online/offline; automatically extracts and summarizes table and field structure, scale, and change information from multiple relational databases (MySQL, Oracle, SQL Server, Dameng, etc.), providing a foundation for data asset inventory and unified management. 2. Field and Structure Analysis: automatically detects table structure and identifies changes in field names, types, primary keys, partitions, and more; supports comparison of field additions, deletions, and type adjustments, helping users quickly grasp structural evolution and keep data models stable (see the schema-diff sketch after the table). 3. Metadata Change Management: tracks metadata changes (creation, modification, deletion) throughout the lifecycle, with versioning and historical rollback, keeping metadata evolution transparent and controllable and facilitating audit and issue tracking. 4. Status Monitoring: monitors table additions and deletions in data sources in real time, automatically captures asset changes, and pushes alerts so users can quickly perceive and respond. 5. Scheduling Management: provides visual task configuration and scheduling, supporting timed, periodic, and manual strategies, with real-time viewing of execution status and logs for flexible control and efficient operations. | ❌ | ✅ | / |
| | | Asset Management | Provides list-based management of asset maps, displaying and searching generated asset maps in a structured way for quick location, viewing, and maintenance of the asset panorama. 1. Asset List Management: centrally displays all generated asset maps in list form, including map name, category, asset description, creation time, and data tags, for unified management. | ✅ | ✅ | / |
| | | | 2. Multi-Type Asset Management: database table type manages table assets in business databases and supports structured data management; API type supports registration and invocation of external or internal API assets; file type covers storage and management of common file assets such as Excel, CSV, and documents for unified archiving and sharing. | 🟡 | ✅ | Open source version does not support external APIs or unstructured data. |
| | | | 3. Multi-Dimensional Search: provides quick search by name, type, theme, creation time, and more, helping users efficiently find target maps. 4. Operations and Maintenance: provides adding, editing, and deleting of maps, helping users flexibly maintain and optimize asset map content. | ✅ | ✅ | / |
| | | Asset Details | Provides comprehensive information display and quality monitoring for individual data assets, covering basic attributes, field structure, quality assessment, and lineage relationships, helping users fully understand asset status and value. 1. Basic Information Display: shows basic attributes including name, type, theme, category, creator, and creation time, keeping asset information clear and visible. 2. Structure and Field Information: displays the asset's table structure and field details, including field name, type, length, constraints, and default values, so users can quickly grasp the data structure. | ✅ | ✅ | / |
| | | | 3. Lineage and Dependency Relationships: provides upstream and downstream lineage analysis, visually showing data dependency paths and helping users understand data flow and impact scope. | ❌ | ✅ | |
| | | | 4. Quality Assessment Task Management: supports configuring quality assessment tasks for individual assets, displaying task name, execution strategy, execution status, and execution time for unified scheduling and monitoring. 5. Quality Dimension Statistics: provides quality statistics across completeness, accuracy, consistency, timeliness, and standardization, outputs an overall data quality score, and shows the proportion of problem data. 6. Quality Trend Analysis: uses charts to show how data quality changes over time, helping users track improvement. 7. Rule Configuration and Management: displays quality rules bound to assets, with adding, editing, deleting, and enabling/disabling of rules for flexible quality control. | ✅ | ✅ | |
| | | | 8. Problem Data Handling: provides a repair entry for abnormal data found during assessment, with manual intervention supported, ensuring continuous optimization of data quality. | ❌ | ✅ | |
| | | Asset Review | Provides review and release process management for data assets, ensuring new or changed assets pass compliance, standardization, and completeness checks before going live, improving the controllability and trustworthiness of asset management. 1. Review Task Management: provides a pending-review asset list with asset basics, change content, and submitter information, so reviewers can locate and process items quickly. 2. Review Operations: supports approval, rejection, and return-for-modification, with review comments, keeping review results transparent. | ✅ | ✅ | / |
| | | Data Query | Provides unified query and access for multi-source data, with flexible query conditions and result display, helping users quickly obtain the data they need and improving the convenience and efficiency of data usage. 1. Multi-Source Query Support: supports unified queries across multiple integrated data sources (relational databases, big data platforms, etc.), avoiding cross-system switching. 2. Result Display and Export: query results are shown in tables and can be exported to formats such as Excel/CSV for subsequent analysis and sharing. | ✅ | ✅ | |
| | | Data Lineage | Provides visual tracking of upstream and downstream relationships among data assets, helping users understand data flow from source to application and improving traceability, impact analysis, and troubleshooting efficiency (see the lineage sketch after the table). 1. Lineage Relationship Visualization: graphically displays upstream and downstream dependencies between tables, fields, and tasks, so users quickly understand data flow paths. 2. Field-Level Lineage Analysis: supports lineage tracking down to the field level, clarifying field sources and destinations and keeping data definitions consistent and interpretable. 3. Upstream and Downstream Impact Analysis: automatically identifies affected downstream objects when source data or logical models change, helping users assess the impact scope and reduce risk. 4. Multi-Dimensional Association: lineage information links with modules such as assets, tasks, and rules, supporting cross-module traceability and analysis. 5. Dynamic Update: lineage relationships update dynamically with data integration and task execution, keeping displayed results consistent with actual operations. | ❌ | ✅ | / |
| | | Asset Quality | Provides quality assessment and monitoring for individual data assets across standardization, completeness, accuracy, consistency, and timeliness, helping users understand asset health and improve it continuously. 1. Quality Task Management: supports configuring quality assessment tasks for assets and viewing task name, execution strategy, execution status, and execution time for unified scheduling and tracking. 2. Quality Dimension Statistics: tests assets across the five dimensions above and outputs an overall quality score and problem-data ratio. 3. Quality Trend Analysis: provides visual charts showing quality changes over time, helping users track improvement effectiveness. 4. Rule Configuration and Management: supports adding, editing, deleting, and enabling/disabling quality rules at the asset level, keeping quality control flexible and configurable. | ✅ | ✅ | / |
| | | | 5. Problem Data Handling: provides a repair entry for detected abnormal data, with manual intervention supported, keeping asset data quality stable and controllable. | ❌ | ✅ | |
| 6 | Data Governance | Inspection Rules | Based on national standard methodologies, provides inspection capabilities across five quality dimensions (completeness, uniqueness, validity, consistency, and timeliness), helping enterprises quickly establish a unified data quality assessment and control system and ensuring data accuracy and reliability (see the completeness-check sketch after the table). 1. Rule Tailoring: enterprises can develop quality rules for their own business needs, improving rule flexibility and controllability while avoiding redundancy and unnecessary interference. 2. Quality Dimension Configuration: the rule tree on the left supports categorized management of the five quality dimensions, making rules easy to locate and maintain. 3. Standard Rule Entry: supports batch rule entry via external links, including code, name, description, usage scenarios, and examples, keeping rule definitions clear, reusable, and uniformly applicable across scenarios. | 🟡 | ✅ | 1. Open source version includes 3 inspection rules as a reference for secondary development; 2. Commercial version includes 20+ inspection rules. |
| | | Cleaning Rules | Provides configurable cleaning across six quality dimensions (accuracy, completeness, consistency, uniqueness, validity, timeliness), driven by standardized rules for automatic processing, improving data reliability and usability (see the cleaning sketch after the table). 1. Accuracy Correction: locates and corrects incorrect or inconsistent values, covering outlier handling, format standardization, and more, improving data credibility. 2. Completeness Repair: fills missing values, deletes invalid records, and completes mandatory fields according to rules, keeping key information complete. 3. Consistency Correction: unifies units, formats, codes, and value domains, eliminating cross-source/cross-table differences and keeping definitions consistent. 4. Uniqueness Maintenance: deduplicates and merges duplicate entities and generates or verifies unique keys, avoiding statistical bias from duplicate records. 5. Validity Processing: identifies and replaces illegal values and dirty data, filtering by value ranges and validation rules to keep data usable. 6. Timeliness Adjustment: corrects timestamps, fills time gaps, and aligns time zones and timeliness strategies, keeping time dimensions accurate and sequences complete. | 🟡 | ✅ | 1. Open source version includes 5 cleaning rules as a reference for secondary development; 2. Commercial version includes 30+ cleaning rules. |
| | | Data Integration | Provides configuration of input, output, and transformation nodes across multiple databases, big data platforms, and streaming message systems, enabling flexible data collection, cleaning, and distribution for complex integration and processing needs. 1. Input Nodes: supports access from multiple data sources, including big data platforms (Hive, Doris, ClickHouse, HBase), mainstream relational databases (MySQL, PostgreSQL, Oracle, SQL Server, Dameng (DM8), KingbaseES), streaming message queues (Kafka), and external API interfaces, meeting multi-source collection needs. 2. Transformation Nodes: supports parsing fields from input nodes and applying data cleaning rules to complete standardization and quality assurance, keeping output data accurate and consistent. 3. Output Nodes: supports writing data to big data platforms (HDFS, Hive, HBase) and relational databases (MySQL, Oracle, Dameng DM8, KingbaseES), and can output to streaming message queues such as Kafka for multi-target distribution. | 🟡 | ✅ | 1. Open source version only supports relational-to-relational database integration; 2. Open source version includes 3 transformation components as a reference for secondary development; 3. Commercial version includes 15+ transformation components. |
| | | Data Inspection | Provides systematic inspection and audit of data, discovering anomalies, defects, and inconsistencies against predefined rules, helping users identify risks promptly and improve data quality and reliability. 1. Inspection Rule Configuration: supports configuring inspection rules by completeness, uniqueness, validity, consistency, and timeliness for comprehensive coverage. 2. Result Analysis and Display: presents inspection results as reports or charts, visually showing the quantity, distribution, and proportion of problem data to convey issue severity. 3. Problem Data Handling: provides marking, export, and a repair entry for problem data, supporting manual intervention or linkage with cleaning rules for closed-loop handling. | ✅ | ✅ | / |
| | | Data Cleaning | Provides automated cleaning and correction rules for raw data, supporting multi-dimensional quality processing that eliminates anomalies, missing values, and inconsistencies, ensuring data accuracy, completeness, and usability. Cleaning Rule Configuration: supports configuring cleaning rules by accuracy, completeness, consistency, uniqueness, validity, and timeliness, covering common quality-issue scenarios. | ✅ | ✅ | / |
| | | Data Development | Provides lifecycle management, development, and debugging for real-time stream processing tasks on Flink and other stream engines, supporting high-throughput, low-latency processing and helping enterprises build an efficient, stable real-time computing and development environment. 1. Data Development Task Management: supports full lifecycle management of real-time data tasks, with configuration and viewing of task name, type, execution engine, scheduling cycle, and more, plus task status monitoring and query, so users can follow task progress and running status in real time. 2. Real-Time Stream Data Development: based on the Flink execution engine, supports real-time synchronization and computing tasks, with scheduling-cycle and resource configuration options so tasks can be flexibly scheduled and efficiently executed in high-throughput, low-latency scenarios. 3. IDE Workspace: provides a visual integrated development environment with SQL script writing, real-time log viewing, and task debugging, plus syntax highlighting and auto-completion, improving development efficiency and user experience. | 🟡 | ✅ | Open source version lacks big data execution engines (Hive, Spark, Flink). |
| | | Job Management | Provides unified configuration, scheduling, monitoring, and optimization of data jobs, covering dependency relationships, resource scheduling, exception handling, and cross-module orchestration, keeping data processing workflows efficient, stable, and controllable. 1. Visual Configuration of Task Dependencies: a graphical interface with drag-and-drop dependency configuration intuitively defines execution order and automatically generates task flow diagrams, improving the clarity and maintainability of job orchestration. 2. Distributed Load Balancing Strategy Management: supports load-balancing strategy configuration in distributed environments, dynamically allocating computing resources by task priority and resource usage to improve overall performance and execution stability. 3. Automatic Retry Strategy Configuration: supports automatic retry for tasks, with customizable retry counts, intervals, and failure-handling logic, reducing the risk of interruption from transient errors (see the retry sketch after the table). 4. Task Exception Monitoring and Alert Center: monitors task status in real time, alerts on failures, timeouts, insufficient resources, and more, and supports handling mechanisms such as task re-run, ensuring timely response to exceptions. 5. Integration of Data Integration and Data Development Nodes: brings data integration tasks (e.g., ETL) and data development tasks (e.g., SQL scripts) into the job management platform for cross-module orchestration and collaborative operation, improving overall processing synergy and scalability. | ❌ | ✅ | / |
| | | Operations Management | Provides centralized management and run tracking for jobs and data development tasks, with instance runs, log viewing, and troubleshooting, keeping task execution transparent, controllable, and reproducible. 1. Job Task Management: supports viewing job instance lists, with each execution automatically generating an instance for run tracking; provides tree-structured display of subtasks that presents task hierarchy and dependencies intuitively; supports viewing and downloading execution logs of each task node for problem location and analysis; and provides job instance re-run so tasks can be re-executed after exceptions, improving stability. 2. Data Development Task Management: displays data development task instance lists for unified management and progress tracking, and provides viewing and downloading of execution logs to assist developers with debugging and problem handling. | ✅ | ✅ | / |
| | | Data Entry | Provides template-based data entry and reporting, supporting users in entering, modifying, and submitting business data as needed, ensuring complete and timely data collection and meeting business needs for data supplementation and correction. 1. Entry Template Management: supports creating and maintaining entry templates, with flexible configuration of field names, types, validation rules, and more, keeping the entry process standardized and unified. 2. Online Data Entry: provides an online form-based entry interface with adding and editing of data, improving entry efficiency. | ❌ | ✅ | / |
| | | Project Basic Management | Provides unified management of projects within the platform, supporting project creation, maintenance, and configuration, clarifying project boundaries and resource ownership, and helping enterprises organize data and tasks by project. 1. Project Creation and Maintenance: supports adding, editing, deleting, and querying projects, forming a unified project list for centralized management. 2. Member and Role Management: supports assigning members to projects and configuring their roles and permissions, keeping responsibilities clear and permissions controlled within each project. | ✅ | ✅ | / |
| 7 | Data Quality | Data Quality Tasks | Provides rule-based quality detection and task management, with task configuration, scheduling, and result tracking, helping users continuously monitor data quality and ensuring reliability and usability. 1. Task Configuration and Management: supports adding, editing, deleting, and categorizing quality tasks, covering task name, execution strategy, evaluation objects, and rules. 2. Multi-Dimensional Quality Detection: applies completeness, uniqueness, validity, consistency, timeliness, and other quality rules within tasks to test target data comprehensively. 3. Scheduling and Execution: supports timed, periodic, and manual scheduling, keeping task operation flexible and meeting monitoring needs across business scenarios. 4. Execution Monitoring and Logs: provides real-time monitoring of task execution status, with log query and download for locating anomalies and tuning task configuration. 5. Result Display and Processing: displays results in reports and charts, marks problem data, and provides repair entries, supporting manual intervention or subsequent cleaning linkage. | ✅ | ✅ | / |
| | | Quality Task Logs | Provides logging and tracking for the execution of data quality tasks, helping users understand task operation fully, locate problems quickly, and keep quality monitoring transparent and controllable. 1. Execution Process Recording: automatically records the entire execution process, including start time, execution nodes, rule application, and end status, ensuring traceability. 2. Exception Information Capture: when execution errors or rule-validation failures occur, captures exception details in real time so users can locate and handle issues quickly. | ✅ | ✅ | / |
| | | Task Quality Reports | Provides visual reporting and analysis of quality-task results, outputting test results and problem distributions across multiple dimensions, helping users assess data quality intuitively and guiding subsequent optimization. 1. Report Generation: automatically generates a quality report after each task execution, including task basics, execution time, execution objects, and rule coverage. 2. Quality Indicator Statistics: outputs results by completeness, uniqueness, validity, consistency, and timeliness, with problem-data counts and proportions, comprehensively reflecting data health. 3. Problem Distribution Analysis: uses charts to show the distribution of abnormal data, with problem location by table, field, and rule, helping users identify weak links quickly. 4. Trend Comparison: supports comparison with historical reports, showing improvement trends in quality indicators and helping track effectiveness. | ✅ | ✅ | / |
| | | Problem Data Handling | During quality evaluation, the system automatically identifies abnormal or non-compliant data and supports manual repair and adjustment, ensuring data accuracy and consistency. This helps enterprises discover problems quickly and handle them promptly, preventing erroneous data from spreading through the business and keeping overall data quality reliable. | ❌ | ✅ | / |
| 8 | Data Security | Asset Sensitivity Level | Provides classification management of data asset sensitivity, labeling asset levels by data type and usage scenario, helping enterprises implement security and compliance requirements and reducing the risk of sensitive data leakage. 1. Level Definition and Maintenance: supports defining multiple sensitivity levels (e.g., public, internal, sensitive, core), adjustable to enterprise norms, keeping classification standards unified. 2. Automatic Identification and Labeling: combines data classification with rule libraries to automatically identify common sensitive information (e.g., ID numbers, phone numbers, bank card numbers) and label the corresponding level. | ✅ | ✅ | / |
| | | Data Asset Classification and Grading Management | Provides multi-dimensional classification and grading of data assets, organizing and labeling assets by business attributes, sensitivity levels, and more, improving the standardization, security, and usability of asset management. 1. Multi-Dimensional Classification Management: supports classifying assets by business domain, theme, category, and more, using tree structures and multi-level management to keep asset organization clear and searchable. 2. Sensitivity Level Grading: provides a grading mechanism for asset sensitivity, with levels such as public, internal, sensitive, and core, ensuring security and compliance during use and sharing. | ✅ | ✅ | / |
| | | Data Security Technical Protection | Provides multi-layered data security protection covering access control, data encryption, desensitization, and operation audit, keeping data secure and compliant during storage, transmission, and use. 1. Access Control: uses the user, role, and permission system to control data access scope at fine granularity, preventing unauthorized access. 2. Data Encryption: supports encrypting stored and transmitted data so sensitive information is not leaked in storage or transit. 3. Data Desensitization: provides static and dynamic desensitization, masking or replacing sensitive fields (e.g., name, phone number, ID number) to keep data secure and controllable during sharing and use (see the masking sketch after the table). | ❌ | ✅ | / |
| | | Access Control and Permission Management | Provides fine-grained permission management based on users, roles, and resources, with multi-level authorization and flexible control, keeping data and function access secure, compliant, and traceable. 1. User and Role Management: supports unified management of user accounts and roles, with role creation, assignment, and binding, forming a clear system of access subjects. 2. Menu and Function Permissions: provides role-based configuration of menu and function permissions, flexibly controlling what different roles can see and do in the system. 3. Multi-Level Authorization Mechanism: provides authorization at department, project, and resource levels, meeting the permission-allocation needs of complex organizational structures. 4. Data Permission Control: achieves fine-grained data access control by organization, role, or user. | ❌ | ✅ | / |
| 9 | Data Services | API Management | Provides unified management and operation of data service APIs, covering the full API lifecycle (creation, release, invocation, monitoring), helping enterprises deliver data services and improving sharing and reuse efficiency. 1. API Definition and Creation: supports quickly generating APIs from data tables, views, or models, with configuration of request methods, parameters, and return results, reducing service development costs. 2. API Release and Invocation: provides release, offline, and version management, with calls through a unified entry point, keeping service delivery standardized and controllable. | ✅ | ✅ | / |
| | | Invocation Logs | Provides logging and analysis of API invocations, helping users understand invocation status, locate anomalies, and conduct audits, keeping data service usage transparent and controllable. 1. Invocation Record Management: automatically records each invocation's time, caller, request parameters, and return results for traceability and problem analysis. 2. Performance Metric Statistics: provides key metrics such as invocation count, response time, success rate, and failure rate, helping assess API performance and stability. 3. Log Search and Export: supports searching logs by API name, invocation time, caller, and more. | ✅ | ✅ | / |
| | | API Authentication | Provides multiple authentication mechanisms to secure API invocation, ensuring data services are accessed only within authorized scopes and reducing the risk of unauthorized access and data leakage (see the authentication sketch after the table). 1. Multiple Authentication Methods: supports Token, API Key, OAuth, and other methods, meeting security needs in different scenarios. 2. Access Control: allows configuring invocation permissions per API, restricting caller identity, role, and access scope for fine-grained control. | ✅ | ✅ | / |
| | | API Online Testing | Provides online debugging and verification of released APIs, helping users quickly check availability and return results, improving development and operations efficiency. 1. Online Debugging Tool: provides a visual testing interface for entering request parameters, selecting request methods, and initiating calls directly, simplifying testing. 2. Real-Time Result Feedback: returns response results and status codes immediately after invocation, letting users verify API correctness and stability. | ✅ | ✅ | / |
| | | API Blacklist and Rate Limiting | Provides security protection based on blacklist and rate-limiting strategies, controlling malicious and excessive access to keep API services stable and secure (see the rate-limiting sketch after the table). 1. Invocation Blacklist: supports adding specified callers (IP, user, application, etc.) to a blacklist that blocks their API access, preventing malicious invocation and unauthorized access. 2. Access Rate Limiting: sets invocation frequency, concurrency, and traffic thresholds, preventing a single user or application from congesting the service or exhausting resources with high-frequency requests. | ✅ | ✅ | / |
| 10 | Data Resource Portal | Portal Home | Provides a unified portal entry and homepage, centrally presenting core information such as data assets, task operations, quality monitoring, and service invocations in a visual way, helping users quickly grasp overall platform status and key indicators. 1. Overall Overview: centrally displays core indicators such as asset counts, task execution status, quality assessment results, and API invocation status, forming a global view. 2. Visual Dashboard: provides chart-based dashboards that visually present data distribution, trends, and operating status, helping users quickly identify problems and value points. | ❌ | ✅ | / |
| | | Service Resources | Provides a unified directory of data service resources, with search, filtering, and detail viewing, helping users discover, understand, and apply for services quickly, improving acquisition efficiency and standardization. 1. Directory Viewing: supports browsing published service directories so users can locate target resources quickly. 2. Search and Filtering: provides general search and multi-dimensional filtering by service name, type, category, and more, improving discovery efficiency. 3. Service Resource List: centrally displays the service resource list, with detail viewing, status, and application processes, keeping service access orderly and compliant. 4. Service Detail Page: provides detailed introductions and usage instructions, including interface information, invocation methods, and parameter descriptions, helping users understand and use services quickly. | | | |
| | | Data Maintenance | Provides unified maintenance of basic data, supporting adding, modifying, deleting, and querying, keeping data complete, accurate, and consistent across its scope. | | | |
| | | Data Entry | Provides a unified data entry point on the portal, where users input and submit business data through visual forms, keeping collection convenient and standardized and feeding data directly into processing and analysis workflows. | | | |
| | | Document Center | Provides a unified document center on the portal, centrally presenting usage guides, specification documents, training materials, and case resources, helping users find information quickly and improving self-service learning and application. | | | |
| | | My Applied Services | Provides a unified management entry for a user's applied services on the service portal, with application-progress tracking, service-detail viewing, and operation management, helping users stay on top of the status and usage of their applied services. | | | |
| | | Online Approval | Provides a unified online approval entry on the service portal, supporting viewing, processing, and routing of user-submitted service applications, keeping approval efficient and transparent and improving service delivery. | | | |
| | | Backend Management System | Provides backend management for the service portal, with unified configuration and maintenance of portal content, users, permissions, and service resources, keeping the portal running efficiently and continuously improving. | | | |
| 11 | Data Visualization | Report Design | Provides multi-dimensional analysis and visualization, with custom report design and data drill-down; charts and dashboards are generated quickly through visual drag-and-drop, helping users gain business insights efficiently. 1. Multi-Dimensional Data Analysis: supports statistical analysis by business domain, theme, time, and more, meeting multi-scenario exploration needs. 2. Rich Charts and Templates: provides chart types such as bar, line, pie, and gauge, plus common report templates for quick report generation. 3. Custom Reports and Drill-Down: supports customizing report structure and display style, with drill-down from overall indicators to detailed data. | ❌ | ✅ | / |
| | | Large-Screen Design | Provides visual large-screen design and display, with drag-and-drop layout and multi-type component configuration, helping users quickly build data screens for intuitive presentation and real-time monitoring of business indicators. 1. Visual Editor: a drag-and-drop design interface where users freely add, adjust, and combine charts, controls, and backgrounds, simplifying screen design. 2. Rich Component Library: built-in chart types (bar, line, pie, map, gauge, etc.) and display controls for multi-scenario visualization. 3. Free Layout: supports arbitrary layout and resizing of components, producing personalized, scenario-specific displays. 4. Real-Time Data Access: supports connecting to multi-source data with real-time refresh and dynamic updates, keeping displayed content timely and accurate. 5. Preview and Release: provides effect preview and one-click release to display terminals or portals for easy sharing. | | | |
| | | Dashboard Design | Provides flexible dashboard design and display, with multi-dimensional indicator combination, visual component configuration, and interactive analysis, helping users quickly build business monitoring panels and track core indicators in real time. 1. Visual Editor: a drag-and-drop design interface supporting free combination of charts, text, indicator cards, and more, simplifying dashboard construction. 2. Multi-Dimensional Indicator Display: supports combining business indicators by theme, time, region, and more into comprehensive monitoring views. 3. Rich Chart Types: built-in line, bar, pie, area, and radar charts, among others, for multi-scenario visualization. | | | |
| 12 | Artificial Intelligence | Text2SQL | Provides intelligent natural-language-to-SQL query capabilities: users type a question in plain text, and the system generates and executes the corresponding SQL, helping non-technical staff access data conveniently and improving the inclusiveness and efficiency of data usage (see the Text2SQL sketch after the table). 1. Natural Language Parsing: automatically parses user questions (e.g., "query last quarter's sales") into standard SQL statements. 2. Semantic Understanding and Optimization: combines domain semantics and data dictionaries to optimize generated SQL, keeping queries aligned with business context and database structure. 3. Multi-Data Source Support: connects to multiple databases such as MySQL, Oracle, SQL Server, and Hive for cross-source queries. 4. Visual Result Display: query results are shown as tables or charts, so users can view analysis results without extra steps. | ❌ | ✅ | / |
| | | ChatBI | Provides intelligent, conversation-based data analysis: users complete data queries, report generation, and trend analysis through natural-language dialogue, giving business staff the lowest-threshold path to data insights and improving decision efficiency. 1. Conversational Query: users ask questions in natural language (e.g., "how much did this month's sales grow year-on-year?") and the system parses the question and returns results, with no SQL required. 2. Instant Visualization: presents query results immediately as tables, bar charts, line charts, pie charts, and more, aiding understanding. 3. Multi-Data Source Access: connects to relational databases, big data platforms, and other sources for unified analysis across databases and domains. 4. Intelligent Insights: based on historical queries and data models, automatically generates trend interpretations and business insights, helping users spot potential issues and opportunities. | | | |
| 13 | Others | Online Documentation | Provides comprehensive official documentation covering deployment, operations, APIs, and best practices; updated in a timely manner and well structured. | ✅ | ✅ | / |
| | | Technical Support | Provides enterprise-level technical support services with dedicated technical contacts, SLAs, and 7×24 or 5×8 support options. | 🟡 | ✅ | Open source version receives community support via Issues. |
| | | Source Code Updates | Provides stable version-update channels with upgrade guides and patch notes, maintaining compatibility and security over the long term. | ✅ | ✅ | / |
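
The RBAC mechanism under System Management pairs users with roles and roles with permissions. As a rough illustration, here is a minimal sketch in Python of how such a check can work; the data model and permission strings are hypothetical, not qData's actual schema.

```python
# Minimal RBAC sketch: users are bound to roles, roles carry permission
# strings such as "asset:read" or "asset:delete". Hypothetical data model.
from dataclasses import dataclass, field


@dataclass
class Role:
    name: str
    permissions: set[str] = field(default_factory=set)


@dataclass
class User:
    username: str
    roles: list[Role] = field(default_factory=list)

    def has_permission(self, permission: str) -> bool:
        # A user is authorized if any bound role grants the permission.
        return any(permission in role.permissions for role in self.roles)


admin = Role("admin", {"asset:read", "asset:write", "asset:delete"})
viewer = Role("viewer", {"asset:read"})

alice = User("alice", roles=[viewer])
print(alice.has_permission("asset:read"))    # True
print(alice.has_permission("asset:delete"))  # False
```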
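
The Connection Test feature under Data Collection verifies that a configured source is reachable before it is used. Below is a minimal sketch of such a test for a MySQL source, assuming the third-party PyMySQL driver (`pip install pymysql`); host and credentials are placeholders, and qData's own implementation may differ.

```python
# Minimal connection-test sketch for a MySQL data source, assuming PyMySQL.
import pymysql


def test_mysql_connection(host: str, port: int, user: str,
                          password: str, database: str) -> bool:
    try:
        conn = pymysql.connect(host=host, port=port, user=user,
                               password=password, database=database,
                               connect_timeout=5)
    except pymysql.MySQLError as exc:
        print(f"Connection failed: {exc}")
        return False
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT 1")  # cheap liveness probe
        return True
    finally:
        conn.close()


if __name__ == "__main__":
    ok = test_mysql_connection("127.0.0.1", 3306, "root", "secret", "demo")
    print("reachable" if ok else "unreachable")
```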
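
Logical Materialization turns a logical model into physical DDL for a target database. The sketch below shows the general idea with a hypothetical model structure and a two-dialect type map; a real implementation would also handle constraints, identifier quoting, and many more types and dialects.

```python
# Sketch of materializing a logical model into CREATE TABLE DDL.
from dataclasses import dataclass


@dataclass
class ModelField:
    name: str
    data_type: str        # logical type: "string", "integer", "decimal"
    length: int = 0
    nullable: bool = True


# Per-dialect mapping from logical types to physical column types.
TYPE_MAP = {
    "mysql": {"string": "VARCHAR({n})", "integer": "INT",
              "decimal": "DECIMAL(18,2)"},
    "oracle": {"string": "VARCHAR2({n})", "integer": "NUMBER(10)",
               "decimal": "NUMBER(18,2)"},
}


def materialize(table: str, fields: list[ModelField], dialect: str) -> str:
    cols = []
    for f in fields:
        col_type = TYPE_MAP[dialect][f.data_type].format(n=f.length or 255)
        cols.append(f"  {f.name} {col_type}{'' if f.nullable else ' NOT NULL'}")
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n)"


model = [ModelField("customer_id", "integer", nullable=False),
         ModelField("customer_name", "string", length=100)]
print(materialize("dim_customer", model, "mysql"))
print(materialize("dim_customer", model, "oracle"))
```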
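
Data Discovery's field and structure analysis boils down to diffing schema snapshots. The following sketch compares two snapshots, modeled here as plain name-to-type dicts for illustration, and reports added, removed, and retyped columns; a real crawler would read the snapshots from the database's information schema.

```python
# Sketch of the structure-diff step behind data discovery.
def diff_schema(old: dict[str, str], new: dict[str, str]) -> dict[str, list]:
    added = [c for c in new if c not in old]
    removed = [c for c in old if c not in new]
    retyped = [(c, old[c], new[c]) for c in new
               if c in old and old[c] != new[c]]
    return {"added": added, "removed": removed, "retyped": retyped}


yesterday = {"id": "INT", "name": "VARCHAR(50)", "created": "DATETIME"}
today = {"id": "BIGINT", "name": "VARCHAR(50)", "email": "VARCHAR(100)"}
print(diff_schema(yesterday, today))
# {'added': ['email'], 'removed': ['created'], 'retyped': [('id', 'INT', 'BIGINT')]}
```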
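
Downstream impact analysis over data lineage is essentially a graph traversal. The sketch below walks a hypothetical adjacency map (asset to direct downstream assets) breadth-first to collect everything reachable from a changed source table; qData's actual lineage store is not shown here.

```python
# Sketch of downstream impact analysis over a lineage graph.
from collections import deque


def downstream_impact(graph: dict[str, list[str]], source: str) -> set[str]:
    """Breadth-first walk collecting every asset reachable from source."""
    impacted: set[str] = set()
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted


lineage = {
    "ods_orders": ["dwd_orders"],
    "dwd_orders": ["dws_sales_daily", "dws_customer_360"],
    "dws_sales_daily": ["api_sales_report"],
}
print(downstream_impact(lineage, "ods_orders"))
```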
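
An inspection rule in the completeness dimension can be as simple as measuring the null or blank rate of a mandatory field against a threshold. This sketch models rows as dicts purely for illustration; in practice such a rule would typically run as SQL against the source.

```python
# Sketch of a completeness inspection rule: flag a field whose null/blank
# rate exceeds a configured threshold.
def completeness_check(rows: list[dict], field: str,
                       max_null_rate: float = 0.01) -> dict:
    missing = sum(1 for r in rows if r.get(field) in (None, ""))
    rate = missing / len(rows) if rows else 0.0
    return {"field": field, "null_rate": rate, "passed": rate <= max_null_rate}


rows = [{"phone": "13800000000"}, {"phone": ""}, {"phone": None},
        {"phone": "13900000000"}]
print(completeness_check(rows, "phone"))
# {'field': 'phone', 'null_rate': 0.5, 'passed': False}
```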
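
Cleaning rules in the uniqueness and consistency dimensions might look like the following sketch: deduplicate on a business key keeping the latest record, and normalize a unit. The record layout and field names are illustrative, not qData's rule format.

```python
# Sketch of two cleaning rules: uniqueness maintenance (keep the newest
# record per order_id) and consistency correction (convert fen to yuan).
def clean(records: list[dict]) -> list[dict]:
    latest: dict[str, dict] = {}
    for rec in records:
        # Consistency: store all amounts in yuan.
        if rec.get("unit") == "fen":
            rec = {**rec, "amount": rec["amount"] / 100, "unit": "yuan"}
        # Uniqueness: keep only the newest record per business key.
        key = rec["order_id"]
        if key not in latest or rec["updated_at"] > latest[key]["updated_at"]:
            latest[key] = rec
    return list(latest.values())


dirty = [
    {"order_id": "A1", "amount": 12500, "unit": "fen", "updated_at": 1},
    {"order_id": "A1", "amount": 130.0, "unit": "yuan", "updated_at": 2},
    {"order_id": "B2", "amount": 99.0, "unit": "yuan", "updated_at": 1},
]
print(clean(dirty))  # A1 keeps the updated_at=2 record; B2 unchanged
```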
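
The automatic retry strategy under Job Management re-runs a failing task a configurable number of times at a configurable interval. Here is a minimal sketch; the knobs mirror those named in the table, but the code is not qData's scheduler.

```python
# Sketch of an automatic-retry strategy with configurable count and interval.
import time


def run_with_retry(task, max_retries: int = 3, interval_sec: float = 2.0):
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:  # demo-level handling
            print(f"attempt {attempt} failed: {exc}")
            if attempt == max_retries:
                raise  # exhausted: surface to the alert center
            time.sleep(interval_sec)


calls = {"n": 0}

def flaky_task():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("temporary error")
    return "ok"

print(run_with_retry(flaky_task, interval_sec=0.1))  # succeeds on attempt 3
```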
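
Static desensitization typically masks the middle of sensitive values. The sketch below masks mainland phone numbers and 18-digit ID numbers with regular expressions; the patterns are illustrative, and production rules would be driven by the platform's rule library.

```python
# Sketch of static desensitization: mask the middle of phone and ID numbers.
import re

PHONE_RE = re.compile(r"\b(1\d{2})(\d{4})(\d{4})\b")
ID_RE = re.compile(r"\b(\d{6})(\d{8})(\d{3}[\dXx])\b")


def mask_sensitive(text: str) -> str:
    text = PHONE_RE.sub(r"\1****\3", text)
    text = ID_RE.sub(r"\1********\3", text)
    return text


print(mask_sensitive("Contact 13812345678, ID 110101199001011234"))
# Contact 138****5678, ID 110101********1234
```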
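
Of the authentication methods listed for Data Services, an API-key-plus-signature scheme can be sketched as HMAC signing with constant-time verification. The key store and token format below are assumptions for illustration, not qData's actual scheme.

```python
# Sketch of HMAC-based API authentication: sign a request payload with a
# per-application secret and verify it on the server side.
import hashlib
import hmac

API_KEYS = {"app-001": b"s3cret-signing-key"}  # hypothetical key store


def sign(app_id: str, payload: str) -> str:
    return hmac.new(API_KEYS[app_id], payload.encode(),
                    hashlib.sha256).hexdigest()


def verify(app_id: str, payload: str, signature: str) -> bool:
    secret = API_KEYS.get(app_id)
    if secret is None:
        return False  # unknown caller
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time compare


payload = "GET /api/v1/assets?theme=finance"
token = sign("app-001", payload)
print(verify("app-001", payload, token))     # True
print(verify("app-001", payload, "forged"))  # False
```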
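
Blacklist checks and rate limiting sit in front of an API as a gate. The sketch below uses a fixed-window counter per caller; thresholds and caller identifiers are illustrative, and qData's gateway behavior may differ (token-bucket or sliding-window limiters are common alternatives).

```python
# Sketch of a blacklist + fixed-window rate-limiting gate for API calls.
import time


class ApiGate:
    def __init__(self, max_calls: int, window_sec: float,
                 blacklist: set[str] | None = None):
        self.max_calls = max_calls
        self.window_sec = window_sec
        self.blacklist = blacklist or set()
        self._windows: dict[str, tuple[float, int]] = {}

    def allow(self, caller: str) -> bool:
        if caller in self.blacklist:
            return False  # blocked outright
        now = time.monotonic()
        start, count = self._windows.get(caller, (now, 0))
        if now - start >= self.window_sec:
            start, count = now, 0  # new window
        if count >= self.max_calls:
            self._windows[caller] = (start, count)
            return False  # over the per-window quota
        self._windows[caller] = (start, count + 1)
        return True


gate = ApiGate(max_calls=2, window_sec=1.0, blacklist={"10.0.0.99"})
print([gate.allow("10.0.0.5") for _ in range(3)])  # [True, True, False]
print(gate.allow("10.0.0.99"))                     # False (blacklisted)
```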
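
A Text2SQL pipeline generally builds a prompt from the table schema and the user's question, asks a language model for SQL, and validates the result before executing it. In this sketch, `call_llm` is a hypothetical stand-in for whatever model endpoint is used, with a canned response so the example runs; the schema and validation rule are illustrative.

```python
# Sketch of a Text2SQL pipeline: prompt construction, model call, validation.
SCHEMA = "sales(order_id INT, amount DECIMAL(18,2), order_date DATE)"


def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call an LLM API here.
    return ("SELECT SUM(amount) FROM sales "
            "WHERE order_date >= '2025-04-01' AND order_date < '2025-07-01'")


def text2sql(question: str) -> str:
    prompt = (f"Schema: {SCHEMA}\n"
              f"Question: {question}\n"
              "Answer with a single read-only SQL statement.")
    sql = call_llm(prompt).strip().rstrip(";")
    if not sql.upper().startswith("SELECT"):
        raise ValueError("only read-only queries are allowed")
    return sql


print(text2sql("What were total sales last quarter?"))
```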