
Rockchip NPU and CPU Ecosystem (including Rockchip CPU List)

Published: Oct 28, 2025


Introduction

Today, we focus on the role of Neural Processing Units (NPUs) and their integration with CPUs in embedded and edge-computing markets. Rockchip is an important player in this area, offering both high-performance CPUs and dedicated NPUs within its system-on-chip (SoC) designs. In this overview, we will look at Rockchip's CPU options, explain how its NPU functions, and review the software development tools the company provides. We'll include technical details and comparisons to help you understand how these platforms can be used in practice.

Overview of the Rockchip CPU

When we mention a Rockchip CPU, we are talking about the processing cores and systems-on-chips (SoCs) made by Rockchip that act as the main processors in embedded devices.

What constitutes a Rockchip CPU?

A Rockchip CPU typically denotes an SoC from Rockchip that integrates one or more ARM-based cores (for example, Cortex-A-series), a GPU, multimedia engines, peripherals, and sometimes an NPU for AI acceleration. According to Rockchip's own product information, the company offers CPUs and complete SoCs targeted at intelligent IoT, multimedia and edge-AI domains. For example, the RK3576 is described as an octa-core 64-bit high-performance ARM processor with rich interfaces, and it includes a 6 TOPS self-developed high-efficiency AI NPU in the same package. Thus, "Rockchip CPU" often means the entire chip, but for clarity we treat it as the host processor portion of the device.

Rockchip CPU list

It is instructive to list out several representative CPU/SoC entries from Rockchip, showing how they compare and what their features are. Here is a summarized table:

Model | CPU architecture | Process / Key features
RK3288 | 4× Cortex-A17 (32-bit) | High-performance 2014-era SoC.
RK3399 | 2× Cortex-A72 + 4× Cortex-A53 (64-bit) | big.LITTLE design, multimedia and AI support.
RK3566/RK3568 | 4× Cortex-A55 (64-bit) | Mid-range edge/AI SoC.
RK3588 | 4× Cortex-A76 + 4× Cortex-A55 (64-bit) | High-end, 8 nm process, advanced AI/NPU.


Also, read more about RK3688/RK3668

How to evaluate Rockchip CPUs for your use-case

When selecting a Rockchip CPU (or SoC) from the list, some of the key evaluation criteria include:

  • Core architecture & count: e.g., Cortex-A76 vs A55, and the number of cores. A higher-end core delivers more single-thread performance but may use more power.
  • Process node: e.g., the RK3588 is built on an advanced node (8 nm), which helps with power efficiency and thermals.
  • Multimedia and peripheral support: Video codec capabilities, display interfaces, GPU, and memory channels all matter.
  • NPU / AI acceleration: If the application requires on-device AI, the NPU integrated with the SoC becomes critical (which leads naturally into the Rockchip NPU discussion).
  • Ecosystem and SDK support: Availability of SDKs, model conversion tools, software drivers, etc.
  • Cost, power envelope, board support: Particularly in embedded and edge systems, power/pin/thermal budgets matter.
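The criteria above can be sketched as a simple filter over the entries in the table earlier in this article. In the sketch below, the spec dictionary mirrors that table, and the NPU TOPS figures come from the NPU discussion later in the article (~1 TOPS for the RK3568 class, 6 TOPS for the RK3588); the filter thresholds are illustrative assumptions, not official part-selection guidance.

```python
# Illustrative shortlist helper over the Rockchip SoC table above.
# Spec values mirror the article's table; `npu_tops` figures follow the
# NPU sections (~1 TOPS for RK3566/RK3568, 6 TOPS for RK3588). Other
# criteria (power, cost, peripherals) are omitted for brevity.

SOCS = {
    "RK3288": {"cores": "4x Cortex-A17", "bits": 32, "npu_tops": 0.0},
    "RK3399": {"cores": "2x Cortex-A72 + 4x Cortex-A53", "bits": 64, "npu_tops": 0.0},
    "RK3566/RK3568": {"cores": "4x Cortex-A55", "bits": 64, "npu_tops": 1.0},
    "RK3588": {"cores": "4x Cortex-A76 + 4x Cortex-A55", "bits": 64, "npu_tops": 6.0},
}

def shortlist(min_tops=0.0, require_64bit=True):
    """Return model names meeting the (assumed) minimum requirements."""
    return sorted(
        name for name, spec in SOCS.items()
        if spec["npu_tops"] >= min_tops
        and (spec["bits"] == 64 or not require_64bit)
    )

print(shortlist(min_tops=1.0))  # ['RK3566/RK3568', 'RK3588']
```

A real evaluation would of course weigh power, cost and peripheral fit as well, but this captures the first-pass triage the bullet list describes.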

In my opinion, for embedded engineers, the Rockchip CPU list provides a very strong starting point: the mid-range RK3566/RK3568 offers a good balance, while the high-end RK3588 is compelling for AI/vision-heavy workloads. On the other hand, older entries like the RK3288 may still be viable for cost-sensitive or legacy devices.

Rockchip NPU: Architecture, Performance and Use Cases

Beyond general-purpose CPU cores, the inclusion of a dedicated Neural Processing Unit (NPU) in a SoC is a major differentiator for modern embedded AI systems. Let’s dive into what the Rockchip NPU is, how it integrates with the platform, and why it matters.

What is the Rockchip NPU?

In essence, the Rockchip NPU is a dedicated accelerator block within Rockchip SoCs designed to perform neural-network inference (and sometimes parts of training) more efficiently than CPUs or even general-purpose GPUs. It offloads tasks such as convolution, quantised matrix multiply, activation, pooling, etc., from the main CPU, thereby enabling lower latency, lower power, and often higher throughput for AI inference.

For example, documentation for boards with the RK3568 states that "RK3568 has a NPU … up to 1 TOPS processing performance" and that "Using this NPU module needs to download RKNN SDK which provides programming interfaces for RK series chips platforms with NPU".

Moreover, a roadmap article indicates that the RK3588's NPU supports 6 TOPS and multiple precision modes (INT4/INT8/INT16/FP16/BF16/TF32). From this we can deduce that Rockchip is positioning its SoCs not merely for general compute and multimedia, but explicitly for edge-AI use.
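These precision modes matter because much of an NPU's throughput comes from running networks in low-precision integer formats. As a generic illustration (this is not the RKNN toolchain's actual quantisation scheme), the sketch below shows symmetric INT8 quantisation: mapping float weights onto int8 codes with a single scale, then recovering an approximation of the originals.

```python
# Minimal sketch of symmetric INT8 quantisation, the kind of low-precision
# representation that NPUs such as Rockchip's accelerate. Generic
# illustration only, not Rockchip's actual quantisation implementation.

def quantize_int8(values):
    """Map floats to int8 codes using a single symmetric scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.003, 1.27]
codes, scale = quantize_int8(weights)
recovered = dequantize(codes, scale)
print(codes)      # [50, -127, 0, 127]
print(recovered)  # approximately the original weights
```

The small rounding error visible on tiny values (0.003 collapses to 0) is exactly the accuracy/throughput trade-off that calibration datasets and mixed-precision modes are meant to manage.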

How the Rockchip NPU integrates with the SoC

Integration of the NPU means that the SoC has the following:

  • One or more hardware processing blocks dedicated to neural-network layers (e.g., convolution, matmul) and supporting quantised data formats.
  • A software stack (driver, runtime) that permits a model compiled/trained in a PC environment to be run on the NPU with minimal friction.
  • A model conversion toolchain (SDK) that handles model conversion, optimization, quantisation, memory layout, tensor formats, etc.
  • APIs that allow the developer to embed inference calls into their application (e.g., an onboard camera / vision system).

Rockchip offers the above via the RKNN-Toolkit2 and associated runtime libraries (often termed the Rockchip NPU SDK). For instance, one GitHub readme states: "RKNN-Toolkit2 is a software development kit for users to perform model conversion, inference and performance evaluation on PC and Rockchip NPU platforms." Elsewhere we find the RKNPU2 interface that provides access to Rockchip NPU platforms.
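As a rough illustration of that workflow, the sketch below wraps the typical RKNN-Toolkit2 steps (configure, load, build/quantise, export, run) in one function. The call names follow RKNN-Toolkit2's published examples, but treat this as an indicative sketch rather than a verified recipe; the model path and calibration-dataset file are placeholders.

```python
def convert_and_run(onnx_path, dataset_txt, rknn_path="model.rknn"):
    """Sketch of the RKNN-Toolkit2 flow described above: convert an ONNX
    model, quantise it, export a .rknn artefact, and initialise a runtime.
    Call names follow RKNN-Toolkit2's published examples; paths are
    placeholders. Requires the rknn package, hence the local import."""
    from rknn.api import RKNN  # available only with the Rockchip SDK installed

    rknn = RKNN()
    rknn.config(target_platform="rk3588")  # pick the target SoC
    rknn.load_onnx(model=onnx_path)        # import the trained model
    rknn.build(do_quantization=True,       # INT8 quantisation, calibrated
               dataset=dataset_txt)        # on a list of sample images
    rknn.export_rknn(rknn_path)            # artefact deployed to the board
    rknn.init_runtime()                    # simulator on PC, NPU on-device
    # outputs = rknn.inference(inputs=[preprocessed_image])
    rknn.release()
    return rknn_path
```

The same exported `.rknn` file can then be loaded on-device through the RKNPU2 runtime, which is what keeps the PC-side conversion step and the embedded inference step decoupled.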

Performance and real-world use-cases of the NPU

From the data points:

  • The RK3568's NPU delivers up to ~1 TOPS (tera-operations per second) in earlier-generation boards.
  • The RK3588 claims 6 TOPS with advanced precision support.

In practical terms, for machine-vision applications (e.g., object detection, segmentation, autonomous vehicles, smart cameras) the presence of an NPU is a real enabler: you get higher-fps inference (25+ fps) at a lower power footprint than if you tried to run on the CPU/GPU alone. For example, a recent academic paper reports that a model deployed on the Rockchip RV1126 embedded platform (which uses a Rockchip SoC with NPU) achieved >25 FPS real-time on traffic-light recognition.
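A quick back-of-envelope check makes those numbers concrete. The TOPS figures below are the ones quoted above; the per-frame compute cost (~8 GOPs) and the 30% effective-utilisation factor are purely illustrative assumptions, since real models and NPU utilisation vary widely.

```python
# Back-of-envelope throughput estimate. NPU TOPS figures are the ones
# quoted above (~1 TOPS RK3568-class, 6 TOPS RK3588); the per-frame GOPs
# cost and 30% effective-utilisation factor are illustrative assumptions.

def estimated_fps(npu_tops, model_gops_per_frame, utilisation=0.3):
    """Theoretical frames/second = usable ops per second / ops per frame."""
    usable_ops_per_s = npu_tops * 1e12 * utilisation
    return usable_ops_per_s / (model_gops_per_frame * 1e9)

# Assume a detector costing ~8 GOPs per frame (hypothetical figure):
fps_1tops = estimated_fps(1.0, 8)
fps_6tops = estimated_fps(6.0, 8)
print(f"~{fps_1tops:.1f} fps vs ~{fps_6tops:.1f} fps")
```

Even under these rough assumptions, a ~1 TOPS part clears the 25+ fps bar reported above, and the 6 TOPS RK3588 leaves headroom for heavier models or multiple camera streams.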

My own opinion is that for any embedded project where AI inference (vision, audio, sensor fusion) needs to run locally (i.e., on-device), you should strongly consider a Rockchip SoC with NPU support rather than a CPU-only part. The combination of CPU + NPU is what gives you both general logic and dedicated AI acceleration.

Summary

In summary, the combination of Rockchip's CPU portfolio (the Rockchip CPU list) and dedicated NPU hardware (the Rockchip NPU) presents a very strong foundation for embedded AI and edge-computing systems. By leveraging the Rockchip NPU SDK (such as RKNN-Toolkit2 and RKNPU2), engineers can efficiently convert models, deploy inference on-device, and utilise hardware acceleration rather than rely solely on CPU compute.
