qnn.CPU
Interface to predict responses of deep learning model for QNN CPU backend
Since R2025b
Description
The qnn.CPU System object is an interface to predict responses of a deep
learning model represented as a QNN model or a QNN context binary for the CPU backend of
Qualcomm® AI Engine Direct.
To create the interface to predict responses for the QNN CPU backend:
1. Create the qnn.CPU object and set its properties.
2. Call the object with arguments, as if it were a function.
To learn more about how System objects work, see What Are System Objects?
The code generated using the qnn.CPU System object can be deployed to one
of these boards, which are available under the Hardware board parameter
in Configuration Parameters:
Qualcomm Android Board
Qualcomm Linux Board
Creation
Syntax
Description
qnnhtp = qnn.CPU("QNN-Model",
creates an interface to predict responses of QNN models (compiled shared object
(.so)) for host and target) for the CPU backend.QNNHostModel=qnnhostmodel.so, QNNTargetModel=qnntargetmodel.so)
qnnhtp = qnn.CPU("QNN-Model",
creates an interface similar to the previous syntax and performs dequantization of
the output.QNNHostModel=qnnhostmodel.so,QNNTargetModel=qnntargetmodel.so,DeQuantizeOutput=true)
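A minimal creation sketch, assuming the compiled host and target shared objects are on the MATLAB path and their file names are passed as strings (the file names here are hypothetical placeholders):
% Hypothetical file names for the compiled host and target QNN models
qnncpu = qnn.CPU("QNN-Model", ...
    QNNHostModel="qnnhostmodel.so", ...
    QNNTargetModel="qnntargetmodel.so");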
Properties
Usage
Syntax
Description
qnnresponse = qnncpu(x) predicts responses for the QNN CPU backend using the
qnncpu System object, based on the input data, x.
Instead of calling the System object directly, you can also use the predict function to obtain the response.
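For example, a minimal sketch of calling the object; the input size shown here is hypothetical and must match the input expected by the QNN model:
% Hypothetical input data sized to match the QNN model's input layer
x = rand(224, 224, 3, "single");
qnnresponse = qnncpu(x);           % call the object like a function
qnnresponse = predict(qnncpu, x);  % equivalent call using the predict function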
Input Arguments
Output Arguments
Object Functions
To use an object function, specify the
System object™ as the first input argument. For
example, to release system resources of a System object named obj, use
this syntax:
release(obj)
Examples
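A minimal end-to-end sketch combining creation, prediction, and release; the file names and input size are hypothetical placeholders:
% Create the interface to the QNN CPU backend (hypothetical file names)
qnncpu = qnn.CPU("QNN-Model", ...
    QNNHostModel="qnnhostmodel.so", ...
    QNNTargetModel="qnntargetmodel.so", ...
    DeQuantizeOutput=true);

% Predict responses for example input data (hypothetical size)
x = rand(224, 224, 3, "single");
qnnresponse = qnncpu(x);

% Release system resources when done
release(qnncpu);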
Version History
Introduced in R2025b