Deep Pitch Estimator

Estimate pitch with CREPE deep learning neural network

Since R2023a

Libraries:
Audio Toolbox / Deep Learning

Description

The Deep Pitch Estimator block uses a CREPE pretrained neural network to estimate the pitch from audio signals. The block combines necessary audio preprocessing, network inference, and postprocessing of network output to return pitch estimations in Hz. This block requires Deep Learning Toolbox™.

Examples

This example shows how to use the Deep Pitch Estimator block to estimate the pitch of an audio signal in Simulink®. See Estimate Pitch Using CREPE Blocks for an example that uses the CREPE Preprocess, CREPE, and CREPE Postprocess blocks to perform the same task.

Adjust the block parameters to speed up computation and see the pitch estimations in real time as the audio plays.

  • Set the Overlap percentage (%) parameter to 50. With a lower overlap percentage, the block computes and outputs pitch estimations less frequently.

  • Set the Number of buffered pitch estimations parameter to 5. A higher value for this parameter allows the block to improve computational efficiency by operating on multiple audio frames in parallel. However, a higher value also increases latency because the block returns pitch estimations in batches instead of one at a time.

  • Set the Model capacity parameter to Large. This model has fewer parameters than the full-size model, leading to faster computation at the cost of slightly lower accuracy.

Run the model to listen to a singing voice and view the estimated pitch in real time.

Ports

Input

Audio input, specified as a one-channel signal (vector). If Sample rate of input signal (Hz) is 16e3, there are no restrictions on the input frame length. If Sample rate of input signal (Hz) is different from 16e3, then the input frame length must be a multiple of the decimation factor of the resampling operation that the block performs. If the input frame length does not satisfy this condition, the block generates an error message with information on the decimation factor.

Data Types: single | double
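The frame-length restriction follows from rational-ratio resampling: converting an input sample rate to 16 kHz uses an interpolation/decimation ratio p/q in lowest terms, and the input frame length must be a multiple of q. A minimal sketch of that arithmetic, in Python for illustration only (the block itself reports the factor in its error message):

```python
from math import gcd

def decimation_factor(fs_in, fs_target=16_000):
    """Decimation factor q of a rational resampler fs_in -> fs_target.

    Resampling by the ratio fs_target/fs_in = p/q (in lowest terms)
    requires the input frame length to be a multiple of q.
    """
    g = gcd(int(fs_in), int(fs_target))
    return int(fs_in) // g

# A 48 kHz input resampled to 16 kHz has q = 3, so frame lengths of
# 3, 6, ..., 480, ... samples are valid; 44.1 kHz gives q = 441.
print(decimation_factor(48_000))   # 3
print(decimation_factor(44_100))   # 441
print(decimation_factor(16_000))   # 1 (no restriction)
```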

Output

Estimated fundamental frequency in Hz, returned as an N-by-1 vector, where N is the number of pitch estimations specified by Number of buffered pitch estimations.

Data Types: single

Parameters

Sample rate of the input signal in Hz, specified as a positive scalar.

Specify the overlap percentage between consecutive input frames as a scalar in the range [0, 100).
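The overlap percentage sets the hop between consecutive analysis frames, and therefore how often the block produces a new pitch estimation. A sketch of that relationship, in illustrative Python (the 1024-sample frame length is an assumption matching the CREPE network's input size at 16 kHz):

```python
def hop_length(frame_length, overlap_pct):
    # Higher overlap -> smaller hop -> more frequent (and more costly)
    # pitch estimations; 0% overlap means back-to-back frames.
    if not 0 <= overlap_pct < 100:
        raise ValueError("overlap percentage must be in [0, 100)")
    return round(frame_length * (1 - overlap_pct / 100))

frame = 1024  # CREPE operates on 1024-sample frames at 16 kHz
print(hop_length(frame, 0))    # 1024
print(hop_length(frame, 50))   # 512
print(hop_length(frame, 85))   # 154
```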

Number of pitch estimations in output, specified as a positive integer.

A higher value allows the block to improve computational efficiency by operating on multiple audio frames in parallel. However, it also increases latency because the block buffers the specified number of pitch estimations before returning them.

Pitch confidence threshold, specified as a scalar in the range [0, 1). In postprocessing, the block suppresses fundamental frequencies where the network confidence is below the threshold.

Note

If the maximum value of the network output is less than the confidence threshold, the block returns NaN.
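That suppression rule can be sketched as follows. This is a simplified argmax decoder, not the block's actual postprocessing (which refines the network activations into a fundamental-frequency estimate); the `freqs_hz` bin mapping here is hypothetical:

```python
import math

def decode_pitch(freqs_hz, activations, threshold):
    # Take the frequency bin with the highest network activation; if even
    # the peak activation is below the confidence threshold, return NaN.
    peak = max(range(len(activations)), key=activations.__getitem__)
    if activations[peak] < threshold:
        return math.nan
    return freqs_hz[peak]

freqs = [98.0, 110.0, 123.5]                        # hypothetical pitch bins in Hz
print(decode_pitch(freqs, [0.1, 0.8, 0.2], 0.5))    # 110.0
print(decode_pitch(freqs, [0.1, 0.3, 0.2], 0.5))    # nan
```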

Model capacity, specified as Full, Large, Medium, Small, or Tiny. The smaller sizes correspond to fewer parameters in the model, leading to faster computation but lower accuracy.

Block Characteristics

Data Types

double | single

Direct Feedthrough

no

Multidimensional Signals

no

Variable-Size Signals

no

Zero-Crossing Detection

no

References

[1] Kim, Jong Wook, Justin Salamon, Peter Li, and Juan Pablo Bello. “Crepe: A Convolutional Representation for Pitch Estimation.” In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 161–65. Calgary, AB: IEEE, 2018. https://doi.org/10.1109/ICASSP.2018.8461329.

Version History

Introduced in R2023a