How can I create a piece of music using MATLAB?
Given a note file named "toneA.m":
------ note A-----
clear all
Fs=8000;
Ts=1/Fs;
t=[0:Ts:1];
F_A=440; %Frequency of note A is 440 Hz
A=sin(2*pi*F_A*t);
sound(A,Fs);
The frequencies of notes B, C#, D, E and F# are 493.88 Hz, 554.37 Hz, 587.33 Hz, 659.26 Hz and 739.99 Hz, respectively.
How do I write a MATLAB file to produce a piece of music with notes in the following order: A, A, E, E, F#, F#, E, E, D, D, C#, C#, B, B, A, A? Assign each note a duration of 0.3 s.
4 comments
Passband Modulation
21 Sep 2012
Sean de Wolski
21 Sep 2012
Edited: Sean de Wolski, 21 Sep 2012
In grad school I figured out how to play every Linkin Park song:
sound(rand(100000,1),20000)
Saravanakumar Chandrasekaran
23 May 2021
How can I store this sound as an audio file, for example a .WAV or .MP3, in MATLAB?
Walter Roberson
23 May 2021
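A minimal sketch of saving a generated signal with audiowrite; the tone below is a stand-in signal, and release-specific format support should be checked in the audiowrite documentation:

```matlab
% Sketch: save a synthesized tone as a .wav file with audiowrite.
Fs = 8192;                      % sampling rate, Hz
t  = (0:1/Fs:1).';              % 1 second of time, as a column vector
y  = sin(2*pi*440*t);           % stand-in signal: A440
y  = y / max(abs(y));           % keep samples in [-1, 1] to avoid clipping
audiowrite('toneA.wav', y, Fs)  % lossless WAV; .flac and .ogg also work
% audioread('toneA.wav') reads it back; newer releases of audiowrite
% also support writing .mp3
```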
Answers (7)
Star Strider
21 Sep 2012
I suggest:
notecreate = @(frq,dur) sin(2*pi* [1:dur]/8192 * (440*2.^((frq-1)/12)));
notename = {'A' 'A#' 'B' 'C' 'C#' 'D' 'D#' 'E' 'F' 'F#' 'G' 'G#'};
song = {'A' 'A' 'E' 'E' 'F#' 'F#' 'E' 'E' 'D' 'D' 'C#' 'C#' 'B' 'B' 'A' 'A'};
for k1 = 1:length(song)
    idx = strcmp(song(k1), notename);
    songidx(k1) = find(idx);
end
dur = 0.3*8192;
songnote = [];
for k1 = 1:length(songidx)
    songnote = [songnote; [notecreate(songidx(k1),dur) zeros(1,75)]'];
end
soundsc(songnote, 8192)
21 comments
talha farooq
15 Apr 2019
Can you provide the sequence of Notes for playing Game of Thrones Music
Dylan Vizcarra
24 Nov 2019
Can you explain how the value 8192 was derived? Thank you.
Star Strider
24 Nov 2019
@Dylan Vizcarra — At the time (seven years ago), it was the default sampling frequency for sound and soundsc.
Saeed Haidar
7 Jun 2020
Does anyone know how to write code that plays several notes at the same time?
This part plays the song note by note: "song = {'A' 'A' 'E' 'E' 'F#' 'F#' 'E' 'E' 'D' 'D' 'C#' 'C#' 'B' 'B' 'A' 'A'};"
If I want to create a piece of music with two or more instruments playing at the same time, what should I add to the code to make it work?
Walter Roberson
7 Jun 2020
Fs = 8192;
notecreate = @(frq,dur) [sin(2*pi* [1:dur]/8192 * (440*2.^((frq-1)/12))), zeros(1,75)];
notename = {'A' 'A#' 'B' 'C' 'C#' 'D' 'D#' 'E' 'F' 'F#' 'G' 'G#'};
song = {{'A'} {'A'} {'E' 'D#'} {'E' 'D'} {'A' 'F#'} {'A#' 'F#'} {'E' 'C'} {'E' 'C' 'A'} {'D'} {'D' 'D#'} {'C#' 'G'} {'C' 'G#'} {'B'} {'B' 'B#'} {'A' 'A#'} {'A' 'A#'}};
songidx = cellfun(@(Notes) 1+sum(cumprod(string(Notes)~=string(notename'))), song, 'uniform', 0);
dur = 0.3*Fs;
songnote = [];
songnotes = cellfun(@(NoteIdxs) sum(cell2mat(arrayfun(@(NoteIdx) notecreate(NoteIdx, dur), NoteIdxs(:), 'uniform', 0)),1), songidx, 'uniform', 0);
songnote = [songnotes{:}];
sound(songnote, Fs)
Saeed Haidar
9 Jun 2020
Thank you. Does it work in MATLAB R2016a, or do I need a newer version?
ghaith deeb
9 Jun 2020
Hi, does anybody know how I can distribute the notes over a matrix, so that I don't have to include them manually for every instrument? Thank you.
Walter Roberson
9 Jun 2020
The code I posted requires R2016b or later for the string() datatype.
Fs = 8192;
notecreate = @(frq,dur) [sin(2*pi* [1:dur]/8192 * (440*2.^((frq-1)/12))), zeros(1,75)];
notename = {'A' 'A#' 'B' 'C' 'C#' 'D' 'D#' 'E' 'F' 'F#' 'G' 'G#'};
song = {{'A'} {'A'} {'E' 'D#'} {'E' 'D'} {'A' 'F#'} {'A#' 'F#'} {'E' 'C'} {'E' 'C' 'A'} {'D'} {'D' 'D#'} {'C#' 'G'} {'C' 'G#'} {'B'} {'B' 'B#'} {'A' 'A#'} {'A' 'A#'}};
songidx = cellfun(@(Notes) cell2mat(cellfun(@(Note) find(strcmp(Note, notename)), Notes, 'uniform', 0)), song,'uniform',0 );
dur = 0.3*Fs; % needed by notecreate below; was missing from this version
songnotes = cellfun(@(NoteIdxs) sum(cell2mat(arrayfun(@(NoteIdx) notecreate(NoteIdx, dur), NoteIdxs(:), 'uniform', 0)),1), songidx, 'uniform', 0);
songnote = [songnotes{:}];
sound(songnote, Fs)
Walter Roberson
9 Jun 2020
Hi, does anybody know how can i distribute the notes over a matrix? So that i dont manually include them with every instrument.
In this thread, there is no provision for multiple instruments, so you must be talking about other code, and you should be asking in a location appropriate for that code.
Saeed Haidar
20 Jun 2020
I'm trying to use the code for other musical instruments such as violin and guitar, but they all sound like a piano even though I'm using the right notes for the instruments. Can anyone help me fix it, or give me something or somewhere to start from?
Thanks.
Star Strider
20 Jun 2020
Different instruments have different acoustical properties. You will need to model those properties (most likely using some sort of digital filter) to get the result you want.
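As a rough illustration only (not a physical model of any real instrument), summing a few decaying harmonics under an amplitude envelope already changes the timbre away from a pure sine; the harmonic amplitudes here are invented:

```matlab
Fs  = 8192;
t   = 0:1/Fs:0.5;
f0  = 440;                           % fundamental, Hz
amp = [1 0.5 0.25 0.125];            % relative harmonic amplitudes (invented)
y   = zeros(size(t));
for k = 1:numel(amp)
    y = y + amp(k)*sin(2*pi*k*f0*t); % add the k-th harmonic
end
y = y .* exp(-4*t);                  % crude exponential decay envelope
soundsc(y, Fs)
```

Real instrument modeling would replace the invented amplitudes and envelope with measured or filter-derived ones.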
Saeed Haidar
15 Jul 2020
Hello, can someone help me add an envelope to my code for a better audio signal? I tried interp1 but got an error about the lengths. Regards,
Walter Roberson
15 Jul 2020
Caution: audio signals are represented as columns, one column per channel. The time associated with each sample is (0:NumberOfRows-1)/SampleFrequency . Be sure to extract only one channel at a time when you use interp1()
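One hedged sketch of what that looks like in practice: build the envelope with interp1 using exactly one query time per sample, so the envelope and the (single-channel) signal have the same length. The breakpoints here are arbitrary:

```matlab
Fs = 8192;
y  = sin(2*pi*440*(0:1/Fs:0.5)).';  % one channel, as a column vector
n  = numel(y);
tq = (0:n-1).'/Fs;                  % query times: one per sample
% envelope breakpoints (time, amplitude) -- a crude attack/decay shape
tb  = [0 0.05 0.1 0.4 0.5];
ab  = [0 1    0.7 0.6 0  ];
env = interp1(tb, ab, tq, 'linear');
y = y .* env;                       % lengths agree, so no size error
sound(y, Fs)
```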
Saeed Haidar
24 Jul 2020
OK, but how can I extract one channel at a time from this? If I applied it to "song=[line1 line2 line3]", it would still be a problem for "line1", "line2", and "line3".
Regards,
fs = 44100;
c5=key(52,8,fs);
a4=key(49,8,fs);
b4=key(51,8,fs);
e4=key(44,8,fs);
e5=key(56,8,fs);
d5=key(54,8,fs);
f5=key(57,8,fs);
line1= [ e4 a4 b4 c5 a4 ];
line2= [ e4 a4 b4 c5 a4 ];
line3= [ e4 a4 b4 c5 b4 a4 c5 b4 a4 e5 e5 e5 d5 e5 f5 f5];
song=[line1 line2 line3];
sound(song,fs,24);
function wave= key(p,n,fs)
t=0:1/fs:4/n;
idx=440*2^((p-49)/12);
tt=4/n:1/fs:0;
wave=(sin(2*pi*idx*t)).*exp(tt);
wave=wave./max(wave);
end
Walter Roberson
24 Jul 2020
Are line1, line2, and line3 intended to be different channels, three channels simultaneously? If so then do you want silence on channel 1 and channel 2 while channel 3 continues playing ?
Are you wanting to mix down three channels to two channels ? If not, then is the third channel intended to be the bass for something like a Dolby 5 or Dolby 7.1 system?
Saeed Haidar
24 Jul 2020
The lines {line1 line2 line3} correspond to the same song; we split them up this way so we can separate them with a rest time. line1, line2, and line3 play in order, as shown, not at the same time.
And yes, when each line finishes playing its notes, the next line starts playing.
Sorry, I forgot to add the rest function before.
zero=rest(4,fs);
zeroh=rest(8,fs); %0.5 seconds
song=[line1 zeroh line2 zeroh line3];
function wave= rest(n,fs)
t=0:1/fs:4/n;
tt=4/n:-1/fs:0;
wave=0*sin(2*pi*t).*exp(tt);
end
Walter Roberson
24 Jul 2020
tt=4/n:1/fs:0;
That constructs an empty vector unless n is negative.
If n is negative then t=0:1/fs:4/n; would be an empty vector.
It looks to me as if you might be wanting
tt = fliplr(t);
Saeed Haidar
24 Jul 2020
tt=4/n:-1/fs:0;
I think "tt" works the same as fliplr(t), since the step is -1/fs, in descending order.
I can fix it, but my problem is with extracting the channels for the envelope; I can't figure out how to do it.
Walter Roberson
24 Jul 2020
tt=4/n:-1/fs:0; is not exactly the same as fliplr(t). For example let n=5 and fs=6. In forward direction the result would be approximations of 0, 1/6, 2/6, 3/6, 4/6 and then would stop because 5/6 would exceed 4/5. In the reverse direction you would get 4/5, 4/5-1/6=19/30, 4/5-2/6=7/15, 4/5-3/6=3/10, 4/5-4/6=2/15 and then stop because subtraction of another 1/fs would go below 0.
Walter Roberson
24 Jul 2020
You do not have multiple channels, so there is no need to extract channels for the envelope. Multiple channels would require that you have multiple notes at the same time. I showed a framework for representing that, above, with each cell array entry indicating notes to be played at the same time.
Though it is not actually notes playing at the same time that is important. What matters for multiple channels is that you have sound being emitted at different physical points, or sounds for different devices that might be mixed down to a single physical point but with different treatment; different flows of notes merged together at a single physical point is also a possibility. It depends on what you are trying to do. At the moment you are synthesizing one note at a time, with one emission location and one logical flow of notes, so what you have is a single mono source: one channel. Multiple channels would imply multiple sources that you want to treat differently.
Ryan Black
27 Dec 2017
Edited: Ryan Black, 16 Apr 2020
Yep, I built a comprehensive music synthesizer in MATLAB. Hear a song HERE:
Additive Synthesis manipulates and superimposes fundamental sine waves to create sounds with unique timbres. This models differential equation solution methods derived by Fourier (steady state) and Laplace (transient). The methods can be used to analyze periodic motion from springs, electrical circuits, heat transfer, sound! If you look the stuff up on Wikipedia, you will be disheartened by its complexity, yet it can be explained intuitively:
Most people live their whole mathematical lives thinking in terms of time (distance/speed of a car vs. time, force vs. time, profit vs. time). But not all systems are best understood or easiest to solve this way. Like, it's possible to graph the position of a spring vs. time, but when conditions become more complex it's insanely hard to!!! So we transform from the TIME domain to the…………. FREQUENCY domain!!! One such method is called the Fast Fourier Transform (FFT).
This allows us to analyze seemingly chaotic time-domain signals (cello tones, vowel sounds, etc.) by making sense of them in the frequency domain. The signal becomes a superposition of simpler frequency components with different scaling factors (rather than random wave scribbles). The more pleasing the signal, the more ordered (harmonic) the frequency components are on the graph. Such that {whistle, flute, ahhhhhh vowel} will look more clean on a frequency domain power spectrum graph and {ssssss consonant, engine grumbling} will be less clean though in the time domain this might be hard to distinguish.
At this point we can collect data or clean up the signal and inverse FFT back to the time domain for realistic sounds. This whole process is called sampling but you don't HAVE to be so technical. If you want to just add some harmonic (non-harmonic, too) sin waves together willy nilly and play them, they could still sound good, it will just be harder to achieve a desired effect without the empirical data. Although, I created a realistic bell using a more developed version of this guess-and-check method (using rand functions and transient power spectrums, mostly).
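The time-to-frequency round trip described above, in miniature (a toy, not the synthesizer code): three harmonics are summed, the FFT locates them, and the inverse FFT recovers the waveform:

```matlab
Fs = 8192;
t  = 0:1/Fs:1-1/Fs;               % exactly Fs samples -> 1 Hz bin spacing
x  = sin(2*pi*220*t) + 0.5*sin(2*pi*440*t) + 0.25*sin(2*pi*660*t);
X  = fft(x);
f  = (0:numel(x)-1)*Fs/numel(x);  % frequency axis, Hz
half = 1:numel(x)/2;              % one-sided spectrum
[~, locs] = maxk(abs(X(half)), 3); % maxk requires R2017b or later
sort(f(locs))                     % the three harmonics: 220 440 660
x_back = real(ifft(X));           % inverse FFT returns the time signal
```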
Other than the main theory (and applied music theory), the rest of building a synthesizer is being able to store and call arrays of data in fast user-friendly ways, figuring out quantitative equations for beats per minute, how to insert a variable diminished sustain into a volume envelope array, how to keep everything organized as to continue building the program.. So just tedious coding stuff is the bulk of the work.
Additive Synthesis Discrete Equation below (arrays/scalars are undefined in the code because I don't want to give you my entire program. Do some work on your own! To fully help you start I explained the arrays/scalars in the comments)
%%-------------------BUILD MASTER WAVE EQUATION-------------------------%%
%%----------------------------------------------------------------------%%
%%fLS for Loop Section, loop through "chord, harmonics, clusters"-------%%
for mm=1:length(chord)
    if chord(mm)>0
        [AM,FM]=modperiodic(modpackl,modpackh,chord(mm)*freq(qq),...
            t,volumemast,volchildren,transfreq);
        for nn=1:size(ppp,1)
            if aspec(nn)>0
                place=1-(((clustersize-1)/2)*offset);
                flip1=0;
                for oo=1:clustersize
                    dil=dilsize^(((clustersize-1)/2)+oo-1-flip1);
                    [randstab]=rndgen(error_overtonal);
                    if nn==1 || nn==2
                        [randstab]=rndgen(error_tonal);
                    end
                    %build wave
                    y = y + dil*AM.*ppp(nn,:).*...
                        sin(FM.*transfreq.*t*randstab*place*...
                        2*pi*nn*freq(qq)*chord(mm));
                    place=place+offset;
                    if oo>=clustersize/2
                        flip1=flip1+2;
                    end
                end
            end
        end
    end
end
end % closes an outer loop (over qq) not shown in this excerpt
%%NaL Noise added and loudness envelope applied-------------------------%%
y=((randi(100,1,length(volumemast))/200)-.25)...
.*noisethres.*volumemast.^2.5+y; %noise
y=y.*volumemast.^2.5; %volume envelope final contour
y=y/max(abs(y(1,:))); %and normalize!
% y = single row accumulative sound wave vector, time/amplitude normalized to volumemast
% ppp = transient amplitude spectrum proportion array (colsize is equal length as y, rowsize is equal to # overtones), time/amplitude normalized to volumemast
% mm, nn, oo = array element iterators
% freq(qq) = fundamental frequency, scalar (dependent on melody/modulation/8va/vb/chord iteration data)
% transfreq = transient non-sinusoidal frequency modulation envelope, time normalized to volumemast (must be smooth)
% chord(mm) = fundamental frequency multiplier, scalar
% t = note time vector (equal length as y) sampling at Fs
% FM = high/low transient Frequency Modulation vector (equal length as y), dependent on freq(qq) and time/amplitude normalized to volumemast
% AM = high/low transient Amplitude Modulation Array (equal length as y), dependent of freq(qq) and time/amplitude normalized to volumemast
% randstab = random number between 1+delta and 1-delta regenerated each loop in function: randgen... Acts as a pitch destabilizer for tones and overtones.
% error_tonal/error_overtonal = pitch destabilizer scalars for randgen function
% place = tone cluster scalar, superimposes equally-spaced-f notes around a max-power tonal center
% offset = linear additive iterator for place, scalar
% clustersize = number of superimposed clustered notes for place, scalar
% dil/dilsize/flip1 = cluster power dilation variables
% volumemast = MASTER transient ADSR vector (equal length as y)
% noisethres = transient noise vector, time/amplitude normalized to volumemast
8 comments
Star Strider
27 Dec 2017
If you haven’t already, upload it to the File Exchange (link) with all necessary documentation, then post that link instead.
Ryan Black
30 Dec 2017
I'll be linking a small technical acoustical processing application to the file exchange shortly.
The application scans a sound file containing many distinct time-domain signals then analyzes the transient frequency spectrum of each signal separately (using noise and sound trigger thresholds for signal distinction and fft of equal sized time windows for frequency transience).
The app then compares a test signal with each distinct signal to find the strongest cross-correlation. A user could implement this function as a musical chord identifier or as a custom voice commander.
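A toy version of the matching step (assuming the Signal Processing Toolbox for xcorr; the reference signals and test tone are invented): compare one test signal against two stored references and keep the strongest peak cross-correlation:

```matlab
Fs   = 8192;
t    = 0:1/Fs:0.2;
refA = sin(2*pi*440*t);                  % stored reference signal A
refB = sin(2*pi*660*t);                  % stored reference signal B
test = sin(2*pi*440*t + 0.3);            % phase-shifted, A-like test signal
scoreA = max(abs(xcorr(test, refA)));    % peak cross-correlation vs. A
scoreB = max(abs(xcorr(test, refB)));    % peak cross-correlation vs. B
if scoreA > scoreB
    disp('closest match: refA')
else
    disp('closest match: refB')
end
```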
Star Strider
30 Dec 2017
They sound interesting. I’ll download both to my desktop machine (Ryzen 7 1800/64GB) and experiment with them there.
Ryan Black
30 Dec 2017
Cool. If you have any general code optimization advice, please let me know. I'm just getting started with all of this!
Star Strider
30 Dec 2017
I will.
It seems you’re already doing relatively sophisticated coding, though.
Ryan Black
20 Mar 2018
I am finishing up MATLAB MUSIC GUI (Sound Sampling/Recognition/Synthesis/Sequencing suite) and will put it on the file exchange when complete, clean, and user-friendly (hopefully 1-3 months)! In the meantime, interested people can preview the remodeled "Fully-Transient Master Wave Equation" I published above.
Chiemela Victor Amaechi
24 Nov 2019
Edited: Chiemela Victor Amaechi, 24 Nov 2019
@Ryan, that's cool. I would like to see the Mr Polygon GUI created for this.
@Passband, you might want to see some files created on File Exchange for Music Piano and other links. Do look at these links and other references:
For References:
Ryan Black
16 Apr 2020
Oh my I have come so far since this post :/ I don't even know what to say. Uh...
Look here?
Daniel Shub
21 Sep 2012
1 vote
Wayne King
21 Sep 2012
Edited: Wayne King, 21 Sep 2012
Simple sine waves are not going to sound like music even if you string them together. I'm not a music expert by any stretch of the imagination, but an A played on a piano vs. guitar sounds different (and much richer) because of the harmonic structure.
I have not done the notes in the order you give, but you can easily modify:
Fs=8000;
Ts=1/Fs;
t=[0:Ts:0.3];
F_A = 440; %Frequency of note A is 440 Hz
F_B = 493.88;
F_Csharp = 554.37;
F_D = 587.33;
F_E = 659.26;
F_Fsharp = 739.99;
notes = [F_A ; F_B; F_Csharp; F_D; F_E; F_Fsharp];
x = cos(2*pi*notes*t);
sig = reshape(x',6*length(t),1);
soundsc(sig,1/Ts)
12 comments
Franklin
10 Oct 2014
I'm able to replicate your example.
But what if, instead of x being cos(2*pi*note*t), x is defined as: 0.25*cos(2*pi*F_A*t) + 0.5*cos(2*pi*F_B*t) + 0.75*cos(2*pi*F_C*t)?
I'm not sure how to setup the constants outside of the cosine functions.
Jonas Törne
27 May 2018
Could you explain what sig=reshape(x',6*length(t),1); means? What does it do and why?
Walter Roberson
27 May 2018
The frequencies are defined across rows, one row per note. x' flips that so that they are down columns, one column per note. reshape() with final component 1 rearranges that into a single column vector -- so all of the samples of the first note, then all of the samples of the second note, and so on.
The more general method would be to use:
x = cos(2*pi*notes*t) .';
sig = x(:);
Saravanakumar Chandrasekaran
20 May 2021
Is it possible to extract the notes back from sig?
Walter Roberson
20 May 2021
Because all of the samples are the same size, you could reshape sig back to rows of length(t); that would get you one note per column. After that, you would need to determine the frequency of each column.
You could try determining the frequency using fft(). Or you could use
cols = reshape(sig, length(t), []);
npeak = sum(islocalmax(cols));
approx_freq = npeak ./ max(t);
This gives
436.666666666667 493.333333333333 553.333333333333 586.666666666667 656.666666666667 736.666666666667
The real frequency will be between that and what would be implied by up to one more full cycle.
Saravanakumar Chandrasekaran
21 May 2021
Thank you Walter
tom cohen
14 Oct 2021
How can I plot time vs. amplitude for each one of these notes separately?
Walter Roberson
14 Oct 2021
If you were to do
cols = reshape(sig, length(t), []);
then you could do
plot(t, cols)
legend(string(notes))
Note: duplicate notes would be drawn in exactly the same location. Times plotted would be relative to the beginning of the note, not absolute time.
tom cohen
15 Oct 2021
Thanks very much, Walter. How about a plot in the frequency domain, and also a plot with a dB scale?
Walter Roberson
15 Oct 2021
See the first example in the fft() documentation for frequency plot.
dB scales are for power, but when you have two pure sine or pure cosine signals that are the same amplitude, then I calculate that their powers are the same no matter what their frequencies are.
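For the dB plot specifically, a common sketch is to normalize the magnitude spectrum and take 20*log10, so the strongest component sits at 0 dB:

```matlab
Fs = 8192;
t  = 0:1/Fs:0.3;
x  = cos(2*pi*440*t);                    % one of the notes
N  = numel(x);
X  = fft(x);
f  = (0:N-1)*Fs/N;                       % frequency axis, Hz
mag_db = 20*log10(abs(X)/max(abs(X)));   % 0 dB at the strongest bin
half = 1:floor(N/2);                     % one-sided plot
plot(f(half), mag_db(half))
xlabel('Frequency (Hz)'), ylabel('Magnitude (dB)')
```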
tom cohen
21 Oct 2021
Thank you. Maybe you could help with this too, please?
Why can't I play this?
fs = 8192;
dt = 1/fs;
L = 1;
L=L*fs;
t = (0:1:L-1)*dt-0.5;
NoteFreq1 = 261.63;
NoteFreq2 = 659.26;
NoteFreq3 = 440;
NoteDuration = 0.25;
NoteSpace =0.1;
Note=abs(t)<=((NoteDuration/2).*cos(2*pi*NoteFreq1*t)+((NoteDuration/2)+ NoteSpace).*cos(2*pi*NoteFreq2*t)+((NoteDuration/2)- NoteSpace).*cos(2*pi*NoteFreq3*t));
CleanInput = audioplayer(Note, fs);
play(CleanInput);
Walter Roberson
21 Oct 2021
Note=abs(t)<=((NoteDuration/2).*cos(2*pi*NoteFreq1*t)+((NoteDuration/2)+ NoteSpace).*cos(2*pi*NoteFreq2*t)+((NoteDuration/2)- NoteSpace).*cos(2*pi*NoteFreq3*t));
% ^^^
You have a comparison, so the output is logical. You cannot play a logical vector.
You could double(Note) to get a series of 0 and 1 values and play that, but it isn't clear that is what you would want.
Cliff Bradshaw
22 Jul 2015
0 votes
There is a free set of four files called "MATLAB JukeBox" that can be downloaded from GitHub:
If you look at the individual song files you'll be able to figure out the syntax.
By typing "JukeBox()" into the console you can play three songs that come in the package!
Hope this helps!
Aurelija V
10 Mar 2016
0 votes
Hi, I have a question about playing music in MATLAB... How do I make the melody sound like a violin? I get a melody, but it sounds like a synthesizer, and I have no idea how to make it sound nice.
1 comment
Image Analyst
11 Mar 2016
To make a natural-sounding piece of music, you'll have to play a recording of an actual instrument. I don't know how natural is "natural enough" for you, but even with good synthesized music, a trained musician such as Itzhak Perlman would most probably be able to tell natural from synthesized.
Khai Nguyen
31 May 2022
0 votes
Hi,
How can I create the geometry and material properties (which determine k & m) of an instrument body, a bridge, and one string (e.g. A, D, G, or C), and develop a 2D lumped-mass model of it in MATLAB?
1 comment
Walter Roberson
31 May 2022
I suspect that you would need the PDE Toolbox.