Referring only to the code in the original question, it looks like there is a mix-up between the variable that defines the duration of the signal and the sampling period used to generate the samples, but it's hard to say because the complete code is not posted in the question. I think you're looking for something like this.
Define the signal of interest:

```matlab
xfunc = @(t) exp(-a.*t).*((t >= 0) & (t <= T));
```
Pick a sampling period Ts. Though not really necessary, choose Ts such that T/Ts is a nice integer.
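For instance (`a` and `T` are assumed to come from your original code; the specific ratio here is just an example):

```matlab
% Example choice: T/Ts = 100 is an integer, so the sampling grid
% lands exactly on the discontinuity at t = T.
Ts = T/100;
```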
Define the frequency vector for the CTFT of x(t) and then compute the CTFT:

```matlab
omega_c = linspace(-pi/Ts,pi/Ts,4097);
XCTFT = (1 - exp(-(a + 1i*omega_c)*T)) ./ (a + 1i*omega_c);
```
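The second line is the closed form of the CTFT of the truncated exponential:

$$X(\omega) = \int_0^T e^{-at}\,e^{-j\omega t}\,dt = \frac{1 - e^{-(a + j\omega)T}}{a + j\omega}$$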
Take samples of x(t). Adjust the endpoints to account for the effect of impulse sampling at the discontinuities; I think this is similar to your adjustment of xf(1).
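A sketch of these two steps (the names `tn` and `xn` are my choice, not from your code, and I'm assuming T/Ts is an integer so the grid hits both endpoints):

```matlab
tn = 0:Ts:T;      % sample instants, including both endpoints
xn = xfunc(tn);   % samples of x(t)
% At a jump discontinuity the impulse-sampling model effectively
% picks up the midpoint of the jump, so halve the samples at t = 0
% (jump from 0 to 1) and at t = T (jump from exp(-a*T) to 0).
xn(1)   = xn(1)/2;
xn(end) = xn(end)/2;
N = numel(xn);    % equals T/Ts + 1, which is odd when T/Ts is even
```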
Compute the DFT
Do the fftshift and get the associated frequency vector, here for N odd:

```matlab
omega_n = (-(N-1)/2 : (N-1)/2)*2*pi/N/Ts;
```
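The DFT and shift might look like this (assuming the adjusted samples are in a vector `xn` and using `XDFT` as my name for the result):

```matlab
XDFT = fftshift(fft(xn));  % reorder the bins so DC sits in the
                           % middle, matching the ordering of omega_n
```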
Compare the two. Note the scaling by Ts on the DFT.
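Putting the comparison together, a plot along these lines should show the scaled DFT samples tracking the CTFT (again assuming my names `xn` and `XDFT` for the adjusted samples and the shifted DFT):

```matlab
plot(omega_c, abs(XCTFT), 'b-');    % CTFT magnitude on the dense grid
hold on;
plot(omega_n, Ts*abs(XDFT), 'r.');  % DFT magnitude, scaled by Ts
xlabel('\omega (rad/s)');
legend('|X_{CTFT}(\omega)|', 'Ts|X_{DFT}|');
```

The factor Ts comes from approximating the CTFT integral by a Riemann sum over the samples, i.e. X(ω) ≈ Ts·Σ x(nTs)e^{-jωnTs}; without it the DFT values are larger by 1/Ts.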