The error is thrown because the convolution1dLayer outputs data in 'CBT' format while the featureInputLayer outputs 'CB' format, and the concatenation layer cannot combine these mismatched formats. Even if the formats did agree, note that a sequenceInputLayer must be the only input layer in a network, so you cannot pair it with an additional featureInputLayer.
From your comment "feature to be 1xNumberOfData and sequence to be 25xNumberOfData": do you have sequence data with a genuine time dimension, or is this 25 scalar values that you would like to convolve over? If it is the latter, you could treat these 25 scalar values as a spatial dimension.
One way to do this is to set up an imageInputLayer for 25x1x1 2-D images ('SSC' format). You can then convolve over just the first dimension and flatten the spatial dimensions into the channels of the convolution output (in your example each convolution layer has 32 filters). Concatenating the 25x32 = 800 channels from the flattened convolution output with the 1 channel from the feature input passes 801 channels to the subsequent fullyConnectedLayers.
For example, this code snippet constructs such a layer graph.
lgraph = layerGraph();

tempLayers = [
    imageInputLayer([25 1 1],"Name","imageinput")
    convolution2dLayer([3 1],32,"Name","conv_1","Padding","same")
    convolution2dLayer([3 1],32,"Name","conv_2","Padding","same")
    flattenLayer("Name","flatten")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = featureInputLayer(1,"Name","featureinput");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
    concatenationLayer(1,2,"Name","concat")
    fullyConnectedLayer(10,"Name","fc_1")
    fullyConnectedLayer(10,"Name","fc_2")
    fullyConnectedLayer(10,"Name","fc_3")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];
lgraph = addLayers(lgraph,tempLayers);
lgraph = connectLayers(lgraph,"featureinput","concat/in1");
lgraph = connectLayers(lgraph,"flatten","concat/in2");
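To train a network with two inputs like this, you supply the data through a single combined datastore whose reads yield both inputs plus the response, in the same order as the network's input layers. A minimal sketch (the random data and sizes here are illustrative assumptions, not taken from your setup):

```matlab
% Illustrative data: 100 observations of a 25x1x1 "image" plus one scalar feature.
numObs   = 100;
seqData  = rand(25,1,1,numObs);           % 25x1x1 images; 4th dimension = observations
featData = rand(numObs,1);                % one scalar feature per observation
labels   = categorical(randi(10,numObs,1));

% Combine into one datastore; each read returns {image, feature, response}.
dsSeq   = arrayDatastore(seqData,"IterationDimension",4);
dsFeat  = arrayDatastore(featData);
dsLabel = arrayDatastore(labels);
dsTrain = combine(dsSeq,dsFeat,dsLabel);

options = trainingOptions("adam","MaxEpochs",5,"Verbose",false);
net = trainNetwork(dsTrain,lgraph,options);
```

The order of the datastores in combine must match the order of the input layers in lgraph (here "imageinput" was added first, then "featureinput").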