Sadly, there is no optimal amount of data, no optimal distribution of points. Only as much data as you can get.
The quality of the surface reconstruction will improve with more data, and more evenly distributed data on the surface. How could it not?
At the same time, there will also be areas where finer sampling is necessary, because those areas contain cusps or other highly nonlinear features.
It is ironic that the one part of your shape where not as much information is needed is exactly where the surface is very well behaved. There, almost any scheme will work well and will probably yield a decent approximation, even from a less dense sampling. At the same time, that will also be the region where one could most easily generate more data, but who cares? You don't need it there.
Can you somehow generate extra data, purely from the data you already have? That question gets a fairly emphatic no. At the very least, it would be difficult as hell, and would require a great deal of information about the surface that you do not have.
Consider this very simple shape, described only by some scattered data points:
% Vertices of a polygon with a deep interior cusp
px = [0 1 1 .6 .5 .4 0 0];
py = [0 0 1 1 .2 1 1 0];
S = polyshape(px,py);

% Sample uniformly in the unit square, then keep only the points inside S
xy = rand(1000,2);
in = isinterior(S,xy);
xy = xy(in,:);
Where exactly does the end of that interior cusp lie? How far down does it go? Yes, if you were willing to make a variety of assumptions based on the known shape of the bounding polygon, you could recover the shape of that hole reasonably well. But making up more data, based on nothing more than the data you already have, will not improve your ability to predict the shape of that internal cusp.
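One way to see this, a hedged sketch only, is to try reconstructing the boundary from the sampled points with alphaShape. The alpha radius is an assumption you supply, and the depth of the recovered cusp is dictated by that assumption, not by the data. (The specific alpha values below are illustrative choices, not recommendations.)

% Two reconstructions from exactly the same points, differing only
% in the assumed alpha radius.
shpTight = alphaShape(xy(:,1), xy(:,2), 0.05);  % small alpha: deeper, jagged notch
shpLoose = alphaShape(xy(:,1), xy(:,2), 0.5);   % large alpha: the notch fills in

% The two shapes disagree about how far down the cusp extends,
% even though both saw identical data.
plot(shpTight), hold on
plot(shpLoose)
hold off

Every choice of alpha tells a different story about the bottom of the cusp; the data alone cannot pick among them.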
Essentially, if you want more data, then you need to improve the process that generated the data, not manufacture more data from the data you already have.