You will need to explain how "coverage" is determined.
Presumably a camera placed in a low area would not be able to see "through" a high area, but by the same token a camera placed in a high area would not necessarily be able to see into a dip, depending on the angle. Are the two directions to be weighted evenly?
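To make that concrete, here is a minimal line-of-sight sketch over a 1-D terrain profile (all names and the toy terrain are illustrative, not from the question): a ray from the camera to a target point is blocked if any intervening terrain column rises above it. Note how the camera in the dip is blocked by the ridge in one direction but not the other.

```python
import numpy as np

def line_of_sight(heights, cam_idx, cam_height, target_idx):
    """Return True if a camera mounted cam_height above the terrain at
    cam_idx can see the ground at target_idx.  heights is a 1-D profile."""
    x0, z0 = cam_idx, heights[cam_idx] + cam_height
    x1, z1 = target_idx, heights[target_idx]
    lo, hi = sorted((x0, x1))
    for x in range(lo + 1, hi):
        # Height of the sight line at intermediate column x.
        t = (x - x0) / (x1 - x0)
        line_z = z0 + t * (z1 - z0)
        if heights[x] > line_z:      # terrain blocks the ray
            return False
    return True

terrain = np.array([0.0, 5.0, 0.0, 0.0, 3.0, 0.0])
blocked = line_of_sight(terrain, 2, 1.0, 5)   # ridge at index 4 intervenes
clear = line_of_sight(terrain, 2, 1.0, 3)     # adjacent flat ground is visible
```

A real implementation would cast rays over a 2-D heightfield, but the blocking test is the same idea.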
What about image quality degrading when viewing a curved surface obliquely? What about image quality degrading with distance?
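One could fold both effects into a per-patch quality weight rather than a binary visible/not-visible test. The following is a toy weighting, not a calibrated model: the angle term falls off for grazing views, and the distance term falls off inverse-square beyond an assumed reference distance.

```python
import math

def view_quality(distance, incidence_deg, ref_distance=10.0):
    """Illustrative quality score for one surface patch.
    incidence_deg is the angle between the view ray and the surface
    normal, so 0 means looking straight down onto the patch."""
    angle_term = max(math.cos(math.radians(incidence_deg)), 0.0)
    distance_term = (ref_distance / max(distance, ref_distance)) ** 2
    return angle_term * distance_term
```

A head-on view at the reference distance scores 1.0; grazing or distant views score near 0, and "coverage" becomes the sum of such weights instead of a count of visible cells.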
Should the algorithm ray-trace from each proposed camera location to calculate the angular area of the surface's projection as seen from that location? But then one should really include the direction of the lighting, reflectivity, and diffusion. And if the distances are large enough, refraction as well; or, for that matter, if the distances are small enough, refraction of the light rays near the surface could be important. The frequency mix of the illumination should be included too.
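The "angular area" part can be estimated by Monte Carlo ray casting: sample directions uniformly from the camera, count the fraction that hit the surface, and scale by 4π steradians. As a self-contained check (the sphere target is a stand-in for an arbitrary surface), the estimate below can be compared against the known solid angle of a sphere, Ω = 2π(1 − √(1 − (r/d)²)).

```python
import math, random

def solid_angle_of_sphere(radius, distance, samples=200_000, seed=0):
    """Monte Carlo estimate of the solid angle subtended at the origin by
    a sphere of `radius` centred at (0, 0, distance)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        # Uniform direction on the unit sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        d = (s * math.cos(phi), s * math.sin(phi), z)
        # The ray hits the sphere iff the perpendicular distance from the
        # sphere centre to the ray is below the radius (and it points forward).
        dot = d[2] * distance              # d · centre
        if dot > 0 and distance ** 2 - dot ** 2 < radius ** 2:
            hits += 1
    return 4.0 * math.pi * hits / samples

est = solid_angle_of_sphere(1.0, 4.0)
exact = 2.0 * math.pi * (1.0 - math.sqrt(1.0 - (1.0 / 4.0) ** 2))
```

Lighting, reflectivity, and refraction would then weight each sample ray rather than merely counting it, which is where the cost of the full simulation starts to climb.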
With the area divided into sections, is the view of each camera to be cut off at the boundary of its section? If one camera at a high point in section 1 has a good view of portions of section 2, then the best position for the camera in section 2 might be one that captures additional local detail rather than the best view of just that one section.
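That observation suggests treating placement as a joint maximum-coverage problem over the whole area instead of optimizing each section in isolation. A common heuristic for this (the problem is NP-hard in general) is greedy selection: repeatedly pick the candidate that adds the most newly covered ground. The candidate names and coverage sets below are invented toy data; a real system would derive them from the visibility analysis.

```python
# Which ground cells each candidate camera position sees, ignoring
# section boundaries (toy data, purely illustrative).
coverage = {
    "s1_high": {0, 1, 2, 3, 4},    # high point in section 1 sees into section 2
    "s1_low": {0, 1},
    "s2_wide": {3, 4, 5},
    "s2_detail": {4, 5, 6, 7},     # local detail, little overlap with s1_high
}

def greedy_pick(coverage, k):
    """Greedy max-coverage: pick k cameras, each time choosing the one
    that covers the most not-yet-covered cells."""
    chosen, covered = [], set()
    for _ in range(k):
        cam = max(coverage, key=lambda c: len(coverage[c] - covered))
        chosen.append(cam)
        covered |= coverage[cam]
    return chosen, covered

chosen, covered = greedy_pick(coverage, 2)
```

With this data the greedy pass takes the section-1 high point first and then the detail position in section 2, exactly the cross-boundary trade-off described above.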
Oh yes, nearly forgot: camera lens off-axis distortion needs to be taken into account as well. Through a lens, an object viewed directly on-axis is less distorted than one further from the lens axis, and you are going to get different results if you have a fish-eye lens, for example.
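One standard way to model this is the Brown-Conrady radial distortion polynomial; the coefficient values here are made up for illustration. With a negative k1 (barrel, fish-eye-like distortion), points far from the axis are pulled inward while the on-axis point is unaffected:

```python
def radial_distortion(x, y, k1, k2=0.0):
    """Map ideal (undistorted) normalised image coordinates to distorted
    ones using the Brown-Conrady radial terms.  k1 < 0 gives barrel
    distortion; the shift grows with distance from the lens axis."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

center = radial_distortion(0.0, 0.0, k1=-0.2)   # on-axis: unchanged
edge = radial_distortion(0.8, 0.0, k1=-0.2)     # off-axis: pulled inward
```

So per-pixel "quality" varies across the frame even before any of the geometric considerations above, and a severe fish-eye needs a different projection model entirely.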