Intuitively, with a constant weight initialization, every neuron in a layer computes the same output during the initial forward pass, and during backpropagation every neuron in that layer receives exactly the same gradient. The symmetry is never broken, so the network has no way to make the neurons differ from one another, no matter how long it trains. Any constant initialization (all zeros, all ones, or any other single value) suffers from this problem and is best avoided.
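To make this concrete, here is a minimal sketch, assuming a tiny 4-3-1 network with a sigmoid hidden layer and mean squared error loss (the architecture, data, and loss are illustrative choices, not taken from the text). With constant weights, the hidden units produce identical activations and identical gradients:

```python
import numpy as np

# Minimal sketch: a tiny 2-layer network with constant weight initialization.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))            # 8 samples, 4 input features
y = rng.normal(size=(8, 1))            # regression targets

W1 = np.full((4, 3), 0.5)              # constant weights: every hidden unit is the same
W2 = np.full((3, 1), 0.5)

h = 1 / (1 + np.exp(-x @ W1))          # hidden activations: all 3 columns are identical
out = h @ W2

# Backward pass (mean squared error): the gradient for every hidden unit matches
# exactly, so gradient descent can never make the units differ from one another.
d_out = 2 * (out - y) / len(x)
dW2 = h.T @ d_out
d_h = (d_out @ W2.T) * h * (1 - h)
dW1 = x.T @ d_h

print(np.allclose(h[:, 0], h[:, 1]))       # True  -> identical activations
print(np.allclose(dW1[:, 0], dW1[:, 1]))   # True  -> identical gradients
```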
Initializing the weights with values sampled from a random distribution, rather than with constants like zeros or ones, helps the network train better and faster. The randomness breaks the symmetry between neurons, so they do not all end up learning the same features, and it gives gradient-based optimization a distinct gradient for each weight to work with, which guides the updates far more effectively. This is why random weight initialization is the standard choice in practice.
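Repeating the same sketch with randomly drawn weights shows the symmetry being broken; the small-variance normal distribution used here is just one illustrative choice of random initializer:

```python
import numpy as np

# Same tiny 4-3-1 network, but with weights drawn from a small random normal
# distribution: the hidden units now start out different, receive different
# gradients, and can therefore learn different features.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 1))

W1 = rng.normal(scale=0.1, size=(4, 3))    # random initialization breaks the symmetry
W2 = rng.normal(scale=0.1, size=(3, 1))

h = 1 / (1 + np.exp(-x @ W1))
out = h @ W2
d_out = 2 * (out - y) / len(x)
d_h = (d_out @ W2.T) * h * (1 - h)
dW1 = x.T @ d_h

print(np.allclose(h[:, 0], h[:, 1]))       # False -> units compute different features
print(np.allclose(dW1[:, 0], dW1[:, 1]))   # False -> units receive different updates
```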