The Math approach has been surprisingly successful in some astoundingly large tasks, at least by the standards of a few decades ago.
Calculating pi to many billions of digits has required about 10^{17} operations without error.
The results have been checked.
Similar, more useful calculations have successfully adopted the Math approach.
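Such record pi computations are typically checked by an independent method; one standard technique is BBP-style digit extraction, which computes hexadecimal digits of pi at an arbitrary position without computing the digits before them, so a handful of spot-checks can validate a multi-billion-digit run. A minimal sketch (function names are mine, not from the text):

```python
def frac_sum(j, n):
    """Fractional part of sum over k>=0 of 16^(n-k) / (8k+j), as used by BBP."""
    s = 0.0
    for k in range(n + 1):
        # Non-negative powers of 16 are reduced with modular exponentiation.
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:
        # The remaining terms shrink geometrically; stop once negligible.
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(n, count=6):
    """Hex digits of pi starting at position n+1 after the hexadecimal point."""
    x = (4 * frac_sum(1, n) - 2 * frac_sum(4, n)
         - frac_sum(5, n) - frac_sum(6, n)) % 1.0
    out = []
    for _ in range(count):
        x *= 16
        d = int(x)
        out.append("0123456789ABCDEF"[d])
        x -= d
    return "".join(out)

print(pi_hex_digits(0))  # hex expansion of pi begins 3.243F6A88...
```

A full-length computation done by a different algorithm can then be compared against a few such independently extracted positions.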

The claim about 10^{17} operations requires qualification here.
This site recounts in detail the saga of failing CPUs and disks and the extra code needed to detect these failures.
The claim is not that all 10^{17} operations were executed without error, but that the checked calculation could not have succeeded unless the code expressing the algorithm was correct.
There is no explanation for the success of this calculation other than that the code was right and that the concatenation of good runs indeed amounted to 10^{17} consecutive correct operations.
The engineering described to tolerate and overcome these errors is worth study.
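The source does not spell out that engineering, but a common shape for such fault-tolerant long computations is to divide the work into checkpointed segments, verify each segment before committing it (for instance by independent recomputation and comparison), and redo any segment that fails, so that only verified work is concatenated. A sketch under those assumptions (all names here are hypothetical):

```python
import hashlib

def checksum(state: bytes) -> str:
    # Hash the serialized state so two runs can be compared cheaply.
    return hashlib.sha256(state).hexdigest()

def run_checkpointed(step, state: bytes, segments: int, max_retries: int = 3) -> bytes:
    """Run `segments` steps of a deterministic computation.

    Each step is computed twice and committed only if both runs agree,
    so a transient hardware fault forces a redo rather than a wrong answer.
    """
    for i in range(segments):
        for _attempt in range(max_retries):
            a = step(state)
            b = step(state)                # independent recomputation
            if checksum(a) == checksum(b):
                state = a                  # commit the verified checkpoint
                break
        else:
            raise RuntimeError(f"segment {i} failed verification {max_retries} times")
    return state

# Toy deterministic step: extend the state with a digest of itself.
result = run_checkpointed(lambda s: s + hashlib.md5(s).digest(), b"seed", 10)
```

The point of the structure is that an error anywhere costs only one segment, so the good runs can be stitched into one long error-free computation.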

I know of no program that can tolerate an error rate of even 0.0001 in its conditional branch decisions.
I suspect that even the flakiest of large programs requires an error rate below 10^{−8}.
No successful programmer is innocent of the Math viewpoint.
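To see why such tiny branch-error rates are required, treat each of n branch decisions as going wrong independently with probability p; the whole run then succeeds with probability (1−p)^n. A back-of-the-envelope illustration (the figure of a million branches per run is my assumption, not from the text):

```python
def p_success(p_error: float, n_branches: int) -> float:
    # Probability that every one of n independent branch decisions is correct.
    return (1.0 - p_error) ** n_branches

# At an error rate of 1e-4, a million-branch run essentially never finishes:
print(p_success(1e-4, 10**6))   # roughly e^-100, about 3.7e-44
# At 1e-8, the same run succeeds about 99% of the time:
print(p_success(1e-8, 10**6))   # roughly 0.990
```

Even a modest program thus demands per-decision reliability far beyond what any "mostly correct" code could provide, which is the sense in which every working programmer relies on the Math viewpoint.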

Most crypto algorithms that are shipped are error-free. (The entire crypto suite is another issue, as is the correctness of crypto protocols.) There is one "leap of faith": the belief that there is no algorithm that will decrypt the data without the key in a "reasonable" time.

Much of the Keykos design has a Math style of design.