This code returns the n Lagrange interpolation coefficients, `coef[0], ..., coef[n-1]`, of the linear combination of the surviving portions that reproduces the lost portion.
`r` is the argument of the desired lost portion and `av[0], ..., av[n-1]` are the arguments of the surviving portions.

```c
void LagCoef(int n, char r, char av[n], char coef[n]);
```
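The note does not give `LagCoef`'s body. As a sketch, the standard formula over GF(256), coef[i] = product over j != i of (r - av[j]) / (av[i] - av[j]) where subtraction is xor, might look like the following. This assumes the AES reduction polynomial 0x11B; the actual code may use a different field representation.

```c
typedef unsigned char u8;

/* GF(256) multiply, shift-and-xor with reduction mod x^8+x^4+x^3+x+1
   (0x11B, an assumption; the note's field may differ) */
static u8 gf_mul(u8 a, u8 b) {
    u8 p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (u8)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
    }
    return p;
}

/* inverse via a^254 = a^-1 for nonzero a in GF(256) */
static u8 gf_inv(u8 a) {
    u8 r = 1;
    for (int i = 0; i < 254; i++) r = gf_mul(r, a);
    return r;
}

/* coef[i] = prod_{j != i} (r ^ av[j]) / (av[i] ^ av[j]);
   in characteristic 2, subtraction is xor. The av[] must be distinct. */
void LagCoef(int n, char r, char av[n], char coef[n]) {
    for (int i = 0; i < n; i++) {
        u8 num = 1, den = 1;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            num = gf_mul(num, (u8)((u8)r ^ (u8)av[j]));
            den = gf_mul(den, (u8)((u8)av[i] ^ (u8)av[j]));
        }
        coef[i] = (char)gf_mul(num, gf_inv(den));
    }
}
```

The nested loop costs O(n^2) multiplies, which is negligible since the coefficients are computed once per loss pattern and then reused across every byte of the portions.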

Routine `tc` illustrates deployment for one-byte portions.

- `a` is a random permutation of the 256 possible arguments.
- `d` is an array consisting of the n pay-load portions, each of one byte, and also the q auxiliary portions.
- `c` (two instances) are Lagrange coefficients used respectively to compute the auxiliary data and to recover lost data.
- `A` & `D` are shuffled versions of `a` & `d`.
- `pr` is the permutation that does both shuffles.
- `val(n, c, D)` performs the linear combination and thus computes the auxiliary portions, and the pay-load data upon loss of portions.
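The note names `val(n, c, D)` but does not show its body. A minimal sketch, assuming one-byte portions and a GF(256) multiply over the AES polynomial 0x11B (an assumption; the note's field representation may differ):

```c
typedef unsigned char u8;

/* GF(256) multiply, assumed reduction polynomial 0x11B */
static u8 gf_mul(u8 a, u8 b) {
    u8 p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (u8)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
    }
    return p;
}

/* linear combination: xor of coefficient-times-portion products */
char val(int n, char c[n], char D[n]) {
    u8 acc = 0;
    for (int i = 0; i < n; i++)
        acc ^= gf_mul((u8)c[i], (u8)D[i]);
    return (char)acc;
}
```

With the coefficients from `LagCoef`, the same routine serves both directions: feeding it surviving portions recovers a lost one, and feeding it pay-load portions produces an auxiliary one.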

For files split into pay-load portions plus k auxiliary portions, computing each auxiliary portion requires about one op per bit. A typical calculation on 8 bytes of a portion looks something like this:

```c
typedef unsigned long int ui;
ui ww(ui a) {
    ui const m = 0x0101010101010101L;
    return ({ui h = a   &m; h<<1^h<<2^h<<3^h<<5;})
         ^ ({ui h = a>>1&m; h^h<<1^h<<3^h<<4^h<<6;})
         ^ ({ui h = a>>2&m; h^h<<1^h<<3^h<<7;})
         ^ ({ui h = a>>3&m; h<<2^h<<3^h<<5^h<<6;})
         ^ ({ui h = a>>4&m; h<<3^h<<5^h<<6^h<<7;})
         ^ ({ui h = a>>5&m; h^h<<1^h<<2^h<<3^h<<6;})
         ^ ({ui h = a>>6&m; h^h<<1^h<<3^h<<6;})
         ^ ({ui h = a>>7&m; h<<1^h<<3^h<<5;});
}
```

The argument is eight bytes from plain hunk 0; the return value is the corresponding eight bytes of some auxiliary hunk. About 64 ops compute the contribution of one plain file to one of the aux files, and they yield 64 bits of result at a time. Notice that there are no conditional branches and virtually no critical path. This is ideal code for fast processors. The memory access pattern is also simple and cache friendly.

The following compiles and times such a pattern; most of the time goes to `arc4random`:

```shell
clang xx.c xy.c -O3 -Wmost; time ./a.out
```

The loss of a terabyte data keeper would require about k ops per bit of reconstructed portion, where k is the typical number of pay-load portions.
These ops are mostly simple shift and xor ops with little critical path dependency.
A terabyte is about 10^13 bits, and a tera-op takes less than 1000 seconds.
If the number of pay-load portions is typically 20, then the work to recompute the lost terabyte is 2×10^14 ops, or about 200,000 seconds: two or three days.
Normal file backup for most keepers should greatly diminish this cost.
