There are two main algorithms defined in the standard: the mu-law algorithm (used in North America and Japan) and the A-law algorithm (used in Europe and the rest of the world). Both are logarithmic, but the later A-law was specifically designed to be simpler for a computer to process.
The equations, for an input x normalized to [-1, 1] (the sign of x is removed before companding and reapplied afterwards), are:
mu-law:
y = ln(1 + u|x|) / ln(1 + u), with u = 255
A-law:
y = A|x| / (1 + ln A) for |x| < 1/A, where A = 87.6
y = (1 + ln(A|x|)) / (1 + ln A) for 1/A <= |x| <= 1
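As a quick illustration of the two curves, here is a minimal sketch in C; the function names and the normalized-input convention are ours, not part of any standard API:

```c
#include <math.h>
#include <stdio.h>

#define MU 255.0  /* mu-law parameter */
#define A  87.6   /* A-law parameter */

/* mu-law: y = ln(1 + u|x|) / ln(1 + u), sign restored afterwards */
double mulaw_compress(double x)
{
    double mag = log(1.0 + MU * fabs(x)) / log(1.0 + MU);
    return copysign(mag, x);
}

/* A-law: linear below 1/A, logarithmic above */
double alaw_compress(double x)
{
    double ax  = A * fabs(x);
    double mag = (ax < 1.0)
        ? ax / (1.0 + log(A))
        : (1.0 + log(ax)) / (1.0 + log(A));
    return copysign(mag, x);
}

int main(void)
{
    for (double x = -1.0; x <= 1.0; x += 0.25)
        printf("x=%5.2f  mu=%7.4f  A=%7.4f\n",
               x, mulaw_compress(x), alaw_compress(x));
    return 0;
}
```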
A-law encoding thus takes a 13-bit signed audio sample (a sign bit plus 12 magnitude bits; 16-bit sources are typically truncated first) as input and converts it to an 8-bit value as follows:
Linear Input Code | Compressed Code
------------------+----------------
s0000000wxyza     | s000wxyz
s0000001wxyza     | s001wxyz
s000001wxyzab     | s010wxyz
s00001wxyzabc     | s011wxyz
s0001wxyzabcd     | s100wxyz
s001wxyzabcde     | s101wxyz
s01wxyzabcdef     | s110wxyz
s1wxyzabcdefg     | s111wxyz
where s is the sign bit, the next three bits of the compressed code give the segment number, wxyz are the four sample bits kept from the input, and the remaining low-order bits (a, b, c, ...) are discarded.
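The bit manipulation follows directly from the table. Below is a minimal sketch in C, assuming a 13-bit signed input in the range [-4096, 4095]; the function name is ours. Note that G.711 uses s = 1 for non-negative samples and additionally inverts the even bits of the result (XOR with 0x55) before transmission, a final step this sketch omits to match the table:

```c
#include <stdint.h>

/* Compress a 13-bit signed sample to the s|segment|wxyz layout of the
   table above. (Illustrative sketch; the real G.711 codec also XORs
   the result with 0x55.) */
uint8_t alaw_encode(int16_t pcm)
{
    uint8_t sign = (pcm >= 0) ? 0x80 : 0x00;  /* s = 1 for non-negative */
    int mag = (pcm >= 0) ? pcm : -pcm;        /* 12-bit magnitude */
    if (mag > 4095)
        mag = 4095;                           /* clamp -4096 */

    /* Find the segment: the row of the table, i.e. the position of the
       leading 1 bit, counted down from bit 11. */
    int seg = 7;
    for (int mask = 0x800; seg > 0 && !(mag & mask); mask >>= 1)
        seg--;

    /* wxyz are the four bits just below the leading 1; segments 0 and 1
       share the same shift, since both keep bits 4..1. */
    int shift = (seg == 0) ? 1 : seg;
    uint8_t wxyz = (mag >> shift) & 0x0F;

    return sign | (uint8_t)(seg << 4) | wxyz;
}
```

For example, an input of 1000 (binary 001111101000) has its leading 1 in bit 9, so it falls in segment 5 and encodes as 1 101 1111 (0xDF) before the final bit inversion.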