The worst-case failure rate over bases for the Fermat primality test approaches 100% for Carmichael numbers (every base coprime to such a number is a Fermat liar). However, what is its average failure rate, say, over all 8192-bit numbers, if the base is chosen randomly (or even deterministically, e.g., 2)? This should be easy to measure exhaustively for small numbers and then extrapolate.
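The exhaustive measurement for small numbers could be sketched like this (a minimal Python sketch; the function names, the trial-division `is_prime`, and the choice of bound are my own illustrative choices, not anything fixed by the question):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def fermat_passes(n, a):
    """True iff n passes the Fermat test to base a."""
    return pow(a, n - 1, n) == 1

def false_positive_rate(limit, base=2):
    """Fraction of odd composites below `limit` that fool the
    base-`base` Fermat test (i.e., the Fermat pseudoprimes)."""
    composites = fooled = 0
    for n in range(9, limit, 2):
        if is_prime(n):
            continue
        composites += 1
        if fermat_passes(n, base):
            fooled += 1
    return fooled / composites
```

For base 2 the pseudoprimes below 1000 are 341, 561, 645, so the empirical rate at that scale is already well under 1%, and it falls as the bound grows; the extrapolation question is how fast.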
(For comparison, the worst-case failure rate of the Miller-Rabin test is bounded above by 25% over all bases, so even if you happen to hit one of those worst-case numbers, n repeated tests with independent random bases drive the error probability below 0.25^n. I do not know the average failure rate of Miller-Rabin, though it is probably very low.)
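For concreteness, one round of Miller-Rabin to a given base looks like this (a standard sketch; the function name is mine). Note that it catches Carmichael numbers such as 561, which the Fermat test to base 2 cannot:

```python
def miller_rabin(n, a):
    """One round of Miller-Rabin with witness a.
    True means 'probably prime', False means definitely composite."""
    if n % 2 == 0:
        return n == 2
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    # Square up to s - 1 times, looking for -1 mod n.
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False
```

Here `miller_rabin(561, 2)` is False even though 561 is Carmichael and passes the base-2 Fermat test; the smallest strong pseudoprime to base 2 is 2047.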
If one is randomly sampling large numbers in search of a prime, then at some size the probability of accidentally selecting a Carmichael number becomes lower than the probability of machine error over some number of repeated Fermat tests. What size is the break-even point, beyond which it is not worth the additional programming effort to implement Miller-Rabin? (This is not for the case where an adversary might deliberately feed you Carmichael numbers.)
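The prime-search scenario in question might be sketched as follows (my own illustrative sketch; the round count of 20 and the function names are arbitrary assumptions, and `random` is not cryptographically strong):

```python
import random

def fermat_probable_prime(n, rounds=20):
    """Repeated Fermat tests with independently chosen random bases.
    True means 'probably prime'."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True

def random_probable_prime(bits, rounds=20):
    """Sample random odd numbers with the top bit set until one
    passes the repeated Fermat test."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if fermat_probable_prime(n, rounds):
            return n
```

The break-even question is then whether, at 8192 bits, the chance that this loop lands on a Carmichael number and all 20 random bases happen to be liars is already below the chance of a hardware fault during the computation.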