Does the conjecture still stand that the most difficult random 3-SAT instances under the fixed clause length model (in this case, fixed at 3) have a ratio of 4.25 clauses per variable?

Bart Selman, David Mitchell, and Hector Levesque, "Generating Hard Satisfiability Problems", *Artificial Intelligence* 81 (1996).

Incidentally, Cook and Mitchell's "Finding Hard Instances of the Satisfiability Problem: A Survey" (1997) does not cite the 1996 paper above; I'm guessing this is due to the timing of submission and publication. The same Mitchell is a co-author of both.

Assuming the conjecture is true, this suggests a straightforward progressive contest: a fully specified pseudorandom number generator (including a specified seed) generates an unbounded sequence of 3-SAT problems of increasing size, all at the 4.25 ratio. The goal of everyone in the field is to solve an instance larger than any previously solved.
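A minimal sketch of such a generator, using Python's `random.Random` as a stand-in for the fully specified PRNG (the contest would pin down a specific generator; the function name and parameters here are illustrative):

```python
import random

def random_3sat(n_vars, seed, ratio=4.25):
    """Generate a random fixed-clause-length 3-SAT instance.

    Each clause is a triple of nonzero ints (DIMACS-style literals):
    |lit| is the variable index, the sign is the polarity. Each clause
    picks 3 distinct variables uniformly and negates each with
    probability 1/2.
    """
    rng = random.Random(seed)          # fully specified PRNG + seed
    n_clauses = round(ratio * n_vars)  # clauses/variables = 4.25
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), 3)  # 3 distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return clauses
```

Because the seed is part of the specification, every participant regenerates bit-identical instances, so only the instance index (i.e., the size) needs to be communicated.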

How narrow is the hard region around the conjectured ratio of 4.25? Suppose the true threshold is 4.26. For large n, does that make most random instances generated at ratio 4.25 very easy?
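Note that a fixed gap in the ratio is a clause-count gap that grows linearly with n, so the question is really how fast the hard window shrinks relative to that gap. A quick arithmetic illustration (no claim here about the window's actual width):

```python
def clause_gap(n_vars, r1=4.25, r2=4.26):
    """Clause-count difference between ratios r1 and r2 at n_vars variables."""
    return round(r2 * n_vars) - round(r1 * n_vars)

# The absolute gap grows with n even though the ratio gap is fixed at 0.01.
gaps = {n: clause_gap(n) for n in (10**3, 10**4, 10**5)}
```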

For each size, it would probably be good to have several instances, in case some of them are easy just by chance. How many? Perhaps an unlimited number? We might then want families of random number generators, e.g., via Haskell's `RandomGen` `split` function. Are there any simple filters (stopping short of "run this SAT solver for n steps") to exclude easy instances from the contest?
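One way to sketch both ideas in Python: derive independent per-instance seeds from a master generator (a crude analogue of `split`), and apply a cheap syntactic filter. The filter below is only one guess at what "cheap" could mean: it rejects instances with an unused variable or a pure literal, since either lets a solver simplify the instance for free. All names are illustrative, not a proposal for the actual contest rules.

```python
import random

def instance_seeds(master_seed, count):
    """Derive independent seeds for several instances of the same size,
    a rough analogue of repeatedly applying RandomGen's split."""
    master = random.Random(master_seed)
    return [master.getrandbits(64) for _ in range(count)]

def no_free_simplifications(clauses, n_vars):
    """Cheap syntactic filter: True iff every variable occurs both
    positively and negatively (no unused variables, no pure literals)."""
    pos, neg = set(), set()
    for clause in clauses:
        for lit in clause:
            (pos if lit > 0 else neg).add(abs(lit))
    all_vars = set(range(1, n_vars + 1))
    return pos == all_vars and neg == all_vars
```

Whether such filters help is exactly the open question: they catch only trivial shortcuts, and anything stronger starts to look like "run a solver for n steps".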

Should the sizes of the problems grow linearly or superlinearly? On the one hand, assuming P /= NP, the inherent difficulty of the problem is expected to grow exponentially, so linear growth of the problem size should be fine even with Moore's Law. On the other hand, I suspect the state of the art nowadays is somewhere between thousands and millions of variables, in which case demonstrating that your algorithm succeeds at size (say) one thousand and one, or one million and one, does not seem like much of an achievement over the previous record.
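The two schedules from the paragraph above can be compared directly: linear growth adds a fixed number of variables per round, while a geometric schedule makes each new record a fixed *percentage* larger, which addresses the "n plus one" objection. A sketch, with starting size, step, and growth factor purely illustrative:

```python
def linear_sizes(start=1000, step=50, count=10):
    """Linear schedule: each round adds a fixed number of variables."""
    return [start + step * i for i in range(count)]

def geometric_sizes(start=1000, factor=1.1, count=10):
    """Geometric schedule: each round multiplies the size by a constant,
    so beating the record always means a fixed fractional improvement."""
    sizes, n = [], float(start)
    for _ in range(count):
        sizes.append(round(n))
        n *= factor
    return sizes
```

Under the assumption of exponentially growing difficulty, even the linear schedule eventually outpaces hardware improvements; the geometric one just keeps individual records from looking incremental.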
