Imagine you had a computer whose memory accesses incur no latency: anywhere a machine instruction reads or writes a register, it could instead use register-indirect addressing into memory and still proceed at full speed: one cycle at modern processor speeds.
What kinds of computations would become possible that are not possible now? Large, fast hash tables (associative arrays), for one. But what would we do with them?
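To see why today's hardware makes such tables slow, consider that each hash lookup is an unpredictable, data-dependent memory access, so a scan over a big table in random key order is dominated by cache and DRAM latency rather than arithmetic. A minimal Python sketch (the table size and timing harness are illustrative choices, not from the original):

```python
import random
import time

N = 1_000_000  # table large enough that random probes tend to miss cache

# An associative array and a shuffled probe order.
table = {i: i * 2 for i in range(N)}
keys = list(range(N))
random.shuffle(keys)

# Sequential scan: access pattern is predictable.
t0 = time.perf_counter()
seq_sum = sum(table[i] for i in range(N))
seq_time = time.perf_counter() - t0

# Random probes: each lookup lands at an unpredictable address,
# so the memory system, not the ALU, sets the pace.
t0 = time.perf_counter()
rand_sum = sum(table[k] for k in keys)
rand_time = time.perf_counter() - t0

print(f"sequential: {seq_time:.3f}s  random: {rand_time:.3f}s")
assert seq_sum == rand_sum
```

On the imagined zero-latency machine, both loops would run at the same full speed; today, the gap between them (much larger in cache-conscious languages than in Python, where interpreter overhead masks some of it) is exactly the cost this thought experiment wishes away.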
Next imagine you had much more of this zero-latency random-access memory, say a petabyte. What then?
What if you had similarly low-latency locks, etc., for shared memory multiprocessing?
What are we missing out on, because these things are hard, if not impossible?