Deactivating interrupts and hard-coded tight loops are both definite no-nos for real-time systems. This seems to be alluding to "racing-the-beam" applications and other techniques that synchronise tightly to video timing, mainly used in the home computer demo scene. While those techniques definitely depend closely on timing, they are basically the opposite of a real-time system: they hog the CPU for quite a long time on one single task, e.g. busy-waiting until the CRT beam reaches some definite position. This does rely on low interrupt latency, but it also heavily increases latency for everything else. Notwithstanding that, there is no reason you couldn't do the same thing on an x86-based computer (maybe not on the IBM PC, again because of its architecture).