This paragraph in Wikipedia really is no candidate for the best-entry-of-the-year award. It seems to be comparing apples with oranges (or rather, it starts talking about CPUs, then shifts to PCs vs. "something else", i.e. system architecture). A comparison of interrupt latency, and the predictability thereof, doesn't make much sense at the CPU level - it must be done at the system level, as all components of a system can affect this timing and need to be taken into account for a comparison.
Wikipedia seems to have realized that and warns you with a "largely unverified" banner. So, take it with a grain of salt (or, rather, a spoonful).
While it is true that the 8259 in a PC (one contemporary to the mentioned 68k and 6510, not a modern one) adds some overhead to the interrupt latency, this is not at all a trait of the x86 CPU, but rather one of the IBM PC's architecture. You can just as easily add an interrupt controller to a 68k system, and it will add the same latency there. Many 68k computers did exactly that, BTW: the Atari ST (with its MFP) and the Amiga both had interrupt and DMA controllers that introduced similar latency and bus-contention overheads.
Intel's x86 architecture can be used for real-time applications just as well - and this has been done successfully in the past: many embedded systems were based on 80186 and 80386 CPUs - just not on the IBM PC architecture.
MS Windows didn't help much either, so it was out of the question for running real-time applications - but there were (and are) quite a few real-time OSes for x86, like QNX or VxWorks, and there are even real-time Linux derivatives for x86 CPUs.
Of the three computers mentioned, the Commodore Amiga is probably the one with the least predictable interrupt latency - its custom chips are allowed to take over and occupy the bus for a significant and relatively unpredictable amount of time (the length of a possible DMA access is the concern here).
Bringing the superscalar traits of a CPU (out-of-order execution, parallel instruction execution, branch prediction, caches, ...) into the argument - features that entered Intel's range with the Pentium - still doesn't single out the Intel CPU range as particularly bad: with the 68060, Motorola had very similar technology in its portfolio, with very similar consequences for predictability (and obviously, there is no 6502 derivative with such features).
Some more comments on the Wikipedia paragraph:
On ...anybody could use their home computer as a real-time system.:
Well, yes and no. To my knowledge, there was no real-time multitasking OS available for any of the mentioned home computers other than the Atari ST, which had OS-9 and RTOS-UH, both respectable RTOSs. So, using any other such home computer as a real-time system required you to write your OS from scratch - not a task for the faint-hearted, and not something that distinguishes these computers at all from x86-based computers: you could have done the very same thing there.
On ...possibility to deactivate other interrupts allowed for hard-coded loops with defined timing and the low interrupt latency:
Deactivating interrupts and hard-coded tight loops are both definite no-nos for real-time systems. This seems to be alluding to "racing-the-beam" applications and other techniques of tight synchronisation to video timing, mainly used in the home computer demo scene. While those techniques are definitely closely dependent on timing, they are basically the opposite of a real-time system, as they hog the CPU for quite a long time for one single task, like busy-waiting until the CRT beam reaches some definite position. This does indeed rely on low interrupt latency, but it also heavily increases latency for everything else. Notwithstanding that, there is no reason you couldn't do the same thing on an x86-based computer (maybe not on the IBM PC, again because of its architecture)