  • 5
    I can't see why low power consumption is a precondition for suitability in real-time applications. It might be one for some embedded systems, but it has nothing to do with real-time performance.
    – tofro
    Commented Oct 17, 2020 at 8:27
  • 6
    @tofro High-performance CPUs, especially modern x86 designs, typically can't go from sleep mode to full execution instantaneously, because they have to go through power gating and clock frequency settling steps which, under ideal conditions, take milliseconds. A CPU which can sit at full nominal clock speed without consuming any significant power is ready to do useful work in less than a microsecond from a standing start.
    – Chromatix
    Commented Oct 17, 2020 at 11:19
  • 1
    No, soundly engineered real-time systems do not depend on predictable execution time, only on boundable time, since they use hardware timers for the precise part. Things like reduced power modes only come into it if the system software uses them; a typical mains-powered real-time system would not, but would rather run the processor at a suitable performance tier all the time.
    – Chris Stratton
    Commented Oct 19, 2020 at 17:38
  • @ChrisStratton Verifying that the execution time will always stay within the bound over the life of the system requires that the execution time be consistent, hence predictable. It is possible to implement branch prediction and caching in ways that do not result in unpredictable variations in performance, and that is what Cortex-R does. But a CPU that lacks those features entirely is even more predictable.
    – Chromatix
    Commented Oct 20, 2020 at 0:49
  • @ChrisStratton Let's suppose you have an algorithm that needs to run in bounded time. You test its execution time many times, and it does in fact run within that bound under test. But you tested it on an unloaded system, so the branch predictors and caches remain primed for the algorithm between runs. After transferring it to a loaded system, the predictors and caches get re-primed for the other workloads, and memory bandwidth is occupied by other threads, so sometimes your algorithm takes significantly longer. That's the problem (the timing sketch after these comments illustrates the effect).
    – Chromatix
    Commented Oct 20, 2020 at 0:54
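
A minimal sketch of the pitfall described in the last comment, assuming a POSIX system with clock_gettime. The workload, the buffer sizes, and the cache-eviction loop are hypothetical stand-ins for "other loads", not anything from the thread; the point is only that a worst case measured on a quiet system can understate the worst case seen once caches and memory bandwidth are contended.

    /* Minimal sketch (not from the thread above): shows how the worst-case
       time measured for an algorithm on a quiet system can understate the
       worst case seen once caches are contended.  Between the "loaded" runs
       we walk a large buffer to evict the caches, as a crude stand-in for
       other threads competing for them.  All names and sizes are arbitrary. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define WORK_SIZE   (64 * 1024)         /* working set of the "algorithm"  */
    #define THRASH_SIZE (16 * 1024 * 1024)  /* buffer used to evict the caches */
    #define RUNS        1000

    static unsigned char work[WORK_SIZE];
    static unsigned char thrash[THRASH_SIZE];

    /* Stand-in for the bounded-time algorithm under test. */
    static unsigned algorithm(void)
    {
        unsigned sum = 0;
        for (size_t i = 0; i < WORK_SIZE; i++)
            sum += work[i];
        return sum;
    }

    /* Crude simulation of unrelated workloads touching a lot of memory. */
    static void evict_caches(void)
    {
        for (size_t i = 0; i < THRASH_SIZE; i += 64)
            thrash[i]++;
    }

    /* Time RUNS executions and return the worst observed duration in ns. */
    static double worst_case_ns(int contended)
    {
        double worst = 0.0;
        for (int r = 0; r < RUNS; r++) {
            if (contended)
                evict_caches();
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            volatile unsigned sink = algorithm();
            (void)sink;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            if (ns > worst)
                worst = ns;
        }
        return worst;
    }

    int main(void)
    {
        memset(work, 1, sizeof work);
        printf("worst case, quiet system   : %.0f ns\n", worst_case_ns(0));
        printf("worst case, caches evicted : %.0f ns\n", worst_case_ns(1));
        return 0;
    }

On a desktop-class CPU the second figure will usually be noticeably larger than the first; the exact ratio depends on the cache hierarchy and memory system.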