13 events
Oct 21, 2020 at 11:59 comment added Chris Stratton Only a small subset of hard real-time systems need such "certification". Most simply work, with the kind of unknown-hardware-bug-driven failure you are obsessing over sitting far down the list of things likely to go wrong overall; the actual failures that do happen are in things like sensors and drives, or in the surrounding world. Failing to meet timing would be a failure, but with correct software design they meet timing, end of story.
Oct 21, 2020 at 3:37 comment added Chromatix @ChrisStratton The problem is not whether the CPU is fast enough to run the task. The problem is proving that, i.e. the test and verification stage. You can have a fully working system that is impossible to certify for actual use.
Oct 20, 2020 at 12:23 comment added Chris Stratton No, complex CPUs are fully capable of actual hard real-time when correctly programmed for that goal (vs. merely real-time-ish behavior under a modified conventional scheduler), because they are well more than fast enough even in the worst case. You may not really understand what correctly programming them for this task looks like.
Oct 20, 2020 at 7:00 comment added Chromatix @ChrisStratton Well, there is a distinction between "soft realtime", in which occasional excursions beyond the task's timeslot can be tolerated, and "hard realtime", where they cannot. Simple CPUs, the Cortex-R, and similarly predictable designs are essentially mandatory for "hard realtime", which is what I interpret the original question as referring to.
Oct 20, 2020 at 3:39 comment added Chris Stratton The question isn't about the Cortex-R, though, and it isn't really about life-safety systems. It's a general question about realtime, and the reality is that the problem with modern x86 processors for that is not what you have been mistakenly claiming with arguments you shift every time they get disproven. It is not actually a problem with the CPUs at all, but simply that modern desktop platforms lack low-latency I/O, so people don't bother writing this type of software for them at all. The rare x86s that do have suitable I/O do get used in realtime roles.
Oct 20, 2020 at 3:35 comment added Chromatix @ChrisStratton In the realtime world, faster is not always better. Being able to prove that a task always completes within its timeslot is often paramount. This is especially true when safety-of-life is at stake. Such proofs are vastly easier to construct on simple CPUs than on complex ones; even if the complex ones almost always execute the task faster, there can still be rare counterexamples. Modern alternatives such as Cortex-R are specifically designed to enable constructing these timing proofs, while also offering improved performance over older, simpler designs.
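As an illustration of the kind of timing proof referred to in that comment (not taken from the thread; the task set and symbols are generic), the standard fixed-priority response-time test looks like this in LaTeX. Every C term is a worst-case execution time bound, which is precisely what unpredictable branch predictors and caches make hard to establish:

% Response-time analysis for fixed-priority scheduling (illustrative only).
% C_i = WCET of task i, T_j = period of higher-priority task j, D_i = deadline of task i.
R_i^{(0)} = C_i, \qquad
R_i^{(k+1)} = C_i + \sum_{j \in \mathrm{hp}(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j
% Task i is schedulable if this iteration converges to R_i \le D_i.

Without a trustworthy C_i for each task, the final inequality cannot be established, no matter how fast the CPU usually is.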
Oct 20, 2020 at 1:25 comment added Chris Stratton No, that's both mistaken thinking and an invalid comparison. The processors you are criticizing are orders of magnitude faster than the mentioned legacy chips even in the case where all of these adaptive speed tricks come out wrong. As long as you design the software sensibly, you still come out far ahead, and realtime-suitable SoCs have predictive tricks too. The actual hardware issue with a typical x86 is the lack of low-latency I/O; that's what the realtime-suitable SoCs fix, even while they keep branch prediction, caches, etc.
Oct 20, 2020 at 0:54 comment added Chromatix @ChrisStratton Let's suppose you have an algorithm that needs to run in bounded time. You test the execution time many times, and it does in fact run within that time under test. But you tested it on an unloaded system, so the branch predictors and caches remain primed for the algorithm between runs. After transferring to a loaded system, the predictors and caches are re-primed for these other loads, and memory bandwidth is occupied by other threads, so sometimes your algorithm takes significantly longer. That's the problem.
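A minimal sketch of the measurement trap described in that comment, assuming a POSIX environment; algorithm_under_test() and the 1 ms budget are placeholders, not anything from the thread. The same harness can report a comfortable worst case on an unloaded machine (warm caches, trained predictors) and a much larger one once competing load evicts that state:

/* Illustrative only: record the worst *observed* execution time of a routine
 * against a deadline budget. Passing on an idle machine does not prove the
 * budget holds on a loaded one. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define BUDGET_NS 1000000LL   /* hypothetical 1 ms deadline */
#define RUNS      10000

static void algorithm_under_test(void) { /* placeholder workload */ }

static int64_t elapsed_ns(struct timespec a, struct timespec b)
{
    return (int64_t)(b.tv_sec - a.tv_sec) * 1000000000LL
         + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    int64_t worst = 0;
    int overruns = 0;

    for (int i = 0; i < RUNS; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        algorithm_under_test();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        int64_t ns = elapsed_ns(t0, t1);
        if (ns > worst)
            worst = ns;
        if (ns > BUDGET_NS)
            overruns++;
    }

    /* A clean run shows only that the budget held during *this* test,
     * not that it always will under different cache/predictor state. */
    printf("worst observed: %lld ns, overruns: %d/%d\n",
           (long long)worst, overruns, RUNS);
    return 0;
}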
Oct 20, 2020 at 0:49 comment added Chromatix @ChrisStratton Verifying that the execution time will always be within the bounds over the life of the system requires that the execution time is consistent, hence predictable. It is possible to implement branch prediction and caching in ways that do not result in unpredictable variations in performance, and that is what Cortex-R does. But a CPU that lacks those features entirely is even more predictable.
Oct 19, 2020 at 17:38 comment added Chris Stratton No, soundly engineered realtime systems do not depend on predictable execution time, but only on boundable time, as they use hardware timers for the precise part. Things like reduced power modes only come into it if system software uses them; a typical mains-powered real time system would not, but would rather run the processor at a suitable performance tier all the time.
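A minimal sketch of the pattern described there, assuming a POSIX environment: the precise part of the timing comes from an absolute timer (clock_nanosleep() with TIMER_ABSTIME standing in for a hardware timer), and the software only has to keep do_work() within a bounded worst case. The 1 ms period is a placeholder:

/* Illustrative only: a periodic task released by an absolute timer.
 * Jitter inside do_work() does not accumulate as long as its worst-case
 * execution time stays within the period. */
#include <time.h>

#define PERIOD_NS 1000000L    /* assumed 1 ms period, for illustration */

static void do_work(void) { /* bounded-WCET work goes here */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        do_work();

        /* Advance the release time by exactly one period... */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }

        /* ...and wait until that absolute point in time. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}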
Oct 17, 2020 at 11:19 comment added Chromatix @tofro High-performance CPUs, especially modern x86 designs, typically can't go from sleep mode to full execution instantaneously, because they have to go through power gating and clock frequency settling steps which, under ideal conditions, take milliseconds. A CPU which can sit at full nominal clock speed without consuming any significant power is ready to do useful work in less than a microsecond from a standing start.
Oct 17, 2020 at 8:27 comment added tofro I can't see why low power consumption is a precondition for suitability in real-time applications. It might be for some embedded systems, but has nothing to do with real-time performance.
Oct 17, 2020 at 0:13 history answered Chromatix CC BY-SA 4.0