
While many of the performance capabilities of modern desktop processors intended to run multi-tasking operating systems are somewhat wasted in a real-time context, they are not actually impediments.

A soundly designed real-time operating system uses hardware timers for its precise timing needs, that is, for scheduling exactly when code runs. Desktop operating systems typically fail to deliver the timing precision the hardware is actually capable of, though there are various strategies for altering scheduling to prioritize certain tasks; at an extreme, a hard real-time scheduler can own the actual hardware and run the kernel of a conventional multi-tasking operating system as its lowest-priority task. Most of the software architecture problems inherent in using commodity desktop hardware for real-time control are thus solvable.
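To make the hardware-timer point concrete, here is a minimal sketch of a hard periodic loop on an ARM Cortex-M part using the standard CMSIS `SysTick_Config()` helper; the `device.h` header and `run_control_step()` body are placeholders for illustration, not any specific vendor's API:

```c
/* A minimal sketch, assuming a CMSIS-style Cortex-M environment;
   "device.h" stands in for the vendor's actual device header. */
#include <stdint.h>
#include "device.h"                /* hypothetical CMSIS device header */

volatile uint32_t tick_ms;         /* monotonic millisecond counter */

static void run_control_step(void) /* placeholder for the hard real-time
                                      work: must finish well inside one tick */
{
    /* e.g., sample inputs, advance the control law, drive outputs */
}

void SysTick_Handler(void)         /* the hardware invokes this every tick */
{
    tick_ms++;
    run_control_step();
}

int main(void)
{
    /* Program the SysTick hardware timer for a 1 kHz interrupt; the loop
       period is now set by silicon, not by how long any software path takes. */
    SysTick_Config(SystemCoreClock / 1000u);

    for (;;) {
        /* non-real-time background work goes here; the timer interrupt
           preempts it at exactly the programmed period regardless */
    }
}
```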

However, most modern commodity computer boards are ill-suited for real-time control for another reason: the lack of low-latency I/O, at least in any easily interfaced form. Once local-bus I/O channels like true parallel ports were replaced by things proxied through multiple levels of protocol indirection (USB being the notorious case), it became far harder for even carefully engineered code to interact with the external world in a timely fashion.

What's fundamentally different about a modern ARM SoC or MCU that is suitable for real-time applications, vs. a typical x86 desktop platform that is not, is the provision of simple I/O schemes directly from the processor, e.g., memory-mapped GPIOs, hardware timers with input/output channels, etc. Nothing says that a part with a computational architecture in the x86 tradition could not have these (and indeed, from time to time vendors try an offering...) but these tend to lose in the marketplace, both to the flash-based ARM parts suitable for small problems and to the more tablet/router-class ARM/MIPS/etc. SoCs used for larger ones.
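The "memory-mapped GPIO" point is worth seeing in code: driving a pin is a single store instruction, with no driver stack or bus transaction in between. A minimal sketch, with the base address and register offset invented for illustration (on real hardware they come from the part's reference manual):

```c
/* Hypothetical memory-mapped GPIO: one volatile store flips the pin. */
#include <stdint.h>

#define GPIO_BASE 0x40020000u                      /* made-up base address */
#define GPIO_ODR  (*(volatile uint32_t *)(GPIO_BASE + 0x14u)) /* output reg */

static inline void pin_set(unsigned pin)   { GPIO_ODR |=  (1u << pin); }
static inline void pin_clear(unsigned pin) { GPIO_ODR &= ~(1u << pin); }
```

Compare that single-cycle store with a USB-attached I/O expander, where the same pin change has to survive scheduling, a driver stack, and bus polling intervals before it reaches the wire.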

Typically what this all points to is the use of a distinct processor for real-time tasks. Some modern control-oriented SoCs even include one right on chip; in the PC world, the use of I/O coprocessors goes right back to the start and to far simpler problems, e.g., the original IBM PC had an early MCU on it simply to deal with the keyboard interface (and another in the keyboard itself), and the presence of additional processors continues to this day. In the control realm, it's common to see things like a 3D printer that has a real-time G-code interpreter running in a small flash-based MCU, with a larger platform like a PC or Raspberry Pi (or an ESP8266 talking to an Android tablet) providing the user interface and drip-feeding stored programs just ahead of need. This not only solves the I/O latency problem, it also simplifies the software by having the real-time and non-real-time code run on entirely separate computational engines, rather than having to fight over a single execution core.
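A minimal sketch of the host side of such a drip-feed, assuming the common send-a-line, wait-for-"ok" handshake used by 3D-printer firmwares such as Marlin (the device path, baud rate, and file name here are illustrative assumptions):

```c
/* Host-side drip-feed sketch: send one G-code line, block until the MCU
   acknowledges with "ok", then send the next. The MCU paces the host. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  /* assumed port */
    FILE *job = fopen("part.gcode", "r");              /* stored program */
    if (fd < 0 || job == NULL)
        return 1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetspeed(&tio, B115200);                         /* assumed baud */
    tcsetattr(fd, TCSANOW, &tio);

    char line[256], resp[256];
    while (fgets(line, sizeof line, job)) {
        write(fd, line, strlen(line));      /* hand the MCU one command */
        ssize_t n = read(fd, resp, sizeof resp - 1);   /* block for reply;
                           a robust sender would read until a full line */
        if (n <= 0)
            break;
        resp[n] = '\0';
        if (strstr(resp, "ok") == NULL)     /* no ack: stop feeding */
            break;
    }
    fclose(job);
    close(fd);
    return 0;
}
```

Note that nothing in this host loop is time-critical: if the OS stalls it for 100 ms, the MCU simply keeps executing from its own buffer, which is exactly the division of labor described above.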
