I've been investigating the possibility of hard real time software development with Python. From some brief research, I've come up with the following:
* A Real Time OS is required.
* The .NET framework apparently cannot be used in hard real time software.
* Windows is not an RTOS; however, there is a real-time Linux kernel (which I can install with Ubuntu!).
Assuming I do have an RTOS available, I might be able to use Python if:
* I turn off GC
* I don't create referential cycles
* I avoid using threads
* I use some best practices for real time software development.
* I read a good book on HRT software development.
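The first point in the list above is the easiest to try. A minimal sketch of running with the cyclic collector off (standard `gc` module; reference counting still frees objects deterministically, so only reference cycles would leak):

```python
import gc

# One final sweep before entering the time-critical loop, then disable
# the cyclic collector. With it off, CPython frees objects only via
# reference counting, which happens at predictable points.
gc.collect()
gc.disable()

assert not gc.isenabled()
```

Note this only removes the collector's unpredictable pauses; it does nothing about the other sources of nondeterminism discussed in the comments below.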
I'm hoping that I can somehow get access (in Python) to a high-resolution timer and integrate it into my fibra framework, so I can add temporal tests/constraints for individual tasklets.
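As a rough sketch of the kind of per-tasklet deadline check I have in mind (using the standard library's monotonic nanosecond counter; the `check_deadline` helper is hypothetical, not part of fibra):

```python
import time

def check_deadline(task, deadline_ns):
    """Run task() and report (elapsed_ns, met_deadline).

    time.perf_counter_ns() is monotonic and high-resolution, though its
    actual granularity depends on the OS and hardware.
    """
    start = time.perf_counter_ns()
    task()
    elapsed = time.perf_counter_ns() - start
    return elapsed, elapsed <= deadline_ns

# A trivial tasklet with a 1 ms deadline.
elapsed, on_time = check_deadline(lambda: sum(range(1000)), 1_000_000)
```

This only *measures* lateness after the fact, of course; it cannot *guarantee* a deadline is met.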
Has anyone done this sort of thing with Python before? I'm looking for advice. I'm willing to consider other languages and platforms if they fit.
10 comments:
I know next to nothing about RT systems, but I would think that Python's heavy reliance on hash tables would explode the calculated time constraints. For example, some operation involving N hash table lookups would take N units of time in the expected case, but N*M time in the worst case over the size of the hash table(s) [assuming linear probing; less deterministic times if the hash table needs to grow and/or a chaining method is used].
So, how much determinism do you have? Theoretically you can constrain it to some worst case, but your runtime measurements would be far, far lower than this, as a hash table almost never degenerates into the worst case in practice.
This would result in the user requesting a task to be completed in 5ms, and having that task rejected, as the runtime could only guarantee that it can be completed in 5 seconds (though in practice it might likely take less than 1ms).
A quick Google search revealed nothing about hash tables in RT systems; perhaps they are assumed to have O(1) complexity anyway, and dash the possible-yet-unlikely consequences.
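The degenerate case described above is easy to force in CPython: give every key the same hash and each insert must compare against every existing entry. A small timing sketch (the timings are illustrative, not a benchmark):

```python
import time

class Colliding:
    """All instances hash to the same bucket, forcing the dict into its
    worst case: each insert probes past every entry already stored."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 0                 # pathological: constant hash
    def __eq__(self, other):
        return self.n == other.n

def build_dict(n, make_key):
    start = time.perf_counter()
    d = {}
    for i in range(n):
        d[make_key(i)] = i
    return time.perf_counter() - start

worst = build_dict(2000, Colliding)   # quadratic total work
typical = build_dict(2000, int)       # well-spread int hashes: near-linear
```

In practice `worst` comes out orders of magnitude larger than `typical`, which is exactly the gap between expected-case and worst-case bounds that a hard real-time analysis would have to assume.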
I'd also be interested if you have any ideas on accurately calculating the expected realtime limit, given the complexity of the Python VM.
I would think twice before choosing the strategy of hard real time. Often people forget there is a whole 'nother category of soft real-time applications. 99% of all real-time is in the soft category. Very few things need hard real-time.
If you do need hard real-time, then no, Python will never hit what you need. Soft real-time? I've done some things (like driving LCD display panels, etc) in Python.
Chris: I've thought more than twice, and yes, hard real time is absolutely needed.
Why do you think that Python will never hit what I need?
Apart from Alex's comment about hash tables, the only non-deterministic behavior I know of in C-Python is the GC, which can be turned off. Is there more I should be considering?
Is there more I should be considering?
Yes.
For example, PREEMPT_RT-based kernels like Ubuntu's are for soft real-time, not hard. Furthermore, all PREEMPT_RT does, ultimately, is minimize interrupt latency--which means kernel code can be real-time, not userspace.
Generally, RTOSes don't offer real-time features to userspace, assuming they even support a userspace.
Notwithstanding that, another problem is that CPython calls libc's malloc() all the time, which is very, very nondeterministic. That's just the first one I thought of.
Is hard real time Python possible?
Well...theoretically, I suppose you could write a Python implementation that ran close enough to the metal to be hard real time. You'd probably have to do extremely aggressive static analysis, maybe even restrict yourself to a subset of language features ala PyPy's RPython. But I can imagine it being possible.
Certainly not with CPython or any other existing implementation, however.
This would be an absolutely huge project. Linus and company won't even let C++ into Linux because of possible unbounded overhead, and mainline Linux isn't even soft real-time, much less hard.
That said, after three years in the embedded industry, I've noticed one thing: most things people insist are hard real-time are, actually, not. Often, they require guaranteed throughput, not latency, and when hard real-time is required, the alleged hard real-time solution is actually soft real-time with a low probability of failure. But nobody cares, because it works anyway, and was a hell of a lot easier to write and debug.
Sorry for the long post:
Timers:
With a patched linux kernel I believe nanosleep and gettimeofday will be as accurate as you can get. You should be able to use ctypes to get at nanosleep from python.
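Following that suggestion, here is one way to reach `nanosleep` from Python via ctypes (a sketch assuming a POSIX libc; the achieved resolution still depends entirely on the kernel):

```python
import ctypes
import ctypes.util

# Load the C library; assumes a POSIX system providing nanosleep(2).
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long),
                ("tv_nsec", ctypes.c_long)]

def nanosleep(seconds, nanoseconds):
    """Sleep via libc's nanosleep; with a high-resolution-timer or
    RT-patched kernel this can be far finer than time.sleep()."""
    req = Timespec(seconds, nanoseconds)
    rem = Timespec()
    if libc.nanosleep(ctypes.byref(req), ctypes.byref(rem)) != 0:
        raise OSError(ctypes.get_errno(), "nanosleep was interrupted")

nanosleep(0, 500_000)   # request a 0.5 ms sleep
```

On a stock kernel the actual sleep may be rounded up to the timer tick; the patched kernels mentioned above are what make the nanosecond argument meaningful.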
My RT Understanding:
From the way I understand it, any system call that could cause your process or thread to be swapped off the CPU in favor of another breaks hard real time. In the case of Python, that means you have to find a way for Python's memory allocator to preallocate all the memory it needs, so it doesn't have to ask the OS for more. You also have to make sure no extensions use malloc/free during their operation. As the above poster said, you also have to avoid most, if not all, system calls.
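At the application level, the preallocation idea can be approximated with a buffer pool built entirely before the time-critical loop starts (the class and method names here are mine, not from any RT library):

```python
class BufferPool:
    """Sketch of preallocation: create all buffers during initialization
    so the time-critical loop reuses them instead of allocating."""
    def __init__(self, count, size):
        # All allocation happens here, once, before deadlines are active.
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self):
        return self._free.pop()        # reuse an existing buffer

    def release(self, buf):
        self._free.append(buf)

pool = BufferPool(count=8, size=4096)  # done once, during setup
buf = pool.acquire()
buf[0] = 0xFF                          # in-place mutation: no allocation
pool.release(buf)
```

The caveat is that CPython itself still allocates internally (frames, ints, and so on), so this reduces allocator traffic rather than eliminating it.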
Do you really need hard real time?:
Exactly what update rate are you talking about? At the lab where I work, we use systems with a Linux kernel and the real-time patch set (both Ubuntu and Timesys) to run safety-critical systems at up to a 250 Hz update rate. We have had experience with stability as high as 1000 Hz. We characterize this as soft realtime because of some of the system call issues I mentioned above.
My Solution:
In a project I am working on (http://ram.umd.edu) we use soft real time C++ threads with a normal python thread pulling all the strings. This leads to lots of C++ code, but that can probably be minimized with more planning.
For easy interfacing you could go with a small C module (ctypes wrapped) which operates in real time, and talks over something like sockets or a simple message queue.
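The wire protocol for such a split can be very simple. A sketch of a length-prefixed command channel, with both ends in Python for illustration (in the real split, the server side would be the small real-time C module):

```python
import socket
import struct

# Toy protocol: 4-byte big-endian length prefix, then the payload.

def send_cmd(sock, payload):
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_cmd(sock):
    size = struct.unpack(">I", _recv_exact(sock, 4))[0]
    return _recv_exact(sock, size)

def _recv_exact(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        data += chunk
    return data

a, b = socket.socketpair()
send_cmd(a, b"SET_SPEED 120")
assert recv_cmd(b) == b"SET_SPEED 120"
```

The soft-real-time Python side only composes and logs commands; the hard deadlines live entirely on the other end of the socket.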
(The last poster beat me to it)
Simon asked me to comment, but I have to start off asking the same question others have basically asked: do you really need hard realtime?
I'll expand on the question to get a more useful answer: given that "hard realtime" by definition means that the results are not correct if time limits are exceeded, what is the impact of failure in your case? Will somebody die? Will costly machines or products be destroyed? Will something just jam, requiring human intervention and possibly a little maintenance? Or will some mass-produced product simply have a slightly higher defect rate? In other words, what level of criticality is involved?
To put it another way, anyone claiming to have hard realtime constraints should be prepared to justify it, and should understand that "hard" is not an absolute in a practical sense, just an indication that, unlike "soft", somebody thinks "late" means "unacceptable" as opposed to just "undesirable". But that means "hard" and "soft" express personal values, not absolutes. How "hard" is your realtime?
A second dimension that is more obvious than criticality but which also matters is at what sort of timescale is your realtime hard?
The reason this all matters is that Python is not suited to a realtime system far along either dimension (high criticality or very low latency), where for example someone could die or where it must respond in a few milliseconds. It might be suitable *in theory*, but it would be impractical to achieve the level of certainty you require. Others have already pointed out the major areas involved, but to truly claim a hard realtime Python, you'd need to have analyzed every bytecode, many flows of control, the standard library code, etc., at minute levels of detail and in a dozen ways.
Given that it's impractical to do the analysis that would be required to truly give hard realtime guarantees, consider dividing the system into the truly hard part (if any), and the softer parts. Do everything soft with Python, and interface to a dedicated embedded controller with hard realtime performance. This has been our standard approach (at Engenuity Corporation) and has worked well so far. And we haven't yet spent the millions of dollars that would be required to show at what timescale Python can be considered "hard", if it can, or to modify it so it could be.
anyone claiming to have hard realtime constraints should be prepared to justify it
This application involves large, mobile machinery, working around humans, and possibly other machinery. A failure could result in destroyed machines or humans. I'm not directly involved in this project yet; I'm just interested to know what is involved, and to see if I could contribute meaningfully to such a project. This RT stuff is completely new to me.
These comments have been very enlightening. Thanks for the feedback chaps!
Yep, Simon, that's pretty darn hard realtime, as hardness goes. :)
I won't say stay away from Python in the project, but don't consider using it for the safety-critical parts.
Try for the split approach, ensuring nobody talks someone into building the entire thing (UI and all) in, say, C on a custom controller, or even QNX unless your UI is really part of the hard realtime areas.
Find ways to use Python for automated tests, or to automatically generate other parts of the system, or just for prototyping or modelling parts of it.
Just please don't anyone write "if self.bigRedButtonPressed(): self.emergencyShutdown()" for your robots.
I have to agree with Peter on this. You should split the application into a life-critical component, which would likely be implemented on top of something like QNX or VxWorks. The other parts, such as user interface (except where the UI might be looped into the life-critical components), debug, testing, etc., can be written in a non real-time environment.
When dealing in truly hard real-time applications (like medical control devices), this is the strategy that is often chosen. Often the UI is written in Windows, but the "back end" is all VERY carefully controlled code, because every microsecond matters. Even where things like Windows, Python, and Linux aren't used, it's normal to split your application into hard and soft components. All major RTOSes support this concept, and discourage you from using more hard real-time guarantees than you absolutely need, as it introduces more hysteresis into the system.
As others have already said, it depends on what "hard realtime" means and what the involved processing is. In principle, doing daily meteorological forecasts is very hard realtime, because you get into serious trouble if you don't have your forecast at eight o'clock pm.
I've used Python on Alpha Linux with Andrew Morton's low latency patches some eight years ago, quite successfully, for evaluation of audio processing algorithms. It used the mlockall() syscall and real-time scheduling, and it did all the memory allocation needed in the main loop beforehand (remember that garbage collection is only invoked when no reference points to an object any more). So, glibc memory allocation wasn't actually causing any problem. Hash tables weren't used apart from namespaces with not too many entries. Control was done with a socket interface and a simple protocol interpreting commands from another computer. The ATA disk was successfully changed to a SCSI disk after running into trouble with it. The application demanded worst-case latencies of about 5 to 8 ms (I think with a machine of today you would get well under 1 ms). Not meeting the latency requirement would have resulted in unacceptable noise and dropouts in the audio output. In fact, one could fire up a web browser and use it on the same machine without any dropouts. Perhaps worth noting that it wasn't an Intel CPU but an Alpha processor.
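The mlockall() and real-time scheduling calls mentioned here can be reached from Python via ctypes. A sketch, assuming Linux constant values from <sys/mman.h> and <sched.h>; both calls usually need root or raised rlimits, so the sketch reports success rather than assuming it:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

MCL_CURRENT, MCL_FUTURE = 1, 2     # values from Linux <sys/mman.h>
SCHED_FIFO = 1                     # from <sched.h>

class SchedParam(ctypes.Structure):
    _fields_ = [("sched_priority", ctypes.c_int)]

def go_realtime(priority=50):
    """Try to pin all pages (no page faults mid-loop) and switch the
    process to the SCHED_FIFO real-time scheduling class."""
    locked = libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0
    try:
        param = SchedParam(priority)
        scheduled = libc.sched_setscheduler(
            0, SCHED_FIFO, ctypes.byref(param)) == 0
    except AttributeError:         # symbol absent on non-Linux libc
        scheduled = False
    return locked, scheduled

locked, scheduled = go_realtime()
```

Without these two calls, even a perfectly written main loop can be stalled by a page fault or preempted by an ordinary process, so they are the usual first step in any soft-real-time Python setup on Linux.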
There also exist latency measurements for Linux, and usually worst-case latencies are well below 0.5 ms. Certain things mess this up, for example using ATA disks or the floppy device. Of course, I wouldn't at all use this for life-critical systems which have to respond in the microsecond range. My personal impression is that, when avoiding gc, Python runs quite deterministically in the ms range, because it is written in C.