Originally Posted by spiderd
This is a very general skeleton for a single-threaded simulation program. The pseudo-code for my little time-driven example has a BIG LOOP whose counter is the number of seconds since the start of the simulation.
Each teller, when presented with a new customer, generates and keeps track of a "random" end time, so that on each successive pass through the loop the server can check whether that much simulated time has elapsed since service started.
Nothing in the loop blocks execution, and, for text output, the simulation does not run in real time (so there is no sleeping); it runs as fast as it can and reports the changes that result from the simulated time elapsed since each new event was scheduled.
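Here is roughly what that skeleton might look like, sketched in Python. The function name, parameters, and service-time range are all made up for illustration; the point is the structure: one pass through the BIG LOOP per simulated second, with each teller just comparing the current time against the end time it recorded.

```python
import random

def simulate(total_seconds, num_tellers, arrival_prob, seed=0):
    """Time-driven sketch: one pass through the BIG LOOP per simulated second."""
    rng = random.Random(seed)
    waiting = []                      # customers queued up (by arrival time)
    busy_until = [0] * num_tellers    # simulated second at which each teller frees up
    served = 0
    for now in range(total_seconds):  # the BIG LOOP: `now` is seconds elapsed
        if rng.random() < arrival_prob:
            waiting.append(now)       # a new customer walks in this second
        for t in range(num_tellers):
            # Teller t checks: has my "random" end time passed, and is anyone waiting?
            if now >= busy_until[t] and waiting:
                waiting.pop(0)
                # Generate and keep track of a new "random" end time; later
                # passes through the loop just compare `now` against it.
                busy_until[t] = now + rng.randint(1, 5)
                served += 1
    return served, len(waiting)
```

Nothing here blocks or sleeps, so a run covering thousands of simulated seconds finishes almost instantly.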
If you want to create some kind of "real-time" animation based on this, then just put a statement at the end of the BIG LOOP that makes the program sleep for a second (or whatever time granularity you are simulating). Then, for each elapsed (real-time) second the program makes a trip through the BIG LOOP.
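In sketch form, that change is a one-liner at the bottom of the loop. The function and tick parameter below are made up; in a real one-second-granularity animation you would sleep for a full second per pass, not the scaled-down value used here for demonstration.

```python
import time

def run_realtime(num_ticks, tick_seconds=1.0):
    """Pace the BIG LOOP to real time: one pass per `tick_seconds` of wall clock."""
    for now in range(num_ticks):
        # ... process arrivals and tellers for simulated second `now` ...
        time.sleep(tick_seconds)  # the added statement at the end of the BIG LOOP
    return num_ticks
```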
A more sophisticated program along these lines might have a separate thread for each server, plus a dispatcher thread that puts each new arrival into one of the servers' queues. Each server could then sleep for some number of seconds when it encounters a new customer at the head of its queue, while the others merrily keep doing their thing. For some people that is conceptually simpler: it is easier to visualize actual servers taking different amounts of time to process their customers and to relate that to a multi-threaded simulation. Implementing the details could have significant educational value.
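A minimal sketch of that threaded arrangement, assuming a shortest-queue dispatch policy and a scaled-down sleep so it runs quickly (both are my choices, not requirements of the design):

```python
import queue
import threading
import time

def teller(name, q, results):
    """One server thread: sleep for each customer's service time, then record it."""
    while True:
        customer = q.get()
        if customer is None:                   # sentinel: the shift is over
            break
        time.sleep(customer["service_time"])   # "serving" = sleeping for that long
        results.append((name, customer["id"]))

def run(customers, num_tellers, scale=0.01):
    """Dispatcher: hand each (id, service_seconds) arrival to the shortest queue."""
    qs = [queue.Queue() for _ in range(num_tellers)]
    results = []                               # list.append is thread-safe in CPython
    threads = [threading.Thread(target=teller, args=(i, qs[i], results))
               for i in range(num_tellers)]
    for t in threads:
        t.start()
    for cid, svc in customers:
        shortest = min(qs, key=lambda q: q.qsize())
        shortest.put({"id": cid, "service_time": svc * scale})
    for q in qs:                               # tell every teller to go home
        q.put(None)
    for t in threads:
        t.join()
    return results
```

Debugging even this toy version (lost sentinels, races on shared state, threads that never join) is where the sobering experience mentioned below tends to come from.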
I mean, learning about interaction and communication among several threads in a program, and particularly debugging such a beast, can be a sobering experience, but it might be interesting enough to make the exploration worthwhile. It might serve you well in your future development. (A little queuing theory pun there. Get it? Serve you... No? Oh, well...)
Since the servers are not actually doing anything except removing things from queues and checking whether time is up, there is no particular operational advantage (performance or otherwise) to multi-threading the simulation, even if your processor has a lot of cores that could take advantage of it.
On the other hand...
A simple, single BIG LOOP like my example (no sleeping or other blocking by individual servers) illustrates a method that many embedded systems use to implement real-time programs without the benefit of multi-threading or a multi-tasking operating system. The clock keeps ticking, relentlessly, and at each tick the program determines whether some event needs processing.
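That "super loop" pattern can be sketched in a few lines. The task names and periods below are invented; the essential idea is that nothing blocks, and every tick each task is asked "are you due yet?"

```python
def super_loop(tasks, total_ticks):
    """Embedded-style super loop: each clock tick, run every task that is due.

    `tasks` is a list of (name, period_in_ticks) pairs; a task is due
    whenever the tick count is a multiple of its period.
    """
    fired = []
    for tick in range(total_ticks):   # the clock keeps on ticking, relentlessly
        for name, period in tasks:
            if tick % period == 0:    # does this event need processing now?
                fired.append((tick, name))
    return fired
```

On real hardware the tick would come from a timer interrupt rather than a `for` loop, but the dispatch logic is the same.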
There are, of course, other ways...