# Multi-queue simulation theory

• July 21st, 2012, 08:40 PM
Herah
Multi-queue simulation theory
I am currently working on a program involving queues for my data structures class.

The premise is very common among college assignments I'm told, but I am having trouble puzzling it out. Here are the basics:

Code needs to handle 1-9 tellers based on input.
Customers come in and join the shortest line, so there are as many lines as tellers, not a single queue.
Event based - customer arrival time is random, time spent at teller is random.

So, I have my teller queues set up, and am handling the random arrival of customers by predetermining arrival times and placing them into an array. This may not be the best way to do it, but it's where I am right now. What I am not sure how to do is keep track of all the teller queues, with their random customer service times, and keep moving the clock forward without missing anyone. Not sure if I am explaining that right...

Let's say customers arrive every 2-3 seconds while the tellers concurrently take 3-8 seconds to service each customer.

If there are 3 tellers, how do I make sure that I am processing arrival and departure events in the right order for all of them without missing anything?
Should I predetermine customer service times like I did arrivals?

Maybe this is simple, I just can't figure out priority. I don't want code, just help with understanding.
• July 22nd, 2012, 01:20 AM
jps
Re: Multi-queue simulation theory
If I understand your assignment, I would say a multi-threaded set of tellers with a shared database. Each teller can operate on its own, and all share the same data set.
• July 22nd, 2012, 03:06 PM
Zaphod_b
Re: Multi-queue simulation theory
I might set it up something like this:

Time-based simulation considerations for a multiple-queue, multiple-server system:

There is a simulation timer variable, t. Start with t = 0, and there is a big loop that increments t at the end of the loop. (For convenience, assume t is an integer, and represents seconds from the time the bank opened.)

There is a variable, arrivalTime, that keeps track of the next scheduled customer arrival time. When the current simulation time reaches arrivalTime, an entry is made in the shortest teller queue with the current time as the arrival time.

Then a new customer arrival is scheduled at current time plus some value from a random number generator.

There are as many customer queues as there are tellers.

Each teller queue has entries that show arrival times of customers.
(So that statistics such as average wait time can be calculated.)

Each teller has a variable, endTime, that keeps track of whether that teller is busy. If the teller is not busy, endTime = 0 for that teller. It stays at that value until the teller sees another entry in its queue.

When a teller begins with a new customer (removes an entry from its queue), the teller schedules a new value of endTime according to some random number generator.

Code :

```
For each teller, create an empty queue and set each teller's endTime to zero.
Set arrivalTime equal to zero.

for (t = 0; t < timeLimit; t++)                         BIG LOOP

    IF t is greater than or equal to arrivalTime THEN
        Find shortest teller queue
        Put an entry in that teller queue with arrival time = t
        Schedule a new arrivalTime equal to t + some random number
    END IF

    Make a loop that looks at all teller queues         LOOP THROUGH TELLERS
        IF t is greater than this teller's endTime THEN
            IF this queue is empty THEN
                Set this teller's endTime equal to zero
            ELSE
                Remove that queue entry.
                Wait time for this customer was current time, t, minus
                arrival time as shown in the queue entry. Use this however
                you have it set up to keep statistics.
                Set this teller's endTime equal to current time, t,
                plus some random number
            END IF
        END IF
    END LOOP THROUGH TELLERS

END BIG LOOP
```

Implementation details might re-arrange some of the logic (slightly) to attempt to get better performance, but the important part is to understand the problem in some context and get something to work. Implement first, optimize later (if appropriate).
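As a concrete sketch, the pseudo-code above might look something like this in Java. The parameter values (3 tellers, 2-3 second arrivals, 3-8 second service) are taken from the thread, not from any required interface, and I use >= rather than > for the endTime check so an idle teller can pick up a customer at t = 0:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.Random;

public class BankSim {

    // Returns {customersServed, totalWaitSeconds} for one run.
    public static int[] simulate(int numTellers, int timeLimit, long seed) {
        Random rng = new Random(seed);
        @SuppressWarnings("unchecked")
        Queue<Integer>[] queues = new Queue[numTellers]; // arrival times per teller
        for (int i = 0; i < numTellers; i++) queues[i] = new ArrayDeque<>();
        int[] endTime = new int[numTellers];             // 0 means idle

        int arrivalTime = 0;
        int served = 0, totalWait = 0;

        for (int t = 0; t < timeLimit; t++) {            // BIG LOOP
            if (t >= arrivalTime) {
                int shortest = 0;                        // find the shortest line
                for (int i = 1; i < numTellers; i++)
                    if (queues[i].size() < queues[shortest].size()) shortest = i;
                queues[shortest].add(t);                 // store the arrival time
                arrivalTime = t + 2 + rng.nextInt(2);    // next arrival in 2-3 s
            }
            for (int i = 0; i < numTellers; i++) {
                if (t >= endTime[i]) {                   // teller i is free
                    if (queues[i].isEmpty()) {
                        endTime[i] = 0;
                    } else {
                        totalWait += t - queues[i].remove(); // wait statistics
                        served++;
                        endTime[i] = t + 3 + rng.nextInt(6); // service takes 3-8 s
                    }
                }
            }
        }
        return new int[] { served, totalWait };
    }

    public static void main(String[] args) {
        int[] r = simulate(3, 100, 42);
        System.out.printf("served %d customers, total wait %d s%n", r[0], r[1]);
    }
}
```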

Random functions:
1. Exponential distribution random number generator with 1/lambda = average interval between customer arrivals. Since the exponential distribution is continuous, for discrete time simulations, sometimes a Poisson distribution is assumed, where the value is the number of customers arriving in a given interval.

2. Uniform distribution random number generator with mu = average time for service by a teller. (Could be some other distribution)
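Sketches of the two generators described above; the lambda value and the 3-8 second bounds in the usage are example values, not part of the assignment:

```java
import java.util.Random;

public class Deviates {
    static final Random RNG = new Random();

    // Exponential inter-arrival time with mean 1/lambda, via inverse transform.
    static double expDeviate(double lambda) {
        return -Math.log(1.0 - RNG.nextDouble()) / lambda;
    }

    // Uniform service time on [min, max] whole seconds.
    static int uniformService(int min, int max) {
        return min + RNG.nextInt(max - min + 1);
    }

    public static void main(String[] args) {
        System.out.println("next arrival in " + expDeviate(0.4) + " s");
        System.out.println("service takes " + uniformService(3, 8) + " s");
    }
}
```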

If you want to calculate arrival times and service times ahead of time and put them in arrays instead of generating random deviates on the fly, that would work too, I guess.

Alternative simulation implementation: Have an event list (sorted according to time, smallest time value at the top of the list). There are two types of events: Arrivals and Service. New arrivals are generated the same way as above, and new Service events are scheduled whenever a teller begins service. (Arrival events have an arrival time; service events have a server number as well as a completion time.) Now, you don't actually have a timer variable that is incremented each time through a loop.

Your loop simply sets simulation time to the event at the top of the event list and it services all events that have that time. Then after removing the event or events with that time, you just go back to the top of the loop and you advance simulation time to the next event.

This would be a "real" event-driven (rather than time-driven) simulator.
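A rough sketch of that event-driven version, using Java's PriorityQueue as the sorted event list. The Event record, the encoding of arrivals as teller = -1, and the 2-3 / 3-8 second ranges are my own illustration choices:

```java
import java.util.ArrayDeque;
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.Random;

public class EventSim {
    // teller < 0 marks an arrival; otherwise a service completion for that teller
    record Event(double time, int teller) implements Comparable<Event> {
        public int compareTo(Event o) { return Double.compare(time, o.time); }
    }

    public static int simulate(long seed, double horizon) {
        Random rng = new Random(seed);
        int nTellers = 3;
        @SuppressWarnings("unchecked")
        Queue<Double>[] queues = new Queue[nTellers];  // arrival times per teller
        for (int i = 0; i < nTellers; i++) queues[i] = new ArrayDeque<>();
        boolean[] busy = new boolean[nTellers];
        PriorityQueue<Event> events = new PriorityQueue<>();
        events.add(new Event(0.0, -1));                // first arrival
        int served = 0;

        while (!events.isEmpty()) {
            Event e = events.poll();                   // jump straight to next event
            double now = e.time();
            if (now > horizon) break;
            if (e.teller() < 0) {                      // arrival: join shortest line
                int shortest = 0;
                for (int i = 1; i < nTellers; i++)
                    if (queues[i].size() < queues[shortest].size()) shortest = i;
                queues[shortest].add(now);
                events.add(new Event(now + 2 + rng.nextDouble(), -1)); // next arrival, 2-3 s
                startServiceIfIdle(shortest, now, queues, busy, events, rng);
            } else {                                   // service completion
                busy[e.teller()] = false;
                served++;
                startServiceIfIdle(e.teller(), now, queues, busy, events, rng);
            }
        }
        return served;
    }

    static void startServiceIfIdle(int i, double now, Queue<Double>[] queues,
                                   boolean[] busy, PriorityQueue<Event> events,
                                   Random rng) {
        if (!busy[i] && !queues[i].isEmpty()) {
            queues[i].remove();                        // customer leaves the line
            busy[i] = true;
            events.add(new Event(now + 3 + 5 * rng.nextDouble(), i)); // 3-8 s service
        }
    }

    public static void main(String[] args) {
        System.out.println("served: " + simulate(1, 100.0));
    }
}
```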

It's only a little more complicated to handle different event types, but it's probably not worth it for small-potatoes stuff like a bank with nine tellers and events happening on one-second boundaries.

On the other hand, simulating systems with thousands or hundreds of thousands (or even millions) of components with nanosecond or picosecond time resolution, you wouldn't want to spend a lot of time spinning through loops where nothing was happening...

Cheers!

Z
• April 2nd, 2013, 04:18 PM
spiderd
Re: Multi-queue simulation theory
Hi, I'm working on something similar. Just to ask: how did you implement the 3-8 seconds to service each customer? For that I just used System.out.print and the sleep method. How did you do it?
• April 4th, 2013, 10:38 PM
Zaphod_b
Re: Multi-queue simulation theory
Quote:

Originally Posted by spiderd
...how did you implement the 3-8 seconds to service each customer...

This is a very general skeleton of implementation of a single-thread simulation program. The pseudo-code for my little time-driven example has a BIG LOOP whose counter value is the number of seconds from the start of the simulation.

Each teller, when presented with a new customer, generates and keeps track of a "random" end time, so that at each successive pass through the loop the server can check whether that much time has elapsed since service started.

There is nothing in the loop that blocks execution, and, for text output, the simulation does not run in real time (so there is no sleeping); it runs as fast as it can and reports changes that are encountered as a result of the simulated time elapsed since scheduling each new event.

If you want to create some kind of "real-time" animation based on this, then just put a statement at the end of the BIG LOOP that makes the program sleep for a second (or whatever time granularity you are simulating). Then, for each elapsed (real-time) second the program makes a trip through the BIG LOOP.
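A minimal sketch of that pacing, with the tick processing elided; the millisPerTick parameter is my own addition so the pace can be adjusted (1000 gives the one-real-second-per-simulated-second animation described above):

```java
public class PacedLoop {
    // Runs the BIG LOOP paced in real time: millisPerTick wall-clock
    // milliseconds per simulated second. Returns the ticks completed.
    public static int run(int timeLimit, long millisPerTick) {
        int ticks = 0;
        for (int t = 0; t < timeLimit; t++) {
            // ... process arrivals and teller queues for tick t ...
            ticks++;
            try {
                Thread.sleep(millisPerTick);           // pace the simulation
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();    // restore the interrupt flag
                break;
            }
        }
        return ticks;
    }

    public static void main(String[] args) {
        run(3, 1000);  // three simulated seconds at real-time pace
    }
}
```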

A more sophisticated time-based program like this might have a different thread for each server and a dispatcher thread that puts each new arrival into one of the servers' queues. Then each server could sleep for a number of seconds when it encounters a new customer at the head of its queue while the others merrily keep doing their things. For some people that is conceptually simpler: it is easier to visualize actual servers taking different times to process their customers and relate this to a multi-threaded simulation. Implementing the details could have significant educational value.

I mean, learning about interaction and communication among several threads in a program, and, particularly debugging such a beast, can be a sobering experience, but might be interesting enough to make such exploration worthwhile to you. It might serve you well in your future development. (A little queuing theory pun here. Get it? Serve you. No? Oh, well...)

Anyhow...

Since the servers are not actually doing anything except removing things from queues, and seeing if time is up, there is no particular operational advantage (performance or otherwise) to multi-threading the simulation, even if you have a lot of cores in your processor that can take advantage of multi-threading.

On the other hand...

A simple, single BIG LOOP like my example (no sleeping or other program blocking by individual servers) illustrates a method that many embedded systems use to implement real-time programs without benefit of multi-threading and without multi-tasking operating systems. The clock keeps on ticking, relentlessly, and at each tick of the clock the program determines whether some event needs processing.

There are, of course, other ways...

Cheers!

Z