EENG 383

Real Time Operating Systems

We are now entering the real-time operating system portion of the course. From this point on we will be using Simon's text, so each lecture will be accompanied by a reading assignment.

Interrupt background

To start our exploration of real-time operating systems we need some context on interrupts. The author poses some good background questions on this topic.

Shared Data Problem

As soon as we use interrupts to share information between an ISR and main, we can create problems. These problems arise because we don't want the ISR to perform all the work: generally, the ISR manipulates some I/O device and passes the work of actually processing the sensor's information off to main.

The problems that can arise from sharing data between an ISR and main are shown in the code of Figure 4.4. In this example we monitor the temperature from two sensors and set off an alarm if the readings differ (we assume this condition indicates a sensor problem that someone needs to know about). So where is the error? It arises out of a sequence of events:
  1. The temperature is changing.
  2. We have just finished executing the "iTemp0..." line in main
  3. The interrupt occurs and updates both values of iTemp[]
The alarm will sound even though there was no real error. Even worse, this error will not be repeatable; it is an instance of a so-called Heisenbug. How about trying the fix of Figure 4.5?

Well, that really doesn't work either. So how do we solve this problem? Go to the source: the interrupt is the real culprit. If it occurs while we are making the comparison, we can get a problem. The solution is to place a "critical section" around the comparison, during which interrupts are not allowed to occur (Figure 4.7). A sketch of this fix appears below.
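
Since the figures are not reproduced in these notes, here is a minimal sketch along the lines of Figures 4.4-4.7. The names (iTemp[], ReadTempSensor(), SoundAlarm(), DISABLE_INTERRUPTS(), ENABLE_INTERRUPTS()) are placeholders, not necessarily those used in the text.

static volatile int8 iTemp[2];     // shared between the ISR and main

void ReadTemperaturesISR() {
    // The ISR does only the urgent work: capture both sensor readings.
    iTemp[0] = ReadTempSensor(0);
    iTemp[1] = ReadTempSensor(1);
} // end ISR

void main() {
    int8 iTemp0, iTemp1;
    while(1) {
        // Critical section: the ISR cannot update the array between
        // the two copies, so the comparison always sees a matched pair.
        DISABLE_INTERRUPTS();
        iTemp0 = iTemp[0];
        iTemp1 = iTemp[1];
        ENABLE_INTERRUPTS();
        if (iTemp0 != iTemp1) SoundAlarm();
}   } // end main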

Shared Data Problem

Last week we started talking about the shared data problem: logical errors caused when independent programs are allowed to read and write the same variable. To refresh your memory, let's look at the following example, Figure 4.9 from the text (sketched below). In this example, a hardware interrupt triggers UpdateTime once every second, and the SecSinceMidnight function is called by a user program to determine the number of seconds since midnight. Can you see where the shared data bug resides?
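
Figure 4.9 itself is not reproduced here; the following is a minimal sketch with the same structure (the once-per-second interrupt wiring is assumed).

static volatile int8 iSec, iMin, iHr;   // shared between the ISR and callers

// Hooked to a hardware interrupt that fires once every second.
void UpdateTime() {
    if (++iSec >= 60) {
        iSec = 0;
        if (++iMin >= 60) {
            iMin = 0;
            if (++iHr >= 24) iHr = 0;
        }
    }
} // end UpdateTime

// Called by user code.
int32 SecSinceMidnight() {
    // This single C statement compiles into many instructions; an
    // interrupt in the middle of it is the bug discussed below.
    return (int32)iHr * 3600 + (int32)iMin * 60 + iSec;
} // end SecSinceMidnight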

The bug occurs if an interrupt fires while the "return" statement of SecSinceMidnight is being executed. Since this one C statement translates into several lines of assembly language, changing the values of iHr, iMin, and iSec after some of them have already been read can cause the function to return an incorrect value. An interesting follow-up question is, "how wrong can this function be?"

	Before		After		Value used by
	Interrupt	Interrupt	SecSinceMidnight
iHr	3		4		3
iMin	59		00		00
iSec	59		00		00
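
Working the numbers in the table: the stale iHr gets combined with the freshly rolled-over iMin and iSec, so SecSinceMidnight returns 3*3600 + 0*60 + 0 = 10800 seconds, when the correct answer a moment after the rollover is 4*3600 = 14400 seconds. The result is off by nearly a full hour even though the time data was never more than one second out of date.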

Interrupt Latency

If we disable and re-enable interrupts to solve the shared data problem, then we increase the interrupt latency: the time delay between the occurrence of an interrupting event and its being serviced. In some cases you will need to calculate this latency, so how can you do it? You need to know 4 things (summarized after the list below).
  1. The longest period of time that the interrupt is disabled.
  2. The length of time required to service interrupts at a higher priority level.
  3. The time required to enter an ISR. This is the MCU book-keeping required to save the state of the MCU so that it is not perturbed by the ISR.
  4. How long it takes the ISR to set itself up and then "service" the interrupting event.
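Putting these four together gives a rule of thumb (a summary of the list above, not a formula from the text):

    worst-case response time = (longest period during which interrupts are disabled)
                             + (time spent servicing higher-priority interrupts)
                             + (time to switch context into the ISR)
                             + (time for the ISR itself to service the event)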
Let's look at an example.
  1. You have to disable interrupts for 125uS for your task code to use a pair of temperature variables it shares with the interrupt routine that reads the temperatures from the hardware and writes them into the variables.
  2. You have to disable interrupts for 250uS for your task code to get the time variables from variables it shares with the interrupt routine that responds to the timer interrupt.
  3. It takes 10uS for the MCU to switch contexts.
  4. You must complete a response within 625uS when you get a special signal (an interrupt) from another processor in your system; the inter-processor interrupt routine takes 300uS to complete.
This solution assumes that the interprocessor communication is given the highest priority and that there are no other interrupts at that priority level. Always work these types of problems assuming a worst-case scenario from the current state of the MCU to the resolution of the interrupt; that is, assume we have just entered the portion of the foreground code which disables the interrupts. Interrupts are disabled for 250uS, it takes 10uS to switch to the interprocessor ISR, and the interprocessor communication requires 300uS. Thus it takes 560uS to service this request, well within the 625uS requirement. What if we assumed that all the interrupts were of the same priority? Then the interprocessor ISR might also have to wait for the timer and temperature ISRs to finish; their execution times (not given above) would add to the 560uS and could threaten the 625uS deadline.

Non-blocking Solutions to the Shared Data Problem

Figure 4.15 of the text shows a non-blocking solution to the shared data problem; one such approach is sketched below.
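
Whether or not it matches the figure exactly, a common non-blocking idea is to read the shared data repeatedly until two successive reads agree, instead of disabling interrupts. A minimal sketch, assuming the seconds counter is kept by a timer ISR:

static volatile int32 lSecondsToday;   // incremented once per second by a timer ISR

int32 SecSinceMidnight() {
    int32 lReturn;

    // Read the multi-byte value twice; if an interrupt updated it in
    // between, the reads will not match and we simply try again.  The
    // interrupt is never delayed.  This works here because the variable
    // changes only once per second, far slower than a pair of reads.
    lReturn = lSecondsToday;
    while (lReturn != lSecondsToday)
        lReturn = lSecondsToday;
    return lReturn;
} // end SecSinceMidnight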

Volatile

At many points during this semester we have used variables to communicate between an ISR and the "main" program. The following code excerpt shows a typical example.
int8 flag;
void main() {
    ...
    while(flag == FALSE);
    ...
} // end main

void TMR0_ISR() {
    ...
    flag = TRUE;
    ...
} // end ISR
On a computer with many general purpose registers a C-compiler might translate the above code snippet as follows:
	mov	flag, R1
loop:	sub	R1,ZERO
	btfsc	status,z
	goto	loop
The problem here is that the C compiler did not know that the flag variable could be changed outside the context of the main routine; all the programs you have written in college to date have abided by this assumption. However, programs which use variables to communicate between processes need to use the keyword volatile. This causes the compiler to generate code which reloads the variable in question every time it is referenced, because its value is volatile. For example, the C code snippet above should have defined flag as:
    volatile int8 flag;
This would have caused the C compiler to translate the C code snippet as:
loop:	mov	flag, R1
	sub	R1,ZERO
	btfsc	status,z
	goto	loop
Reloading flag on every pass through the loop solves the problem.

Embedded Software Architectures

The choice of architecture is driven by the system's response-time requirements.

Round-Robin (or superloop)

A main loop checks each of the I/O devices and services each in a prescribed order.
void main() {
    init();
    while(1) {
	task1();
	task2();
	...
}   }
Example A digital multimeter which checks the position of a switch, reads a value from a probe, performs an ADC conversion, and then displays the result on an LCD.

Advantage Works well when there are few I/O devices, no lengthy processing, and no tight response requirements.

Disadvantage Fails if any device needs a response faster than the time required to get around the superloop, or if any of the tasks requires lengthy processing. Modifications made to meet timing requirements result in a fragile architecture.

Round-Robin with interrupts

A main loop checks each of the I/O devices and services each in a prescribed order. Interrupts are used to deal with the time constrained I/O devices.
volatile int8 global_flag_A;	// set by the ISR, cleared by main
int8 data_for_device_A;

void main() {
    init();
    while(1) {
	task1();
	task2();
	...
	if (global_flag_A) {	// deferred (non-urgent) work for device A
	    global_flag_A = FALSE;
	    taskA();
	}
}   }

void ISR_deviceA() {
    service_A(data_for_device_A);	// time-critical part only
    global_flag_A = TRUE;		// tell main there is work to do
}
Example A 36-position rotary encoder selects which function to perform on a DS1302 real-time clock. Assume that it takes 100mS to read the time from the DS1302, and that we want to follow the rotary encoder when it is turned as fast as 1 rotation per second. This means we must examine the rotary encoder at 36 clicks/sec * 4 detents/click = 144 detents/sec, or roughly one detent every 7mS. Thus, we need to put either the DS1302 or the rotary encoder onto an interrupt so that we can perform both tasks.

Advantage Simple.

Disadvantage Open to the problems associated with shared data. All the tasks in main operate at the same priority. For example, a laser printer spends lots of time calculating where to put the tiny dots on the page; main would get "stuck" working on this task to the exclusion of all the other tasks. Moving the other tasks into ISRs is a solution, but then a low-priority interrupt might take too long to service. In addition, if there were a pair of time-consuming tasks, one of them would always have to wait for the other.

Non-preemptive Real-Time Operating System

The problem is divided into a collection of independent programs called tasks. Each task is a mini-superloop program that can be running, ready, or blocked. A running task can transition into a non-running (blocked) state by executing a WAIT statement or by requiring a value from a message. When a task enters the blocked state, the RTOS determines which task to run next based on the numerical priority assigned by the programmer. In a non-preemptive RTOS a task will never be forced to give up the CPU (preempted); the highest-priority ready task always gets the CPU next. Tasks communicate with one another using messages and semaphores.

Example lab12.c

Advantage Simple to write a non-preemptive RTOS. Simple to program applications.

Disadvantage The longest delay to service a high priority event is the time required by the longest task. The RTOS cannot preempt any running task. Consequently a bug in one task may very well bring the entire system down. Using an RTOS consumes system resources (memory and MCU processing time).

Preemptive Real-Time Operating System

A preemptive RTOS can suspend one task to run another.

Advantage The response time of the system remains stable as the code is changed.

Disadvantage Using an RTOS consumes system resources (memory and MCU processing time). They increase the delivery cost of your product.

Conclusion

Examine each architecture with respect to each of the following factors:
Date: April 17
Lecture: 22
Reading: Chapter 6

Real Time Operating Systems

The real-time operating systems used in embedded systems are in some ways like a modern operating system such as Windows, and in other ways quite different.

Similarities Differences

Tasks

A task is a small program which can be in one of 3 states. In class I will draw a state diagram with the following 3 states: running, ready, and blocked.

Scheduler

The scheduler is the part of the RTOS software which keeps track of each task's state and decides which one should be put into the running state. Generally, the highest-priority ready task gets the MCU. If you write an RTOS application in which one task gets to hog the MCU while all the lower-priority tasks have to wait, the scheduler assumes that you knew what you were doing when you set the task priorities.
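
A hedged sketch of the scheduler's selection step; the data structure and names below are invented for illustration and are not taken from any particular RTOS.

#define NUM_TASKS 8

typedef enum {BLOCKED, READY, RUNNING} state_t;

typedef struct {
    state_t state;
    int8    priority;	// larger number = higher priority (an assumption)
    // ... saved context, stack pointer, etc.
} task_t;

task_t taskTable[NUM_TASKS];

// Pick the highest-priority READY task; ties and an empty table are
// glossed over in this sketch.
task_t *Schedule() {
    task_t *best = 0;
    int8 i;
    for (i = 0; i < NUM_TASKS; i++) {
	if (taskTable[i].state == READY &&
	    (best == 0 || taskTable[i].priority > best->priority))
	    best = &taskTable[i];
    }
    return best;	// this task gets the MCU next
}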

Common Questions

A Simple Example

The commands in bold (CreateTask, CreateEvent, StartMultitasking, WaitForEvent, and SignalEvent) are provided by the RTOS. All others need to be defined by the user.
event FULL_MOON, GREEN_CHEESE, CLOCK_STRIKE_ONE;
int8 NUM_MICE;

//---------------------------------------------------
//---------------------------------------------------
main() {
    CreateTask(CowTask, high);
    CreateTask(MouseTask, low);

    CreateEvent(FULL_MOON);
    CreateEvent(CLOCK_STRIKE_ONE);
    CreateEvent(GREEN_CHEESE);

    StartMultitasking();
}


//---------------------------------------------------
//---------------------------------------------------
CowTask() {
    while(1) {
	WaitForEvent(FULL_MOON);
	JUMP_OVER_MOON();
	SignalEvent(GREEN_CHEESE);
}   }

//---------------------------------------------------
//---------------------------------------------------
MouseTask() {
    while(1) {
	WaitForEvent(GREEN_CHEESE);
	RUN_UP_THE_CLOCK();
	WaitForEvent(CLOCK_STRIKE_ONE);
	NUM_MICE -= 1;
	RUN_DOWN_THE_CLOCK();
}   }


//---------------------------------------------------
//---------------------------------------------------
TMR0_ISR() {
    static int8 hour=0;
    hour++;			// assumes TMR0 is set up to interrupt once per hour
    if (hour == 24) hour = 0;
    if (hour == 1) SignalEvent(CLOCK_STRIKE_ONE);
}

Allocation of memory

static int8 x;
int8 y=4;
char *string = "Where does it go?";
void *ptr;

void fnc(int8 a, int8 *b) {
    static int8 c;
    int8 local;
    ...
}
Variables are either stored on the system stack or in fixed memory locations. For each of the declarations in the preceding program, decide where it should be assigned and why; a sketch of the usual answers follows.
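
A sketch of the usual answers for a typical C implementation (details vary by compiler; in particular, small-PIC compilers such as CCS often place "stack" locals in statically allocated scratch RAM, which is one reason their library functions may not be reentrant):

static int8 x;			// fixed location; file scope, not visible outside this file
int8 y=4;			// fixed location; initialized once at program startup
char *string = "Where does it go?";	// the pointer sits in a fixed location; the string
				// literal itself typically lives in program (read-only) memory
void *ptr;			// fixed location; zero-initialized at startup

void fnc(int8 a, int8 *b) {	// parameters a and b: on the stack (or in registers),
				// private to each call
    static int8 c;		// fixed location even though its scope is local;
				// it retains its value between calls
    int8 local;			// on the stack; a fresh copy for every call, which is
				// what makes stack locals reentrant-friendly
}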
Date: April 19
Lecture: 23
Reading: Chapter 6

Review

Draw the state diagram for the states of a task and review how each transition can occur. Compare preemptive vs. non-preemptive.


A very simple RTOS application

In the following RTOS application, main is excluded; it contains calls to initialize the RTOS, declare the tasks, declare the events, and start the RTOS. Starting an RTOS is like hitting the "frappe" button on a mixer: it stirs all the tasks together and selects which to run. Anyway, the following is a very simple application with two tasks which wait for one another.
void taskB() {
    while(1) {
	WaitForEvent(eventA);
	SignalEvent(eventB);
	taskB_stuff();
}   }

void taskA() {
    while(1) {
	SignalEvent(eventA);
	WaitForEvent(eventB);
	taskA_stuff();
}   }

The class period was spent examining the interplay between these two tasks. The behavior was illustrated using a sequence diagram: the vertical axis represents time, and a vertical line is drawn for each task. The state of each task is noted on its line, and message passing is denoted by drawing arrows between the vertical lines at the point at which the message is passed. In our non-preemptive operating system a task cannot have the CPU taken away from it; it must give the CPU away. This proved important in understanding the expected behavior of the system when drawing the sequence diagram.
Date: April 19
Lecture: 23
Reading: Chapter 6

Review

Draw the state diagram for the states of a task and review how each transition can occur. Compare preemptive vs. non-preemptive. Nonpreemptive - the condition of execution in which the running thread or process retains control of the processor until it explicitly or implicitly relinquishes it (definition courtesy of Novell). We can interpret this definition in the context of the task state diagram drawn in class: a nonpreemptive RTOS will only take the CPU away from a task which blocks itself.


  1. Processes do not go from the blocked state directly to the running state. First a process must be unblocked; it then moves to the ready queue. From there the RTOS might move it to the running state.
  2. A running process can block itself by waiting for an event.
  3. A ready process can become the running process if
    1. The running process blocked itself.
    2. The ready process (let's call it task B) has a higher priority than the actively running process (call it task A). Let's look at this situation more carefully. We know that when the RTOS is given a choice of tasks to run it will always pick the one with the highest priority. From this we can infer that when the RTOS chose to run task A, task B was blocked. While task A was running, task B must have become unblocked. There are two different ways this could have happened:
      1. Task A signaled some event which task B was waiting on. Hence task A is responsible for losing the CPU.
      2. An ISR signaled some event on which task B was waiting.
  4. A running task is moved to the ready state if another process with a higher priority enters the ready state and the RTOS preempts the currently running task. Thus, this can only happen in a preemptive RTOS.
  5. A task moves from the blocked to the ready state when an event it is waiting on is signaled by the currently running task or an ISR.
  6. Ready tasks are never blocked. They must first run, even if for a very short period of time.

Priority Inversion

In a nonpreemptive RTOS it is possible for a low-priority task to be "holding" the CPU while one or more higher-priority tasks are in the ready state. This can happen if the low-priority task signals events on which the high-priority tasks are waiting.

Shared Data Problem

As in the previous example, main is excluded; it contains calls to initialize the RTOS, declare the tasks, declare the events, and start the RTOS. The following is a very simple application with two tasks which wait for one another; while they are doing this, both of them call a shared function, foobar().
signed int8 total=0;

// High priority task 
void taskA() {
    while(1) {
	WaitForEvent(eventISR);
	foobar(-1);
	taskA_stuff();
}   } // end taskA

// Low priority task 
void taskB() {
    while(1) {
	foobar(+1);
	taskB_stuff();
}   } // end taskB

void foobar(int8 val) {
    // a VERY complex time consuming function which
    // manipulates static locals and global variables.
} // end foobar

void tmr0_isr() {
    SignalEvent(eventISR);
    // Set TMR0 to some value
} // end tmr0_isr
  1. RTOS initializes and puts taskA and taskB into the ready state. The RTOS then picks taskA to run since it has the higher priority.
  2. taskA blocks on eventISR
  3. The RTOS moves taskA to the blocked state.
  4. The RTOS moves taskB to the running state.
  5. taskB calls foobar(+1) (and will be a while completing it).
  6. The ISR wakes up and signals eventISR.
  7. The RTOS moves taskA from the blocked state to the ready state.
  8. The preemptive RTOS notices that a ready task has a higher priority than the running task.
  9. The preemptive RTOS moves taskB to the ready state and moves taskA to the running state.
  10. taskA runs foobar(-1) and contaminates the static local and global variables that taskB was using. At some point taskA will block waiting for eventISR.
  11. RTOS will move taskA to the blocked state and taskB to the running state.
  12. taskB will resume execution with the corrupted values for the global variables used by foobar.
The subroutine foobar is said to be non-reentrant. A reentrant function can be invoked any number of times in parallel, without interference between the various invocations.
Date: April 19
Lecture: 23
Reading: Chapter 6

Reentrant functions

//--------------------------------------------------------
// foobar(int8 val)
// A VERY complex time consuming function.  This function
// has many reentrant problems.
//--------------------------------------------------------
void foobar(int8 val, char *string) {
    int8 temp;		// locals are allocated on the stack
    static int8 stemp;	// static locals live in fixed (static) memory
			// (global and ptr are assumed to be declared elsewhere)
    temp = global;
    temp += 1;		// interrupting here is bad
    global = temp;
    *ptr  = val;
    WRITE_LCD(string);	// interrupting here is bad
    global = fnc(val);	// interrupting here is bad
} // end foobar

//--------------------------------------------------------
// fnc(int8 val)
// A nonreentrant function
//--------------------------------------------------------
int8 fnc(int8 val) {
    return nonreentrant_stuff();
}
The subroutine foobar is said to be non-reentrant. A reentrant function can be invoked any number of times in parallel, without interference between the various invocations. Clearly a function which may be called by more than one task needs to be reentrant. There are 3 criteria for designing reentrant functions.
  1. A reentrant function may not use variables in a non-atomic way unless they are stored on the stack of the task that called the function or are otherwise the private variables of that task. An atomic operation is one which requires a single assembly instruction to complete; an atomic operation is not interruptible. Thus a variable which can be shared between several invocations must be manipulated in an atomic way, otherwise the function is not reentrant.
  2. A reentrant function may not call any other functions that are not themselves reentrant.
  3. A reentrant function may not use the hardware in a non-atomic way.
You might consider all this discussion over shared variables to be so much hoopla, but the problems are more insidious than you might at first imagine. For example, let's say that you wanted to use the SQRT function provided by the CCS compiler. It is almost certain that this function has local variables and uses them as part of non-atomic operations. Consequently, the system libraries of a compiler are more than likely non-reentrant. Is there any hope of salvaging the valuable functions which are part of the compiler's libraries?

Semaphore

A semaphore is a mechanism for restricting access to critical sections of code to a single user or process at a time. Typically semaphores are binary variables (having the value 0 or 1) which represent the state of a sharable resource (like a variable or even a function call).
void TaskA() {
    ....
    TakeSemaphore(semaphoreSQRT);
    a = SQRT(b);
    ReleaseSemaphore(semaphoreSQRT);
    ....
} 

void TaskB() {
    ....
    TakeSemaphore(semaphoreSQRT);
    c = SQRT(d);
    ReleaseSemaphore(semaphoreSQRT);
    ....
} 
There are 2 keys to making this paradigm work.
  1. The failure of a task to claim a semaphore must prohibit the task from using the shared resources. In general a task which is unable to claim a semaphore should be blocked.
  2. Access to the semaphore must be atomic.
The requirement for atomic manipulation of a semaphore has a direct impact on the computer architecture. Almost all modern computers have assembly language instructions which both manipulate and test a variable in a single step. The 18F452 has four such instructions (listed in class); they could be used to implement an atomic semaphore locking and release mechanism. An alternative sketch appears below.
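
Since the four instructions are not listed in these notes, here is a minimal alternative sketch of atomic take/release operations built by briefly disabling interrupts. DISABLE_INTERRUPTS() and ENABLE_INTERRUPTS() are placeholder names, not CCS or 18F452 mnemonics, and a real RTOS would block a task rather than report failure.

typedef volatile int8 semaphore_t;	// 1 = available, 0 = taken

int8 TryToTakeSemaphore(semaphore_t *s) {
    int8 got_it = FALSE;
    DISABLE_INTERRUPTS();	// nothing can slip in between the test of
    if (*s == 1) {		// the semaphore ...
	*s = 0;			// ... and the claim of it
	got_it = TRUE;
    }
    ENABLE_INTERRUPTS();
    return got_it;		// a real RTOS would block the caller here on failure
}

void ReleaseSemaphore(semaphore_t *s) {
    *s = 1;			// a single 8-bit write is already atomic; a real RTOS
}				// would also move any task waiting on s to the ready state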

Semaphore Problems

void TaskA() {
    ....
    TakeSemaphore(A);
    TakeSemaphore(B);
    stuff();
    ....
    ReleaseSemaphore(B);
    ReleaseSemaphore(A);
} 

void TaskB() {
    ....
    TakeSemaphore(A);
    TakeSemaphore(B);
    stuff();
    ....
    ReleaseSemaphore(B);
    ReleaseSemaphore(A);
    ....
}
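As written, both tasks take the semaphores in the same order (A then B), which is the safe discipline. The classic problem, a deadly embrace (deadlock), appears if the two tasks take them in opposite orders; a sketch of that broken variant:

// BROKEN on purpose: TaskA takes A then B, while TaskB takes B then A.
// If TaskA is suspended right after taking A and TaskB then takes B, each
// task waits forever for the semaphore the other one is holding.
void TaskA() {
    TakeSemaphore(A);
    TakeSemaphore(B);		// waits forever if TaskB already holds B
    stuff();
    ReleaseSemaphore(B);
    ReleaseSemaphore(A);
}

void TaskB() {
    TakeSemaphore(B);
    TakeSemaphore(A);		// waits forever if TaskA already holds A
    stuff();
    ReleaseSemaphore(A);
    ReleaseSemaphore(B);
}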