author    Hugo Hörnquist <hugo@lysator.liu.se>  2020-06-10 22:18:50 +0200
committer Hugo Hörnquist <hugo@lysator.liu.se>  2020-06-10 22:18:50 +0200
commit    e9062d00bd9e4956af6efa515be5c4e18f7fcf0d (patch)
tree      d81befbf48022f4e02448efa12167c94d5979fe7 /Pintos.wiki
parent    Remove bookmarks page. (diff)
Diffstat (limited to 'Pintos.wiki')
-rw-r--r--  Pintos.wiki  238
1 file changed, 238 insertions, 0 deletions
diff --git a/Pintos.wiki b/Pintos.wiki
index d47eb1c..05f6d26 100644
--- a/Pintos.wiki
+++ b/Pintos.wiki
@@ -46,3 +46,241 @@ ttps://www.ida.liu.se/~TDDB68/labs/Lab1.pdf
- `threads/interrupt.{h,c}` :: Important structures
- `lib/syscall-nr.h` :: Syscall numbers
- `filesys/filesys.{h,c}` :: Pintos file system
+
+
+
+
+----------------------------------------
+
+ A barrier synchronization is a function that does not return control to the
+ caller until all p threads of a multithreaded process have called it.
+
+ A possible implementation of the barrier function uses a shared counter
+ variable, initialized to 0 at program start and incremented by each
+ barrier-invoking thread; the barrier function returns once counter has
+ reached the value p.
+
+ We assume here that p is fixed and can be obtained by calling a function
+ get_nthreads(), that load and store operations are performed atomically,
+ and that each thread will only call the barrier function once.
+
+ The following code is given as a starting point:
+
+
+ {{{c
+ static volatile int counter = 0; // shared variable
+
+ void barrier( void )
+ {
+ counter = counter + 1;
+ while (counter != get_nthreads())
+ ; // busy waiting
+ return;
+ }
+ }}}
+
+1.
+
+ Fix the race condition on the shared counter, for example by protecting
+ the update with a mutex lock.
+
+{{{c
+static volatile int counter = 0; // shared variable
+static mutex m;                  // shared mutex protecting counter
+
+void barrier( void )
+{
+    lock(m);                 // make the read-modify-write atomic
+    counter = counter + 1;
+    unlock(m);
+    while (counter != get_nthreads())
+        ; // busy waiting
+    return;
+}
+}}}
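+
+For reference, a concrete POSIX-threads rendering of this pseudocode might
+look as follows (pthreads are an assumption on my part; get_nthreads() is
+taken as given by the exercise):
+
+{{{c
+// Sketch only: assumes pthreads; get_nthreads() comes from the exercise.
+#include <pthread.h>
+
+static volatile int counter = 0;
+static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
+
+extern int get_nthreads(void);
+
+void barrier(void)
+{
+    pthread_mutex_lock(&m);   // protect the read-modify-write
+    counter = counter + 1;
+    pthread_mutex_unlock(&m);
+    while (counter != get_nthreads())
+        ;  // busy waiting, as in the pseudocode
+}
+}}}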
+
+
+2.
+
+ Show by an example scenario with p=2 threads (i.e., some unfortunate
+ interleaving of thread execution over time) that this implementation of barrier
+ may cause a program calling it (such as the following) to hang.
+
+ {{{c
+ void main( void )
+ {
+ ... // create p threads in total
+ ...
+ barrier();
+ ...
+ }
+ }}}
+
+One simple case: thread 1 is preempted between reading and writing the
+counter, so both threads write back the same incremented value and one of
+the +1s is lost. Both threads are then stuck waiting for counter to reach 2,
+which it never will.
+
+| Thread 1               | Thread 2               | `counter` |
+|------------------------|------------------------|-----------|
+| load counter (reads 0) |                        | 0         |
+|                        | load counter (reads 0) | 0         |
+|                        | increment (0 → 1)      | 0         |
+|                        | store counter          | 1         |
+| increment (0 → 1)      |                        | 1         |
+| store counter          |                        | 1         |
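+
+A hypothetical pthreads demonstration of the same lost update (using many
+plain increments instead of the barrier, so that the program terminates
+rather than hangs):
+
+{{{c
+// Sketch only (pthreads assumed): two threads perform the same
+// non-atomic increment; the final count usually falls short of
+// 2000000 because interleavings like the table above lose updates.
+#include <stdio.h>
+#include <pthread.h>
+
+static volatile int counter = 0;
+
+static void *inc_many(void *arg)
+{
+    (void) arg;
+    for (int i = 0; i < 1000000; i++)
+        counter = counter + 1;  // load, add, store: not atomic
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t t1, t2;
+    pthread_create(&t1, NULL, inc_many, NULL);
+    pthread_create(&t2, NULL, inc_many, NULL);
+    pthread_join(t1, NULL);
+    pthread_join(t2, NULL);
+    printf("counter = %d (expected 2000000)\n", counter);
+    return 0;
+}
+}}}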
+
+3.
+ Today, many processors offer some type of atomic operation(s). Can you use here
+ an atomic fetch_and_add operation instead of the mutex lock to guarantee
+ correct execution? If yes, show how to modify the code above and explain. If
+ not, explain why.
+
+
+{{{c
+static volatile int counter = 0; // shared variable
+
+void barrier( void )
+{
+ fetch_and_add(&counter, 1); // atomic read-modify-write
+
+ while (counter != get_nthreads())
+ ; // busy waiting
+ return;
+}
+}}}
+
+A fetch-and-add instruction behaves just like the lock-protected increment
+above: the whole read-modify-write happens atomically, so no increment can
+be lost.
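+
+With GCC or Clang this could for instance be written with the
+__sync_fetch_and_add builtin (the toolchain is an assumption on my part):
+
+{{{c
+// Sketch only: assumes a GCC/Clang-style __sync_fetch_and_add builtin
+// and the exercise's get_nthreads().
+static volatile int counter = 0;
+
+extern int get_nthreads(void);
+
+void barrier(void)
+{
+    __sync_fetch_and_add(&counter, 1);  // atomic increment
+    while (counter != get_nthreads())
+        ;  // busy waiting
+}
+}}}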
+
+4.
+ Suggest a suitable way to extend the (properly synchronized) code from question
+ 2 to avoid busy waiting. Show the resulting pseudocode.
+
+{{{c
+static volatile int counter = 0; // shared variable
+static mutex m;                  // shared mutex
+semaphore bar = 0;               // arriving threads sleep on this
+
+void barrier( void )
+{
+    lock(m);
+    counter = counter + 1;
+    if (counter == get_nthreads()) {
+        unlock(m);
+        // we are the last thread: wake all sleeping threads
+        for ( i ∈ [0, get_nthreads() - 1) ) {
+            sema_up (bar);
+        }
+    } else {
+        unlock(m);
+        // not the last thread: sleep until the last one arrives
+        sema_down (bar);
+    }
+    return;
+}
+}}}
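+
+A concrete POSIX sketch of the same idea (pthreads and POSIX semaphores are
+assumptions on my part; get_nthreads() is taken as given):
+
+{{{c
+// Sketch only: bar must start at 0, e.g. via barrier_init() below,
+// before any thread calls barrier().
+#include <pthread.h>
+#include <semaphore.h>
+
+static int counter = 0;
+static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
+static sem_t bar;
+
+extern int get_nthreads(void);
+
+void barrier_init(void)
+{
+    sem_init(&bar, 0, 0);
+}
+
+void barrier(void)
+{
+    pthread_mutex_lock(&m);
+    int arrived = ++counter;
+    pthread_mutex_unlock(&m);
+
+    if (arrived == get_nthreads()) {
+        // last thread: wake the p - 1 sleepers
+        for (int i = 0; i < get_nthreads() - 1; i++)
+            sem_post(&bar);
+    } else {
+        sem_wait(&bar);  // sleep instead of busy waiting
+    }
+}
+}}}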
+
+5.
+ Consider the following (Unix) C program.
+
+ {{{c
+ #include <stdio.h>
+ #include <sys/types.h>
+ #include <unistd.h>
+ int main() {
+ pid_t pid1 = fork();
+ if (pid1 == 0) {
+ printf("A");
+ }
+ printf("A");
+ }
+ }}}
+
+ How many times will the letter A be printed to the standard output when
+ executing this program?
+
+Three times. The child (pid1 == 0) prints once inside the if and once after
+it; the parent prints only once, after the if. Nothing was written to the
+stdout buffer before the fork, so no buffered output is duplicated.
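+
+A hypothetical variant (the labels are my own, not part of the exam) that
+tags each printf so the three outputs are visible:
+
+{{{c
+// Sketch only: same control flow as the exam program, with labels.
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+int main(void)
+{
+    pid_t pid1 = fork();
+    if (pid1 == 0) {
+        printf("A (child, inside if)\n");   // child only
+    }
+    printf("A (%s, after if)\n",            // both processes
+           pid1 == 0 ? "child" : "parent");
+    return 0;
+}
+}}}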
+
+6.
+
+ Every process is associated with a number of areas in memory used to store
+ the program code, variables, etc. For each memory item below, connect it
+ with the correct allocation option. Note that options can be used more than
+ once and that not all options need to be chosen.
+
+| Memory item                                                       | Allocation                     |
+|-------------------------------------------------------------------|--------------------------------|
+| The page table of a process                                       | __allocated in kernel memory__ |
+| The memory used to store the content of a function parameter      | stack                          |
+| Memory allocated using malloc                                     | heap                           |
+| The compiled machine code of a program                            | text section                   |
+| A global (static) variable                                        | data section                   |
+| The memory used to store a local variable declared in a function  | stack                          |
+
+7.
+
+ Give an example of a situation (table with jobs) where the SJF
+ algorithm provides better performance as measured by average
+ turnaround time compared to FCFS. There must be at least 4 jobs in the
+ table and your example must not be copied from anywhere. Motivate by
+ showing the average turnaround time for both algorithms.
+
+Please view the answer with a monospaced font.
+
+| P | burst time | arrival |
+|---|------------|---------|
+| 1 | 1000 | 0 |
+| 2 | 1 | 1 |
+| 3 | 1 | 2 |
+| 4 | 1 | 3 |
+
+*First come, first served* would use the first 1000 time units for
+process 1, and then run processes 2, 3 & 4 after it, giving the
+following turnaround times (completion time minus arrival time):
+
+| P | turnaround time |
+|---|-----------------|
+| 1 | 1000 |
+| 2 | 1000 |
+| 3 | 1000 |
+| 4 | 1000 |
+
+or an average of 1000 time units.
+
+*Shortest job first* (with preemption, i.e. shortest remaining time first)
+would instead make process 1 wait for all the shorter processes, giving the
+following turnaround times:
+
+| P | turnaround time |
+|---|-----------------|
+| 1 | 1003 |
+| 2 | 1 |
+| 3 | 1 |
+| 4 | 1 |
+
+This is an average of (1003 + 1 + 1 + 1) / 4 = 251.5 time units,
+significantly lower than 1000.
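+
+A quick check of the arithmetic (completion times are read off the two
+schedules above):
+
+{{{c
+// Sketch only: verifies the two averages from the hand-drawn schedules.
+#include <stdio.h>
+
+int main(void)
+{
+    int arrival[]     = {0, 1, 2, 3};
+    int fcfs_finish[] = {1000, 1001, 1002, 1003};
+    int sjf_finish[]  = {1003, 2, 3, 4};
+    double fcfs = 0, sjf = 0;
+    for (int i = 0; i < 4; i++) {
+        fcfs += fcfs_finish[i] - arrival[i];
+        sjf  += sjf_finish[i]  - arrival[i];
+    }
+    printf("FCFS average turnaround: %.1f\n", fcfs / 4);  // 1000.0
+    printf("SJF  average turnaround: %.1f\n", sjf / 4);   // 251.5
+    return 0;
+}
+}}}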
+
+8.
+
+Banker's algorithm is a deadlock [avoidance] algorithm. Freedom from
+deadlocks is a [necessary] condition for ensuring progress. The algorithm
+guarantees that only so-called [safe] states will be reached by checking
+every resource allocation request. If a resource allocation would lead to an
+undesired state, the request is [rejected].
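+
+A minimal sketch of the safety test at the core of the algorithm (the sizes
+and names are illustrative, not from the course material):
+
+{{{c
+// Sketch only: returns true iff some order exists in which all
+// processes can run to completion with the currently available
+// resources, i.e. the state is safe.
+#include <stdbool.h>
+
+#define N 3  // processes
+#define M 2  // resource types
+
+bool is_safe(int available[M], int need[N][M], int alloc[N][M])
+{
+    int work[M];
+    bool finish[N] = { false };
+    for (int j = 0; j < M; j++)
+        work[j] = available[j];
+
+    int done = 0;
+    while (done < N) {
+        bool progressed = false;
+        for (int i = 0; i < N; i++) {
+            if (finish[i])
+                continue;
+            bool fits = true;
+            for (int j = 0; j < M; j++)
+                if (need[i][j] > work[j]) { fits = false; break; }
+            if (fits) {
+                // pretend process i finishes and releases its resources
+                for (int j = 0; j < M; j++)
+                    work[j] += alloc[i][j];
+                finish[i] = true;
+                progressed = true;
+                done++;
+            }
+        }
+        if (!progressed)
+            return false;  // some process can never finish: unsafe
+    }
+    return true;
+}
+}}}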
+
+
+
+9.
+
+ Explain how paging supports sharing of memory between processes.
+
+Common code (such as libraries) can be loaded once into a set of frames and
+then mapped into the page tables of all processes that need it. Since the
+pages are mapped read-only, this is indistinguishable from each process
+having its own copy of the library, apart from the lower memory usage.
+
+Memory pages can also be mapped into multiple processes explicitly (e.g.
+with mmap), and then be used as a shared data area.
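+
+A minimal sketch of the mmap case (Linux/POSIX assumed; error handling
+omitted):
+
+{{{c
+// Sketch only: parent and child share one anonymous page, so the
+// child's write is visible to the parent through the shared mapping.
+#include <stdio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+int main(void)
+{
+    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
+                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+    *shared = 0;
+    if (fork() == 0) {        // child writes into the shared page
+        *shared = 42;
+        return 0;
+    }
+    wait(NULL);               // parent waits, then sees the write
+    printf("%d\n", *shared);  // prints 42
+    return 0;
+}
+}}}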
+
+10.
+ Explain why page faults occur, under what circumstances and what happens
+ after. Describe the set of events step by step, considering also the
+ possibility of there being no free frames. For each step describe where the
+ logic that executes this step is located (e.g., OS kernel, MMU, program
+ code).
+
+A page fault occurs when a process accesses memory in a page that is not
+currently mapped to a physical frame (or with access rights the mapping
+does not allow). Step by step:
+
+ 1. The MMU (hardware) fails to translate the virtual address and raises a
+    page-fault exception, trapping into the OS kernel.
+ 2. The kernel's page-fault handler decides whether the access is legal
+    (e.g. a page that was swapped out or not yet loaded) or an error, in
+    which case the process is killed (segmentation fault).
+ 3. If legal, the kernel (OS) looks for a free frame. If there is none, a
+    page-replacement algorithm in the kernel picks a victim page, writing
+    it back to disk first if it is dirty.
+ 4. The kernel reads the required page into the frame and updates the page
+    table.
+ 5. The faulting instruction is restarted, and the program code continues
+    as if nothing had happened.