path: root/Pintos.wiki
author    Hugo Hörnquist <hugo@lysator.liu.se>    2020-07-14 23:21:59 +0200
committer Hugo Hörnquist <hugo@lysator.liu.se>    2020-07-14 23:21:59 +0200
commit    93706775e738a24c69fbf9769c72d56875f26326 (patch)
tree      824da15d70131a5c85cd030d2de9e04439f7a0c8 /Pintos.wiki
parent    Wed, 01 Jul 2020 17:11:16 +0200 (diff)
download  wiki-public-93706775e738a24c69fbf9769c72d56875f26326.tar.gz
          wiki-public-93706775e738a24c69fbf9769c72d56875f26326.tar.xz
Diffstat (limited to 'Pintos.wiki')
-rw-r--r--  Pintos.wiki  24
1 file changed, 24 insertions(+), 0 deletions(-)
diff --git a/Pintos.wiki b/Pintos.wiki
index 05f6d26..feddc40 100644
--- a/Pintos.wiki
+++ b/Pintos.wiki
@@ -96,6 +96,7 @@ void barrier( void )
}
}}}
+*2 points*
2.
@@ -127,6 +128,8 @@ counter to hit 2, which it never will.
| increment counter | | 0 + 1 |
| store counter | | 0 + 1 |
+*2 points*
+
3.
Today, many processors offer some type of atomic operation(s). Can you use an
atomic fetch_and_add operation here instead of the mutex lock to guarantee
@@ -150,6 +153,8 @@ void barrier( void )
A fetch-and-add instruction should behave just like our example above with a
lock around the modification.
+*2 points*
+
4.
Suggest a suitable way to extend the (properly synchronized) code from question
2 to avoid busy waiting. Show the resulting pseudocode.
@@ -177,6 +182,8 @@ void barrier( void )
}
}}}
+*1 point*
+
5.
Consider the following (Unix) C program.
@@ -198,6 +205,8 @@ void barrier( void )
3
+*1 point*
+
6.
Every process is associated with a number of areas in memory used to store the
@@ -213,6 +222,8 @@ void barrier( void )
| The memory used to store a local variable declared in a function | stack |
| | |
+*3 points*
+
7.
Give an example of a situation (table with jobs) where the SJF
@@ -254,6 +265,10 @@ wait for all the shorter processes. Giving the following wait times.
| 4 | 1 |
This is an average of ≈ 250 time units, which is significantly lower than 1000.
+*0.5 points*
+ *Comment*: the idea is right, but this calculates the waiting time instead of the turnaround time
+
+
8.
Banker's algorithm is a deadlock [avoidance] algorithm. Freedom from deadlocks
@@ -262,6 +277,7 @@ only so-called [safe] states will be reached by checking every resource
allocation request. If a resource allocation leads to an undesired state, the
request is [rejected].
+*1.5 points*
== 9. Explain how paging supports sharing of memory between processes. ==
@@ -274,6 +290,8 @@ into memory for each process (except the smaller memory usage).
Memory pages can also manually be mapped into multiple processes (e.g. mmap),
and then be used as a shared data area.
+*1 point*
+
10.
Explain why page faults occur, under what circumstances and what happens
after. Describe the set of events step by step, considering also the
@@ -284,3 +302,9 @@ and then be used as a shared data area.
A page fault occurs when a process attempts to access memory in a currently
unmapped page.
+*1 point*
+ *Comment*: it is right, but more information is missing on why and
+ when page faults occur. The second part (handling of page
+ faults) is missing completely.
+
+