pintos --qemu -m 128 -- run "recursor_ng pintosmaster 6 1" 

Don't re-register in Webreg
Probably on Thursday

- email Filippe [sic?] about the status
- do the old or the new labs

- Lab 0 updated
- Lab 3 split into three parts

- more code examples

----------------------------------------

- 29 March: lab deadline
- labs might not be handed in via GitLab

== Technical notes on Pintos ==

- The return value from a syscall is placed in `f->eax`
- `f->esp` is the stack pointer of the calling process
- The syscall number is at the top of the stack, followed by the arguments
- `0xCC` means that the memory was `free()`d
- `backtrace kernel.o <address> ...` turns raw kernel addresses into a symbolic backtrace
- GDB's examine command: `x/5w addr` prints five words, `x/s addr` prints a string
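
A minimal sketch of how these pieces fit together in `userprog/syscall.c`,
assuming the standard Pintos `struct intr_frame` and the `SYS_*` constants from
`lib/syscall-nr.h`; it skips the user-pointer validation a real solution needs:

{{{c
#include "userprog/syscall.h"
#include <syscall-nr.h>
#include <stdio.h>
#include "threads/interrupt.h"
#include "threads/thread.h"

/* Sketch only: this is the handler registered by syscall_init().  A real
   handler must first verify that esp and every argument slot point into
   valid, mapped user memory. */
static void
syscall_handler (struct intr_frame *f)
{
  int *esp = f->esp;   /* user stack pointer at the time of the trap */
  int nr = esp[0];     /* the syscall number sits at the top of the stack */

  switch (nr)
    {
    case SYS_EXIT:
      /* The first argument (the exit status) sits just above the number. */
      printf ("%s: exit(%d)\n", thread_current ()->name, esp[1]);
      thread_exit ();
      break;
    default:
      f->eax = -1;     /* a syscall that returns a value writes it to eax */
      break;
    }
}
}}}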

== Debug ==
{{{
# Run these from src/userprog/build
pintos-mkdisk fs.dsk 2                              # create a 2 MB file system disk
pintos --qemu -- -f -q                              # format the disk
pintos --qemu -p local-file -a file-in-pintos -- -q # copy local-file onto the Pintos disk

pintos --qemu --gdb -- run printf       # waits for gdb to attach on port 1234
pintos-gdb src/userprog/build/kernel.o  # run this in a second terminal
(gdb) target remote localhost:1234
(gdb) continue
}}}

https://www.ida.liu.se/~TDDB68/labs/Lab1.pdf


=== Files ===

- `pintos/src/lib/user/syscall.{h,c}` :: The user-side syscall wrappers
- `userprog/syscall.{h,c}`            :: Implement the syscall handlers here
- `threads/thread.{h,c}`              :: The thread struct; extend it as needed
- `threads/interrupt.{h,c}`           :: Important structures (`struct intr_frame`)
- `lib/syscall-nr.h`                  :: Syscall numbers
- `filesys/filesys.{h,c}`             :: The Pintos file system




----------------------------------------

  A barrier synchronization is a function that does not return control to the
  caller until all p threads of a multithreaded process have called it.

  A possible implementation of the barrier function uses a shared counter
  variable that is initialized to 0 at program start and incremented by each
  barrier-invoking thread, and the barrier function returns if counter has
  reached value p.

  We assume here that p is fixed and can be obtained by calling a function
  get_nthreads(), that load and store operations perform atomically, and that
  each thread will only call the barrier function once.

  The following code is given as a starting point:


  {{{c
  static volatile int counter = 0;  // shared variable

  void barrier( void )
  {
    counter = counter + 1;
    while (counter != get_nthreads()) 
       ;  // busy waiting
    return;
  }
  }}}

1.

{{{c
static volatile int counter = 0;  // shared variable
static mutex lock;                // shared mutex

void barrier( void )
{
  lock(lock);                     // make the read-modify-write of counter atomic
  counter = counter + 1;
  unlock(lock);
  while (counter != get_nthreads())
     ;  // busy waiting
  return;
}
}}}

*2 points*

2.

  Show by an example scenario with p=2 threads (i.e., some unfortunate
  interleaving of thread execution over time) that this implementation of barrier
  may cause a program calling it (such as the following) to hang.

  {{{c
  void main( void )
  {
   ... // create p threads in total
   ...
   barrier();
   ...
  }
  }}}

One simple case is that thread 1 gets preempted between reading and writing
the counter, so both threads write back the same incremented value and one of
the +1s is lost. Both threads are then stuck waiting for counter to reach 2,
which it never will.

| Thread 1              | Thread 2              | `counter` |
|-----------------------|-----------------------|-----------|
| load counter (sees 0) |                       | 0         |
|                       | load counter (sees 0) | 0         |
|                       | increment local copy  | 0         |
|                       | store counter         | 1         |
| increment local copy  |                       | 1         |
| store counter         |                       | 1         |

*2 points*

3.
  Today, many processors offer some type of atomic operation(s). Can you use here
  an atomic fetch_and_add operation instead of the mutex lock to guarantee
  correct execution? If yes, show how to modify the code above and explain. If
  not, explain why.


{{{c
static volatile int counter = 0;  // shared variable

void barrier( void )
{
  fetch_and_add(&counter, 1);     // atomic read-modify-write of counter

  while (counter != get_nthreads())
     ;  // busy waiting
  return;
}
}}}

An atomic fetch-and-add instruction behaves just like the example above with a
lock around the modification: the read-modify-write of counter can no longer be
interleaved, so no increment is lost.
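
For reference, a small runnable sketch of the same idea outside Pintos,
assuming GCC/Clang's `__atomic_fetch_add` builtin as the fetch-and-add and a
fixed `NTHREADS` standing in for `get_nthreads()` (both are stand-ins, not part
of the exam code):

{{{c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4                 /* stand-in for get_nthreads() */

static volatile int counter = 0;   /* shared variable */

static void barrier (void)
{
  /* Atomic read-modify-write: no increment can be lost. */
  __atomic_fetch_add (&counter, 1, __ATOMIC_SEQ_CST);
  while (counter != NTHREADS)
    ;                              /* busy waiting */
}

static void *worker (void *arg)
{
  barrier ();
  printf ("thread %ld passed the barrier\n", (long) arg);
  return NULL;
}

int main (void)
{
  pthread_t t[NTHREADS];
  for (long i = 0; i < NTHREADS; i++)
    pthread_create (&t[i], NULL, worker, (void *) i);
  for (int i = 0; i < NTHREADS; i++)
    pthread_join (t[i], NULL);
  return 0;
}
}}}

Compile with `-pthread`; every thread spins until the last arrival makes the
loop condition false.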

*2 points*

4.
  Suggest a suitable way to extend the (properly synchronized) code from question
  2 to avoid busy waiting. Show the resulting pseudocode.

{{{c
static volatile int counter = 0;  // shared variable
static mutex lock;                // shared mutex
semaphore bar = 0;                // counting semaphore, initially 0

void barrier( void )
{
  lock(lock);
  counter = counter + 1;
  int arrived = counter;          // remember the value read under the lock
  unlock(lock);
  if (arrived == get_nthreads()) {
    // we are the last thread: wake every waiting thread
    for ( i ∈ [0, get_nthreads() - 1) ) {
      sema_up (bar);
    }
  } else {
    // not the last thread: block instead of busy waiting
    sema_down (bar);
  }
  return;
}
}}}
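
A runnable POSIX sketch of the same structure, with `pthread_mutex_t` as the
lock, `sem_t` as the semaphore and `NTHREADS` standing in for `get_nthreads()`
(again stand-ins, not the exam's primitives):

{{{c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NTHREADS 4                 /* stand-in for get_nthreads() */

static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t bar;                  /* initialized to 0 in main() */

static void barrier (void)
{
  pthread_mutex_lock (&lock);
  int arrived = ++counter;         /* read the count while holding the lock */
  pthread_mutex_unlock (&lock);

  if (arrived == NTHREADS)
    for (int i = 0; i < NTHREADS - 1; i++)
      sem_post (&bar);             /* the last thread wakes all waiting threads */
  else
    sem_wait (&bar);               /* everyone else blocks, no busy waiting */
}

static void *worker (void *arg)
{
  (void) arg;
  barrier ();
  return NULL;
}

int main (void)
{
  sem_init (&bar, 0, 0);
  pthread_t t[NTHREADS];
  for (int i = 0; i < NTHREADS; i++)
    pthread_create (&t[i], NULL, worker, NULL);
  for (int i = 0; i < NTHREADS; i++)
    pthread_join (t[i], NULL);
  puts ("all threads passed the barrier");
  return 0;
}
}}}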

*1 point*

5.
  Consider the following (Unix) C program.

  {{{c
  #include <stdio.h>
  #include <sys/types.h> 
  #include <unistd.h> 
  int main() { 
    pid_t pid1 = fork();
    if (pid1 == 0) {  
      printf("A"); 
    }
    printf("A"); 
  } 
  }}}

  How many times will the letter A be printed to the standard output when
  executing this program?

3. The child executes both `printf` calls (two As), while the parent only
executes the second one, giving three As in total.

*1 point*

6.

  Every process is associated with a number of areas in memory used to store the
  program code, variables, etc. For each memory item below connect it with
  correct allocation option. Note that options can be used more than once
  and not all options must be chosen.

| Memory item                                                       | Allocation                     |
|-------------------------------------------------------------------|--------------------------------|
| The page table of a process                                       | __allocated in kernel memory__ |
| The memory used to store the content of a function parameter      | stack                          |
| Memory allocated using malloc                                     | heap                           |
| The compiled machine code of a program                            | text section                   |
| A global (static) variable                                        | data section                   |
| The memory used to store a local variable declared in a function  | stack                          |

*3 points*

7.

  Give an example of a situation (table with jobs) where the SJF
  algorithm provides better performance as measured by average
  turnaround time compared to FCFS. There must be at least 4 jobs in the
  table and your example must not be copied from anywhere. Motivate by
  showing the average turnaround time for both algorithms.

Please view answer with a monospaced font.

| P | burst time | arrival time |
|---|------------|--------------|
| 1 | 1000       | 0            |
| 2 | 1          | 1            |
| 3 | 1          | 2            |
| 4 | 1          | 3            |

*First come, first served* would spend the first 1000 time units on
process 1 and then run processes 2, 3 and 4 after it, giving the
following waiting times:

| P | wait time |
|---|-----------|
| 1 | 1000      |
| 2 | 1000      |
| 3 | 1000      |
| 4 | 1000      |
(Forgive possible off-by-one errors from the staggered arrival times.)

That is an average of 1000 time units.

*Shortest job first* (with preemption) would instead make the first process
wait for all the shorter processes, giving the following waiting times:

| P | wait time |
|---|-----------|
| 1 | 1003      |
| 2 | 1         |
| 3 | 1         |
| 4 | 1         |

That is an average of (1003 + 1 + 1 + 1) / 4 ≈ 250 time units, significantly
lower than 1000.

*0.5 points*
    *Comment*: the idea is right, but this calculates the waiting time instead
    of the turnaround time


8.

Banker's algorithm is a deadlock [avoidance] algorithm. Freedom from deadlocks
is a [necessary] condition for ensuring progress. The algorithm guarantees that
only so-called [safe] states will be reached by checking every resource
allocation request. If a resource allocation would lead to an undesired state,
the request is [rejected].

*1.5 points*


== 9. Explain how paging supports sharing of memory between processes. ==

Common code (such as shared libraries) can be loaded once into a set of page
frames and then simply mapped into the page table of every process that needs
it. Since the pages are mapped read-only, this is indistinguishable from having
the library copied into memory for each process, except for the lower memory
usage.

Memory pages can also be mapped into multiple processes explicitly (e.g. with
mmap) and then used as a shared data area, as sketched below.
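
A minimal sketch of that second case on Linux (assuming `MAP_ANONYMOUS`; not
part of the course material): a shared page is mapped before `fork()`, so
parent and child end up with the same physical frame in their page tables.

{{{c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main (void)
{
  /* One shared, anonymous page, writable by both processes. */
  int *shared = mmap (NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (shared == MAP_FAILED) { perror ("mmap"); return 1; }

  *shared = 0;
  if (fork () == 0)
    {
      *shared = 42;               /* the child writes into the shared page */
      _exit (0);
    }
  wait (NULL);
  printf ("%d\n", *shared);       /* the parent reads 42: same frame */
  munmap (shared, sizeof *shared);
  return 0;
}
}}}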

*1 point*

10.
  Explain why page faults occur, under what circumstances and what happens
  after. Describe the set of events step by step, considering also the
  possibility of there being no free frames. For each step describe where the
  logic that executes this step is located (e.g., OS kernel, MMU, program
  code).

A page fault occurs when a process attempts to access memory in a currently
unmapped page.

*1 point*
    *Comment*: it is right, but more information is missing on why and when
    page faults occur. The second part (the handling of page faults) is
    missing completely.