Mercurial repository: VMS > VMS_Implementations > Vthread_impls > Vthread_MC_shared_impl

changeset 4:cfd2f7e54129
new sequential version

author      Merten Sach <msach@mailbox.tu-berlin.de>
date        Thu, 02 Jun 2011 14:47:50 +0200
parents     505d3c674ce8
children    f36e9ab2e030
files       DESIGN_NOTES__VPThread_lib.txt VPThread_lib.c
diffstat    2 files changed, 97 insertions(+), 91 deletions(-)

line diff
--- a/DESIGN_NOTES__VPThread_lib.txt	Sat Nov 20 08:39:58 2010 +0100
+++ b/DESIGN_NOTES__VPThread_lib.txt	Thu Jun 02 14:47:50 2011 +0200
@@ -1,82 +1,82 @@
[The hunk rewrites every line of the file, but the new text is identical to the old, so the content appears once below.]

Implement VPThread this way:

We implemented a subset of PThreads functionality, called VMSPThd, that includes mutex_lock, mutex_unlock, cond_wait, and cond_signal, which we name VMSPThd__mutex_lock and so forth. All VMSPThd functions take a reference to the AppVP that is animating the function call, in addition to any other parameters.

A mutex variable is an integer, returned by VMSPThd__mutex_create(), which the request handler uses as a key to look up an entry in a hash table that lives in the SemanticEnv. Such an entry has a field holding a reference to the AppVP that currently owns the lock, and a queue of AppVPs waiting to acquire the lock.

Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a request. Recall that every request send suspends the AppVP that is animating the library call generating the request; in this case, the AppVP animating VMSPThd__mutex_lock() is suspended. The request includes a reference to that animating AppVP and the mutex integer value. When the request reaches the request handler, the mutex integer is used as the key to look up the hash entry. If the owner field is null (or holds the same AppVP as the request), the AppVP in the request is placed into the owner field, and that AppVP is queued to be scheduled for re-animation. However, if a different AppVP is listed in the owner field, then the AppVP in the request is added to the queue of those trying to acquire. Notice that this is a purely sequential algorithm to which systematic reasoning can be applied.
VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the request handler to queue for re-animation the AppVP that animated the call. The handler also pops the queue of AppVPs waiting to acquire the lock, writes the AppVP that comes out as the new owner of the lock, and queues that AppVP for re-animation (unless the popped value is null, in which case the current owner is simply set to null).

Implementing condition variables takes a similar approach, in that VMSPThd__init_cond() returns an integer that is then used to look up an entry in a hash table, where the entry contains a queue of AppVPs waiting on the condition variable. VMSPThd__cond_wait() generates a request that pushes the AppVP onto the queue, while VMSPThd__cond_signal() takes a wait request off the queue.

Notice that this is again a purely sequential algorithm, and it sidesteps issues such as ``simultaneous'' wait and signal requests -- the wait and the signal are serialized automatically, even though they take place at the same instant of program virtual time.

It is the fact of having a program virtual time that allows ``virtually simultaneous'' actions to be handled <em|outside> of the virtual time. That ability to escape outside of the virtual time is what enables a <em|sequential> algorithm to handle the simultaneity that makes implementing locks in physical time so intricately tricky <inactive|<cite|LamportLockImpl>> <inactive|<cite|DijkstraLockPaper>> <inactive|<cite|LamportRelativisticTimePaper>>.

What's nice about this approach is that the design and implementation are simple and straightforward. It took just X days to design, implement, and debug, and it is in a form that should be amenable to a proof of freedom from race conditions, given a correct implementation of VMS. The hash-table-based approach also makes it reasonably high performance, with (essentially) no slowdown when the number of locks or the number of AppVPs grows large.

===========================
Behavior:
Condition variables are half of a two-piece mechanism; the other half is a mutex. Every cond var owns a mutex -- the two intrinsically work together, as a pair. The mutex must only be used with the condition var, never on its own in other ways.

cond_wait is called with a cond-var and its mutex. The animating processor must have acquired the mutex before calling cond_wait. The call adds the animating processor to the queue associated with the cond variable and then calls mutex_unlock on the mutex.

cond_signal can only be called after acquiring the cond var's mutex, and it is called with the cond-var. The call takes the next processor from the condition-var's wait queue and transfers it to the waiting-for-lock queue of the cond-var's mutex. The processor that called cond_signal next has to perform a mutex_unlock on the cond-var's mutex -- that, finally, lets the waiting processor acquire the mutex and proceed.
--- a/VPThread_lib.c	Sat Nov 20 08:39:58 2010 +0100
+++ b/VPThread_lib.c	Thu Jun 02 14:47:50 2011 +0200
@@ -156,19 +156,25 @@
 VPThread__init()
  {
   VMS__init();
+  //masterEnv, a global var, now is partially set up by init_VMS
+
+  //Moved here from VMS.c because this is not parallel construct independent
+  MakeTheMeasHists
+
+  VPThread__init_Helper();
+ }
+
+#ifdef SEQUENTIAL
+void
+VPThread__init_Seq()
+ {
+  VMS__init_Seq();
+  flushRegisters();
   //masterEnv, a global var, now is partially set up by init_VMS
 
   VPThread__init_Helper();
  }
-
-void
-VPThread__init_Seq()
- {
-  VMS__init_Seq();
-  //masterEnv, a global var, now is partially set up by init_VMS
-
-  VPThread__init_Helper();
- }
+#endif
 
 void
 VPThread__init_Helper()
