Mercurial > VMS > VMS_Implementations > Vthread_impls > Vthread_MC_shared_impl
changeset 1:1d3157ac56c4
Updated to Nov 8 version of VMS -- added singleton, trans, etc
| author | SeanHalle |
|---|---|
| date | Thu, 11 Nov 2010 04:29:10 -0800 |
| parents | 4aca264971b5 |
| children | e960a8d18f7c |
| files | DESIGN_NOTES__VPThread__lib.txt DESIGN_NOTES__VPThread_lib.txt VPThread.h VPThread_PluginFns.c VPThread_Request_Handlers.c VPThread_Request_Handlers.h VPThread__PluginFns.c VPThread__Request_Handlers.c VPThread__Request_Handlers.h VPThread__lib.c VPThread_lib.c |
| diffstat | 11 files changed, 1174 insertions(+), 857 deletions(-) [+] |
line diff
--- a/DESIGN_NOTES__VPThread__lib.txt	Fri Sep 17 11:34:02 2010 -0700
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,82 +0,0 @@
-
-Implement VPThread this way:
-
-We implemented a subset of PThreads functionality, called VMSPThd, that
-includes: mutex_lock, mutex_unlock, cond_wait, and cond_signal, which we name
-VMSPThd__mutex_lock and so forth.  All VMSPThd functions take a reference
-to the AppVP that is animating the function call, in addition to any other
-parameters.
-
-A mutex variable is an integer, returned by VMSPThd__mutex_create(), which is
-used inside the request handler as a key to look up an entry in a hash table
-that lives in the SemanticEnv.  Such an entry has a field holding a
-reference to the AppVP that currently owns the lock, and a queue of AppVPs
-waiting to acquire the lock.
-
-Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a
-request.  Recall that all request sends cause the suspension of the AppVP
-that is animating the library call that generates the request; in this case
-the AppVP animating VMSPThd__mutex_lock() is suspended.  The request
-includes a reference to that animating AppVP, and the mutex integer value.
-When the request reaches the request handler, the mutex integer is used as
-the key to look up the hash entry. If the owner field is null (or the same
-as the AppVP in the request), the AppVP in the request is placed into the
-owner field, and that AppVP is queued to be scheduled for re-animation.
-However, if a different AppVP is listed in the owner field, then the AppVP
-in the request is added to the queue of those trying to acquire.  Notice
-that this is a purely sequential algorithm that systematic reasoning can be
-applied to.
-
-VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the
-request handler to queue for re-animation the AppVP that animated the call.
-It also pops the queue of AppVPs waiting to acquire the lock, writes
-the AppVP that comes out as the current owner of the lock, and queues that
-AppVP for re-animation (unless the popped value is null, in which case the
-current owner is just set to null).
-
-Implementing condition variables takes a similar approach, in that
-VMSPThd__init_cond() returns an integer that is then used to look up an entry
-in a hash table, where the entry contains a queue of AppVPs waiting on the
-condition variable.  VMSPThd__cond_wait() generates a request that pushes
-the AppVP into the queue, while VMSPThd__cond_signal() takes a wait request
-from the queue.
-
-Notice that this is again a purely sequential algorithm, and sidesteps issues
-such as ``simultaneous'' wait and signal requests -- the wait and signal get
-serialized automatically, even though they take place at the same instant of
-program virtual time.
-
-It is the fact of having a program virtual time that allows ``virtually
-simultaneous'' actions to be handled <em|outside> of the virtual time.  That
-ability to escape outside of the virtual time is what enables a
-<em|sequential> algorithm to handle the simultaneity that is at the heart of
-making the implementation of locks in physical time so intricately tricky
-<inactive|<cite|LamportLockImpl>> <inactive|<cite|DijkstraLockPaper>>
-<inactive|<cite|LamportRelativisticTimePaper>>.
-
-What's nice about this approach is that the design and implementation are
-simple and straightforward.  It took just X days to design, implement, and
-debug, and it is in a form that should be amenable to proof of freedom from
-race conditions, given a correct implementation of VMS.  The hash-table based
-approach also makes it reasonably high performance, with (essentially) no
-slowdown when the number of locks or number of AppVPs grows large.
-
-===========================
-Behavior:
-Cond variables are half of a two-piece mechanism. The other half is a mutex.
- Every cond var owns a mutex -- the two intrinsically work
- together, as a pair. The mutex must only be used with the condition var
- and not used on its own in other ways.
-
-cond_wait is called with a cond-var and its mutex.
-The animating processor must have acquired the mutex before calling cond_wait.
-The call adds the animating processor to the queue associated with the cond
-variable and then calls mutex_unlock on the mutex.
-
-cond_signal can only be called after acquiring the cond var's mutex. It is
-called with the cond-var.
- The call takes the next processor from the condition-var's wait queue and
- transfers it to the waiting-for-lock queue of the cond-var's mutex.
-The processor that called cond_signal next has to perform a mutex_unlock
- on the cond-var's mutex -- that, finally, lets the waiting processor acquire
- the mutex and proceed.
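The lock algorithm described above -- an owner field plus a FIFO of waiting AppVPs, manipulated only by the sequential request handler -- can be sketched compactly. All names below are illustrative stand-ins, not the actual VMS API; the real implementation lives in VPThread__Request_Handlers.c and uses VMS private queues:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_WAITERS 8

typedef struct { int id; } AppVP;

typedef struct
  {
    AppVP *owner;                 /* AppVP currently holding the lock       */
    AppVP *waiters[MAX_WAITERS];  /* FIFO of AppVPs blocked on the lock     */
    int    head, tail;            /* ring-buffer indices for the FIFO       */
  }
MutexEntry;

/* Lock request: returns 1 if vp acquired the lock (so the handler would
 * queue it for re-animation), 0 if it was parked in the waiting queue. */
int handle_lock_request( MutexEntry *m, AppVP *vp )
  {
    if( m->owner == NULL || m->owner == vp )
      { m->owner = vp;  return 1; }
    m->waiters[ m->tail++ % MAX_WAITERS ] = vp;
    return 0;
  }

/* Unlock request: pops the waiting queue, makes the popped AppVP the new
 * owner, and returns it (to be re-animated); NULL if no one was waiting. */
AppVP *handle_unlock_request( MutexEntry *m )
  {
    if( m->head == m->tail ) { m->owner = NULL; return NULL; }
    m->owner = m->waiters[ m->head++ % MAX_WAITERS ];
    return m->owner;
  }
```

Because only the single master request handler ever touches `MutexEntry`, no physical-time synchronization appears anywhere in the sketch -- which is exactly the point the notes make about escaping outside of program virtual time.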
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DESIGN_NOTES__VPThread_lib.txt	Thu Nov 11 04:29:10 2010 -0800
@@ -0,0 +1,82 @@
[contents identical to the removed DESIGN_NOTES__VPThread__lib.txt above --
the file was re-added under the new name]
--- a/VPThread.h	Fri Sep 17 11:34:02 2010 -0700
+++ b/VPThread.h	Thu Nov 11 04:29:10 2010 -0800
@@ -14,14 +14,21 @@
 #include "VMS/DynArray/DynArray.h"


+/*This header defines everything specific to the VPThread semantic plug-in
+ */
+
+
 //===========================================================================
 #define INIT_NUM_MUTEX 10000
 #define INIT_NUM_COND  10000
+
+#define NUM_STRUCS_IN_SEM_ENV 1000
 //===========================================================================

-/*This header defines everything specific to the VPThread semantic plug-in
- */
-typedef struct _VPThreadSemReq VPThreadSemReq;
+//===========================================================================
+typedef struct _VPThreadSemReq VPThdSemReq;
+typedef void (*PtrToAtomicFn ) ( void * ); //executed atomically in master
+//===========================================================================


 /*Semantic layer-specific data sent inside a request from lib called in app
@@ -35,7 +42,13 @@
    make_cond,
    cond_wait,
    cond_signal,
-   make_procr
+   make_procr,
+   malloc_req,
+   free_req,
+   singleton,
+   atomic,
+   trans_start,
+   trans_end
 };

 struct _VPThreadSemReq
@@ -43,28 +56,55 @@
    VirtProcr *requestingPr;
    int32 mutexIdx;
    int32 condIdx;
+
    void *initData;
    VirtProcrFnPtr fnPtr;
+   int32 coreToScheduleOnto;
+
+   int32 sizeToMalloc;
+   void *ptrToFree;
+
+   int32 singletonID;
+   void *endJumpPt;
+
+   PtrToAtomicFn fnToExecInMaster;
+   void *dataForFn;
+
+   int32 transID;
 }
 /* VPThreadSemReq */;

+
+typedef struct
+ {
+   VirtProcr *VPCurrentlyExecuting;
+   PrivQueueStruc *waitingVPQ;
+ }
+VPThdTrans;
+
+
 typedef struct
  {
    //Standard stuff will be in most every semantic env
-   PrivQueueStruc **readyVPQs;
-   int32 numVirtPr;
-   int32 nextCoreToGetNewPr;
+   PrivQueueStruc **readyVPQs;
+   int32 numVirtPr;
+   int32 nextCoreToGetNewPr;
+   int32 primitiveStartTime;

    //Specific to this semantic layer
-   int32 currMutexIdx;
-   DynArray32 *mutexDynArray;
-
-   int32 currCondIdx;
-   DynArray32 *condDynArray;
+   VPThdMutex **mutexDynArray;
+   PrivDynArrayInfo *mutexDynArrayInfo;

-   void *applicationGlobals;
+   VPThdCond **condDynArray;
+   PrivDynArrayInfo *condDynArrayInfo;
+
+   void *applicationGlobals;
+
+   //fix limit on num with dynArray
+   int32 singletonHasBeenExecutedFlags[NUM_STRUCS_IN_SEM_ENV];
+   VPThdTrans transactionStrucs[NUM_STRUCS_IN_SEM_ENV];
 }
-VPThreadSemEnv;
+VPThdSemEnv;


 typedef struct
@@ -73,21 +113,36 @@
    VirtProcr *holderOfLock;
    PrivQueueStruc *waitingQueue;
 }
-VPTMutex;
+VPThdMutex;


 typedef struct
  {
    int32 condIdx;
    PrivQueueStruc *waitingQueue;
-   VPTMutex *partnerMutex;
+   VPThdMutex *partnerMutex;
 }
-VPTCond;
+VPThdCond;
+
+typedef struct _TransListElem TransListElem;
+struct _TransListElem
+ {
+   int32 transID;
+   TransListElem *nextTrans;
+ };
+//TransListElem
+
+typedef struct
+ {
+   int32 highestTransEntered;
+   TransListElem *lastTransEntered;
+ }
+VPThdSemData;


 //===========================================================================

-void
+inline void
 VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fn, void *initData );

 //=======================
@@ -96,50 +151,54 @@
 VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData,
                          VirtProcr *creatingPr );

-void
+inline VirtProcr *
+VPThread__create_thread_with_affinity( VirtProcrFnPtr fnPtr, void *initData,
+                          VirtProcr *creatingPr, int32 coreToScheduleOnto );
+
+inline void
 VPThread__dissipate_thread( VirtProcr *procrToDissipate );

 //=======================
-void
+inline void
 VPThread__set_globals_to( void *globals );

-void *
+inline void *
 VPThread__give_globals();

 //=======================
-int32
+inline int32
 VPThread__make_mutex( VirtProcr *animPr );

-void
+inline void
 VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr );

-void
+inline void
 VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr );


 //=======================
-int32
+inline int32
 VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr);

-void
+inline void
 VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr);

-void *
+inline void *
 VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr );




 //========================= Internal use only =============================
-void
+inline void
 VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv );

-VirtProcr *
+inline VirtProcr *
 VPThread__schedule_virt_procr( void *_semEnv, int coreNum );

 //=======================
-void
-VPThread__free_semantic_request( VPThreadSemReq *semReq );
+inline void
+VPThread__free_semantic_request( VPThdSemReq *semReq );

 //=======================
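Among the request types added in this changeset, `singleton` is the simplest: the master keeps a per-ID flag, and either lets the requesting VP run the singleton body or (via `endJumpPt`) jumps it past the end label. A minimal sketch of just that decision, with illustrative names (the real handler additionally sets `requestingPr->nextInstrPt` and resumes the VP):

```c
#include <assert.h>

#define NUM_SINGLETON_IDS 1000        /* mirrors NUM_STRUCS_IN_SEM_ENV */

static int singletonHasRunFlags[NUM_SINGLETON_IDS];  /* zero-initialized */

/* Returns 1 if the caller should execute the singleton body (first arrival),
 * 0 if the body has already run and the caller should skip to the end. */
int singleton_should_run( int singletonID )
  {
    if( singletonHasRunFlags[ singletonID ] )
       return 0;                            /* already executed -- skip body */
    singletonHasRunFlags[ singletonID ] = 1;  /* mark as executed            */
    return 1;
  }
```

Because the check-and-set happens inside the sequential master loop, no atomic instructions are needed to make the run-once guarantee hold across cores.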
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/VPThread_PluginFns.c	Thu Nov 11 04:29:10 2010 -0800
@@ -0,0 +1,203 @@
+/*
+ * Copyright 2010 OpenSourceCodeStewardshipFoundation
+ *
+ * Licensed under BSD
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <malloc.h>
+
+#include "VMS/Queue_impl/PrivateQueue.h"
+#include "VPThread.h"
+#include "VPThread_Request_Handlers.h"
+
+//=========================== Local Fn Prototypes ===========================
+void inline
+resume_procr( VirtProcr *procr, VPThdSemEnv *semEnv );
+
+void inline
+handleSemReq( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv );
+
+void
+handleDissipate( VirtProcr *requestingPr, VPThdSemEnv *semEnv );
+
+void
+handleCreate( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv );
+
+
+//============================== Scheduler ==================================
+//
+/*For VPThread, scheduling a slave simply takes the next work-unit off the
+ * ready-to-go work-unit queue and assigns it to the slaveToSched.
+ *If the ready-to-go work-unit queue is empty, then there is nothing to
+ * schedule to the slave -- return NULL to let the Master loop know that
+ * scheduling the slave failed.
+ */
+VirtProcr *
+VPThread__schedule_virt_procr( void *_semEnv, int coreNum )
+ { VirtProcr *schedPr;
+   VPThdSemEnv *semEnv;
+
+   semEnv = (VPThdSemEnv *)_semEnv;
+
+   schedPr = readPrivQ( semEnv->readyVPQs[coreNum] );
+   //Note, using a non-blocking queue -- it returns NULL if queue empty
+
+   return( schedPr );
+ }
+
+
+
+//=========================== Request Handler =============================
+//
+/*Will get requests to send, to receive, and to create new processors.
+ * Upon send, check the hash to see if a receive is waiting.
+ * Upon receive, check hash to see if a send has already happened.
+ * When the other is not there, put this one in. When the other is there,
+ * the comm. completes, which means the receiver P gets scheduled and
+ * picks up right after the receive request. So make the work-unit
+ * and put it into the queue of work-units ready to go.
+ *The other request is to create a new Processor, with the function to run
+ * in the Processor, and initial data.
+ */
+void
+VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv )
+ { VPThdSemEnv *semEnv;
+   VMSReqst *req;
+
+   semEnv = (VPThdSemEnv *)_semEnv;
+
+   req = VMS__take_next_request_out_of( requestingPr );
+
+   while( req != NULL )
+    {
+      switch( req->reqType )
+       { case semantic:    handleSemReq(     req, requestingPr, semEnv);
+            break;
+         case createReq:   handleCreate(     req, requestingPr, semEnv);
+            break;
+         case dissipate:   handleDissipate(       requestingPr, semEnv);
+            break;
+         case VMSSemantic: VMS__handle_VMSSemReq( req, requestingPr, semEnv,
+                                                  &resume_procr);
+            break;
+         default:
+            break;
+       }
+
+      req = VMS__take_next_request_out_of( requestingPr );
+    } //while( req != NULL )
+ }
+
+
+void inline
+handleSemReq( VMSReqst *req, VirtProcr *reqPr, VPThdSemEnv *semEnv )
+ { VPThdSemReq *semReq;
+
+   semReq = VMS__take_sem_reqst_from(req);
+   if( semReq == NULL ) return;
+   switch( semReq->reqType )
+    {
+      case make_mutex:   handleMakeMutex(  semReq, semEnv);
+         break;
+      case mutex_lock:   handleMutexLock(  semReq, semEnv);
+         break;
+      case mutex_unlock: handleMutexUnlock(semReq, semEnv);
+         break;
+      case make_cond:    handleMakeCond(   semReq, semEnv);
+         break;
+      case cond_wait:    handleCondWait(   semReq, semEnv);
+         break;
+      case cond_signal:  handleCondSignal( semReq, semEnv);
+         break;
+    }
+ }
+
+//=========================== VMS Request Handlers ===========================
+//
+void
+handleDissipate( VirtProcr *requestingPr, VPThdSemEnv *semEnv )
+ {
+   //free any semantic data allocated to the virt procr
+   VMS__free( requestingPr->semanticData );
+
+   //Now, call VMS to free_all AppVP state -- stack and so on
+   VMS__dissipate_procr( requestingPr );
+
+   semEnv->numVirtPr -= 1;
+   if( semEnv->numVirtPr == 0 )
+    { //no more work, so shutdown
+      VMS__shutdown();
+    }
+ }
+
+/*Re-use this in the entry-point fn
+ */
+inline VirtProcr *
+VPThread__create_procr_helper( VirtProcrFnPtr fnPtr, void *initData,
+                               VPThdSemEnv *semEnv, int32 coreToScheduleOnto )
+ { VirtProcr *newPr;
+   VPThdSemData *semData;
+
+   //This is running in master, so use internal version
+   newPr = VMS__create_procr( fnPtr, initData );
+
+   semEnv->numVirtPr += 1;
+
+   semData = VMS__malloc( sizeof(VPThdSemData) );
+   semData->highestTransEntered = -1;
+   semData->lastTransEntered = NULL;
+
+   newPr->semanticData = semData;
+
+   //=================== Assign new processor to a core =====================
+ #ifdef SEQUENTIAL
+   newPr->coreAnimatedBy = 0;
+
+ #else
+
+   if( coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES )
+    { //out-of-range, so round-robin assignment
+      newPr->coreAnimatedBy = semEnv->nextCoreToGetNewPr;
+
+      if( semEnv->nextCoreToGetNewPr >= NUM_CORES - 1 )
+         semEnv->nextCoreToGetNewPr = 0;
+      else
+         semEnv->nextCoreToGetNewPr += 1;
+    }
+   else //core num in-range, so use it
+    { newPr->coreAnimatedBy = coreToScheduleOnto;
+    }
+ #endif
+   //========================================================================
+
+   return newPr;
+ }
+
+void
+handleCreate( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv )
+ { VPThdSemReq *semReq;
+   VirtProcr *newPr;
+
+   semReq = VMS__take_sem_reqst_from( req );
+
+   newPr = VPThread__create_procr_helper( semReq->fnPtr, semReq->initData,
+                                          semEnv, semReq->coreToScheduleOnto);
+
+   //For VPThread, caller needs ptr to created processor returned to it
+   requestingPr->dataRetFromReq = newPr;
+
+   resume_procr( newPr, semEnv );
+   resume_procr( requestingPr, semEnv );
+ }
+
+
+//=========================== Helper ==============================
+void inline
+resume_procr( VirtProcr *procr, VPThdSemEnv *semEnv )
+ {
+   writePrivQ( procr, semEnv->readyVPQs[ procr->coreAnimatedBy ] );
+ }
+
+//===========================================================================
\ No newline at end of file
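The core-assignment policy in `VPThread__create_procr_helper` above -- honor the requested core if it is in range, otherwise fall back to round-robin -- can be isolated and exercised on its own. `EnvSketch` and `assign_core` are illustrative stand-ins, not VMS names:

```c
#include <assert.h>

#define NUM_CORES 4   /* stand-in for the build-time VMS core count */

typedef struct { int nextCoreToGetNewPr; } EnvSketch;

/* Mirrors the #else branch of the helper: out-of-range requests get the
 * next round-robin core and advance the counter (wrapping at NUM_CORES);
 * in-range requests are honored and leave the counter untouched. */
int assign_core( EnvSketch *env, int requestedCore )
  {
    if( requestedCore < 0 || requestedCore >= NUM_CORES )
     { int core = env->nextCoreToGetNewPr;
       if( env->nextCoreToGetNewPr >= NUM_CORES - 1 )
          env->nextCoreToGetNewPr = 0;
       else
          env->nextCoreToGetNewPr += 1;
       return core;
     }
    return requestedCore;   //core num in-range, so use it
  }
```

Treating an out-of-range core number as "no affinity requested" lets the plain `VPThread__create_thread` path simply pass -1 and share this helper with `VPThread__create_thread_with_affinity`.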
5.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 5.2 +++ b/VPThread_Request_Handlers.c Thu Nov 11 04:29:10 2010 -0800 5.3 @@ -0,0 +1,329 @@ 5.4 +/* 5.5 + * Copyright 2010 OpenSourceCodeStewardshipFoundation 5.6 + * 5.7 + * Licensed under BSD 5.8 + */ 5.9 + 5.10 +#include <stdio.h> 5.11 +#include <stdlib.h> 5.12 +#include <malloc.h> 5.13 + 5.14 +#include "VMS/VMS.h" 5.15 +#include "VMS/Queue_impl/PrivateQueue.h" 5.16 +#include "VMS/Hash_impl/PrivateHash.h" 5.17 +#include "VPThread.h" 5.18 + 5.19 + 5.20 + 5.21 +//=============================== Mutexes ================================= 5.22 +/*The semantic request has a mutexIdx value, which acts as index into array. 5.23 + */ 5.24 +inline void 5.25 +handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv) 5.26 + { VPThdMutex *newMutex; 5.27 + VirtProcr *requestingPr; 5.28 + 5.29 + requestingPr = semReq->requestingPr; 5.30 + newMutex = VMS__malloc( sizeof(VPThdMutex), requestingPr ); 5.31 + newMutex->waitingQueue = makePrivQ( requestingPr ); 5.32 + newMutex->holderOfLock = NULL; 5.33 + 5.34 + //The mutex struc contains an int that identifies it -- use that as 5.35 + // its index within the array of mutexes. Add the new mutex to array. 
5.36 + newMutex->mutexIdx = addToDynArray( newMutex, semEnv->mutexDynArrayInfo ); 5.37 + 5.38 + //Now communicate the mutex's identifying int back to requesting procr 5.39 + semReq->requestingPr->dataRetFromReq = newMutex->mutexIdx; 5.40 + 5.41 + //re-animate the requester 5.42 + resume_procr( requestingPr ); 5.43 + } 5.44 + 5.45 + 5.46 +inline void 5.47 +handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv) 5.48 + { VPThdMutex *mutex; 5.49 + 5.50 + //=================== Deterministic Replay ====================== 5.51 + #ifdef RECORD_DETERMINISTIC_REPLAY 5.52 + 5.53 + #endif 5.54 + //================================================================= 5.55 + //lookup mutex struc, using mutexIdx as index 5.56 + mutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; 5.57 + 5.58 + //see if mutex is free or not 5.59 + if( mutex->holderOfLock == NULL ) //none holding, give lock to requester 5.60 + { 5.61 + mutex->holderOfLock = semReq->requestingPr; 5.62 + 5.63 + //re-animate requester, now that it has the lock 5.64 + resume_procr( semReq->requestingPr ); 5.65 + } 5.66 + else //queue up requester to wait for release of lock 5.67 + { 5.68 + writePrivQ( semReq->requestingPr, mutex->waitingQueue ); 5.69 + } 5.70 + } 5.71 + 5.72 +/* 5.73 + */ 5.74 +inline void 5.75 +handleMutexUnlock( VPThdSemReq *semReq, VPThdSemEnv *semEnv) 5.76 + { VPThdMutex *mutex; 5.77 + 5.78 + //lookup mutex struc, using mutexIdx as index 5.79 + mutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; 5.80 + 5.81 + //set new holder of mutex-lock to be next in queue (NULL if empty) 5.82 + mutex->holderOfLock = readPrivQ( mutex->waitingQueue ); 5.83 + 5.84 + //if have new non-NULL holder, re-animate it 5.85 + if( mutex->holderOfLock != NULL ) 5.86 + { 5.87 + resume_procr( mutex->holderOfLock ); 5.88 + } 5.89 + 5.90 + //re-animate the releaser of the lock 5.91 + resume_procr( semReq->requestingPr ); 5.92 + } 5.93 + 5.94 +//=========================== Condition Vars ============================== 5.95 +/*The 
semantic request has the cond-var value and mutex value, which are the 5.96 + * indexes into the array. Not worrying about having too many mutexes or 5.97 + * cond vars created, so using array instead of hash table, for speed. 5.98 + */ 5.99 + 5.100 + 5.101 +/*Make cond has to be called with the mutex that the cond is paired to 5.102 + * Don't have to implement this way, but was confusing learning cond vars 5.103 + * until deduced that each cond var owns a mutex that is used only for 5.104 + * interacting with that cond var. So, make this pairing explicit. 5.105 + */ 5.106 +inline void 5.107 +handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv) 5.108 + { VPThdCond *newCond; 5.109 + VirtProcr *requestingPr; 5.110 + 5.111 + requestingPr = semReq->requestingPr; 5.112 + newCond = VMS__malloc( sizeof(VPThdCond), requestingPr ); 5.113 + newCond->partnerMutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; 5.114 + 5.115 + newCond->waitingQueue = makePrivQ(); 5.116 + 5.117 + //The cond struc contains an int that identifies it -- use that as 5.118 + // its index within the array of conds. Add the new cond to array. 5.119 + newCond->condIdx = addToDynArray( newCond, semEnv->condDynArrayInfo ); 5.120 + 5.121 + //Now communicate the cond's identifying int back to requesting procr 5.122 + semReq->requestingPr->dataRetFromReq = newCond->condIdx; 5.123 + 5.124 + //re-animate the requester 5.125 + resume_procr( requestingPr ); 5.126 + } 5.127 + 5.128 + 5.129 +/*Mutex has already been paired to the cond var, so don't need to send the 5.130 + * mutex, just the cond var. 
Don't have to do this, but want to bitch-slap
5.131 + * the designers of Posix standard ; )
5.132 + */
5.133 +inline void
5.134 +handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
5.135 + { VPThdCond  *cond;
5.136 +   VPThdMutex *mutex;
5.137 +   VirtProcr  *pr;
5.138 +
5.139 +      //get cond struc out of array of them that's in the sem env
5.140 +   cond = semEnv->condDynArray[ semReq->condIdx ];
5.141 +
5.142 +      //add requester to queue of wait-ers
5.143 +   writePrivQ( semReq->requestingPr, cond->waitingQueue );
5.144 +
5.145 +      //unlock mutex -- can't reuse above handler 'cause not queuing releaser
5.146 +   mutex = cond->partnerMutex;
5.147 +   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
5.148 +
5.149 +   if( mutex->holderOfLock != NULL )
5.150 +    {
5.151 +      resume_procr( mutex->holderOfLock, semEnv );
5.152 +    }
5.153 + }
5.154 +
5.155 +
5.156 +/*Note that this has to be implemented such that the waiter is guaranteed
5.157 + * to be the one that gets the lock
5.158 + */
5.159 +inline void
5.160 +handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
5.161 + { VPThdCond  *cond;
5.162 +   VPThdMutex *mutex;
5.163 +   VirtProcr  *waitingPr, *pr;
5.164 +
5.165 +      //get cond struc out of array of them that's in the sem env
5.166 +   cond = semEnv->condDynArray[ semReq->condIdx ];
5.167 +
5.168 +      //take next waiting procr out of queue
5.169 +   waitingPr = readPrivQ( cond->waitingQueue );
5.170 +
5.171 +      //transfer waiting procr to wait queue of mutex
5.172 +      // mutex is guaranteed to be held by signalling procr, so no check
5.173 +   mutex = cond->partnerMutex;
5.174 +   pushPrivQ( waitingPr, mutex->waitingQueue ); //is first out when read
5.175 +
5.176 +      //re-animate the signalling procr
5.177 +   resume_procr( semReq->requestingPr, semEnv );
5.178 + }
5.179 +
5.180 +
5.181 +
5.182 +//============================================================================
5.183 +//
5.184 +/*Perform the malloc in the masterVP, then return the pointer to the requester
5.185 + */
5.186 +inline void
5.187 +handleMalloc(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv)
5.188 + { void *ptr;
5.189 +
5.190 +   ptr = VMS__malloc( semReq->sizeToMalloc );
5.191 +   requestingPr->dataRetFromReq = ptr;
5.192 +   resume_procr( requestingPr, semEnv );
5.193 + }
5.194 +
5.195 +/*Free the pointer in the masterVP, then resume the requester
5.196 + */
5.197 +inline void
5.198 +handleFree( VPThdSemReq *semReq, VirtProcr *requestingPr, VPThdSemEnv *semEnv)
5.199 + {
5.200 +   VMS__free( semReq->ptrToFree );
5.201 +   resume_procr( requestingPr, semEnv );
5.202 + }
5.203 +
5.204 +
5.205 +//============================================================================
5.206 +//
5.207 +/*Uses ID as index into array of flags.  If flag already set, resumes from
5.208 + * end-label.  Else, sets flag and resumes normally.
5.209 + */
5.210 +inline void
5.211 +handleSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
5.212 +                 VPThdSemEnv *semEnv )
5.213 + {
5.214 +   if( semEnv->singletonHasBeenExecutedFlags[ semReq->singletonID ] )
5.215 +      requestingPr->nextInstrPt = semReq->endJumpPt;
5.216 +   else
5.217 +      semEnv->singletonHasBeenExecutedFlags[ semReq->singletonID ] = TRUE;
5.218 +
5.219 +   resume_procr( requestingPr, semEnv );
5.220 + }
5.221 +
5.222 +
5.223 +/*This executes the function in the masterVP: take the function
5.224 + * pointer out of the request and call it, then resume the VP.
5.225 + */
5.226 +inline void
5.227 +handleAtomic(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv)
5.228 + {
5.229 +   semReq->fnToExecInMaster( semReq->dataForFn );
5.230 +   resume_procr( requestingPr, semEnv );
5.231 + }
5.232 +
5.233 +/*First, it looks at the VP's semantic data, to see the highest transactionID
5.234 + * that VP
5.235 + * has already entered.  If the current ID is not larger, it throws an
5.236 + * exception stating a bug in the code.
5.237 + *Otherwise it puts the current ID
5.238 + * there, and adds the ID to a linked list of IDs entered -- the list is
5.239 + * used to check that exits are properly ordered.
5.240 + *Next it uses transactionID as index into an array of transaction
5.241 + * structures.
5.242 + *If the "VP_currently_executing" field is non-null, then put requesting VP
5.243 + * into queue in the struct.  (At some point a holder will request
5.244 + * end-transaction, which will take this VP from the queue and resume it.)
5.245 + *If NULL, then write the requesting VP into the field and resume.
5.246 + */
5.247 +inline void
5.248 +handleTransStart( VPThdSemReq *semReq, VirtProcr *requestingPr,
5.249 +                  VPThdSemEnv *semEnv )
5.250 + { VPThdSemData  *semData;
5.251 +   TransListElem *nextTransElem;
5.252 +
5.253 +      //check ordering of entering transactions is correct
5.254 +   semData = requestingPr->semanticData;
5.255 +   if( semData->highestTransEntered > semReq->transID )
5.256 +    {    //throw VMS exception, which shuts down VMS.
5.257 +      VMS__throw_exception( "transID smaller than prev", requestingPr, NULL);
5.258 +    }
5.259 +      //add this trans ID to the list of transactions entered -- checked when
5.260 +      // a transaction ends
5.261 +   semData->highestTransEntered = semReq->transID;
5.262 +   nextTransElem = VMS__malloc( sizeof(TransListElem) );
5.263 +   nextTransElem->transID   = semReq->transID;
5.264 +   nextTransElem->nextTrans = semData->lastTransEntered;
5.265 +   semData->lastTransEntered = nextTransElem;
5.266 +
5.267 +      //get the structure for this transaction ID
5.268 +   VPThdTrans *
5.269 +   transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
5.270 +
5.271 +   if( transStruc->VPCurrentlyExecuting == NULL )
5.272 +    {
5.273 +      transStruc->VPCurrentlyExecuting = requestingPr;
5.274 +      resume_procr( requestingPr, semEnv );
5.275 +    }
5.276 +   else
5.277 +    {    //note, might make future things cleaner if save request with VP and
5.278 +         // add this trans ID to the linked list when it gets out of the
5.279 +         // queue, but don't need that for now
5.280 +      writePrivQ( requestingPr, transStruc->waitingVPQ );
5.281 +    }
5.282 + }
5.283 +
5.284 +
5.285 +/*Use the trans ID to get the transaction structure from the array.
5.286 + *Look at VP_currently_executing to be sure it's the same as the requesting
5.287 + * VP.  If different, throw an exception, stating there's a bug in the code.
5.288 + *Next, take the first element off the list of entered transactions.
5.289 + * Check to be sure the ending transaction has the same ID as the next on
5.290 + * the list.  If not, it is incorrectly nested, so throw an exception.
5.291 + *
5.292 + *Next, read from the queue in the structure.
5.293 + *If it's empty, set the VP_currently_executing field to NULL and resume
5.294 + * the requesting VP.
5.295 + *If it gets something, set VP_currently_executing to the VP from the queue,
5.296 + * then resume both.
5.297 + */
5.298 +inline void
5.299 +handleTransEnd( VPThdSemReq *semReq, VirtProcr *requestingPr,
5.300 +                VPThdSemEnv *semEnv)
5.301 + { VPThdSemData  *semData;
5.302 +   VirtProcr     *waitingPr;
5.303 +   VPThdTrans    *transStruc;
5.304 +   TransListElem *lastTrans;
5.305 +
5.306 +   transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
5.307 +
5.308 +      //make sure transaction ended in same VP as started it.
5.309 +   if( transStruc->VPCurrentlyExecuting != requestingPr )
5.310 +    {
5.311 +      VMS__throw_exception( "trans ended in diff VP", requestingPr, NULL );
5.312 +    }
5.313 +
5.314 +      //make sure nesting is correct -- last ID entered should == this ID
5.315 +   semData = requestingPr->semanticData;
5.316 +   lastTrans = semData->lastTransEntered;
5.317 +   if( lastTrans->transID != semReq->transID )
5.318 +    {
5.319 +      VMS__throw_exception( "trans incorrectly nested", requestingPr, NULL );
5.320 +    }
5.321 +
5.322 +   semData->lastTransEntered = semData->lastTransEntered->nextTrans;
5.323 +
5.324 +
5.325 +   waitingPr = readPrivQ( transStruc->waitingVPQ );
5.326 +   transStruc->VPCurrentlyExecuting = waitingPr;
5.327 +
5.328 +   if( waitingPr != NULL )
5.329 +      resume_procr( waitingPr, semEnv );
5.330 +
5.331 +   resume_procr( requestingPr, semEnv );
5.332 + }
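As an aside, the nesting check that handleTransStart/handleTransEnd perform above can be exercised standalone. The sketch below is illustrative only: `TransElem`, `SemData`, `enterTrans`, and `endTrans` are stand-ins for the real `TransListElem`/`VPThdSemData` logic, with error returns in place of `VMS__throw_exception`. Each enter pushes a transID onto a per-VP list; each end must match the head of that list.

```c
#include <stdlib.h>

/* Hypothetical, simplified model of the per-VP transaction-nesting state */
typedef struct TransElem { int transID; struct TransElem *next; } TransElem;
typedef struct { int highestEntered; TransElem *last; } SemData;

/* mirrors handleTransStart's checks: returns 0 on success,
 * -1 when the new ID is smaller than one already entered */
int enterTrans( SemData *d, int transID )
 {
   if( d->highestEntered > transID ) return -1; //IDs must not decrease
   d->highestEntered = transID;
   TransElem *e = malloc( sizeof(TransElem) );
   e->transID = transID;
   e->next    = d->last;   //push onto list of entered transactions
   d->last    = e;
   return 0;
 }

/* mirrors handleTransEnd's check: returns 0 on success,
 * -1 when the ending ID is not the most recently entered one */
int endTrans( SemData *d, int transID )
 {
   TransElem *e = d->last;
   if( e == NULL || e->transID != transID ) return -1; //bad nesting
   d->last = e->next;      //pop the matched transaction
   free( e );
   return 0;
 }
```

Under this model, ending transaction 1 while 2 is still open fails, exactly the "trans incorrectly nested" case the handler guards against.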
6.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
6.2 +++ b/VPThread_Request_Handlers.h	Thu Nov 11 04:29:10 2010 -0800
6.3 @@ -0,0 +1,31 @@
6.4 +/*
6.5 + * Copyright 2009 OpenSourceStewardshipFoundation.org
6.6 + * Licensed under GNU General Public License version 2
6.7 + *
6.8 + * Author: seanhalle@yahoo.com
6.9 + *
6.10 + */
6.11 +
6.12 +#ifndef _VPThread_REQ_H
6.13 +#define _VPThread_REQ_H
6.14 +
6.15 +#include "VPThread.h"
6.16 +
6.17 +/*This header defines everything specific to the VPThread semantic plug-in
6.18 + */
6.19 +
6.20 +void
6.21 +handleMakeMutex(  VPThdSemReq *semReq, VPThdSemEnv *semEnv);
6.22 +void
6.23 +handleMutexLock(  VPThdSemReq *semReq, VPThdSemEnv *semEnv);
6.24 +void
6.25 +handleMutexUnlock(VPThdSemReq *semReq, VPThdSemEnv *semEnv);
6.26 +void
6.27 +handleMakeCond(   VPThdSemReq *semReq, VPThdSemEnv *semEnv);
6.28 +void
6.29 +handleCondWait(   VPThdSemReq *semReq, VPThdSemEnv *semEnv);
6.30 +void
6.31 +handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
6.32 +
6.33 +#endif /* _VPThread_REQ_H */
6.34 +
7.1 --- a/VPThread__PluginFns.c	Fri Sep 17 11:34:02 2010 -0700
7.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
7.3 @@ -1,110 +0,0 @@
7.4 -/*
7.5 - * Copyright 2010 OpenSourceCodeStewardshipFoundation
7.6 - *
7.7 - * Licensed under BSD
7.8 - */
7.9 -
7.10 -#include <stdio.h>
7.11 -#include <stdlib.h>
7.12 -#include <malloc.h>
7.13 -
7.14 -#include "VMS/Queue_impl/PrivateQueue.h"
7.15 -#include "VPThread.h"
7.16 -#include "VPThread__Request_Handlers.h"
7.17 -
7.18 -
7.19 -/*Will get requests to send, to receive, and to create new processors.
7.20 - * Upon send, check the hash to see if a receive is waiting.
7.21 - * Upon receive, check hash to see if a send has already happened.
7.22 - * When other is not there, put in.  When other is there, the comm.
7.23 - * completes, which means the receiver P gets scheduled and
7.24 - * picks up right after the receive request.  So make the work-unit
7.25 - * and put it into the queue of work-units ready to go.
7.26 - * Other request is create a new Processor, with the function to run in the
7.27 - * Processor, and initial data.
7.28 - */
7.29 -void
7.30 -VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv )
7.31 - { VPThreadSemEnv *semEnv;
7.32 -   VMSReqst       *req;
7.33 -   VPThreadSemReq *semReq;
7.34 -
7.35 -   semEnv = (VPThreadSemEnv *)_semEnv;
7.36 -
7.37 -   req = VMS__take_top_request_from( requestingPr );
7.38 -
7.39 -   while( req != NULL )
7.40 -    {
7.41 -      if( VMS__isSemanticReqst( req ) )
7.42 -       {
7.43 -         semReq = VMS__take_sem_reqst_from( req );
7.44 -         if( semReq == NULL ) goto DoneHandlingReqst;
7.45 -         switch( semReq->reqType )
7.46 -          {
7.47 -            case make_mutex:   handleMakeMutex(  semReq, semEnv);
7.48 -               break;
7.49 -            case mutex_lock:   handleMutexLock(  semReq, semEnv);
7.50 -               break;
7.51 -            case mutex_unlock: handleMutexUnlock(semReq, semEnv);
7.52 -               break;
7.53 -            case make_cond:    handleMakeCond(   semReq, semEnv);
7.54 -               break;
7.55 -            case cond_wait:    handleCondWait(   semReq, semEnv);
7.56 -               break;
7.57 -            case cond_signal:  handleCondSignal( semReq, semEnv);
7.58 -               //need? VPThread__free_semantic_request( semReq );
7.59 -               break;
7.60 -            case make_procr:   handleMakeProcr(  semReq, semEnv);
7.61 -               break;
7.62 -            //TODO: make sure to free the semantic request!
7.63 -          }
7.64 -         //NOTE: freeing semantic request data strucs handled inside these
7.65 -       }
7.66 -      else if( VMS__isDissipateReqst( req ) ) //Standard VMS request
7.67 -       { //Another standard VMS request that the plugin has to handle.
7.68 -         //This time, plugin has to free the semantic data it may have
7.69 -         // allocated into the virt procr -- and clear the AppVP out of
7.70 -         // any data structs the plug-in may have put it into, like hash
7.71 -         // tables.
7.72 -
7.73 -         //Now, call VMS to free all AppVP state -- stack and so on
7.74 -         VMS__handle_dissipate_reqst( requestingPr );
7.75 -
7.76 -         //Keep count of num AppVPs, so know when to shutdown
7.77 -         semEnv->numVirtPr -= 1;
7.78 -         if( semEnv->numVirtPr == 0 )
7.79 -          { //no more work, so shutdown
7.80 -            VMS__handle_shutdown_reqst( requestingPr );
7.81 -          }
7.82 -       }
7.83 -
7.84 -    DoneHandlingReqst:
7.85 -      //Here, free VMS's request structure, no matter what -- even though
7.86 -      // semantic request struc instances may still be around..
7.87 -      //This call frees VMS's portion, then returns the next request
7.88 -      req = VMS__free_top_and_give_next_request_from( requestingPr );
7.89 -    } //while( req != NULL )
7.90 - }
7.91 -
7.92 -//===========================================================================
7.93 -
7.94 -
7.95 -/*For VPThread, scheduling a slave simply takes the next work-unit off the
7.96 - * ready-to-go work-unit queue and assigns it to the slaveToSched.
7.97 - *If the ready-to-go work-unit queue is empty, then there is nothing to
7.98 - * schedule to the slave -- return NULL to let the Master loop know
7.99 - * scheduling that slave failed.
7.100 - */
7.101 -VirtProcr *
7.102 -VPThread__schedule_virt_procr( void *_semEnv, int coreNum )
7.103 - { VirtProcr      *schedPr;
7.104 -   VPThreadSemEnv *semEnv;
7.105 -
7.106 -   semEnv = (VPThreadSemEnv *)_semEnv;
7.107 -
7.108 -   schedPr = readPrivQ( semEnv->readyVPQs[coreNum] );
7.109 -   //Note, using a non-blocking queue -- it returns NULL if queue empty
7.110 -
7.111 -   return( schedPr );
7.112 - }
7.113 -
8.1 --- a/VPThread__Request_Handlers.c	Fri Sep 17 11:34:02 2010 -0700
8.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
8.3 @@ -1,222 +0,0 @@
8.4 -/*
8.5 - * Copyright 2010 OpenSourceCodeStewardshipFoundation
8.6 - *
8.7 - * Licensed under BSD
8.8 - */
8.9 -
8.10 -#include <stdio.h>
8.11 -#include <stdlib.h>
8.12 -#include <malloc.h>
8.13 -
8.14 -#include "VMS/VMS.h"
8.15 -#include "VMS/Queue_impl/PrivateQueue.h"
8.16 -#include "VMS/Hash_impl/PrivateHash.h"
8.17 -#include "VPThread.h"
8.18 -
8.19 -
8.20 -
8.21 -//=============================== Mutexes =================================
8.22 -/*The semantic request has a mutexIdx value, which acts as index into array.
8.23 - */
8.24 -void
8.25 -handleMakeMutex( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.26 - { VPTMutex  *newMutex;
8.27 -   VirtProcr *pr;
8.28 -   int32      mutexIdx;
8.29 -
8.30 -   newMutex = malloc( sizeof(VPTMutex) );
8.31 -   newMutex->waitingQueue = makePrivQ();
8.32 -   newMutex->holderOfLock = NULL;
8.33 -   newMutex->mutexIdx     = semEnv->currMutexIdx++;
8.34 -   mutexIdx = newMutex->mutexIdx;
8.35 -
8.36 -   //The mutex struc contains an int that identifies it -- use that as
8.37 -   // its index within the array of mutexes.  Add the new mutex to array.
8.38 -   makeArray32BigEnoughForIndex( semEnv->mutexDynArray, mutexIdx );
8.39 -   semEnv->mutexDynArray->array[ mutexIdx ] = newMutex;
8.40 -
8.41 -   //Now communicate the mutex's identifying int back to requesting procr
8.42 -   semReq->requestingPr->semanticData = newMutex->mutexIdx;
8.43 -
8.44 -   //re-animate the requester
8.45 -   pr = semReq->requestingPr;
8.46 -   writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.47 - }
8.48 -
8.49 -
8.50 -void
8.51 -handleMutexLock( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.52 - { VPTMutex  *mutex;
8.53 -   VirtProcr *pr;
8.54 -
8.55 -   //=================== Deterministic Replay ======================
8.56 -   #ifdef RECORD_DETERMINISTIC_REPLAY
8.57 -
8.58 -   #endif
8.59 -   //=================================================================
8.60 -   //lookup mutex struc, using mutexIdx as index
8.61 -   mutex = semEnv->mutexDynArray->array[ semReq->mutexIdx ];
8.62 -
8.63 -   //see if mutex is free or not
8.64 -   if( mutex->holderOfLock == NULL ) //none holding, give lock to requester
8.65 -    {
8.66 -      mutex->holderOfLock = semReq->requestingPr;
8.67 -
8.68 -      //re-animate requester, now that it has the lock
8.69 -      pr = semReq->requestingPr;
8.70 -      writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.71 -    }
8.72 -   else //queue up requester to wait for release of lock
8.73 -    {
8.74 -      writePrivQ( semReq->requestingPr, mutex->waitingQueue );
8.75 -    }
8.76 - }
8.77 -
8.78 -/*
8.79 - */
8.80 -void
8.81 -handleMutexUnlock( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.82 - { VPTMutex  *mutex;
8.83 -   VirtProcr *pr;
8.84 -
8.85 -   //lookup mutex struc, using mutexIdx as index
8.86 -   mutex = semEnv->mutexDynArray->array[ semReq->mutexIdx ];
8.87 -
8.88 -   //set new holder of mutex-lock to be next in queue (NULL if empty)
8.89 -   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
8.90 -
8.91 -   //if have new non-NULL holder, re-animate it
8.92 -   if( mutex->holderOfLock != NULL )
8.93 -    {
8.94 -      pr = mutex->holderOfLock;
8.95 -      writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.96 -    }
8.97 -
8.98 -   //re-animate the releaser of the lock
8.99 -   pr = semReq->requestingPr;
8.100 -   writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.101 - }
8.102 -
8.103 -//=========================== Condition Vars ==============================
8.104 -/*The semantic request has the cond-var value and mutex value, which are the
8.105 - * indexes into the array.  Not worrying about having too many mutexes or
8.106 - * cond vars created, so using array instead of hash table, for speed.
8.107 - */
8.108 -
8.109 -
8.110 -/*Make cond has to be called with the mutex that the cond is paired to.
8.111 - * Don't have to implement this way, but was confusing learning cond vars
8.112 - * until deduced that each cond var owns a mutex that is used only for
8.113 - * interacting with that cond var.  So, make this pairing explicit.
8.114 - */
8.115 -void
8.116 -handleMakeCond( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.117 - { VPTCond   *newCond;
8.118 -   VirtProcr *pr;
8.119 -   int32      condIdx;
8.120 -
8.121 -   newCond = malloc( sizeof(VPTCond) );
8.122 -   newCond->partnerMutex = semEnv->mutexDynArray->array[ semReq->mutexIdx ];
8.123 -
8.124 -   newCond->waitingQueue = makePrivQ();
8.125 -   newCond->condIdx      = semEnv->currCondIdx++;
8.126 -   condIdx = newCond->condIdx;
8.127 -
8.128 -   //The cond struc contains an int that identifies it -- use that as
8.129 -   // its index within the array of conds.  Add the new cond to array.
8.130 -   makeArray32BigEnoughForIndex( semEnv->condDynArray, condIdx );
8.131 -   semEnv->condDynArray->array[ condIdx ] = newCond;
8.132 -
8.133 -   //Now communicate the cond's identifying int back to requesting procr
8.134 -   semReq->requestingPr->semanticData = newCond->condIdx;
8.135 -
8.136 -   //re-animate the requester
8.137 -   pr = semReq->requestingPr;
8.138 -   writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.139 - }
8.140 -
8.141 -
8.142 -/*Mutex has already been paired to the cond var, so don't need to send the
8.143 - * mutex, just the cond var.  Don't have to do this, but want to bitch-slap
8.144 - * the designers of Posix standard ; )
8.145 - */
8.146 -void
8.147 -handleCondWait( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.148 - { VPTCond   *cond;
8.149 -   VPTMutex  *mutex;
8.150 -   VirtProcr *pr;
8.151 -
8.152 -   //get cond struc out of array of them that's in the sem env
8.153 -   cond = semEnv->condDynArray->array[ semReq->condIdx ];
8.154 -
8.155 -   //add requester to queue of wait-ers
8.156 -   writePrivQ( semReq->requestingPr, cond->waitingQueue );
8.157 -
8.158 -   //unlock mutex -- can't reuse above handler 'cause not queuing releaser
8.159 -   mutex = cond->partnerMutex;
8.160 -   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
8.161 -
8.162 -   if( mutex->holderOfLock != NULL )
8.163 -    {
8.164 -      pr = mutex->holderOfLock;
8.165 -      writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.166 -    }
8.167 - }
8.168 -
8.169 -
8.170 -/*Note that this has to be implemented such that the waiter is guaranteed
8.171 - * to be the one that gets the lock
8.172 - */
8.173 -void
8.174 -handleCondSignal( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.175 - { VPTCond   *cond;
8.176 -   VPTMutex  *mutex;
8.177 -   VirtProcr *waitingPr, *pr;
8.178 -
8.179 -   //get cond struc out of array of them that's in the sem env
8.180 -   cond = semEnv->condDynArray->array[ semReq->condIdx ];
8.181 -
8.182 -   //take next waiting procr out of queue
8.183 -   waitingPr = readPrivQ( cond->waitingQueue );
8.184 -
8.185 -   //transfer waiting procr to wait queue of mutex
8.186 -   // mutex is guaranteed to be held by signalling procr, so no check
8.187 -   mutex = cond->partnerMutex;
8.188 -   pushPrivQ( waitingPr, mutex->waitingQueue ); //is first out when read
8.189 -
8.190 -   //re-animate the signalling procr
8.191 -   pr = semReq->requestingPr;
8.192 -   writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.193 - }
8.194 -
8.195 -
8.196 -
8.197 -/*Creates a new virtual processor that runs the given function on the given
8.198 - * initial data.  The new processor is assigned to a core round-robin (or to
8.199 - * core 0 when compiled SEQUENTIAL), queued to be animated, and then the
8.200 - * requester is re-animated.
8.201 - */
8.202 -void
8.203 -handleMakeProcr( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv)
8.204 - { VirtProcr *newPr, *pr;
8.205 -
8.206 -   newPr = VMS__create_procr( semReq->fnPtr, semReq->initData );
8.207 -
8.208 -   semEnv->numVirtPr += 1;
8.209 -
8.210 -   //Assign new processor to next core in line & queue it up
8.211 -   #ifdef SEQUENTIAL
8.212 -   newPr->coreAnimatedBy = 0;
8.213 -   #else
8.214 -   newPr->coreAnimatedBy = semEnv->nextCoreToGetNewPr;
8.215 -   if( semEnv->nextCoreToGetNewPr >= NUM_CORES - 1 )
8.216 -        semEnv->nextCoreToGetNewPr = 0;
8.217 -   else
8.218 -        semEnv->nextCoreToGetNewPr += 1;
8.219 -   #endif
8.220 -   writePrivQ( newPr, semEnv->readyVPQs[newPr->coreAnimatedBy] );
8.221 -
8.222 -   //re-animate the requester
8.223 -   pr = semReq->requestingPr;
8.224 -   writePrivQ( pr, semEnv->readyVPQs[pr->coreAnimatedBy] );
8.225 - }
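The lock hand-off discipline in handleMutexLock/handleMutexUnlock above -- grant immediately when free, otherwise queue FIFO, and hand the lock to the head of the queue on unlock -- can be sketched standalone. This is an illustrative model only: `MiniMutex`, integer VP ids, and the array-backed queue stand in for the real `VPTMutex`, `VirtProcr`, and VMS's PrivateQueue.

```c
#define MAXQ 8

/* Hypothetical, simplified mutex: holder is a VP id, -1 when free */
typedef struct
 {
   int holder;
   int waiters[MAXQ];  //FIFO ring of waiting VP ids
   int head, tail;
 } MiniMutex;

void mini_init( MiniMutex *m ) { m->holder = -1; m->head = m->tail = 0; }

/* mirrors handleMutexLock: returns 1 if vp got the lock immediately,
 * 0 if it was queued to wait */
int mini_lock( MiniMutex *m, int vp )
 {
   if( m->holder == -1 ) { m->holder = vp; return 1; }
   m->waiters[ m->tail++ % MAXQ ] = vp;
   return 0;
 }

/* mirrors handleMutexUnlock: returns the VP that now holds the lock,
 * or -1 when no one was waiting */
int mini_unlock( MiniMutex *m )
 {
   if( m->head == m->tail ) { m->holder = -1; return -1; }
   m->holder = m->waiters[ m->head++ % MAXQ ];
   return m->holder;
 }
```

Note the same property as the handlers: because the master animates this logic sequentially, no atomic operations are needed; ownership transfers directly from releaser to the longest-waiting requester.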
9.1 --- a/VPThread__Request_Handlers.h	Fri Sep 17 11:34:02 2010 -0700
9.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
9.3 @@ -1,33 +0,0 @@
9.4 -/*
9.5 - * Copyright 2009 OpenSourceStewardshipFoundation.org
9.6 - * Licensed under GNU General Public License version 2
9.7 - *
9.8 - * Author: seanhalle@yahoo.com
9.9 - *
9.10 - */
9.11 -
9.12 -#ifndef _VPThread_REQ_H
9.13 -#define _VPThread_REQ_H
9.14 -
9.15 -#include "VPThread.h"
9.16 -
9.17 -/*This header defines everything specific to the VPThread semantic plug-in
9.18 - */
9.19 -
9.20 -void
9.21 -handleMakeMutex(  VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.22 -void
9.23 -handleMutexLock(  VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.24 -void
9.25 -handleMutexUnlock(VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.26 -void
9.27 -handleMakeCond(   VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.28 -void
9.29 -handleCondWait(   VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.30 -void
9.31 -handleCondSignal( VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.32 -void
9.33 -handleMakeProcr(  VPThreadSemReq *semReq, VPThreadSemEnv *semEnv);
9.34 -
9.35 -#endif /* _VPThread_REQ_H */
9.36 -
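Among the handlers declared above, handleCondSignal guarantees the signalled waiter gets the lock next by pushing it onto the *front* of the mutex's wait queue (the `pushPrivQ` call, versus `writePrivQ` which appends). A standalone sketch of that queue-jumping behavior, with a trivial shift-based queue standing in for VMS's PrivateQueue (all names here are hypothetical):

```c
#include <string.h>

#define CAP 8

/* Simplified queue of VP ids; items[0] is the front (next out) */
typedef struct { int items[CAP]; int n; } Q;

/* like readPrivQ: take from front, -1 when empty */
int q_read( Q *q )
 {
   if( q->n == 0 ) return -1;
   int v = q->items[0];
   memmove( q->items, q->items + 1, (--q->n) * sizeof(int) );
   return v;
 }

/* like writePrivQ: append at back (ordinary lock requesters) */
void q_write( Q *q, int v ) { q->items[ q->n++ ] = v; }

/* like pushPrivQ: insert at front -- the signalled waiter jumps the queue */
void q_push( Q *q, int v )
 {
   memmove( q->items + 1, q->items, q->n++ * sizeof(int) );
   q->items[0] = v;
 }
```

So after a signal, the next unlock hands the mutex to the woken waiter ahead of any VP that merely requested the lock, which is the guarantee the handler's comment calls out.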
10.1 --- a/VPThread__lib.c Fri Sep 17 11:34:02 2010 -0700 10.2 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 10.3 @@ -1,379 +0,0 @@ 10.4 -/* 10.5 - * Copyright 2010 OpenSourceCodeStewardshipFoundation 10.6 - * 10.7 - * Licensed under BSD 10.8 - */ 10.9 - 10.10 -#include <stdio.h> 10.11 -#include <stdlib.h> 10.12 -#include <malloc.h> 10.13 - 10.14 -#include "VMS/VMS.h" 10.15 -#include "VPThread.h" 10.16 -#include "VMS/Queue_impl/PrivateQueue.h" 10.17 -#include "VMS/Hash_impl/PrivateHash.h" 10.18 - 10.19 - 10.20 -//========================================================================== 10.21 - 10.22 -void 10.23 -VPThread__init(); 10.24 - 10.25 -void 10.26 -VPThread__init_Seq(); 10.27 - 10.28 -void 10.29 -VPThread__init_Helper(); 10.30 -//========================================================================== 10.31 - 10.32 - 10.33 -/*TODO: Q: dealing with library f()s and DKU vs WT vs FoR 10.34 - * (still want to do FoR, with time-lines as syntax, could be super cool) 10.35 - * A: thinking pin the coreLoops for all of BLIS -- let Master arbitrate 10.36 - * among library, DKU, WT, FoR -- all the patterns in terms of virtual 10.37 - * processors (or equivalently work-units), so Master picks which virt procr 10.38 - * from which portions of app (DKU, WT, FoR) onto which sched slots 10.39 - *Might even do hierarchy of masters -- group of sched slots for each core 10.40 - * has its own master, that keeps generated work local 10.41 - * single-reader-single-writer sync everywhere -- no atomic primitives 10.42 - * Might have the different schedulers talk to each other, to negotiate 10.43 - * larger-grain sharing of resources, according to predicted critical 10.44 - * path, and expansion of work 10.45 - */ 10.46 - 10.47 - 10.48 - 10.49 -//=========================================================================== 10.50 - 10.51 - 10.52 -/*These are the library functions *called in the application* 10.53 - * 10.54 - *There's a pattern for the outside sequential code to 
interact with the 10.55 - * VMS_HW code. 10.56 - *The VMS_HW system is inside a boundary.. every VPThread system is in its 10.57 - * own directory that contains the functions for each of the processor types. 10.58 - * One of the processor types is the "seed" processor that starts the 10.59 - * cascade of creating all the processors that do the work. 10.60 - *So, in the directory is a file called "EntryPoint.c" that contains the 10.61 - * function, named appropriately to the work performed, that the outside 10.62 - * sequential code calls. This function follows a pattern: 10.63 - *1) it calls VPThread__init() 10.64 - *2) it creates the initial data for the seed processor, which is passed 10.65 - * in to the function 10.66 - *3) it creates the seed VPThread processor, with the data to start it with. 10.67 - *4) it calls startVPThreadThenWaitUntilWorkDone 10.68 - *5) it gets the returnValue from the transfer struc and returns that 10.69 - * from the function 10.70 - * 10.71 - *For now, a new VPThread system has to be created via VPThread__init every 10.72 - * time an entry point function is called -- later, might add letting the 10.73 - * VPThread system be created once, and let all the entry points just reuse 10.74 - * it -- want to be as simple as possible now, and see by using what makes 10.75 - * sense for later.. 10.76 - */ 10.77 - 10.78 - 10.79 - 10.80 -//=========================================================================== 10.81 - 10.82 -/*This is the "border crossing" function -- the thing that crosses from the 10.83 - * outside world, into the VMS_HW world. It initializes and starts up the 10.84 - * VMS system, then creates one processor from the specified function and 10.85 - * puts it into the readyQ. From that point, that one function is resp. 10.86 - * for creating all the other processors, that then create others, and so 10.87 - * forth. 10.88 - *When all the processors, including the seed, have dissipated, then this 10.89 - * function returns. 
The results will have been written by side-effect via 10.90 - * pointers read from, or written into initData. 10.91 - * 10.92 - *NOTE: no Threads should exist in the outside program that might touch 10.93 - * any of the data reachable from initData passed in to here 10.94 - */ 10.95 -void 10.96 -VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fnPtr, void *initData ) 10.97 - { VPThreadSemEnv *semEnv; 10.98 - VirtProcr *seedPr; 10.99 - 10.100 - #ifdef SEQUENTIAL 10.101 - VPThread__init_Seq(); //debug sequential exe 10.102 - #else 10.103 - VPThread__init(); //normal multi-thd 10.104 - #endif 10.105 - semEnv = _VMSMasterEnv->semanticEnv; 10.106 - 10.107 - //VPThread starts with one processor, which is put into initial environ, 10.108 - // and which then calls create() to create more, thereby expanding work 10.109 - seedPr = VMS__create_procr( fnPtr, initData ); 10.110 - 10.111 - seedPr->coreAnimatedBy = semEnv->nextCoreToGetNewPr++; 10.112 - 10.113 - writePrivQ( seedPr, semEnv->readyVPQs[seedPr->coreAnimatedBy] ); 10.114 - semEnv->numVirtPr = 1; 10.115 - 10.116 - #ifdef SEQUENTIAL 10.117 - VMS__start_the_work_then_wait_until_done_Seq(); //debug sequential exe 10.118 - #else 10.119 - VMS__start_the_work_then_wait_until_done(); //normal multi-thd 10.120 - #endif 10.121 - 10.122 - VPThread__cleanup_after_shutdown(); 10.123 - } 10.124 - 10.125 - 10.126 -//=========================================================================== 10.127 - 10.128 -/*Initializes all the data-structures for a VPThread system -- but doesn't 10.129 - * start it running yet! 10.130 - * 10.131 - * 10.132 - *This sets up the semantic layer over the VMS system 10.133 - * 10.134 - *First, calls VMS_Setup, then creates own environment, making it ready 10.135 - * for creating the seed processor and then starting the work. 
10.136 - */ 10.137 -void 10.138 -VPThread__init() 10.139 - { 10.140 - VMS__init(); 10.141 - //masterEnv, a global var, now is partially set up by init_VMS 10.142 - 10.143 - VPThread__init_Helper(); 10.144 - } 10.145 - 10.146 -void 10.147 -VPThread__init_Seq() 10.148 - { 10.149 - VMS__init_Seq(); 10.150 - //masterEnv, a global var, now is partially set up by init_VMS 10.151 - 10.152 - VPThread__init_Helper(); 10.153 - } 10.154 - 10.155 -void 10.156 -VPThread__init_Helper() 10.157 - { VPThreadSemEnv *semanticEnv; 10.158 - PrivQueueStruc **readyVPQs; 10.159 - int coreIdx; 10.160 - 10.161 - //Hook up the semantic layer's plug-ins to the Master virt procr 10.162 - _VMSMasterEnv->requestHandler = &VPThread__Request_Handler; 10.163 - _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr; 10.164 - 10.165 - //create the semantic layer's environment (all its data) and add to 10.166 - // the master environment 10.167 - semanticEnv = malloc( sizeof( VPThreadSemEnv ) ); 10.168 - _VMSMasterEnv->semanticEnv = semanticEnv; 10.169 - 10.170 - //create the ready queue 10.171 - readyVPQs = malloc( NUM_CORES * sizeof(PrivQueueStruc *) ); 10.172 - 10.173 - for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ ) 10.174 - { 10.175 - readyVPQs[ coreIdx ] = makePrivQ(); 10.176 - } 10.177 - 10.178 - semanticEnv->readyVPQs = readyVPQs; 10.179 - 10.180 - semanticEnv->numVirtPr = 0; 10.181 - semanticEnv->nextCoreToGetNewPr = 0; 10.182 - 10.183 - semanticEnv->currMutexIdx = 0; 10.184 - semanticEnv->mutexDynArray = createDynArray32( INIT_NUM_MUTEX ); 10.185 - 10.186 - semanticEnv->currCondIdx = 0; 10.187 - semanticEnv->condDynArray = createDynArray32( INIT_NUM_COND ); 10.188 - } 10.189 - 10.190 - 10.191 -/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown 10.192 - */ 10.193 -void 10.194 -VPThread__cleanup_after_shutdown() 10.195 - { VPThreadSemEnv *semEnv; 10.196 - int32 coreIdx, idx, highestIdx; 10.197 - VPTMutex **mutexArray, *mutex; 10.198 - VPTCond **condArray, 
*cond; 10.199 - 10.200 - semEnv = _VMSMasterEnv->semanticEnv; 10.201 - 10.202 -//TODO: double check that all sem env locations freed 10.203 - 10.204 - for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ ) 10.205 - { 10.206 - free( semEnv->readyVPQs[coreIdx]->startOfData ); 10.207 - free( semEnv->readyVPQs[coreIdx] ); 10.208 - } 10.209 - 10.210 - free( semEnv->readyVPQs ); 10.211 - 10.212 - 10.213 - //==== Free mutexes and mutex array ==== 10.214 - mutexArray = semEnv->mutexDynArray->array; 10.215 - highestIdx = semEnv->mutexDynArray->highestIdxInArray; 10.216 - for( idx=0; idx < highestIdx; idx++ ) 10.217 - { mutex = mutexArray[ idx ]; 10.218 - if( mutex == NULL ) continue; 10.219 - free( mutex ); 10.220 - } 10.221 - free( mutexArray ); 10.222 - free( semEnv->mutexDynArray ); 10.223 - //====================================== 10.224 - 10.225 - 10.226 - //==== Free conds and cond array ==== 10.227 - condArray = semEnv->condDynArray->array; 10.228 - highestIdx = semEnv->condDynArray->highestIdxInArray; 10.229 - for( idx=0; idx < highestIdx; idx++ ) 10.230 - { cond = condArray[ idx ]; 10.231 - if( cond == NULL ) continue; 10.232 - free( cond ); 10.233 - } 10.234 - free( condArray ); 10.235 - free( semEnv->condDynArray ); 10.236 - //=================================== 10.237 - 10.238 - 10.239 - free( _VMSMasterEnv->semanticEnv ); 10.240 - VMS__cleanup_after_shutdown(); 10.241 - } 10.242 - 10.243 - 10.244 -//=========================================================================== 10.245 - 10.246 -/* 10.247 - */ 10.248 -VirtProcr * 10.249 -VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData, 10.250 - VirtProcr *animPr ) 10.251 - { VPThreadSemReq *reqData; 10.252 - 10.253 - reqData = malloc( sizeof(VPThreadSemReq) ); 10.254 - reqData->reqType = make_procr; 10.255 - reqData->initData = initData; 10.256 - reqData->fnPtr = fnPtr; 10.257 - reqData->requestingPr = animPr; 10.258 - 10.259 - VMS__add_sem_request( reqData, animPr ); 10.260 - VMS__suspend_procr( animPr ); 
-                                  //will suspend then resume and continue
-   return animPr->semanticData; //result communicated back via semData field
- }
-
-
-inline void
-VPThread__dissipate_thread( VirtProcr *procrToDissipate )
- {
-   VMS__dissipate_procr( procrToDissipate );
- }
-
-
-//===========================================================================
-
-void
-VPThread__set_globals_to( void *globals )
- {
-   ((VPThreadSemEnv *)
-                 (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals;
- }
-
-void *
-VPThread__give_globals()
- {
-   return((VPThreadSemEnv *)
-                 (_VMSMasterEnv->semanticEnv))->applicationGlobals;
- }
-
-
-
-//===========================================================================
-
-int32
-VPThread__make_mutex( VirtProcr *animPr )
- { VPThreadSemReq *reqData;
-
-   reqData = malloc( sizeof(VPThreadSemReq) );
-   reqData->reqType      = make_mutex;
-   reqData->requestingPr = animPr;
-
-   VMS__add_sem_request( reqData, animPr );
-   VMS__suspend_procr( animPr );  //will suspend then resume and continue
-   return animPr->semanticData; //result communicated back via semData field
- }
-
-void
-VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr )
- { VPThreadSemReq *reqData;
-
-   reqData = malloc( sizeof(VPThreadSemReq) );
-   reqData->reqType      = mutex_lock;
-   reqData->mutexIdx     = mutexIdx;
-   reqData->requestingPr = acquiringPr;
-
-   VMS__add_sem_request( reqData, acquiringPr );
-   VMS__suspend_procr( acquiringPr ); //will resume when has the lock
- }
-
-void
-VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr )
- { VPThreadSemReq *reqData;
-
-   reqData = malloc( sizeof(VPThreadSemReq) );
-   reqData->reqType      = mutex_unlock;
-   reqData->mutexIdx     = mutexIdx;
-   reqData->requestingPr = releasingPr;
-
-   VMS__add_sem_request( reqData, releasingPr );
-   VMS__suspend_procr( releasingPr ); //lock released when resumes
- }
-
-
-//=======================
-int32
-VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr)
- { VPThreadSemReq *reqData;
-
-   reqData = malloc( sizeof(VPThreadSemReq) );
-   reqData->reqType      = make_cond;
-   reqData->mutexIdx     = ownedMutexIdx;
-   reqData->requestingPr = animPr;
-
-   VMS__add_sem_request( reqData, animPr );
-   VMS__suspend_procr( animPr );  //will suspend then resume and continue
-   return animPr->semanticData; //result communicated back via semData field
- }
-
-void
-VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr)
- { VPThreadSemReq *reqData;
-
-   reqData = malloc( sizeof(VPThreadSemReq) );
-   reqData->reqType      = cond_wait;
-   reqData->condIdx      = condIdx;
-   reqData->requestingPr = waitingPr;
-
-   VMS__add_sem_request( reqData, waitingPr );
-   VMS__suspend_procr( waitingPr ); //resume when signalled & has lock
- }
-
-void *
-VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr )
- { VPThreadSemReq *reqData;
-
-   reqData = malloc( sizeof(VPThreadSemReq) );
-   reqData->reqType      = cond_signal;
-   reqData->condIdx      = condIdx;
-   reqData->requestingPr = signallingPr;
-
-   VMS__add_sem_request( reqData, signallingPr );
-   VMS__suspend_procr( signallingPr );//resumes right away, still having lock
- }
-//===========================================================================
-
-/*Just thin wrapper for now -- semantic request is still a simple thing
- * (July 3, 2010)
- */
-inline void
-VPThread__free_semantic_request( VPThreadSemReq *semReq )
- {
-   free( semReq );
- }
-
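The removed design notes describe how the master handles these requests: the mutex integer keys a hash-table entry holding an owner field plus a queue of waiting AppVPs, and because only the master runs the handler, the algorithm is purely sequential. A minimal sketch of that handler logic, using toy stand-ins for the real AppVP and hash-entry types (all names here are hypothetical, not the actual VMS request-handler API):

```c
#include <stddef.h>

enum { MAX_WAITERS = 8 };

/* Toy stand-ins for the real AppVP and hash-table entry types */
typedef struct { int id; } AppVP;

typedef struct
 { AppVP *owner;               /* AppVP currently holding the lock */
   AppVP *waitQ[MAX_WAITERS];  /* AppVPs blocked trying to acquire */
   int    numWaiting;
 } MutexEntry;

/* Handler for a mutex_lock request -- runs only in the single master,
 * so plain sequential reads and writes are safe.  Returns 1 if the
 * requester should be queued for re-animation now, 0 if it stays
 * suspended in the wait queue.
 */
int handle_mutex_lock( MutexEntry *entry, AppVP *requester )
 {
   if( entry->owner == NULL || entry->owner == requester )
    { entry->owner = requester;  /* grant the lock */
      return 1;                  /* schedule requester to resume */
    }
   entry->waitQ[ entry->numWaiting++ ] = requester; /* stays suspended */
   return 0;
 }

/* Handler for mutex_unlock: hand the lock to the next waiter, if any;
 * the returned AppVP (if non-NULL) is the one to re-animate.
 */
AppVP *handle_mutex_unlock( MutexEntry *entry )
 { int i;
   if( entry->numWaiting == 0 ) { entry->owner = NULL; return NULL; }
   entry->owner = entry->waitQ[0];
   for( i = 1; i < entry->numWaiting; i++ )  /* dequeue from the front */
      entry->waitQ[i-1] = entry->waitQ[i];
   entry->numWaiting--;
   return entry->owner;
 }
```

Because the requesting AppVP is suspended before the handler runs, there is no race on the owner field or the queue, which is the "purely sequential algorithm that systematic reasoning can be used on" claimed in the notes.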
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/VPThread_lib.c	Thu Nov 11 04:29:10 2010 -0800
@@ -0,0 +1,439 @@
+/*
+ * Copyright 2010 OpenSourceCodeStewardshipFoundation
+ *
+ * Licensed under BSD
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <malloc.h>
+
+#include "VMS/VMS.h"
+#include "VPThread.h"
+#include "VMS/Queue_impl/PrivateQueue.h"
+#include "VMS/Hash_impl/PrivateHash.h"
+
+
+//==========================================================================
+
+void
+VPThread__init();
+
+void
+VPThread__init_Seq();
+
+void
+VPThread__init_Helper();
+
+
+//===========================================================================
+
+
+/*These are the library functions *called in the application*
+ *
+ *There's a pattern for the outside sequential code to interact with the
+ * VMS_HW code.
+ *The VMS_HW system is inside a boundary.. every VPThread system is in its
+ * own directory that contains the functions for each of the processor types.
+ * One of the processor types is the "seed" processor that starts the
+ * cascade of creating all the processors that do the work.
+ *So, in the directory is a file called "EntryPoint.c" that contains the
+ * function, named appropriately to the work performed, that the outside
+ * sequential code calls.  This function follows a pattern:
+ *1) it calls VPThread__init()
+ *2) it creates the initial data for the seed processor, which is passed
+ *    in to the function
+ *3) it creates the seed VPThread processor, with the data to start it with.
+ *4) it calls startVPThreadThenWaitUntilWorkDone
+ *5) it gets the returnValue from the transfer struc and returns that
+ *    from the function
+ *
+ *For now, a new VPThread system has to be created via VPThread__init every
+ * time an entry point function is called -- later, might add letting the
+ * VPThread system be created once, and let all the entry points just reuse
+ * it -- want to be as simple as possible now, and see by using what makes
+ * sense for later..
+ */
+
+
+
+//===========================================================================
+
+/*This is the "border crossing" function -- the thing that crosses from the
+ * outside world, into the VMS_HW world.  It initializes and starts up the
+ * VMS system, then creates one processor from the specified function and
+ * puts it into the readyQ.  From that point, that one function is resp.
+ * for creating all the other processors, that then create others, and so
+ * forth.
+ *When all the processors, including the seed, have dissipated, then this
+ * function returns.  The results will have been written by side-effect via
+ * pointers read from, or written into initData.
+ *
+ *NOTE: no Threads should exist in the outside program that might touch
+ * any of the data reachable from initData passed in to here
+ */
+void
+VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fnPtr, void *initData )
+ { VPThdSemEnv *semEnv;
+   VirtProcr   *seedPr;
+
+   #ifdef SEQUENTIAL
+   VPThread__init_Seq();   //debug sequential exe
+   #else
+   VPThread__init();       //normal multi-thd
+   #endif
+   semEnv = _VMSMasterEnv->semanticEnv;
+
+   //VPThread starts with one processor, which is put into initial environ,
+   // and which then calls create() to create more, thereby expanding work
+   seedPr = VPThread__create_procr_helper( fnPtr, initData, semEnv, -1 );
+
+   resume_procr( seedPr, semEnv );
+
+   #ifdef SEQUENTIAL
+   VMS__start_the_work_then_wait_until_done_Seq();  //debug sequential exe
+   #else
+   VMS__start_the_work_then_wait_until_done();      //normal multi-thd
+   #endif
+
+   VPThread__cleanup_after_shutdown();
+ }
+
+
+inline int32
+VPThread__giveMinWorkUnitCycles( float32 percentOverhead )
+ {
+   return MIN_WORK_UNIT_CYCLES;
+ }
+
+inline int32
+VPThread__giveIdealNumWorkUnits()
+ {
+   return NUM_SCHED_SLOTS * NUM_CORES;
+ }
+
+inline int32
+VPThread__give_number_of_cores_to_schedule_onto()
+ {
+   return NUM_CORES;
+ }
+
+/*For now, use TSC -- later, make these two macros with assembly that first
+ * saves jump point, and second jumps back several times to get reliable time
+ */
+inline void
+VPThread__start_primitive()
+ { saveLowTimeStampCountInto( ((VPThreadSemEnv *)(_VMSMasterEnv->semanticEnv))->
+                              primitiveStartTime );
+ }
+
+/*Just quick and dirty for now -- make reliable later
+ * will want this to jump back several times -- to be sure cache is warm
+ * because don't want comm time included in calc-time measurement -- and
+ * also to throw out any "weird" values due to OS interrupt or TSC rollover
+ */
+inline int32
+VPThread__end_primitive_and_give_cycles()
+ { int32 endTime, startTime;
+   //TODO: fix by repeating time-measurement
+   saveLowTimeStampCountInto( endTime );
+   startTime=((VPThdSemEnv*)(_VMSMasterEnv->semanticEnv))->primitiveStartTime;
+   return (endTime - startTime);
+ }
+
+//===========================================================================
+//
+/*Initializes all the data-structures for a VPThread system -- but doesn't
+ * start it running yet!
+ *
+ *
+ *This sets up the semantic layer over the VMS system
+ *
+ *First, calls VMS_Setup, then creates own environment, making it ready
+ * for creating the seed processor and then starting the work.
+ */
+void
+VPThread__init()
+ {
+   VMS__init();
+   //masterEnv, a global var, now is partially set up by init_VMS
+
+   VPThread__init_Helper();
+ }
+
+void
+VPThread__init_Seq()
+ {
+   VMS__init_Seq();
+   //masterEnv, a global var, now is partially set up by init_VMS
+
+   VPThread__init_Helper();
+ }
+
+void
+VPThread__init_Helper()
+ { VPThdSemEnv     *semanticEnv;
+   PrivQueueStruc **readyVPQs;
+   int              coreIdx, i;
+
+   //Hook up the semantic layer's plug-ins to the Master virt procr
+   _VMSMasterEnv->requestHandler = &VPThread__Request_Handler;
+   _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr;
+
+   //create the semantic layer's environment (all its data) and add to
+   // the master environment
+   semanticEnv = VMS__malloc( sizeof( VPThdSemEnv ) );
+   _VMSMasterEnv->semanticEnv = semanticEnv;
+
+   //create the ready queue
+   readyVPQs = VMS__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
+
+   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
+    {
+      readyVPQs[ coreIdx ] = makePrivQ();
+    }
+
+   semanticEnv->readyVPQs = readyVPQs;
+
+   semanticEnv->numVirtPr = 0;
+   semanticEnv->nextCoreToGetNewPr = 0;
+
+   semanticEnv->mutexDynArrayInfo =
+      makePrivDynArrayOfSize( &(semanticEnv->mutexDynArray), INIT_NUM_MUTEX );
+
+   semanticEnv->condDynArrayInfo =
+      makePrivDynArrayOfSize( &(semanticEnv->condDynArray), INIT_NUM_COND );
+
+   //TODO: bug -- turn these arrays into dyn arrays to eliminate limit
+   //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( );
+   //semanticEnv->transactionStrucs = makeDynArrayInfo( );
+   for( i = 0; i < NUM_STRUCS_IN_SEM_ENV; i++ )
+    {
+      semanticEnv->singletonHasBeenExecutedFlags[i] = FALSE;
+      semanticEnv->transactionStrucs[i].waitingVPQ = makePrivQ();
+    }
+ }
+
+
+/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown
+ */
+void
+VPThread__cleanup_after_shutdown()
+ { VPThdSemEnv *semEnv;
+   int32        coreIdx, idx, highestIdx;
+   VPThdMutex **mutexArray, *mutex;
+   VPThdCond  **condArray,  *cond;
+
+   /* It's all allocated inside VMS's big chunk -- that's about to be freed, so
+    * nothing to do here
+   semEnv = _VMSMasterEnv->semanticEnv;
+
+//TODO: double check that all sem env locations freed
+
+   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
+    {
+      free( semEnv->readyVPQs[coreIdx]->startOfData );
+      free( semEnv->readyVPQs[coreIdx] );
+    }
+
+   free( semEnv->readyVPQs );
+
+
+   //==== Free mutexes and mutex array ====
+   mutexArray = semEnv->mutexDynArray->array;
+   highestIdx = semEnv->mutexDynArray->highestIdxInArray;
+   for( idx=0; idx < highestIdx; idx++ )
+    { mutex = mutexArray[ idx ];
+      if( mutex == NULL ) continue;
+      free( mutex );
+    }
+   free( mutexArray );
+   free( semEnv->mutexDynArray );
+   //======================================
+
+
+   //==== Free conds and cond array ====
+   condArray  = semEnv->condDynArray->array;
+   highestIdx = semEnv->condDynArray->highestIdxInArray;
+   for( idx=0; idx < highestIdx; idx++ )
+    { cond = condArray[ idx ];
+      if( cond == NULL ) continue;
+      free( cond );
+    }
+   free( condArray );
+   free( semEnv->condDynArray );
+   //===================================
+
+
+   free( _VMSMasterEnv->semanticEnv );
+   */
+   VMS__cleanup_at_end_of_shutdown();
+ }
+
+
+//===========================================================================
+
+/*
+ */
+inline VirtProcr *
+VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData,
+                         VirtProcr *creatingPr )
+ { VPThdSemReq reqData;
+
+   //the semantic request data is on the stack and disappears when this
+   // call returns -- it's guaranteed to remain in the VP's stack for as
+   // long as the VP is suspended.
+   reqData.reqType            = 0;  //know the type because is a VMS create req
+   reqData.coreToScheduleOnto = -1; //means round-robin schedule
+   reqData.fnPtr              = fnPtr;
+   reqData.initData           = initData;
+   reqData.requestingPr       = creatingPr;
+
+   VMS__send_create_procr_req( &reqData, creatingPr );
+
+   return creatingPr->dataRetFromReq;
+ }
+
+inline VirtProcr *
+VPThread__create_thread_with_affinity( VirtProcrFnPtr fnPtr, void *initData,
+                          VirtProcr *creatingPr, int32 coreToScheduleOnto )
+ { VPThdSemReq reqData;
+
+   //the semantic request data is on the stack and disappears when this
+   // call returns -- it's guaranteed to remain in the VP's stack for as
+   // long as the VP is suspended.
+   reqData.reqType            = 0;  //know type because in a VMS create req
+   reqData.coreToScheduleOnto = coreToScheduleOnto;
+   reqData.fnPtr              = fnPtr;
+   reqData.initData           = initData;
+   reqData.requestingPr       = creatingPr;
+
+   VMS__send_create_procr_req( &reqData, creatingPr );
+ }
+
+inline void
+VPThread__dissipate_thread( VirtProcr *procrToDissipate )
+ {
+   VMS__send_dissipate_req( procrToDissipate );
+ }
+
+
+//===========================================================================
+
+void *
+VPThread__malloc( int32 sizeToMalloc, VirtProcr *animPr )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = malloc_req;
+   reqData.sizeToMalloc = sizeToMalloc;
+   reqData.requestingPr = animPr;
+
+   VMS__send_sem_request( &reqData, animPr );
+
+   return animPr->dataRetFromReq;
+ }
+
+
+/*Sends request to Master, which does the work of freeing
+ */
+void
+VPThread__free( void *ptrToFree, VirtProcr *animPr )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = free_req;
+   reqData.ptrToFree    = ptrToFree;
+   reqData.requestingPr = animPr;
+
+   VMS__send_sem_request( &reqData, animPr );
+ }
+
+
+//===========================================================================
+
+inline void
+VPThread__set_globals_to( void *globals )
+ {
+   ((VPThdSemEnv *)
+                 (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals;
+ }
+
+inline void *
+VPThread__give_globals()
+ {
+   return((VPThdSemEnv *) (_VMSMasterEnv->semanticEnv))->applicationGlobals;
+ }
+
+
+//===========================================================================
+
+inline int32
+VPThread__make_mutex( VirtProcr *animPr )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = make_mutex;
+   reqData.requestingPr = animPr;
+
+   VMS__send_sem_request( &reqData, animPr );
+
+   return animPr->dataRetFromReq;
+ }
+
+inline void
+VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = mutex_lock;
+   reqData.mutexIdx     = mutexIdx;
+   reqData.requestingPr = acquiringPr;
+
+   VMS__send_sem_request( &reqData, acquiringPr );
+ }
+
+inline void
+VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = mutex_unlock;
+   reqData.mutexIdx     = mutexIdx;
+   reqData.requestingPr = releasingPr;
+
+   VMS__send_sem_request( &reqData, releasingPr );
+ }
+
+
+//=======================
+inline int32
+VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr)
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = make_cond;
+   reqData.mutexIdx     = ownedMutexIdx;
+   reqData.requestingPr = animPr;
+
+   VMS__send_sem_request( &reqData, animPr );
+
+   return animPr->dataRetFromReq;
+ }
+
+inline void
+VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr)
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = cond_wait;
+   reqData.condIdx      = condIdx;
+   reqData.requestingPr = waitingPr;
+
+   VMS__send_sem_request( &reqData, waitingPr );
+ }
+
+inline void *
+VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = cond_signal;
+   reqData.condIdx      = condIdx;
+   reqData.requestingPr = signallingPr;
+
+   VMS__send_sem_request( &reqData, signallingPr );
+ }
+//===========================================================================
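A notable change in this version of the library: the old file heap-allocated every `VPThreadSemReq` with `malloc` and needed `VPThread__free_semantic_request`, while the new file builds each `VPThdSemReq` in a stack local and passes its address to `VMS__send_sem_request`. That is safe because the request is consumed while the requesting VP is suspended inside the call, before the stack frame dies. A minimal sketch of why this works, using hypothetical stand-in types rather than the real VMS API:

```c
/* Hypothetical, simplified stand-ins for VMS types (not the real API) */
typedef struct { int reqType; int mutexIdx; } SemReq;
typedef struct { int lastReqType; int lastMutexIdx; } Master;

/* Models VMS__send_sem_request(): the master consumes the request
 * before this call returns (in real VMS, while the requesting VP is
 * suspended), so the caller may legally build the request in a stack
 * local -- no malloc/free pair needed, unlike the removed file above.
 */
void send_sem_request( Master *master, SemReq *req )
 {
   master->lastReqType  = req->reqType;   /* copied out before return */
   master->lastMutexIdx = req->mutexIdx;
 }

void mutex_lock_sketch( Master *master, int mutexIdx )
 { SemReq reqData;               /* on the stack, like VPThdSemReq */
   reqData.reqType  = 42;        /* stand-in for the mutex_lock enum value */
   reqData.mutexIdx = mutexIdx;
   send_sem_request( master, &reqData );
 }                               /* reqData dies here -- already consumed */
```

The design choice trades one heap round-trip per request for the requirement that the master never hold a pointer to the request after the VP resumes, which the suspend-until-handled protocol guarantees.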
