VMS > VMS_Implementations > Vthread_impls > Vthread_MC_shared_impl
changeset 28:b3a881f25c5a
Nearly compiles with in-process common_ancestor vers of VMS
| field | value |
|---|---|
| author | Some Random Person <seanhalle@yahoo.com> |
| date | Sun, 04 Mar 2012 14:29:42 -0800 |
| parents | e5d4d5871ac9 |
| children | b94dc57e4455 |
| files | DESIGN_NOTES__Vthread_lib.txt Vthread.h Vthread.s Vthread_Meas.h Vthread_Measurement.h Vthread_PluginFns.c Vthread_Request_Handlers.c Vthread_Request_Handlers.h Vthread_helper.c Vthread_helper.h Vthread_lib.c |
| diffstat | 11 files changed, 570 insertions(+), 606 deletions(-) |
line diff
--- a/DESIGN_NOTES__Vthread_lib.txt	Thu Mar 01 13:20:51 2012 -0800
+++ b/DESIGN_NOTES__Vthread_lib.txt	Sun Mar 04 14:29:42 2012 -0800
@@ -1,44 +1,44 @@
 
-Implement VPThread this way:
+Implement Vthread this way:
 
 We implemented a subset of PThreads functionality, called VMSPThd, that
 includes: mutex_lock, mutex_unlock, cond_wait, and cond_notify, which we name
 as VMSPThd__mutix_lock and so forth. \ All VMSPThd functions take a reference
-to the AppVP that is animating the function call, in addition to any other
+to the AppSlv that is animating the function call, in addition to any other
 parameters.
 
 A mutex variable is an integer, returned by VMSPThd__mutex_create(), which is
 used inside the request handler as a key to lookup an entry in a hash table,
 that lives in the SemanticEnv. \ Such an entry has a field holding a
-reference to the AppVP that currently owns the lock, and a queue of AppVPs
+reference to the AppSlv that currently owns the lock, and a queue of AppSlvs
 waiting to acquire the lock. \
 
 Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a
-request. \ Recall that all request sends cause the suspention of the AppVP
+request. \ Recall that all request sends cause the suspention of the AppSlv
 that is animating the library call that generates the request, in this case
-the AppVP animating VMSPThd__mutex_lock() is suspended. \ The request
-includes a reference to that animating AppVP, and the mutex integer value.
+the AppSlv animating VMSPThd__mutex_lock() is suspended. \ The request
+includes a reference to that animating AppSlv, and the mutex integer value.
 \ When the request reaches the request handler, the mutex integer is used as
 key to look up the hash entry, then if the owner field is null (or the same
-as the AppVP in the request), the AppVP in the request is placed into the
-owner field, and that AppVP is queued to be scheduled for re-animation.
-\ However, if a different AppVP is listed in the owner field, then the AppVP
+as the AppSlv in the request), the AppSlv in the request is placed into the
+owner field, and that AppSlv is queued to be scheduled for re-animation.
+\ However, if a different AppSlv is listed in the owner field, then the AppSlv
 in the request is added to the queue of those trying to acquire. \ Notice
 that this is a purely sequential algorithm that systematic reasoning can be
 used on.
 
 VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the
-request handler to queue for re-animation the AppVP that animated the call.
-\ It also pops the queue of AppVPs waiting to acquire the lock, and writes
-the AppVP that comes out as the current owner of the lock and queues that
-AppVP for re-animation (unless the popped value is null, in which case the
+request handler to queue for re-animation the AppSlv that animated the call.
+\ It also pops the queue of AppSlvs waiting to acquire the lock, and writes
+the AppSlv that comes out as the current owner of the lock and queues that
+AppSlv for re-animation (unless the popped value is null, in which case the
 current owner is just set to null).
 
 Implementing condition variables takes a similar approach, in that
 VMSPThd__init_cond() returns an integer that is then used to look up an entry
-in a hash table, where the entry contains a queue of AppVPs waiting on the
+in a hash table, where the entry contains a queue of AppSlvs waiting on the
 condition variable. \ VMSPThd__cond_wait() generates a request that pushes
-the AppVP into the queue, while VMSPThd__cond_signal() takes a wait request
+the AppSlv into the queue, while VMSPThd__cond_signal() takes a wait request
 from the queue.
 
 Notice that this is again a purely sequential algorithm, and sidesteps issues
@@ -59,7 +59,7 @@
 debug, and is in a form that should be amenable to proof of freedom from race
 conditions, given a correct implementation of VMS. \ The hash-table based
 approach also makes it reasonably high performance, with (essentially) no
-slowdown when the number of locks or number of AppVPs grows large.
+slowdown when the number of locks or number of AppSlvs grows large.
 
 ===========================
 Behavior:
--- a/Vthread.h	Thu Mar 01 13:20:51 2012 -0800
+++ b/Vthread.h	Sun Mar 04 14:29:42 2012 -0800
@@ -6,21 +6,24 @@
  *
  */
 
-#ifndef _VPThread_H
-#define _VPThread_H
+#ifndef _Vthread_H
+#define _Vthread_H
+
+#define _LANG_NAME_ "Vthread"
 
 #include "VMS_impl/VMS.h"
 #include "C_Libraries/Queue_impl/PrivateQueue.h"
 #include "C_Libraries/DynArray/DynArray.h"
 
 
-/*This header defines everything specific to the VPThread semantic plug-in
+/*This header defines everything specific to the Vthread semantic plug-in
  */
 
 
 //===========================================================================
 //turn on the counter measurements of language overhead -- comment to turn off
 #define MEAS__TURN_ON_LANG_MEAS
+#include "Vthread_Overhead_Meas.h"
 
 #define INIT_NUM_MUTEX 10000
 #define INIT_NUM_COND 10000
@@ -29,7 +32,7 @@
 
 //===========================================================================
 
 //===========================================================================
-typedef struct _VPThreadSemReq VPThdSemReq;
+typedef struct _VthreadSemReq VthdSemReq;
 typedef void (*PtrToAtomicFn ) ( void * ); //executed atomically in master
 //===========================================================================
 
@@ -38,17 +41,17 @@
  */
 typedef struct
  {
-   void *endInstrAddr;
+   void *savedRetAddr;
    int32 hasBeenStarted;
    int32 hasFinished;
    PrivQueueStruc *waitQ;
 }
-VPThdSingleton;
+VthdSingleton;
 
 /*Semantic layer-specific data sent inside a request from lib called in app
  * to request handler called in MasterLoop
  */
-enum VPThreadReqType
+enum VthreadReqType
 {
    make_mutex = 1,
    mutex_lock,
@@ -56,7 +59,7 @@
    make_cond,
    cond_wait,
    cond_signal,
-   make_procr,
+   make_slaveVP,
    malloc_req,
    free_req,
    singleton_fn_start,
@@ -68,9 +71,9 @@
    trans_end
 };
 
-struct _VPThreadSemReq
- { enum VPThreadReqType reqType;
-   SlaveVP *requestingVP;
+struct _VthreadSemReq
+ { enum VthreadReqType reqType;
+   SlaveVP *requestingSlv;
    int32 mutexIdx;
    int32 condIdx;
 
@@ -82,22 +85,22 @@
    void *ptrToFree;
 
    int32 singletonID;
-   VPThdSingleton **singletonPtrAddr;
+   VthdSingleton *singleton;
 
    PtrToAtomicFn fnToExecInMaster;
    void *dataForFn;
 
    int32 transID;
 }
-/* VPThreadSemReq */;
+/* VthreadSemReq */;
 
 
 typedef struct
 {
-   SlaveVP *VPCurrentlyExecuting;
-   PrivQueueStruc *waitingVPQ;
+   SlaveVP *SlvCurrentlyExecuting;
+   PrivQueueStruc *waitingSlvQ;
 }
-VPThdTrans;
+VthdTrans;
 
 
 typedef struct
@@ -106,16 +109,16 @@
    SlaveVP *holderOfLock;
    PrivQueueStruc *waitingQueue;
 }
-VPThdMutex;
+VthdMutex;
 
 
 typedef struct
 {
    int32 condIdx;
    PrivQueueStruc *waitingQueue;
-   VPThdMutex *partnerMutex;
+   VthdMutex *partnerMutex;
 }
-VPThdCond;
+VthdCond;
 
 typedef struct _TransListElem TransListElem;
 struct _TransListElem
@@ -130,130 +133,129 @@
    int32 highestTransEntered;
    TransListElem *lastTransEntered;
 }
-VPThdSemData;
+VthdSemData;
 
 
 typedef struct
 {
    //Standard stuff will be in most every semantic env
-   PrivQueueStruc **readyVPQs;
-   int32 numVirtVP;
-   int32 nextCoreToGetNewVP;
+   PrivQueueStruc **readySlvQs;
+   int32 nextCoreToGetNewSlv;
    int32 primitiveStartTime;
 
    //Specific to this semantic layer
-   VPThdMutex **mutexDynArray;
+   VthdMutex **mutexDynArray;
    PrivDynArrayInfo *mutexDynArrayInfo;
 
-   VPThdCond **condDynArray;
+   VthdCond **condDynArray;
    PrivDynArrayInfo *condDynArrayInfo;
 
   void *applicationGlobals;
 
   //fix limit on num with dynArray
-   VPThdSingleton fnSingletons[NUM_STRUCS_IN_SEM_ENV];
+   VthdSingleton fnSingletons[NUM_STRUCS_IN_SEM_ENV];
 
-   VPThdTrans transactionStrucs[NUM_STRUCS_IN_SEM_ENV];
+   VthdTrans transactionStrucs[NUM_STRUCS_IN_SEM_ENV];
 }
-VPThdSemEnv;
+VthdSemEnv;
 
 
 //===========================================================================
 
 inline void
-VPThread__create_seed_procr_and_do_work( TopLevelFnPtr fn, void *initData );
+Vthread__create_seed_slaveVP_and_do_work( TopLevelFnPtr fn, void *initData );
 
 //=======================
 
 inline SlaveVP *
-VPThread__create_thread( TopLevelFnPtr fnPtr, void *initData,
-                         SlaveVP *creatingVP );
+Vthread__create_thread( TopLevelFnPtr fnPtr, void *initData,
+                        SlaveVP *creatingSlv );
 
 inline SlaveVP *
-VPThread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
-                         SlaveVP *creatingVP, int32 coreToScheduleOnto );
+Vthread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
+                        SlaveVP *creatingSlv, int32 coreToScheduleOnto );
 
 inline void
-VPThread__dissipate_thread( SlaveVP *procrToDissipate );
+Vthread__dissipate_thread( SlaveVP *procrToDissipate );
 
 //=======================
 inline void
-VPThread__set_globals_to( void *globals );
+Vthread__set_globals_to( void *globals );
 
 inline void *
-VPThread__give_globals();
+Vthread__give_globals();
 
 //=======================
 inline int32
-VPThread__make_mutex( SlaveVP *animVP );
+Vthread__make_mutex( SlaveVP *animSlv );
 
 inline void
-VPThread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringVP );
+Vthread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringSlv );
 
 inline void
-VPThread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingVP );
+Vthread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingSlv );
 
 
 //=======================
 inline int32
-VPThread__make_cond( int32 ownedMutexIdx, SlaveVP *animPr);
+Vthread__make_cond( int32 ownedMutexIdx, SlaveVP *animSlv);
 
 inline void
-VPThread__cond_wait( int32 condIdx, SlaveVP *waitingPr);
+Vthread__cond_wait( int32 condIdx, SlaveVP *waitingSlv);
 
 inline void *
-VPThread__cond_signal( int32 condIdx, SlaveVP *signallingVP );
+Vthread__cond_signal( int32 condIdx, SlaveVP *signallingSlv );
 
 
 //=======================
 void
-VPThread__start_fn_singleton( int32 singletonID, SlaveVP *animVP );
+Vthread__start_fn_singleton( int32 singletonID, SlaveVP *animSlv );
 
 void
-VPThread__end_fn_singleton( int32 singletonID, SlaveVP *animVP );
+Vthread__end_fn_singleton( int32 singletonID, SlaveVP *animSlv );
 
 void
-VPThread__start_data_singleton( VPThdSingleton **singeltonAddr, SlaveVP *animVP );
+Vthread__start_data_singleton( VthdSingleton *singelton, SlaveVP *animSlv );
 
 void
-VPThread__end_data_singleton( VPThdSingleton **singletonAddr, SlaveVP *animVP );
+Vthread__end_data_singleton( VthdSingleton *singleton, SlaveVP *animSlv );
 
 void
-VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
-                                         void *data, SlaveVP *animVP );
+Vthread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
+                                        void *data, SlaveVP *animSlv );
 
 void
-VPThread__start_transaction( int32 transactionID, SlaveVP *animVP );
+Vthread__start_transaction( int32 transactionID, SlaveVP *animSlv );
 
 void
-VPThread__end_transaction( int32 transactionID, SlaveVP *animVP );
+Vthread__end_transaction( int32 transactionID, SlaveVP *animSlv );
 
 
 
 //========================= Internal use only =============================
 inline void
-VPThread__Request_Handler( SlaveVP *requestingVP, void *_semEnv );
+Vthread__Request_Handler( SlaveVP *requestingSlv, void *_semEnv );
 
 inline SlaveVP *
-VPThread__schedule_virt_procr( void *_semEnv, int coreNum );
+Vthread__schedule_slaveVP( void *_semEnv, int coreNum );
 
 //=======================
 inline void
-VPThread__free_semantic_request( VPThdSemReq *semReq );
+Vthread__free_semantic_request( VthdSemReq *semReq );
 
 //=======================
 
 void *
-VPThread__malloc( size_t sizeToMalloc, SlaveVP *animVP );
+Vthread__malloc( size_t sizeToMalloc, SlaveVP *animSlv );
 
 void
-VPThread__init();
+Vthread__init();
 
 void
-VPThread__cleanup_after_shutdown();
+Vthread__cleanup_after_shutdown();
 
 void inline
-resume_procr( SlaveVP *procr, VPThdSemEnv *semEnv );
+resume_slaveVP( SlaveVP *procr, VthdSemEnv *semEnv );
 
-#endif /* _VPThread_H */
+#endif /* _Vthread_H */
--- a/Vthread.s	Thu Mar 01 13:20:51 2012 -0800
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,21 +0,0 @@
-
-//Assembly code takes the return addr off the stack and saves
-// into the singleton. The first field in the singleton is the
-// "endInstrAddr" field, and the return addr is at 0x4(%ebp)
-.globl asm_save_ret_to_singleton
-asm_save_ret_to_singleton:
-   movq 0x8(%rbp), %rax  #get ret address, ebp is the same as in the calling function
-   movq %rax, (%rdi)     #write ret addr to endInstrAddr field
-   ret
-
-
-//Assembly code changes the return addr on the stack to the one
-// saved into the singleton by the end-singleton-fn
-//The stack's return addr is at 0x4(%%ebp)
-.globl asm_write_ret_from_singleton
-asm_write_ret_from_singleton:
-   movq (%rdi), %rax     #get endInstrAddr field
-   movq %rax, 0x8(%rbp)  #write return addr to the stack of the caller
-   ret
-
-
--- a/Vthread_Meas.h	Thu Mar 01 13:20:51 2012 -0800
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,109 +0,0 @@
-/*
- * File:   VPThread_helper.h
- * Author: msach
- *
- * Created on June 10, 2011, 12:20 PM
- */
-
-#ifndef VTHREAD_MEAS_H
-#define VTHREAD_MEAS_H
-
-#ifdef MEAS__TURN_ON_LANG_MEAS
-
-   #ifdef MEAS__Make_Meas_Hists_for_Language
-   #undef MEAS__Make_Meas_Hists_for_Language
-   #endif
-
-//=================== Language-specific Measurement Stuff ===================
-//
-//
-   #define createHistIdx       1   //note: starts at 1
-   #define mutexLockHistIdx    2
-   #define mutexUnlockHistIdx  3
-   #define condWaitHistIdx     4
-   #define condSignalHistIdx   5
-
-   #define MEAS__Make_Meas_Hists_for_Language() \
-      _VMSMasterEnv->measHistsInfo = \
-       makePrivDynArrayOfSize( (void***)&(_VMSMasterEnv->measHists), 200); \
-      makeAMeasHist( createHistIdx,      "create",       250, 0, 100 ) \
-      makeAMeasHist( mutexLockHistIdx,   "mutex_lock",    50, 0, 100 ) \
-      makeAMeasHist( mutexUnlockHistIdx, "mutex_unlock",  50, 0, 100 ) \
-      makeAMeasHist( condWaitHistIdx,    "cond_wait",     50, 0, 100 ) \
-      makeAMeasHist( condSignalHistIdx,  "cond_signal",   50, 0, 100 )
-
-
-   #define Meas_startCreate \
-      int32 startStamp, endStamp; \
-      saveLowTimeStampCountInto( startStamp );
-
-   #define Meas_endCreate \
-      saveLowTimeStampCountInto( endStamp ); \
-      addIntervalToHist( startStamp, endStamp, \
-                         _VMSMasterEnv->measHists[ createHistIdx ] );
-
-   #define Meas_startMutexLock \
-      int32 startStamp, endStamp; \
-      saveLowTimeStampCountInto( startStamp );
-
-   #define Meas_endMutexLock \
-      saveLowTimeStampCountInto( endStamp ); \
-      addIntervalToHist( startStamp, endStamp, \
-                         _VMSMasterEnv->measHists[ mutexLockHistIdx ] );
-
-   #define Meas_startMutexUnlock \
-      int32 startStamp, endStamp; \
-      saveLowTimeStampCountInto( startStamp );
-
-   #define Meas_endMutexUnlock \
-      saveLowTimeStampCountInto( endStamp ); \
-      addIntervalToHist( startStamp, endStamp, \
-                         _VMSMasterEnv->measHists[ mutexUnlockHistIdx ] );
-
-   #define Meas_startCondWait \
-      int32 startStamp, endStamp; \
-      saveLowTimeStampCountInto( startStamp );
-
-   #define Meas_endCondWait \
-      saveLowTimeStampCountInto( endStamp ); \
-      addIntervalToHist( startStamp, endStamp, \
-                         _VMSMasterEnv->measHists[ condWaitHistIdx ] );
-
-   #define Meas_startCondSignal \
-      int32 startStamp, endStamp; \
-      saveLowTimeStampCountInto( startStamp );
-
-   #define Meas_endCondSignal \
-      saveLowTimeStampCountInto( endStamp ); \
-      addIntervalToHist( startStamp, endStamp, \
-                         _VMSMasterEnv->measHists[ condSignalHistIdx ] );
-
-#else //===================== turned off ==========================
-
-   #define MEAS__Make_Meas_Hists_for_Language()
-
-   #define Meas_startCreate
-
-   #define Meas_endCreate
-
-   #define Meas_startMutexLock
-
-   #define Meas_endMutexLock
-
-   #define Meas_startMutexUnlock
-
-   #define Meas_endMutexUnlock
-
-   #define Meas_startCondWait
-
-   #define Meas_endCondWait
-
-   #define Meas_startCondSignal
-
-   #define Meas_endCondSignal
-
-#endif
-
-
-#endif /* VTHREAD_MEAS_H */
-
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/Vthread_Measurement.h	Sun Mar 04 14:29:42 2012 -0800
@@ -0,0 +1,108 @@
+/*
+ *
+ *
+ * Created on June 10, 2011, 12:20 PM
+ */
+
+#ifndef VTHREAD_MEAS_H
+#define VTHREAD_MEAS_H
+
+#ifdef MEAS__TURN_ON_LANG_MEAS
+
+   #ifdef MEAS__Make_Meas_Hists_for_Language
+   #undef MEAS__Make_Meas_Hists_for_Language
+   #endif
+
+//=================== Language-specific Measurement Stuff ===================
+//
+//
+   #define createHistIdx       1   //note: starts at 1
+   #define mutexLockHistIdx    2
+   #define mutexUnlockHistIdx  3
+   #define condWaitHistIdx     4
+   #define condSignalHistIdx   5
+
+   #define MEAS__Make_Meas_Hists_for_Language() \
+      _VMSMasterEnv->measHistsInfo = \
+       makePrivDynArrayOfSize( (void***)&(_VMSMasterEnv->measHists), 200); \
+      makeAMeasHist( createHistIdx,      "create",       250, 0, 100 ) \
+      makeAMeasHist( mutexLockHistIdx,   "mutex_lock",    50, 0, 100 ) \
+      makeAMeasHist( mutexUnlockHistIdx, "mutex_unlock",  50, 0, 100 ) \
+      makeAMeasHist( condWaitHistIdx,    "cond_wait",     50, 0, 100 ) \
+      makeAMeasHist( condSignalHistIdx,  "cond_signal",   50, 0, 100 )
+
+
+   #define Meas_startCreate \
+      int32 startStamp, endStamp; \
+      saveLowTimeStampCountInto( startStamp );
+
+   #define Meas_endCreate \
+      saveLowTimeStampCountInto( endStamp ); \
+      addIntervalToHist( startStamp, endStamp, \
+                         _VMSMasterEnv->measHists[ createHistIdx ] );
+
+   #define Meas_startMutexLock \
+      int32 startStamp, endStamp; \
+      saveLowTimeStampCountInto( startStamp );
+
+   #define Meas_endMutexLock \
+      saveLowTimeStampCountInto( endStamp ); \
+      addIntervalToHist( startStamp, endStamp, \
+                         _VMSMasterEnv->measHists[ mutexLockHistIdx ] );
+
+   #define Meas_startMutexUnlock \
+      int32 startStamp, endStamp; \
+      saveLowTimeStampCountInto( startStamp );
+
+   #define Meas_endMutexUnlock \
+      saveLowTimeStampCountInto( endStamp ); \
+      addIntervalToHist( startStamp, endStamp, \
+                         _VMSMasterEnv->measHists[ mutexUnlockHistIdx ] );
+
+   #define Meas_startCondWait \
+      int32 startStamp, endStamp; \
+      saveLowTimeStampCountInto( startStamp );
+
+   #define Meas_endCondWait \
+      saveLowTimeStampCountInto( endStamp ); \
+      addIntervalToHist( startStamp, endStamp, \
+                         _VMSMasterEnv->measHists[ condWaitHistIdx ] );
+
+   #define Meas_startCondSignal \
+      int32 startStamp, endStamp; \
+      saveLowTimeStampCountInto( startStamp );
+
+   #define Meas_endCondSignal \
+      saveLowTimeStampCountInto( endStamp ); \
+      addIntervalToHist( startStamp, endStamp, \
+                         _VMSMasterEnv->measHists[ condSignalHistIdx ] );
+
+#else //===================== turned off ==========================
+
+   #define MEAS__Make_Meas_Hists_for_Language()
+
+   #define Meas_startCreate
+
+   #define Meas_endCreate
+
+   #define Meas_startMutexLock
+
+   #define Meas_endMutexLock
+
+   #define Meas_startMutexUnlock
+
+   #define Meas_endMutexUnlock
+
+   #define Meas_startCondWait
+
+   #define Meas_endCondWait
+
+   #define Meas_startCondSignal
+
+   #define Meas_endCondSignal
+
+#endif /* MEAS__TURN_ON_LANG_MEAS */
+
+
+#endif /* VTHREAD_MEAS_H */
+
--- a/Vthread_PluginFns.c	Thu Mar 01 13:20:51 2012 -0800
+++ b/Vthread_PluginFns.c	Sun Mar 04 14:29:42 2012 -0800
@@ -8,26 +8,26 @@
 #include <stdlib.h>
 #include <malloc.h>
 
-#include "VMS/Queue_impl/PrivateQueue.h"
-#include "VPThread.h"
-#include "VPThread_Request_Handlers.h"
-#include "VPThread_helper.h"
+#include "C_Libraries/Queue_impl/PrivateQueue.h"
+#include "Vthread.h"
+#include "Vthread_Request_Handlers.h"
+#include "Vthread_helper.h"
 
 //=========================== Local Fn Prototypes ===========================
 
 void inline
-handleSemReq( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv );
+handleSemReq( VMSReqst *req, SlaveVP *requestingSlv, VthdSemEnv *semEnv );
 
 inline void
-handleDissipate( SlaveVP *requestingVP, VPThdSemEnv *semEnv );
+handleDissipate( SlaveVP *requestingSlv, VthdSemEnv *semEnv );
 
 inline void
-handleCreate( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv );
+handleCreate( VMSReqst *req, SlaveVP *requestingSlv, VthdSemEnv *semEnv );
 
 
 //============================== Scheduler ==================================
 //
-/*For VPThread, scheduling a slave simply takes the next work-unit off the
+/*For Vthread, scheduling a slave simply takes the next work-unit off the
  * ready-to-go work-unit queue and assigns it to the slaveToSched.
  *If the ready-to-go work-unit queue is empty, then nothing to schedule
  * to the slave -- return FALSE to let Master loop know scheduling that
@@ -35,16 +35,16 @@
 */
 char __Scheduler[] = "FIFO Scheduler"; //Gobal variable for name in saved histogram
 SlaveVP *
-VPThread__schedule_virt_procr( void *_semEnv, int coreNum )
- { SlaveVP     *schedVP;
-   VPThdSemEnv *semEnv;
+Vthread__schedule_slaveVP( void *_semEnv, int coreNum )
- { SlaveVP    *schedSlv;
+ { SlaveVP    *schedSlv;
+   VthdSemEnv *semEnv;
 
-   semEnv = (VPThdSemEnv *)_semEnv;
+   semEnv = (VthdSemEnv *)_semEnv;
 
-   schedVP = readPrivQ( semEnv->readyVPQs[coreNum] );
+   schedSlv = readPrivQ( semEnv->readySlvQs[coreNum] );
    //Note, using a non-blocking queue -- it returns NULL if queue empty
 
-   return( schedVP );
+   return( schedSlv );
 }
 
 
@@ -62,40 +62,40 @@
  * Processor, and initial data.
 */
 void
-VPThread__Request_Handler( SlaveVP *requestingVP, void *_semEnv )
- { VPThdSemEnv *semEnv;
+Vthread__Request_Handler( SlaveVP *requestingSlv, void *_semEnv )
+ { VthdSemEnv *semEnv;
   VMSReqst   *req;
 
-   semEnv = (VPThdSemEnv *)_semEnv;
+   semEnv = (VthdSemEnv *)_semEnv;
 
-   req = VMS__take_next_request_out_of( requestingVP );
+   req = VMS_PI__take_next_request_out_of( requestingSlv );
 
   while( req != NULL )
    {
     switch( req->reqType )
-      { case semantic:     handleSemReq(    req, requestingVP, semEnv);
+      { case semantic:     handleSemReq(    req, requestingSlv, semEnv);
          break;
-        case createReq:    handleCreate(    req, requestingVP, semEnv);
+        case createReq:    handleCreate(    req, requestingSlv, semEnv);
         break;
-        case dissipate:    handleDissipate( requestingVP, semEnv);
+        case dissipate:    handleDissipate( requestingSlv, semEnv);
         break;
-        case VMSSemantic:  VMS__handle_VMSSemReq(req, requestingVP, semEnv,
-                                            (ResumeVPFnPtr)&resume_procr);
+        case VMSSemantic:  VMS_PI__handle_VMSSemReq(req, requestingSlv, semEnv,
+                                            (ResumeSlvFnPtr)&resume_slaveVP);
         break;
        default:
         break;
      }
 
-      req = VMS__take_next_request_out_of( requestingVP );
+      req = VMS_PI__take_next_request_out_of( requestingSlv );
    } //while( req != NULL )
 }
 
 
 void inline
-handleSemReq( VMSReqst *req, SlaveVP *reqVP, VPThdSemEnv *semEnv )
- { VPThdSemReq *semReq;
+handleSemReq( VMSReqst *req, SlaveVP *reqSlv, VthdSemEnv *semEnv )
+ { VthdSemReq *semReq;
 
-   semReq = VMS__take_sem_reqst_from(req);
+   semReq = VMS_PI__take_sem_reqst_from(req);
   if( semReq == NULL ) return;
   switch( semReq->reqType )
    {
@@ -111,23 +111,23 @@
         break;
        case cond_signal:         handleCondSignal( semReq, semEnv);
         break;
-        case malloc_req:          handleMalloc( semReq, reqVP, semEnv);
+        case malloc_req:          handleMalloc( semReq, reqSlv, semEnv);
         break;
-        case free_req:            handleFree( semReq, reqVP, semEnv);
+        case free_req:            handleFree( semReq, reqSlv, semEnv);
         break;
-        case singleton_fn_start:  handleStartFnSingleton(semReq, reqVP, semEnv);
+        case singleton_fn_start:  handleStartFnSingleton(semReq, reqSlv, semEnv);
         break;
-        case singleton_fn_end:    handleEndFnSingleton( semReq, reqVP, semEnv);
+        case singleton_fn_end:    handleEndFnSingleton( semReq, reqSlv, semEnv);
         break;
-        case singleton_data_start:handleStartDataSingleton(semReq,reqVP,semEnv);
+        case singleton_data_start:handleStartDataSingleton(semReq,reqSlv,semEnv);
         break;
-        case singleton_data_end:  handleEndDataSingleton(semReq, reqVP, semEnv);
+        case singleton_data_end:  handleEndDataSingleton(semReq, reqSlv, semEnv);
         break;
-        case atomic:              handleAtomic( semReq, reqVP, semEnv);
+        case atomic:              handleAtomic( semReq, reqSlv, semEnv);
         break;
-        case trans_start:         handleTransStart( semReq, reqVP, semEnv);
+        case trans_start:         handleTransStart( semReq, reqSlv, semEnv);
         break;
-        case trans_end:           handleTransEnd( semReq, reqVP, semEnv);
+        case trans_end:           handleTransEnd( semReq, reqSlv, semEnv);
         break;
      }
 }
@@ -135,40 +135,34 @@
 //=========================== VMS Request Handlers ===========================
 //
 inline void
-handleDissipate( SlaveVP *requestingVP, VPThdSemEnv *semEnv )
+handleDissipate( SlaveVP *requestingSlv, VthdSemEnv *semEnv )
 {
    //free any semantic data allocated to the virt procr
-   VMS__free( requestingVP->semanticData );
+   VMS_PI__free( requestingSlv->semanticData );
 
-   //Now, call VMS to free_all AppVP state -- stack and so on
-   VMS__dissipate_procr( requestingVP );
-
-   semEnv->numVP -= 1;
-   if( semEnv->numVP == 0 )
-    { //no more work, so shutdown
-      VMS__shutdown();
-    }
+   //Now, call VMS to free_all AppSlv state -- stack and so on
+   VMS_PI__dissipate_slaveVP( requestingSlv );
 }
 
 inline void
-handleCreate( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv )
- { VPThdSemReq *semReq;
-   SlaveVP     *newVP;
+handleCreate( VMSReqst *req, SlaveVP *requestingSlv, VthdSemEnv *semEnv )
+ { VthdSemReq *semReq;
+   SlaveVP    *newSlv;
 
       //========================= MEASUREMENT STUFF ======================
       Meas_startCreate
      //==================================================================
 
-   semReq = VMS__take_sem_reqst_from( req );
+   semReq = VMS_PI__take_sem_reqst_from( req );
 
-   newVP = VPThread__create_procr_helper( semReq->fnPtr, semReq->initData,
+   newSlv = Vthread__create_slaveVP_helper( semReq->fnPtr, semReq->initData,
                                           semEnv, semReq->coreToScheduleOnto);
 
-   //For VPThread, caller needs ptr to created processor returned to it
-   requestingVP->dataRetFromReq = newVP;
+   //For Vthread, caller needs ptr to created processor returned to it
+   requestingSlv->dataRetFromReq = newSlv;
 
-   resume_procr( newVP,        semEnv );
-   resume_procr( requestingVP, semEnv );
+   resume_slaveVP( newSlv,        semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
 
       //========================= MEASUREMENT STUFF ======================
       Meas_endCreate
@@ -184,9 +178,9 @@
 
 //=========================== Helper ==============================
 void inline
-resume_procr( SlaveVP *procr, VPThdSemEnv *semEnv )
+resume_slaveVP( SlaveVP *procr, VthdSemEnv *semEnv )
 {
-   writePrivQ( procr, semEnv->readyVPQs[ procr->coreAnimatedBy] );
+   writePrivQ( procr, semEnv->readySlvQs[ procr->coreAnimatedBy] );
 }
 
 //===========================================================================
\ No newline at end of file
--- a/Vthread_Request_Handlers.c	Thu Mar 01 13:20:51 2012 -0800
+++ b/Vthread_Request_Handlers.c	Sun Mar 04 14:29:42 2012 -0800
@@ -8,7 +8,7 @@
 #include <stdlib.h>
 #include <malloc.h>

-#include "VMS_Implementations/VMS_impl/VMS.h"
+#include "VMS_impl/VMS.h"
 #include "C_Libraries/Queue_impl/PrivateQueue.h"
 #include "C_Libraries/Hash_impl/PrivateHash.h"
 #include "Vthread.h"
@@ -19,13 +19,13 @@
 /*The semantic request has a mutexIdx value, which acts as index into array.
  */
 inline void
-handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
- { VPThdMutex *newMutex;
-   SlaveVP    *requestingVP;
+handleMakeMutex( VthdSemReq *semReq, VthdSemEnv *semEnv)
+ { VthdMutex *newMutex;
+   SlaveVP   *requestingSlv;

-   requestingVP = semReq->requestingVP;
-   newMutex = VMS__malloc( sizeof(VPThdMutex) );
-   newMutex->waitingQueue = makeVMSPrivQ( requestingVP );
+   requestingSlv = semReq->requestingSlv;
+   newMutex = VMS_PI__malloc( sizeof(VthdMutex) );
+   newMutex->waitingQueue = makeVMSQ( requestingSlv );
    newMutex->holderOfLock = NULL;

    //The mutex struc contains an int that identifies it -- use that as
@@ -33,16 +33,16 @@
    newMutex->mutexIdx = addToDynArray( newMutex, semEnv->mutexDynArrayInfo );

    //Now communicate the mutex's identifying int back to requesting procr
-   semReq->requestingVP->dataRetFromReq = (void*)newMutex->mutexIdx; //mutexIdx is 32 bit
+   semReq->requestingSlv->dataRetFromReq = (void*)newMutex->mutexIdx; //mutexIdx is 32 bit

    //re-animate the requester
-   resume_procr( requestingVP, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
 }


 inline void
-handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
- { VPThdMutex *mutex;
+handleMutexLock( VthdSemReq *semReq, VthdSemEnv *semEnv)
+ { VthdMutex *mutex;
    //=================== Deterministic Replay ======================
    #ifdef RECORD_DETERMINISTIC_REPLAY

@@ -55,14 +55,14 @@
    //see if mutex is free or not
    if( mutex->holderOfLock == NULL ) //none holding, give lock to requester
    {
-      mutex->holderOfLock = semReq->requestingVP;
+      mutex->holderOfLock = semReq->requestingSlv;

       //re-animate requester, now that it has the lock
-      resume_procr( semReq->requestingVP, semEnv );
+      resume_slaveVP( semReq->requestingSlv, semEnv );
    }
    else //queue up requester to wait for release of lock
    {
-      writePrivQ( semReq->requestingVP, mutex->waitingQueue );
+      writeVMSQ( semReq->requestingSlv, mutex->waitingQueue );
    }
    Meas_endMutexLock
 }
@@ -70,24 +70,24 @@
 /*
  */
 inline void
-handleMutexUnlock( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
- { VPThdMutex *mutex;
+handleMutexUnlock( VthdSemReq *semReq, VthdSemEnv *semEnv)
+ { VthdMutex *mutex;

    Meas_startMutexUnlock
    //lookup mutex struc, using mutexIdx as index
    mutex = semEnv->mutexDynArray[ semReq->mutexIdx ];

    //set new holder of mutex-lock to be next in queue (NULL if empty)
-   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
+   mutex->holderOfLock = readVMSQ( mutex->waitingQueue );

    //if have new non-NULL holder, re-animate it
    if( mutex->holderOfLock != NULL )
    {
-      resume_procr( mutex->holderOfLock, semEnv );
+      resume_slaveVP( mutex->holderOfLock, semEnv );
    }

    //re-animate the releaser of the lock
-   resume_procr( semReq->requestingVP, semEnv );
+   resume_slaveVP( semReq->requestingSlv, semEnv );
    Meas_endMutexUnlock
 }

@@ -104,25 +104,25 @@
  * interacting with that cond var.  So, make this pairing explicit.
  */
 inline void
-handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
- { VPThdCond *newCond;
-   SlaveVP   *requestingVP;
+handleMakeCond( VthdSemReq *semReq, VthdSemEnv *semEnv)
+ { VthdCond *newCond;
+   SlaveVP  *requestingSlv;

-   requestingVP = semReq->requestingVP;
-   newCond = VMS__malloc( sizeof(VPThdCond) );
+   requestingSlv = semReq->requestingSlv;
+   newCond = VMS_PI__malloc( sizeof(VthdCond) );
    newCond->partnerMutex = semEnv->mutexDynArray[ semReq->mutexIdx ];

-   newCond->waitingQueue = makeVMSPrivQ();
+   newCond->waitingQueue = makeVMSQ();

    //The cond struc contains an int that identifies it -- use that as
    // its index within the array of conds.  Add the new cond to array.
    newCond->condIdx = addToDynArray( newCond, semEnv->condDynArrayInfo );

    //Now communicate the cond's identifying int back to requesting procr
-   semReq->requestingVP->dataRetFromReq = (void*)newCond->condIdx; //condIdx is 32 bit
+   semReq->requestingSlv->dataRetFromReq = (void*)newCond->condIdx; //condIdx is 32 bit

    //re-animate the requester
-   resume_procr( requestingVP, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
 }


@@ -131,24 +131,24 @@
  * the designers of Posix standard ; )
  */
 inline void
-handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
- { VPThdCond  *cond;
-   VPThdMutex *mutex;
+handleCondWait( VthdSemReq *semReq, VthdSemEnv *semEnv)
+ { VthdCond  *cond;
+   VthdMutex *mutex;

    Meas_startCondWait
    //get cond struc out of array of them that's in the sem env
    cond = semEnv->condDynArray[ semReq->condIdx ];

    //add requester to queue of wait-ers
-   writePrivQ( semReq->requestingVP, cond->waitingQueue );
+   writeVMSQ( semReq->requestingSlv, cond->waitingQueue );

    //unlock mutex -- can't reuse above handler 'cause not queuing releaser
    mutex = cond->partnerMutex;
-   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
+   mutex->holderOfLock = readVMSQ( mutex->waitingQueue );

    if( mutex->holderOfLock != NULL )
    {
-      resume_procr( mutex->holderOfLock, semEnv );
+      resume_slaveVP( mutex->holderOfLock, semEnv );
    }
    Meas_endCondWait
 }
@@ -158,25 +158,25 @@
  * that gets the lock
  */
 inline void
-handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
- { VPThdCond  *cond;
-   VPThdMutex *mutex;
-   SlaveVP    *waitingVP;
+handleCondSignal( VthdSemReq *semReq, VthdSemEnv *semEnv)
+ { VthdCond  *cond;
+   VthdMutex *mutex;
+   SlaveVP   *waitingSlv;

    Meas_startCondSignal;
    //get cond struc out of array of them that's in the sem env
    cond = semEnv->condDynArray[ semReq->condIdx ];

    //take next waiting procr out of queue
-   waitingVP = readPrivQ( cond->waitingQueue );
+   waitingSlv = readVMSQ( cond->waitingQueue );

    //transfer waiting procr to wait queue of mutex
    // mutex is guaranteed to be held by signalling procr, so no check
    mutex = cond->partnerMutex;
-   pushPrivQ( waitingVP, mutex->waitingQueue ); //is first out when read
+   writeVMSQ( waitingSlv, mutex->waitingQueue ); //is first out when read

    //re-animate the signalling procr
-   resume_procr( semReq->requestingVP, semEnv );
+   resume_slaveVP( semReq->requestingSlv, semEnv );
    Meas_endCondSignal;
 }

@@ -187,7 +187,7 @@
 /*
  */
 void inline
-handleMalloc(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv)
+handleMalloc(VthdSemReq *semReq, SlaveVP *requestingSlv,VthdSemEnv *semEnv)
  { void *ptr;

    //========================= MEASUREMENT STUFF ======================
@@ -196,9 +196,9 @@
    saveLowTimeStampCountInto( startStamp );
    #endif
    //==================================================================
-   ptr = VMS__malloc( semReq->sizeToMalloc );
-   requestingVP->dataRetFromReq = ptr;
-   resume_procr( requestingVP, semEnv );
+   ptr = VMS_PI__malloc( semReq->sizeToMalloc );
+   requestingSlv->dataRetFromReq = ptr;
+   resume_slaveVP( requestingSlv, semEnv );
    //========================= MEASUREMENT STUFF ======================
    #ifdef MEAS__TIME_PLUGIN
    saveLowTimeStampCountInto( endStamp );
@@ -211,7 +211,7 @@
 /*
  */
 void inline
-handleFree( VPThdSemReq *semReq, SlaveVP *requestingVP, VPThdSemEnv *semEnv)
+handleFree( VthdSemReq *semReq, SlaveVP *requestingSlv, VthdSemEnv *semEnv)
  {
    //========================= MEASUREMENT STUFF ======================
    #ifdef MEAS__TIME_PLUGIN
@@ -219,8 +219,8 @@
    saveLowTimeStampCountInto( startStamp );
    #endif
    //==================================================================
-   VMS__free( semReq->ptrToFree );
-   resume_procr( requestingVP, semEnv );
+   VMS_PI__free( semReq->ptrToFree );
+   resume_slaveVP( requestingSlv, semEnv );
    //========================= MEASUREMENT STUFF ======================
    #ifdef MEAS__TIME_PLUGIN
    saveLowTimeStampCountInto( endStamp );
@@ -237,113 +237,113 @@
  * end-label.  Else, sets flag and resumes normally.
  */
 void inline
-handleStartSingleton_helper( VPThdSingleton *singleton, SlaveVP *reqstingVP,
-                             VPThdSemEnv *semEnv )
+handleStartSingleton_helper( VthdSingleton *singleton, SlaveVP *reqstingSlv,
+                             VthdSemEnv *semEnv )
  {
    if( singleton->hasFinished )
    {  //the code that sets the flag to true first sets the end instr addr
-      reqstingVP->dataRetFromReq = singleton->endInstrAddr;
-      resume_procr( reqstingVP, semEnv );
+      reqstingSlv->dataRetFromReq = singleton->savedRetAddr;
+      resume_slaveVP( reqstingSlv, semEnv );
       return;
    }
    else if( singleton->hasBeenStarted )
    {  //singleton is in-progress in a diff slave, so wait for it to finish
-      writePrivQ(reqstingVP, singleton->waitQ );
+      writeVMSQ(reqstingSlv, singleton->waitQ );
       return;
    }
    else
    {  //hasn't been started, so this is the first attempt at the singleton
      singleton->hasBeenStarted = TRUE;
-      reqstingVP->dataRetFromReq = 0x0;
-      resume_procr( reqstingVP, semEnv );
+      reqstingSlv->dataRetFromReq = 0x0;
+      resume_slaveVP( reqstingSlv, semEnv );
      return;
    }
 }
 void inline
-handleStartFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                        VPThdSemEnv *semEnv )
- { VPThdSingleton *singleton;
+handleStartFnSingleton( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                        VthdSemEnv *semEnv )
+ { VthdSingleton *singleton;

    singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
-   handleStartSingleton_helper( singleton, requestingVP, semEnv );
+   handleStartSingleton_helper( singleton, requestingSlv, semEnv );
 }
 void inline
-handleStartDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                          VPThdSemEnv *semEnv )
- { VPThdSingleton *singleton;
+handleStartDataSingleton( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                          VthdSemEnv *semEnv )
+ { VthdSingleton *singleton;

-   if( *(semReq->singletonPtrAddr) == NULL )
-    { singleton = VMS__malloc( sizeof(VPThdSingleton) );
-      singleton->waitQ = makeVMSPrivQ();
-      singleton->endInstrAddr = 0x0;
+   if( semReq->singleton == NULL )
+    { singleton = VMS_PI__malloc( sizeof(VthdSingleton) );
+      singleton->waitQ = makeVMSQ();
+      singleton->savedRetAddr = 0x0;
      singleton->hasBeenStarted = FALSE;
      singleton->hasFinished = FALSE;
-      *(semReq->singletonPtrAddr) = singleton;
+      semReq->singleton = singleton;
    }
    else
-      singleton = *(semReq->singletonPtrAddr);
-   handleStartSingleton_helper( singleton, requestingVP, semEnv );
+      singleton = semReq->singleton;
+   handleStartSingleton_helper( singleton, requestingSlv, semEnv );
 }


 void inline
-handleEndSingleton_helper( VPThdSingleton *singleton, SlaveVP *requestingVP,
-                           VPThdSemEnv *semEnv )
- { PrivQueueStruc *waitQ;
+handleEndSingleton_helper( VthdSingleton *singleton, SlaveVP *requestingSlv,
+                           VthdSemEnv *semEnv )
+ { VMSQueueStruc *waitQ;
    int32 numWaiting, i;
-   SlaveVP *resumingVP;
+   SlaveVP *resumingSlv;

    if( singleton->hasFinished )
    {  //by definition, only one slave should ever be able to run end singleton
       // so if this is true, is an error
-      //VMS__throw_exception( "singleton code ran twice", requestingVP, NULL);
+      //VMS_PI__throw_exception( "singleton code ran twice", requestingSlv, NULL);
    }

    singleton->hasFinished = TRUE;
    waitQ = singleton->waitQ;
-   numWaiting = numInPrivQ( waitQ );
+   numWaiting = numInVMSQ( waitQ );
    for( i = 0; i < numWaiting; i++ )
    {  //they will resume inside start singleton, then jmp to end singleton
-      resumingVP = readPrivQ( waitQ );
-      resumingVP->dataRetFromReq = singleton->endInstrAddr;
-      resume_procr( resumingVP, semEnv );
+      resumingSlv = readVMSQ( waitQ );
+      resumingSlv->dataRetFromReq = singleton->savedRetAddr;
+      resume_slaveVP( resumingSlv, semEnv );
    }

-   resume_procr( requestingVP, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );

 }
 void inline
-handleEndFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                      VPThdSemEnv *semEnv )
+handleEndFnSingleton( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                      VthdSemEnv *semEnv )
  {
-   VPThdSingleton *singleton;
+   VthdSingleton *singleton;

    singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
-   handleEndSingleton_helper( singleton, requestingVP, semEnv );
+   handleEndSingleton_helper( singleton, requestingSlv, semEnv );
 }
 void inline
-handleEndDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                        VPThdSemEnv *semEnv )
+handleEndDataSingleton( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                        VthdSemEnv *semEnv )
  {
-   VPThdSingleton *singleton;
+   VthdSingleton *singleton;

-   singleton = *(semReq->singletonPtrAddr);
-   handleEndSingleton_helper( singleton, requestingVP, semEnv );
+   singleton = semReq->singleton;
+   handleEndSingleton_helper( singleton, requestingSlv, semEnv );
 }


 /*This executes the function in the masterVP, take the function
- * pointer out of the request and call it, then resume the VP.
+ * pointer out of the request and call it, then resume the Slv.
  */
 void inline
-handleAtomic(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv)
+handleAtomic(VthdSemReq *semReq, SlaveVP *requestingSlv,VthdSemEnv *semEnv)
  {
    semReq->fnToExecInMaster( semReq->dataForFn );
-   resume_procr( requestingVP, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
 }

-/*First, it looks at the VP's semantic data, to see the highest transactionID
- * that VP
+/*First, it looks at the Slv's semantic data, to see the highest transactionID
+ * that Slv
  * already has entered.  If the current ID is not larger, it throws an
  * exception stating a bug in the code.
  *Otherwise it puts the current ID
@@ -351,22 +351,22 @@
  * used to check that exits are properly ordered.
  *Next it is uses transactionID as index into an array of transaction
  * structures.
- *If the "VP_currently_executing" field is non-null, then put requesting VP
+ *If the "Slv_currently_executing" field is non-null, then put requesting Slv
  * into queue in the struct.  (At some point a holder will request
- * end-transaction, which will take this VP from the queue and resume it.)
+ * end-transaction, which will take this Slv from the queue and resume it.)
  *If NULL, then write requesting into the field and resume.
  */
 void inline
-handleTransStart( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                  VPThdSemEnv *semEnv )
- { VPThdSemData *semData;
+handleTransStart( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                  VthdSemEnv *semEnv )
+ { VthdSemData *semData;
    TransListElem *nextTransElem;

    //check ordering of entering transactions is correct
-   semData = requestingVP->semanticData;
+   semData = requestingSlv->semanticData;
    if( semData->highestTransEntered > semReq->transID )
    {  //throw VMS exception, which shuts down VMS.
-      VMS__throw_exception( "transID smaller than prev", requestingVP, NULL);
+      VMS_PI__throw_exception( "transID smaller than prev", requestingSlv, NULL);
    }
    //add this trans ID to the list of transactions entered -- check when
    // end a transaction
@@ -377,68 +377,68 @@
    semData->lastTransEntered = nextTransElem;

    //get the structure for this transaction ID
-   VPThdTrans *
+   VthdTrans *
    transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);

-   if( transStruc->VPCurrentlyExecuting == NULL )
+   if( transStruc->SlvCurrentlyExecuting == NULL )
    {
-      transStruc->VPCurrentlyExecuting = requestingVP;
-      resume_procr( requestingVP, semEnv );
+      transStruc->SlvCurrentlyExecuting = requestingSlv;
+      resume_slaveVP( requestingSlv, semEnv );
    }
    else
-    { //note, might make future things cleaner if save request with VP and
+    { //note, might make future things cleaner if save request with Slv and
      // add this trans ID to the linked list when gets out of queue.
      // but don't need for now, and lazy..
-      writePrivQ( requestingVP, transStruc->waitingVPQ );
+      writeVMSQ( requestingSlv, transStruc->waitingSlvQ );
    }
 }


 /*Use the trans ID to get the transaction structure from the array.
- *Look at VP_currently_executing to be sure it's same as requesting VP.
+ *Look at Slv_currently_executing to be sure it's same as requesting Slv.
  * If different, throw an exception, stating there's a bug in the code.
  *Next, take the first element off the list of entered transactions.
  * Check to be sure the ending transaction is the same ID as the next on
  * the list.  If not, incorrectly nested so throw an exception.
  *
  *Next, get from the queue in the structure.
- *If it's empty, set VP_currently_executing field to NULL and resume
- * requesting VP.
- *If get somethine, set VP_currently_executing to the VP from the queue, then
+ *If it's empty, set Slv_currently_executing field to NULL and resume
+ * requesting Slv.
+ *If get somethine, set Slv_currently_executing to the Slv from the queue, then
  * resume both.
  */
 void inline
-handleTransEnd( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                VPThdSemEnv *semEnv)
- { VPThdSemData *semData;
-   SlaveVP      *waitingVP;
-   VPThdTrans   *transStruc;
+handleTransEnd( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                VthdSemEnv *semEnv)
+ { VthdSemData *semData;
+   SlaveVP     *waitingSlv;
+   VthdTrans   *transStruc;
    TransListElem *lastTrans;

    transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);

-   //make sure transaction ended in same VP as started it.
-   if( transStruc->VPCurrentlyExecuting != requestingVP )
+   //make sure transaction ended in same Slv as started it.
+   if( transStruc->SlvCurrentlyExecuting != requestingSlv )
    {
-      VMS__throw_exception( "trans ended in diff VP", requestingVP, NULL );
+      VMS_PI__throw_exception( "trans ended in diff Slv", requestingSlv, NULL );
    }

    //make sure nesting is correct -- last ID entered should == this ID
-   semData = requestingVP->semanticData;
+   semData = requestingSlv->semanticData;
    lastTrans = semData->lastTransEntered;
    if( lastTrans->transID != semReq->transID )
    {
-      VMS__throw_exception( "trans incorrectly nested", requestingVP, NULL );
+      VMS_PI__throw_exception( "trans incorrectly nested", requestingSlv, NULL );
    }

    semData->lastTransEntered = semData->lastTransEntered->nextTrans;


-   waitingVP = readPrivQ( transStruc->waitingVPQ );
-   transStruc->VPCurrentlyExecuting = waitingVP;
+   waitingSlv = readVMSQ( transStruc->waitingSlvQ );
+   transStruc->SlvCurrentlyExecuting = waitingSlv;

-   if( waitingVP != NULL )
-      resume_procr( waitingVP, semEnv );
+   if( waitingSlv != NULL )
+      resume_slaveVP( waitingSlv, semEnv );

-   resume_procr( requestingVP, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
 }
--- a/Vthread_Request_Handlers.h	Thu Mar 01 13:20:51 2012 -0800
+++ b/Vthread_Request_Handlers.h	Sun Mar 04 14:29:42 2012 -0800
@@ -6,52 +6,52 @@
  *
  */

-#ifndef _VPThread_REQ_H
-#define _VPThread_REQ_H
+#ifndef _Vthread_REQ_H
+#define _Vthread_REQ_H

-#include "VPThread.h"
+#include "Vthread.h"

-/*This header defines everything specific to the VPThread semantic plug-in
+/*This header defines everything specific to the Vthread semantic plug-in
  */

 inline void
-handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
+handleMakeMutex( VthdSemReq *semReq, VthdSemEnv *semEnv);
 inline void
-handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
+handleMutexLock( VthdSemReq *semReq, VthdSemEnv *semEnv);
 inline void
-handleMutexUnlock(VPThdSemReq *semReq, VPThdSemEnv *semEnv);
+handleMutexUnlock(VthdSemReq *semReq, VthdSemEnv *semEnv);
 inline void
-handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
+handleMakeCond( VthdSemReq *semReq, VthdSemEnv *semEnv);
 inline void
-handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
+handleCondWait( VthdSemReq *semReq, VthdSemEnv *semEnv);
 inline void
-handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
+handleCondSignal( VthdSemReq *semReq, VthdSemEnv *semEnv);
 void inline
-handleMalloc(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv);
+handleMalloc(VthdSemReq *semReq, SlaveVP *requestingSlv,VthdSemEnv *semEnv);
 void inline
-handleFree( VPThdSemReq *semReq, SlaveVP *requestingVP, VPThdSemEnv *semEnv);
+handleFree( VthdSemReq *semReq, SlaveVP *requestingSlv, VthdSemEnv *semEnv);
 inline void
-handleStartFnSingleton( VPThdSemReq *semReq, SlaveVP *reqstingVP,
-                        VPThdSemEnv *semEnv );
+handleStartFnSingleton( VthdSemReq *semReq, SlaveVP *reqstingSlv,
+                        VthdSemEnv *semEnv );
 inline void
-handleEndFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                      VPThdSemEnv *semEnv );
+handleEndFnSingleton( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                      VthdSemEnv *semEnv );
 inline void
-handleStartDataSingleton( VPThdSemReq *semReq, SlaveVP *reqstingVP,
-                          VPThdSemEnv *semEnv );
+handleStartDataSingleton( VthdSemReq *semReq, SlaveVP *reqstingSlv,
+                          VthdSemEnv *semEnv );
 inline void
-handleEndDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                        VPThdSemEnv *semEnv );
+handleEndDataSingleton( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                        VthdSemEnv *semEnv );
 void inline
-handleAtomic( VPThdSemReq *semReq, SlaveVP *requestingVP,
-              VPThdSemEnv *semEnv);
+handleAtomic( VthdSemReq *semReq, SlaveVP *requestingSlv,
+              VthdSemEnv *semEnv);
 void inline
-handleTransStart( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                  VPThdSemEnv *semEnv );
+handleTransStart( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                  VthdSemEnv *semEnv );
 void inline
-handleTransEnd( VPThdSemReq *semReq, SlaveVP *requestingVP,
-                VPThdSemEnv *semEnv);
+handleTransEnd( VthdSemReq *semReq, SlaveVP *requestingSlv,
+                VthdSemEnv *semEnv);


-#endif /* _VPThread_REQ_H */
+#endif /* _Vthread_REQ_H */
--- a/Vthread_helper.c	Thu Mar 01 13:20:51 2012 -0800
+++ b/Vthread_helper.c	Sun Mar 04 14:29:42 2012 -0800
@@ -1,48 +1,46 @@

 #include <stddef.h>

-#include "VMS/VMS.h"
-#include "VPThread.h"
+#include "VMS_impl/VMS.h"
+#include "Vthread.h"

 /*Re-use this in the entry-point fn
  */
 inline SlaveVP *
-VPThread__create_procr_helper( TopLevelFnPtr fnPtr, void *initData,
-                               VPThdSemEnv *semEnv, int32 coreToScheduleOnto )
- { SlaveVP      *newVP;
-   VPThdSemData *semData;
+Vthread__create_slaveVP_helper( TopLevelFnPtr fnPtr, void *initData,
+                                VthdSemEnv *semEnv, int32 coreToScheduleOnto )
+ { SlaveVP     *newSlv;
+   VthdSemData *semData;

    //This is running in master, so use internal version
-   newVP = VMS__create_procr( fnPtr, initData );
+   newSlv = VMS_WL__create_slaveVP( fnPtr, initData );

-   semEnv->numVP += 1;
-
-   semData = VMS__malloc( sizeof(VPThdSemData) );
+   semData = VMS_WL__malloc( sizeof(VthdSemData) );
    semData->highestTransEntered = -1;
    semData->lastTransEntered = NULL;

-   newVP->semanticData = semData;
+   newSlv->semanticData = semData;

    //=================== Assign new processor to a core =====================
    #ifdef SEQUENTIAL
-   newVP->coreAnimatedBy = 0;
+   newSlv->coreAnimatedBy = 0;

    #else

    if(coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES )
    {  //out-of-range, so round-robin assignment
-      newVP->coreAnimatedBy = semEnv->nextCoreToGetNewVP;
+      newSlv->coreAnimatedBy = semEnv->nextCoreToGetNewSlv;

-      if( semEnv->nextCoreToGetNewVP >= NUM_CORES - 1 )
-         semEnv->nextCoreToGetNewVP = 0;
+      if( semEnv->nextCoreToGetNewSlv >= NUM_CORES - 1 )
+         semEnv->nextCoreToGetNewSlv = 0;
       else
-         semEnv->nextCoreToGetNewVP += 1;
+         semEnv->nextCoreToGetNewSlv += 1;
    }
    else //core num in-range, so use it
-    { newVP->coreAnimatedBy = coreToScheduleOnto;
+    { newSlv->coreAnimatedBy = coreToScheduleOnto;
    }
    #endif
    //========================================================================

-   return newVP;
+   return newSlv;
 }
\ No newline at end of file
--- a/Vthread_helper.h	Thu Mar 01 13:20:51 2012 -0800
+++ b/Vthread_helper.h	Sun Mar 04 14:29:42 2012 -0800
@@ -1,19 +1,19 @@
 /*
- * File:   VPThread_helper.h
+ * File:   Vthread_helper.h
  * Author: msach
  *
  * Created on June 10, 2011, 12:20 PM
  */

-#include "VMS/VMS.h"
-#include "VPThread.h"
+#include "VMS_impl/VMS.h"
+#include "Vthread.h"

-#ifndef VPTHREAD_HELPER_H
-#define VPTHREAD_HELPER_H
+#ifndef VTHREAD_HELPER_H
+#define VTHREAD_HELPER_H

 inline SlaveVP *
-VPThread__create_procr_helper( TopLevelFnPtr fnPtr, void *initData,
-                               VPThdSemEnv *semEnv, int32 coreToScheduleOnto );
+Vthread__create_slaveVP_helper( TopLevelFnPtr fnPtr, void *initData,
+                                VthdSemEnv *semEnv, int32 coreToScheduleOnto );

-#endif /* VPTHREAD_HELPER_H */
+#endif /* VTHREAD_HELPER_H */
11.1 --- a/Vthread_lib.c Thu Mar 01 13:20:51 2012 -0800 11.2 +++ b/Vthread_lib.c Sun Mar 04 14:29:42 2012 -0800 11.3 @@ -6,25 +6,24 @@ 11.4 11.5 #include <stdio.h> 11.6 #include <stdlib.h> 11.7 -#include <malloc.h> 11.8 11.9 -#include "VMS/VMS.h" 11.10 -#include "VPThread.h" 11.11 -#include "VPThread_helper.h" 11.12 -#include "VMS/Queue_impl/PrivateQueue.h" 11.13 -#include "VMS/Hash_impl/PrivateHash.h" 11.14 +#include "VMS_impl/VMS.h" 11.15 +#include "Vthread.h" 11.16 +#include "Vthread_helper.h" 11.17 +#include "C_Libraries/Queue_impl/PrivateQueue.h" 11.18 +#include "C_Libraries/Hash_impl/PrivateHash.h" 11.19 11.20 11.21 //========================================================================== 11.22 11.23 void 11.24 -VPThread__init(); 11.25 +Vthread__init(); 11.26 11.27 void 11.28 -VPThread__init_Seq(); 11.29 +Vthread__init_Seq(); 11.30 11.31 void 11.32 -VPThread__init_Helper(); 11.33 +Vthread__init_Helper(); 11.34 11.35 11.36 //=========================================================================== 11.37 @@ -34,24 +33,24 @@ 11.38 * 11.39 *There's a pattern for the outside sequential code to interact with the 11.40 * VMS_HW code. 11.41 - *The VMS_HW system is inside a boundary.. every VPThread system is in its 11.42 + *The VMS_HW system is inside a boundary.. every Vthread system is in its 11.43 * own directory that contains the functions for each of the processor types. 11.44 * One of the processor types is the "seed" processor that starts the 11.45 * cascade of creating all the processors that do the work. 11.46 *So, in the directory is a file called "EntryPoint.c" that contains the 11.47 * function, named appropriately to the work performed, that the outside 11.48 * sequential code calls. 
This function follows a pattern: 11.49 - *1) it calls VPThread__init() 11.50 + *1) it calls Vthread__init() 11.51 *2) it creates the initial data for the seed processor, which is passed 11.52 * in to the function 11.53 - *3) it creates the seed VPThread processor, with the data to start it with. 11.54 - *4) it calls startVPThreadThenWaitUntilWorkDone 11.55 + *3) it creates the seed Vthread processor, with the data to start it with. 11.56 + *4) it calls startVthreadThenWaitUntilWorkDone 11.57 *5) it gets the returnValue from the transfer struc and returns that 11.58 * from the function 11.59 * 11.60 - *For now, a new VPThread system has to be created via VPThread__init every 11.61 + *For now, a new Vthread system has to be created via Vthread__init every 11.62 * time an entry point function is called -- later, might add letting the 11.63 - * VPThread system be created once, and let all the entry points just reuse 11.64 + * Vthread system be created once, and let all the entry points just reuse 11.65 * it -- want to be as simple as possible now, and see by using what makes 11.66 * sense for later.. 
11.67 */ 11.68 @@ -74,47 +73,47 @@ 11.69 * any of the data reachable from initData passed in to here 11.70 */ 11.71 void 11.72 -VPThread__create_seed_procr_and_do_work( TopLevelFnPtr fnPtr, void *initData ) 11.73 - { VPThdSemEnv *semEnv; 11.74 - SlaveVP *seedVP; 11.75 +Vthread__create_seed_slaveVP_and_do_work( TopLevelFnPtr fnPtr, void *initData ) 11.76 + { VthdSemEnv *semEnv; 11.77 + SlaveVP *seedSlv; 11.78 11.79 #ifdef SEQUENTIAL 11.80 - VPThread__init_Seq(); //debug sequential exe 11.81 + Vthread__init_Seq(); //debug sequential exe 11.82 #else 11.83 - VPThread__init(); //normal multi-thd 11.84 + Vthread__init(); //normal multi-thd 11.85 #endif 11.86 semEnv = _VMSMasterEnv->semanticEnv; 11.87 11.88 - //VPThread starts with one processor, which is put into initial environ, 11.89 + //Vthread starts with one processor, which is put into initial environ, 11.90 // and which then calls create() to create more, thereby expanding work 11.91 - seedVP = VPThread__create_procr_helper( fnPtr, initData, semEnv, -1 ); 11.92 + seedSlv = Vthread__create_slaveVP_helper( fnPtr, initData, semEnv, -1 ); 11.93 11.94 - resume_procr( seedVP, semEnv ); 11.95 + resume_slaveVP( seedSlv, semEnv ); 11.96 11.97 #ifdef SEQUENTIAL 11.98 - VMS__start_the_work_then_wait_until_done_Seq(); //debug sequential exe 11.99 + VMS_SS__start_the_work_then_wait_until_done_Seq(); //debug sequential exe 11.100 #else 11.101 - VMS__start_the_work_then_wait_until_done(); //normal multi-thd 11.102 + VMS_SS__start_the_work_then_wait_until_done(); //normal multi-thd 11.103 #endif 11.104 11.105 - VPThread__cleanup_after_shutdown(); 11.106 + Vthread__cleanup_after_shutdown(); 11.107 } 11.108 11.109 11.110 inline int32 11.111 -VPThread__giveMinWorkUnitCycles( float32 percentOverhead ) 11.112 +Vthread__giveMinWorkUnitCycles( float32 percentOverhead ) 11.113 { 11.114 return MIN_WORK_UNIT_CYCLES; 11.115 } 11.116 11.117 inline int32 11.118 -VPThread__giveIdealNumWorkUnits() 11.119 +Vthread__giveIdealNumWorkUnits() 11.120 
{ 11.121 return NUM_SCHED_SLOTS * NUM_CORES; 11.122 } 11.123 11.124 inline int32 11.125 -VPThread__give_number_of_cores_to_schedule_onto() 11.126 +Vthread__give_number_of_cores_to_schedule_onto() 11.127 { 11.128 return NUM_CORES; 11.129 } 11.130 @@ -123,8 +122,8 @@ 11.131 * saves jump point, and second jumps back several times to get reliable time 11.132 */ 11.133 inline void 11.134 -VPThread__start_primitive() 11.135 - { saveLowTimeStampCountInto( ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))-> 11.136 +Vthread__start_primitive() 11.137 + { saveLowTimeStampCountInto( ((VthdSemEnv *)(_VMSMasterEnv->semanticEnv))-> 11.138 primitiveStartTime ); 11.139 } 11.140 11.141 @@ -134,17 +133,17 @@ 11.142 * also to throw out any "weird" values due to OS interrupt or TSC rollover 11.143 */ 11.144 inline int32 11.145 -VPThread__end_primitive_and_give_cycles() 11.146 +Vthread__end_primitive_and_give_cycles() 11.147 { int32 endTime, startTime; 11.148 //TODO: fix by repeating time-measurement 11.149 saveLowTimeStampCountInto( endTime ); 11.150 - startTime=((VPThdSemEnv*)(_VMSMasterEnv->semanticEnv))->primitiveStartTime; 11.151 + startTime=((VthdSemEnv*)(_VMSMasterEnv->semanticEnv))->primitiveStartTime; 11.152 return (endTime - startTime); 11.153 } 11.154 11.155 //=========================================================================== 11.156 // 11.157 -/*Initializes all the data-structures for a VPThread system -- but doesn't 11.158 +/*Initializes all the data-structures for a Vthread system -- but doesn't 11.159 * start it running yet! 11.160 * 11.161 * 11.162 @@ -154,56 +153,52 @@ 11.163 * for creating the seed processor and then starting the work. 
 */
void
-VPThread__init()
+Vthread__init()
 {
-   VMS__init();
+   VMS_SS__init();
   //masterEnv, a global var, now is partially set up by init_VMS

-   //Moved here from VMS.c because this is not parallel construct independent
-   MakeTheMeasHists();
-
-   VPThread__init_Helper();
+   Vthread__init_Helper();
 }

#ifdef SEQUENTIAL
void
-VPThread__init_Seq()
+Vthread__init_Seq()
 {
-   VMS__init_Seq();
+   VMS_SS__init_Seq();
   flushRegisters();
   //masterEnv, a global var, now is partially set up by init_VMS

-   VPThread__init_Helper();
+   Vthread__init_Helper();
 }
#endif

void
-VPThread__init_Helper()
- { VPThdSemEnv    *semanticEnv;
-   PrivQueueStruc **readyVPQs;
+Vthread__init_Helper()
+ { VthdSemEnv     *semanticEnv;
+   PrivQueueStruc **readySlvQs;
   int             coreIdx, i;

   //Hook up the semantic layer's plug-ins to the Master virt procr
-   _VMSMasterEnv->requestHandler = &VPThread__Request_Handler;
-   _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr;
+   _VMSMasterEnv->requestHandler = &Vthread__Request_Handler;
+   _VMSMasterEnv->slaveAssigner  = &Vthread__schedule_slaveVP;

   //create the semantic layer's environment (all its data) and add to
   // the master environment
-   semanticEnv = VMS__malloc( sizeof( VPThdSemEnv ) );
+   semanticEnv = VMS_WL__malloc( sizeof( VthdSemEnv ) );
   _VMSMasterEnv->semanticEnv = semanticEnv;

   //create the ready queue
-   readyVPQs = VMS__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
+   readySlvQs = VMS_WL__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );

   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
    {
-      readyVPQs[ coreIdx ] = makeVMSPrivQ();
+      readySlvQs[ coreIdx ] = makeVMSQ();
    }

-   semanticEnv->readyVPQs = readyVPQs;
+   semanticEnv->readySlvQs = readySlvQs;

-   semanticEnv->numVP = 0;
-   semanticEnv->nextCoreToGetNewVP = 0;
+   semanticEnv->nextCoreToGetNewSlv = 0;

   semanticEnv->mutexDynArrayInfo =
    makePrivDynArrayOfSize( (void*)&(semanticEnv->mutexDynArray), INIT_NUM_MUTEX );
@@ -216,23 +211,23 @@
   //semanticEnv->transactionStrucs = makeDynArrayInfo( );
   for( i = 0; i < NUM_STRUCS_IN_SEM_ENV; i++ )
    {
-      semanticEnv->fnSingletons[i].endInstrAddr   = NULL;
+      semanticEnv->fnSingletons[i].savedRetAddr   = NULL;
      semanticEnv->fnSingletons[i].hasBeenStarted = FALSE;
      semanticEnv->fnSingletons[i].hasFinished    = FALSE;
-      semanticEnv->fnSingletons[i].waitQ          = makeVMSPrivQ();
-      semanticEnv->transactionStrucs[i].waitingVPQ = makeVMSPrivQ();
+      semanticEnv->fnSingletons[i].waitQ          = makeVMSQ();
+      semanticEnv->transactionStrucs[i].waitingSlvQ = makeVMSQ();
    }
 }


-/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown
+/*Frees any memory allocated by Vthread__init() then calls VMS__shutdown
 */
void
-VPThread__cleanup_after_shutdown()
- { /*VPThdSemEnv *semEnv;
+Vthread__cleanup_after_shutdown()
+ { /*VthdSemEnv *semEnv;
   int32       coreIdx, idx, highestIdx;
-   VPThdMutex **mutexArray, *mutex;
-   VPThdCond  **condArray,  *cond; */
+   VthdMutex  **mutexArray, *mutex;
+   VthdCond   **condArray,  *cond; */

 /* It's all allocated inside VMS's big chunk -- that's about to be freed, so
 * nothing to do here
@@ -242,11 +237,11 @@

   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
    {
-      free( semEnv->readyVPQs[coreIdx]->startOfData );
-      free( semEnv->readyVPQs[coreIdx] );
+      free( semEnv->readySlvQs[coreIdx]->startOfData );
+      free( semEnv->readySlvQs[coreIdx] );
    }

-   free( semEnv->readyVPQs );
+   free( semEnv->readySlvQs );


   //==== Free mutexes and mutex array ====
@@ -277,7 +272,7 @@

   free( _VMSMasterEnv->semanticEnv );
 */
-   VMS__cleanup_at_end_of_shutdown();
+   VMS_SS__cleanup_at_end_of_shutdown();
 }


@@ -286,165 +281,165 @@
/*
 */
inline SlaveVP *
-VPThread__create_thread( TopLevelFnPtr fnPtr, void *initData,
-                         SlaveVP *creatingVP )
- { VPThdSemReq reqData;
+Vthread__create_thread( TopLevelFnPtr fnPtr, void *initData,
+                        SlaveVP *creatingSlv )
+ { VthdSemReq reqData;

   //the semantic request data is on the stack and disappears when this
-   // call returns -- it's guaranteed to remain in the VP's stack for as
-   // long as the VP is suspended.
+   // call returns -- it's guaranteed to remain in the Slv's stack for as
+   // long as the Slv is suspended.
   reqData.reqType            = 0;  //know the type because is a VMS create req
   reqData.coreToScheduleOnto = -1; //means round-robin schedule
   reqData.fnPtr              = fnPtr;
   reqData.initData           = initData;
-   reqData.requestingVP       = creatingVP;
+   reqData.requestingSlv      = creatingSlv;

-   VMS__send_create_procr_req( &reqData, creatingVP );
+   VMS_WL__send_create_slaveVP_req( &reqData, creatingSlv );

-   return creatingVP->dataRetFromReq;
+   return creatingSlv->dataRetFromReq;
 }

inline SlaveVP *
-VPThread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
-                         SlaveVP *creatingVP, int32 coreToScheduleOnto )
- { VPThdSemReq reqData;
+Vthread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
+                        SlaveVP *creatingSlv, int32 coreToScheduleOnto )
+ { VthdSemReq reqData;

   //the semantic request data is on the stack and disappears when this
-   // call returns -- it's guaranteed to remain in the VP's stack for as
-   // long as the VP is suspended.
+   // call returns -- it's guaranteed to remain in the Slv's stack for as
+   // long as the Slv is suspended.
   reqData.reqType            = 0; //know type because in a VMS create req
   reqData.coreToScheduleOnto = coreToScheduleOnto;
   reqData.fnPtr              = fnPtr;
   reqData.initData           = initData;
-   reqData.requestingVP       = creatingVP;
+   reqData.requestingSlv      = creatingSlv;

-   VMS__send_create_procr_req( &reqData, creatingVP );
+   VMS_WL__send_create_slaveVP_req( &reqData, creatingSlv );
 }

inline void
-VPThread__dissipate_thread( SlaveVP *procrToDissipate )
+Vthread__dissipate_thread( SlaveVP *procrToDissipate )
 {
-   VMS__send_dissipate_req( procrToDissipate );
+   VMS_WL__send_dissipate_req( procrToDissipate );
 }


//===========================================================================

void *
-VPThread__malloc( size_t sizeToMalloc, SlaveVP *animVP )
- { VPThdSemReq reqData;
+Vthread__malloc( size_t sizeToMalloc, SlaveVP *animSlv )
+ { VthdSemReq reqData;

   reqData.reqType       = malloc_req;
   reqData.sizeToMalloc  = sizeToMalloc;
-   reqData.requestingVP  = animVP;
+   reqData.requestingSlv = animSlv;

-   VMS__send_sem_request( &reqData, animVP );
+   VMS_WL__send_sem_request( &reqData, animSlv );

-   return animVP->dataRetFromReq;
+   return animSlv->dataRetFromReq;
 }


/*Sends request to Master, which does the work of freeing
 */
void
-VPThread__free( void *ptrToFree, SlaveVP *animVP )
- { VPThdSemReq reqData;
+Vthread__free( void *ptrToFree, SlaveVP *animSlv )
+ { VthdSemReq reqData;

   reqData.reqType       = free_req;
   reqData.ptrToFree     = ptrToFree;
-   reqData.requestingVP  = animVP;
+   reqData.requestingSlv = animSlv;

-   VMS__send_sem_request( &reqData, animVP );
+   VMS_WL__send_sem_request( &reqData, animSlv );
 }

//===========================================================================

inline void
-VPThread__set_globals_to( void *globals )
+Vthread__set_globals_to( void *globals )
 {
-   ((VPThdSemEnv *)
+   ((VthdSemEnv *)
    (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals;
 }

inline void *
-VPThread__give_globals()
+Vthread__give_globals()
 {
-   return((VPThdSemEnv *) (_VMSMasterEnv->semanticEnv))->applicationGlobals;
+   return((VthdSemEnv *) (_VMSMasterEnv->semanticEnv))->applicationGlobals;
 }


//===========================================================================

inline int32
-VPThread__make_mutex( SlaveVP *animVP )
- { VPThdSemReq reqData;
+Vthread__make_mutex( SlaveVP *animSlv )
+ { VthdSemReq reqData;

   reqData.reqType       = make_mutex;
-   reqData.requestingVP  = animVP;
+   reqData.requestingSlv = animSlv;

-   VMS__send_sem_request( &reqData, animVP );
+   VMS_WL__send_sem_request( &reqData, animSlv );

-   return (int32)animVP->dataRetFromReq; //mutexid is 32bit wide
+   return (int32)animSlv->dataRetFromReq; //mutexid is 32bit wide
 }

inline void
-VPThread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringVP )
- { VPThdSemReq reqData;
+Vthread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringSlv )
+ { VthdSemReq reqData;

   reqData.reqType       = mutex_lock;
   reqData.mutexIdx      = mutexIdx;
-   reqData.requestingVP  = acquiringVP;
+   reqData.requestingSlv = acquiringSlv;

-   VMS__send_sem_request( &reqData, acquiringVP );
+   VMS_WL__send_sem_request( &reqData, acquiringSlv );
 }

inline void
-VPThread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingVP )
- { VPThdSemReq reqData;
+Vthread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingSlv )
+ { VthdSemReq reqData;

   reqData.reqType       = mutex_unlock;
   reqData.mutexIdx      = mutexIdx;
-   reqData.requestingVP  = releasingVP;
+   reqData.requestingSlv = releasingSlv;

-   VMS__send_sem_request( &reqData, releasingVP );
+   VMS_WL__send_sem_request( &reqData, releasingSlv );
 }


//=======================
inline int32
-VPThread__make_cond( int32 ownedMutexIdx, SlaveVP *animPr)
- { VPThdSemReq reqData;
+Vthread__make_cond( int32 ownedMutexIdx, SlaveVP *animSlv)
+ { VthdSemReq reqData;

   reqData.reqType       = make_cond;
   reqData.mutexIdx      = ownedMutexIdx;
-   reqData.requestingVP  = animVP;
+   reqData.requestingSlv = animSlv;

-   VMS__send_sem_request( &reqData, animVP );
+   VMS_WL__send_sem_request( &reqData, animSlv );

-   return (int32)animVP->dataRetFromReq; //condIdx is 32 bit wide
+   return (int32)animSlv->dataRetFromReq; //condIdx is 32 bit wide
 }

inline void
-VPThread__cond_wait( int32 condIdx, SlaveVP *waitingPr)
- { VPThdSemReq reqData;
+Vthread__cond_wait( int32 condIdx, SlaveVP *waitingSlv)
+ { VthdSemReq reqData;

   reqData.reqType       = cond_wait;
   reqData.condIdx       = condIdx;
-   reqData.requestingVP  = waitingVP;
+   reqData.requestingSlv = waitingSlv;

-   VMS__send_sem_request( &reqData, waitingVP );
+   VMS_WL__send_sem_request( &reqData, waitingSlv );
 }

inline void *
-VPThread__cond_signal( int32 condIdx, SlaveVP *signallingVP )
- { VPThdSemReq reqData;
+Vthread__cond_signal( int32 condIdx, SlaveVP *signallingSlv )
+ { VthdSemReq reqData;

   reqData.reqType       = cond_signal;
   reqData.condIdx       = condIdx;
-   reqData.requestingVP  = signallingVP;
+   reqData.requestingSlv = signallingSlv;

-   VMS__send_sem_request( &reqData, signallingVP );
+   VMS_WL__send_sem_request( &reqData, signallingSlv );
 }


@@ -460,27 +455,24 @@
 * trying to get the data through from different cores.
 */

-/*asm function declarations*/
-void asm_save_ret_to_singleton(VPThdSingleton *singletonPtrAddr);
-void asm_write_ret_from_singleton(VPThdSingleton *singletonPtrAddr);
-
/*Fn singleton uses ID as index into array of singleton structs held in the
 * semantic environment.
 */
void
-VPThread__start_fn_singleton( int32 singletonID, SlaveVP *animVP )
+Vthread__start_fn_singleton( int32 singletonID, SlaveVP *animSlv )
 {
-   VPThdSemReq reqData;
+   VthdSemReq reqData;

   //
   reqData.reqType     = singleton_fn_start;
   reqData.singletonID = singletonID;

-   VMS__send_sem_request( &reqData, animVP );
-   if( animVP->dataRetFromReq ) //will be 0 or addr of label in end singleton
+   VMS_WL__send_sem_request( &reqData, animSlv );
+   if( animSlv->dataRetFromReq != 0 ) //addr of matching end-singleton
    {
-      VPThdSemEnv *semEnv = VMS__give_sem_env_for( animVP );
-      asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
+      VthdSemEnv *semEnv = VMS_int__give_sem_env_for( animSlv ); //not protected!
+      VMS_int__return_to_addr_in_ptd_to_loc(
+                      &((semEnv->fnSingletons[singletonID]).savedRetAddr) );
    }
 }

@@ -489,22 +481,21 @@
 * location.
 */
void
-VPThread__start_data_singleton( VPThdSingleton **singletonAddr, SlaveVP *animVP )
+Vthread__start_data_singleton( VthdSingleton *singleton, SlaveVP *animSlv )
 {
-   VPThdSemReq reqData;
+   VthdSemReq reqData;

-   if( *singletonAddr && (*singletonAddr)->hasFinished )
+   if( singleton->savedRetAddr && singleton->hasFinished )
      goto JmpToEndSingleton;

   reqData.reqType   = singleton_data_start;
-   reqData.singletonPtrAddr = singletonAddr;
+   reqData.singleton = singleton;

-   VMS__send_sem_request( &reqData, animVP );
-   if( animVP->dataRetFromReq ) //either 0 or end singleton's return addr
+   VMS_WL__send_sem_request( &reqData, animSlv );
+   if( animSlv->dataRetFromReq ) //either 0 or end singleton's return addr
    {
JmpToEndSingleton:
-      asm_write_ret_from_singleton(*singletonAddr);
-
+      VMS_int__return_to_addr_in_ptd_to_loc(&(singleton->savedRetAddr));
    }
   //now, simply return
   //will exit either from the start singleton call or the end-singleton call
@@ -517,25 +508,26 @@
 * inside is shared by all invocations of a given singleton ID.
 */
void
-VPThread__end_fn_singleton( int32 singletonID, SlaveVP *animVP )
+Vthread__end_fn_singleton( int32 singletonID, SlaveVP *animSlv )
 {
-   VPThdSemReq reqData;
+   VthdSemReq reqData;

   //don't need this addr until after at least one singleton has reached
   // this function
-   VPThdSemEnv *semEnv = VMS__give_sem_env_for( animVP );
-   asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
+   VthdSemEnv *semEnv = VMS_int__give_sem_env_for( animSlv );
+   VMS_int__return_to_addr_in_ptd_to_loc(
+                      &((semEnv->fnSingletons[singletonID]).savedRetAddr) );

   reqData.reqType     = singleton_fn_end;
   reqData.singletonID = singletonID;

-   VMS__send_sem_request( &reqData, animVP );
+   VMS_WL__send_sem_request( &reqData, animSlv );
 }

void
-VPThread__end_data_singleton( VPThdSingleton **singletonPtrAddr, SlaveVP *animVP )
+Vthread__end_data_singleton( VthdSingleton *singleton, SlaveVP *animSlv )
 {
-   VPThdSemReq reqData;
+   VthdSemReq reqData;

   //don't need this addr until after singleton struct has reached
   // this function for first time
@@ -544,12 +536,12 @@
   // one instance in the code of this function.  However, can use this
   // function in different places for different data-singletons.

-   asm_save_ret_to_singleton(*singletonPtrAddr);
+   VMS_int__save_return_into_ptd_to_loc_then_do_ret(&(singleton->savedRetAddr));

-   reqData.reqType   = singleton_data_end;
-   reqData.singletonPtrAddr = singletonPtrAddr;
+   reqData.reqType   = singleton_data_end;
+   reqData.singleton = singleton;

-   VMS__send_sem_request( &reqData, animVP );
+   VMS_WL__send_sem_request( &reqData, animSlv );
 }


@@ -558,69 +550,69 @@
 * at a time.
11.631 * 11.632 *It suspends to the master, and the request handler takes the function 11.633 - * pointer out of the request and calls it, then resumes the VP. 11.634 + * pointer out of the request and calls it, then resumes the Slv. 11.635 *Only very short functions should be called this way -- for longer-running 11.636 * isolation, use transaction-start and transaction-end, which run the code 11.637 * between as work-code. 11.638 */ 11.639 void 11.640 -VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster, 11.641 - void *data, SlaveVP *animVP ) 11.642 +Vthread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster, 11.643 + void *data, SlaveVP *animSlv ) 11.644 { 11.645 - VPThdSemReq reqData; 11.646 + VthdSemReq reqData; 11.647 11.648 // 11.649 reqData.reqType = atomic; 11.650 reqData.fnToExecInMaster = ptrToFnToExecInMaster; 11.651 reqData.dataForFn = data; 11.652 11.653 - VMS__send_sem_request( &reqData, animVP ); 11.654 + VMS_WL__send_sem_request( &reqData, animSlv ); 11.655 } 11.656 11.657 11.658 /*This suspends to the master. 11.659 - *First, it looks at the VP's data, to see the highest transactionID that VP 11.660 + *First, it looks at the Slv's data, to see the highest transactionID that Slv 11.661 * already has entered. If the current ID is not larger, it throws an 11.662 * exception stating a bug in the code. Otherwise it puts the current ID 11.663 * there, and adds the ID to a linked list of IDs entered -- the list is 11.664 * used to check that exits are properly ordered. 11.665 *Next it is uses transactionID as index into an array of transaction 11.666 * structures. 11.667 - *If the "VP_currently_executing" field is non-null, then put requesting VP 11.668 + *If the "Slv_currently_executing" field is non-null, then put requesting Slv 11.669 * into queue in the struct. (At some point a holder will request 11.670 - * end-transaction, which will take this VP from the queue and resume it.) 
11.671 + * end-transaction, which will take this Slv from the queue and resume it.) 11.672 *If NULL, then write requesting into the field and resume. 11.673 */ 11.674 void 11.675 -VPThread__start_transaction( int32 transactionID, SlaveVP *animVP ) 11.676 +Vthread__start_transaction( int32 transactionID, SlaveVP *animSlv ) 11.677 { 11.678 - VPThdSemReq reqData; 11.679 + VthdSemReq reqData; 11.680 11.681 // 11.682 reqData.reqType = trans_start; 11.683 reqData.transID = transactionID; 11.684 11.685 - VMS__send_sem_request( &reqData, animVP ); 11.686 + VMS_WL__send_sem_request( &reqData, animSlv ); 11.687 } 11.688 11.689 /*This suspends to the master, then uses transactionID as index into an 11.690 * array of transaction structures. 11.691 - *It looks at VP_currently_executing to be sure it's same as requesting VP. 11.692 + *It looks at Slv_currently_executing to be sure it's same as requesting Slv. 11.693 * If different, throws an exception, stating there's a bug in the code. 11.694 *Next it looks at the queue in the structure. 11.695 - *If it's empty, it sets VP_currently_executing field to NULL and resumes. 11.696 - *If something in, gets it, sets VP_currently_executing to that VP, then 11.697 + *If it's empty, it sets Slv_currently_executing field to NULL and resumes. 11.698 + *If something in, gets it, sets Slv_currently_executing to that Slv, then 11.699 * resumes both. 11.700 */ 11.701 void 11.702 -VPThread__end_transaction( int32 transactionID, SlaveVP *animVP ) 11.703 +Vthread__end_transaction( int32 transactionID, SlaveVP *animSlv ) 11.704 { 11.705 - VPThdSemReq reqData; 11.706 + VthdSemReq reqData; 11.707 11.708 // 11.709 reqData.reqType = trans_end; 11.710 reqData.transID = transactionID; 11.711 11.712 - VMS__send_sem_request( &reqData, animVP ); 11.713 + VMS_WL__send_sem_request( &reqData, animSlv ); 11.714 } 11.715 //===========================================================================
