# HG changeset patch
# User Sean Halle
# Date 1339030536 25200
# Node ID 468b8638ff92b0c1513f8abfc11d4ed78a769a84
# Parent  f2ed1c379fe74f6f3d329cb90f253a1b24209e57
Works -- first working version, includes slave pruning and shutdown detection

diff -r f2ed1c379fe7 -r 468b8638ff92 Measurement/VSs_Counter_Recording.c
--- a/Measurement/VSs_Counter_Recording.c	Wed May 30 15:02:38 2012 -0700
+++ b/Measurement/VSs_Counter_Recording.c	Wed Jun 06 17:55:36 2012 -0700
@@ -5,7 +5,7 @@
 
 #include "VSs_Counter_Recording.h"
 #include "VMS_impl/VMS.h"
-#include "VSs.h"
+#include "VSs_impl/VSs.h"
 
 #ifdef HOLISTIC__TURN_ON_PERF_COUNTERS
 
diff -r f2ed1c379fe7 -r 468b8638ff92 VSs.c
--- a/VSs.c	Wed May 30 15:02:38 2012 -0700
+++ b/VSs.c	Wed Jun 06 17:55:36 2012 -0700
@@ -12,7 +12,7 @@
 #include "Hash_impl/PrivateHash.h"
 
 #include "VSs.h"
-#include "VSs_Counter_Recording.h"
+#include "Measurement/VSs_Counter_Recording.h"
 
 
 //==========================================================================
 
@@ -74,7 +74,7 @@
 void
 VSs__create_seed_slave_and_do_work( TopLevelFnPtr fnPtr, void *initData )
  { VSsSemEnv *semEnv;
-   SlaveVP *seedPr;
+   SlaveVP *seedSlv;
 
    VSs__init(); //normal multi-thd
 
@@ -82,10 +82,13 @@
    //VSs starts with one processor, which is put into initial environ,
    // and which then calls create() to create more, thereby expanding work
-   seedPr = VSs__create_slave_helper( fnPtr, initData,
-                                      semEnv, semEnv->nextCoreToGetNewPr++ );
+   seedSlv = VSs__create_slave_helper( fnPtr, initData,
+                                       semEnv, semEnv->nextCoreToGetNewSlv++ );
+
+   //seedVP doesn't do tasks
+   ((VSsSemData *)seedSlv->semanticData)->needsTaskAssigned = FALSE;
 
-   resume_slaveVP( seedPr, semEnv );
+   resume_slaveVP( seedSlv, semEnv );
 
    VMS_SS__start_the_work_then_wait_until_done(); //normal multi-thd
 
@@ -184,13 +187,17 @@
 #ifdef HOLISTIC__TURN_ON_PERF_COUNTERS
    VSs__init_counter_data_structs();
 #endif
+   semanticEnv->shutdownInitiated = FALSE;
 
-   for(i=0;i<NUM_CORES;i++)
-    { for(j=0;j<NUM_ANIM_SLOTS;j++)
-       { semanticEnv->idlePr[i][j] = VMS_int__create_slaveVP(&idle_fn,NULL);
-         semanticEnv->idlePr[i][j]->coreAnimatedBy = i;
+   semanticEnv->coreIsDone = VMS_int__malloc( NUM_CORES * sizeof( bool32 ) );
+   for( i = 0; i < NUM_CORES; ++i )
+    { semanticEnv->coreIsDone[i] = FALSE;
+      for( j = 0; j < NUM_ANIM_SLOTS; ++j )
+       {
+         semanticEnv->idleSlv[i][j] = VMS_int__create_slaveVP(&idle_fn,NULL);
+         semanticEnv->idleSlv[i][j]->coreAnimatedBy = i;
        }
-    }
+    }
 
 #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
    semanticEnv->unitList = makeListOfArrays(sizeof(Unit),128);
 
@@ -203,10 +210,7 @@
    memset(semanticEnv->last_in_slot,0,sizeof(NUM_CORES * NUM_ANIM_SLOTS * sizeof(Unit)));
 #endif
 
-   //create the ready queue, hash tables used for pairing send to receive
-   // and so forth
-   //TODO: add hash tables for pairing sends with receives, and
-   // initialize the data ownership system
+   //create the ready queue, hash tables used for matching and so forth
    readyVPQs = VMS_int__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
 
    for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
 
@@ -216,10 +220,12 @@
 
    semanticEnv->readyVPQs = readyVPQs;
 
-   semanticEnv->nextCoreToGetNewPr = 0;
+   semanticEnv->taskReadyQ = makeVMSQ();
+
+   semanticEnv->nextCoreToGetNewSlv = 0;
    semanticEnv->numSlaveVP = 0;
 
-   semanticEnv->argPtrHashTbl = makeHashTable( 1<<16, &VMS_int__free );//start big
+   semanticEnv->argPtrHashTbl = makeHashTable32( 16, &VMS_int__free );
 
    //TODO: bug -- turn these arrays into dyn arrays to eliminate limit
    //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( );
 
@@ -376,7 +382,7 @@
  */
 SlaveVP *
 VSs__create_slave_with( TopLevelFnPtr fnPtr, void *initData,
-                        SlaveVP *creatingPr )
+                        SlaveVP *creatingSlv )
  { VSsSemReq reqData;
 
    //the semantic request data is on the stack and disappears when this
 
@@ -386,30 +392,30 @@
    reqData.coreToAssignOnto = -1; //means round-robin assign
    reqData.fnPtr = fnPtr;
    reqData.initData = initData;
-   reqData.callingSlv = creatingPr;
+   reqData.callingSlv = creatingSlv;
 
-   VMS_WL__send_create_slaveVP_req( &reqData, creatingPr );
+   VMS_WL__send_create_slaveVP_req( &reqData, creatingSlv );
 
-   return creatingPr->dataRetFromReq;
+   return creatingSlv->dataRetFromReq;
  }
 
 SlaveVP *
 VSs__create_slave_with_affinity( TopLevelFnPtr fnPtr, void *initData,
-                                 SlaveVP *creatingPr, int32 coreToAssignOnto )
+                                 SlaveVP *creatingSlv, int32 coreToAssignOnto )
  { VSsSemReq reqData;
 
    //the semantic request data is on the stack and disappears when this
    // call returns -- it's guaranteed to remain in the VP's stack for as
    // long as the VP is suspended.
-   reqData.reqType = create_slave;
+   reqData.reqType = create_slave_w_aff; //not used, May 2012
    reqData.coreToAssignOnto = coreToAssignOnto;
    reqData.fnPtr = fnPtr;
    reqData.initData = initData;
-   reqData.callingSlv = creatingPr;
+   reqData.callingSlv = creatingSlv;
 
-   VMS_WL__send_create_slaveVP_req( &reqData, creatingPr );
+   VMS_WL__send_create_slaveVP_req( &reqData, creatingSlv );
 
-   return creatingPr->dataRetFromReq;
+   return creatingSlv->dataRetFromReq;
  }
 
@@ -438,7 +444,7 @@
 
    VMS_WL__send_sem_request( &reqData, animSlv );
 
-   return animSlv->dataRetFromReq;
+   return (int32)animSlv->dataRetFromReq;
  }
 
 /*NOTE: if want, don't need to send the animating SlaveVP around..
 
@@ -488,7 +494,7 @@
  * semantic environment.
  */
 void
-VSs__start_fn_singleton( int32 singletonID, SlaveVP *animPr )
+VSs__start_fn_singleton( int32 singletonID, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
@@ -496,10 +502,10 @@
    reqData.reqType = singleton_fn_start;
    reqData.singletonID = singletonID;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
-   if( animPr->dataRetFromReq ) //will be 0 or addr of label in end singleton
+   VMS_WL__send_sem_request( &reqData, animSlv );
+   if( animSlv->dataRetFromReq ) //will be 0 or addr of label in end singleton
     {
-      VSsSemEnv *semEnv = VMS_int__give_sem_env_for( animPr );
+      VSsSemEnv *semEnv = VMS_int__give_sem_env_for( animSlv );
       asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
    }
  }
 
@@ -509,7 +515,7 @@
  * location.
 */
 void
-VSs__start_data_singleton( VSsSingleton **singletonAddr, SlaveVP *animPr )
+VSs__start_data_singleton( VSsSingleton **singletonAddr, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
@@ -519,8 +525,8 @@
    reqData.reqType = singleton_data_start;
    reqData.singletonPtrAddr = singletonAddr;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
-   if( animPr->dataRetFromReq ) //either 0 or end singleton's return addr
+   VMS_WL__send_sem_request( &reqData, animSlv );
+   if( animSlv->dataRetFromReq ) //either 0 or end singleton's return addr
     { //Assembly code changes the return addr on the stack to the one
      // saved into the singleton by the end-singleton-fn
      //The return addr is at 0x4(%%ebp)
 
@@ -538,26 +544,26 @@
  * inside is shared by all invocations of a given singleton ID.
  */
 void
-VSs__end_fn_singleton( int32 singletonID, SlaveVP *animPr )
+VSs__end_fn_singleton( int32 singletonID, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
    //don't need this addr until after at least one singleton has reached
    // this function
-   VSsSemEnv *semEnv = VMS_int__give_sem_env_for( animPr );
+   VSsSemEnv *semEnv = VMS_int__give_sem_env_for( animSlv );
    asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
 
    reqData.reqType = singleton_fn_end;
    reqData.singletonID = singletonID;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
+   VMS_WL__send_sem_request( &reqData, animSlv );
 
 EndSingletonInstrAddr:
    return;
  }
 
 void
-VSs__end_data_singleton( VSsSingleton **singletonPtrAddr, SlaveVP *animPr )
+VSs__end_data_singleton( VSsSingleton **singletonPtrAddr, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
@@ -575,7 +581,7 @@
    reqData.reqType = singleton_data_end;
    reqData.singletonPtrAddr = singletonPtrAddr;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
+   VMS_WL__send_sem_request( &reqData, animSlv );
  }
 
 /*This executes the function in the masterVP, so it executes in isolation
 
@@ -590,7 +596,7 @@
  */
 void
 VSs__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
-                                    void *data, SlaveVP *animPr )
+                                    void *data, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
@@ -599,7 +605,7 @@
    reqData.fnToExecInMaster = ptrToFnToExecInMaster;
    reqData.dataForFn = data;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
+   VMS_WL__send_sem_request( &reqData, animSlv );
  }
 
@@ -617,16 +623,16 @@
  *If NULL, then write requesting into the field and resume.
  */
 void
-VSs__start_transaction( int32 transactionID, SlaveVP *animPr )
+VSs__start_transaction( int32 transactionID, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
    //
-   reqData.callingSlv = animPr;
+   reqData.callingSlv = animSlv;
    reqData.reqType = trans_start;
    reqData.transID = transactionID;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
+   VMS_WL__send_sem_request( &reqData, animSlv );
  }
 
 /*This suspends to the master, then uses transactionID as index into an
 
@@ -639,14 +645,14 @@
  * resumes both.
  */
 void
-VSs__end_transaction( int32 transactionID, SlaveVP *animPr )
+VSs__end_transaction( int32 transactionID, SlaveVP *animSlv )
  { VSsSemReq reqData;
 
    //
-   reqData.callingSlv = animPr;
+   reqData.callingSlv = animSlv;
    reqData.reqType = trans_end;
    reqData.transID = transactionID;
 
-   VMS_WL__send_sem_request( &reqData, animPr );
+   VMS_WL__send_sem_request( &reqData, animSlv );
  }
 
diff -r f2ed1c379fe7 -r 468b8638ff92 VSs.h
--- a/VSs.h	Wed May 30 15:02:38 2012 -0700
+++ b/VSs.h	Wed Jun 06 17:55:36 2012 -0700
@@ -12,7 +12,7 @@
 #include "Queue_impl/PrivateQueue.h"
 #include "Hash_impl/PrivateHash.h"
 #include "VMS_impl/VMS.h"
-#include "dependency.h"
+#include "Measurement/dependency.h"
 
 //===========================================================================
 
@@ -29,13 +29,13 @@
 /*This header defines everything specific to the VSs semantic plug-in
  */
 typedef struct _VSsSemReq VSsSemReq;
-typedef void (*VSsTaskFnPtr ) ( void * ); //executed atomically in master
+typedef void (*VSsTaskFnPtr ) ( void *, SlaveVP *);
 typedef void (*PtrToAtomicFn ) ( void * ); //executed atomically in master
 
 //===========================================================================
 
 #define IN    1
 #define OUT   2
-#define INOUT 3
+#define INOUT 2
 
 #define READER 1
 #define WRITER 2
 
@@ -54,10 +54,19 @@
 
 typedef struct
  {
+   bool32 hasEnabledNonFinishedWriter;
+   int32 numEnabledNonDoneReaders;
+   PrivQueueStruc *waitersQ;
+ }
+VSsPointerEntry;
+
+typedef struct
+ {
    void **args; //ctld args must come first, as ptrs
    VSsTaskType *taskType;
    int32 numBlockingProp;
    SlaveVP *slaveAssignedTo;
+   VSsPointerEntry **ptrEntries;
  }
 VSsTaskStub;
 
@@ -69,14 +78,6 @@
  }
 VSsTaskStubCarrier;
 
-typedef struct
- {
-   bool32 hasEnabledNonFinishedWriter;
-   int32 numEnabledNonDoneReaders;
-   PrivQStruct *waitersQ;
- }
-VSsPointerEntry;
-
 typedef struct
  {
 
@@ -157,12 +158,15 @@
    PrivQueueStruc *taskReadyQ; //Q: shared or local?
    HashTable *argPtrHashTbl;
    int32 numSlaveVP;
-   int32 nextCoreToGetNewPr;
+   int32 nextCoreToGetNewSlv;
    int32 primitiveStartTime;
 
    //fix limit on num with dynArray
    VSsSingleton fnSingletons[NUM_STRUCS_IN_SEM_ENV];
    VSsTrans transactionStrucs[NUM_STRUCS_IN_SEM_ENV];
+
+   bool32 *coreIsDone;
+   int32 numCoresDone;
 
 #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
    ListOfArrays* unitList;
 
@@ -178,7 +182,7 @@
 #ifdef HOLISTIC__TURN_ON_PERF_COUNTERS
    ListOfArrays* counterList[NUM_CORES];
 #endif
-   SlaveVP* idlePr[NUM_CORES][NUM_ANIM_SLOTS];
+   SlaveVP* idleSlv[NUM_CORES][NUM_ANIM_SLOTS];
    int shutdownInitiated;
  }
 VSsSemEnv;
 
@@ -237,7 +241,7 @@
 SlaveVP *
 VSs__create_slave_with_affinity( TopLevelFnPtr fnPtr, void *initData,
-                                 SlaveVP *creatingPr, int32 coreToAssignOnto);
+                                 SlaveVP *creatingSlv, int32 coreToAssignOnto);
 
 void
 VSs__dissipate_slave( SlaveVP *slaveToDissipate );
 
@@ -251,7 +255,7 @@
 //=======================
 int32
-VSs__submit_task( VSsTaskType *taskType, void **args, SlaveVP *animSlv);
+VSs__submit_task( VSsTaskType *taskType, void *args, SlaveVP *animSlv);
 
 void
 
@@ -284,7 +288,7 @@
 //========================= Internal use only =============================
 void
-VSs__Request_Handler( SlaveVP *requestingPr, void *_semEnv );
+VSs__Request_Handler( SlaveVP *requestingSlv, void *_semEnv );
 
 SlaveVP *
 VSs__assign_slaveVP_to_slot( void *_semEnv, AnimSlot *slot );
 
@@ -294,7 +298,7 @@
                          VSsSemEnv *semEnv, int32 coreToAssignOnto );
 
 //===================== Measurement of Lang Overheads =====================
-#include "VSs_Measurement.h"
+#include "Measurement/VSs_Measurement.h"
 
 //===========================================================================
 #endif /* _VSs_H */
 
diff -r f2ed1c379fe7 -r 468b8638ff92 VSs_PluginFns.c
--- a/VSs_PluginFns.c	Wed May 30 15:02:38 2012 -0700
+++ b/VSs_PluginFns.c	Wed Jun 06 17:55:36 2012 -0700
@@ -16,13 +16,13 @@
 resume_slaveVP( SlaveVP *slave, VSsSemEnv *semEnv );
 
 void
-handleSemReq( VMSReqst *req, SlaveVP *requestingPr, VSsSemEnv *semEnv );
+handleSemReq( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv );
 
 void
-handleDissipate( SlaveVP *requestingPr, VSsSemEnv *semEnv );
+handleDissipate( SlaveVP *requestingSlv, VSsSemEnv *semEnv );
 
 void
-handleCreate( VMSReqst *req, SlaveVP *requestingPr, VSsSemEnv *semEnv );
+handleCreate( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv );
 
 //============================== Assigner ==================================
 //
 
@@ -34,9 +34,10 @@
  */
 SlaveVP *
 VSs__assign_slaveVP_to_slot( void *_semEnv, AnimSlot *slot )
- { SlaveVP *assignPr;
-   VSsSemEnv *semEnv;
-   int32 coreNum, slotNum;
+ { SlaveVP *assignSlv;
+   VSsSemEnv *semEnv;
+   VSsSemData *semData;
+   int32 coreNum, slotNum;
 
    coreNum = slot->coreSlotIsOn;
    slotNum = slot->slotIdx;
 
@@ -54,76 +55,118 @@
    * of slaves, and take one from pool when a task suspends.
    */
    //TODO: fix false sharing in array
-   assignPr = readPrivQ( semEnv->readyVPQs[coreNum] );
-   if( assignPr == NULL )
-    { //if there are tasks ready to go, then make a new slave to animate
-      // This only happens when all available slaves are blocked by
-      // constructs like send, or mutex, and so on..
-      VMS_PI__throw_exception( "no slaves in readyQ", NULL, NULL );
+   assignSlv = readPrivQ( semEnv->readyVPQs[coreNum] );
+   if( assignSlv == NULL )
+    { //make a new slave to animate
+      //This happens for the first task on the core and when all available
+      // slaves are blocked by constructs like send, or mutex, and so on..
+      assignSlv = VSs__create_slave_helper( NULL, NULL, semEnv, coreNum );
    }
-   if( assignPr != NULL ) //could still be NULL, if no tasks avail
-    {
-      if( ((VSsSemData *)assignPr->semanticData)->needsTaskAssigned )
-       { VSsTaskStub *
-         newTaskStub = readQ( semEnv->taskReadyQ );
-         if( newTaskStub == NULL )
-          { //No task, so slave unused, so put it back and return "no-slave"
-            writeQ( assignPr, semEnv->readyVPQs[coreNum] );
-            return NULL;
+   semData = (VSsSemData *)assignSlv->semanticData;
+   //slave could be resuming a task in progress, check for this
+   if( semData->needsTaskAssigned )
+    { //no, not resuming, needs a task..
+      VSsTaskStub *newTaskStub;
+      SlaveVP *extraSlv;
+      newTaskStub = readPrivQ( semEnv->taskReadyQ );
+      if( newTaskStub == NULL )
+       { //No task, so slave unused, so put it back and return "no-slave"
+         //But first check if have extra free slaves
+         extraSlv = readPrivQ( semEnv->readyVPQs[coreNum] );
+         if( extraSlv == NULL )
+          { //means no tasks and no slave on this core can generate more
+            //TODO: false sharing
+            if( semEnv->coreIsDone[coreNum] == FALSE)
+             { semEnv->numCoresDone += 1;
+               semEnv->coreIsDone[coreNum] = TRUE;
+               #ifdef DEBUG__TURN_ON_SEQUENTIAL_MODE
+               semEnv->shutdownInitiated = TRUE;
+               #else
+               if( semEnv->numCoresDone == NUM_CORES )
+                { //means no cores have work, and none can generate more
+                  semEnv->shutdownInitiated = TRUE;
+                }
+               #endif
+             }
+            //put slave back into Q and return NULL
+            writePrivQ( assignSlv, semEnv->readyVPQs[coreNum] );
+            assignSlv = NULL;
+            //except if shutdown has been initiated by this or other core
+            if(semEnv->shutdownInitiated)
+             { assignSlv = VMS_SS__create_shutdown_slave();
+             }
          }
-         //point slave to the task's function, and mark slave as having task
-         VMS_int__reset_slaveVP_to_TopLvlFn( assignPr,
+         else //extra slave exists, but no tasks for either slave
+          { if(((VSsSemData *)extraSlv->semanticData)->needsTaskAssigned == TRUE)
+             { //means have two slaves need tasks -- redundant, kill one
+               handleDissipate( extraSlv, semEnv );
+               //then put other back into Q and return NULL
+               writePrivQ( assignSlv, semEnv->readyVPQs[coreNum] );
+               assignSlv = NULL;
+             }
+            else
+             { //extra slave has work -- so take it instead
+               writePrivQ( assignSlv, semEnv->readyVPQs[coreNum] );
+               assignSlv = extraSlv;
+               //semData = (VSsSemData *)assignSlv->semanticData; Don't use
+             }
+          }
+       }
+      else //have a new task for the slave.
+       { //point slave to task's function, and mark slave as having task
+         VMS_int__reset_slaveVP_to_TopLvlFn( assignSlv,
                            newTaskStub->taskType->fn, newTaskStub->args );
-         ((VSsSemData *)assignPr->semanticData)->taskStub = newTaskStub;
-         newTaskStub->slaveAssignedTo = assignPr;
-         ((VSsSemData *)assignPr->semanticData)->needsTaskAssigned = FALSE;
+         semData->taskStub = newTaskStub;
+         newTaskStub->slaveAssignedTo = assignSlv;
+         semData->needsTaskAssigned = FALSE;
       }
+    } //outcome: 1)slave didn't need a new task 2)slave just pointed at one
+      //  3)no tasks, so slave NULL
 
+   #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
+   if( assignSlv == NULL )
+    { assignSlv = semEnv->idleSlv[coreNum][slotNum];
+      if(semEnv->shutdownInitiated)
+       { assignSlv = VMS_SS__create_shutdown_slave();
+       }
+      //things that would normally happen in resume(), but these VPs
+      // never go there
+      assignSlv->assignCount++; //Somewhere here!
+      Unit newu;
+      newu.vp = assignSlv->slaveID;
+      newu.task = assignSlv->assignCount;
+      addToListOfArrays(Unit,newu,semEnv->unitList);
+
+      if (assignSlv->assignCount > 1)
+       { Dependency newd;
+         newd.from_vp = assignSlv->slaveID;
+         newd.from_task = assignSlv->assignCount - 1;
+         newd.to_vp = assignSlv->slaveID;
+         newd.to_task = assignSlv->assignCount;
+         addToListOfArrays(Dependency, newd ,semEnv->ctlDependenciesList);
+       }
    }
-   //Note, using a non-blocking queue -- it returns NULL if queue empty
-   else //assignPr is indeed NULL
-    { assignPr = semEnv->idlePr[coreNum][slotNum];
-      if(semEnv->shutdownInitiated)
-       { assignPr = VMS_SS__create_shutdown_slave();
-       }
-      //things that would normally happen in resume(), but these VPs
-      // never go there
-      assignPr->assignCount++; //Somewhere here!
-      Unit newu;
-      newu.vp = assignPr->slaveID;
-      newu.task = assignPr->assignCount;
-      addToListOfArrays(Unit,newu,semEnv->unitList);
-
-      if (assignPr->assignCount > 1)
-       { Dependency newd;
-         newd.from_vp = assignPr->slaveID;
-         newd.from_task = assignPr->assignCount - 1;
-         newd.to_vp = assignPr->slaveID;
-         newd.to_task = assignPr->assignCount;
-         addToListOfArrays(Dependency, newd ,semEnv->ctlDependenciesList);
-       }
-      #endif
-    }
+   #endif
 
 #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
-   if( assignPr != NULL )
-    { //assignPr->numTimesAssigned++;
+   if( assignSlv != NULL )
+    { //assignSlv->numTimesAssigned++;
      Unit prev_in_slot =
         semEnv->last_in_slot[coreNum * NUM_ANIM_SLOTS + slotNum];
      if(prev_in_slot.vp != 0)
       { Dependency newd;
        newd.from_vp = prev_in_slot.vp;
        newd.from_task = prev_in_slot.task;
-       newd.to_vp = assignPr->slaveID;
-       newd.to_task = assignPr->assignCount;
+       newd.to_vp = assignSlv->slaveID;
+       newd.to_task = assignSlv->assignCount;
        addToListOfArrays(Dependency,newd,semEnv->hwArcs);
      }
-     prev_in_slot.vp = assignPr->slaveID;
-     prev_in_slot.task = assignPr->assignCount;
+     prev_in_slot.vp = assignSlv->slaveID;
+     prev_in_slot.task = assignSlv->assignCount;
      semEnv->last_in_slot[coreNum * NUM_ANIM_SLOTS + slotNum] = prev_in_slot;
    }
 #endif
 
-   return( assignPr );
+   return( assignSlv );
  }
 
@@ -132,38 +175,38 @@
 /*
  */
 void
-VSs__Request_Handler( SlaveVP *requestingPr, void *_semEnv )
+VSs__Request_Handler( SlaveVP *requestingSlv, void *_semEnv )
  { VSsSemEnv *semEnv;
    VMSReqst *req;
 
    semEnv = (VSsSemEnv *)_semEnv;
 
-   req = VMS_PI__take_next_request_out_of( requestingPr );
+   req = VMS_PI__take_next_request_out_of( requestingSlv );
 
    while( req != NULL )
     {
      switch( req->reqType )
-      { case semantic:    handleSemReq(    req, requestingPr, semEnv);
+      { case semantic:    handleSemReq(    req, requestingSlv, semEnv);
         break;
-       case createReq:    handleCreate(    req, requestingPr, semEnv);
+       case createReq:    handleCreate(    req, requestingSlv, semEnv);
         break;
-       case dissipate:    handleDissipate( req, requestingPr, semEnv);
+       case dissipate:    handleDissipate( requestingSlv, semEnv);
         break;
-       case VMSSemantic:  VMS_PI__handle_VMSSemReq(req, requestingPr, semEnv,
+       case VMSSemantic:  VMS_PI__handle_VMSSemReq(req, requestingSlv, semEnv,
                                          (ResumeSlvFnPtr) &resume_slaveVP);
         break;
        default:
         break;
      }
-     req = VMS_PI__take_next_request_out_of( requestingPr );
+     req = VMS_PI__take_next_request_out_of( requestingSlv );
    } //while( req != NULL )
  }
 
 void
-handleSemReq( VMSReqst *req, SlaveVP *reqPr, VSsSemEnv *semEnv )
+handleSemReq( VMSReqst *req, SlaveVP *reqSlv, VSsSemEnv *semEnv )
  { VSsSemReq *semReq;
 
    semReq = VMS_PI__take_sem_reqst_from(req);
 
@@ -175,23 +218,23 @@
       case end_task:            handleEndTask(    semReq, semEnv);
        break;
      //====================================================================
-      case malloc_req:          handleMalloc(     semReq, reqPr, semEnv);
+      case malloc_req:          handleMalloc(     semReq, reqSlv, semEnv);
        break;
-      case free_req:            handleFree(       semReq, reqPr, semEnv);
+      case free_req:            handleFree(       semReq, reqSlv, semEnv);
        break;
-      case singleton_fn_start:  handleStartFnSingleton(semReq, reqPr, semEnv);
+      case singleton_fn_start:  handleStartFnSingleton(semReq, reqSlv, semEnv);
        break;
-      case singleton_fn_end:    handleEndFnSingleton( semReq, reqPr, semEnv);
+      case singleton_fn_end:    handleEndFnSingleton( semReq, reqSlv, semEnv);
        break;
-      case singleton_data_start:handleStartDataSingleton(semReq,reqPr,semEnv);
+      case singleton_data_start:handleStartDataSingleton(semReq,reqSlv,semEnv);
        break;
-      case singleton_data_end:  handleEndDataSingleton(semReq, reqPr, semEnv);
+      case singleton_data_end:  handleEndDataSingleton(semReq, reqSlv, semEnv);
        break;
-      case atomic:              handleAtomic(     semReq, reqPr, semEnv);
+      case atomic:              handleAtomic(     semReq, reqSlv, semEnv);
        break;
-      case trans_start:         handleTransStart( semReq, reqPr, semEnv);
+      case trans_start:         handleTransStart( semReq, reqSlv, semEnv);
        break;
-      case trans_end:           handleTransEnd(   semReq, reqPr, semEnv);
+      case trans_end:           handleTransEnd(   semReq, reqSlv, semEnv);
        break;
     }
  }
 
@@ -202,96 +245,90 @@
 /*SlaveVP dissipate (NOT task-end!)
  */
 void
-handleDissipate( SlaveVP *requestingPr, VSsSemEnv *semEnv )
+handleDissipate( SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  {
-   DEBUG__printf1(dbgRqstHdlr,"Dissipate request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"Dissipate request from processor %d",requestingSlv->slaveID)
 
    //free any semantic data allocated to the virt procr
-   VMS_PI__free( requestingPr->semanticData );
+   VMS_PI__free( requestingSlv->semanticData );
 
    //Now, call VMS to free_all AppVP state -- stack and so on
-   VMS_PI__dissipate_slaveVP( requestingPr );
-
-   semEnv->numSlaveVP -= 1;
-   if( semEnv->numSlaveVP == 0 )
-    { //no more work, so shutdown
-      semEnv->shutdownInitiated = TRUE;
-      //VMS_SS__shutdown();
-    }
+   VMS_PI__dissipate_slaveVP( requestingSlv );
  }
 
 /*Re-use this in the entry-point fn
  */
- SlaveVP *
+SlaveVP *
 VSs__create_slave_helper( TopLevelFnPtr fnPtr, void *initData,
                           VSsSemEnv *semEnv, int32 coreToAssignOnto )
- { SlaveVP *newPr;
+ { SlaveVP *newSlv;
    VSsSemData *semData;
 
    //This is running in master, so use internal version
-   newPr = VMS_PI__create_slaveVP( fnPtr, initData );
+   newSlv = VMS_PI__create_slaveVP( fnPtr, initData );
 
    semEnv->numSlaveVP += 1;
 
    semData = VMS_PI__malloc( sizeof(VSsSemData) );
    semData->highestTransEntered = -1;
    semData->lastTransEntered = NULL;
-
-   newPr->semanticData = semData;
+   semData->needsTaskAssigned = TRUE;
+
+   newSlv->semanticData = semData;
 
 //=================== Assign new processor to a core =====================
 #ifdef DEBUG__TURN_ON_SEQUENTIAL_MODE
-   newPr->coreAnimatedBy = 0;
+   newSlv->coreAnimatedBy = 0;
 
 #else
 
    if(coreToAssignOnto < 0 || coreToAssignOnto >= NUM_CORES )
    { //out-of-range, so round-robin assignment
-      newPr->coreAnimatedBy = semEnv->nextCoreToGetNewPr;
+      newSlv->coreAnimatedBy = semEnv->nextCoreToGetNewSlv;
 
-      if( semEnv->nextCoreToGetNewPr >= NUM_CORES - 1 )
-         semEnv->nextCoreToGetNewPr = 0;
+      if( semEnv->nextCoreToGetNewSlv >= NUM_CORES - 1 )
+         semEnv->nextCoreToGetNewSlv = 0;
       else
-         semEnv->nextCoreToGetNewPr += 1;
+         semEnv->nextCoreToGetNewSlv += 1;
    }
    else //core num in-range, so use it
-    { newPr->coreAnimatedBy = coreToAssignOnto;
+    { newSlv->coreAnimatedBy = coreToAssignOnto;
    }
 #endif
 //========================================================================
 
-   return newPr;
+   return newSlv;
  }
 
 /*SlaveVP create (NOT task create!)
 */
 void
-handleCreate( VMSReqst *req, SlaveVP *requestingPr, VSsSemEnv *semEnv )
+handleCreate( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  { VSsSemReq *semReq;
-   SlaveVP *newPr;
+   SlaveVP *newSlv;
 
    semReq = VMS_PI__take_sem_reqst_from( req );
 
-   newPr = VSs__create_slave_helper( semReq->fnPtr, semReq->initData, semEnv,
+   newSlv = VSs__create_slave_helper( semReq->fnPtr, semReq->initData, semEnv,
                                      semReq->coreToAssignOnto );
 
-   DEBUG__printf2(dbgRqstHdlr,"Create from: %d, new VP: %d", requestingPr->slaveID, newPr->slaveID)
+   DEBUG__printf2(dbgRqstHdlr,"Create from: %d, new VP: %d", requestingSlv->slaveID, newSlv->slaveID)
 
 #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
    Dependency newd;
-   newd.from_vp = requestingPr->slaveID;
-   newd.from_task = requestingPr->assignCount;
-   newd.to_vp = newPr->slaveID;
+   newd.from_vp = requestingSlv->slaveID;
+   newd.from_task = requestingSlv->assignCount;
+   newd.to_vp = newSlv->slaveID;
    newd.to_task = 1;
    //addToListOfArraysDependency(newd,semEnv->commDependenciesList);
    addToListOfArrays(Dependency,newd,semEnv->commDependenciesList);
 #endif
 
    //For VSs, caller needs ptr to created processor returned to it
-   requestingPr->dataRetFromReq = newPr;
+   requestingSlv->dataRetFromReq = newSlv;
 
-   resume_slaveVP( newPr, semEnv );
-   resume_slaveVP( requestingPr, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
+   resume_slaveVP( newSlv, semEnv );
  }
 
diff -r f2ed1c379fe7 -r 468b8638ff92 VSs_Request_Handlers.c
--- a/VSs_Request_Handlers.c	Wed May 30 15:02:38 2012 -0700
+++ b/VSs_Request_Handlers.c	Wed Jun 06 17:55:36 2012 -0700
@@ -25,7 +25,7 @@
 //
 /*Only clone the elements of req used in these reqst handlers
- */
+ *
 VSsSemReq *
 cloneReq( VSsSemReq *semReq )
  { VSsSemReq *clonedReq;
 
@@ -38,7 +38,9 @@
 
    return clonedReq;
  }
+*/
 
+/*
 HashEntry *
 giveEntryElseInsertReqst( char *key, VSsSemReq *semReq,
                           HashTable *commHashTbl )
 
@@ -60,15 +62,30 @@
      }
    return entry;
  }
-
+*/
+
+/*Various ideas for getting the 64b pointer into the two 32b key-array
+ * positions
+   key[0] = 2; //two 32b values in key
+ OR
+   (uint64) (key[1]) = argPtr;
+ OR
+   *( (uint64*)&key[1] ) = argPtr;
+ OR
+   key[2] = (uint32)argPtr; //low bits
+   key[1] = (uint32)(argPtr >> 32); //high bits
+*/
+
 inline VSsPointerEntry *
-create_pointer_entry_and_insert( void *argPtr )
- { VSsPointerEntry newEntry;
+create_pointer_entry( )
+ { VSsPointerEntry *newEntry;
 
    newEntry = VMS_PI__malloc( sizeof(VSsPointerEntry) );
    newEntry->hasEnabledNonFinishedWriter = FALSE;
    newEntry->numEnabledNonDoneReaders = 0;
    newEntry->waitersQ = makePrivQ();
+
+   return newEntry;
  }
 
 /*malloc's space and initializes fields -- and COPIES the arg values
 
@@ -79,21 +96,25 @@
  { void **newArgs;
    int32 i, numArgs;
    VSsTaskStub *
-   newStub = malloc( sizeof(VSsTaskStub) + taskType->sizeOfArgs );
+   newStub = VMS_int__malloc( sizeof(VSsTaskStub) + taskType->sizeOfArgs );
    newStub->numBlockingProp = taskType->numCtldArgs;
    newStub->slaveAssignedTo = NULL;
    newStub->taskType = taskType;
-   newArgs = (void **)((uint8 *)newStub) + sizeof(VSsTaskStub);
+   newStub->ptrEntries =
+         VMS_int__malloc( taskType->numCtldArgs * sizeof(VSsPointerEntry *) );
+   newArgs = (void **)( (uint8 *)newStub + sizeof(VSsTaskStub) );
    newStub->args = newArgs;
 
    //Copy the arg-pointers.. can be more arguments than just the ones
    // that StarSs uses to control ordering of task execution.
    memcpy( newArgs, args, taskType->sizeOfArgs );
+
+   return newStub;
  }
 
 inline VSsTaskStubCarrier *
 create_task_carrier( VSsTaskStub *taskStub, int32 argNum, int32 rdOrWrite )
- { VSsTaskStubCarrier newCarrier;
+ { VSsTaskStubCarrier *newCarrier;
 
    newCarrier = VMS_PI__malloc( sizeof(VSsTaskStubCarrier) );
    newCarrier->taskStub = taskStub;
 
@@ -203,8 +224,8 @@
  */
 void
 handleSubmitTask( VSsSemReq *semReq, VSsSemEnv *semEnv )
- { int64 key[] = {0,0,0};
-   HashEntry *rawHashEntry;
+ { uint32 key[3];
+   HashEntry *rawHashEntry; //has char *, but use with uint32 *
    VSsPointerEntry *ptrEntry; //contents of hash table entry for an arg pointer
    void **args;
    VSsTaskStub *taskStub;
 
@@ -236,32 +257,40 @@
    int32 argNum;
    for( argNum = 0; argNum < taskType->numCtldArgs; argNum++ )
     {
-      key[0] = (int64)args[argNum];
+      key[0] = 2; //two 32b values in key
+      *( (uint64*)&key[1]) = (uint64)args[argNum]; //write 64b into two 32b
 
-      //key[2] acts as the 0 that terminates the string
-//BUG! need new hash function that works on *pointers with zeros in*
 
      /*If the hash entry was chained, put it at the
       * start of the chain.  (Means no-longer-used pointers accumulate
       * at end of chain, decide garbage collection later)
       */
-      rawHashEntry = getEntryFromTable( (char *)key, argPtrHashTbl );
-      ptrEntry = (VSsPointerEntry *)rawHashEntry->content;
-      if( ptrEntry == NULL )
-       { ptrEntry = create_pointer_entry_and_insert( args[argNum] );
+      rawHashEntry = getEntryFromTable32( key, argPtrHashTbl );
+      if( rawHashEntry == NULL )
+       { //adding a value auto-creates the hash-entry
+         ptrEntry = create_pointer_entry();
+         rawHashEntry = addValueIntoTable32( key, ptrEntry, argPtrHashTbl );
       }
+      else
+       { ptrEntry = (VSsPointerEntry *)rawHashEntry->content;
+         if( ptrEntry == NULL )
+          { ptrEntry = create_pointer_entry();
+            rawHashEntry = addValueIntoTable32(key, ptrEntry, argPtrHashTbl);
+          }
+       }
+      taskStub->ptrEntries[argNum] = ptrEntry;
 
      /*Have the hash entry.
      *If the arg is a reader and the entry does not have an enabled
      * non-finished writer, and the queue is empty.
      */
     if( taskType->argTypes[argNum] == READER )
      { if( !ptrEntry->hasEnabledNonFinishedWriter &&
-          isEmptyPrivQ( ptrEntry->waitersQ ) )
+           isEmptyPrivQ( ptrEntry->waitersQ ) )
        { /*The reader is free.  So, decrement the blocking-propendent
          * count in the task-stub.  If the count is zero, then put the
          * task-stub into the readyQ.  At the same time, increment
         * the hash-entry's count of enabled and non-finished readers.*/
         taskStub->numBlockingProp -= 1;
         if( taskStub->numBlockingProp == 0 )
-         { writeQ( taskStub, semEnv->taskReadyQ );
+         { writePrivQ( taskStub, semEnv->taskReadyQ );
          }
         ptrEntry->numEnabledNonDoneReaders += 1;
       }
 
@@ -269,7 +298,7 @@
       { /*Otherwise, the reader is put into the hash-entry's Q of
          * waiters*/
         taskCarrier = create_task_carrier( taskStub, argNum, READER );
-        writeQ( taskCarrier, ptrEntry->waitersQ );
+        writePrivQ( taskCarrier, ptrEntry->waitersQ );
       }
     }
    else //arg is a writer
 
@@ -284,14 +313,14 @@
          * into the readyQ.*/
         taskStub->numBlockingProp -= 1;
         if( taskStub->numBlockingProp == 0 )
-         { writeQ( taskStub, semEnv->taskReadyQ );
+         { writePrivQ( taskStub, semEnv->taskReadyQ );
          }
         ptrEntry->hasEnabledNonFinishedWriter = TRUE;
       }
      else
       {/*Otherwise, put the writer into the entry's Q of waiters.*/
         taskCarrier = create_task_carrier( taskStub, argNum, WRITER );
-        writeQ( taskCarrier, ptrEntry->waitersQ );
+        writePrivQ( taskCarrier, ptrEntry->waitersQ );
       }
     }
   } //for argNum
 
@@ -333,17 +362,23 @@
  * reader's task-stub.  If it reaches zero, then put the task-stub into the
  * readyQ.
  *Repeat until encounter a writer -- put that writer back into the Q.
+ *
+ *May 2012 -- not keeping track of how many references to a given ptrEntry
+ * exist, so no way to garbage collect..
+ *TODO: Might be safe to delete an entry when task ends and waiterQ empty
+ * and no readers and no writers..
 */
 void
 handleEndTask( VSsSemReq *semReq, VSsSemEnv *semEnv )
- { int64 key[] = {0,0,0};
+ { uint32 key[3];
    HashEntry *rawHashEntry;
-   VSsPointerEntry *entry; //contents of hash table entry for an arg pointer
+   VSsPointerEntry *ptrEntry; //contents of hash table entry for an arg pointer
    void **args;
    VSsTaskStub *endingTaskStub, *waitingTaskStub;
    VSsTaskType *endingTaskType;
    VSsWaiterCarrier *waitingTaskCarrier;
-
+   VSsPointerEntry **ptrEntries;
+
    HashTable *ptrHashTbl = semEnv->argPtrHashTbl;
 
@@ -356,71 +391,83 @@
          ((VSsSemData *)semReq->callingSlv->semanticData)->taskStub;
    args = endingTaskStub->args;
    endingTaskType = endingTaskStub->taskType;
+   ptrEntries = endingTaskStub->ptrEntries; //saved in stub when create
 
    /*The task's controlled arguments are processed one by one.
-    *Processing an argument means getting the hash of the pointer.
+    *Processing an argument means getting arg-pointer's entry.
     */
    int32 argNum;
    for( argNum = 0; argNum < endingTaskType->numCtldArgs; argNum++ )
     {
-      key[0] = (int64)args[argNum];
+      /*
+      key[0] = 2; //says are 2 32b values in key
+      *( (uint64*)&key[1] ) = args[argNum]; //write 64b ptr into two 32b
 
-      //key[2] acts as the 0 that terminates the string
-//BUG! need new hash function that works on *pointers with zeros in*
-      /*If the hash entry was chained, put it at the
+      /*If the hash entry was chained, put it at the
       * start of the chain.  (Means no-longer-used pointers accumulate
       * at end of chain, decide garbage collection later)
-       *NOTE: could put pointer directly to hash entry into task-stub
-       * when do lookup during task creation.*/
-      rawHashEntry = getEntryFromTable( (char *)key, ptrHashTbl );
-      entry = (VSsPointerEntry *)rawHashEntry->content;
-      if( entry == NULL )
+       */
+      /*NOTE: don't do hash lookups here, instead, have a pointer to the
+       * hash entry inside task-stub, put there during task creation.
+      rawHashEntry = getEntryFromTable32( key, ptrHashTbl );
+      ptrEntry = (VSsPointerEntry *)rawHashEntry->content;
+      if( ptrEntry == NULL ) VMS_App__throw_exception("hash entry NULL", NULL, NULL);
+       */
-        /*With the hash entry: If the ending task was reader of this arg*/
+      ptrEntry = ptrEntries[argNum];
+        /*check if the ending task was reader of this arg*/
      if( endingTaskType->argTypes[argNum] == READER )
       { /*then decrement the enabled and non-finished reader-count in
          * the hash-entry. */
-        entry->numEnabledNonDoneReaders -= 1;
+        ptrEntry->numEnabledNonDoneReaders -= 1;
 
-        /*If the count becomes zero, then take the next entry from the Q. It
-         * should be a writer, or else there's a bug in this algorithm.*/
-        if( entry->numEnabledNonDoneReaders == 0 )
-         { waitingTaskCarrier = readQ( entry->waitersQ );
+        /*If the count becomes zero, then take the next entry from the Q.
+         *It should be a writer, or else there's a bug in this algorithm.*/
+        if( ptrEntry->numEnabledNonDoneReaders == 0 )
+         { waitingTaskCarrier = readPrivQ( ptrEntry->waitersQ );
+           if( waitingTaskCarrier == NULL )
+            { //TODO: looks safe to delete the ptr entry at this point
+              continue; //next iter of loop
+            }
+           if( waitingTaskCarrier->type == READER )
+              VMS_App__throw_exception("READER waiting", NULL, NULL);
+
           waitingTaskStub = waitingTaskCarrier->taskStub;
-          if( !waitingTaskCarrier->type == READER )
-             VMS_App__throw_exception();
-
          /*Set the hash-entry to have an enabled non-finished writer.*/
-          entry->hasEnabledNonFinishedWriter = TRUE;
+          ptrEntry->hasEnabledNonFinishedWriter = TRUE;
 
          /* Decrement the blocking-propendent-count of the writer's
           * task-stub. If the count has reached zero, then put the
           * task-stub into the readyQ.*/
          waitingTaskStub->numBlockingProp -= 1;
          if( waitingTaskStub->numBlockingProp == 0 )
-          { writeQ( waitingTaskStub, semEnv->taskReadyQ );
+          { writePrivQ( waitingTaskStub, semEnv->taskReadyQ );
           }
         }
       }
      else /*the ending task is a writer of this arg*/
       { /*clear the enabled non-finished writer flag of the hash-entry.*/
-        entry->hasEnabledNonFinishedWriter = FALSE;
+        ptrEntry->hasEnabledNonFinishedWriter = FALSE;
 
         /*Take the next waiter from the hash-entry's Q.*/
-        waitingTaskCarrier = readQ( entry->waitersQ );
+        waitingTaskCarrier = readPrivQ( ptrEntry->waitersQ );
+        if( waitingTaskCarrier == NULL )
+         { //TODO: looks safe to delete ptr entry at this point
+           continue; //go to next iter of loop, done here.
+         }
        waitingTaskStub = waitingTaskCarrier->taskStub;
 
        /*If task is a writer of this hash-entry's pointer*/
        if( waitingTaskCarrier->type == WRITER )
         { /* then turn the flag back on.*/
-           entry->hasEnabledNonFinishedWriter = TRUE;
+           ptrEntry->hasEnabledNonFinishedWriter = TRUE;
 
           /*Decrement the writer's blocking-propendent-count in task-stub
            * If it becomes zero, then put the task-stub into the readyQ.*/
           waitingTaskStub->numBlockingProp -= 1;
           if( waitingTaskStub->numBlockingProp == 0 )
-            { writeQ( waitingTaskStub, semEnv->taskReadyQ );
+            { writePrivQ( waitingTaskStub, semEnv->taskReadyQ );
             }
         }
        else
@@ -429,27 +476,28 @@
          while( TRUE ) /*The checks guarantee have a waiting reader*/
           { /*Increment the hash-entry's count of enabled non-finished
              * readers.*/
-            entry->numEnabledNonDoneReaders += 1;
+            ptrEntry->numEnabledNonDoneReaders += 1;
 
            /*Decrement the blocking propendents count of the reader's
             * task-stub. If it reaches zero, then put the task-stub
             * into the readyQ.*/
            waitingTaskStub->numBlockingProp -= 1;
            if( waitingTaskStub->numBlockingProp == 0 )
-             { writeQ( waitingTaskStub, semEnv->taskReadyQ );
+             { writePrivQ( waitingTaskStub, semEnv->taskReadyQ );
              }
 
            /*Get next waiting task*/
-            waitingTaskCarrier = peekQ( entry->waitersQ );
+            waitingTaskCarrier = peekPrivQ( ptrEntry->waitersQ );
            if( waitingTaskCarrier == NULL ) break;
            if( waitingTaskCarrier->type == WRITER ) break;
-            waitingTaskCarrier = readQ( entry->waitersQ );
+            waitingTaskCarrier = readPrivQ( ptrEntry->waitersQ );
            waitingTaskStub = waitingTaskCarrier->taskStub;
           }//while waiter is a reader
-        }//first waiting task is a reader
-      }//check of ending task, whether writer or reader
+        }//if-else, first waiting task is a reader
+      }//if-else, check of ending task, whether writer or reader
    }//for argnum in ending task
 
       //done ending the task, now free the stub + args copy
+   VMS_PI__free( endingTaskStub->ptrEntries );
    VMS_PI__free( endingTaskStub );
 
       //Resume the slave that animated the task -- assigner will give new task
@@ -465,24 +513,24 @@
 /*
  */
 void
-handleMalloc( VSsSemReq *semReq, SlaveVP *requestingPr, VSsSemEnv *semEnv )
+handleMalloc( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  { void *ptr;
 
-   DEBUG__printf1(dbgRqstHdlr,"Malloc request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"Malloc request from processor %d",requestingSlv->slaveID)
 
    ptr = VMS_PI__malloc( semReq->sizeToMalloc );
-   requestingPr->dataRetFromReq = ptr;
-   resume_slaveVP( requestingPr, semEnv );
+   requestingSlv->dataRetFromReq = ptr;
+   resume_slaveVP( requestingSlv, semEnv );
  }
 
 /*
  */
 void
-handleFree( VSsSemReq *semReq, SlaveVP *requestingPr, VSsSemEnv *semEnv )
+handleFree( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  {
-   DEBUG__printf1(dbgRqstHdlr,"Free request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"Free request from processor %d",requestingSlv->slaveID)
    VMS_PI__free( semReq->ptrToFree );
-   resume_slaveVP( requestingPr, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
  }
 
@@ -492,43 +540,43 @@
  * end-label. Else, sets flag and resumes normally.
  */
 void inline
-handleStartSingleton_helper( VSsSingleton *singleton, SlaveVP *reqstingPr,
+handleStartSingleton_helper( VSsSingleton *singleton, SlaveVP *reqstingSlv,
                              VSsSemEnv *semEnv )
  {
    if( singleton->hasFinished )
     { //the code that sets the flag to true first sets the end instr addr
-      reqstingPr->dataRetFromReq = singleton->endInstrAddr;
-      resume_slaveVP( reqstingPr, semEnv );
+      reqstingSlv->dataRetFromReq = singleton->endInstrAddr;
+      resume_slaveVP( reqstingSlv, semEnv );
      return;
     }
    else if( singleton->hasBeenStarted )
     { //singleton is in-progress in a diff slave, so wait for it to finish
-      writePrivQ(reqstingPr, singleton->waitQ );
+      writePrivQ(reqstingSlv, singleton->waitQ );
      return;
     }
    else
     { //hasn't been started, so this is the first attempt at the singleton
      singleton->hasBeenStarted = TRUE;
-     reqstingPr->dataRetFromReq = 0x0;
-     resume_slaveVP( reqstingPr, semEnv );
+     reqstingSlv->dataRetFromReq = 0x0;
+     resume_slaveVP( reqstingSlv, semEnv );
      return;
     }
  }
 
 void inline
-handleStartFnSingleton( VSsSemReq *semReq, SlaveVP *requestingPr,
+handleStartFnSingleton( VSsSemReq *semReq, SlaveVP *requestingSlv,
                         VSsSemEnv *semEnv )
  { VSsSingleton *singleton;
 
-   DEBUG__printf1(dbgRqstHdlr,"StartFnSingleton request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"StartFnSingleton request from processor %d",requestingSlv->slaveID)
 
    singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
-   handleStartSingleton_helper( singleton, requestingPr, semEnv );
+   handleStartSingleton_helper( singleton, requestingSlv, semEnv );
  }
 
 void inline
-handleStartDataSingleton( VSsSemReq *semReq, SlaveVP *requestingPr,
+handleStartDataSingleton( VSsSemReq *semReq, SlaveVP *requestingSlv,
                           VSsSemEnv *semEnv )
  { VSsSingleton *singleton;
 
-   DEBUG__printf1(dbgRqstHdlr,"StartDataSingleton request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"StartDataSingleton request from processor %d",requestingSlv->slaveID)
 
    if( *(semReq->singletonPtrAddr) == NULL )
     { singleton = VMS_PI__malloc( sizeof(VSsSingleton) );
      singleton->waitQ = makeVMSQ();
@@ -539,21 +587,21 @@
     }
    else
      singleton = *(semReq->singletonPtrAddr);
-   handleStartSingleton_helper( singleton, requestingPr, semEnv );
+   handleStartSingleton_helper( singleton, requestingSlv, semEnv );
  }
 
 void inline
-handleEndSingleton_helper( VSsSingleton *singleton, SlaveVP *requestingPr,
+handleEndSingleton_helper( VSsSingleton *singleton, SlaveVP *requestingSlv,
                            VSsSemEnv *semEnv )
  { PrivQueueStruc *waitQ;
    int32           numWaiting, i;
-   SlaveVP        *resumingPr;
+   SlaveVP        *resumingSlv;
 
    if( singleton->hasFinished )
     { //by definition, only one slave should ever be able to run end singleton
      // so if this is true, is an error
-     ERROR1( "singleton code ran twice", requestingPr );
+     ERROR1( "singleton code ran twice", requestingSlv );
     }
 
    singleton->hasFinished = TRUE;
@@ -561,35 +609,35 @@
    numWaiting = numInPrivQ( waitQ );
    for( i = 0; i < numWaiting; i++ )
     { //they will resume inside start singleton, then jmp to end singleton
-     resumingPr = readPrivQ( waitQ );
-     resumingPr->dataRetFromReq = singleton->endInstrAddr;
-     resume_slaveVP( resumingPr, semEnv );
+     resumingSlv = readPrivQ( waitQ );
+     resumingSlv->dataRetFromReq = singleton->endInstrAddr;
+     resume_slaveVP( resumingSlv, semEnv );
     }
 
-   resume_slaveVP( requestingPr, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
  }
 
 void inline
-handleEndFnSingleton( VSsSemReq *semReq, SlaveVP *requestingPr,
+handleEndFnSingleton( VSsSemReq *semReq, SlaveVP *requestingSlv,
                       VSsSemEnv *semEnv )
  { VSsSingleton *singleton;
 
-   DEBUG__printf1(dbgRqstHdlr,"EndFnSingleton request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"EndFnSingleton request from processor %d",requestingSlv->slaveID)
 
    singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
-   handleEndSingleton_helper( singleton, requestingPr, semEnv );
+   handleEndSingleton_helper( singleton, requestingSlv, semEnv );
  }
 
 void inline
-handleEndDataSingleton( VSsSemReq *semReq, SlaveVP *requestingPr,
+handleEndDataSingleton( VSsSemReq *semReq, SlaveVP *requestingSlv,
                         VSsSemEnv *semEnv )
  { VSsSingleton *singleton;
 
-   DEBUG__printf1(dbgRqstHdlr,"EndDataSingleton request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"EndDataSingleton request from processor %d",requestingSlv->slaveID)
 
    singleton = *(semReq->singletonPtrAddr);
-   handleEndSingleton_helper( singleton, requestingPr, semEnv );
+   handleEndSingleton_helper( singleton, requestingSlv, semEnv );
  }
 
@@ -597,11 +645,11 @@
  * pointer out of the request and call it, then resume the VP.
  */
 void
-handleAtomic( VSsSemReq *semReq, SlaveVP *requestingPr, VSsSemEnv *semEnv )
+handleAtomic( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  {
-   DEBUG__printf1(dbgRqstHdlr,"Atomic request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"Atomic request from processor %d",requestingSlv->slaveID)
    semReq->fnToExecInMaster( semReq->dataForFn );
-   resume_slaveVP( requestingPr, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
  }
 
 /*First, it looks at the VP's semantic data, to see the highest transactionID
@@ -619,18 +667,18 @@
  *If NULL, then write requesting into the field and resume.
  */
 void
-handleTransStart( VSsSemReq *semReq, SlaveVP *requestingPr,
+handleTransStart( VSsSemReq *semReq, SlaveVP *requestingSlv,
                   VSsSemEnv *semEnv )
  { VSsSemData    *semData;
    TransListElem *nextTransElem;
 
-   DEBUG__printf1(dbgRqstHdlr,"TransStart request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"TransStart request from processor %d",requestingSlv->slaveID)
 
       //check ordering of entering transactions is correct
-   semData = requestingPr->semanticData;
+   semData = requestingSlv->semanticData;
    if( semData->highestTransEntered > semReq->transID )
     {    //throw VMS exception, which shuts down VMS.
-     VMS_PI__throw_exception( "transID smaller than prev", requestingPr, NULL);
+     VMS_PI__throw_exception( "transID smaller than prev", requestingSlv, NULL);
     }
       //add this trans ID to the list of transactions entered -- check when
      // end a transaction
@@ -646,14 +694,14 @@
    if( transStruc->VPCurrentlyExecuting == NULL )
     {
-     transStruc->VPCurrentlyExecuting = requestingPr;
-     resume_slaveVP( requestingPr, semEnv );
+     transStruc->VPCurrentlyExecuting = requestingSlv;
+     resume_slaveVP( requestingSlv, semEnv );
     }
    else
     {    //note, might make future things cleaner if save request with VP and
        // add this trans ID to the linked list when gets out of queue.
        // but don't need for now, and lazy..
-     writePrivQ( requestingPr, transStruc->waitingVPQ );
+     writePrivQ( requestingSlv, transStruc->waitingVPQ );
     }
  }
 
@@ -672,38 +720,38 @@
  * resume both.
  */
 void
-handleTransEnd(VSsSemReq *semReq, SlaveVP *requestingPr, VSsSemEnv *semEnv)
+handleTransEnd(VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv)
  { VSsSemData    *semData;
-   SlaveVP       *waitingPr;
+   SlaveVP       *waitingSlv;
    VSsTrans      *transStruc;
    TransListElem *lastTrans;
 
-   DEBUG__printf1(dbgRqstHdlr,"TransEnd request from processor %d",requestingPr->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"TransEnd request from processor %d",requestingSlv->slaveID)
 
    transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
 
      //make sure transaction ended in same VP as started it.
-   if( transStruc->VPCurrentlyExecuting != requestingPr )
+   if( transStruc->VPCurrentlyExecuting != requestingSlv )
     {
-     VMS_PI__throw_exception( "trans ended in diff VP", requestingPr, NULL );
+     VMS_PI__throw_exception( "trans ended in diff VP", requestingSlv, NULL );
     }
 
      //make sure nesting is correct -- last ID entered should == this ID
-   semData = requestingPr->semanticData;
+   semData = requestingSlv->semanticData;
    lastTrans = semData->lastTransEntered;
    if( lastTrans->transID != semReq->transID )
     {
-     VMS_PI__throw_exception( "trans incorrectly nested", requestingPr, NULL );
+     VMS_PI__throw_exception( "trans incorrectly nested", requestingSlv, NULL );
    }
    semData->lastTransEntered = semData->lastTransEntered->nextTrans;
 
-   waitingPr = readPrivQ( transStruc->waitingVPQ );
-   transStruc->VPCurrentlyExecuting = waitingPr;
+   waitingSlv = readPrivQ( transStruc->waitingVPQ );
+   transStruc->VPCurrentlyExecuting = waitingSlv;
 
-   if( waitingPr != NULL )
-     resume_slaveVP( waitingPr, semEnv );
+   if( waitingSlv != NULL )
+     resume_slaveVP( waitingSlv, semEnv );
 
-   resume_slaveVP( requestingPr, semEnv );
+   resume_slaveVP( requestingSlv, semEnv );
  }