# HG changeset patch
# User Sean Halle
# Date 1343805533 25200
# Node ID 1780f6b00e3d9750dcc6ad3ead6d0e195a97b8c9
# Parent  8188c5b4bfd7cedb1e8c3dad7ab66ab23c781ac9
Not working -- checkpoint while making explicitly created VPs work, and DKU pattern

diff -r 8188c5b4bfd7 -r 1780f6b00e3d VSs.c
--- a/VSs.c	Fri Jul 13 17:35:49 2012 +0200
+++ b/VSs.c	Wed Aug 01 00:18:53 2012 -0700
@@ -73,8 +73,10 @@
  */
 void
 VSs__create_seed_slave_and_do_work( TopLevelFnPtr fnPtr, void *initData )
-   { VSsSemEnv *semEnv;
-     SlaveVP   *seedSlv;
+   { VSsSemEnv   *semEnv;
+     SlaveVP     *seedSlv;
+     VSsSemData  *semData;
+     VSsTaskStub *explPrTaskStub;
 
    VSs__init();      //normal multi-thd
@@ -83,12 +85,18 @@
    //VSs starts with one processor, which is put into initial environ,
    // and which then calls create() to create more, thereby expanding work
    seedSlv = VSs__create_slave_helper( fnPtr, initData,
-                                  semEnv, semEnv->nextCoreToGetNewSlv++ );
+                                 semEnv, semEnv->nextCoreToGetNewSlv++ );
 
-   //seedVP doesn't do tasks
-   ((VSsSemData *)seedSlv->semanticData)->needsTaskAssigned = FALSE;
+   //seed slave is an explicit processor, so make one of the special
+   // task stubs for explicit processors, and attach it to the slave
+   explPrTaskStub = create_expl_proc_task_stub( initData );
+
+   semData = (VSsSemData *)seedSlv->semanticData;
+   //seedVP already has a permanent task
+   semData->needsTaskAssigned = FALSE;
+   semData->taskStub = explPrTaskStub;
 
-   resume_slaveVP( seedSlv, semEnv );
+   resume_slaveVP( seedSlv, semEnv ); //returns right away, just queues Slv
 
    VMS_SS__start_the_work_then_wait_until_done();      //normal multi-thd
@@ -169,8 +177,7 @@
 void
 VSs__init_Helper()
  { VSsSemEnv *semanticEnv;
-   PrivQueueStruc **readyVPQs;
-   int coreIdx, i, j;
+   int32 i, coreNum, slotNum;
 
    //Hook up the semantic layer's plug-ins to the Master virt procr
    _VMSMasterEnv->requestHandler = &VSs__Request_Handler;
@@ -190,43 +197,35 @@
    semanticEnv->shutdownInitiated = FALSE;
    semanticEnv->coreIsDone = VMS_int__malloc( NUM_CORES * sizeof( bool32 ) );
 
-   for( i = 0; i < NUM_CORES; ++i )
-    { semanticEnv->coreIsDone[i] = FALSE;
-      for( j = 0; j < NUM_ANIM_SLOTS; ++j )
-       {
-         semanticEnv->idleSlv[i][j] = VMS_int__create_slaveVP(&idle_fn,NULL);
-         semanticEnv->idleSlv[i][j]->coreAnimatedBy = i;
+   //For each animation slot, there is an idle slave, and an initial
+   // slave assigned as the current-task-slave.  Create them here.
+   SlaveVP *idleSlv, *currTaskSlv;
+   for( coreNum = 0; coreNum < NUM_CORES; coreNum++ )
+    { semanticEnv->coreIsDone[coreNum] = FALSE; //use during shutdown
+
+      for( slotNum = 0; slotNum < NUM_ANIM_SLOTS; ++slotNum )
+       { idleSlv = VMS_int__create_slaveVP(&idle_fn,NULL);
+         idleSlv->coreAnimatedBy = coreNum;
+         idleSlv->animSlotAssignedTo = slotNum;
+         semanticEnv->idleSlv[coreNum][slotNum] = idleSlv;
+
+         currTaskSlv = VMS_int__create_slaveVP( &idle_fn, NULL );
+         currTaskSlv->coreAnimatedBy = coreNum;
+         currTaskSlv->animSlotAssignedTo = slotNum;
+         semanticEnv->currTaskSlvs[coreNum][slotNum] = currTaskSlv;
        }
    }
-   #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
-   semanticEnv->unitList = makeListOfArrays(sizeof(Unit),128);
-   semanticEnv->ctlDependenciesList = makeListOfArrays(sizeof(Dependency),128);
-   semanticEnv->commDependenciesList = makeListOfArrays(sizeof(Dependency),128);
-   semanticEnv->dynDependenciesList = makeListOfArrays(sizeof(Dependency),128);
-   semanticEnv->ntonGroupsInfo = makePrivDynArrayOfSize((void***)&(semanticEnv->ntonGroups),8);
-
-   semanticEnv->hwArcs = makeListOfArrays(sizeof(Dependency),128);
-   memset(semanticEnv->last_in_slot,0,sizeof(NUM_CORES * NUM_ANIM_SLOTS * sizeof(Unit)));
-   #endif
-
-   //create the ready queue, hash tables used for matching and so forth
-   readyVPQs = VMS_int__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
-
-   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
-    {
-      readyVPQs[ coreIdx ] = makeVMSQ();
-    }
-
-   semanticEnv->readyVPQs = readyVPQs;
-
-   semanticEnv->taskReadyQ = makeVMSQ();
-
-   semanticEnv->nextCoreToGetNewSlv = 0;
-   semanticEnv->numSlaveVP = 0;
+   //create the ready queues, hash tables used for matching and so forth
+   semanticEnv->slavesReadyToResumeQ = makeVMSQ();
+   semanticEnv->extraTaskSlvQ = makeVMSQ();
+   semanticEnv->taskReadyQ = makeVMSQ();
 
    semanticEnv->argPtrHashTbl = makeHashTable32( 16, &VMS_int__free );
    semanticEnv->commHashTbl = makeHashTable32( 16, &VMS_int__free );
+
+   semanticEnv->nextCoreToGetNewSlv = 0;
+
    //TODO: bug -- turn these arrays into dyn arrays to eliminate limit
    //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( );
@@ -239,6 +238,19 @@
      semanticEnv->fnSingletons[i].waitQ = makeVMSQ();
      semanticEnv->transactionStrucs[i].waitingVPQ = makeVMSQ();
    }
+
+   semanticEnv->numAdditionalSlvs = 0; //must be last
+
+   #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
+   semanticEnv->unitList = makeListOfArrays(sizeof(Unit),128);
+   semanticEnv->ctlDependenciesList = makeListOfArrays(sizeof(Dependency),128);
+   semanticEnv->commDependenciesList = makeListOfArrays(sizeof(Dependency),128);
+   semanticEnv->dynDependenciesList = makeListOfArrays(sizeof(Dependency),128);
+   semanticEnv->ntonGroupsInfo = makePrivDynArrayOfSize((void***)&(semanticEnv->ntonGroups),8);
+
+   semanticEnv->hwArcs = makeListOfArrays(sizeof(Dependency),128);
+   memset(semanticEnv->last_in_slot,0,sizeof(NUM_CORES * NUM_ANIM_SLOTS * sizeof(Unit)));
+   #endif
 }
diff -r 8188c5b4bfd7 -r 1780f6b00e3d VSs.h
--- a/VSs.h	Fri Jul 13 17:35:49 2012 +0200
+++ b/VSs.h	Wed Aug 01 00:18:53 2012 -0700
@@ -61,9 +61,7 @@
  }
 VSsPointerEntry;
 
-typedef struct _VSsTaskStub VSsTaskStub;
-
-struct _VSsTaskStub
+typedef struct
  { void       **args;  //ctld args must come first, as ptrs
    VSsTaskType *taskType;
@@ -74,17 +72,19 @@
    void*    parent;
    bool32   parentIsTask;
    int32    numChildTasks;
-   bool32   isWaiting;
+   bool32   isWaiting; 
  }
-;
+VSsTaskStub;
 
-typedef struct {
+typedef struct
+ {
    void*    parent;
    bool32   parentIsTask;
    int32    numChildTasks;
    bool32   isWaiting;
    SlaveVP *slaveAssignedTo;
-} VSsThreadInfo;
+ }
+VSsThreadInfo;
 
 typedef struct
  {
@@ -185,11 +185,13 @@
 typedef struct
  {
-   PrivQueueStruc **readyVPQs;
-   PrivQueueStruc  *taskReadyQ;  //Q: shared or local?
+   PrivQueueStruc **slavesReadyToResumeQ; //Shared (slaves not pinned)
+   PrivQueueStruc **extraTaskSlvQ;        //Shared
+   PrivQueueStruc  *taskReadyQ;           //Shared (tasks not pinned)
+   SlaveVP         *currTaskSlvs[NUM_CORES][NUM_ANIM_SLOTS];
    HashTable       *argPtrHashTbl;
    HashTable       *commHashTbl;
-   int32            numSlaveVP;
+   int32            numAdditionalSlvs;
    int32            nextCoreToGetNewSlv;
 
    int32            primitiveStartTime;
diff -r 8188c5b4bfd7 -r 1780f6b00e3d VSs_PluginFns.c
--- a/VSs_PluginFns.c	Fri Jul 13 17:35:49 2012 +0200
+++ b/VSs_PluginFns.c	Wed Aug 01 00:18:53 2012 -0700
@@ -15,26 +15,42 @@
 void
 resume_slaveVP( SlaveVP *slave, VSsSemEnv *semEnv );
 
-void
+inline void
 handleSemReq( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv );
 
-void
+inline void
 handleDissipate( SlaveVP *requestingSlv, VSsSemEnv *semEnv );
 
-void
+inline void
 handleCreate( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv );
 
 //============================== Assigner ==================================
 //
-/*For VSs, assigning a slave simply takes the next work-unit off the
- * ready-to-go work-unit queue and assigns it to the offered slot.
- *If the ready-to-go work-unit queue is empty, then nothing to assign
- * to the animation slot -- return FALSE to let Master loop know assigning
- * that slot failed.
+/*The assigner is complicated by having both tasks and explicitly created
+ * VPs, and by tasks being able to suspend.
+ *It can't use an explicit slave to animate a task because of stack
+ * pollution.  So, it has to keep the two kinds separate.
+ *Simplest way for the assigner logic is with a Q for extra empty task
+ * slaves, and another Q for slaves of both types that are ready to resume.
+ *
+ *Keep a current task slave for each anim slot.  The request handler manages
+ * it by pulling from the extraTaskSlvQ when a task suspends, or else
+ * creating a new task slave if taskSlvQ empty.
+ *Assigner only assigns a task to the current task slave for the slot.
+ *If no more tasks, then takes a ready to resume slave, if also none of them
+ * then dissipates extra task slaves (one per invocation).
+ *Shutdown condition is: must have no suspended tasks, and no suspended
+ * explicit slaves and no more tasks in taskQ.  Will only have the masters
+ * plus a current task slave for each slot.. detects this condition.
+ *
+ *Having the two types of slave is part of having communications directly
+ * between tasks, and tasks to explicit slaves, which requires the ability
+ * to suspend both kinds, but also to keep explicit slave stacks clean from
+ * the junk tasks are allowed to leave behind.
  */
 SlaveVP *
 VSs__assign_slaveVP_to_slot( void *_semEnv, AnimSlot *slot )
- { SlaveVP    *assignSlv;
+ { SlaveVP    *returnSlv;
   VSsSemEnv  *semEnv;
   VSsSemData *semData;
   int32       coreNum, slotNum;
@@ -44,111 +60,113 @@
 
   semEnv = (VSsSemEnv *)_semEnv;
 
-   /*At this point, could do an optimization -- have one slave for each slot
-    * and make it ALWAYS the one to assign to that slot -- so there is no
-    * read fromQ.  However, going to keep this compatible with other
-    * languages, like VOMP and SSR.  So, leave the normal slave fetch
-    * from readyQ.  For example, allows SSR constructs, to create extra
-    * slaves, and send communications direction between them, while still
-    * having the StarSs-style spawning of tasks.. so one of the tasks
-    * can now suspend and do more interesting things.. means keep a pool
-    * of slaves, and take one from pool when a task suspends.
-    */
-   //TODO: fix false sharing in array
-   assignSlv = readPrivQ( semEnv->readyVPQs[coreNum] );
-   if( assignSlv == NULL )
+
+   //Speculatively set the return slave to the current taskSlave
+   //TODO: false sharing ?  Always read..
+   returnSlv = semEnv->currTaskSlvs[coreNum][slotNum];
+
+/* request handlers do this now.. move it to there..
+   if( returnSlv == NULL )
     { //make a new slave to animate
      //This happens for the first task on the core and when all available
-     // slaves are blocked by constructs like send, or mutex, and so on..
-     assignSlv = VSs__create_slave_helper( NULL, NULL, semEnv, coreNum );
+     //slaves are blocked by constructs like send, or mutex, and so on.
+     returnSlv = VSs__create_slave_helper( NULL, NULL, semEnv, coreNum );
    }
-   semData = (VSsSemData *)assignSlv->semanticData;
-   //slave could be resuming a task in progress, check for this
-   if( semData->needsTaskAssigned )
-    { //no, not resuming, needs a task..
-      VSsTaskStub *newTaskStub;
-      SlaveVP     *extraSlv;
-      newTaskStub = readPrivQ( semEnv->taskReadyQ );
-      if( newTaskStub == NULL )
-       { //No task, so slave unused, so put it back and return "no-slave"
-         //But first check if have extra free slaves
-         extraSlv = readPrivQ( semEnv->readyVPQs[coreNum] );
-         if( extraSlv == NULL )
-          { //means no tasks and no slave on this core can generate more
-            //TODO: false sharing
-            if( semEnv->coreIsDone[coreNum] == FALSE)
+ */
+   semData = (VSsSemData *)returnSlv->semanticData;
+
+   //There is always a curr task slave, and it always needs a task
+   //(task slaves that are resuming are in resumeQ)
+   VSsTaskStub *newTaskStub;
+   SlaveVP     *extraSlv;
+   newTaskStub = readPrivQ( semEnv->taskReadyQ );
+   if( newTaskStub != NULL )
+    { //point slave to task's function, and mark slave as having task
+      VMS_int__reset_slaveVP_to_TopLvlFn( returnSlv,
+                           newTaskStub->taskType->fn, newTaskStub->args );
+      semData->taskStub = newTaskStub;
+      newTaskStub->slaveAssignedTo = returnSlv;
+      semData->needsTaskAssigned = FALSE;
+      if( semEnv->coreIsDone[coreNum] == TRUE ) //reads are higher perf
+          semEnv->coreIsDone[coreNum] = FALSE;  //don't just write always
+      goto ReturnTheSlv;
+    }
+   else
+    { //no task, so try to get a ready to resume slave
+      returnSlv = readPrivQ( semEnv->slavesReadyToResumeQ );
+      if( returnSlv != NULL ) //Yes, have a slave, so return it.
+       { returnSlv->coreAnimatedBy = coreNum;
+         if( semEnv->coreIsDone[coreNum] == TRUE ) //reads are higher perf
+             semEnv->coreIsDone[coreNum] = FALSE; //don't just write always
+         goto ReturnTheSlv;
+       }
+      //If get here, then no task, so check if have extra free slaves
+      extraSlv = readPrivQ( semEnv->extraTaskSlvQ );
+      if( extraSlv != NULL )
+       { //means have two slaves need tasks -- redundant, kill one
+         handleDissipate( extraSlv, semEnv );
+         //then return NULL
+         returnSlv = NULL;
+         goto ReturnTheSlv;
+       }
+      else
+       { //candidate for shutdown.. if all extras dissipated, and no tasks
+         // and no ready to resume slaves, then then no way to generate
+         // more tasks..
+         if( semEnv->numAdditionalSlvs == 0 ) //means none suspended
+          { //This core sees no way to generate more tasks, so say it
+            if( semEnv->coreIsDone[coreNum] == FALSE )
              { semEnv->numCoresDone += 1;
               semEnv->coreIsDone[coreNum] = TRUE;
               #ifdef DEBUG__TURN_ON_SEQUENTIAL_MODE
               semEnv->shutdownInitiated = TRUE;
               #else
               if( semEnv->numCoresDone == NUM_CORES )
-                { //means no cores have work, and none can generate more
+                { //means no cores have work, and none can generate more 
                  semEnv->shutdownInitiated = TRUE;
                }
               #endif
            }
-            //put slave back into Q and return NULL
-            writePrivQ( assignSlv, semEnv->readyVPQs[coreNum] );
-            assignSlv = NULL;
-            //except if shutdown has been initiated by this or other core
-            if(semEnv->shutdownInitiated)
-             { assignSlv = VMS_SS__create_shutdown_slave();
-             }
          }
-         else //extra slave exists, but no tasks for either slave
-          { if(((VSsSemData *)extraSlv->semanticData)->needsTaskAssigned == TRUE)
-             { //means have two slaves need tasks -- redundant, kill one
-               handleDissipate( extraSlv, semEnv );
-               //then put other back into Q and return NULL
-               writePrivQ( assignSlv, semEnv->readyVPQs[coreNum] );
-               assignSlv = NULL;
-             }
-            else
-             { //extra slave has work -- so take it instead
-               writePrivQ( assignSlv, semEnv->readyVPQs[coreNum] );
-               assignSlv = extraSlv;
-               //semData = (VSsSemData *)assignSlv->semanticData; Don't use
-             }
+         //return NULL.. no task and none to resume
+         returnSlv = NULL;
+         //except if shutdown has been initiated by this or other core
+         if(semEnv->shutdownInitiated)
+          { returnSlv = VMS_SS__create_shutdown_slave();
          }
-       }
-      else //have a new task for the slave.
-       { //point slave to task's function, and mark slave as having task
-         VMS_int__reset_slaveVP_to_TopLvlFn( assignSlv,
-                           newTaskStub->taskType->fn, newTaskStub->args );
-         semData->taskStub = newTaskStub;
-         newTaskStub->slaveAssignedTo = assignSlv;
-         semData->needsTaskAssigned = FALSE;
-       }
-    } //outcome: 1)slave didn't need a new task 2)slave just pointed at one
-      //  3)no tasks, so slave NULL
-
+         goto ReturnTheSlv; //don't need, but completes pattern
+       } //if( extraSlv != NULL )
+    } //if( newTaskStub == NULL )
+   //outcome: 1)slave was just pointed to task, 2)no tasks, so slave NULL
+
+ReturnTheSlv: //Nina, doing gotos to here should help with holistic..
+
   #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
-  if( assignSlv == NULL )
-   { assignSlv = semEnv->idleSlv[coreNum][slotNum];
+  if( returnSlv == NULL )
+   { returnSlv = semEnv->idleSlv[coreNum][slotNum];
     if(semEnv->shutdownInitiated)
-     { assignSlv = VMS_SS__create_shutdown_slave();
+     { returnSlv = VMS_SS__create_shutdown_slave();
      }
     //things that would normally happen in resume(), but these VPs
    // never go there
-    assignSlv->assignCount++; //Somewhere here!
+    returnSlv->assignCount++; //Somewhere here!
     Unit newu;
-    newu.vp = assignSlv->slaveID;
-    newu.task = assignSlv->assignCount;
+    newu.vp = returnSlv->slaveID;
+    newu.task = returnSlv->assignCount;
     addToListOfArrays(Unit,newu,semEnv->unitList);
 
-    if (assignSlv->assignCount > 1)
+    if (returnSlv->assignCount > 1)
      { Dependency newd;
-       newd.from_vp = assignSlv->slaveID;
-       newd.from_task = assignSlv->assignCount - 1;
-       newd.to_vp = assignSlv->slaveID;
-       newd.to_task = assignSlv->assignCount;
+       newd.from_vp = returnSlv->slaveID;
+       newd.from_task = returnSlv->assignCount - 1;
+       newd.to_vp = returnSlv->slaveID;
+       newd.to_task = returnSlv->assignCount;
        addToListOfArrays(Dependency, newd ,semEnv->ctlDependenciesList);
      }
    }
   #endif
   #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
-  if( assignSlv != NULL )
+  if( returnSlv != NULL )
    { //assignSlv->numTimesAssigned++;
     Unit prev_in_slot =
         semEnv->last_in_slot[coreNum * NUM_ANIM_SLOTS + slotNum];
@@ -156,23 +174,24 @@
      { Dependency newd;
        newd.from_vp = prev_in_slot.vp;
        newd.from_task = prev_in_slot.task;
-       newd.to_vp = assignSlv->slaveID;
-       newd.to_task = assignSlv->assignCount;
+       newd.to_vp = returnSlv->slaveID;
+       newd.to_task = returnSlv->assignCount;
        addToListOfArrays(Dependency,newd,semEnv->hwArcs);
      }
-    prev_in_slot.vp = assignSlv->slaveID;
-    prev_in_slot.task = assignSlv->assignCount;
+    prev_in_slot.vp = returnSlv->slaveID;
+    prev_in_slot.task = returnSlv->assignCount;
     semEnv->last_in_slot[coreNum * NUM_ANIM_SLOTS + slotNum] = prev_in_slot;
    }
   #endif
-  return( assignSlv );
+  return( returnSlv );
 }
 
 //=========================== Request Handler ============================
 //
 /*
+ * (BTW not inline because invoked indirectly via a pointer)
  */
 void
 VSs__Request_Handler( SlaveVP *requestingSlv, void *_semEnv )
@@ -186,14 +205,14 @@
   while( req != NULL )
    {
     switch( req->reqType )
-     { case semantic:     handleSemReq(         req, requestingSlv, semEnv);
+     { case semantic:     handleSemReq(         req, requestingSlv, semEnv); 
        break;
-       case createReq:    handleCreate(         req, requestingSlv, semEnv);
+       case createReq:    handleCreate(         req, requestingSlv, semEnv); 
        break;
-       case dissipate:    handleDissipate(           requestingSlv, semEnv);
+       case dissipate:    handleDissipate(           requestingSlv, semEnv); 
        break;
       case VMSSemantic:  VMS_PI__handle_VMSSemReq(req, requestingSlv, semEnv,
-                                          (ResumeSlvFnPtr) &resume_slaveVP);
+                                         (ResumeSlvFnPtr) &resume_slaveVP);
        break;
       default:
        break;
@@ -205,7 +224,7 @@
 }
 
-void
+inline void
 handleSemReq( VMSReqst *req, SlaveVP *reqSlv, VSsSemEnv *semEnv )
  { VSsSemReq *semReq;
 
@@ -213,22 +232,22 @@
   if( semReq == NULL ) return;
 
   switch( semReq->reqType )  //sem handlers are all in other file
    {
-    case submit_task:       handleSubmitTask(   semReq,         semEnv);
+    case submit_task:       handleSubmitTask(   semReq,        semEnv);
+     break;
+    case end_task:          handleEndTask(      semReq,        semEnv);
     break;
-    case end_task:          handleEndTask(      semReq,         semEnv);
+    case send_type_to:      handleSendTypeTo(   semReq,        semEnv);
     break;
-    case send_type_to:      handleSendTypeTo(   semReq,         semEnv);
+    case send_from_to:      handleSendFromTo(   semReq,        semEnv);
     break;
-    case send_from_to:      handleSendFromTo(   semReq,         semEnv);
+    case receive_type_to:   handleReceiveTypeTo(semReq,        semEnv);
     break;
-    case receive_type_to:   handleReceiveTypeTo(semReq,         semEnv);
+    case receive_from_to:   handleReceiveFromTo(semReq,        semEnv);
     break;
-    case receive_from_to:   handleReceiveFromTo(semReq,         semEnv);
-     break;
+    case taskwait:          handleTaskwait(     semReq, reqSlv, semEnv);
+     break;
     //====================================================================
-    case taskwait:          handleTaskwait(semReq, reqSlv, semEnv);
-     break;
    case malloc_req:        handleMalloc(       semReq, reqSlv, semEnv);
     break;
    case free_req:          handleFree(         semReq, reqSlv, semEnv);
@@ -255,10 +274,14 @@
 
 //=========================== VMS Request Handlers ==============================
 /*SlaveVP dissipate (NOT task-end!)
 */
-void
+inline void
 handleDissipate( SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  {
-   DEBUG__printf1(dbgRqstHdlr,"Dissipate request from processor %d",requestingSlv->slaveID)
+   DEBUG__printf1(dbgRqstHdlr,"Dissipate request from processor %d",
+                  requestingSlv->slaveID)
+
+   semEnv->numAdditionalSlvs -= 1;
+
    //free any semantic data allocated to the virt procr
    VMS_PI__free( requestingSlv->semanticData );
 
@@ -268,7 +291,7 @@
 
 /*Re-use this in the entry-point fn
 */
-SlaveVP *
+inline SlaveVP *
 VSs__create_slave_helper( TopLevelFnPtr fnPtr, void *initData,
                          VSsSemEnv *semEnv, int32 coreToAssignOnto )
  { SlaveVP *newSlv;
 
@@ -277,7 +300,7 @@
    //This is running in master, so use internal version
    newSlv = VMS_PI__create_slaveVP( fnPtr, initData );
 
-   semEnv->numSlaveVP += 1;
+   semEnv->numAdditionalSlvs += 1;
 
    semData = VMS_PI__malloc( sizeof(VSsSemData) );
    semData->highestTransEntered = -1;
@@ -298,7 +321,7 @@
    newSlv->coreAnimatedBy = 0;
 
   #else
-
+   //Assigning slaves to cores is part of SSR code..
   if(coreToAssignOnto < 0 || coreToAssignOnto >= NUM_CORES )
    { //out-of-range, so round-robin assignment
      newSlv->coreAnimatedBy = semEnv->nextCoreToGetNewSlv;
@@ -317,9 +340,17 @@
 
    return newSlv;
 }
 
-/*SlaveVP create (NOT task create!)
+/*This has been removed, because have changed things.. the only way to
+ * create a slaveVP now is to either do an explicit create in the app, or
+ * else for req hdlr to create it when a task suspends if no extras are
+ * free.
+ *So, only have handleExplCreate for now.. and have the req hdlrs use the
+ * helper
+ *SlaveVP create (NOT task create!)
+ *
 */
-void
+/*
+inline void
 handleCreate( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
  { VSsSemReq *semReq;
   SlaveVP *newSlv;
 
@@ -327,12 +358,13 @@
   semReq = VMS_PI__take_sem_reqst_from( req );
 
-   newSlv = VSs__create_slave_helper( semReq->fnPtr, semReq->initData, semEnv,
-                                      semReq->coreToAssignOnto );
+   newSlv = VSs__create_slave_helper( semReq->fnPtr, semReq->initData,
+                                      semEnv, semReq->coreToAssignOnto );
 
   ((VSsSemData*)newSlv->semanticData)->threadInfo->parent = requestingSlv;
 
-   DEBUG__printf2(dbgRqstHdlr,"Create from: %d, new VP: %d", requestingSlv->slaveID, newSlv->slaveID)
+   DEBUG__printf2(dbgRqstHdlr,"Create from: %d, new VP: %d",
+                  requestingSlv->slaveID, newSlv->slaveID)
 
  #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
   Dependency newd;
@@ -340,7 +372,75 @@
   newd.from_task = requestingSlv->assignCount;
   newd.to_vp = newSlv->slaveID;
   newd.to_task = 1;
-  //addToListOfArraysDependency(newd,semEnv->commDependenciesList);
+  addToListOfArrays(Dependency,newd,semEnv->commDependenciesList);
+  #endif
+
+   //For VSs, caller needs ptr to created processor returned to it
+   requestingSlv->dataRetFromReq = newSlv;
+
+   resume_slaveVP( requestingSlv, semEnv );
+   resume_slaveVP( newSlv,        semEnv );
+ }
+*/
+
+VSsTaskStub *
+create_expl_proc_task_stub( void *initData )
+ { VSsTaskStub *newStub;
+
+   newStub = VMS_PI__malloc( sizeof(VSsTaskStub) );
+   newStub->numBlockingProp = 0;
+   newStub->slaveAssignedTo = NULL; //set later
+   newStub->taskType = NULL;  //Identifies as an explicit processor
+   newStub->ptrEntries = NULL;
+   newStub->args = initData;
+   newStub->numChildTasks = 0;
+   newStub->parent = NULL;
+   newStub->isWaiting = FALSE;
+   newStub->taskID = NULL;
+   newStub->parentIsTask = FALSE;
+
+   return newStub;
+ }
+
+/*Application invokes this when it explicitly creates a "processor" via the
+ * "SSR__create_processor()" command.
+ *
+ *Make everything in VSs be a task.  An explicitly created VP is just a
+ * suspendable task, and the seedVP is also a suspendable task.
+ *So, here, create a task Stub.
+ * Then, see if there are any extra slaveVPs hanging around, and if not,
+ * call the helper to make a new one.
+ * Then, put the task stub into the slave's semantic Data.
+ *When the slave calls dissipate, have to recycle the task stub. 
+ */
+inline void
+handleExplCreate( VMSReqst *req, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
+ { VSsSemReq  *semReq;
+   SlaveVP    *newSlv;
+   VSsSemData *semData;
+
+   semReq = VMS_PI__take_sem_reqst_from( req );
+
+   taskStub = create_expl_proc_task_stub( initData );
+
+   newSlv = readPrivQ( semEnv->extraTaskSlvQ );
+   if( newSlv == NULL )
+       newSlv = VSs__create_slave_helper( semReq->fnPtr, semReq->initData,
+                                          semEnv, semReq->coreToAssignOnto );
+
+   semData = ( (VSsSemData *)newSlv->semanticData );
+   semData->threadInfo->parent = requestingSlv;
+   semData->taskStub = newPrTaskStub;
+
+   DEBUG__printf2(dbgRqstHdlr,"Create from: %d, new VP: %d",
+                  requestingSlv->slaveID, newSlv->slaveID)
+
+   #ifdef HOLISTIC__TURN_ON_OBSERVE_UCC
+   Dependency newd;
+   newd.from_vp = requestingSlv->slaveID;
+   newd.from_task = requestingSlv->assignCount;
+   newd.to_vp = newSlv->slaveID;
+   newd.to_task = 1;
   addToListOfArrays(Dependency,newd,semEnv->commDependenciesList);
   #endif
 
@@ -356,6 +456,9 @@
 void
 resume_slaveVP( SlaveVP *slave, VSsSemEnv *semEnv )
  {
+   //both suspended tasks and suspended explicit slaves resumed with this
+   writePrivQ( slave, semEnv->slavesReadyToResumeQ );
+
  #ifdef HOLISTIC__TURN_ON_PERF_COUNTERS
 /*
   int lastRecordIdx = slave->counter_history_array_info->numInArray -1;
@@ -379,5 +482,4 @@
       addToListOfArrays(Dependency, newd ,semEnv->ctlDependenciesList);
     }
  #endif
-   writePrivQ( slave, semEnv->readyVPQs[ slave->coreAnimatedBy] );
 }
diff -r 8188c5b4bfd7 -r 1780f6b00e3d VSs_Request_Handlers.c
--- a/VSs_Request_Handlers.c	Fri Jul 13 17:35:49 2012 +0200
+++ b/VSs_Request_Handlers.c	Wed Aug 01 00:18:53 2012 -0700
@@ -212,7 +212,7 @@
 *
 *That should be it -- that should work.
 */
-void
+inline void
 handleSubmitTask( VSsSemReq *semReq, VSsSemEnv *semEnv )
  { uint32 key[3];
   HashEntry *rawHashEntry; //has char *, but use with uint32 *
@@ -335,11 +335,6 @@
   return;
 }
 
-inline void
-handleSubmitTaskWID( VSsSemReq *semReq, VSsSemEnv *semEnv)
- {
- }
-
 
/* ========================== end of task ===========================
 *
@@ -376,7 +371,7 @@
 *TODO: Might be safe to delete an entry when task ends and waiterQ empty
 * and no readers and no writers..
 */
-void
+inline void
 handleEndTask( VSsSemReq *semReq, VSsSemEnv *semEnv )
  { uint32 key[3];
   HashEntry *rawHashEntry;
@@ -402,21 +397,24 @@
   ptrEntries = endingTaskStub->ptrEntries; //saved in stub when create
 
    /* Check if parent was waiting on this task */
-   if(endingTaskStub->parentIsTask){
-       VSsTaskStub* parent = (VSsTaskStub*) endingTaskStub->parent;
-       parent->numChildTasks--;
-       if(parent->isWaiting && parent->numChildTasks == 0){
-           parent->isWaiting = FALSE;
-           resume_slaveVP( parent->slaveAssignedTo, semEnv );
+   if(endingTaskStub->parentIsTask)
+    { VSsTaskStub* parent = (VSsTaskStub*) endingTaskStub->parent;
+      parent->numChildTasks--;
+      if(parent->isWaiting && parent->numChildTasks == 0)
+       {
+         parent->isWaiting = FALSE;
+         resume_slaveVP( parent->slaveAssignedTo, semEnv );
       }
-   } else {
-       VSsThreadInfo* parent = (VSsThreadInfo*) endingTaskStub->parent;
-       parent->numChildTasks--;
-       if(parent->isWaiting && parent->numChildTasks == 0){
-           parent->isWaiting = FALSE;
-           resume_slaveVP( parent->slaveAssignedTo, semEnv );
+    }
+   else
+    { VSsThreadInfo* parent = (VSsThreadInfo*) endingTaskStub->parent;
+      parent->numChildTasks--;
+      if(parent->isWaiting && parent->numChildTasks == 0)
+       {
+         parent->isWaiting = FALSE;
+         resume_slaveVP( parent->slaveAssignedTo, semEnv );
       }
-   }
+    } 
 
    /*The task's controlled arguments are processed one by one.
    *Processing an argument means getting arg-pointer's entry.
@@ -549,7 +547,7 @@
 * separate tasks can send to the same receiver, and doing hash on the
 * receive task, so they will stack up.
 */
-void
+inline void
 handleSendTypeTo( VSsSemReq *semReq, VSsSemEnv *semEnv )
  { SlaveVP *senderSlv, *receiverSlv;
   int32   *senderID,  *receiverID;
@@ -645,7 +643,7 @@
 
/*Looks like can make single handler for both sends..
 */
//TODO: combine both send handlers into single handler
-void
+inline void
 handleSendFromTo( VSsSemReq *semReq, VSsSemEnv *semEnv)
  { SlaveVP *senderSlv, *receiverSlv;
   int32   *senderID,  *receiverID;
@@ -721,7 +719,7 @@
 
//
-void
+inline void
 handleReceiveTypeTo( VSsSemReq *semReq, VSsSemEnv *semEnv)
  { SlaveVP *senderSlv, *receiverSlv;
   int32   *receiverID;
@@ -803,7 +801,7 @@
 
/*
 */
-void
+inline void
 handleReceiveFromTo( VSsSemReq *semReq, VSsSemEnv *semEnv)
  { SlaveVP *senderSlv, *receiverSlv;
   int32   *senderID, *receiverID;
@@ -867,8 +865,8 @@
 }
 
//==========================================================================
-void
-handleTaskwait( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv )
+inline void
+handleTaskwait( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv)
  {
   VSsTaskStub* requestingTaskStub;
diff -r 8188c5b4bfd7 -r 1780f6b00e3d VSs_Request_Handlers.h
--- a/VSs_Request_Handlers.h	Fri Jul 13 17:35:49 2012 +0200
+++ b/VSs_Request_Handlers.h	Wed Aug 01 00:18:53 2012 -0700
@@ -19,6 +19,17 @@
 inline void
 handleEndTask( VSsSemReq *semReq, VSsSemEnv *semEnv);
 inline void
+handleSendTypeTo( VSsSemReq *semReq, VSsSemEnv *semEnv);
+inline void
+handleSendFromTo( VSsSemReq *semReq, VSsSemEnv *semEnv);
+inline void
+handleReceiveTypeTo( VSsSemReq *semReq, VSsSemEnv *semEnv);
+inline void
+handleReceiveFromTo( VSsSemReq *semReq, VSsSemEnv *semEnv);
+inline void
+handleTaskwait(VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv);
+
+inline void
 handleMalloc( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv);
 inline void
 handleFree( VSsSemReq *semReq, SlaveVP *requestingSlv, VSsSemEnv *semEnv );
diff -r 8188c5b4bfd7 -r 1780f6b00e3d __brch__default
--- a/__brch__default	Fri Jul 13 17:35:49 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,4 +0,0 @@
-This branch is for the project structure defined Jan 2012.. the #includes reflect this directory structure.
-
-More importantly, the MC_shared version of VMS requires a separat malloc implemeted by VMS code.. so this branch has modified the library to use the VMS-specific malloc.
-
diff -r 8188c5b4bfd7 -r 1780f6b00e3d __brch__dev_expl_VP_and_DKU
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/__brch__dev_expl_VP_and_DKU	Wed Aug 01 00:18:53 2012 -0700
@@ -0,0 +1,4 @@
+This branch is for the project structure defined Jan 2012.. the #includes reflect this directory structure.
+
+More importantly, the MC_shared version of VMS requires a separat malloc implemeted by VMS code.. so this branch has modified the library to use the VMS-specific malloc.
+
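The comment block added above VSs__assign_slaveVP_to_slot spells out the new selection order: give the slot's current task slave a task from taskReadyQ if one is available, otherwise resume a slave from slavesReadyToResumeQ, otherwise dissipate one slave from extraTaskSlvQ, and only when numAdditionalSlvs is also zero treat the core as unable to generate more work. The stand-alone sketch below mirrors that order with plain counters; the field names echo the patch, but every type and function here is an illustrative stand-in, not the VMS/VSs API.

/* Minimal, self-contained sketch of the selection order described in the
 * assigner comment above.  Queue contents are modeled as plain counts; the
 * names mirror the patch, but nothing here is the real VMS/VSs code.       */
#include <stdio.h>

typedef enum { ASSIGN_TASK_TO_CURR_SLV,   /* hand curr task slave a new task */
               RESUME_SUSPENDED_SLV,      /* take a slave from the resume Q  */
               DISSIPATE_EXTRA_SLV,       /* too many empty task slaves      */
               CORE_HAS_NOTHING           /* candidate for shutdown          */
             } AssignOutcome;

typedef struct { int tasksInReadyQ;        /* stand-in for taskReadyQ          */
                 int slavesReadyToResume;  /* stand-in for slavesReadyToResumeQ*/
                 int extraTaskSlvs;        /* stand-in for extraTaskSlvQ       */
                 int numAdditionalSlvs;    /* suspended slaves that could still
                                            * generate more tasks             */
               } SchedCounts;

AssignOutcome pick_for_slot( SchedCounts *s )
 { if( s->tasksInReadyQ > 0 )       return ASSIGN_TASK_TO_CURR_SLV;
   if( s->slavesReadyToResume > 0 ) return RESUME_SUSPENDED_SLV;
   if( s->extraTaskSlvs > 0 )       return DISSIPATE_EXTRA_SLV; /* one per call */
   /* no tasks, nothing to resume, no extras left: if numAdditionalSlvs is
    * also zero, no slave remains that could create work -- the per-core half
    * of the shutdown condition the patch describes                          */
   return CORE_HAS_NOTHING;
 }

int main( void )
 { SchedCounts s = { 0, 1, 2, 3 };                  /* no tasks, one resumable */
   printf( "outcome: %d\n", pick_for_slot( &s ) );  /* prints 1: resume a slave */
   return 0;
 }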
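create_expl_proc_task_stub identifies an explicitly created processor by handing it a permanent task stub whose taskType is NULL, so the rest of the layer can tell explicit processors apart from ordinary tasks. The toy program below illustrates only that convention; the struct and helpers are cut-down stand-ins under assumed field names, not the real VSsTaskStub or the VMS allocation calls.

/* Toy illustration of the convention the patch introduces: an explicit
 * processor carries a permanent stub whose taskType is NULL.  ToyTaskStub
 * is a simplified stand-in, not the actual VSsTaskStub declaration.       */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct { void *taskType;       /* NULL ==> explicit processor       */
                 void *args;           /* initData for explicit processors  */
                 int   numChildTasks;
                 bool  isWaiting;
               } ToyTaskStub;

ToyTaskStub *make_expl_proc_stub( void *initData )
 { ToyTaskStub *stub = malloc( sizeof(*stub) );
   stub->taskType      = NULL;         /* marks it as an explicit processor */
   stub->args          = initData;
   stub->numChildTasks = 0;
   stub->isWaiting     = false;
   return stub;
 }

bool is_explicit_processor( const ToyTaskStub *stub )
 { return stub->taskType == NULL; }

int main( void )
 { ToyTaskStub *seed = make_expl_proc_stub( NULL );
   printf( "explicit processor? %d\n", is_explicit_processor( seed ) );
   free( seed );
   return 0;
 }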