# HG changeset patch
# User Some Random Person
# Date 1330636851 28800
# Node ID e5d4d5871ac97bd7e7ca58a9fbf1cf226f5c21e9
# Parent c1c36be9c47a0399613aff797e0c88f34d15aa0c
half-done update to common_ancesor VMS version.. in middle

diff -r c1c36be9c47a -r e5d4d5871ac9 .hgeol
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/.hgeol Thu Mar 01 13:20:51 2012 -0800
@@ -0,0 +1,12 @@
+
+[patterns]
+**.py = native
+**.txt = native
+**.c = native
+**.h = native
+**.cpp = native
+**.java = native
+**.sh = native
+**.pl = native
+**.jpg = bin
+**.gif = bin
diff -r c1c36be9c47a -r e5d4d5871ac9 DESIGN_NOTES__VPThread_lib.txt
--- a/DESIGN_NOTES__VPThread_lib.txt Tue Jul 26 16:37:26 2011 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,82 +0,0 @@
-
-Implement VPThread this way:
-
-We implemented a subset of PThreads functionality, called VMSPThd, that
-includes: mutex_lock, mutex_unlock, cond_wait, and cond_notify, which we name
-as VMSPThd__mutex_lock and so forth. \ All VMSPThd functions take a reference
-to the AppVP that is animating the function call, in addition to any other
-parameters.
-
-A mutex variable is an integer, returned by VMSPThd__mutex_create(), which is
-used inside the request handler as a key to look up an entry in a hash table
-that lives in the SemanticEnv. \ Such an entry has a field holding a
-reference to the AppVP that currently owns the lock, and a queue of AppVPs
-waiting to acquire the lock. \
-
-Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a
-request. \ Recall that all request sends cause the suspension of the AppVP
-that is animating the library call that generates the request; in this case
-the AppVP animating VMSPThd__mutex_lock() is suspended. \ The request
-includes a reference to that animating AppVP, and the mutex integer value. 
-\ When the request reaches the request handler, the mutex integer is used as
-key to look up the hash entry, then if the owner field is null (or the same
-as the AppVP in the request), the AppVP in the request is placed into the
-owner field, and that AppVP is queued to be scheduled for re-animation.
-\ However, if a different AppVP is listed in the owner field, then the AppVP
-in the request is added to the queue of those trying to acquire. \ Notice
-that this is a purely sequential algorithm that systematic reasoning can be
-used on.
-
-VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the
-request handler to queue for re-animation the AppVP that animated the call.
-\ It also pops the queue of AppVPs waiting to acquire the lock, and writes
-the AppVP that comes out as the current owner of the lock and queues that
-AppVP for re-animation (unless the popped value is null, in which case the
-current owner is just set to null).
-
-Implementing condition variables takes a similar approach, in that
-VMSPThd__init_cond() returns an integer that is then used to look up an entry
-in a hash table, where the entry contains a queue of AppVPs waiting on the
-condition variable. \ VMSPThd__cond_wait() generates a request that pushes
-the AppVP into the queue, while VMSPThd__cond_signal() takes a wait request
-from the queue.
-
-Notice that this is again a purely sequential algorithm, and sidesteps issues
-such as ``simultaneous'' wait and signal requests -- the wait and signal get
-serialized automatically, even though they take place at the same instant of
-program virtual time. \
-
-It is the fact of having a program virtual time that allows ``virtual
-simultaneous'' actions to be handled outside of the virtual time. 
\ That
-ability to escape outside of the virtual time is what enables a
-sequential algorithm to handle the simultaneity that is at the heart of
-what makes implementing locks in physical time so intricately tricky. \
-
-What's nice about this approach is that the design and implementation are
-simple and straightforward. \ It took just X days to design, implement, and
-debug, and is in a form that should be amenable to proof of freedom from race
-conditions, given a correct implementation of VMS. \ The hash-table-based
-approach also makes it reasonably high performance, with (essentially) no
-slowdown when the number of locks or number of AppVPs grows large.
-
-===========================
-Behavior:
-Cond variables are half of a two-piece mechanism. The other half is a mutex.
- Every cond var owns a mutex -- the two intrinsically work
- together, as a pair. The mutex must only be used with the condition var
- and not used on its own in other ways.
-
-cond_wait is called with a cond-var and its mutex.
-The animating processor must have acquired the mutex before calling cond_wait.
-The call adds the animating processor to the queue associated with the cond
-variable and then calls mutex_unlock on the mutex.
-
-cond_signal can only be called after acquiring the cond var's mutex. It is
-called with the cond-var.
- The call takes the next processor from the condition-var's wait queue and
- transfers it to the waiting-for-lock queue of the cond-var's mutex.
-The processor that called cond_signal next has to perform a mutex_unlock
- on the cond-var's mutex -- that, finally, lets the waiting processor acquire
- the mutex and proceed. 
diff -r c1c36be9c47a -r e5d4d5871ac9 DESIGN_NOTES__Vthread_lib.txt
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/DESIGN_NOTES__Vthread_lib.txt Thu Mar 01 13:20:51 2012 -0800
@@ -0,0 +1,82 @@
+
+Implement VPThread this way:
+
+We implemented a subset of PThreads functionality, called VMSPThd, that
+includes: mutex_lock, mutex_unlock, cond_wait, and cond_notify, which we name
+as VMSPThd__mutex_lock and so forth. \ All VMSPThd functions take a reference
+to the AppVP that is animating the function call, in addition to any other
+parameters.
+
+A mutex variable is an integer, returned by VMSPThd__mutex_create(), which is
+used inside the request handler as a key to look up an entry in a hash table
+that lives in the SemanticEnv. \ Such an entry has a field holding a
+reference to the AppVP that currently owns the lock, and a queue of AppVPs
+waiting to acquire the lock. \
+
+Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a
+request. \ Recall that all request sends cause the suspension of the AppVP
+that is animating the library call that generates the request; in this case
+the AppVP animating VMSPThd__mutex_lock() is suspended. \ The request
+includes a reference to that animating AppVP, and the mutex integer value.
+\ When the request reaches the request handler, the mutex integer is used as
+key to look up the hash entry, then if the owner field is null (or the same
+as the AppVP in the request), the AppVP in the request is placed into the
+owner field, and that AppVP is queued to be scheduled for re-animation.
+\ However, if a different AppVP is listed in the owner field, then the AppVP
+in the request is added to the queue of those trying to acquire. \ Notice
+that this is a purely sequential algorithm that systematic reasoning can be
+used on.
+
+VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the
+request handler to queue for re-animation the AppVP that animated the call. 
+\ It also pops the queue of AppVPs waiting to acquire the lock, and writes
+the AppVP that comes out as the current owner of the lock and queues that
+AppVP for re-animation (unless the popped value is null, in which case the
+current owner is just set to null).
+
+Implementing condition variables takes a similar approach, in that
+VMSPThd__init_cond() returns an integer that is then used to look up an entry
+in a hash table, where the entry contains a queue of AppVPs waiting on the
+condition variable. \ VMSPThd__cond_wait() generates a request that pushes
+the AppVP into the queue, while VMSPThd__cond_signal() takes a wait request
+from the queue.
+
+Notice that this is again a purely sequential algorithm, and sidesteps issues
+such as ``simultaneous'' wait and signal requests -- the wait and signal get
+serialized automatically, even though they take place at the same instant of
+program virtual time. \
+
+It is the fact of having a program virtual time that allows ``virtual
+simultaneous'' actions to be handled outside of the virtual time. \ That
+ability to escape outside of the virtual time is what enables a
+sequential algorithm to handle the simultaneity that is at the heart of
+what makes implementing locks in physical time so intricately tricky. \
+
+What's nice about this approach is that the design and implementation are
+simple and straightforward. \ It took just X days to design, implement, and
+debug, and is in a form that should be amenable to proof of freedom from race
+conditions, given a correct implementation of VMS. \ The hash-table-based
+approach also makes it reasonably high performance, with (essentially) no
+slowdown when the number of locks or number of AppVPs grows large.
+
+===========================
+Behavior:
+Cond variables are half of a two-piece mechanism. The other half is a mutex.
+ Every cond var owns a mutex -- the two intrinsically work
+ together, as a pair. 
The mutex must only be used with the condition var + and not used on its own in other ways. + +cond_wait is called with a cond-var and its mutex. +The animating processor must have acquired the mutex before calling cond_wait +The call adds the animating processor to the queue associated with the cond +variable and then calls mutex_unlock on the mutex. + +cond_signal can only be called after acquiring the cond var's mutex. It is +called with the cond-var. + The call takes the next processor from the condition-var's wait queue and + transfers it to the waiting-for-lock queue of the cond-var's mutex. +The processor that called the cond_signal next has to perform a mutex_unlock + on the cond-var's mutex -- that, finally, lets the waiting processor acquire + the mutex and proceed. diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread.h --- a/VPThread.h Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,256 +0,0 @@ -/* - * Copyright 2009 OpenSourceStewardshipFoundation.org - * Licensed under GNU General Public License version 2 - * - * Author: seanhalle@yahoo.com - * - */ - -#ifndef _VPThread_H -#define _VPThread_H - -#include "VMS/VMS.h" -#include "VMS/Queue_impl/PrivateQueue.h" -#include "VMS/DynArray/DynArray.h" - - -/*This header defines everything specific to the VPThread semantic plug-in - */ - - -//=========================================================================== -#define INIT_NUM_MUTEX 10000 -#define INIT_NUM_COND 10000 - -#define NUM_STRUCS_IN_SEM_ENV 1000 -//=========================================================================== - -//=========================================================================== -typedef struct _VPThreadSemReq VPThdSemReq; -typedef void (*PtrToAtomicFn ) ( void * ); //executed atomically in master -//=========================================================================== - - -/*WARNING: assembly hard-codes position of endInstrAddr as first field - */ -typedef struct - { - void *endInstrAddr; 
- int32 hasBeenStarted; - int32 hasFinished; - PrivQueueStruc *waitQ; - } -VPThdSingleton; - -/*Semantic layer-specific data sent inside a request from lib called in app - * to request handler called in MasterLoop - */ -enum VPThreadReqType - { - make_mutex = 1, - mutex_lock, - mutex_unlock, - make_cond, - cond_wait, - cond_signal, - make_procr, - malloc_req, - free_req, - singleton_fn_start, - singleton_fn_end, - singleton_data_start, - singleton_data_end, - atomic, - trans_start, - trans_end - }; - -struct _VPThreadSemReq - { enum VPThreadReqType reqType; - VirtProcr *requestingPr; - int32 mutexIdx; - int32 condIdx; - - void *initData; - VirtProcrFnPtr fnPtr; - int32 coreToScheduleOnto; - - size_t sizeToMalloc; - void *ptrToFree; - - int32 singletonID; - VPThdSingleton **singletonPtrAddr; - - PtrToAtomicFn fnToExecInMaster; - void *dataForFn; - - int32 transID; - } -/* VPThreadSemReq */; - - -typedef struct - { - VirtProcr *VPCurrentlyExecuting; - PrivQueueStruc *waitingVPQ; - } -VPThdTrans; - - -typedef struct - { - int32 mutexIdx; - VirtProcr *holderOfLock; - PrivQueueStruc *waitingQueue; - } -VPThdMutex; - - -typedef struct - { - int32 condIdx; - PrivQueueStruc *waitingQueue; - VPThdMutex *partnerMutex; - } -VPThdCond; - -typedef struct _TransListElem TransListElem; -struct _TransListElem - { - int32 transID; - TransListElem *nextTrans; - }; -//TransListElem - -typedef struct - { - int32 highestTransEntered; - TransListElem *lastTransEntered; - } -VPThdSemData; - - -typedef struct - { - //Standard stuff will be in most every semantic env - PrivQueueStruc **readyVPQs; - int32 numVirtPr; - int32 nextCoreToGetNewPr; - int32 primitiveStartTime; - - //Specific to this semantic layer - VPThdMutex **mutexDynArray; - PrivDynArrayInfo *mutexDynArrayInfo; - - VPThdCond **condDynArray; - PrivDynArrayInfo *condDynArrayInfo; - - void *applicationGlobals; - - //fix limit on num with dynArray - VPThdSingleton fnSingletons[NUM_STRUCS_IN_SEM_ENV]; - - VPThdTrans 
transactionStrucs[NUM_STRUCS_IN_SEM_ENV]; - } -VPThdSemEnv; - - -//=========================================================================== - -inline void -VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fn, void *initData ); - -//======================= - -inline VirtProcr * -VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData, - VirtProcr *creatingPr ); - -inline VirtProcr * -VPThread__create_thread_with_affinity( VirtProcrFnPtr fnPtr, void *initData, - VirtProcr *creatingPr, int32 coreToScheduleOnto ); - -inline void -VPThread__dissipate_thread( VirtProcr *procrToDissipate ); - -//======================= -inline void -VPThread__set_globals_to( void *globals ); - -inline void * -VPThread__give_globals(); - -//======================= -inline int32 -VPThread__make_mutex( VirtProcr *animPr ); - -inline void -VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr ); - -inline void -VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr ); - - -//======================= -inline int32 -VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr); - -inline void -VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr); - -inline void * -VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr ); - - -//======================= -void -VPThread__start_fn_singleton( int32 singletonID, VirtProcr *animPr ); - -void -VPThread__end_fn_singleton( int32 singletonID, VirtProcr *animPr ); - -void -VPThread__start_data_singleton( VPThdSingleton **singeltonAddr, VirtProcr *animPr ); - -void -VPThread__end_data_singleton( VPThdSingleton **singletonAddr, VirtProcr *animPr ); - -void -VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster, - void *data, VirtProcr *animPr ); - -void -VPThread__start_transaction( int32 transactionID, VirtProcr *animPr ); - -void -VPThread__end_transaction( int32 transactionID, VirtProcr *animPr ); - - - -//========================= Internal use only ============================= -inline void 
-VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv ); - -inline VirtProcr * -VPThread__schedule_virt_procr( void *_semEnv, int coreNum ); - -//======================= -inline void -VPThread__free_semantic_request( VPThdSemReq *semReq ); - -//======================= - -void * -VPThread__malloc( size_t sizeToMalloc, VirtProcr *animPr ); - -void -VPThread__init(); - -void -VPThread__cleanup_after_shutdown(); - -void inline -resume_procr( VirtProcr *procr, VPThdSemEnv *semEnv ); - -#endif /* _VPThread_H */ - diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread.s --- a/VPThread.s Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,21 +0,0 @@ - -//Assembly code takes the return addr off the stack and saves -// into the singleton. The first field in the singleton is the -// "endInstrAddr" field, and the return addr is at 0x4(%ebp) -.globl asm_save_ret_to_singleton -asm_save_ret_to_singleton: - movq 0x8(%rbp), %rax #get ret address, ebp is the same as in the calling function - movq %rax, (%rdi) #write ret addr to endInstrAddr field - ret - - -//Assembly code changes the return addr on the stack to the one -// saved into the singleton by the end-singleton-fn -//The stack's return addr is at 0x4(%%ebp) -.globl asm_write_ret_from_singleton -asm_write_ret_from_singleton: - movq (%rdi), %rax #get endInstrAddr field - movq %rax, 0x8(%rbp) #write return addr to the stack of the caller - ret - - diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread_PluginFns.c --- a/VPThread_PluginFns.c Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,192 +0,0 @@ -/* - * Copyright 2010 OpenSourceCodeStewardshipFoundation - * - * Licensed under BSD - */ - -#include -#include -#include - -#include "VMS/Queue_impl/PrivateQueue.h" -#include "VPThread.h" -#include "VPThread_Request_Handlers.h" -#include "VPThread_helper.h" - -//=========================== Local Fn Prototypes =========================== - -void inline -handleSemReq( 
VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv ); - -inline void -handleDissipate( VirtProcr *requestingPr, VPThdSemEnv *semEnv ); - -inline void -handleCreate( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv ); - - -//============================== Scheduler ================================== -// -/*For VPThread, scheduling a slave simply takes the next work-unit off the - * ready-to-go work-unit queue and assigns it to the slaveToSched. - *If the ready-to-go work-unit queue is empty, then nothing to schedule - * to the slave -- return FALSE to let Master loop know scheduling that - * slave failed. - */ -char __Scheduler[] = "FIFO Scheduler"; //Gobal variable for name in saved histogram -VirtProcr * -VPThread__schedule_virt_procr( void *_semEnv, int coreNum ) - { VirtProcr *schedPr; - VPThdSemEnv *semEnv; - - semEnv = (VPThdSemEnv *)_semEnv; - - schedPr = readPrivQ( semEnv->readyVPQs[coreNum] ); - //Note, using a non-blocking queue -- it returns NULL if queue empty - - return( schedPr ); - } - - - -//=========================== Request Handler ============================= -// -/*Will get requests to send, to receive, and to create new processors. - * Upon send, check the hash to see if a receive is waiting. - * Upon receive, check hash to see if a send has already happened. - * When other is not there, put in. When other is there, the comm. - * completes, which means the receiver P gets scheduled and - * picks up right after the receive request. So make the work-unit - * and put it into the queue of work-units ready to go. - * Other request is create a new Processor, with the function to run in the - * Processor, and initial data. 
- */ -void -VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv ) - { VPThdSemEnv *semEnv; - VMSReqst *req; - - semEnv = (VPThdSemEnv *)_semEnv; - - req = VMS__take_next_request_out_of( requestingPr ); - - while( req != NULL ) - { - switch( req->reqType ) - { case semantic: handleSemReq( req, requestingPr, semEnv); - break; - case createReq: handleCreate( req, requestingPr, semEnv); - break; - case dissipate: handleDissipate( requestingPr, semEnv); - break; - case VMSSemantic: VMS__handle_VMSSemReq(req, requestingPr, semEnv, - (ResumePrFnPtr)&resume_procr); - break; - default: - break; - } - - req = VMS__take_next_request_out_of( requestingPr ); - } //while( req != NULL ) - } - - -void inline -handleSemReq( VMSReqst *req, VirtProcr *reqPr, VPThdSemEnv *semEnv ) - { VPThdSemReq *semReq; - - semReq = VMS__take_sem_reqst_from(req); - if( semReq == NULL ) return; - switch( semReq->reqType ) - { - case make_mutex: handleMakeMutex( semReq, semEnv); - break; - case mutex_lock: handleMutexLock( semReq, semEnv); - break; - case mutex_unlock: handleMutexUnlock(semReq, semEnv); - break; - case make_cond: handleMakeCond( semReq, semEnv); - break; - case cond_wait: handleCondWait( semReq, semEnv); - break; - case cond_signal: handleCondSignal( semReq, semEnv); - break; - case malloc_req: handleMalloc( semReq, reqPr, semEnv); - break; - case free_req: handleFree( semReq, reqPr, semEnv); - break; - case singleton_fn_start: handleStartFnSingleton(semReq, reqPr, semEnv); - break; - case singleton_fn_end: handleEndFnSingleton( semReq, reqPr, semEnv); - break; - case singleton_data_start:handleStartDataSingleton(semReq,reqPr,semEnv); - break; - case singleton_data_end: handleEndDataSingleton(semReq, reqPr, semEnv); - break; - case atomic: handleAtomic( semReq, reqPr, semEnv); - break; - case trans_start: handleTransStart( semReq, reqPr, semEnv); - break; - case trans_end: handleTransEnd( semReq, reqPr, semEnv); - break; - } - } - -//=========================== VMS 
Request Handlers =========================== -// -inline void -handleDissipate( VirtProcr *requestingPr, VPThdSemEnv *semEnv ) - { - //free any semantic data allocated to the virt procr - VMS__free( requestingPr->semanticData ); - - //Now, call VMS to free_all AppVP state -- stack and so on - VMS__dissipate_procr( requestingPr ); - - semEnv->numVirtPr -= 1; - if( semEnv->numVirtPr == 0 ) - { //no more work, so shutdown - VMS__shutdown(); - } - } - -inline void -handleCreate( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv ) - { VPThdSemReq *semReq; - VirtProcr *newPr; - - //========================= MEASUREMENT STUFF ====================== - Meas_startCreate - //================================================================== - - semReq = VMS__take_sem_reqst_from( req ); - - newPr = VPThread__create_procr_helper( semReq->fnPtr, semReq->initData, - semEnv, semReq->coreToScheduleOnto); - - //For VPThread, caller needs ptr to created processor returned to it - requestingPr->dataRetFromReq = newPr; - - resume_procr( newPr, semEnv ); - resume_procr( requestingPr, semEnv ); - - //========================= MEASUREMENT STUFF ====================== - Meas_endCreate - #ifdef MEAS__TIME_PLUGIN - #ifdef MEAS__SUB_CREATE - subIntervalFromHist( startStamp, endStamp, - _VMSMasterEnv->reqHdlrHighTimeHist ); - #endif - #endif - //================================================================== - } - - -//=========================== Helper ============================== -void inline -resume_procr( VirtProcr *procr, VPThdSemEnv *semEnv ) - { - writePrivQ( procr, semEnv->readyVPQs[ procr->coreAnimatedBy] ); - } - -//=========================================================================== \ No newline at end of file diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread_Request_Handlers.c --- a/VPThread_Request_Handlers.c Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,445 +0,0 @@ -/* - * Copyright 2010 OpenSourceCodeStewardshipFoundation - * 
- * Licensed under BSD - */ - -#include -#include -#include - -#include "VMS/VMS.h" -#include "VMS/Queue_impl/PrivateQueue.h" -#include "VMS/Hash_impl/PrivateHash.h" -#include "VPThread.h" -#include "VMS/vmalloc.h" - - - -//=============================== Mutexes ================================= -/*The semantic request has a mutexIdx value, which acts as index into array. - */ -inline void -handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv) - { VPThdMutex *newMutex; - VirtProcr *requestingPr; - - requestingPr = semReq->requestingPr; - newMutex = VMS__malloc( sizeof(VPThdMutex) ); - newMutex->waitingQueue = makeVMSPrivQ( requestingPr ); - newMutex->holderOfLock = NULL; - - //The mutex struc contains an int that identifies it -- use that as - // its index within the array of mutexes. Add the new mutex to array. - newMutex->mutexIdx = addToDynArray( newMutex, semEnv->mutexDynArrayInfo ); - - //Now communicate the mutex's identifying int back to requesting procr - semReq->requestingPr->dataRetFromReq = (void*)newMutex->mutexIdx; //mutexIdx is 32 bit - - //re-animate the requester - resume_procr( requestingPr, semEnv ); - } - - -inline void -handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv) - { VPThdMutex *mutex; - //=================== Deterministic Replay ====================== - #ifdef RECORD_DETERMINISTIC_REPLAY - - #endif - //================================================================= - Meas_startMutexLock - //lookup mutex struc, using mutexIdx as index - mutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; - - //see if mutex is free or not - if( mutex->holderOfLock == NULL ) //none holding, give lock to requester - { - mutex->holderOfLock = semReq->requestingPr; - - //re-animate requester, now that it has the lock - resume_procr( semReq->requestingPr, semEnv ); - } - else //queue up requester to wait for release of lock - { - writePrivQ( semReq->requestingPr, mutex->waitingQueue ); - } - Meas_endMutexLock - } - -/* - */ -inline void 
-handleMutexUnlock( VPThdSemReq *semReq, VPThdSemEnv *semEnv) - { VPThdMutex *mutex; - - Meas_startMutexUnlock - //lookup mutex struc, using mutexIdx as index - mutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; - - //set new holder of mutex-lock to be next in queue (NULL if empty) - mutex->holderOfLock = readPrivQ( mutex->waitingQueue ); - - //if have new non-NULL holder, re-animate it - if( mutex->holderOfLock != NULL ) - { - resume_procr( mutex->holderOfLock, semEnv ); - } - - //re-animate the releaser of the lock - resume_procr( semReq->requestingPr, semEnv ); - Meas_endMutexUnlock - } - -//=========================== Condition Vars ============================== -/*The semantic request has the cond-var value and mutex value, which are the - * indexes into the array. Not worrying about having too many mutexes or - * cond vars created, so using array instead of hash table, for speed. - */ - - -/*Make cond has to be called with the mutex that the cond is paired to - * Don't have to implement this way, but was confusing learning cond vars - * until deduced that each cond var owns a mutex that is used only for - * interacting with that cond var. So, make this pairing explicit. - */ -inline void -handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv) - { VPThdCond *newCond; - VirtProcr *requestingPr; - - requestingPr = semReq->requestingPr; - newCond = VMS__malloc( sizeof(VPThdCond) ); - newCond->partnerMutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; - - newCond->waitingQueue = makeVMSPrivQ(); - - //The cond struc contains an int that identifies it -- use that as - // its index within the array of conds. Add the new cond to array. 
- newCond->condIdx = addToDynArray( newCond, semEnv->condDynArrayInfo ); - - //Now communicate the cond's identifying int back to requesting procr - semReq->requestingPr->dataRetFromReq = (void*)newCond->condIdx; //condIdx is 32 bit - - //re-animate the requester - resume_procr( requestingPr, semEnv ); - } - - -/*Mutex has already been paired to the cond var, so don't need to send the - * mutex, just the cond var. Don't have to do this, but want to bitch-slap - * the designers of Posix standard ; ) - */ -inline void -handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv) - { VPThdCond *cond; - VPThdMutex *mutex; - - Meas_startCondWait - //get cond struc out of array of them that's in the sem env - cond = semEnv->condDynArray[ semReq->condIdx ]; - - //add requester to queue of wait-ers - writePrivQ( semReq->requestingPr, cond->waitingQueue ); - - //unlock mutex -- can't reuse above handler 'cause not queuing releaser - mutex = cond->partnerMutex; - mutex->holderOfLock = readPrivQ( mutex->waitingQueue ); - - if( mutex->holderOfLock != NULL ) - { - resume_procr( mutex->holderOfLock, semEnv ); - } - Meas_endCondWait - } - - -/*Note that have to implement this such that guarantee the waiter is the one - * that gets the lock - */ -inline void -handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv) - { VPThdCond *cond; - VPThdMutex *mutex; - VirtProcr *waitingPr; - - Meas_startCondSignal - //get cond struc out of array of them that's in the sem env - cond = semEnv->condDynArray[ semReq->condIdx ]; - - //take next waiting procr out of queue - waitingPr = readPrivQ( cond->waitingQueue ); - - //transfer waiting procr to wait queue of mutex - // mutex is guaranteed to be held by signalling procr, so no check - mutex = cond->partnerMutex; - pushPrivQ( waitingPr, mutex->waitingQueue ); //is first out when read - - //re-animate the signalling procr - resume_procr( semReq->requestingPr, semEnv ); - Meas_endCondSignal - } - - - 
-//============================================================================ -// -/* - */ -void inline -handleMalloc(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv) - { void *ptr; - - //========================= MEASUREMENT STUFF ====================== - #ifdef MEAS__TIME_PLUGIN - int32 startStamp, endStamp; - saveLowTimeStampCountInto( startStamp ); - #endif - //================================================================== - ptr = VMS__malloc( semReq->sizeToMalloc ); - requestingPr->dataRetFromReq = ptr; - resume_procr( requestingPr, semEnv ); - //========================= MEASUREMENT STUFF ====================== - #ifdef MEAS__TIME_PLUGIN - saveLowTimeStampCountInto( endStamp ); - subIntervalFromHist( startStamp, endStamp, - _VMSMasterEnv->reqHdlrHighTimeHist ); - #endif - //================================================================== - } - -/* - */ -void inline -handleFree( VPThdSemReq *semReq, VirtProcr *requestingPr, VPThdSemEnv *semEnv) - { - //========================= MEASUREMENT STUFF ====================== - #ifdef MEAS__TIME_PLUGIN - int32 startStamp, endStamp; - saveLowTimeStampCountInto( startStamp ); - #endif - //================================================================== - VMS__free( semReq->ptrToFree ); - resume_procr( requestingPr, semEnv ); - //========================= MEASUREMENT STUFF ====================== - #ifdef MEAS__TIME_PLUGIN - saveLowTimeStampCountInto( endStamp ); - subIntervalFromHist( startStamp, endStamp, - _VMSMasterEnv->reqHdlrHighTimeHist ); - #endif - //================================================================== - } - - -//=========================================================================== -// -/*Uses ID as index into array of flags. If flag already set, resumes from - * end-label. Else, sets flag and resumes normally. 
- */ -void inline -handleStartSingleton_helper( VPThdSingleton *singleton, VirtProcr *reqstingPr, - VPThdSemEnv *semEnv ) - { - if( singleton->hasFinished ) - { //the code that sets the flag to true first sets the end instr addr - reqstingPr->dataRetFromReq = singleton->endInstrAddr; - resume_procr( reqstingPr, semEnv ); - return; - } - else if( singleton->hasBeenStarted ) - { //singleton is in-progress in a diff slave, so wait for it to finish - writePrivQ(reqstingPr, singleton->waitQ ); - return; - } - else - { //hasn't been started, so this is the first attempt at the singleton - singleton->hasBeenStarted = TRUE; - reqstingPr->dataRetFromReq = 0x0; - resume_procr( reqstingPr, semEnv ); - return; - } - } -void inline -handleStartFnSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ) - { VPThdSingleton *singleton; - - singleton = &(semEnv->fnSingletons[ semReq->singletonID ]); - handleStartSingleton_helper( singleton, requestingPr, semEnv ); - } -void inline -handleStartDataSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ) - { VPThdSingleton *singleton; - - if( *(semReq->singletonPtrAddr) == NULL ) - { singleton = VMS__malloc( sizeof(VPThdSingleton) ); - singleton->waitQ = makeVMSPrivQ(); - singleton->endInstrAddr = 0x0; - singleton->hasBeenStarted = FALSE; - singleton->hasFinished = FALSE; - *(semReq->singletonPtrAddr) = singleton; - } - else - singleton = *(semReq->singletonPtrAddr); - handleStartSingleton_helper( singleton, requestingPr, semEnv ); - } - - -void inline -handleEndSingleton_helper( VPThdSingleton *singleton, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ) - { PrivQueueStruc *waitQ; - int32 numWaiting, i; - VirtProcr *resumingPr; - - if( singleton->hasFinished ) - { //by definition, only one slave should ever be able to run end singleton - // so if this is true, is an error - //VMS__throw_exception( "singleton code ran twice", requestingPr, NULL); - } - - singleton->hasFinished = TRUE; - 
waitQ = singleton->waitQ; - numWaiting = numInPrivQ( waitQ ); - for( i = 0; i < numWaiting; i++ ) - { //they will resume inside start singleton, then jmp to end singleton - resumingPr = readPrivQ( waitQ ); - resumingPr->dataRetFromReq = singleton->endInstrAddr; - resume_procr( resumingPr, semEnv ); - } - - resume_procr( requestingPr, semEnv ); - - } -void inline -handleEndFnSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ) - { - VPThdSingleton *singleton; - - singleton = &(semEnv->fnSingletons[ semReq->singletonID ]); - handleEndSingleton_helper( singleton, requestingPr, semEnv ); - } -void inline -handleEndDataSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ) - { - VPThdSingleton *singleton; - - singleton = *(semReq->singletonPtrAddr); - handleEndSingleton_helper( singleton, requestingPr, semEnv ); - } - - -/*This executes the function in the masterVP: it takes the function - * pointer out of the request, calls it, then resumes the VP. - */ -void inline -handleAtomic(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv) - { - semReq->fnToExecInMaster( semReq->dataForFn ); - resume_procr( requestingPr, semEnv ); - } - -/*First, it looks at the VP's semantic data, to see the highest transactionID - * that VP already has entered. If the current ID is not larger, it throws an - * exception stating a bug in the code. - *Otherwise it puts the current ID - * there, and adds the ID to a linked list of IDs entered -- the list is - * used to check that exits are properly ordered. - *Next it uses transactionID as index into an array of transaction - * structures. - *If the "VP_currently_executing" field is non-null, then put the requesting VP - * into the queue in the struct. (At some point a holder will request - * end-transaction, which will take this VP from the queue and resume it.) - *If NULL, then write the requesting VP into the field and resume. 
 - */ -void inline -handleTransStart( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ) - { VPThdSemData *semData; - TransListElem *nextTransElem; - - //check ordering of entering transactions is correct - semData = requestingPr->semanticData; - if( semData->highestTransEntered > semReq->transID ) - { //throw VMS exception, which shuts down VMS. - VMS__throw_exception( "transID smaller than prev", requestingPr, NULL); - } - //add this trans ID to the list of transactions entered -- checked when - // ending a transaction - semData->highestTransEntered = semReq->transID; - nextTransElem = VMS__malloc( sizeof(TransListElem) ); - nextTransElem->transID = semReq->transID; - nextTransElem->nextTrans = semData->lastTransEntered; - semData->lastTransEntered = nextTransElem; - - //get the structure for this transaction ID - VPThdTrans * - transStruc = &(semEnv->transactionStrucs[ semReq->transID ]); - - if( transStruc->VPCurrentlyExecuting == NULL ) - { - transStruc->VPCurrentlyExecuting = requestingPr; - resume_procr( requestingPr, semEnv ); - } - else - { //note, might make future things cleaner if save request with VP and - // add this trans ID to the linked list when gets out of queue. - // but don't need for now, and lazy.. - writePrivQ( requestingPr, transStruc->waitingVPQ ); - } - } - - -/*Use the trans ID to get the transaction structure from the array. - *Look at VP_currently_executing to be sure it's the same as the requesting VP. - * If different, throw an exception, stating there's a bug in the code. - *Next, take the first element off the list of entered transactions. - * Check to be sure the ending transaction is the same ID as the next on - * the list. If not, it is incorrectly nested, so throw an exception. - * - *Next, get from the queue in the structure. - *If it's empty, set the VP_currently_executing field to NULL and resume the - * requesting VP. - *If it gets something, set VP_currently_executing to the VP from the queue, then - * resume both. 
- */ -void inline -handleTransEnd( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv) - { VPThdSemData *semData; - VirtProcr *waitingPr; - VPThdTrans *transStruc; - TransListElem *lastTrans; - - transStruc = &(semEnv->transactionStrucs[ semReq->transID ]); - - //make sure transaction ended in same VP as started it. - if( transStruc->VPCurrentlyExecuting != requestingPr ) - { - VMS__throw_exception( "trans ended in diff VP", requestingPr, NULL ); - } - - //make sure nesting is correct -- last ID entered should == this ID - semData = requestingPr->semanticData; - lastTrans = semData->lastTransEntered; - if( lastTrans->transID != semReq->transID ) - { - VMS__throw_exception( "trans incorrectly nested", requestingPr, NULL ); - } - - semData->lastTransEntered = semData->lastTransEntered->nextTrans; - - - waitingPr = readPrivQ( transStruc->waitingVPQ ); - transStruc->VPCurrentlyExecuting = waitingPr; - - if( waitingPr != NULL ) - resume_procr( waitingPr, semEnv ); - - resume_procr( requestingPr, semEnv ); - } diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread_Request_Handlers.h --- a/VPThread_Request_Handlers.h Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,57 +0,0 @@ -/* - * Copyright 2009 OpenSourceStewardshipFoundation.org - * Licensed under GNU General Public License version 2 - * - * Author: seanhalle@yahoo.com - * - */ - -#ifndef _VPThread_REQ_H -#define _VPThread_REQ_H - -#include "VPThread.h" - -/*This header defines everything specific to the VPThread semantic plug-in - */ - -inline void -handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv); -inline void -handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv); -inline void -handleMutexUnlock(VPThdSemReq *semReq, VPThdSemEnv *semEnv); -inline void -handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv); -inline void -handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv); -inline void -handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv); 
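The transaction handlers above (handleTransStart / handleTransEnd) work only because the master loop serializes every request, so the grant-or-queue logic needs no locks or atomics at all. A minimal standalone sketch of that pattern, with hypothetical names and plain ints standing in for VPs (0 means "no VP"):

```c
#include <assert.h>

/* Hypothetical, simplified stand-in for VPThdTrans: one owner plus a
 * small FIFO of waiters.  Only the master loop ever calls these
 * handlers, so the algorithm is purely sequential -- no locks needed. */
#define MAX_WAIT 8

typedef struct
  { int owner;              /* 0 means no VP currently holds the transaction */
    int waitQ[MAX_WAIT];
    int head, tail;         /* FIFO indices; tail - head == number waiting */
  }
Trans;

/* Mirrors handleTransStart: returns 1 if vp may run now, 0 if it was queued. */
static int
trans_start( Trans *t, int vp )
  {
    if( t->owner == 0 ) { t->owner = vp; return 1; }    /* grant and resume */
    t->waitQ[ t->tail++ % MAX_WAIT ] = vp;              /* else suspend: queue */
    return 0;
  }

/* Mirrors handleTransEnd: returns the VP to resume next, or 0 if none wait. */
static int
trans_end( Trans *t, int vp )
  {
    assert( t->owner == vp );              /* ended by a non-owner is a bug */
    if( t->head == t->tail ) { t->owner = 0; return 0; }
    t->owner = t->waitQ[ t->head++ % MAX_WAIT ];        /* hand off ownership */
    return t->owner;
  }
```

Because requests are handled one at a time in the master, a "simultaneous" start and end get serialized automatically, which is the same property the real handlers rely on.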
-void inline -handleMalloc(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv); -void inline -handleFree( VPThdSemReq *semReq, VirtProcr *requestingPr, VPThdSemEnv *semEnv); -inline void -handleStartFnSingleton( VPThdSemReq *semReq, VirtProcr *reqstingPr, - VPThdSemEnv *semEnv ); -inline void -handleEndFnSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ); -inline void -handleStartDataSingleton( VPThdSemReq *semReq, VirtProcr *reqstingPr, - VPThdSemEnv *semEnv ); -inline void -handleEndDataSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ); -void inline -handleAtomic( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv); -void inline -handleTransStart( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv ); -void inline -handleTransEnd( VPThdSemReq *semReq, VirtProcr *requestingPr, - VPThdSemEnv *semEnv); - - -#endif /* _VPThread_REQ_H */ - diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread_helper.c --- a/VPThread_helper.c Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,48 +0,0 @@ - -#include - -#include "VMS/VMS.h" -#include "VPThread.h" - -/*Re-use this in the entry-point fn - */ -inline VirtProcr * -VPThread__create_procr_helper( VirtProcrFnPtr fnPtr, void *initData, - VPThdSemEnv *semEnv, int32 coreToScheduleOnto ) - { VirtProcr *newPr; - VPThdSemData *semData; - - //This is running in master, so use internal version - newPr = VMS__create_procr( fnPtr, initData ); - - semEnv->numVirtPr += 1; - - semData = VMS__malloc( sizeof(VPThdSemData) ); - semData->highestTransEntered = -1; - semData->lastTransEntered = NULL; - - newPr->semanticData = semData; - - //=================== Assign new processor to a core ===================== - #ifdef SEQUENTIAL - newPr->coreAnimatedBy = 0; - - #else - - if(coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES ) - { //out-of-range, so round-robin assignment - newPr->coreAnimatedBy = 
semEnv->nextCoreToGetNewPr; - - if( semEnv->nextCoreToGetNewPr >= NUM_CORES - 1 ) - semEnv->nextCoreToGetNewPr = 0; - else - semEnv->nextCoreToGetNewPr += 1; - } - else //core num in-range, so use it - { newPr->coreAnimatedBy = coreToScheduleOnto; - } - #endif - //======================================================================== - - return newPr; - } \ No newline at end of file diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread_helper.h --- a/VPThread_helper.h Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,19 +0,0 @@ -/* - * File: VPThread_helper.h - * Author: msach - * - * Created on June 10, 2011, 12:20 PM - */ - -#include "VMS/VMS.h" -#include "VPThread.h" - -#ifndef VPTHREAD_HELPER_H -#define VPTHREAD_HELPER_H - -inline VirtProcr * -VPThread__create_procr_helper( VirtProcrFnPtr fnPtr, void *initData, - VPThdSemEnv *semEnv, int32 coreToScheduleOnto ); - -#endif /* VPTHREAD_HELPER_H */ - diff -r c1c36be9c47a -r e5d4d5871ac9 VPThread_lib.c --- a/VPThread_lib.c Tue Jul 26 16:37:26 2011 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,626 +0,0 @@ -/* - * Copyright 2010 OpenSourceCodeStewardshipFoundation - * - * Licensed under BSD - */ - -#include -#include -#include - -#include "VMS/VMS.h" -#include "VPThread.h" -#include "VPThread_helper.h" -#include "VMS/Queue_impl/PrivateQueue.h" -#include "VMS/Hash_impl/PrivateHash.h" - - -//========================================================================== - -void -VPThread__init(); - -void -VPThread__init_Seq(); - -void -VPThread__init_Helper(); - - -//=========================================================================== - - -/*These are the library functions *called in the application* - * - *There's a pattern for the outside sequential code to interact with the - * VMS_HW code. - *The VMS_HW system is inside a boundary.. every VPThread system is in its - * own directory that contains the functions for each of the processor types. 
- * One of the processor types is the "seed" processor that starts the - * cascade of creating all the processors that do the work. - *So, in the directory is a file called "EntryPoint.c" that contains the - * function, named appropriately to the work performed, that the outside - * sequential code calls. This function follows a pattern: - *1) it calls VPThread__init() - *2) it creates the initial data for the seed processor, which is passed - * in to the function - *3) it creates the seed VPThread processor, with the data to start it with. - *4) it calls startVPThreadThenWaitUntilWorkDone - *5) it gets the returnValue from the transfer struc and returns that - * from the function - * - *For now, a new VPThread system has to be created via VPThread__init every - * time an entry point function is called -- later, might add letting the - * VPThread system be created once, and let all the entry points just reuse - * it -- want to be as simple as possible now, and see by using what makes - * sense for later.. - */ - - - -//=========================================================================== - -/*This is the "border crossing" function -- the thing that crosses from the - * outside world, into the VMS_HW world. It initializes and starts up the - * VMS system, then creates one processor from the specified function and - * puts it into the readyQ. From that point, that one function is resp. - * for creating all the other processors, that then create others, and so - * forth. - *When all the processors, including the seed, have dissipated, then this - * function returns. The results will have been written by side-effect via - * pointers read from, or written into initData. 
- * - *NOTE: no Threads should exist in the outside program that might touch - * any of the data reachable from initData passed in to here - */ -void -VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fnPtr, void *initData ) - { VPThdSemEnv *semEnv; - VirtProcr *seedPr; - - #ifdef SEQUENTIAL - VPThread__init_Seq(); //debug sequential exe - #else - VPThread__init(); //normal multi-thd - #endif - semEnv = _VMSMasterEnv->semanticEnv; - - //VPThread starts with one processor, which is put into initial environ, - // and which then calls create() to create more, thereby expanding work - seedPr = VPThread__create_procr_helper( fnPtr, initData, semEnv, -1 ); - - resume_procr( seedPr, semEnv ); - - #ifdef SEQUENTIAL - VMS__start_the_work_then_wait_until_done_Seq(); //debug sequential exe - #else - VMS__start_the_work_then_wait_until_done(); //normal multi-thd - #endif - - VPThread__cleanup_after_shutdown(); - } - - -inline int32 -VPThread__giveMinWorkUnitCycles( float32 percentOverhead ) - { - return MIN_WORK_UNIT_CYCLES; - } - -inline int32 -VPThread__giveIdealNumWorkUnits() - { - return NUM_SCHED_SLOTS * NUM_CORES; - } - -inline int32 -VPThread__give_number_of_cores_to_schedule_onto() - { - return NUM_CORES; - } - -/*For now, use TSC -- later, make these two macros with assembly that first - * saves jump point, and second jumps back several times to get reliable time - */ -inline void -VPThread__start_primitive() - { saveLowTimeStampCountInto( ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))-> - primitiveStartTime ); - } - -/*Just quick and dirty for now -- make reliable later - * will want this to jump back several times -- to be sure cache is warm - * because don't want comm time included in calc-time measurement -- and - * also to throw out any "weird" values due to OS interrupt or TSC rollover - */ -inline int32 -VPThread__end_primitive_and_give_cycles() - { int32 endTime, startTime; - //TODO: fix by repeating time-measurement - saveLowTimeStampCountInto( endTime 
); - startTime=((VPThdSemEnv*)(_VMSMasterEnv->semanticEnv))->primitiveStartTime; - return (endTime - startTime); - } - -//=========================================================================== -// -/*Initializes all the data-structures for a VPThread system -- but doesn't - * start it running yet! - * - * - *This sets up the semantic layer over the VMS system - * - *First, calls VMS_Setup, then creates own environment, making it ready - * for creating the seed processor and then starting the work. - */ -void -VPThread__init() - { - VMS__init(); - //masterEnv, a global var, now is partially set up by init_VMS - - //Moved here from VMS.c because this is not parallel construct independent - MakeTheMeasHists(); - - VPThread__init_Helper(); - } - -#ifdef SEQUENTIAL -void -VPThread__init_Seq() - { - VMS__init_Seq(); - flushRegisters(); - //masterEnv, a global var, now is partially set up by init_VMS - - VPThread__init_Helper(); - } -#endif - -void -VPThread__init_Helper() - { VPThdSemEnv *semanticEnv; - PrivQueueStruc **readyVPQs; - int coreIdx, i; - - //Hook up the semantic layer's plug-ins to the Master virt procr - _VMSMasterEnv->requestHandler = &VPThread__Request_Handler; - _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr; - - //create the semantic layer's environment (all its data) and add to - // the master environment - semanticEnv = VMS__malloc( sizeof( VPThdSemEnv ) ); - _VMSMasterEnv->semanticEnv = semanticEnv; - - //create the ready queue - readyVPQs = VMS__malloc( NUM_CORES * sizeof(PrivQueueStruc *) ); - - for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ ) - { - readyVPQs[ coreIdx ] = makeVMSPrivQ(); - } - - semanticEnv->readyVPQs = readyVPQs; - - semanticEnv->numVirtPr = 0; - semanticEnv->nextCoreToGetNewPr = 0; - - semanticEnv->mutexDynArrayInfo = - makePrivDynArrayOfSize( (void*)&(semanticEnv->mutexDynArray), INIT_NUM_MUTEX ); - - semanticEnv->condDynArrayInfo = - makePrivDynArrayOfSize( (void*)&(semanticEnv->condDynArray), 
INIT_NUM_COND ); - - //TODO: bug -- turn these arrays into dyn arrays to eliminate limit - //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( ); - //semanticEnv->transactionStrucs = makeDynArrayInfo( ); - for( i = 0; i < NUM_STRUCS_IN_SEM_ENV; i++ ) - { - semanticEnv->fnSingletons[i].endInstrAddr = NULL; - semanticEnv->fnSingletons[i].hasBeenStarted = FALSE; - semanticEnv->fnSingletons[i].hasFinished = FALSE; - semanticEnv->fnSingletons[i].waitQ = makeVMSPrivQ(); - semanticEnv->transactionStrucs[i].waitingVPQ = makeVMSPrivQ(); - } - } - - -/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown - */ -void -VPThread__cleanup_after_shutdown() - { /*VPThdSemEnv *semEnv; - int32 coreIdx, idx, highestIdx; - VPThdMutex **mutexArray, *mutex; - VPThdCond **condArray, *cond; */ - - /* It's all allocated inside VMS's big chunk -- that's about to be freed, so - * nothing to do here - semEnv = _VMSMasterEnv->semanticEnv; - -//TODO: double check that all sem env locations freed - - for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ ) - { - free( semEnv->readyVPQs[coreIdx]->startOfData ); - free( semEnv->readyVPQs[coreIdx] ); - } - - free( semEnv->readyVPQs ); - - - //==== Free mutexes and mutex array ==== - mutexArray = semEnv->mutexDynArray->array; - highestIdx = semEnv->mutexDynArray->highestIdxInArray; - for( idx=0; idx < highestIdx; idx++ ) - { mutex = mutexArray[ idx ]; - if( mutex == NULL ) continue; - free( mutex ); - } - free( mutexArray ); - free( semEnv->mutexDynArray ); - //====================================== - - - //==== Free conds and cond array ==== - condArray = semEnv->condDynArray->array; - highestIdx = semEnv->condDynArray->highestIdxInArray; - for( idx=0; idx < highestIdx; idx++ ) - { cond = condArray[ idx ]; - if( cond == NULL ) continue; - free( cond ); - } - free( condArray ); - free( semEnv->condDynArray ); - //=================================== - - - free( _VMSMasterEnv->semanticEnv ); - */ - 
VMS__cleanup_at_end_of_shutdown(); - } - - -//=========================================================================== - -/* - */ -inline VirtProcr * -VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData, - VirtProcr *creatingPr ) - { VPThdSemReq reqData; - - //the semantic request data is on the stack and disappears when this - // call returns -- it's guaranteed to remain in the VP's stack for as - // long as the VP is suspended. - reqData.reqType = 0; //know the type because is a VMS create req - reqData.coreToScheduleOnto = -1; //means round-robin schedule - reqData.fnPtr = fnPtr; - reqData.initData = initData; - reqData.requestingPr = creatingPr; - - VMS__send_create_procr_req( &reqData, creatingPr ); - - return creatingPr->dataRetFromReq; - } - -inline VirtProcr * -VPThread__create_thread_with_affinity( VirtProcrFnPtr fnPtr, void *initData, - VirtProcr *creatingPr, int32 coreToScheduleOnto ) - { VPThdSemReq reqData; - - //the semantic request data is on the stack and disappears when this - // call returns -- it's guaranteed to remain in the VP's stack for as - // long as the VP is suspended. 
- reqData.reqType = 0; //know type because in a VMS create req - reqData.coreToScheduleOnto = coreToScheduleOnto; - reqData.fnPtr = fnPtr; - reqData.initData = initData; - reqData.requestingPr = creatingPr; - - VMS__send_create_procr_req( &reqData, creatingPr ); - } - -inline void -VPThread__dissipate_thread( VirtProcr *procrToDissipate ) - { - VMS__send_dissipate_req( procrToDissipate ); - } - - -//=========================================================================== - -void * -VPThread__malloc( size_t sizeToMalloc, VirtProcr *animPr ) - { VPThdSemReq reqData; - - reqData.reqType = malloc_req; - reqData.sizeToMalloc = sizeToMalloc; - reqData.requestingPr = animPr; - - VMS__send_sem_request( &reqData, animPr ); - - return animPr->dataRetFromReq; - } - - -/*Sends request to Master, which does the work of freeing - */ -void -VPThread__free( void *ptrToFree, VirtProcr *animPr ) - { VPThdSemReq reqData; - - reqData.reqType = free_req; - reqData.ptrToFree = ptrToFree; - reqData.requestingPr = animPr; - - VMS__send_sem_request( &reqData, animPr ); - } - - -//=========================================================================== - -inline void -VPThread__set_globals_to( void *globals ) - { - ((VPThdSemEnv *) - (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals; - } - -inline void * -VPThread__give_globals() - { - return((VPThdSemEnv *) (_VMSMasterEnv->semanticEnv))->applicationGlobals; - } - - -//=========================================================================== - -inline int32 -VPThread__make_mutex( VirtProcr *animPr ) - { VPThdSemReq reqData; - - reqData.reqType = make_mutex; - reqData.requestingPr = animPr; - - VMS__send_sem_request( &reqData, animPr ); - - return (int32)animPr->dataRetFromReq; //mutexid is 32bit wide - } - -inline void -VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr ) - { VPThdSemReq reqData; - - reqData.reqType = mutex_lock; - reqData.mutexIdx = mutexIdx; - reqData.requestingPr = acquiringPr; - - 
VMS__send_sem_request( &reqData, acquiringPr ); - } - -inline void -VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr ) - { VPThdSemReq reqData; - - reqData.reqType = mutex_unlock; - reqData.mutexIdx = mutexIdx; - reqData.requestingPr = releasingPr; - - VMS__send_sem_request( &reqData, releasingPr ); - } - - -//======================= -inline int32 -VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr) - { VPThdSemReq reqData; - - reqData.reqType = make_cond; - reqData.mutexIdx = ownedMutexIdx; - reqData.requestingPr = animPr; - - VMS__send_sem_request( &reqData, animPr ); - - return (int32)animPr->dataRetFromReq; //condIdx is 32 bit wide - } - -inline void -VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr) - { VPThdSemReq reqData; - - reqData.reqType = cond_wait; - reqData.condIdx = condIdx; - reqData.requestingPr = waitingPr; - - VMS__send_sem_request( &reqData, waitingPr ); - } - -inline void * -VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr ) - { VPThdSemReq reqData; - - reqData.reqType = cond_signal; - reqData.condIdx = condIdx; - reqData.requestingPr = signallingPr; - - VMS__send_sem_request( &reqData, signallingPr ); - } - - -//=========================================================================== -// -/*A function singleton is a function whose body executes exactly once, on a - * single core, no matter how many times the fuction is called and no - * matter how many cores or the timing of cores calling it. - * - *A data singleton is a ticket attached to data. That ticket can be used - * to get the data through the function exactly once, no matter how many - * times the data is given to the function, and no matter the timing of - * trying to get the data through from different cores. 
- */ - -/*asm function declarations*/ -void asm_save_ret_to_singleton(VPThdSingleton *singletonPtrAddr); -void asm_write_ret_from_singleton(VPThdSingleton *singletonPtrAddr); - -/*Fn singleton uses ID as index into array of singleton structs held in the - * semantic environment. - */ -void -VPThread__start_fn_singleton( int32 singletonID, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - // - reqData.reqType = singleton_fn_start; - reqData.singletonID = singletonID; - - VMS__send_sem_request( &reqData, animPr ); - if( animPr->dataRetFromReq ) //will be 0 or addr of label in end singleton - { - VPThdSemEnv *semEnv = VMS__give_sem_env_for( animPr ); - asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID])); - } - } - -/*Data singleton hands addr of loc holding a pointer to a singleton struct. - * The start_data_singleton makes the structure and puts its addr into the - * location. - */ -void -VPThread__start_data_singleton( VPThdSingleton **singletonAddr, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - if( *singletonAddr && (*singletonAddr)->hasFinished ) - goto JmpToEndSingleton; - - reqData.reqType = singleton_data_start; - reqData.singletonPtrAddr = singletonAddr; - - VMS__send_sem_request( &reqData, animPr ); - if( animPr->dataRetFromReq ) //either 0 or end singleton's return addr - { - JmpToEndSingleton: - asm_write_ret_from_singleton(*singletonAddr); - - } - //now, simply return - //will exit either from the start singleton call or the end-singleton call - } - -/*Uses ID as index into array of flags. If flag already set, resumes from - * end-label. Else, sets flag and resumes normally. - * - *Note, this call cannot be inlined because the instr addr at the label - * inside is shared by all invocations of a given singleton ID. 
- */ -void -VPThread__end_fn_singleton( int32 singletonID, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - //don't need this addr until after at least one singleton has reached - // this function - VPThdSemEnv *semEnv = VMS__give_sem_env_for( animPr ); - asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID])); - - reqData.reqType = singleton_fn_end; - reqData.singletonID = singletonID; - - VMS__send_sem_request( &reqData, animPr ); - } - -void -VPThread__end_data_singleton( VPThdSingleton **singletonPtrAddr, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - //don't need this addr until after singleton struct has reached - // this function for first time - //do assembly that saves the return addr of this fn call into the - // data singleton -- that data-singleton can only be given to exactly - // one instance in the code of this function. However, can use this - // function in different places for different data-singletons. - - asm_save_ret_to_singleton(*singletonPtrAddr); - - reqData.reqType = singleton_data_end; - reqData.singletonPtrAddr = singletonPtrAddr; - - VMS__send_sem_request( &reqData, animPr ); - } - - -/*This executes the function in the masterVP, so it executes in isolation - * from any other copies -- only one copy of the function can ever execute - * at a time. - * - *It suspends to the master, and the request handler takes the function - * pointer out of the request and calls it, then resumes the VP. - *Only very short functions should be called this way -- for longer-running - * isolation, use transaction-start and transaction-end, which run the code - * between as work-code. - */ -void -VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster, - void *data, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - // - reqData.reqType = atomic; - reqData.fnToExecInMaster = ptrToFnToExecInMaster; - reqData.dataForFn = data; - - VMS__send_sem_request( &reqData, animPr ); - } - - -/*This suspends to the master. 
 - *First, it looks at the VP's data, to see the highest transactionID that VP - * already has entered. If the current ID is not larger, it throws an - * exception stating a bug in the code. Otherwise it puts the current ID - * there, and adds the ID to a linked list of IDs entered -- the list is - * used to check that exits are properly ordered. - *Next it uses transactionID as index into an array of transaction - * structures. - *If the "VP_currently_executing" field is non-null, then put the requesting VP - * into the queue in the struct. (At some point a holder will request - * end-transaction, which will take this VP from the queue and resume it.) - *If NULL, then write the requesting VP into the field and resume. - */ -void -VPThread__start_transaction( int32 transactionID, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - // - reqData.reqType = trans_start; - reqData.transID = transactionID; - - VMS__send_sem_request( &reqData, animPr ); - } - -/*This suspends to the master, then uses transactionID as index into an - * array of transaction structures. - *It looks at VP_currently_executing to be sure it's the same as the requesting VP. - * If different, throws an exception, stating there's a bug in the code. - *Next it looks at the queue in the structure. - *If it's empty, it sets the VP_currently_executing field to NULL and resumes. - *If something is in it, takes it, sets VP_currently_executing to that VP, then - * resumes both. 
- */ -void -VPThread__end_transaction( int32 transactionID, VirtProcr *animPr ) - { - VPThdSemReq reqData; - - // - reqData.reqType = trans_end; - reqData.transID = transactionID; - - VMS__send_sem_request( &reqData, animPr ); - } -//=========================================================================== diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Vthread.h Thu Mar 01 13:20:51 2012 -0800 @@ -0,0 +1,259 @@ +/* + * Copyright 2009 OpenSourceStewardshipFoundation.org + * Licensed under GNU General Public License version 2 + * + * Author: seanhalle@yahoo.com + * + */ + +#ifndef _VPThread_H +#define _VPThread_H + +#include "VMS_impl/VMS.h" +#include "C_Libraries/Queue_impl/PrivateQueue.h" +#include "C_Libraries/DynArray/DynArray.h" + + +/*This header defines everything specific to the VPThread semantic plug-in + */ + + +//=========================================================================== + //turn on the counter measurements of language overhead -- comment to turn off +#define MEAS__TURN_ON_LANG_MEAS + +#define INIT_NUM_MUTEX 10000 +#define INIT_NUM_COND 10000 + +#define NUM_STRUCS_IN_SEM_ENV 1000 +//=========================================================================== + +//=========================================================================== +typedef struct _VPThreadSemReq VPThdSemReq; +typedef void (*PtrToAtomicFn ) ( void * ); //executed atomically in master +//=========================================================================== + + +/*WARNING: assembly hard-codes position of endInstrAddr as first field + */ +typedef struct + { + void *endInstrAddr; + int32 hasBeenStarted; + int32 hasFinished; + PrivQueueStruc *waitQ; + } +VPThdSingleton; + +/*Semantic layer-specific data sent inside a request from lib called in app + * to request handler called in MasterLoop + */ +enum VPThreadReqType + { + make_mutex = 1, + mutex_lock, + mutex_unlock, + make_cond, + cond_wait, + cond_signal, + 
make_procr, + malloc_req, + free_req, + singleton_fn_start, + singleton_fn_end, + singleton_data_start, + singleton_data_end, + atomic, + trans_start, + trans_end + }; + +struct _VPThreadSemReq + { enum VPThreadReqType reqType; + SlaveVP *requestingVP; + int32 mutexIdx; + int32 condIdx; + + void *initData; + TopLevelFnPtr fnPtr; + int32 coreToScheduleOnto; + + size_t sizeToMalloc; + void *ptrToFree; + + int32 singletonID; + VPThdSingleton **singletonPtrAddr; + + PtrToAtomicFn fnToExecInMaster; + void *dataForFn; + + int32 transID; + } +/* VPThreadSemReq */; + + +typedef struct + { + SlaveVP *VPCurrentlyExecuting; + PrivQueueStruc *waitingVPQ; + } +VPThdTrans; + + +typedef struct + { + int32 mutexIdx; + SlaveVP *holderOfLock; + PrivQueueStruc *waitingQueue; + } +VPThdMutex; + + +typedef struct + { + int32 condIdx; + PrivQueueStruc *waitingQueue; + VPThdMutex *partnerMutex; + } +VPThdCond; + +typedef struct _TransListElem TransListElem; +struct _TransListElem + { + int32 transID; + TransListElem *nextTrans; + }; +//TransListElem + +typedef struct + { + int32 highestTransEntered; + TransListElem *lastTransEntered; + } +VPThdSemData; + + +typedef struct + { + //Standard stuff will be in most every semantic env + PrivQueueStruc **readyVPQs; + int32 numVirtVP; + int32 nextCoreToGetNewVP; + int32 primitiveStartTime; + + //Specific to this semantic layer + VPThdMutex **mutexDynArray; + PrivDynArrayInfo *mutexDynArrayInfo; + + VPThdCond **condDynArray; + PrivDynArrayInfo *condDynArrayInfo; + + void *applicationGlobals; + + //fix limit on num with dynArray + VPThdSingleton fnSingletons[NUM_STRUCS_IN_SEM_ENV]; + + VPThdTrans transactionStrucs[NUM_STRUCS_IN_SEM_ENV]; + } +VPThdSemEnv; + + +//=========================================================================== + +inline void +VPThread__create_seed_procr_and_do_work( TopLevelFnPtr fn, void *initData ); + +//======================= + +inline SlaveVP * +VPThread__create_thread( TopLevelFnPtr fnPtr, void *initData, + SlaveVP 
*creatingVP ); + +inline SlaveVP * +VPThread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData, + SlaveVP *creatingVP, int32 coreToScheduleOnto ); + +inline void +VPThread__dissipate_thread( SlaveVP *procrToDissipate ); + +//======================= +inline void +VPThread__set_globals_to( void *globals ); + +inline void * +VPThread__give_globals(); + +//======================= +inline int32 +VPThread__make_mutex( SlaveVP *animVP ); + +inline void +VPThread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringVP ); + +inline void +VPThread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingVP ); + + +//======================= +inline int32 +VPThread__make_cond( int32 ownedMutexIdx, SlaveVP *animPr); + +inline void +VPThread__cond_wait( int32 condIdx, SlaveVP *waitingPr); + +inline void * +VPThread__cond_signal( int32 condIdx, SlaveVP *signallingVP ); + + +//======================= +void +VPThread__start_fn_singleton( int32 singletonID, SlaveVP *animVP ); + +void +VPThread__end_fn_singleton( int32 singletonID, SlaveVP *animVP ); + +void +VPThread__start_data_singleton( VPThdSingleton **singeltonAddr, SlaveVP *animVP ); + +void +VPThread__end_data_singleton( VPThdSingleton **singletonAddr, SlaveVP *animVP ); + +void +VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster, + void *data, SlaveVP *animVP ); + +void +VPThread__start_transaction( int32 transactionID, SlaveVP *animVP ); + +void +VPThread__end_transaction( int32 transactionID, SlaveVP *animVP ); + + + +//========================= Internal use only ============================= +inline void +VPThread__Request_Handler( SlaveVP *requestingVP, void *_semEnv ); + +inline SlaveVP * +VPThread__schedule_virt_procr( void *_semEnv, int coreNum ); + +//======================= +inline void +VPThread__free_semantic_request( VPThdSemReq *semReq ); + +//======================= + +void * +VPThread__malloc( size_t sizeToMalloc, SlaveVP *animVP ); + +void +VPThread__init(); + +void 
+VPThread__cleanup_after_shutdown();
+
+void inline
+resume_procr( SlaveVP *procr, VPThdSemEnv *semEnv );
+
+#endif /* _VPThread_H */
+
diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread.s
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/Vthread.s Thu Mar 01 13:20:51 2012 -0800
@@ -0,0 +1,21 @@
+
+//Assembly code takes the return addr off the stack and saves
+// it into the singleton. The first field in the singleton is the
+// "endInstrAddr" field, and the return addr is at 0x8(%rbp)
+.globl asm_save_ret_to_singleton
+asm_save_ret_to_singleton:
+ movq 0x8(%rbp), %rax #get ret address, rbp is the same as in the calling function
+ movq %rax, (%rdi) #write ret addr to endInstrAddr field
+ ret
+
+
+//Assembly code changes the return addr on the stack to the one
+// saved into the singleton by the end-singleton-fn
+//The stack's return addr is at 0x8(%rbp)
+.globl asm_write_ret_from_singleton
+asm_write_ret_from_singleton:
+ movq (%rdi), %rax #get endInstrAddr field
+ movq %rax, 0x8(%rbp) #write return addr to the stack of the caller
+ ret
+
+
diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_Meas.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/Vthread_Meas.h Thu Mar 01 13:20:51 2012 -0800
@@ -0,0 +1,109 @@
+/*
+ * File: Vthread_Meas.h
+ * Author: msach
+ *
+ * Created on June 10, 2011, 12:20 PM
+ */
+
+#ifndef VTHREAD_MEAS_H
+#define VTHREAD_MEAS_H
+
+#ifdef MEAS__TURN_ON_LANG_MEAS
+
+ #ifdef MEAS__Make_Meas_Hists_for_Language
+ #undef MEAS__Make_Meas_Hists_for_Language
+ #endif
+
+//=================== Language-specific Measurement Stuff ===================
+//
+//
+ #define createHistIdx 1 //note: starts at 1
+ #define mutexLockHistIdx 2
+ #define mutexUnlockHistIdx 3
+ #define condWaitHistIdx 4
+ #define condSignalHistIdx 5
+
+ #define MEAS__Make_Meas_Hists_for_Language() \
+ _VMSMasterEnv->measHistsInfo = \
+ makePrivDynArrayOfSize( (void***)&(_VMSMasterEnv->measHists), 200); \
+ makeAMeasHist( createHistIdx, "create", 250, 0, 100 ) \
+ makeAMeasHist(
mutexLockHistIdx, "mutex_lock", 50, 0, 100 ) \ + makeAMeasHist( mutexUnlockHistIdx, "mutex_unlock", 50, 0, 100 ) \ + makeAMeasHist( condWaitHistIdx, "cond_wait", 50, 0, 100 ) \ + makeAMeasHist( condSignalHistIdx, "cond_signal", 50, 0, 100 ) + + + #define Meas_startCreate \ + int32 startStamp, endStamp; \ + saveLowTimeStampCountInto( startStamp ); + + #define Meas_endCreate \ + saveLowTimeStampCountInto( endStamp ); \ + addIntervalToHist( startStamp, endStamp, \ + _VMSMasterEnv->measHists[ createHistIdx ] ); + + #define Meas_startMutexLock \ + int32 startStamp, endStamp; \ + saveLowTimeStampCountInto( startStamp ); + + #define Meas_endMutexLock \ + saveLowTimeStampCountInto( endStamp ); \ + addIntervalToHist( startStamp, endStamp, \ + _VMSMasterEnv->measHists[ mutexLockHistIdx ] ); + + #define Meas_startMutexUnlock \ + int32 startStamp, endStamp; \ + saveLowTimeStampCountInto( startStamp ); + + #define Meas_endMutexUnlock \ + saveLowTimeStampCountInto( endStamp ); \ + addIntervalToHist( startStamp, endStamp, \ + _VMSMasterEnv->measHists[ mutexUnlockHistIdx ] ); + + #define Meas_startCondWait \ + int32 startStamp, endStamp; \ + saveLowTimeStampCountInto( startStamp ); + + #define Meas_endCondWait \ + saveLowTimeStampCountInto( endStamp ); \ + addIntervalToHist( startStamp, endStamp, \ + _VMSMasterEnv->measHists[ condWaitHistIdx ] ); + + #define Meas_startCondSignal \ + int32 startStamp, endStamp; \ + saveLowTimeStampCountInto( startStamp ); + + #define Meas_endCondSignal \ + saveLowTimeStampCountInto( endStamp ); \ + addIntervalToHist( startStamp, endStamp, \ + _VMSMasterEnv->measHists[ condSignalHistIdx ] ); + +#else //===================== turned off ========================== + + #define MEAS__Make_Meas_Hists_for_Language() + + #define Meas_startCreate + + #define Meas_endCreate + + #define Meas_startMutexLock + + #define Meas_endMutexLock + + #define Meas_startMutexUnlock + + #define Meas_endMutexUnlock + + #define Meas_startCondWait + + #define Meas_endCondWait 
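All the `Meas_startX` / `Meas_endX` macro pairs above follow one pattern: grab a timestamp before the primitive, grab another after, and add the interval to the matching histogram. A minimal toy model of that pattern (a hypothetical `read_stamp` counter stands in for the TSC read in `saveLowTimeStampCountInto`; `timed_interval` is not part of the library):

```c
/* Toy model of the Meas_startX / Meas_endX pattern (NOT library code):
 * read_stamp stands in for saveLowTimeStampCountInto / the TSC. */
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

static uint64_t fake_tsc = 0;               /* hypothetical stand-in for rdtsc */
uint64_t read_stamp(void) { return fake_tsc += 7; }

/* mirrors: Meas_startX; do the work; Meas_endX -> interval into a histogram */
uint64_t timed_interval( void (*work)(void) )
  { uint64_t startStamp, endStamp;
    startStamp = read_stamp();              /* Meas_startX */
    if( work != NULL ) work();              /* the primitive being measured */
    endStamp = read_stamp();                /* Meas_endX */
    return endStamp - startStamp;           /* addIntervalToHist(...) in real code */
  }
```

The turned-off branch below compiles every macro to nothing, so measurement can be switched off with zero cost.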
+
+ #define Meas_startCondSignal
+
+ #define Meas_endCondSignal
+
+#endif
+
+
+#endif /* VTHREAD_MEAS_H */
+
diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_PluginFns.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/Vthread_PluginFns.c Thu Mar 01 13:20:51 2012 -0800
@@ -0,0 +1,192 @@
+/*
+ * Copyright 2010 OpenSourceCodeStewardshipFoundation
+ *
+ * Licensed under BSD
+ */
+
+#include
+#include
+#include
+
+#include "VMS/Queue_impl/PrivateQueue.h"
+#include "VPThread.h"
+#include "VPThread_Request_Handlers.h"
+#include "VPThread_helper.h"
+
+//=========================== Local Fn Prototypes ===========================
+
+void inline
+handleSemReq( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv );
+
+inline void
+handleDissipate( SlaveVP *requestingVP, VPThdSemEnv *semEnv );
+
+inline void
+handleCreate( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv );
+
+
+//============================== Scheduler ==================================
+//
+/*For VPThread, scheduling a slave simply takes the next ready VP off the
+ * ready-VP queue of the slave's core and assigns it to the slaveToSched.
+ *If that ready queue is empty, then there is nothing to schedule to the
+ * slave -- return NULL to let the Master loop know scheduling that
+ * slave failed.
+ */
+char __Scheduler[] = "FIFO Scheduler"; //Global variable for name in saved histogram
+SlaveVP *
+VPThread__schedule_virt_procr( void *_semEnv, int coreNum )
+ { SlaveVP *schedVP;
+ VPThdSemEnv *semEnv;
+
+ semEnv = (VPThdSemEnv *)_semEnv;
+
+ schedVP = readPrivQ( semEnv->readyVPQs[coreNum] );
+ //Note, using a non-blocking queue -- it returns NULL if queue empty
+
+ return( schedVP );
+ }
+
+
+
+//=========================== Request Handler =============================
+//
+/*Takes requests off the requesting slave's request queue and dispatches
+ * each one: semantic requests (the mutex, cond, malloc/free, singleton,
+ * atomic, and transaction operations) go to handleSemReq; createReq makes
+ * a new virtual processor; dissipate frees one that has finished; and
+ * VMSSemantic requests are passed through to VMS itself, along with the
+ * resume function so VMS can re-animate any affected VPs.
+ */
+void
+VPThread__Request_Handler( SlaveVP *requestingVP, void *_semEnv )
+ { VPThdSemEnv *semEnv;
+ VMSReqst *req;
+
+ semEnv = (VPThdSemEnv *)_semEnv;
+
+ req = VMS__take_next_request_out_of( requestingVP );
+
+ while( req != NULL )
+ {
+ switch( req->reqType )
+ { case semantic: handleSemReq( req, requestingVP, semEnv);
+ break;
+ case createReq: handleCreate( req, requestingVP, semEnv);
+ break;
+ case dissipate: handleDissipate( requestingVP, semEnv);
+ break;
+ case VMSSemantic: VMS__handle_VMSSemReq(req, requestingVP, semEnv,
+ (ResumeVPFnPtr)&resume_procr);
+ break;
+ default:
+ break;
+ }
+
+ req = VMS__take_next_request_out_of( requestingVP );
+ } //while( req != NULL )
+ }
+
+
+void inline
+handleSemReq( VMSReqst *req, SlaveVP *reqVP, VPThdSemEnv *semEnv )
+ { VPThdSemReq *semReq;
+
+ semReq = VMS__take_sem_reqst_from(req);
+ if( semReq == NULL ) return;
+ switch( semReq->reqType )
+ {
+ case make_mutex: handleMakeMutex( semReq, semEnv);
+ break;
+ case mutex_lock: handleMutexLock( semReq, semEnv);
+ break;
+ case mutex_unlock: handleMutexUnlock(semReq, semEnv);
+ break;
+ case make_cond: handleMakeCond( semReq, semEnv);
+ break;
+ case cond_wait: handleCondWait( semReq, semEnv);
+ break;
+ case cond_signal: handleCondSignal( semReq, semEnv);
+ break;
+ case malloc_req: handleMalloc( semReq, reqVP, semEnv);
+ break;
+ case free_req: handleFree( semReq, reqVP, semEnv);
+ break;
+ case singleton_fn_start: handleStartFnSingleton(semReq, reqVP, semEnv);
+ break;
+ case singleton_fn_end: handleEndFnSingleton( semReq, reqVP, semEnv);
+ break;
+ case
singleton_data_start:handleStartDataSingleton(semReq,reqVP,semEnv); + break; + case singleton_data_end: handleEndDataSingleton(semReq, reqVP, semEnv); + break; + case atomic: handleAtomic( semReq, reqVP, semEnv); + break; + case trans_start: handleTransStart( semReq, reqVP, semEnv); + break; + case trans_end: handleTransEnd( semReq, reqVP, semEnv); + break; + } + } + +//=========================== VMS Request Handlers =========================== +// +inline void +handleDissipate( SlaveVP *requestingVP, VPThdSemEnv *semEnv ) + { + //free any semantic data allocated to the virt procr + VMS__free( requestingVP->semanticData ); + + //Now, call VMS to free_all AppVP state -- stack and so on + VMS__dissipate_procr( requestingVP ); + + semEnv->numVP -= 1; + if( semEnv->numVP == 0 ) + { //no more work, so shutdown + VMS__shutdown(); + } + } + +inline void +handleCreate( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv ) + { VPThdSemReq *semReq; + SlaveVP *newVP; + + //========================= MEASUREMENT STUFF ====================== + Meas_startCreate + //================================================================== + + semReq = VMS__take_sem_reqst_from( req ); + + newVP = VPThread__create_procr_helper( semReq->fnPtr, semReq->initData, + semEnv, semReq->coreToScheduleOnto); + + //For VPThread, caller needs ptr to created processor returned to it + requestingVP->dataRetFromReq = newVP; + + resume_procr( newVP, semEnv ); + resume_procr( requestingVP, semEnv ); + + //========================= MEASUREMENT STUFF ====================== + Meas_endCreate + #ifdef MEAS__TIME_PLUGIN + #ifdef MEAS__SUB_CREATE + subIntervalFromHist( startStamp, endStamp, + _VMSMasterEnv->reqHdlrHighTimeHist ); + #endif + #endif + //================================================================== + } + + +//=========================== Helper ============================== +void inline +resume_procr( SlaveVP *procr, VPThdSemEnv *semEnv ) + { + writePrivQ( procr, semEnv->readyVPQs[ 
procr->coreAnimatedBy] ); + } + +//=========================================================================== \ No newline at end of file diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_Request_Handlers.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Vthread_Request_Handlers.c Thu Mar 01 13:20:51 2012 -0800 @@ -0,0 +1,444 @@ +/* + * Copyright 2010 OpenSourceCodeStewardshipFoundation + * + * Licensed under BSD + */ + +#include +#include +#include + +#include "VMS_Implementations/VMS_impl/VMS.h" +#include "C_Libraries/Queue_impl/PrivateQueue.h" +#include "C_Libraries/Hash_impl/PrivateHash.h" +#include "Vthread.h" + + + +//=============================== Mutexes ================================= +/*The semantic request has a mutexIdx value, which acts as index into array. + */ +inline void +handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv) + { VPThdMutex *newMutex; + SlaveVP *requestingVP; + + requestingVP = semReq->requestingVP; + newMutex = VMS__malloc( sizeof(VPThdMutex) ); + newMutex->waitingQueue = makeVMSPrivQ( requestingVP ); + newMutex->holderOfLock = NULL; + + //The mutex struc contains an int that identifies it -- use that as + // its index within the array of mutexes. Add the new mutex to array. 
+ newMutex->mutexIdx = addToDynArray( newMutex, semEnv->mutexDynArrayInfo ); + + //Now communicate the mutex's identifying int back to requesting procr + semReq->requestingVP->dataRetFromReq = (void*)newMutex->mutexIdx; //mutexIdx is 32 bit + + //re-animate the requester + resume_procr( requestingVP, semEnv ); + } + + +inline void +handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv) + { VPThdMutex *mutex; + //=================== Deterministic Replay ====================== + #ifdef RECORD_DETERMINISTIC_REPLAY + + #endif + //================================================================= + Meas_startMutexLock + //lookup mutex struc, using mutexIdx as index + mutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; + + //see if mutex is free or not + if( mutex->holderOfLock == NULL ) //none holding, give lock to requester + { + mutex->holderOfLock = semReq->requestingVP; + + //re-animate requester, now that it has the lock + resume_procr( semReq->requestingVP, semEnv ); + } + else //queue up requester to wait for release of lock + { + writePrivQ( semReq->requestingVP, mutex->waitingQueue ); + } + Meas_endMutexLock + } + +/* + */ +inline void +handleMutexUnlock( VPThdSemReq *semReq, VPThdSemEnv *semEnv) + { VPThdMutex *mutex; + + Meas_startMutexUnlock + //lookup mutex struc, using mutexIdx as index + mutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; + + //set new holder of mutex-lock to be next in queue (NULL if empty) + mutex->holderOfLock = readPrivQ( mutex->waitingQueue ); + + //if have new non-NULL holder, re-animate it + if( mutex->holderOfLock != NULL ) + { + resume_procr( mutex->holderOfLock, semEnv ); + } + + //re-animate the releaser of the lock + resume_procr( semReq->requestingVP, semEnv ); + Meas_endMutexUnlock + } + +//=========================== Condition Vars ============================== +/*The semantic request has the cond-var value and mutex value, which are the + * indexes into the array. 
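The two mutex handlers above run purely sequentially inside the request handler, which is what makes the hand-off logic so simple. A self-contained toy model of that same free/queue/hand-off sequence (illustrative names only — ints stand in for `SlaveVP*`, a plain array for `PrivQueueStruc`; this is not the library API):

```c
/* Toy sequential model of handleMutexLock / handleMutexUnlock (NOT the
 * library API): ints stand in for SlaveVP*, an array for PrivQueueStruc. */
#include <assert.h>

#define NOBODY (-1)
typedef struct { int holder; int waiters[8]; int head, tail; } ToyMutex;

void toy_lock( ToyMutex *m, int vp )
  { if( m->holder == NOBODY )
      m->holder = vp;                /* free: requester takes lock, is resumed */
    else
      m->waiters[ m->tail++ ] = vp;  /* held: requester queued, stays suspended */
  }

void toy_unlock( ToyMutex *m )
  { if( m->head < m->tail )
      m->holder = m->waiters[ m->head++ ];  /* hand lock to next waiter */
    else
      m->holder = NOBODY;                   /* queue empty: lock becomes free */
  }
```

Because the master serializes all requests, no atomic instructions are needed anywhere in this logic.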
Not worrying about having too many mutexes or + * cond vars created, so using array instead of hash table, for speed. + */ + + +/*Make cond has to be called with the mutex that the cond is paired to + * Don't have to implement this way, but was confusing learning cond vars + * until deduced that each cond var owns a mutex that is used only for + * interacting with that cond var. So, make this pairing explicit. + */ +inline void +handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv) + { VPThdCond *newCond; + SlaveVP *requestingVP; + + requestingVP = semReq->requestingVP; + newCond = VMS__malloc( sizeof(VPThdCond) ); + newCond->partnerMutex = semEnv->mutexDynArray[ semReq->mutexIdx ]; + + newCond->waitingQueue = makeVMSPrivQ(); + + //The cond struc contains an int that identifies it -- use that as + // its index within the array of conds. Add the new cond to array. + newCond->condIdx = addToDynArray( newCond, semEnv->condDynArrayInfo ); + + //Now communicate the cond's identifying int back to requesting procr + semReq->requestingVP->dataRetFromReq = (void*)newCond->condIdx; //condIdx is 32 bit + + //re-animate the requester + resume_procr( requestingVP, semEnv ); + } + + +/*Mutex has already been paired to the cond var, so don't need to send the + * mutex, just the cond var. 
Don't have to do this, but want to bitch-slap + * the designers of Posix standard ; ) + */ +inline void +handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv) + { VPThdCond *cond; + VPThdMutex *mutex; + + Meas_startCondWait + //get cond struc out of array of them that's in the sem env + cond = semEnv->condDynArray[ semReq->condIdx ]; + + //add requester to queue of wait-ers + writePrivQ( semReq->requestingVP, cond->waitingQueue ); + + //unlock mutex -- can't reuse above handler 'cause not queuing releaser + mutex = cond->partnerMutex; + mutex->holderOfLock = readPrivQ( mutex->waitingQueue ); + + if( mutex->holderOfLock != NULL ) + { + resume_procr( mutex->holderOfLock, semEnv ); + } + Meas_endCondWait + } + + +/*Note that have to implement this such that guarantee the waiter is the one + * that gets the lock + */ +inline void +handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv) + { VPThdCond *cond; + VPThdMutex *mutex; + SlaveVP *waitingVP; + + Meas_startCondSignal; + //get cond struc out of array of them that's in the sem env + cond = semEnv->condDynArray[ semReq->condIdx ]; + + //take next waiting procr out of queue + waitingVP = readPrivQ( cond->waitingQueue ); + + //transfer waiting procr to wait queue of mutex + // mutex is guaranteed to be held by signalling procr, so no check + mutex = cond->partnerMutex; + pushPrivQ( waitingVP, mutex->waitingQueue ); //is first out when read + + //re-animate the signalling procr + resume_procr( semReq->requestingVP, semEnv ); + Meas_endCondSignal; + } + + + +//============================================================================ +// +/* + */ +void inline +handleMalloc(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv) + { void *ptr; + + //========================= MEASUREMENT STUFF ====================== + #ifdef MEAS__TIME_PLUGIN + int32 startStamp, endStamp; + saveLowTimeStampCountInto( startStamp ); + #endif + //================================================================== + ptr = 
VMS__malloc( semReq->sizeToMalloc ); + requestingVP->dataRetFromReq = ptr; + resume_procr( requestingVP, semEnv ); + //========================= MEASUREMENT STUFF ====================== + #ifdef MEAS__TIME_PLUGIN + saveLowTimeStampCountInto( endStamp ); + subIntervalFromHist( startStamp, endStamp, + _VMSMasterEnv->reqHdlrHighTimeHist ); + #endif + //================================================================== + } + +/* + */ +void inline +handleFree( VPThdSemReq *semReq, SlaveVP *requestingVP, VPThdSemEnv *semEnv) + { + //========================= MEASUREMENT STUFF ====================== + #ifdef MEAS__TIME_PLUGIN + int32 startStamp, endStamp; + saveLowTimeStampCountInto( startStamp ); + #endif + //================================================================== + VMS__free( semReq->ptrToFree ); + resume_procr( requestingVP, semEnv ); + //========================= MEASUREMENT STUFF ====================== + #ifdef MEAS__TIME_PLUGIN + saveLowTimeStampCountInto( endStamp ); + subIntervalFromHist( startStamp, endStamp, + _VMSMasterEnv->reqHdlrHighTimeHist ); + #endif + //================================================================== + } + + +//=========================================================================== +// +/*Uses ID as index into array of flags. If flag already set, resumes from + * end-label. Else, sets flag and resumes normally. 
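The `handleCondWait` / `handleCondSignal` pair above parks the waiter on the cond's queue and releases the partner mutex; signal then moves one waiter to the *front* of the mutex queue (via `pushPrivQ`), which is what guarantees the waiter acquires the lock next. A toy sequential model of that hand-off (hypothetical names, ints for VPs — not the library API):

```c
/* Toy model of handleCondWait / handleCondSignal (illustrative names only):
 * wait parks the VP and unlocks the partner mutex; signal moves one waiter
 * to the FRONT of the mutex queue, so the waiter gets the lock next. */
#include <assert.h>

#define TOY_NOBODY (-1)
typedef struct { int holder; int q[8]; int n; } ToyMtx;   /* q[0] is next */
typedef struct { int q[8]; int n; ToyMtx *partner; } ToyCnd;

int toy_pop_front( int *q, int *n )
  { int i, v;
    if( *n == 0 ) return TOY_NOBODY;
    v = q[0];
    for( i = 1; i < *n; i++ ) q[i-1] = q[i];
    (*n)--;
    return v;
  }

void toy_cond_wait( ToyCnd *c, int vp )
  { c->q[ c->n++ ] = vp;                                  /* park the waiter */
    c->partner->holder = toy_pop_front( c->partner->q, &c->partner->n );
  }                                                       /* ^ release mutex */

void toy_cond_signal( ToyCnd *c )
  { int i, waiter;
    ToyMtx *m = c->partner;
    waiter = toy_pop_front( c->q, &c->n );
    if( waiter == TOY_NOBODY ) return;
    for( i = m->n; i > 0; i-- ) m->q[i] = m->q[i-1];      /* push to front, */
    m->q[0] = waiter;                                     /*  like pushPrivQ */
    m->n++;
  }
```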
+ */ +void inline +handleStartSingleton_helper( VPThdSingleton *singleton, SlaveVP *reqstingVP, + VPThdSemEnv *semEnv ) + { + if( singleton->hasFinished ) + { //the code that sets the flag to true first sets the end instr addr + reqstingVP->dataRetFromReq = singleton->endInstrAddr; + resume_procr( reqstingVP, semEnv ); + return; + } + else if( singleton->hasBeenStarted ) + { //singleton is in-progress in a diff slave, so wait for it to finish + writePrivQ(reqstingVP, singleton->waitQ ); + return; + } + else + { //hasn't been started, so this is the first attempt at the singleton + singleton->hasBeenStarted = TRUE; + reqstingVP->dataRetFromReq = 0x0; + resume_procr( reqstingVP, semEnv ); + return; + } + } +void inline +handleStartFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv ) + { VPThdSingleton *singleton; + + singleton = &(semEnv->fnSingletons[ semReq->singletonID ]); + handleStartSingleton_helper( singleton, requestingVP, semEnv ); + } +void inline +handleStartDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv ) + { VPThdSingleton *singleton; + + if( *(semReq->singletonPtrAddr) == NULL ) + { singleton = VMS__malloc( sizeof(VPThdSingleton) ); + singleton->waitQ = makeVMSPrivQ(); + singleton->endInstrAddr = 0x0; + singleton->hasBeenStarted = FALSE; + singleton->hasFinished = FALSE; + *(semReq->singletonPtrAddr) = singleton; + } + else + singleton = *(semReq->singletonPtrAddr); + handleStartSingleton_helper( singleton, requestingVP, semEnv ); + } + + +void inline +handleEndSingleton_helper( VPThdSingleton *singleton, SlaveVP *requestingVP, + VPThdSemEnv *semEnv ) + { PrivQueueStruc *waitQ; + int32 numWaiting, i; + SlaveVP *resumingVP; + + if( singleton->hasFinished ) + { //by definition, only one slave should ever be able to run end singleton + // so if this is true, is an error + //VMS__throw_exception( "singleton code ran twice", requestingVP, NULL); + } + + singleton->hasFinished = TRUE; + waitQ = 
singleton->waitQ;
+ numWaiting = numInPrivQ( waitQ );
+ for( i = 0; i < numWaiting; i++ )
+ { //they will resume inside start singleton, then jmp to end singleton
+ resumingVP = readPrivQ( waitQ );
+ resumingVP->dataRetFromReq = singleton->endInstrAddr;
+ resume_procr( resumingVP, semEnv );
+ }
+
+ resume_procr( requestingVP, semEnv );
+
+ }
+void inline
+handleEndFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
+ VPThdSemEnv *semEnv )
+ {
+ VPThdSingleton *singleton;
+
+ singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
+ handleEndSingleton_helper( singleton, requestingVP, semEnv );
+ }
+void inline
+handleEndDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
+ VPThdSemEnv *semEnv )
+ {
+ VPThdSingleton *singleton;
+
+ singleton = *(semReq->singletonPtrAddr);
+ handleEndSingleton_helper( singleton, requestingVP, semEnv );
+ }
+
+
+/*This executes the function in the masterVP: take the function
+ * pointer out of the request, call it, then resume the VP.
+ */
+void inline
+handleAtomic(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv)
+ {
+ semReq->fnToExecInMaster( semReq->dataForFn );
+ resume_procr( requestingVP, semEnv );
+ }
+
+/*First, it looks at the VP's semantic data, to see the highest transactionID
+ * that VP has already entered. If the current ID is smaller than that, it
+ * throws an exception stating a bug in the code.
+ *Otherwise it puts the current ID there, and adds the ID to a linked list
+ * of IDs entered -- the list is used to check that exits are properly
+ * ordered.
+ *Next, it uses transactionID as index into an array of transaction
+ * structures.
+ *If the "VP_currently_executing" field is non-null, then put requesting VP
+ * into queue in the struct. (At some point a holder will request
+ * end-transaction, which will take this VP from the queue and resume it.)
+ *If NULL, then write the requesting VP into the field and resume.
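The singleton handlers above form a three-state machine: first caller runs the body, concurrent callers park on the wait queue, and anyone arriving after the end jumps straight past the body. Reduced to counters (illustrative, not the library structs):

```c
/* State machine of handleStartSingleton_helper / handleEndSingleton_helper,
 * reduced to counters (illustrative, NOT the library structs). */
#include <assert.h>

typedef struct { int hasBeenStarted, hasFinished, numWaiting; } ToySingleton;

/* returns 1 when the caller should execute the singleton body now */
int toy_start_singleton( ToySingleton *s )
  { if( s->hasFinished ) return 0;      /* done already: jump past the body */
    if( s->hasBeenStarted )
      { s->numWaiting++; return 0; }    /* in progress elsewhere: park */
    s->hasBeenStarted = 1;
    return 1;                           /* first attempt: run the body */
  }

/* returns how many parked VPs get resumed (each then jumps past the body) */
int toy_end_singleton( ToySingleton *s )
  { int released = s->numWaiting;
    s->hasFinished = 1;
    s->numWaiting = 0;
    return released;
  }
```

In the real code the "jump past the body" is done with the saved `endInstrAddr` and the assembly helpers in Vthread.s.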
+ */
+void inline
+handleTransStart( VPThdSemReq *semReq, SlaveVP *requestingVP,
+ VPThdSemEnv *semEnv )
+ { VPThdSemData *semData;
+ TransListElem *nextTransElem;
+
+ //check ordering of entering transactions is correct
+ semData = requestingVP->semanticData;
+ if( semData->highestTransEntered > semReq->transID )
+ { //throw VMS exception, which shuts down VMS.
+ VMS__throw_exception( "transID smaller than prev", requestingVP, NULL);
+ }
+ //add this trans ID to the list of transactions entered -- check when
+ // end a transaction
+ semData->highestTransEntered = semReq->transID;
+ nextTransElem = VMS_PI__malloc( sizeof(TransListElem) );
+ nextTransElem->transID = semReq->transID;
+ nextTransElem->nextTrans = semData->lastTransEntered;
+ semData->lastTransEntered = nextTransElem;
+
+ //get the structure for this transaction ID
+ VPThdTrans *transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
+
+ if( transStruc->VPCurrentlyExecuting == NULL )
+ {
+ transStruc->VPCurrentlyExecuting = requestingVP;
+ resume_procr( requestingVP, semEnv );
+ }
+ else
+ { //note, might make future things cleaner if save request with VP and
+ // add this trans ID to the linked list when gets out of queue.
+ // but don't need for now, and lazy..
+ writePrivQ( requestingVP, transStruc->waitingVPQ );
+ }
+ }
+
+
+/*Use the trans ID to get the transaction structure from the array.
+ *Look at VP_currently_executing to be sure it's same as requesting VP.
+ * If different, throw an exception, stating there's a bug in the code.
+ *Next, take the first element off the list of entered transactions.
+ * Check to be sure the ending transaction is the same ID as the next on
+ * the list. If not, incorrectly nested so throw an exception.
+ *
+ *Next, get from the queue in the structure.
+ *If it's empty, set VP_currently_executing field to NULL and resume
+ * requesting VP.
+ *If get something, set VP_currently_executing to the VP from the queue, then
+ * resume both.
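The discipline enforced by `handleTransStart` / `handleTransEnd` is: enter IDs must not decrease, and each end must match the most recent un-ended enter (a stack discipline, tracked by the `TransListElem` list). A toy model with illustrative names, returning 0 where the real code throws a VMS exception:

```c
/* Toy model of the transID ordering/nesting checks in handleTransStart /
 * handleTransEnd (illustrative; returns 0 where the real code throws). */
#include <assert.h>

typedef struct { int ids[16]; int top; int highest; } ToyTransLog;

int toy_trans_start( ToyTransLog *t, int transID )
  { if( t->highest > transID ) return 0;   /* "transID smaller than prev" */
    t->highest = transID;
    t->ids[ t->top++ ] = transID;          /* push onto list of entered IDs */
    return 1;
  }

int toy_trans_end( ToyTransLog *t, int transID )
  { if( t->top == 0 || t->ids[ t->top - 1 ] != transID )
      return 0;                            /* "trans incorrectly nested" */
    t->top--;                              /* pop the matching enter */
    return 1;
  }
```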
+ */ +void inline +handleTransEnd( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv) + { VPThdSemData *semData; + SlaveVP *waitingVP; + VPThdTrans *transStruc; + TransListElem *lastTrans; + + transStruc = &(semEnv->transactionStrucs[ semReq->transID ]); + + //make sure transaction ended in same VP as started it. + if( transStruc->VPCurrentlyExecuting != requestingVP ) + { + VMS__throw_exception( "trans ended in diff VP", requestingVP, NULL ); + } + + //make sure nesting is correct -- last ID entered should == this ID + semData = requestingVP->semanticData; + lastTrans = semData->lastTransEntered; + if( lastTrans->transID != semReq->transID ) + { + VMS__throw_exception( "trans incorrectly nested", requestingVP, NULL ); + } + + semData->lastTransEntered = semData->lastTransEntered->nextTrans; + + + waitingVP = readPrivQ( transStruc->waitingVPQ ); + transStruc->VPCurrentlyExecuting = waitingVP; + + if( waitingVP != NULL ) + resume_procr( waitingVP, semEnv ); + + resume_procr( requestingVP, semEnv ); + } diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_Request_Handlers.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Vthread_Request_Handlers.h Thu Mar 01 13:20:51 2012 -0800 @@ -0,0 +1,57 @@ +/* + * Copyright 2009 OpenSourceStewardshipFoundation.org + * Licensed under GNU General Public License version 2 + * + * Author: seanhalle@yahoo.com + * + */ + +#ifndef _VPThread_REQ_H +#define _VPThread_REQ_H + +#include "VPThread.h" + +/*This header defines everything specific to the VPThread semantic plug-in + */ + +inline void +handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv); +inline void +handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv); +inline void +handleMutexUnlock(VPThdSemReq *semReq, VPThdSemEnv *semEnv); +inline void +handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv); +inline void +handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv); +inline void +handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv); +void 
inline +handleMalloc(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv); +void inline +handleFree( VPThdSemReq *semReq, SlaveVP *requestingVP, VPThdSemEnv *semEnv); +inline void +handleStartFnSingleton( VPThdSemReq *semReq, SlaveVP *reqstingVP, + VPThdSemEnv *semEnv ); +inline void +handleEndFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv ); +inline void +handleStartDataSingleton( VPThdSemReq *semReq, SlaveVP *reqstingVP, + VPThdSemEnv *semEnv ); +inline void +handleEndDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv ); +void inline +handleAtomic( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv); +void inline +handleTransStart( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv ); +void inline +handleTransEnd( VPThdSemReq *semReq, SlaveVP *requestingVP, + VPThdSemEnv *semEnv); + + +#endif /* _VPThread_REQ_H */ + diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_helper.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Vthread_helper.c Thu Mar 01 13:20:51 2012 -0800 @@ -0,0 +1,48 @@ + +#include + +#include "VMS/VMS.h" +#include "VPThread.h" + +/*Re-use this in the entry-point fn + */ +inline SlaveVP * +VPThread__create_procr_helper( TopLevelFnPtr fnPtr, void *initData, + VPThdSemEnv *semEnv, int32 coreToScheduleOnto ) + { SlaveVP *newVP; + VPThdSemData *semData; + + //This is running in master, so use internal version + newVP = VMS__create_procr( fnPtr, initData ); + + semEnv->numVP += 1; + + semData = VMS__malloc( sizeof(VPThdSemData) ); + semData->highestTransEntered = -1; + semData->lastTransEntered = NULL; + + newVP->semanticData = semData; + + //=================== Assign new processor to a core ===================== + #ifdef SEQUENTIAL + newVP->coreAnimatedBy = 0; + + #else + + if(coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES ) + { //out-of-range, so round-robin assignment + newVP->coreAnimatedBy = semEnv->nextCoreToGetNewVP; + + if( 
semEnv->nextCoreToGetNewVP >= NUM_CORES - 1 ) + semEnv->nextCoreToGetNewVP = 0; + else + semEnv->nextCoreToGetNewVP += 1; + } + else //core num in-range, so use it + { newVP->coreAnimatedBy = coreToScheduleOnto; + } + #endif + //======================================================================== + + return newVP; + } \ No newline at end of file diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_helper.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Vthread_helper.h Thu Mar 01 13:20:51 2012 -0800 @@ -0,0 +1,19 @@ +/* + * File: VPThread_helper.h + * Author: msach + * + * Created on June 10, 2011, 12:20 PM + */ + +#include "VMS/VMS.h" +#include "VPThread.h" + +#ifndef VPTHREAD_HELPER_H +#define VPTHREAD_HELPER_H + +inline SlaveVP * +VPThread__create_procr_helper( TopLevelFnPtr fnPtr, void *initData, + VPThdSemEnv *semEnv, int32 coreToScheduleOnto ); + +#endif /* VPTHREAD_HELPER_H */ + diff -r c1c36be9c47a -r e5d4d5871ac9 Vthread_lib.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Vthread_lib.c Thu Mar 01 13:20:51 2012 -0800 @@ -0,0 +1,626 @@ +/* + * Copyright 2010 OpenSourceCodeStewardshipFoundation + * + * Licensed under BSD + */ + +#include +#include +#include + +#include "VMS/VMS.h" +#include "VPThread.h" +#include "VPThread_helper.h" +#include "VMS/Queue_impl/PrivateQueue.h" +#include "VMS/Hash_impl/PrivateHash.h" + + +//========================================================================== + +void +VPThread__init(); + +void +VPThread__init_Seq(); + +void +VPThread__init_Helper(); + + +//=========================================================================== + + +/*These are the library functions *called in the application* + * + *There's a pattern for the outside sequential code to interact with the + * VMS_HW code. + *The VMS_HW system is inside a boundary.. every VPThread system is in its + * own directory that contains the functions for each of the processor types. 
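The core-assignment step in `VPThread__create_procr_helper` above honors an in-range `coreToScheduleOnto` hint and otherwise falls back to round-robin. Isolated as a pure function (a sketch; `TOY_NUM_CORES` is a stand-in for the build-time `NUM_CORES` constant):

```c
/* Sketch of the core-assignment policy in VPThread__create_procr_helper:
 * honor an in-range hint, otherwise round-robin over the cores. */
#include <assert.h>

#define TOY_NUM_CORES 4

int toy_assign_core( int requested, int *nextCore )
  { int core;
    if( requested < 0 || requested >= TOY_NUM_CORES )
      { core = *nextCore;                        /* out of range: round-robin */
        if( *nextCore >= TOY_NUM_CORES - 1 )     /* wrap, as in the helper */
          *nextCore = 0;
        else
          *nextCore += 1;
      }
    else
      core = requested;                          /* in range: use the hint */
    return core;
  }
```

Note the round-robin counter only advances when the hint is rejected, exactly as in the helper above.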
+ * One of the processor types is the "seed" processor that starts the + * cascade of creating all the processors that do the work. + *So, in the directory is a file called "EntryPoint.c" that contains the + * function, named appropriately to the work performed, that the outside + * sequential code calls. This function follows a pattern: + *1) it calls VPThread__init() + *2) it creates the initial data for the seed processor, which is passed + * in to the function + *3) it creates the seed VPThread processor, with the data to start it with. + *4) it calls startVPThreadThenWaitUntilWorkDone + *5) it gets the returnValue from the transfer struc and returns that + * from the function + * + *For now, a new VPThread system has to be created via VPThread__init every + * time an entry point function is called -- later, might add letting the + * VPThread system be created once, and let all the entry points just reuse + * it -- want to be as simple as possible now, and see by using what makes + * sense for later.. + */ + + + +//=========================================================================== + +/*This is the "border crossing" function -- the thing that crosses from the + * outside world, into the VMS_HW world. It initializes and starts up the + * VMS system, then creates one processor from the specified function and + * puts it into the readyQ. From that point, that one function is resp. + * for creating all the other processors, that then create others, and so + * forth. + *When all the processors, including the seed, have dissipated, then this + * function returns. The results will have been written by side-effect via + * pointers read from, or written into initData. 
+ *
+ *NOTE: no Threads should exist in the outside program that might touch
+ * any of the data reachable from initData passed in to here
+ */
+void
+VPThread__create_seed_procr_and_do_work( TopLevelFnPtr fnPtr, void *initData )
+ { VPThdSemEnv *semEnv;
+   SlaveVP     *seedVP;
+
+   #ifdef SEQUENTIAL
+   VPThread__init_Seq();   //debug sequential exe
+   #else
+   VPThread__init();       //normal multi-thd
+   #endif
+   semEnv = _VMSMasterEnv->semanticEnv;
+
+   //VPThread starts with one processor, which is put into initial environ,
+   // and which then calls create() to create more, thereby expanding work
+   seedVP = VPThread__create_procr_helper( fnPtr, initData, semEnv, -1 );
+
+   resume_procr( seedVP, semEnv );
+
+   #ifdef SEQUENTIAL
+   VMS__start_the_work_then_wait_until_done_Seq();  //debug sequential exe
+   #else
+   VMS__start_the_work_then_wait_until_done();      //normal multi-thd
+   #endif
+
+   VPThread__cleanup_after_shutdown();
+ }
+
+
+inline int32
+VPThread__giveMinWorkUnitCycles( float32 percentOverhead )
+ {
+   return MIN_WORK_UNIT_CYCLES;
+ }
+
+inline int32
+VPThread__giveIdealNumWorkUnits()
+ {
+   return NUM_SCHED_SLOTS * NUM_CORES;
+ }
+
+inline int32
+VPThread__give_number_of_cores_to_schedule_onto()
+ {
+   return NUM_CORES;
+ }
+
+/*For now, use TSC -- later, make these two macros with assembly that first
+ * saves jump point, and second jumps back several times to get reliable time
+ */
+inline void
+VPThread__start_primitive()
+ { saveLowTimeStampCountInto( ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))->
+                              primitiveStartTime );
+ }
+
+/*Just quick and dirty for now -- make reliable later
+ * will want this to jump back several times -- to be sure cache is warm
+ * because don't want comm time included in calc-time measurement -- and
+ * also to throw out any "weird" values due to OS interrupt or TSC rollover
+ */
+inline int32
+VPThread__end_primitive_and_give_cycles()
+ { int32 endTime, startTime;
+   //TODO: fix by repeating time-measurement
+   saveLowTimeStampCountInto( endTime );
+   startTime = ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))->primitiveStartTime;
+   return (endTime - startTime);
+ }
+
+
+//===========================================================================
+//
+/*Initializes all the data-structures for a VPThread system -- but doesn't
+ * start it running yet!
+ *
+ *This sets up the semantic layer over the VMS system
+ *
+ *First, calls VMS_Setup, then creates own environment, making it ready
+ * for creating the seed processor and then starting the work.
+ */
+void
+VPThread__init()
+ {
+   VMS__init();
+   //masterEnv, a global var, now is partially set up by init_VMS
+
+   //Moved here from VMS.c because this is not parallel construct independent
+   MakeTheMeasHists();
+
+   VPThread__init_Helper();
+ }
+
+#ifdef SEQUENTIAL
+void
+VPThread__init_Seq()
+ {
+   VMS__init_Seq();
+   flushRegisters();
+   //masterEnv, a global var, now is partially set up by init_VMS
+
+   VPThread__init_Helper();
+ }
+#endif
+
+void
+VPThread__init_Helper()
+ { VPThdSemEnv     *semanticEnv;
+   PrivQueueStruc **readyVPQs;
+   int              coreIdx, i;
+
+   //Hook up the semantic layer's plug-ins to the Master virt procr
+   _VMSMasterEnv->requestHandler = &VPThread__Request_Handler;
+   _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr;
+
+   //create the semantic layer's environment (all its data) and add to
+   // the master environment
+   semanticEnv = VMS__malloc( sizeof( VPThdSemEnv ) );
+   _VMSMasterEnv->semanticEnv = semanticEnv;
+
+   //create the ready queues, one per core
+   readyVPQs = VMS__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
+
+   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
+    {
+      readyVPQs[ coreIdx ] = makeVMSPrivQ();
+    }
+
+   semanticEnv->readyVPQs = readyVPQs;
+
+   semanticEnv->numVP              = 0;
+   semanticEnv->nextCoreToGetNewVP = 0;
+
+   semanticEnv->mutexDynArrayInfo =
+      makePrivDynArrayOfSize( (void*)&(semanticEnv->mutexDynArray),
+                              INIT_NUM_MUTEX );
+
+   semanticEnv->condDynArrayInfo =
+      makePrivDynArrayOfSize( (void*)&(semanticEnv->condDynArray),
+                              INIT_NUM_COND );
+
+   //TODO: bug -- turn these arrays into dyn arrays to eliminate limit
+   //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( );
+   //semanticEnv->transactionStrucs = makeDynArrayInfo( );
+   for( i = 0; i < NUM_STRUCS_IN_SEM_ENV; i++ )
+    {
+      semanticEnv->fnSingletons[i].endInstrAddr    = NULL;
+      semanticEnv->fnSingletons[i].hasBeenStarted  = FALSE;
+      semanticEnv->fnSingletons[i].hasFinished     = FALSE;
+      semanticEnv->fnSingletons[i].waitQ           = makeVMSPrivQ();
+      semanticEnv->transactionStrucs[i].waitingVPQ = makeVMSPrivQ();
+    }
+ }
+
+
+/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown
+ */
+void
+VPThread__cleanup_after_shutdown()
+ { /*VPThdSemEnv *semEnv;
+     int32        coreIdx, idx, highestIdx;
+     VPThdMutex **mutexArray, *mutex;
+     VPThdCond  **condArray,  *cond; */
+
+   /* It's all allocated inside VMS's big chunk -- that's about to be freed,
+    * so nothing to do here
+   semEnv = _VMSMasterEnv->semanticEnv;
+
+   //TODO: double check that all sem env locations freed
+
+   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
+    {
+      free( semEnv->readyVPQs[coreIdx]->startOfData );
+      free( semEnv->readyVPQs[coreIdx] );
+    }
+
+   free( semEnv->readyVPQs );
+
+
+   //==== Free mutexes and mutex array ====
+   mutexArray = semEnv->mutexDynArray->array;
+   highestIdx = semEnv->mutexDynArray->highestIdxInArray;
+   for( idx = 0; idx < highestIdx; idx++ )
+    { mutex = mutexArray[ idx ];
+      if( mutex == NULL ) continue;
+      free( mutex );
+    }
+   free( mutexArray );
+   free( semEnv->mutexDynArray );
+   //======================================
+
+
+   //==== Free conds and cond array ====
+   condArray  = semEnv->condDynArray->array;
+   highestIdx = semEnv->condDynArray->highestIdxInArray;
+   for( idx = 0; idx < highestIdx; idx++ )
+    { cond = condArray[ idx ];
+      if( cond == NULL ) continue;
+      free( cond );
+    }
+   free( condArray );
+   free( semEnv->condDynArray );
+   //===================================
+
+
+   free( _VMSMasterEnv->semanticEnv );
+   */
+   VMS__cleanup_at_end_of_shutdown();
+ }
+
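The mutex entries set up above are managed entirely by the request handler, as a purely sequential owner-plus-wait-queue algorithm (see the design notes: grant the lock if the owner field is null or the same VP, otherwise enqueue the requester). A minimal stand-alone sketch of that handler logic follows; `VP`, `MutexEntry`, and the fixed-size queue are hypothetical simplified stand-ins, not the real `SlaveVP` or dyn-array structures:

```c
#include <assert.h>
#include <stddef.h>

/*hypothetical stand-ins for SlaveVP and the per-mutex entry*/
typedef struct { int id; } VP;

#define WAITQ_CAP 8
typedef struct
 { VP *owner;              /*VP currently holding the lock, or NULL*/
   VP *waitQ[WAITQ_CAP];   /*FIFO of VPs suspended waiting for the lock*/
   int head, tail;
 } MutexEntry;

/*mutex_lock request: returns 1 if the requesting VP should be resumed
 * as owner, 0 if it stays suspended in the wait queue*/
int handle_mutex_lock( MutexEntry *m, VP *req )
 { if( m->owner == NULL || m->owner == req )
    { m->owner = req;                        /*grant -- resume requester*/
      return 1;
    }
   m->waitQ[ m->tail++ % WAITQ_CAP ] = req;  /*block -- join the queue*/
   return 0;
 }

/*mutex_unlock request: returns the next VP to resume as new owner,
 * or NULL if the queue was empty (lock becomes free)*/
VP *handle_mutex_unlock( MutexEntry *m )
 { if( m->head == m->tail )
    { m->owner = NULL;
      return NULL;
    }
   m->owner = m->waitQ[ m->head++ % WAITQ_CAP ];
   return m->owner;
 }
```

Because the master runs these handlers one request at a time, no atomic instructions are needed -- "simultaneous" lock attempts are serialized automatically.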
+//===========================================================================
+
+/*
+ */
+inline SlaveVP *
+VPThread__create_thread( TopLevelFnPtr fnPtr, void *initData,
+                         SlaveVP *creatingVP )
+ { VPThdSemReq reqData;
+
+   //the semantic request data is on the stack and disappears when this
+   // call returns -- it's guaranteed to remain in the VP's stack for as
+   // long as the VP is suspended.
+   reqData.reqType            = 0;  //type is known because this is a VMS create req
+   reqData.coreToScheduleOnto = -1; //means round-robin schedule
+   reqData.fnPtr              = fnPtr;
+   reqData.initData           = initData;
+   reqData.requestingVP       = creatingVP;
+
+   VMS__send_create_procr_req( &reqData, creatingVP );
+
+   return creatingVP->dataRetFromReq;
+ }
+
+inline SlaveVP *
+VPThread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
+                            SlaveVP *creatingVP, int32 coreToScheduleOnto )
+ { VPThdSemReq reqData;
+
+   //the semantic request data is on the stack and disappears when this
+   // call returns -- it's guaranteed to remain in the VP's stack for as
+   // long as the VP is suspended.
+   reqData.reqType            = 0;  //type is known because this is a VMS create req
+   reqData.coreToScheduleOnto = coreToScheduleOnto;
+   reqData.fnPtr              = fnPtr;
+   reqData.initData           = initData;
+   reqData.requestingVP       = creatingVP;
+
+   VMS__send_create_procr_req( &reqData, creatingVP );
+
+   return creatingVP->dataRetFromReq;
+ }
+
+inline void
+VPThread__dissipate_thread( SlaveVP *procrToDissipate )
+ {
+   VMS__send_dissipate_req( procrToDissipate );
+ }
+
+
+//===========================================================================
+
+void *
+VPThread__malloc( size_t sizeToMalloc, SlaveVP *animVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = malloc_req;
+   reqData.sizeToMalloc = sizeToMalloc;
+   reqData.requestingVP = animVP;
+
+   VMS__send_sem_request( &reqData, animVP );
+
+   return animVP->dataRetFromReq;
+ }
+
+
+/*Sends request to Master, which does the work of freeing
+ */
+void
+VPThread__free( void *ptrToFree, SlaveVP *animVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = free_req;
+   reqData.ptrToFree    = ptrToFree;
+   reqData.requestingVP = animVP;
+
+   VMS__send_sem_request( &reqData, animVP );
+ }
+
+
+//===========================================================================
+
+inline void
+VPThread__set_globals_to( void *globals )
+ {
+   ((VPThdSemEnv *)
+    (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals;
+ }
+
+inline void *
+VPThread__give_globals()
+ {
+   return ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))->applicationGlobals;
+ }
+
+
+//===========================================================================
+
+inline int32
+VPThread__make_mutex( SlaveVP *animVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = make_mutex;
+   reqData.requestingVP = animVP;
+
+   VMS__send_sem_request( &reqData, animVP );
+
+   return (int32)animVP->dataRetFromReq; //mutexIdx is 32 bits wide
+ }
+
+inline void
+VPThread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = mutex_lock;
+   reqData.mutexIdx     = mutexIdx;
+   reqData.requestingVP = acquiringVP;
+
+   VMS__send_sem_request( &reqData, acquiringVP );
+ }
+
+inline void
+VPThread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = mutex_unlock;
+   reqData.mutexIdx     = mutexIdx;
+   reqData.requestingVP = releasingVP;
+
+   VMS__send_sem_request( &reqData, releasingVP );
+ }
+
+
+//=======================
+inline int32
+VPThread__make_cond( int32 ownedMutexIdx, SlaveVP *animVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = make_cond;
+   reqData.mutexIdx     = ownedMutexIdx;
+   reqData.requestingVP = animVP;
+
+   VMS__send_sem_request( &reqData, animVP );
+
+   return (int32)animVP->dataRetFromReq; //condIdx is 32 bits wide
+ }
+
+inline void
+VPThread__cond_wait( int32 condIdx, SlaveVP *waitingVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = cond_wait;
+   reqData.condIdx      = condIdx;
+   reqData.requestingVP = waitingVP;
+
+   VMS__send_sem_request( &reqData, waitingVP );
+ }
+
+inline void
+VPThread__cond_signal( int32 condIdx, SlaveVP *signallingVP )
+ { VPThdSemReq reqData;
+
+   reqData.reqType      = cond_signal;
+   reqData.condIdx      = condIdx;
+   reqData.requestingVP = signallingVP;
+
+   VMS__send_sem_request( &reqData, signallingVP );
+ }
+
+
+//===========================================================================
+//
+/*A function singleton is a function whose body executes exactly once, on a
+ * single core, no matter how many times the function is called, and no
+ * matter how many cores call it or with what timing.
+ *
+ *A data singleton is a ticket attached to data.  That ticket can be used
+ * to get the data through the function exactly once, no matter how many
+ * times the data is given to the function, and no matter the timing of
+ * trying to get the data through from different cores.
+ */
+
+/*asm function declarations*/
+void asm_save_ret_to_singleton( VPThdSingleton *singletonPtrAddr );
+void asm_write_ret_from_singleton( VPThdSingleton *singletonPtrAddr );
+
+/*Fn singleton uses ID as index into array of singleton structs held in the
+ * semantic environment.
+ */
+void
+VPThread__start_fn_singleton( int32 singletonID, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   //
+   reqData.reqType     = singleton_fn_start;
+   reqData.singletonID = singletonID;
+
+   VMS__send_sem_request( &reqData, animVP );
+   if( animVP->dataRetFromReq ) //will be 0 or addr of label in end singleton
+    {
+      VPThdSemEnv *semEnv = VMS__give_sem_env_for( animVP );
+      asm_write_ret_from_singleton( &(semEnv->fnSingletons[ singletonID ]) );
+    }
+ }
+
+/*A data singleton call hands in the addr of a location holding a pointer
+ * to a singleton struct.  The start_data_singleton call makes the structure
+ * and puts its addr into the location.
+ */
+void
+VPThread__start_data_singleton( VPThdSingleton **singletonAddr, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   if( *singletonAddr && (*singletonAddr)->hasFinished )
+      goto JmpToEndSingleton;
+
+   reqData.reqType          = singleton_data_start;
+   reqData.singletonPtrAddr = singletonAddr;
+
+   VMS__send_sem_request( &reqData, animVP );
+   if( animVP->dataRetFromReq ) //either 0 or end singleton's return addr
+    {
+      JmpToEndSingleton:
+      asm_write_ret_from_singleton( *singletonAddr );
+    }
+   //now, simply return --
+   //will exit either from the start-singleton call or the end-singleton call
+ }
+
+/*Uses ID as index into array of flags.  If flag already set, resumes from
+ * end-label.  Else, sets flag and resumes normally.
+ *
+ *Note, this call cannot be inlined because the instr addr at the label
+ * inside is shared by all invocations of a given singleton ID.
+ */
+void
+VPThread__end_fn_singleton( int32 singletonID, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   //don't need this addr until after at least one singleton has reached
+   // this function -- save the return addr of this call into the singleton
+   VPThdSemEnv *semEnv = VMS__give_sem_env_for( animVP );
+   asm_save_ret_to_singleton( &(semEnv->fnSingletons[ singletonID ]) );
+
+   reqData.reqType     = singleton_fn_end;
+   reqData.singletonID = singletonID;
+
+   VMS__send_sem_request( &reqData, animVP );
+ }
+
+void
+VPThread__end_data_singleton( VPThdSingleton **singletonPtrAddr, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   //don't need this addr until after singleton struct has reached
+   // this function for the first time
+   //do assembly that saves the return addr of this fn call into the
+   // data singleton -- that data-singleton can only be given to exactly
+   // one instance in the code of this function.  However, can use this
+   // function in different places for different data-singletons.
+   asm_save_ret_to_singleton( *singletonPtrAddr );
+
+   reqData.reqType          = singleton_data_end;
+   reqData.singletonPtrAddr = singletonPtrAddr;
+
+   VMS__send_sem_request( &reqData, animVP );
+ }
+
+
+/*This executes the function in the masterVP, so it executes in isolation
+ * from any other copies -- only one copy of the function can ever execute
+ * at a time.
+ *
+ *It suspends to the master, and the request handler takes the function
+ * pointer out of the request and calls it, then resumes the VP.
+ *Only very short functions should be called this way -- for longer-running
+ * isolation, use transaction-start and transaction-end, which run the code
+ * between as work-code.
+ */
+void
+VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
+                                         void *data, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   //
+   reqData.reqType          = atomic;
+   reqData.fnToExecInMaster = ptrToFnToExecInMaster;
+   reqData.dataForFn        = data;
+
+   VMS__send_sem_request( &reqData, animVP );
+ }
+
+
+/*This suspends to the master.
+ *First, it looks at the VP's data, to see the highest transactionID that VP
+ * has already entered.  If the current ID is not larger, it throws an
+ * exception stating a bug in the code.  Otherwise it puts the current ID
+ * there, and adds the ID to a linked list of IDs entered -- the list is
+ * used to check that exits are properly ordered.
+ *Next it uses transactionID as an index into an array of transaction
+ * structures.
+ *If the "VP_currently_executing" field is non-null, the requesting VP is
+ * put into the queue in the struct.  (At some point a holder will request
+ * end-transaction, which will take this VP from the queue and resume it.)
+ *If NULL, the requesting VP is written into the field and resumed.
+ */
+void
+VPThread__start_transaction( int32 transactionID, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   //
+   reqData.reqType = trans_start;
+   reqData.transID = transactionID;
+
+   VMS__send_sem_request( &reqData, animVP );
+ }
+
+/*This suspends to the master, then uses transactionID as an index into an
+ * array of transaction structures.
+ *It checks that VP_currently_executing is the same as the requesting VP.
+ * If different, it throws an exception, stating there's a bug in the code.
+ *Next it looks at the queue in the structure.
+ *If the queue is empty, it sets the VP_currently_executing field to NULL
+ * and resumes.
+ *If a VP is queued, it takes it out, sets VP_currently_executing to that
+ * VP, then resumes both.
+ */
+void
+VPThread__end_transaction( int32 transactionID, SlaveVP *animVP )
+ {
+   VPThdSemReq reqData;
+
+   //
+   reqData.reqType = trans_end;
+   reqData.transID = transactionID;
+
+   VMS__send_sem_request( &reqData, animVP );
+ }
+//===========================================================================
diff -r c1c36be9c47a -r e5d4d5871ac9 __brch__default
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/__brch__default	Thu Mar 01 13:20:51 2012 -0800
@@ -0,0 +1,1 @@
+The default branch for Vthread -- the language libraries will have fewer branches than VMS does; some might be used for feature development.
\ No newline at end of file
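The transaction start/end behavior described in the comments above is, like the mutex handling, a purely sequential algorithm run by the master's request handler. The following stand-alone sketch models just the "VP_currently_executing plus wait queue" part (the ordered-exit checking is omitted); `VP` and `TransStruc` are hypothetical simplified stand-ins for the real structures in the semantic environment:

```c
#include <assert.h>
#include <stddef.h>

/*hypothetical stand-ins for SlaveVP and the per-transaction struct*/
typedef struct { int id; } VP;

#define TRANS_WAITQ_CAP 8
typedef struct
 { VP *VP_currently_executing;     /*NULL when the transaction is free*/
   VP *waitQ[TRANS_WAITQ_CAP];     /*FIFO of VPs waiting to enter*/
   int head, tail;
 } TransStruc;

/*trans_start request: returns 1 if requester may proceed, 0 if queued*/
int handle_trans_start( TransStruc *t, VP *req )
 { if( t->VP_currently_executing != NULL )
    { t->waitQ[ t->tail++ % TRANS_WAITQ_CAP ] = req; /*stays suspended*/
      return 0;
    }
   t->VP_currently_executing = req;                  /*enter -- resume*/
   return 1;
 }

/*trans_end request: returns the next VP to resume inside the
 * transaction, or NULL if the queue was empty*/
VP *handle_trans_end( TransStruc *t, VP *req )
 { assert( t->VP_currently_executing == req ); /*mismatched end = bug*/
   if( t->head == t->tail )
    { t->VP_currently_executing = NULL;
      return NULL;
    }
   t->VP_currently_executing = t->waitQ[ t->head++ % TRANS_WAITQ_CAP ];
   return t->VP_currently_executing;
 }
```

Serializing the requests in the master is what lets "virtually simultaneous" start and end requests be handled without any atomic operations.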