changeset 27:e5d4d5871ac9

half-done update to common_ancestor VMS version... in middle
author Some Random Person <seanhalle@yahoo.com>
date Thu, 01 Mar 2012 13:20:51 -0800
parents c1c36be9c47a
children b3a881f25c5a
files .hgeol DESIGN_NOTES__VPThread_lib.txt DESIGN_NOTES__Vthread_lib.txt VPThread.h VPThread.s VPThread_PluginFns.c VPThread_Request_Handlers.c VPThread_Request_Handlers.h VPThread_helper.c VPThread_helper.h VPThread_lib.c Vthread.h Vthread.s Vthread_Meas.h Vthread_PluginFns.c Vthread_Request_Handlers.c Vthread_Request_Handlers.h Vthread_helper.c Vthread_helper.h Vthread_lib.c __brch__default
diffstat 21 files changed, 1870 insertions(+), 1746 deletions(-) [+]
line diff
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/.hgeol	Thu Mar 01 13:20:51 2012 -0800
     1.3 @@ -0,0 +1,12 @@
     1.4 +
     1.5 +[patterns]
     1.6 +**.py = native
     1.7 +**.txt = native
     1.8 +**.c = native
     1.9 +**.h = native
    1.10 +**.cpp = native
    1.11 +**.java = native
    1.12 +**.sh = native
    1.13 +**.pl = native
    1.14 +**.jpg = bin
    1.15 +**.gif = bin
     2.1 --- a/DESIGN_NOTES__VPThread_lib.txt	Tue Jul 26 16:37:26 2011 +0200
     2.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     2.3 @@ -1,82 +0,0 @@
      2.4 -
      2.5 -Implement VPThread this way:
      2.6 -
      2.7 -We implemented a subset of PThreads functionality, called VMSPThd, that
      2.8 -includes: mutex_lock, mutex_unlock, cond_wait, and cond_notify, which we name
      2.9 -as VMSPThd__mutix_lock and so forth. \ All VMSPThd functions take a reference
     2.10 -to the AppVP that is animating the function call, in addition to any other
     2.11 -parameters.
     2.12 -
     2.13 -A mutex variable is an integer, returned by VMSPThd__mutex_create(), which is
     2.14 -used inside the request handler as a key to lookup an entry in a hash table,
     2.15 -that lives in the SemanticEnv. \ Such an entry has a field holding a
     2.16 -reference to the AppVP that currently owns the lock, and a queue of AppVPs
     2.17 -waiting to acquire the lock. \
     2.18 -
     2.19 -Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a
     2.20 -request. \ Recall that all request sends cause the suspention of the AppVP
     2.21 -that is animating the library call that generates the request, in this case
     2.22 -the AppVP animating VMSPThd__mutex_lock() is suspended. \ The request
     2.23 -includes a reference to that animating AppVP, and the mutex integer value.
     2.24 -\ When the request reaches the request handler, the mutex integer is used as
     2.25 -key to look up the hash entry, then if the owner field is null (or the same
     2.26 -as the AppVP in the request), the AppVP in the request is placed into the
     2.27 -owner field, and that AppVP is queued to be scheduled for re-animation.
     2.28 -\ However, if a different AppVP is listed in the owner field, then the AppVP
     2.29 -in the request is added to the queue of those trying to acquire. \ Notice
     2.30 -that this is a purely sequential algorithm that systematic reasoning can be
     2.31 -used on.
     2.32 -
     2.33 -VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the
     2.34 -request handler to queue for re-animation the AppVP that animated the call.
     2.35 -\ It also pops the queue of AppVPs waiting to acquire the lock, and writes
     2.36 -the AppVP that comes out as the current owner of the lock and queues that
     2.37 -AppVP for re-animation (unless the popped value is null, in which case the
     2.38 -current owner is just set to null).
     2.39 -
     2.40 -Implementing condition variables takes a similar approach, in that
     2.41 -VMSPThd__init_cond() returns an integer that is then used to look up an entry
     2.42 -in a hash table, where the entry contains a queue of AppVPs waiting on the
     2.43 -condition variable. \ VMSPThd__cond_wait() generates a request that pushes
     2.44 -the AppVP into the queue, while VMSPThd__cond_signal() takes a wait request
     2.45 -from the queue.
     2.46 -
     2.47 -Notice that this is again a purely sequential algorithm, and sidesteps issues
     2.48 -such as ``simultaneous'' wait and signal requests -- the wait and signal get
     2.49 -serialized automatically, even though they take place at the same instant of
     2.50 -program virtual time. \
     2.51 -
     2.52 -It is the fact of having a program virtual time that allows ``virtual
     2.53 -simultaneous'' actions to be handled <em|outside> of the virtual time. \ That
     2.54 -ability to escape outside of the virtual time is what enables a
     2.55 -<em|sequential> algorithm to handle the simultaneity that is at the heart of
     2.56 -making implementing locks in physical time so intricately tricky
     2.57 -<inactive|<cite|LamportLockImpl>> <inactive|<cite|DijkstraLockPaper>>
     2.58 -<inactive|<cite|LamportRelativisticTimePaper>>.\
     2.59 -
     2.60 -What's nice about this approach is that the design and implementation are
     2.61 -simple and straight forward. \ It took just X days to design, implement, and
     2.62 -debug, and is in a form that should be amenable to proof of freedom from race
     2.63 -conditions, given a correct implementation of VMS. \ The hash-table based
     2.64 -approach also makes it reasonably high performance, with (essentially) no
     2.65 -slowdown when the number of locks or number of AppVPs grows large.
     2.66 -
     2.67 -===========================
     2.68 -Behavior:
     2.69 -Cond variables are half of a two-piece mechanism.  The other half is a mutex.
     2.70 - Every cond var owns a mutex -- the two intrinsically work
     2.71 - together, as a pair.  The mutex must only be used with the condition var
     2.72 - and not used on its own in other ways.
     2.73 -
     2.74 -cond_wait is called with a cond-var and its mutex.
     2.75 -The animating processor must have acquired the mutex before calling cond_wait
     2.76 -The call adds the animating processor to the queue associated with the cond
     2.77 -variable and then calls mutex_unlock on the mutex.
     2.78 -
     2.79 -cond_signal can only be called after acquiring the cond var's mutex.  It is
     2.80 -called with the cond-var.
     2.81 - The call takes the next processor from the condition-var's wait queue and
     2.82 - transfers it to the waiting-for-lock queue of the cond-var's mutex.
     2.83 -The processor that called the cond_signal next has to perform a mutex_unlock
     2.84 - on the cond-var's mutex -- that, finally, lets the waiting processor acquire
     2.85 - the mutex and proceed.
     3.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     3.2 +++ b/DESIGN_NOTES__Vthread_lib.txt	Thu Mar 01 13:20:51 2012 -0800
     3.3 @@ -0,0 +1,82 @@
     3.4 +
     3.5 +Implement VPThread this way:
     3.6 +
     3.7 +We implemented a subset of PThreads functionality, called VMSPThd, that
     3.8 +includes: mutex_lock, mutex_unlock, cond_wait, and cond_notify, which we name
      3.9 +as VMSPThd__mutex_lock and so forth. \ All VMSPThd functions take a reference
    3.10 +to the AppVP that is animating the function call, in addition to any other
    3.11 +parameters.
    3.12 +
     3.13 +A mutex variable is an integer, returned by VMSPThd__mutex_create(), which is
     3.14 +used inside the request handler as a key to look up an entry in a hash table
     3.15 +that lives in the SemanticEnv. \ Such an entry has a field holding a
    3.16 +reference to the AppVP that currently owns the lock, and a queue of AppVPs
    3.17 +waiting to acquire the lock. \
    3.18 +
    3.19 +Acquiring a lock is done with VMSPThd__mutex_lock(), which generates a
     3.20 +request. \ Recall that all request sends cause the suspension of the AppVP
     3.21 +that is animating the library call that generates the request; in this case,
     3.22 +the AppVP animating VMSPThd__mutex_lock() is suspended. \ The request
    3.23 +includes a reference to that animating AppVP, and the mutex integer value.
    3.24 +\ When the request reaches the request handler, the mutex integer is used as
    3.25 +key to look up the hash entry, then if the owner field is null (or the same
    3.26 +as the AppVP in the request), the AppVP in the request is placed into the
    3.27 +owner field, and that AppVP is queued to be scheduled for re-animation.
    3.28 +\ However, if a different AppVP is listed in the owner field, then the AppVP
    3.29 +in the request is added to the queue of those trying to acquire. \ Notice
    3.30 +that this is a purely sequential algorithm that systematic reasoning can be
    3.31 +used on.
    3.32 +
    3.33 +VMSPThd__mutex_unlock(), meanwhile, generates a request that causes the
    3.34 +request handler to queue for re-animation the AppVP that animated the call.
    3.35 +\ It also pops the queue of AppVPs waiting to acquire the lock, and writes
    3.36 +the AppVP that comes out as the current owner of the lock and queues that
    3.37 +AppVP for re-animation (unless the popped value is null, in which case the
    3.38 +current owner is just set to null).
    3.39 +
    3.40 +Implementing condition variables takes a similar approach, in that
    3.41 +VMSPThd__init_cond() returns an integer that is then used to look up an entry
    3.42 +in a hash table, where the entry contains a queue of AppVPs waiting on the
    3.43 +condition variable. \ VMSPThd__cond_wait() generates a request that pushes
    3.44 +the AppVP into the queue, while VMSPThd__cond_signal() takes a wait request
    3.45 +from the queue.
    3.46 +
    3.47 +Notice that this is again a purely sequential algorithm, and sidesteps issues
    3.48 +such as ``simultaneous'' wait and signal requests -- the wait and signal get
    3.49 +serialized automatically, even though they take place at the same instant of
    3.50 +program virtual time. \
    3.51 +
    3.52 +It is the fact of having a program virtual time that allows ``virtual
    3.53 +simultaneous'' actions to be handled <em|outside> of the virtual time. \ That
    3.54 +ability to escape outside of the virtual time is what enables a
    3.55 +<em|sequential> algorithm to handle the simultaneity that is at the heart of
    3.56 +making implementing locks in physical time so intricately tricky
    3.57 +<inactive|<cite|LamportLockImpl>> <inactive|<cite|DijkstraLockPaper>>
    3.58 +<inactive|<cite|LamportRelativisticTimePaper>>.\
    3.59 +
    3.60 +What's nice about this approach is that the design and implementation are
     3.61 +simple and straightforward. \ It took just X days to design, implement, and
    3.62 +debug, and is in a form that should be amenable to proof of freedom from race
    3.63 +conditions, given a correct implementation of VMS. \ The hash-table based
    3.64 +approach also makes it reasonably high performance, with (essentially) no
    3.65 +slowdown when the number of locks or number of AppVPs grows large.
    3.66 +
    3.67 +===========================
    3.68 +Behavior:
    3.69 +Cond variables are half of a two-piece mechanism.  The other half is a mutex.
    3.70 + Every cond var owns a mutex -- the two intrinsically work
    3.71 + together, as a pair.  The mutex must only be used with the condition var
    3.72 + and not used on its own in other ways.
    3.73 +
    3.74 +cond_wait is called with a cond-var and its mutex.
     3.75 +The animating processor must have acquired the mutex before calling cond_wait.
    3.76 +The call adds the animating processor to the queue associated with the cond
    3.77 +variable and then calls mutex_unlock on the mutex.
    3.78 +
    3.79 +cond_signal can only be called after acquiring the cond var's mutex.  It is
    3.80 +called with the cond-var.
    3.81 + The call takes the next processor from the condition-var's wait queue and
    3.82 + transfers it to the waiting-for-lock queue of the cond-var's mutex.
    3.83 +The processor that called the cond_signal next has to perform a mutex_unlock
    3.84 + on the cond-var's mutex -- that, finally, lets the waiting processor acquire
    3.85 + the mutex and proceed.
     4.1 --- a/VPThread.h	Tue Jul 26 16:37:26 2011 +0200
     4.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     4.3 @@ -1,256 +0,0 @@
     4.4 -/*
     4.5 - *  Copyright 2009 OpenSourceStewardshipFoundation.org
     4.6 - *  Licensed under GNU General Public License version 2
     4.7 - *
     4.8 - * Author: seanhalle@yahoo.com
     4.9 - *
    4.10 - */
    4.11 -
    4.12 -#ifndef _VPThread_H
    4.13 -#define	_VPThread_H
    4.14 -
    4.15 -#include "VMS/VMS.h"
    4.16 -#include "VMS/Queue_impl/PrivateQueue.h"
    4.17 -#include "VMS/DynArray/DynArray.h"
    4.18 -
    4.19 -
    4.20 -/*This header defines everything specific to the VPThread semantic plug-in
    4.21 - */
    4.22 -
    4.23 -
    4.24 -//===========================================================================
    4.25 -#define INIT_NUM_MUTEX 10000
    4.26 -#define INIT_NUM_COND  10000
    4.27 -
    4.28 -#define NUM_STRUCS_IN_SEM_ENV 1000
    4.29 -//===========================================================================
    4.30 -
    4.31 -//===========================================================================
    4.32 -typedef struct _VPThreadSemReq   VPThdSemReq;
    4.33 -typedef void  (*PtrToAtomicFn )   ( void * ); //executed atomically in master
    4.34 -//===========================================================================
    4.35 -
    4.36 -
    4.37 -/*WARNING: assembly hard-codes position of endInstrAddr as first field
    4.38 - */
    4.39 -typedef struct
    4.40 - {
    4.41 -   void           *endInstrAddr;
    4.42 -   int32           hasBeenStarted;
    4.43 -   int32           hasFinished;
    4.44 -   PrivQueueStruc *waitQ;
    4.45 - }
    4.46 -VPThdSingleton;
    4.47 -
    4.48 -/*Semantic layer-specific data sent inside a request from lib called in app
    4.49 - * to request handler called in MasterLoop
    4.50 - */
    4.51 -enum VPThreadReqType
    4.52 - {
    4.53 -   make_mutex = 1,
    4.54 -   mutex_lock,
    4.55 -   mutex_unlock,
    4.56 -   make_cond,
    4.57 -   cond_wait,
    4.58 -   cond_signal,
    4.59 -   make_procr,
    4.60 -   malloc_req,
    4.61 -   free_req,
    4.62 -   singleton_fn_start,
    4.63 -   singleton_fn_end,
    4.64 -   singleton_data_start,
    4.65 -   singleton_data_end,
    4.66 -   atomic,
    4.67 -   trans_start,
    4.68 -   trans_end
    4.69 - };
    4.70 -
    4.71 -struct _VPThreadSemReq
    4.72 - { enum VPThreadReqType reqType;
    4.73 -   VirtProcr           *requestingPr;
    4.74 -   int32                mutexIdx;
    4.75 -   int32                condIdx;
    4.76 -
    4.77 -   void                *initData;
    4.78 -   VirtProcrFnPtr       fnPtr;
    4.79 -   int32                coreToScheduleOnto;
    4.80 -
    4.81 -   size_t                sizeToMalloc;
    4.82 -   void                *ptrToFree;
    4.83 -
    4.84 -   int32              singletonID;
    4.85 -   VPThdSingleton     **singletonPtrAddr;
    4.86 -
    4.87 -   PtrToAtomicFn      fnToExecInMaster;
    4.88 -   void              *dataForFn;
    4.89 -
    4.90 -   int32              transID;
    4.91 - }
    4.92 -/* VPThreadSemReq */;
    4.93 -
    4.94 -
    4.95 -typedef struct
    4.96 - {
    4.97 -   VirtProcr      *VPCurrentlyExecuting;
    4.98 -   PrivQueueStruc *waitingVPQ;
    4.99 - }
   4.100 -VPThdTrans;
   4.101 -
   4.102 -
   4.103 -typedef struct
   4.104 - {
   4.105 -   int32           mutexIdx;
   4.106 -   VirtProcr      *holderOfLock;
   4.107 -   PrivQueueStruc *waitingQueue;
   4.108 - }
   4.109 -VPThdMutex;
   4.110 -
   4.111 -
   4.112 -typedef struct
   4.113 - {
   4.114 -   int32           condIdx;
   4.115 -   PrivQueueStruc *waitingQueue;
   4.116 -   VPThdMutex       *partnerMutex;
   4.117 - }
   4.118 -VPThdCond;
   4.119 -
   4.120 -typedef struct _TransListElem TransListElem;
   4.121 -struct _TransListElem
   4.122 - {
   4.123 -   int32          transID;
   4.124 -   TransListElem *nextTrans;
   4.125 - };
   4.126 -//TransListElem
   4.127 -
   4.128 -typedef struct
   4.129 - {
   4.130 -   int32          highestTransEntered;
   4.131 -   TransListElem *lastTransEntered;
   4.132 - }
   4.133 -VPThdSemData;
   4.134 -
   4.135 -
   4.136 -typedef struct
   4.137 - {
   4.138 -      //Standard stuff will be in most every semantic env
   4.139 -   PrivQueueStruc  **readyVPQs;
   4.140 -   int32             numVirtPr;
   4.141 -   int32             nextCoreToGetNewPr;
   4.142 -   int32             primitiveStartTime;
   4.143 -
   4.144 -      //Specific to this semantic layer
   4.145 -   VPThdMutex      **mutexDynArray;
   4.146 -   PrivDynArrayInfo *mutexDynArrayInfo;
   4.147 -
   4.148 -   VPThdCond       **condDynArray;
   4.149 -   PrivDynArrayInfo *condDynArrayInfo;
   4.150 -
   4.151 -   void             *applicationGlobals;
   4.152 -
   4.153 -                       //fix limit on num with dynArray
   4.154 -   VPThdSingleton     fnSingletons[NUM_STRUCS_IN_SEM_ENV];
   4.155 -
   4.156 -   VPThdTrans        transactionStrucs[NUM_STRUCS_IN_SEM_ENV];
   4.157 - }
   4.158 -VPThdSemEnv;
   4.159 -
   4.160 -
   4.161 -//===========================================================================
   4.162 -
   4.163 -inline void
   4.164 -VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fn, void *initData );
   4.165 -
   4.166 -//=======================
   4.167 -
   4.168 -inline VirtProcr *
   4.169 -VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData,
   4.170 -                          VirtProcr *creatingPr );
   4.171 -
   4.172 -inline VirtProcr *
   4.173 -VPThread__create_thread_with_affinity( VirtProcrFnPtr fnPtr, void *initData,
   4.174 -                          VirtProcr *creatingPr,  int32  coreToScheduleOnto );
   4.175 -
   4.176 -inline void
   4.177 -VPThread__dissipate_thread( VirtProcr *procrToDissipate );
   4.178 -
   4.179 -//=======================
   4.180 -inline void
   4.181 -VPThread__set_globals_to( void *globals );
   4.182 -
   4.183 -inline void *
   4.184 -VPThread__give_globals();
   4.185 -
   4.186 -//=======================
   4.187 -inline int32
   4.188 -VPThread__make_mutex( VirtProcr *animPr );
   4.189 -
   4.190 -inline void
   4.191 -VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr );
   4.192 -                                                    
   4.193 -inline void
   4.194 -VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr );
   4.195 -
   4.196 -
   4.197 -//=======================
   4.198 -inline int32
   4.199 -VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr);
   4.200 -
   4.201 -inline void
   4.202 -VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr);
   4.203 -
   4.204 -inline void *
   4.205 -VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr );
   4.206 -
   4.207 -
   4.208 -//=======================
   4.209 -void
   4.210 -VPThread__start_fn_singleton( int32 singletonID, VirtProcr *animPr );
   4.211 -
   4.212 -void
   4.213 -VPThread__end_fn_singleton( int32 singletonID, VirtProcr *animPr );
   4.214 -
   4.215 -void
   4.216 -VPThread__start_data_singleton( VPThdSingleton **singeltonAddr, VirtProcr *animPr );
   4.217 -
   4.218 -void
   4.219 -VPThread__end_data_singleton( VPThdSingleton **singletonAddr, VirtProcr *animPr );
   4.220 -
   4.221 -void
   4.222 -VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
   4.223 -                                         void *data, VirtProcr *animPr );
   4.224 -
   4.225 -void
   4.226 -VPThread__start_transaction( int32 transactionID, VirtProcr *animPr );
   4.227 -
   4.228 -void
   4.229 -VPThread__end_transaction( int32 transactionID, VirtProcr *animPr );
   4.230 -
   4.231 -
   4.232 -
   4.233 -//=========================  Internal use only  =============================
   4.234 -inline void
   4.235 -VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv );
   4.236 -
   4.237 -inline VirtProcr *
   4.238 -VPThread__schedule_virt_procr( void *_semEnv, int coreNum );
   4.239 -
   4.240 -//=======================
   4.241 -inline void
   4.242 -VPThread__free_semantic_request( VPThdSemReq *semReq );
   4.243 -
   4.244 -//=======================
   4.245 -
   4.246 -void *
   4.247 -VPThread__malloc( size_t sizeToMalloc, VirtProcr *animPr );
   4.248 -
   4.249 -void
   4.250 -VPThread__init();
   4.251 -
   4.252 -void
   4.253 -VPThread__cleanup_after_shutdown();
   4.254 -
   4.255 -void inline
   4.256 -resume_procr( VirtProcr *procr, VPThdSemEnv *semEnv );
   4.257 -
   4.258 -#endif	/* _VPThread_H */
   4.259 -
     5.1 --- a/VPThread.s	Tue Jul 26 16:37:26 2011 +0200
     5.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     5.3 @@ -1,21 +0,0 @@
     5.4 -
      5.5 -//Assembly code takes the return addr off the stack and saves
      5.6 -// into the singleton.  The first field in the singleton is the
      5.7 -// "endInstrAddr" field, and the return addr is at 0x8(%rbp)
     5.8 -.globl asm_save_ret_to_singleton
     5.9 -asm_save_ret_to_singleton:
     5.10 -    movq 0x8(%rbp),     %rax   #get ret address, rbp is the same as in the calling function
    5.11 -    movq     %rax,     (%rdi) #write ret addr to endInstrAddr field
    5.12 -    ret
    5.13 -
    5.14 -
    5.15 -//Assembly code changes the return addr on the stack to the one
    5.16 -// saved into the singleton by the end-singleton-fn
     5.17 -//The stack's return addr is at 0x8(%rbp)
    5.18 -.globl asm_write_ret_from_singleton
    5.19 -asm_write_ret_from_singleton:
    5.20 -    movq    (%rdi),    %rax  #get endInstrAddr field
    5.21 -    movq      %rax,    0x8(%rbp) #write return addr to the stack of the caller
    5.22 -    ret
    5.23 -
    5.24 -
     6.1 --- a/VPThread_PluginFns.c	Tue Jul 26 16:37:26 2011 +0200
     6.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     6.3 @@ -1,192 +0,0 @@
     6.4 -/*
     6.5 - * Copyright 2010  OpenSourceCodeStewardshipFoundation
     6.6 - *
     6.7 - * Licensed under BSD
     6.8 - */
     6.9 -
    6.10 -#include <stdio.h>
    6.11 -#include <stdlib.h>
    6.12 -#include <malloc.h>
    6.13 -
    6.14 -#include "VMS/Queue_impl/PrivateQueue.h"
    6.15 -#include "VPThread.h"
    6.16 -#include "VPThread_Request_Handlers.h"
    6.17 -#include "VPThread_helper.h"
    6.18 -
    6.19 -//=========================== Local Fn Prototypes ===========================
    6.20 -
    6.21 -void inline
    6.22 -handleSemReq( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv );
    6.23 -
    6.24 -inline void
    6.25 -handleDissipate(             VirtProcr *requestingPr, VPThdSemEnv *semEnv );
    6.26 -
    6.27 -inline void
    6.28 -handleCreate( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv  );
    6.29 -
    6.30 -
    6.31 -//============================== Scheduler ==================================
    6.32 -//
    6.33 -/*For VPThread, scheduling a slave simply takes the next work-unit off the
    6.34 - * ready-to-go work-unit queue and assigns it to the slaveToSched.
    6.35 - *If the ready-to-go work-unit queue is empty, then nothing to schedule
    6.36 - * to the slave -- return FALSE to let Master loop know scheduling that
    6.37 - * slave failed.
    6.38 - */
     6.39 -char __Scheduler[] = "FIFO Scheduler"; //Global variable for name in saved histogram
    6.40 -VirtProcr *
    6.41 -VPThread__schedule_virt_procr( void *_semEnv, int coreNum )
    6.42 - { VirtProcr   *schedPr;
    6.43 -   VPThdSemEnv *semEnv;
    6.44 -
    6.45 -   semEnv = (VPThdSemEnv *)_semEnv;
    6.46 -
    6.47 -   schedPr = readPrivQ( semEnv->readyVPQs[coreNum] );
    6.48 -      //Note, using a non-blocking queue -- it returns NULL if queue empty
    6.49 -
    6.50 -   return( schedPr );
    6.51 - }
    6.52 -
    6.53 -
    6.54 -
    6.55 -//===========================  Request Handler  =============================
    6.56 -//
     6.57 -/*Handles the requests sent by slave processors: semantic requests
     6.58 - * (make/lock/unlock a mutex, make/wait/signal a cond var, malloc and
     6.59 - * free, singletons, atomic fns, and transactions), plus requests to
     6.60 - * create a new processor and to dissipate a finished one.
     6.61 - * Each request carries the requesting processor; handling it may
     6.62 - * resume that processor, park it on a wait queue, or hand a lock or
     6.63 - * signal on to another processor and resume that one instead.
     6.64 - * The loop drains every request the processor sent before suspending,
     6.65 - * so all handlers below run sequentially inside the master loop.
     6.66 - */
    6.67 -void
    6.68 -VPThread__Request_Handler( VirtProcr *requestingPr, void *_semEnv )
    6.69 - { VPThdSemEnv *semEnv;
    6.70 -   VMSReqst    *req;
    6.71 - 
    6.72 -   semEnv = (VPThdSemEnv *)_semEnv;
    6.73 -
    6.74 -   req    = VMS__take_next_request_out_of( requestingPr );
    6.75 -
    6.76 -   while( req != NULL )
    6.77 -    {
    6.78 -      switch( req->reqType )
    6.79 -       { case semantic:     handleSemReq(         req, requestingPr, semEnv);
    6.80 -            break;
    6.81 -         case createReq:    handleCreate(         req, requestingPr, semEnv);
    6.82 -            break;
    6.83 -         case dissipate:    handleDissipate(           requestingPr, semEnv);
    6.84 -            break;
    6.85 -         case VMSSemantic:  VMS__handle_VMSSemReq(req, requestingPr, semEnv,
    6.86 -                                                (ResumePrFnPtr)&resume_procr);
    6.87 -            break;
    6.88 -         default:
    6.89 -            break;
    6.90 -       }
    6.91 -
    6.92 -      req = VMS__take_next_request_out_of( requestingPr );
    6.93 -    } //while( req != NULL )
    6.94 - }
    6.95 -
    6.96 -
    6.97 -void inline
    6.98 -handleSemReq( VMSReqst *req, VirtProcr *reqPr, VPThdSemEnv *semEnv )
    6.99 - { VPThdSemReq *semReq;
   6.100 -
   6.101 -   semReq = VMS__take_sem_reqst_from(req);
   6.102 -   if( semReq == NULL ) return;
   6.103 -   switch( semReq->reqType )
   6.104 -    {
   6.105 -      case make_mutex:     handleMakeMutex(  semReq, semEnv);
   6.106 -         break;
   6.107 -      case mutex_lock:     handleMutexLock(  semReq, semEnv);
   6.108 -         break;
   6.109 -      case mutex_unlock:   handleMutexUnlock(semReq, semEnv);
   6.110 -         break;
   6.111 -      case make_cond:      handleMakeCond(   semReq, semEnv);
   6.112 -         break;
   6.113 -      case cond_wait:      handleCondWait(   semReq, semEnv);
   6.114 -         break;
   6.115 -      case cond_signal:    handleCondSignal( semReq, semEnv);
   6.116 -         break;
   6.117 -      case malloc_req:    handleMalloc( semReq, reqPr, semEnv);
   6.118 -         break;
   6.119 -      case free_req:    handleFree( semReq, reqPr, semEnv);
   6.120 -         break;
   6.121 -      case singleton_fn_start:  handleStartFnSingleton(semReq, reqPr, semEnv);
   6.122 -         break;
   6.123 -      case singleton_fn_end:    handleEndFnSingleton(  semReq, reqPr, semEnv);
   6.124 -         break;
   6.125 -      case singleton_data_start:handleStartDataSingleton(semReq,reqPr,semEnv);
   6.126 -         break;
   6.127 -      case singleton_data_end:  handleEndDataSingleton(semReq, reqPr, semEnv);
   6.128 -         break;
   6.129 -      case atomic:    handleAtomic( semReq, reqPr, semEnv);
   6.130 -         break;
   6.131 -      case trans_start:    handleTransStart( semReq, reqPr, semEnv);
   6.132 -         break;
   6.133 -      case trans_end:    handleTransEnd( semReq, reqPr, semEnv);
   6.134 -         break;
   6.135 -    }
   6.136 - }
   6.137 -
   6.138 -//=========================== VMS Request Handlers ===========================
   6.139 -//
   6.140 -inline void
   6.141 -handleDissipate( VirtProcr *requestingPr, VPThdSemEnv *semEnv )
   6.142 - {
   6.143 -      //free any semantic data allocated to the virt procr
   6.144 -   VMS__free( requestingPr->semanticData );
   6.145 -
   6.146 -      //Now, call VMS to free_all AppVP state -- stack and so on
   6.147 -   VMS__dissipate_procr( requestingPr );
   6.148 -
   6.149 -   semEnv->numVirtPr -= 1;
   6.150 -   if( semEnv->numVirtPr == 0 )
   6.151 -    {    //no more work, so shutdown
   6.152 -      VMS__shutdown();
   6.153 -    }
   6.154 - }
   6.155 -
   6.156 -inline void
   6.157 -handleCreate( VMSReqst *req, VirtProcr *requestingPr, VPThdSemEnv *semEnv  )
   6.158 - { VPThdSemReq *semReq;
   6.159 -   VirtProcr    *newPr;
   6.160 -    
   6.161 -    //========================= MEASUREMENT STUFF ======================
   6.162 -    Meas_startCreate
   6.163 -    //==================================================================
   6.164 -     
   6.165 -   semReq = VMS__take_sem_reqst_from( req );
   6.166 -   
   6.167 -   newPr = VPThread__create_procr_helper( semReq->fnPtr, semReq->initData, 
   6.168 -                                          semEnv, semReq->coreToScheduleOnto);
   6.169 -
   6.170 -      //For VPThread, caller needs ptr to created processor returned to it
   6.171 -   requestingPr->dataRetFromReq = newPr;
   6.172 -
   6.173 -   resume_procr( newPr,        semEnv );
   6.174 -   resume_procr( requestingPr, semEnv );
   6.175 -
   6.176 -     //========================= MEASUREMENT STUFF ======================
   6.177 -         Meas_endCreate
   6.178 -     #ifdef MEAS__TIME_PLUGIN
   6.179 -     #ifdef MEAS__SUB_CREATE
   6.180 -         subIntervalFromHist( startStamp, endStamp,
   6.181 -                                        _VMSMasterEnv->reqHdlrHighTimeHist );
   6.182 -     #endif
   6.183 -     #endif
   6.184 -     //==================================================================
   6.185 - }
   6.186 -
   6.187 -
   6.188 -//=========================== Helper ==============================
   6.189 -void inline
   6.190 -resume_procr( VirtProcr *procr, VPThdSemEnv *semEnv )
   6.191 - {
   6.192 -   writePrivQ( procr, semEnv->readyVPQs[ procr->coreAnimatedBy] );
   6.193 - }
   6.194 -
   6.195 -//===========================================================================
   6.196 \ No newline at end of file
     7.1 --- a/VPThread_Request_Handlers.c	Tue Jul 26 16:37:26 2011 +0200
     7.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     7.3 @@ -1,445 +0,0 @@
     7.4 -/*
     7.5 - * Copyright 2010  OpenSourceCodeStewardshipFoundation
     7.6 - *
     7.7 - * Licensed under BSD
     7.8 - */
     7.9 -
    7.10 -#include <stdio.h>
    7.11 -#include <stdlib.h>
    7.12 -#include <malloc.h>
    7.13 -
    7.14 -#include "VMS/VMS.h"
    7.15 -#include "VMS/Queue_impl/PrivateQueue.h"
    7.16 -#include "VMS/Hash_impl/PrivateHash.h"
    7.17 -#include "VPThread.h"
    7.18 -#include "VMS/vmalloc.h"
    7.19 -
    7.20 -
    7.21 -
    7.22 -//===============================  Mutexes  =================================
    7.23 -/*The semantic request has a mutexIdx value, which acts as index into array.
    7.24 - */
    7.25 -inline void
    7.26 -handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
    7.27 - { VPThdMutex  *newMutex;
    7.28 -   VirtProcr   *requestingPr;
    7.29 -
    7.30 -   requestingPr = semReq->requestingPr;
    7.31 -   newMutex = VMS__malloc( sizeof(VPThdMutex)  );
     7.32 -   newMutex->waitingQueue = makeVMSPrivQ();
    7.33 -   newMutex->holderOfLock = NULL;
    7.34 -
    7.35 -      //The mutex struc contains an int that identifies it -- use that as
    7.36 -      // its index within the array of mutexes.  Add the new mutex to array.
    7.37 -   newMutex->mutexIdx = addToDynArray( newMutex, semEnv->mutexDynArrayInfo );
    7.38 -
    7.39 -      //Now communicate the mutex's identifying int back to requesting procr
    7.40 -   semReq->requestingPr->dataRetFromReq = (void*)newMutex->mutexIdx; //mutexIdx is 32 bit
    7.41 -
    7.42 -      //re-animate the requester
    7.43 -   resume_procr( requestingPr, semEnv );
    7.44 - }
    7.45 -
    7.46 -
    7.47 -inline void
    7.48 -handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
    7.49 - { VPThdMutex  *mutex;
    7.50 -   //===================  Deterministic Replay  ======================
    7.51 -   #ifdef RECORD_DETERMINISTIC_REPLAY
    7.52 -   
    7.53 -   #endif
    7.54 -   //=================================================================
    7.55 -         Meas_startMutexLock
    7.56 -      //lookup mutex struc, using mutexIdx as index
    7.57 -   mutex = semEnv->mutexDynArray[ semReq->mutexIdx ];
    7.58 -
    7.59 -      //see if mutex is free or not
    7.60 -   if( mutex->holderOfLock == NULL ) //none holding, give lock to requester
    7.61 -    {
    7.62 -      mutex->holderOfLock = semReq->requestingPr;
    7.63 -      
    7.64 -         //re-animate requester, now that it has the lock
    7.65 -      resume_procr( semReq->requestingPr, semEnv );
    7.66 -    }
    7.67 -   else //queue up requester to wait for release of lock
    7.68 -    {
    7.69 -      writePrivQ( semReq->requestingPr, mutex->waitingQueue );
    7.70 -    }
    7.71 -         Meas_endMutexLock
    7.72 - }
    7.73 -
    7.74 -/*
    7.75 - */
    7.76 -inline void
    7.77 -handleMutexUnlock( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
    7.78 - { VPThdMutex  *mutex;
    7.79 -
    7.80 -         Meas_startMutexUnlock
    7.81 -      //lookup mutex struc, using mutexIdx as index
    7.82 -   mutex = semEnv->mutexDynArray[ semReq->mutexIdx ];
    7.83 -
    7.84 -      //set new holder of mutex-lock to be next in queue (NULL if empty)
    7.85 -   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
    7.86 -
    7.87 -      //if have new non-NULL holder, re-animate it
    7.88 -   if( mutex->holderOfLock != NULL )
    7.89 -    {
    7.90 -      resume_procr( mutex->holderOfLock, semEnv );
    7.91 -    }
    7.92 -
    7.93 -      //re-animate the releaser of the lock
    7.94 -   resume_procr( semReq->requestingPr, semEnv );
    7.95 -         Meas_endMutexUnlock
    7.96 - }
    7.97 -
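The two mutex handlers above form a small hand-off protocol: lock either grants or queues, and unlock passes the lock directly to the next waiter. A minimal stand-alone sketch of that logic, using hypothetical stand-in types for VirtProcr and the VMS private queue (not the real library structures):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for VirtProcr and the VMS private queue, just to
 * illustrate the handler logic -- not the real VMS types. */
typedef struct { int id; } Procr;
typedef struct { Procr *items[8]; int head, tail; } Queue;

static void q_write( Queue *q, Procr *p ) { q->items[ q->tail++ % 8 ] = p; }
static Procr *q_read( Queue *q )
 { return (q->head == q->tail) ? NULL : q->items[ q->head++ % 8 ]; }

typedef struct { Procr *holderOfLock; Queue waitingQueue; } Mutex;

/* Mirrors handleMutexLock: grant the lock if free, else queue the requester.
 * Returns 1 if the requester would be resumed immediately. */
static int mutex_lock( Mutex *m, Procr *requester )
 {
   if( m->holderOfLock == NULL )
    { m->holderOfLock = requester;  return 1; }
   q_write( &m->waitingQueue, requester );
   return 0;
 }

/* Mirrors handleMutexUnlock: hand the lock to the next waiter (NULL if the
 * queue is empty, which leaves the mutex free). */
static Procr *mutex_unlock( Mutex *m )
 {
   m->holderOfLock = q_read( &m->waitingQueue );
   return m->holderOfLock;
 }
```

Because all of this runs inside the single master, no atomic instructions are needed: handing the lock straight to the next waiter cannot race with a concurrent lock request.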
    7.98 -//===========================  Condition Vars  ==============================
     7.99 -/*The semantic request has the cond-var value and mutex value, which are the
    7.100 - * indexes into the arrays.  Not worrying about having too many mutexes or
    7.101 - * cond vars created, so using arrays instead of a hash table, for speed.
   7.102 - */
   7.103 -
   7.104 -
    7.105 -/*Make-cond has to be called with the mutex that the cond is paired to.
    7.106 - * Don't have to implement it this way, but cond vars were confusing to learn
    7.107 - * until we deduced that each cond var owns a mutex that is used only for
    7.108 - * interacting with that cond var.  So, make this pairing explicit.
   7.109 - */
   7.110 -inline void
   7.111 -handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
   7.112 - { VPThdCond   *newCond;
   7.113 -   VirtProcr  *requestingPr;
   7.114 -
   7.115 -   requestingPr  = semReq->requestingPr;
   7.116 -   newCond = VMS__malloc( sizeof(VPThdCond) );
   7.117 -   newCond->partnerMutex = semEnv->mutexDynArray[ semReq->mutexIdx ];
   7.118 -
   7.119 -   newCond->waitingQueue = makeVMSPrivQ();
   7.120 -
   7.121 -      //The cond struc contains an int that identifies it -- use that as
   7.122 -      // its index within the array of conds.  Add the new cond to array.
   7.123 -   newCond->condIdx = addToDynArray( newCond, semEnv->condDynArrayInfo );
   7.124 -
   7.125 -      //Now communicate the cond's identifying int back to requesting procr
   7.126 -   semReq->requestingPr->dataRetFromReq = (void*)newCond->condIdx; //condIdx is 32 bit
   7.127 -   
   7.128 -      //re-animate the requester
   7.129 -   resume_procr( requestingPr, semEnv );
   7.130 - }
   7.131 -
   7.132 -
   7.133 -/*Mutex has already been paired to the cond var, so don't need to send the
    7.134 - * mutex, just the cond var.  Don't have to do this, but the Posix design
    7.135 - * here seems needlessly confusing  ; )
   7.136 - */
   7.137 -inline void
   7.138 -handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
   7.139 - { VPThdCond   *cond;
   7.140 -   VPThdMutex  *mutex;
   7.141 -
   7.142 -         Meas_startCondWait
   7.143 -      //get cond struc out of array of them that's in the sem env
   7.144 -   cond = semEnv->condDynArray[ semReq->condIdx ];
   7.145 -
   7.146 -      //add requester to queue of wait-ers
   7.147 -   writePrivQ( semReq->requestingPr, cond->waitingQueue );
   7.148 -    
   7.149 -      //unlock mutex -- can't reuse above handler 'cause not queuing releaser
   7.150 -   mutex = cond->partnerMutex;
   7.151 -   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
   7.152 -
   7.153 -   if( mutex->holderOfLock != NULL )
   7.154 -    {
   7.155 -      resume_procr( mutex->holderOfLock, semEnv );
   7.156 -    }
   7.157 -         Meas_endCondWait
   7.158 - }
   7.159 -
   7.160 -
    7.161 -/*Note that this has to be implemented such that the waiter is guaranteed
    7.162 - * to be the one that gets the lock
   7.163 - */
   7.164 -inline void
   7.165 -handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
   7.166 - { VPThdCond   *cond;
   7.167 -   VPThdMutex  *mutex;
   7.168 -   VirtProcr  *waitingPr;
   7.169 -
   7.170 -         Meas_startCondSignal
   7.171 -      //get cond struc out of array of them that's in the sem env
   7.172 -   cond = semEnv->condDynArray[ semReq->condIdx ];
   7.173 -   
   7.174 -      //take next waiting procr out of queue
   7.175 -   waitingPr = readPrivQ( cond->waitingQueue );
   7.176 -
   7.177 -      //transfer waiting procr to wait queue of mutex
   7.178 -      // mutex is guaranteed to be held by signalling procr, so no check
   7.179 -   mutex = cond->partnerMutex;
   7.180 -   pushPrivQ( waitingPr, mutex->waitingQueue ); //is first out when read
   7.181 -
   7.182 -      //re-animate the signalling procr
   7.183 -   resume_procr( semReq->requestingPr, semEnv );
   7.184 -         Meas_endCondSignal
   7.185 - }
   7.186 -
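The wait/signal pair above has one subtlety: signal moves the waiter to the *front* of the mutex's wait queue (via pushPrivQ), which is what guarantees the waiter gets the lock next. A stand-alone sketch of that hand-off, with hypothetical stand-in types rather than the real VMS queue and VirtProcr:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins, only to illustrate the wait/signal hand-off --
 * not the real VMS types. */
typedef struct { int id; } Procr;
typedef struct { Procr *items[8]; int head, tail; } Deque;

static void dq_write( Deque *d, Procr *p ) { d->items[ d->tail++ ] = p; }
static void dq_push( Deque *d, Procr *p )   /* insert at front of queue */
 { int i;
   for( i = d->tail; i > d->head; i-- ) d->items[i] = d->items[i-1];
   d->items[ d->head ] = p;  d->tail++;
 }
static Procr *dq_read( Deque *d )
 { return (d->head == d->tail) ? NULL : d->items[ d->head++ ]; }

typedef struct { Procr *holder; Deque waitQ; } Mutex;
typedef struct { Mutex *partnerMutex; Deque waitQ; } Cond;

/* Mirrors handleCondWait: queue the caller on the cond, release the mutex
 * (handing it to the next mutex waiter, NULL if none). */
static void cond_wait( Cond *c, Procr *caller )
 { dq_write( &c->waitQ, caller );
   c->partnerMutex->holder = dq_read( &c->partnerMutex->waitQ );
 }

/* Mirrors handleCondSignal: move the first cond waiter to the FRONT of the
 * mutex's wait queue, so it is guaranteed to get the lock next. */
static void cond_signal( Cond *c )
 { Procr *w = dq_read( &c->waitQ );
   if( w != NULL ) dq_push( &c->partnerMutex->waitQ, w );
 }
```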
   7.187 -
   7.188 -
   7.189 -//============================================================================
   7.190 -//
   7.191 -/*
   7.192 - */
   7.193 -void inline
   7.194 -handleMalloc(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv)
   7.195 - { void *ptr;
   7.196 -
   7.197 -         //========================= MEASUREMENT STUFF ======================
   7.198 -         #ifdef MEAS__TIME_PLUGIN
   7.199 -         int32 startStamp, endStamp;
   7.200 -         saveLowTimeStampCountInto( startStamp );
   7.201 -         #endif
   7.202 -         //==================================================================
   7.203 -   ptr = VMS__malloc( semReq->sizeToMalloc );
   7.204 -   requestingPr->dataRetFromReq = ptr;
   7.205 -   resume_procr( requestingPr, semEnv );
   7.206 -         //========================= MEASUREMENT STUFF ======================
   7.207 -         #ifdef MEAS__TIME_PLUGIN
   7.208 -         saveLowTimeStampCountInto( endStamp );
   7.209 -         subIntervalFromHist( startStamp, endStamp,
   7.210 -                                        _VMSMasterEnv->reqHdlrHighTimeHist );
   7.211 -         #endif
   7.212 -         //==================================================================
   7.213 -  }
   7.214 -
   7.215 -/*
   7.216 - */
   7.217 -void inline
   7.218 -handleFree( VPThdSemReq *semReq, VirtProcr *requestingPr, VPThdSemEnv *semEnv)
   7.219 - {
   7.220 -         //========================= MEASUREMENT STUFF ======================
   7.221 -         #ifdef MEAS__TIME_PLUGIN
   7.222 -         int32 startStamp, endStamp;
   7.223 -         saveLowTimeStampCountInto( startStamp );
   7.224 -         #endif
   7.225 -         //==================================================================
   7.226 -   VMS__free( semReq->ptrToFree );
   7.227 -   resume_procr( requestingPr, semEnv );
   7.228 -         //========================= MEASUREMENT STUFF ======================
   7.229 -         #ifdef MEAS__TIME_PLUGIN
   7.230 -         saveLowTimeStampCountInto( endStamp );
   7.231 -         subIntervalFromHist( startStamp, endStamp,
   7.232 -                                        _VMSMasterEnv->reqHdlrHighTimeHist );
   7.233 -         #endif
   7.234 -         //==================================================================
   7.235 - }
   7.236 -
   7.237 -
   7.238 -//===========================================================================
   7.239 -//
   7.240 -/*Uses ID as index into array of flags.  If flag already set, resumes from
   7.241 - * end-label.  Else, sets flag and resumes normally.
   7.242 - */
   7.243 -void inline
   7.244 -handleStartSingleton_helper( VPThdSingleton *singleton, VirtProcr *reqstingPr,
   7.245 -                             VPThdSemEnv    *semEnv )
   7.246 - {
   7.247 -   if( singleton->hasFinished )
   7.248 -    {    //the code that sets the flag to true first sets the end instr addr
   7.249 -      reqstingPr->dataRetFromReq = singleton->endInstrAddr;
   7.250 -      resume_procr( reqstingPr, semEnv );
   7.251 -      return;
   7.252 -    }
   7.253 -   else if( singleton->hasBeenStarted )
   7.254 -    {    //singleton is in-progress in a diff slave, so wait for it to finish
   7.255 -      writePrivQ(reqstingPr, singleton->waitQ );
   7.256 -      return;
   7.257 -    }
   7.258 -   else
   7.259 -    {    //hasn't been started, so this is the first attempt at the singleton
   7.260 -      singleton->hasBeenStarted = TRUE;
   7.261 -      reqstingPr->dataRetFromReq = 0x0;
   7.262 -      resume_procr( reqstingPr, semEnv );
   7.263 -      return;
   7.264 -    }
   7.265 - }
   7.266 -void inline
   7.267 -handleStartFnSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
   7.268 -                      VPThdSemEnv *semEnv )
   7.269 - { VPThdSingleton *singleton;
   7.270 -
   7.271 -   singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
   7.272 -   handleStartSingleton_helper( singleton, requestingPr, semEnv );
   7.273 - }
   7.274 -void inline
   7.275 -handleStartDataSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
   7.276 -                      VPThdSemEnv *semEnv )
   7.277 - { VPThdSingleton *singleton;
   7.278 -
   7.279 -   if( *(semReq->singletonPtrAddr) == NULL )
   7.280 -    { singleton                 = VMS__malloc( sizeof(VPThdSingleton) );
   7.281 -      singleton->waitQ          = makeVMSPrivQ();
   7.282 -      singleton->endInstrAddr   = 0x0;
   7.283 -      singleton->hasBeenStarted = FALSE;
   7.284 -      singleton->hasFinished    = FALSE;
   7.285 -      *(semReq->singletonPtrAddr)  = singleton;
   7.286 -    }
   7.287 -   else
   7.288 -      singleton = *(semReq->singletonPtrAddr);
   7.289 -   handleStartSingleton_helper( singleton, requestingPr, semEnv );
   7.290 - }
   7.291 -
   7.292 -
   7.293 -void inline
   7.294 -handleEndSingleton_helper( VPThdSingleton *singleton, VirtProcr *requestingPr,
   7.295 -                           VPThdSemEnv    *semEnv )
   7.296 - { PrivQueueStruc *waitQ;
   7.297 -   int32           numWaiting, i;
   7.298 -   VirtProcr      *resumingPr;
   7.299 -
   7.300 -   if( singleton->hasFinished )
   7.301 -    { //by definition, only one slave should ever be able to run end singleton
    7.302 -      // so if this is true, it's an error
   7.303 -      //VMS__throw_exception( "singleton code ran twice", requestingPr, NULL);
   7.304 -    }
   7.305 -
   7.306 -   singleton->hasFinished = TRUE;
   7.307 -   waitQ = singleton->waitQ;
   7.308 -   numWaiting = numInPrivQ( waitQ );
   7.309 -   for( i = 0; i < numWaiting; i++ )
   7.310 -    {    //they will resume inside start singleton, then jmp to end singleton
   7.311 -      resumingPr = readPrivQ( waitQ );
   7.312 -      resumingPr->dataRetFromReq = singleton->endInstrAddr;
   7.313 -      resume_procr( resumingPr, semEnv );
   7.314 -    }
   7.315 -
   7.316 -   resume_procr( requestingPr, semEnv );
   7.317 -
   7.318 - }
   7.319 -void inline
   7.320 -handleEndFnSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
   7.321 -                        VPThdSemEnv *semEnv )
   7.322 - {
   7.323 -   VPThdSingleton   *singleton;
   7.324 -
   7.325 -   singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
   7.326 -   handleEndSingleton_helper( singleton, requestingPr, semEnv );
   7.327 - }
   7.328 -void inline
   7.329 -handleEndDataSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
   7.330 -                        VPThdSemEnv *semEnv )
   7.331 - {
   7.332 -   VPThdSingleton   *singleton;
   7.333 -
   7.334 -   singleton = *(semReq->singletonPtrAddr);
   7.335 -   handleEndSingleton_helper( singleton, requestingPr, semEnv );
   7.336 - }
   7.337 -
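The singleton handlers above implement a three-state machine per singleton: not-started (first caller runs the body), started-but-unfinished (callers queue), and finished (callers jump past the body via endInstrAddr). A hypothetical miniature of that state machine, with resume modeled as a return value and the wait queue as a counter:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: the return value stands in for dataRetFromReq
 * (run the body, wait, or jump past it), and numWaiting stands in for
 * the real VMS private wait queue. */
typedef struct
 { int   hasBeenStarted, hasFinished;
   void *endInstrAddr;
   int   numWaiting;
 } Singleton;

enum { RUN_BODY = 0, QUEUED = 1, SKIP_TO_END = 2 };

/* Mirrors handleStartSingleton_helper's three cases. */
static int singleton_start( Singleton *s )
 {
   if( s->hasFinished )     return SKIP_TO_END;   /* would get endInstrAddr */
   if( s->hasBeenStarted )  { s->numWaiting++;  return QUEUED; }
   s->hasBeenStarted = 1;
   return RUN_BODY;                               /* first attempt runs it */
 }

/* Mirrors handleEndSingleton_helper: mark finished, record the jump-to-end
 * address, and report how many waiters get resumed with it. */
static int singleton_end( Singleton *s, void *endAddr )
 { int resumed = s->numWaiting;
   s->endInstrAddr = endAddr;
   s->hasFinished  = 1;
   s->numWaiting   = 0;
   return resumed;
 }
```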
   7.338 -
    7.339 -/*This executes the function in the masterVP: take the function
    7.340 - * pointer out of the request and call it, then resume the VP.
   7.341 - */
   7.342 -void inline
   7.343 -handleAtomic(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv)
   7.344 - {
   7.345 -   semReq->fnToExecInMaster( semReq->dataForFn );
   7.346 -   resume_procr( requestingPr, semEnv );
   7.347 - }
   7.348 -
   7.349 -/*First, it looks at the VP's semantic data, to see the highest transactionID
   7.350 - * that VP
   7.351 - * already has entered.  If the current ID is not larger, it throws an
   7.352 - * exception stating a bug in the code.
   7.353 - *Otherwise it puts the current ID
   7.354 - * there, and adds the ID to a linked list of IDs entered -- the list is
   7.355 - * used to check that exits are properly ordered.
    7.356 - *Next it uses transactionID as index into an array of transaction
   7.357 - * structures.
   7.358 - *If the "VP_currently_executing" field is non-null, then put requesting VP
   7.359 - * into queue in the struct.  (At some point a holder will request
   7.360 - * end-transaction, which will take this VP from the queue and resume it.)
   7.361 - *If NULL, then write requesting into the field and resume.
   7.362 - */
   7.363 -void inline
   7.364 -handleTransStart( VPThdSemReq *semReq, VirtProcr *requestingPr,
   7.365 -                  VPThdSemEnv *semEnv )
   7.366 - { VPThdSemData *semData;
   7.367 -   TransListElem *nextTransElem;
   7.368 -
   7.369 -      //check ordering of entering transactions is correct
   7.370 -   semData = requestingPr->semanticData;
   7.371 -   if( semData->highestTransEntered > semReq->transID )
   7.372 -    {    //throw VMS exception, which shuts down VMS.
   7.373 -      VMS__throw_exception( "transID smaller than prev", requestingPr, NULL);
   7.374 -    }
   7.375 -      //add this trans ID to the list of transactions entered -- check when
   7.376 -      // end a transaction
   7.377 -   semData->highestTransEntered = semReq->transID;
   7.378 -   nextTransElem = VMS__malloc( sizeof(TransListElem) );
   7.379 -   nextTransElem->transID = semReq->transID;
   7.380 -   nextTransElem->nextTrans = semData->lastTransEntered;
   7.381 -   semData->lastTransEntered = nextTransElem;
   7.382 -
   7.383 -      //get the structure for this transaction ID
    7.384 -   VPThdTrans *transStruc =
    7.385 -                     &(semEnv->transactionStrucs[ semReq->transID ]);
   7.386 -
   7.387 -   if( transStruc->VPCurrentlyExecuting == NULL )
   7.388 -    {
   7.389 -      transStruc->VPCurrentlyExecuting = requestingPr;
   7.390 -      resume_procr( requestingPr, semEnv );
   7.391 -    }
   7.392 -   else
    7.393 -    {    //note: might make future things cleaner to save the request with
    7.394 -         // the VP and add this trans ID to the linked list when it gets out
    7.395 -         // of the queue -- but don't need that for now.
   7.396 -      writePrivQ( requestingPr, transStruc->waitingVPQ );
   7.397 -    }
   7.398 - }
   7.399 -
   7.400 -
   7.401 -/*Use the trans ID to get the transaction structure from the array.
   7.402 - *Look at VP_currently_executing to be sure it's same as requesting VP.
   7.403 - * If different, throw an exception, stating there's a bug in the code.
   7.404 - *Next, take the first element off the list of entered transactions.
   7.405 - * Check to be sure the ending transaction is the same ID as the next on
   7.406 - * the list.  If not, incorrectly nested so throw an exception.
   7.407 - *
   7.408 - *Next, get from the queue in the structure.
   7.409 - *If it's empty, set VP_currently_executing field to NULL and resume
   7.410 - * requesting VP.
    7.411 - *If get something, set VP_currently_executing to the VP from the queue, then
   7.412 - * resume both.
   7.413 - */
   7.414 -void inline
   7.415 -handleTransEnd( VPThdSemReq *semReq, VirtProcr *requestingPr,
   7.416 -                VPThdSemEnv *semEnv)
   7.417 - { VPThdSemData    *semData;
   7.418 -   VirtProcr       *waitingPr;
   7.419 -   VPThdTrans      *transStruc;
   7.420 -   TransListElem   *lastTrans;
   7.421 -
   7.422 -   transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
   7.423 -
   7.424 -      //make sure transaction ended in same VP as started it.
   7.425 -   if( transStruc->VPCurrentlyExecuting != requestingPr )
   7.426 -    {
   7.427 -      VMS__throw_exception( "trans ended in diff VP", requestingPr, NULL );
   7.428 -    }
   7.429 -
   7.430 -      //make sure nesting is correct -- last ID entered should == this ID
   7.431 -   semData = requestingPr->semanticData;
   7.432 -   lastTrans = semData->lastTransEntered;
   7.433 -   if( lastTrans->transID != semReq->transID )
   7.434 -    {
   7.435 -      VMS__throw_exception( "trans incorrectly nested", requestingPr, NULL );
   7.436 -    }
   7.437 -
   7.438 -   semData->lastTransEntered = semData->lastTransEntered->nextTrans;
   7.439 -
   7.440 -
   7.441 -   waitingPr = readPrivQ( transStruc->waitingVPQ );
   7.442 -   transStruc->VPCurrentlyExecuting = waitingPr;
   7.443 -
   7.444 -   if( waitingPr != NULL )
   7.445 -      resume_procr( waitingPr, semEnv );
   7.446 -
   7.447 -   resume_procr( requestingPr, semEnv );
   7.448 - }
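The two transaction handlers above enforce two invariants: starts must carry increasing IDs, and each end must match the most recently entered unclosed transaction (the LIFO discipline kept in the TransListElem list). A hypothetical sketch of just that bookkeeping, using a fixed array in place of the linked list and a boolean return in place of VMS__throw_exception:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the per-VP semantic data: a stack of entered
 * transaction IDs plus the highest ID seen so far. */
typedef struct
 { int ids[16];
   int depth;
   int highestEntered;
 } TransTracker;

/* Mirrors the ordering check in handleTransStart: the new ID must not be
 * smaller than the highest already entered. */
static int trans_start( TransTracker *t, int transID )
 {
   if( t->highestEntered > transID )  return 0;  /* would throw: bad order */
   t->highestEntered    = transID;
   t->ids[ t->depth++ ] = transID;
   return 1;
 }

/* Mirrors the nesting check in handleTransEnd: the ending ID must equal the
 * last unclosed ID on the stack. */
static int trans_end( TransTracker *t, int transID )
 {
   if( t->depth == 0 || t->ids[ t->depth - 1 ] != transID )
      return 0;                                  /* would throw: bad nesting */
   t->depth--;
   return 1;
 }
```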
     8.1 --- a/VPThread_Request_Handlers.h	Tue Jul 26 16:37:26 2011 +0200
     8.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     8.3 @@ -1,57 +0,0 @@
     8.4 -/*
     8.5 - *  Copyright 2009 OpenSourceStewardshipFoundation.org
     8.6 - *  Licensed under GNU General Public License version 2
     8.7 - *
     8.8 - * Author: seanhalle@yahoo.com
     8.9 - *
    8.10 - */
    8.11 -
    8.12 -#ifndef _VPThread_REQ_H
    8.13 -#define	_VPThread_REQ_H
    8.14 -
    8.15 -#include "VPThread.h"
    8.16 -
    8.17 -/*This header defines everything specific to the VPThread semantic plug-in
    8.18 - */
    8.19 -
    8.20 -inline void
    8.21 -handleMakeMutex(  VPThdSemReq *semReq, VPThdSemEnv *semEnv);
    8.22 -inline void
    8.23 -handleMutexLock(  VPThdSemReq *semReq, VPThdSemEnv *semEnv);
    8.24 -inline void
    8.25 -handleMutexUnlock(VPThdSemReq *semReq, VPThdSemEnv *semEnv);
    8.26 -inline void
    8.27 -handleMakeCond(   VPThdSemReq *semReq, VPThdSemEnv *semEnv);
    8.28 -inline void
    8.29 -handleCondWait(   VPThdSemReq *semReq, VPThdSemEnv *semEnv);
    8.30 -inline void
    8.31 -handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
    8.32 -void inline
    8.33 -handleMalloc(VPThdSemReq *semReq, VirtProcr *requestingPr,VPThdSemEnv *semEnv);
    8.34 -void inline
    8.35 -handleFree( VPThdSemReq *semReq, VirtProcr *requestingPr, VPThdSemEnv *semEnv);
    8.36 -inline void
    8.37 -handleStartFnSingleton( VPThdSemReq *semReq, VirtProcr *reqstingPr,
    8.38 -                      VPThdSemEnv *semEnv );
    8.39 -inline void
    8.40 -handleEndFnSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
    8.41 -                    VPThdSemEnv *semEnv );
    8.42 -inline void
    8.43 -handleStartDataSingleton( VPThdSemReq *semReq, VirtProcr *reqstingPr,
    8.44 -                      VPThdSemEnv *semEnv );
    8.45 -inline void
    8.46 -handleEndDataSingleton( VPThdSemReq *semReq, VirtProcr *requestingPr,
    8.47 -                    VPThdSemEnv *semEnv );
    8.48 -void inline
    8.49 -handleAtomic( VPThdSemReq *semReq, VirtProcr *requestingPr,
    8.50 -              VPThdSemEnv *semEnv);
    8.51 -void inline
    8.52 -handleTransStart( VPThdSemReq *semReq, VirtProcr *requestingPr,
    8.53 -                  VPThdSemEnv *semEnv );
    8.54 -void inline
    8.55 -handleTransEnd( VPThdSemReq *semReq, VirtProcr *requestingPr,
    8.56 -                VPThdSemEnv *semEnv);
    8.57 -
    8.58 -
    8.59 -#endif	/* _VPThread_REQ_H */
    8.60 -
     9.1 --- a/VPThread_helper.c	Tue Jul 26 16:37:26 2011 +0200
     9.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     9.3 @@ -1,48 +0,0 @@
     9.4 -
     9.5 -#include <stddef.h>
     9.6 -
     9.7 -#include "VMS/VMS.h"
     9.8 -#include "VPThread.h"
     9.9 -
    9.10 -/*Re-use this in the entry-point fn
    9.11 - */
    9.12 -inline VirtProcr *
    9.13 -VPThread__create_procr_helper( VirtProcrFnPtr fnPtr, void *initData,
    9.14 -                          VPThdSemEnv *semEnv,    int32 coreToScheduleOnto )
    9.15 - { VirtProcr      *newPr;
    9.16 -   VPThdSemData   *semData;
    9.17 -
    9.18 -      //This is running in master, so use internal version
    9.19 -   newPr = VMS__create_procr( fnPtr, initData );
    9.20 -
    9.21 -   semEnv->numVirtPr += 1;
    9.22 -
    9.23 -   semData = VMS__malloc( sizeof(VPThdSemData) );
    9.24 -   semData->highestTransEntered = -1;
    9.25 -   semData->lastTransEntered    = NULL;
    9.26 -
    9.27 -   newPr->semanticData = semData;
    9.28 -
    9.29 -   //=================== Assign new processor to a core =====================
    9.30 -   #ifdef SEQUENTIAL
    9.31 -   newPr->coreAnimatedBy = 0;
    9.32 -
    9.33 -   #else
    9.34 -
    9.35 -   if(coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES )
    9.36 -    {    //out-of-range, so round-robin assignment
    9.37 -      newPr->coreAnimatedBy = semEnv->nextCoreToGetNewPr;
    9.38 -
    9.39 -      if( semEnv->nextCoreToGetNewPr >= NUM_CORES - 1 )
    9.40 -          semEnv->nextCoreToGetNewPr  = 0;
    9.41 -      else
    9.42 -          semEnv->nextCoreToGetNewPr += 1;
    9.43 -    }
    9.44 -   else //core num in-range, so use it
    9.45 -    { newPr->coreAnimatedBy = coreToScheduleOnto;
    9.46 -    }
    9.47 -   #endif
    9.48 -   //========================================================================
    9.49 -
    9.50 -   return newPr;
    9.51 - }
    9.52 \ No newline at end of file
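The core-assignment policy in the helper above honors an in-range request and otherwise falls back to round-robin. A stand-alone sketch of just that policy, with NUM_CORES assumed as 4 for illustration and a file-scope variable standing in for the semEnv field:

```c
#include <assert.h>

/* Hypothetical sketch of the assignment policy; NUM_CORES = 4 is an
 * assumption for illustration only. */
#define NUM_CORES 4

static int nextCoreToGetNewPr = 0;   /* stand-in for the semEnv field */

/* Mirrors the #else branch of the helper: honor an in-range request,
 * otherwise hand out cores round-robin. */
static int assign_core( int coreToScheduleOnto )
 {
   if( coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES )
    { int core = nextCoreToGetNewPr;
      if( nextCoreToGetNewPr >= NUM_CORES - 1 ) nextCoreToGetNewPr = 0;
      else                                      nextCoreToGetNewPr += 1;
      return core;
    }
   return coreToScheduleOnto;        /* in-range: use the requested core */
 }
```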
    10.1 --- a/VPThread_helper.h	Tue Jul 26 16:37:26 2011 +0200
    10.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
    10.3 @@ -1,19 +0,0 @@
    10.4 -/* 
    10.5 - * File:   VPThread_helper.h
    10.6 - * Author: msach
    10.7 - *
    10.8 - * Created on June 10, 2011, 12:20 PM
    10.9 - */
   10.10 -
   10.11 -#include "VMS/VMS.h"
   10.12 -#include "VPThread.h"
   10.13 -
   10.14 -#ifndef VPTHREAD_HELPER_H
   10.15 -#define	VPTHREAD_HELPER_H
   10.16 -
   10.17 -inline VirtProcr *
   10.18 -VPThread__create_procr_helper( VirtProcrFnPtr fnPtr, void *initData,
   10.19 -                          VPThdSemEnv *semEnv,    int32 coreToScheduleOnto );
   10.20 -
   10.21 -#endif	/* VPTHREAD_HELPER_H */
   10.22 -
    11.1 --- a/VPThread_lib.c	Tue Jul 26 16:37:26 2011 +0200
    11.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
    11.3 @@ -1,626 +0,0 @@
    11.4 -/*
    11.5 - * Copyright 2010  OpenSourceCodeStewardshipFoundation
    11.6 - *
    11.7 - * Licensed under BSD
    11.8 - */
    11.9 -
   11.10 -#include <stdio.h>
   11.11 -#include <stdlib.h>
   11.12 -#include <malloc.h>
   11.13 -
   11.14 -#include "VMS/VMS.h"
   11.15 -#include "VPThread.h"
   11.16 -#include "VPThread_helper.h"
   11.17 -#include "VMS/Queue_impl/PrivateQueue.h"
   11.18 -#include "VMS/Hash_impl/PrivateHash.h"
   11.19 -
   11.20 -
   11.21 -//==========================================================================
   11.22 -
   11.23 -void
   11.24 -VPThread__init();
   11.25 -
   11.26 -void
   11.27 -VPThread__init_Seq();
   11.28 -
   11.29 -void
   11.30 -VPThread__init_Helper();
   11.31 -
   11.32 -
   11.33 -//===========================================================================
   11.34 -
   11.35 -
   11.36 -/*These are the library functions *called in the application*
   11.37 - * 
   11.38 - *There's a pattern for the outside sequential code to interact with the
   11.39 - * VMS_HW code.
    11.40 - *The VMS_HW system is inside a boundary: every VPThread system is in its
   11.41 - * own directory that contains the functions for each of the processor types.
   11.42 - * One of the processor types is the "seed" processor that starts the
   11.43 - * cascade of creating all the processors that do the work.
   11.44 - *So, in the directory is a file called "EntryPoint.c" that contains the
   11.45 - * function, named appropriately to the work performed, that the outside
   11.46 - * sequential code calls.  This function follows a pattern:
   11.47 - *1) it calls VPThread__init()
   11.48 - *2) it creates the initial data for the seed processor, which is passed
   11.49 - *    in to the function
   11.50 - *3) it creates the seed VPThread processor, with the data to start it with.
   11.51 - *4) it calls startVPThreadThenWaitUntilWorkDone
   11.52 - *5) it gets the returnValue from the transfer struc and returns that
   11.53 - *    from the function
   11.54 - *
   11.55 - *For now, a new VPThread system has to be created via VPThread__init every
   11.56 - * time an entry point function is called -- later, might add letting the
   11.57 - * VPThread system be created once, and let all the entry points just reuse
    11.58 - * it -- want to be as simple as possible now, and learn from use what makes
    11.59 - * sense for later.
   11.60 - */
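The five-step entry-point pattern above can be sketched stand-alone. Everything here is hypothetical: the real VPThread__* calls are replaced by a trivial stub that just runs the seed function directly, so only the shape of the pattern is shown, not the real library API.

```c
#include <assert.h>

/* Stub standing in for init + create-seed + start-then-wait (steps 1, 3, 4);
 * the real version spins up VMS and runs until all processors dissipate. */
typedef void (*SeedFnPtr)( void *initData );
static void run_seed_then_wait( SeedFnPtr fn, void *initData )
 { fn( initData ); }

/* Hypothetical transfer struct and seed function for an "add" workload. */
typedef struct { int a, b, result; } AddInput;

static void addSeedFn( void *data )
 { AddInput *in = (AddInput *)data;
   in->result = in->a + in->b;      /* seed writes result by side-effect */
 }

/* The entry-point function the outside sequential code would call. */
static int Add__entry( int a, int b )
 { AddInput input = { a, b, 0 };              /* 2) initial data for seed  */
   run_seed_then_wait( &addSeedFn, &input );  /* 1,3,4) init, seed, wait   */
   return input.result;                       /* 5) result from the struct */
 }
```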
   11.61 -
   11.62 -
   11.63 -
   11.64 -//===========================================================================
   11.65 -
   11.66 -/*This is the "border crossing" function -- the thing that crosses from the
   11.67 - * outside world, into the VMS_HW world.  It initializes and starts up the
   11.68 - * VMS system, then creates one processor from the specified function and
    11.69 - * puts it into the readyQ.  From that point, that one function is responsible
   11.70 - * for creating all the other processors, that then create others, and so
   11.71 - * forth.
   11.72 - *When all the processors, including the seed, have dissipated, then this
   11.73 - * function returns.  The results will have been written by side-effect via
   11.74 - * pointers read from, or written into initData.
   11.75 - *
   11.76 - *NOTE: no Threads should exist in the outside program that might touch
   11.77 - * any of the data reachable from initData passed in to here
   11.78 - */
   11.79 -void
   11.80 -VPThread__create_seed_procr_and_do_work( VirtProcrFnPtr fnPtr, void *initData )
   11.81 - { VPThdSemEnv *semEnv;
   11.82 -   VirtProcr *seedPr;
   11.83 -
   11.84 -   #ifdef SEQUENTIAL
   11.85 -   VPThread__init_Seq();  //debug sequential exe
   11.86 -   #else
   11.87 -   VPThread__init();      //normal multi-thd
   11.88 -   #endif
   11.89 -   semEnv = _VMSMasterEnv->semanticEnv;
   11.90 -
   11.91 -      //VPThread starts with one processor, which is put into initial environ,
   11.92 -      // and which then calls create() to create more, thereby expanding work
   11.93 -   seedPr = VPThread__create_procr_helper( fnPtr, initData, semEnv, -1 );
   11.94 -
   11.95 -   resume_procr( seedPr, semEnv );
   11.96 -
   11.97 -   #ifdef SEQUENTIAL
   11.98 -   VMS__start_the_work_then_wait_until_done_Seq();  //debug sequential exe
   11.99 -   #else
  11.100 -   VMS__start_the_work_then_wait_until_done();      //normal multi-thd
  11.101 -   #endif
  11.102 -
  11.103 -   VPThread__cleanup_after_shutdown();
  11.104 - }
  11.105 -
  11.106 -
  11.107 -inline int32
  11.108 -VPThread__giveMinWorkUnitCycles( float32 percentOverhead )
  11.109 - {
  11.110 -   return MIN_WORK_UNIT_CYCLES;
  11.111 - }
  11.112 -
  11.113 -inline int32
  11.114 -VPThread__giveIdealNumWorkUnits()
  11.115 - {
  11.116 -   return NUM_SCHED_SLOTS * NUM_CORES;
  11.117 - }
  11.118 -
  11.119 -inline int32
  11.120 -VPThread__give_number_of_cores_to_schedule_onto()
  11.121 - {
  11.122 -   return NUM_CORES;
  11.123 - }
  11.124 -
  11.125 -/*For now, use TSC -- later, make these two macros with assembly that first
  11.126 - * saves jump point, and second jumps back several times to get reliable time
  11.127 - */
  11.128 -inline void
  11.129 -VPThread__start_primitive()
  11.130 - { saveLowTimeStampCountInto( ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))->
  11.131 -                              primitiveStartTime );
  11.132 - }
  11.133 -
  11.134 -/*Just quick and dirty for now -- make reliable later
  11.135 - * will want this to jump back several times -- to be sure cache is warm
  11.136 - * because don't want comm time included in calc-time measurement -- and
  11.137 - * also to throw out any "weird" values due to OS interrupt or TSC rollover
  11.138 - */
  11.139 -inline int32
  11.140 -VPThread__end_primitive_and_give_cycles()
  11.141 - { int32 endTime, startTime;
  11.142 -   //TODO: fix by repeating time-measurement
  11.143 -   saveLowTimeStampCountInto( endTime );
  11.144 -   startTime=((VPThdSemEnv*)(_VMSMasterEnv->semanticEnv))->primitiveStartTime;
  11.145 -   return (endTime - startTime);
  11.146 - }
  11.147 -
  11.148 -//===========================================================================
  11.149 -//
  11.150 -/*Initializes all the data-structures for a VPThread system -- but doesn't
  11.151 - * start it running yet!
  11.152 - *
  11.153 - * 
  11.154 - *This sets up the semantic layer over the VMS system
  11.155 - *
  11.156 - *First, calls VMS_Setup, then creates own environment, making it ready
  11.157 - * for creating the seed processor and then starting the work.
  11.158 - */
  11.159 -void
  11.160 -VPThread__init()
  11.161 - {
  11.162 -   VMS__init();
  11.163 -   //masterEnv, a global var, now is partially set up by init_VMS
  11.164 -   
  11.165 -   //Moved here from VMS.c because this is not parallel construct independent
  11.166 -   MakeTheMeasHists();
  11.167 -
  11.168 -   VPThread__init_Helper();
  11.169 - }
  11.170 -
  11.171 -#ifdef SEQUENTIAL
  11.172 -void
  11.173 -VPThread__init_Seq()
  11.174 - {
  11.175 -   VMS__init_Seq();
  11.176 -   flushRegisters();
  11.177 -      //masterEnv, a global var, now is partially set up by init_VMS
  11.178 -
  11.179 -   VPThread__init_Helper();
  11.180 - }
  11.181 -#endif
  11.182 -
  11.183 -void
  11.184 -VPThread__init_Helper()
  11.185 - { VPThdSemEnv       *semanticEnv;
  11.186 -   PrivQueueStruc **readyVPQs;
  11.187 -   int              coreIdx, i;
  11.188 - 
  11.189 -      //Hook up the semantic layer's plug-ins to the Master virt procr
  11.190 -   _VMSMasterEnv->requestHandler = &VPThread__Request_Handler;
  11.191 -   _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr;
  11.192 -
  11.193 -      //create the semantic layer's environment (all its data) and add to
  11.194 -      // the master environment
  11.195 -   semanticEnv = VMS__malloc( sizeof( VPThdSemEnv ) );
  11.196 -   _VMSMasterEnv->semanticEnv = semanticEnv;
  11.197 -
  11.198 -      //create the ready queue
  11.199 -   readyVPQs = VMS__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
  11.200 -
  11.201 -   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
  11.202 -    {
  11.203 -      readyVPQs[ coreIdx ] = makeVMSPrivQ();
  11.204 -    }
  11.205 -   
  11.206 -   semanticEnv->readyVPQs          = readyVPQs;
  11.207 -   
  11.208 -   semanticEnv->numVirtPr          = 0;
  11.209 -   semanticEnv->nextCoreToGetNewPr = 0;
  11.210 -
  11.211 -   semanticEnv->mutexDynArrayInfo  =
  11.212 -      makePrivDynArrayOfSize( (void*)&(semanticEnv->mutexDynArray), INIT_NUM_MUTEX );
  11.213 -
  11.214 -   semanticEnv->condDynArrayInfo   =
  11.215 -      makePrivDynArrayOfSize( (void*)&(semanticEnv->condDynArray),  INIT_NUM_COND );
  11.216 -   
  11.217 -   //TODO: bug -- turn these arrays into dyn arrays to eliminate limit
  11.218 -   //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( );
  11.219 -   //semanticEnv->transactionStrucs = makeDynArrayInfo( );
  11.220 -   for( i = 0; i < NUM_STRUCS_IN_SEM_ENV; i++ )
  11.221 -    {
  11.222 -      semanticEnv->fnSingletons[i].endInstrAddr      = NULL;
  11.223 -      semanticEnv->fnSingletons[i].hasBeenStarted    = FALSE;
  11.224 -      semanticEnv->fnSingletons[i].hasFinished       = FALSE;
  11.225 -      semanticEnv->fnSingletons[i].waitQ             = makeVMSPrivQ();
  11.226 -      semanticEnv->transactionStrucs[i].waitingVPQ   = makeVMSPrivQ();
  11.227 -    }   
  11.228 - }
  11.229 -
  11.230 -
  11.231 -/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown
  11.232 - */
  11.233 -void
  11.234 -VPThread__cleanup_after_shutdown()
  11.235 - { /*VPThdSemEnv *semEnv;
  11.236 -   int32           coreIdx,     idx,   highestIdx;
  11.237 -   VPThdMutex      **mutexArray, *mutex;
  11.238 -   VPThdCond       **condArray, *cond; */
  11.239 - 
  11.240 - /* It's all allocated inside VMS's big chunk -- that's about to be freed, so
  11.241 - *  nothing to do here
  11.242 -  semEnv = _VMSMasterEnv->semanticEnv;
  11.243 -
  11.244 -//TODO: double check that all sem env locations freed
  11.245 -
  11.246 -   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
  11.247 -    {
  11.248 -      free( semEnv->readyVPQs[coreIdx]->startOfData );
  11.249 -      free( semEnv->readyVPQs[coreIdx] );
  11.250 -    }
  11.251 -   
  11.252 -   free( semEnv->readyVPQs );
  11.253 -
  11.254 -   
  11.255 -   //==== Free mutexes and mutex array ====
  11.256 -   mutexArray = semEnv->mutexDynArray->array;
  11.257 -   highestIdx = semEnv->mutexDynArray->highestIdxInArray;
  11.258 -   for( idx=0; idx < highestIdx; idx++ )
  11.259 -    { mutex = mutexArray[ idx ];
  11.260 -      if( mutex == NULL ) continue;
  11.261 -      free( mutex );
  11.262 -    }
  11.263 -   free( mutexArray );
  11.264 -   free( semEnv->mutexDynArray );
  11.265 -   //======================================
  11.266 -   
  11.267 -
  11.268 -   //==== Free conds and cond array ====
  11.269 -   condArray  = semEnv->condDynArray->array;
  11.270 -   highestIdx = semEnv->condDynArray->highestIdxInArray;
  11.271 -   for( idx=0; idx < highestIdx; idx++ )
  11.272 -    { cond = condArray[ idx ];
  11.273 -      if( cond == NULL ) continue;
  11.274 -      free( cond );
  11.275 -    }
  11.276 -   free( condArray );
  11.277 -   free( semEnv->condDynArray );
  11.278 -   //===================================
  11.279 -
  11.280 -   
  11.281 -   free( _VMSMasterEnv->semanticEnv );
  11.282 -  */
  11.283 -   VMS__cleanup_at_end_of_shutdown();
  11.284 - }
  11.285 -
  11.286 -
  11.287 -//===========================================================================
  11.288 -
  11.289 -/*
  11.290 - */
  11.291 -inline VirtProcr *
  11.292 -VPThread__create_thread( VirtProcrFnPtr fnPtr, void *initData,
  11.293 -                          VirtProcr *creatingPr )
  11.294 - { VPThdSemReq  reqData;
  11.295 -
  11.296 -      //the semantic request data is on the stack and disappears when this
  11.297 -      // call returns -- it's guaranteed to remain in the VP's stack for as
  11.298 -      // long as the VP is suspended.
  11.299 -   reqData.reqType            = 0; //type is known because this is a VMS create req
  11.300 -   reqData.coreToScheduleOnto = -1; //means round-robin schedule
  11.301 -   reqData.fnPtr              = fnPtr;
  11.302 -   reqData.initData           = initData;
  11.303 -   reqData.requestingPr       = creatingPr;
  11.304 -
  11.305 -   VMS__send_create_procr_req( &reqData, creatingPr );
  11.306 -
  11.307 -   return creatingPr->dataRetFromReq;
  11.308 - }
  11.309 -
  11.310 -inline VirtProcr *
  11.311 -VPThread__create_thread_with_affinity( VirtProcrFnPtr fnPtr, void *initData,
  11.312 -                           VirtProcr *creatingPr,  int32  coreToScheduleOnto )
  11.313 - { VPThdSemReq  reqData;
  11.314 -
  11.315 -      //the semantic request data is on the stack and disappears when this
  11.316 -      // call returns -- it's guaranteed to remain in the VP's stack for as
  11.317 -      // long as the VP is suspended.
  11.318 -   reqData.reqType            = 0; //type is known because this is a VMS create req
  11.319 -   reqData.coreToScheduleOnto = coreToScheduleOnto;
  11.320 -   reqData.fnPtr              = fnPtr;
  11.321 -   reqData.initData           = initData;
  11.322 -   reqData.requestingPr       = creatingPr;
  11.323 -
  11.324 -   VMS__send_create_procr_req( &reqData, creatingPr );
  11.325 -   return creatingPr->dataRetFromReq;
  11.325 - }
  11.326 -
  11.327 -inline void
  11.328 -VPThread__dissipate_thread( VirtProcr *procrToDissipate )
  11.329 - {
  11.330 -   VMS__send_dissipate_req( procrToDissipate );
  11.331 - }
  11.332 -
  11.333 -
  11.334 -//===========================================================================
  11.335 -
  11.336 -void *
  11.337 -VPThread__malloc( size_t sizeToMalloc, VirtProcr *animPr )
  11.338 - { VPThdSemReq  reqData;
  11.339 -
  11.340 -   reqData.reqType      = malloc_req;
  11.341 -   reqData.sizeToMalloc = sizeToMalloc;
  11.342 -   reqData.requestingPr = animPr;
  11.343 -
  11.344 -   VMS__send_sem_request( &reqData, animPr );
  11.345 -
  11.346 -   return animPr->dataRetFromReq;
  11.347 - }
  11.348 -
  11.349 -
  11.350 -/*Sends request to Master, which does the work of freeing
  11.351 - */
  11.352 -void
  11.353 -VPThread__free( void *ptrToFree, VirtProcr *animPr )
  11.354 - { VPThdSemReq  reqData;
  11.355 -
  11.356 -   reqData.reqType      = free_req;
  11.357 -   reqData.ptrToFree    = ptrToFree;
  11.358 -   reqData.requestingPr = animPr;
  11.359 -
  11.360 -   VMS__send_sem_request( &reqData, animPr );
  11.361 - }
  11.362 -
  11.363 -
  11.364 -//===========================================================================
  11.365 -
  11.366 -inline void
  11.367 -VPThread__set_globals_to( void *globals )
  11.368 - {
  11.369 -   ((VPThdSemEnv *)
  11.370 -    (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals;
  11.371 - }
  11.372 -
  11.373 -inline void *
  11.374 -VPThread__give_globals()
  11.375 - {
  11.376 -   return((VPThdSemEnv *) (_VMSMasterEnv->semanticEnv))->applicationGlobals;
  11.377 - }
  11.378 -
  11.379 -
  11.380 -//===========================================================================
  11.381 -
  11.382 -inline int32
  11.383 -VPThread__make_mutex( VirtProcr *animPr )
  11.384 - { VPThdSemReq  reqData;
  11.385 -
  11.386 -   reqData.reqType      = make_mutex;
  11.387 -   reqData.requestingPr = animPr;
  11.388 -
  11.389 -   VMS__send_sem_request( &reqData, animPr );
  11.390 -
  11.391 -   return (int32)animPr->dataRetFromReq; //mutex ID is 32 bits wide
  11.392 - }
  11.393 -
  11.394 -inline void
  11.395 -VPThread__mutex_lock( int32 mutexIdx, VirtProcr *acquiringPr )
  11.396 - { VPThdSemReq  reqData;
  11.397 -
  11.398 -   reqData.reqType      = mutex_lock;
  11.399 -   reqData.mutexIdx     = mutexIdx;
  11.400 -   reqData.requestingPr = acquiringPr;
  11.401 -
  11.402 -   VMS__send_sem_request( &reqData, acquiringPr );
  11.403 - }
  11.404 -
  11.405 -inline void
  11.406 -VPThread__mutex_unlock( int32 mutexIdx, VirtProcr *releasingPr )
  11.407 - { VPThdSemReq  reqData;
  11.408 -
  11.409 -   reqData.reqType      = mutex_unlock;
  11.410 -   reqData.mutexIdx     = mutexIdx;
  11.411 -   reqData.requestingPr = releasingPr;
  11.412 -
  11.413 -   VMS__send_sem_request( &reqData, releasingPr );
  11.414 - }
  11.415 -
  11.416 -
  11.417 -//=======================
  11.418 -inline int32
  11.419 -VPThread__make_cond( int32 ownedMutexIdx, VirtProcr *animPr)
  11.420 - { VPThdSemReq  reqData;
  11.421 -
  11.422 -   reqData.reqType      = make_cond;
  11.423 -   reqData.mutexIdx     = ownedMutexIdx;
  11.424 -   reqData.requestingPr = animPr;
  11.425 -
  11.426 -   VMS__send_sem_request( &reqData, animPr );
  11.427 -
  11.428 -   return (int32)animPr->dataRetFromReq; //condIdx is 32 bits wide
  11.429 - }
  11.430 -
  11.431 -inline void
  11.432 -VPThread__cond_wait( int32 condIdx, VirtProcr *waitingPr)
  11.433 - { VPThdSemReq  reqData;
  11.434 -
  11.435 -   reqData.reqType      = cond_wait;
  11.436 -   reqData.condIdx      = condIdx;
  11.437 -   reqData.requestingPr = waitingPr;
  11.438 -
  11.439 -   VMS__send_sem_request( &reqData, waitingPr );
  11.440 - }
  11.441 -
  11.442 -inline void *
  11.443 -VPThread__cond_signal( int32 condIdx, VirtProcr *signallingPr )
  11.444 - { VPThdSemReq  reqData;
  11.445 -
  11.446 -   reqData.reqType      = cond_signal;
  11.447 -   reqData.condIdx      = condIdx;
  11.448 -   reqData.requestingPr = signallingPr;
  11.449 -
  11.450 -   VMS__send_sem_request( &reqData, signallingPr );
  11.451 - }
  11.452 -
  11.453 -
  11.454 -//===========================================================================
  11.455 -//
  11.456 -/*A function singleton is a function whose body executes exactly once, on a
  11.457 - * single core, no matter how many times the function is called, no matter
  11.458 - * how many cores call it, and no matter the timing of those calls.
  11.459 - *
  11.460 - *A data singleton is a ticket attached to data.  That ticket can be used
  11.461 - * to get the data through the function exactly once, no matter how many
  11.462 - * times the data is given to the function, and no matter the timing of
  11.463 - * trying to get the data through from different cores.
  11.464 - */
  11.465 -
  11.466 -/*asm function declarations*/
  11.467 -void asm_save_ret_to_singleton(VPThdSingleton *singletonPtrAddr);
  11.468 -void asm_write_ret_from_singleton(VPThdSingleton *singletonPtrAddr);
  11.469 -
  11.470 -/*Fn singleton uses ID as index into array of singleton structs held in the
  11.471 - * semantic environment.
  11.472 - */
  11.473 -void
  11.474 -VPThread__start_fn_singleton( int32 singletonID,   VirtProcr *animPr )
  11.475 - {
  11.476 -   VPThdSemReq  reqData;
  11.477 -
  11.478 -      //
  11.479 -   reqData.reqType     = singleton_fn_start;
  11.480 -   reqData.singletonID = singletonID;
  11.481 -
  11.482 -   VMS__send_sem_request( &reqData, animPr );
  11.483 -   if( animPr->dataRetFromReq ) //will be 0 or addr of label in end singleton
  11.484 -    {
  11.485 -       VPThdSemEnv *semEnv = VMS__give_sem_env_for( animPr );
  11.486 -       asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
  11.487 -    }
  11.488 - }
  11.489 -
  11.490 -/*Data singleton hands addr of loc holding a pointer to a singleton struct.
  11.491 - * The start_data_singleton makes the structure and puts its addr into the
  11.492 - * location.
  11.493 - */
  11.494 -void
  11.495 -VPThread__start_data_singleton( VPThdSingleton **singletonAddr,  VirtProcr *animPr )
  11.496 - {
  11.497 -   VPThdSemReq  reqData;
  11.498 -
  11.499 -   if( *singletonAddr && (*singletonAddr)->hasFinished )
  11.500 -      goto JmpToEndSingleton;
  11.501 -      
  11.502 -   reqData.reqType       = singleton_data_start;
  11.503 -   reqData.singletonPtrAddr = singletonAddr;
  11.504 -
  11.505 -   VMS__send_sem_request( &reqData, animPr );
  11.506 -   if( animPr->dataRetFromReq ) //either 0 or end singleton's return addr
  11.507 -    {    
  11.508 -       JmpToEndSingleton:
  11.509 -       asm_write_ret_from_singleton(*singletonAddr);
  11.510 -
  11.511 -    }
  11.512 -   //now, simply return
  11.513 -   //will exit either from the start singleton call or the end-singleton call
  11.514 - }
  11.515 -
  11.516 -/*Uses ID as index into array of flags.  If flag already set, resumes from
  11.517 - * end-label.  Else, sets flag and resumes normally.
  11.518 - *
  11.519 - *Note, this call cannot be inlined because the instr addr at the label
  11.520 - * inside is shared by all invocations of a given singleton ID.
  11.521 - */
  11.522 -void
  11.523 -VPThread__end_fn_singleton( int32 singletonID, VirtProcr *animPr )
  11.524 - {
  11.525 -   VPThdSemReq  reqData;
  11.526 -
  11.527 -   //don't need this addr until after at least one singleton has reached
  11.528 -   // this function
  11.529 -   VPThdSemEnv *semEnv = VMS__give_sem_env_for( animPr );
  11.530 -   asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
  11.531 -
  11.532 -   reqData.reqType     = singleton_fn_end;
  11.533 -   reqData.singletonID = singletonID;
  11.534 -
  11.535 -   VMS__send_sem_request( &reqData, animPr );
  11.536 - }
  11.537 -
  11.538 -void
  11.539 -VPThread__end_data_singleton(  VPThdSingleton **singletonPtrAddr, VirtProcr *animPr )
  11.540 - {
  11.541 -   VPThdSemReq  reqData;
  11.542 -
  11.543 -      //don't need this addr until after singleton struct has reached
  11.544 -      // this function for first time
  11.545 -      //do assembly that saves the return addr of this fn call into the
  11.546 -      // data singleton -- that data-singleton can only be given to exactly
  11.547 -      // one instance in the code of this function.  However, can use this
  11.548 -      // function in different places for different data-singletons.
  11.549 -
  11.550 -   asm_save_ret_to_singleton(*singletonPtrAddr);
  11.551 -
  11.552 -   reqData.reqType          = singleton_data_end;
  11.553 -   reqData.singletonPtrAddr = singletonPtrAddr;
  11.554 -
  11.555 -   VMS__send_sem_request( &reqData, animPr );
  11.556 - }
  11.557 -
  11.558 -
  11.559 -/*This executes the function in the masterVP, so it executes in isolation
  11.560 - * from any other copies -- only one copy of the function can ever execute
  11.561 - * at a time.
  11.562 - *
  11.563 - *It suspends to the master, and the request handler takes the function
  11.564 - * pointer out of the request and calls it, then resumes the VP.
  11.565 - *Only very short functions should be called this way -- for longer-running
  11.566 - * isolation, use transaction-start and transaction-end, which run the code
  11.567 - * between as work-code.
  11.568 - */
  11.569 -void
  11.570 -VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
  11.571 -                                    void *data, VirtProcr *animPr )
  11.572 - {
  11.573 -   VPThdSemReq  reqData;
  11.574 -
  11.575 -      //
  11.576 -   reqData.reqType          = atomic;
  11.577 -   reqData.fnToExecInMaster = ptrToFnToExecInMaster;
  11.578 -   reqData.dataForFn        = data;
  11.579 -
  11.580 -   VMS__send_sem_request( &reqData, animPr );
  11.581 - }
  11.582 -
  11.583 -
  11.584 -/*This suspends to the master.
  11.585 - *First, it looks at the VP's data, to see the highest transactionID that VP
  11.586 - * already has entered.  If the current ID is not larger, it throws an
  11.587 - * exception stating a bug in the code.  Otherwise it puts the current ID
  11.588 - * there, and adds the ID to a linked list of IDs entered -- the list is
  11.589 - * used to check that exits are properly ordered.
  11.590 - *Next it uses transactionID as an index into an array of transaction
  11.591 - * structures.
  11.592 - *If the "VP_currently_executing" field is non-null, then put requesting VP
  11.593 - * into queue in the struct.  (At some point a holder will request
  11.594 - * end-transaction, which will take this VP from the queue and resume it.)
  11.595 - *If NULL, then write requesting into the field and resume.
  11.596 - */
  11.597 -void
  11.598 -VPThread__start_transaction( int32 transactionID, VirtProcr *animPr )
  11.599 - {
  11.600 -   VPThdSemReq  reqData;
  11.601 -
  11.602 -      //
  11.603 -   reqData.reqType     = trans_start;
  11.604 -   reqData.transID     = transactionID;
  11.605 -
  11.606 -   VMS__send_sem_request( &reqData, animPr );
  11.607 - }
  11.608 -
  11.609 -/*This suspends to the master, then uses transactionID as index into an
  11.610 - * array of transaction structures.
  11.611 - *It looks at VP_currently_executing to be sure it's same as requesting VP.
  11.612 - * If different, throws an exception, stating there's a bug in the code.
  11.613 - *Next it looks at the queue in the structure.
  11.614 - *If it's empty, it sets VP_currently_executing field to NULL and resumes.
  11.615 - *If something in, gets it, sets VP_currently_executing to that VP, then
  11.616 - * resumes both.
  11.617 - */
  11.618 -void
  11.619 -VPThread__end_transaction( int32 transactionID, VirtProcr *animPr )
  11.620 - {
  11.621 -   VPThdSemReq  reqData;
  11.622 -
  11.623 -      //
  11.624 -   reqData.reqType     = trans_end;
  11.625 -   reqData.transID     = transactionID;
  11.626 -
  11.627 -   VMS__send_sem_request( &reqData, animPr );
  11.628 - }
  11.629 -//===========================================================================
    12.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    12.2 +++ b/Vthread.h	Thu Mar 01 13:20:51 2012 -0800
    12.3 @@ -0,0 +1,259 @@
    12.4 +/*
    12.5 + *  Copyright 2009 OpenSourceStewardshipFoundation.org
    12.6 + *  Licensed under GNU General Public License version 2
    12.7 + *
    12.8 + * Author: seanhalle@yahoo.com
    12.9 + *
   12.10 + */
   12.11 +
   12.12 +#ifndef _VPThread_H
   12.13 +#define	_VPThread_H
   12.14 +
   12.15 +#include "VMS_impl/VMS.h"
   12.16 +#include "C_Libraries/Queue_impl/PrivateQueue.h"
   12.17 +#include "C_Libraries/DynArray/DynArray.h"
   12.18 +
   12.19 +
   12.20 +/*This header defines everything specific to the VPThread semantic plug-in
   12.21 + */
   12.22 +
   12.23 +
   12.24 +//===========================================================================
   12.25 +   //turn on the counter measurements of language overhead -- comment to turn off
   12.26 +#define MEAS__TURN_ON_LANG_MEAS
   12.27 +
   12.28 +#define INIT_NUM_MUTEX 10000
   12.29 +#define INIT_NUM_COND  10000
   12.30 +
   12.31 +#define NUM_STRUCS_IN_SEM_ENV 1000
   12.32 +//===========================================================================
   12.33 +
   12.34 +//===========================================================================
   12.35 +typedef struct _VPThreadSemReq   VPThdSemReq;
   12.36 +typedef void  (*PtrToAtomicFn )   ( void * ); //executed atomically in master
   12.37 +//===========================================================================
   12.38 +
   12.39 +
   12.40 +/*WARNING: assembly hard-codes position of endInstrAddr as first field
   12.41 + */
   12.42 +typedef struct
   12.43 + {
   12.44 +   void           *endInstrAddr;
   12.45 +   int32           hasBeenStarted;
   12.46 +   int32           hasFinished;
   12.47 +   PrivQueueStruc *waitQ;
   12.48 + }
   12.49 +VPThdSingleton;
   12.50 +
    12.51 +/*Semantic layer-specific data sent inside a request, from the library
    12.52 + * call made in the application to the request handler run in the MasterLoop
   12.53 + */
   12.54 +enum VPThreadReqType
   12.55 + {
   12.56 +   make_mutex = 1,
   12.57 +   mutex_lock,
   12.58 +   mutex_unlock,
   12.59 +   make_cond,
   12.60 +   cond_wait,
   12.61 +   cond_signal,
   12.62 +   make_procr,
   12.63 +   malloc_req,
   12.64 +   free_req,
   12.65 +   singleton_fn_start,
   12.66 +   singleton_fn_end,
   12.67 +   singleton_data_start,
   12.68 +   singleton_data_end,
   12.69 +   atomic,
   12.70 +   trans_start,
   12.71 +   trans_end
   12.72 + };
   12.73 +
   12.74 +struct _VPThreadSemReq
   12.75 + { enum VPThreadReqType reqType;
   12.76 +   SlaveVP           *requestingVP;
   12.77 +   int32                mutexIdx;
   12.78 +   int32                condIdx;
   12.79 +
   12.80 +   void                *initData;
   12.81 +   TopLevelFnPtr       fnPtr;
   12.82 +   int32                coreToScheduleOnto;
   12.83 +
   12.84 +   size_t                sizeToMalloc;
   12.85 +   void                *ptrToFree;
   12.86 +
   12.87 +   int32              singletonID;
   12.88 +   VPThdSingleton     **singletonPtrAddr;
   12.89 +
   12.90 +   PtrToAtomicFn      fnToExecInMaster;
   12.91 +   void              *dataForFn;
   12.92 +
   12.93 +   int32              transID;
   12.94 + }
   12.95 +/* VPThreadSemReq */;
   12.96 +
   12.97 +
   12.98 +typedef struct
   12.99 + {
  12.100 +   SlaveVP      *VPCurrentlyExecuting;
  12.101 +   PrivQueueStruc *waitingVPQ;
  12.102 + }
  12.103 +VPThdTrans;
  12.104 +
  12.105 +
  12.106 +typedef struct
  12.107 + {
  12.108 +   int32           mutexIdx;
  12.109 +   SlaveVP      *holderOfLock;
  12.110 +   PrivQueueStruc *waitingQueue;
  12.111 + }
  12.112 +VPThdMutex;
  12.113 +
  12.114 +
  12.115 +typedef struct
  12.116 + {
  12.117 +   int32           condIdx;
  12.118 +   PrivQueueStruc *waitingQueue;
  12.119 +   VPThdMutex       *partnerMutex;
  12.120 + }
  12.121 +VPThdCond;
  12.122 +
  12.123 +typedef struct _TransListElem TransListElem;
  12.124 +struct _TransListElem
  12.125 + {
  12.126 +   int32          transID;
  12.127 +   TransListElem *nextTrans;
  12.128 + };
  12.129 +//TransListElem
  12.130 +
  12.131 +typedef struct
  12.132 + {
  12.133 +   int32          highestTransEntered;
  12.134 +   TransListElem *lastTransEntered;
  12.135 + }
  12.136 +VPThdSemData;
  12.137 +
  12.138 +
  12.139 +typedef struct
  12.140 + {
  12.141 +      //Standard stuff will be in most every semantic env
  12.142 +   PrivQueueStruc  **readyVPQs;
  12.143 +   int32             numVirtVP;
  12.144 +   int32             nextCoreToGetNewVP;
  12.145 +   int32             primitiveStartTime;
  12.146 +
  12.147 +      //Specific to this semantic layer
  12.148 +   VPThdMutex      **mutexDynArray;
  12.149 +   PrivDynArrayInfo *mutexDynArrayInfo;
  12.150 +
  12.151 +   VPThdCond       **condDynArray;
  12.152 +   PrivDynArrayInfo *condDynArrayInfo;
  12.153 +
  12.154 +   void             *applicationGlobals;
  12.155 +
  12.156 +                       //fix limit on num with dynArray
  12.157 +   VPThdSingleton     fnSingletons[NUM_STRUCS_IN_SEM_ENV];
  12.158 +
  12.159 +   VPThdTrans        transactionStrucs[NUM_STRUCS_IN_SEM_ENV];
  12.160 + }
  12.161 +VPThdSemEnv;
  12.162 +
  12.163 +
  12.164 +//===========================================================================
  12.165 +
  12.166 +inline void
  12.167 +VPThread__create_seed_procr_and_do_work( TopLevelFnPtr fn, void *initData );
  12.168 +
  12.169 +//=======================
  12.170 +
  12.171 +inline SlaveVP *
  12.172 +VPThread__create_thread( TopLevelFnPtr fnPtr, void *initData,
  12.173 +                          SlaveVP *creatingVP );
  12.174 +
  12.175 +inline SlaveVP *
  12.176 +VPThread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
  12.177 +                          SlaveVP *creatingVP,  int32  coreToScheduleOnto );
  12.178 +
  12.179 +inline void
  12.180 +VPThread__dissipate_thread( SlaveVP *procrToDissipate );
  12.181 +
  12.182 +//=======================
  12.183 +inline void
  12.184 +VPThread__set_globals_to( void *globals );
  12.185 +
  12.186 +inline void *
  12.187 +VPThread__give_globals();
  12.188 +
  12.189 +//=======================
  12.190 +inline int32
  12.191 +VPThread__make_mutex( SlaveVP *animVP );
  12.192 +
  12.193 +inline void
  12.194 +VPThread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringVP );
  12.195 +                                                    
  12.196 +inline void
  12.197 +VPThread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingVP );
  12.198 +
  12.199 +
  12.200 +//=======================
  12.201 +inline int32
  12.202 +VPThread__make_cond( int32 ownedMutexIdx, SlaveVP *animPr);
  12.203 +
  12.204 +inline void
  12.205 +VPThread__cond_wait( int32 condIdx, SlaveVP *waitingPr);
  12.206 +
  12.207 +inline void *
  12.208 +VPThread__cond_signal( int32 condIdx, SlaveVP *signallingVP );
  12.209 +
  12.210 +
  12.211 +//=======================
  12.212 +void
  12.213 +VPThread__start_fn_singleton( int32 singletonID, SlaveVP *animVP );
  12.214 +
  12.215 +void
  12.216 +VPThread__end_fn_singleton( int32 singletonID, SlaveVP *animVP );
  12.217 +
  12.218 +void
   12.219 +VPThread__start_data_singleton( VPThdSingleton **singletonAddr, SlaveVP *animVP );
  12.220 +
  12.221 +void
  12.222 +VPThread__end_data_singleton( VPThdSingleton **singletonAddr, SlaveVP *animVP );
  12.223 +
  12.224 +void
  12.225 +VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
  12.226 +                                         void *data, SlaveVP *animVP );
  12.227 +
  12.228 +void
  12.229 +VPThread__start_transaction( int32 transactionID, SlaveVP *animVP );
  12.230 +
  12.231 +void
  12.232 +VPThread__end_transaction( int32 transactionID, SlaveVP *animVP );
  12.233 +
  12.234 +
  12.235 +
  12.236 +//=========================  Internal use only  =============================
  12.237 +inline void
  12.238 +VPThread__Request_Handler( SlaveVP *requestingVP, void *_semEnv );
  12.239 +
  12.240 +inline SlaveVP *
  12.241 +VPThread__schedule_virt_procr( void *_semEnv, int coreNum );
  12.242 +
  12.243 +//=======================
  12.244 +inline void
  12.245 +VPThread__free_semantic_request( VPThdSemReq *semReq );
  12.246 +
  12.247 +//=======================
  12.248 +
  12.249 +void *
  12.250 +VPThread__malloc( size_t sizeToMalloc, SlaveVP *animVP );
  12.251 +
  12.252 +void
  12.253 +VPThread__init();
  12.254 +
  12.255 +void
  12.256 +VPThread__cleanup_after_shutdown();
  12.257 +
  12.258 +void inline
  12.259 +resume_procr( SlaveVP *procr, VPThdSemEnv *semEnv );
  12.260 +
  12.261 +#endif	/* _VPThread_H */
  12.262 +
    13.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    13.2 +++ b/Vthread.s	Thu Mar 01 13:20:51 2012 -0800
    13.3 @@ -0,0 +1,21 @@
    13.4 +
    13.5 +//Assembly code takes the return addr off the stack and saves
    13.6 +// it into the singleton.  The first field in the singleton is the
    13.7 +// "endInstrAddr" field, and the return addr is at 0x8(%rbp)
    13.8 +.globl asm_save_ret_to_singleton
    13.9 +asm_save_ret_to_singleton:
   13.10 +    movq 0x8(%rbp),     %rax   #get ret address, ebp is the same as in the calling function
   13.11 +    movq     %rax,     (%rdi) #write ret addr to endInstrAddr field
   13.12 +    ret
   13.13 +
   13.14 +
   13.15 +//Assembly code changes the return addr on the stack to the one
   13.16 +// saved into the singleton by the end-singleton-fn
    13.17 +//The stack's return addr is at 0x8(%rbp)
   13.18 +.globl asm_write_ret_from_singleton
   13.19 +asm_write_ret_from_singleton:
   13.20 +    movq    (%rdi),    %rax  #get endInstrAddr field
   13.21 +    movq      %rax,    0x8(%rbp) #write return addr to the stack of the caller
   13.22 +    ret
   13.23 +
   13.24 +
    14.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    14.2 +++ b/Vthread_Meas.h	Thu Mar 01 13:20:51 2012 -0800
    14.3 @@ -0,0 +1,109 @@
    14.4 +/* 
    14.5 + * File:   Vthread_Meas.h
    14.6 + * Author: msach
    14.7 + *
    14.8 + * Created on June 10, 2011, 12:20 PM
    14.9 + */
   14.10 +
   14.11 +#ifndef VTHREAD_MEAS_H
   14.12 +#define VTHREAD_MEAS_H
   14.13 +
   14.14 +#ifdef MEAS__TURN_ON_LANG_MEAS
   14.15 +
   14.16 +   #ifdef MEAS__Make_Meas_Hists_for_Language
   14.17 +   #undef MEAS__Make_Meas_Hists_for_Language
   14.18 +   #endif
   14.19 +
   14.20 +//===================  Language-specific Measurement Stuff ===================
   14.21 +//
   14.22 +//
   14.23 +   #define createHistIdx      1  //note: starts at 1
   14.24 +   #define mutexLockHistIdx   2
   14.25 +   #define mutexUnlockHistIdx 3
   14.26 +   #define condWaitHistIdx    4
   14.27 +   #define condSignalHistIdx  5
   14.28 +
   14.29 +   #define MEAS__Make_Meas_Hists_for_Language() \
   14.30 +   _VMSMasterEnv->measHistsInfo = \
   14.31 +              makePrivDynArrayOfSize( (void***)&(_VMSMasterEnv->measHists), 200); \
   14.32 +   makeAMeasHist( createHistIdx,      "create",        250, 0, 100 ) \
   14.33 +   makeAMeasHist( mutexLockHistIdx,   "mutex_lock",    50, 0, 100 ) \
   14.34 +   makeAMeasHist( mutexUnlockHistIdx, "mutex_unlock",  50, 0, 100 ) \
   14.35 +   makeAMeasHist( condWaitHistIdx,    "cond_wait",     50, 0, 100 ) \
   14.36 +   makeAMeasHist( condSignalHistIdx,  "cond_signal",   50, 0, 100 )
   14.37 +
   14.38 +   
   14.39 +   #define Meas_startCreate \
   14.40 +    int32 startStamp, endStamp; \
   14.41 +    saveLowTimeStampCountInto( startStamp ); 
   14.42 +
   14.43 +   #define Meas_endCreate \
   14.44 +    saveLowTimeStampCountInto( endStamp ); \
   14.45 +    addIntervalToHist( startStamp, endStamp, \
   14.46 +                                 _VMSMasterEnv->measHists[ createHistIdx ] );
   14.47 +
   14.48 +   #define Meas_startMutexLock \
   14.49 +    int32 startStamp, endStamp; \
   14.50 +    saveLowTimeStampCountInto( startStamp ); 
   14.51 +
   14.52 +   #define Meas_endMutexLock \
   14.53 +    saveLowTimeStampCountInto( endStamp ); \
   14.54 +    addIntervalToHist( startStamp, endStamp, \
   14.55 +                              _VMSMasterEnv->measHists[ mutexLockHistIdx ] );
   14.56 +
   14.57 +   #define Meas_startMutexUnlock \
   14.58 +    int32 startStamp, endStamp; \
   14.59 +    saveLowTimeStampCountInto( startStamp ); 
   14.60 +
   14.61 +   #define Meas_endMutexUnlock \
   14.62 +    saveLowTimeStampCountInto( endStamp ); \
   14.63 +    addIntervalToHist( startStamp, endStamp, \
   14.64 +                            _VMSMasterEnv->measHists[ mutexUnlockHistIdx ] );
   14.65 +
   14.66 +   #define Meas_startCondWait \
   14.67 +    int32 startStamp, endStamp; \
   14.68 +    saveLowTimeStampCountInto( startStamp ); 
   14.69 +
   14.70 +   #define Meas_endCondWait \
   14.71 +    saveLowTimeStampCountInto( endStamp ); \
   14.72 +    addIntervalToHist( startStamp, endStamp, \
   14.73 +                               _VMSMasterEnv->measHists[ condWaitHistIdx ] );
   14.74 +
   14.75 +   #define Meas_startCondSignal \
   14.76 +    int32 startStamp, endStamp; \
   14.77 +    saveLowTimeStampCountInto( startStamp ); 
   14.78 +
   14.79 +   #define Meas_endCondSignal \
   14.80 +    saveLowTimeStampCountInto( endStamp ); \
   14.81 +    addIntervalToHist( startStamp, endStamp, \
   14.82 +                             _VMSMasterEnv->measHists[ condSignalHistIdx ] );
   14.83 +
   14.84 +#else //===================== turned off ==========================
   14.85 +
   14.86 +   #define MEAS__Make_Meas_Hists_for_Language() 
   14.87 +   
   14.88 +   #define Meas_startCreate 
   14.89 +
   14.90 +   #define Meas_endCreate 
   14.91 +
   14.92 +   #define Meas_startMutexLock
   14.93 +
   14.94 +   #define Meas_endMutexLock
   14.95 +
   14.96 +   #define Meas_startMutexUnlock
   14.97 +
   14.98 +   #define Meas_endMutexUnlock
   14.99 +
  14.100 +   #define Meas_startCondWait
  14.101 +
  14.102 +   #define Meas_endCondWait 
  14.103 +
  14.104 +   #define Meas_startCondSignal 
  14.105 +
  14.106 +   #define Meas_endCondSignal 
  14.107 +
  14.108 +#endif
  14.109 +
  14.110 +
  14.111 +#endif	/* VTHREAD_MEAS_H */
  14.112 +
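The macros above all follow one pattern: the start macro declares two timestamp variables and samples the cycle counter, and the end macro samples it again and adds the interval to that operation's histogram in `_VMSMasterEnv->measHists`. The following is a minimal self-contained sketch of that pattern; `saveLowTimeStampCountInto`, `addIntervalToHist`, and the `Hist` type here are hypothetical stand-ins for the real VMS machinery, which differs in detail:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for VMS's timestamp + histogram machinery --
 * NOT the real saveLowTimeStampCountInto / addIntervalToHist. */
static uint64_t fake_tsc = 0;
#define saveLowTimeStampCountInto( v )  ((v) = (int64_t)(fake_tsc += 100))

enum { NUM_BINS = 8 };
typedef struct { int64_t bin[NUM_BINS]; } Hist;

static void addIntervalToHist( int64_t start, int64_t end, Hist *h )
 { int idx = (int)((end - start) / 64);   /* crude fixed-width binning */
   if( idx >= NUM_BINS ) idx = NUM_BINS - 1;
   h->bin[idx] += 1;
 }

/* The shape every Meas_startX / Meas_endX pair follows: */
#define Meas_startMutexLock \
   int64_t startStamp, endStamp; \
   saveLowTimeStampCountInto( startStamp );

#define Meas_endMutexLock( histPtr ) \
   saveLowTimeStampCountInto( endStamp ); \
   addIntervalToHist( startStamp, endStamp, histPtr );

int64_t timed_section( Hist *h )
 { Meas_startMutexLock
      /* ...the work being measured would go here... */
   Meas_endMutexLock( h )
   return endStamp - startStamp;
 }
```

Because the start macro declares `startStamp`/`endStamp` in the enclosing scope, each start/end pair can appear at most once per block, which is why the request handlers place them at the top and bottom of each handler body.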
    15.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    15.2 +++ b/Vthread_PluginFns.c	Thu Mar 01 13:20:51 2012 -0800
    15.3 @@ -0,0 +1,192 @@
    15.4 +/*
    15.5 + * Copyright 2010  OpenSourceCodeStewardshipFoundation
    15.6 + *
    15.7 + * Licensed under BSD
    15.8 + */
    15.9 +
   15.10 +#include <stdio.h>
   15.11 +#include <stdlib.h>
   15.12 +#include <malloc.h>
   15.13 +
   15.14 +#include "VMS/Queue_impl/PrivateQueue.h"
   15.15 +#include "VPThread.h"
   15.16 +#include "VPThread_Request_Handlers.h"
   15.17 +#include "VPThread_helper.h"
   15.18 +
   15.19 +//=========================== Local Fn Prototypes ===========================
   15.20 +
   15.21 +void inline
   15.22 +handleSemReq( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv );
   15.23 +
   15.24 +inline void
   15.25 +handleDissipate(             SlaveVP *requestingVP, VPThdSemEnv *semEnv );
   15.26 +
   15.27 +inline void
   15.28 +handleCreate( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv  );
   15.29 +
   15.30 +
   15.31 +//============================== Scheduler ==================================
   15.32 +//
   15.33 +/*For VPThread, scheduling a slave simply takes the next work-unit off the
   15.34 + * ready-to-go work-unit queue and assigns it to the slaveToSched.
    15.35 + *If the ready-to-go work-unit queue is empty, then there is nothing to
    15.36 + * schedule to the slave -- return NULL to let the Master loop know that
    15.37 + * scheduling that slave failed.
    15.38 + */
    15.39 +char __Scheduler[] = "FIFO Scheduler"; //Global variable for name in saved histogram
   15.40 +SlaveVP *
   15.41 +VPThread__schedule_virt_procr( void *_semEnv, int coreNum )
   15.42 + { SlaveVP   *schedVP;
   15.43 +   VPThdSemEnv *semEnv;
   15.44 +
   15.45 +   semEnv = (VPThdSemEnv *)_semEnv;
   15.46 +
   15.47 +   schedVP = readPrivQ( semEnv->readyVPQs[coreNum] );
   15.48 +      //Note, using a non-blocking queue -- it returns NULL if queue empty
   15.49 +
   15.50 +   return( schedVP );
   15.51 + }
   15.52 +
   15.53 +
   15.54 +
   15.55 +//===========================  Request Handler  =============================
   15.56 +//
    15.57 +/*Dispatches the requests made by the requesting slave.
    15.58 + *Semantic requests -- mutex, cond var, malloc/free, singleton, atomic,
    15.59 + * and transaction operations -- go to handleSemReq, which switches on
    15.60 + * the semantic request type.
    15.61 + *Create and dissipate requests get their own handlers, and VMS-semantic
    15.62 + * requests are passed through to VMS itself.
    15.63 + *A slave may have queued up several requests before suspending its
    15.64 + * animation, so keep taking requests out of the slave and handling
    15.65 + * them until none remain.
    15.66 + */
   15.67 +void
   15.68 +VPThread__Request_Handler( SlaveVP *requestingVP, void *_semEnv )
   15.69 + { VPThdSemEnv *semEnv;
   15.70 +   VMSReqst    *req;
   15.71 + 
   15.72 +   semEnv = (VPThdSemEnv *)_semEnv;
   15.73 +
   15.74 +   req    = VMS__take_next_request_out_of( requestingVP );
   15.75 +
   15.76 +   while( req != NULL )
   15.77 +    {
   15.78 +      switch( req->reqType )
   15.79 +       { case semantic:     handleSemReq(         req, requestingVP, semEnv);
   15.80 +            break;
   15.81 +         case createReq:    handleCreate(         req, requestingVP, semEnv);
   15.82 +            break;
   15.83 +         case dissipate:    handleDissipate(           requestingVP, semEnv);
   15.84 +            break;
   15.85 +         case VMSSemantic:  VMS__handle_VMSSemReq(req, requestingVP, semEnv,
   15.86 +                                                (ResumeVPFnPtr)&resume_procr);
   15.87 +            break;
   15.88 +         default:
   15.89 +            break;
   15.90 +       }
   15.91 +
   15.92 +      req = VMS__take_next_request_out_of( requestingVP );
   15.93 +    } //while( req != NULL )
   15.94 + }
   15.95 +
   15.96 +
   15.97 +void inline
   15.98 +handleSemReq( VMSReqst *req, SlaveVP *reqVP, VPThdSemEnv *semEnv )
   15.99 + { VPThdSemReq *semReq;
  15.100 +
  15.101 +   semReq = VMS__take_sem_reqst_from(req);
  15.102 +   if( semReq == NULL ) return;
  15.103 +   switch( semReq->reqType )
  15.104 +    {
  15.105 +      case make_mutex:     handleMakeMutex(  semReq, semEnv);
  15.106 +         break;
  15.107 +      case mutex_lock:     handleMutexLock(  semReq, semEnv);
  15.108 +         break;
  15.109 +      case mutex_unlock:   handleMutexUnlock(semReq, semEnv);
  15.110 +         break;
  15.111 +      case make_cond:      handleMakeCond(   semReq, semEnv);
  15.112 +         break;
  15.113 +      case cond_wait:      handleCondWait(   semReq, semEnv);
  15.114 +         break;
  15.115 +      case cond_signal:    handleCondSignal( semReq, semEnv);
  15.116 +         break;
  15.117 +      case malloc_req:    handleMalloc( semReq, reqVP, semEnv);
  15.118 +         break;
  15.119 +      case free_req:    handleFree( semReq, reqVP, semEnv);
  15.120 +         break;
  15.121 +      case singleton_fn_start:  handleStartFnSingleton(semReq, reqVP, semEnv);
  15.122 +         break;
  15.123 +      case singleton_fn_end:    handleEndFnSingleton(  semReq, reqVP, semEnv);
  15.124 +         break;
  15.125 +      case singleton_data_start:handleStartDataSingleton(semReq,reqVP,semEnv);
  15.126 +         break;
  15.127 +      case singleton_data_end:  handleEndDataSingleton(semReq, reqVP, semEnv);
  15.128 +         break;
  15.129 +      case atomic:    handleAtomic( semReq, reqVP, semEnv);
  15.130 +         break;
  15.131 +      case trans_start:    handleTransStart( semReq, reqVP, semEnv);
  15.132 +         break;
  15.133 +      case trans_end:    handleTransEnd( semReq, reqVP, semEnv);
  15.134 +         break;
  15.135 +    }
  15.136 + }
  15.137 +
  15.138 +//=========================== VMS Request Handlers ===========================
  15.139 +//
  15.140 +inline void
  15.141 +handleDissipate( SlaveVP *requestingVP, VPThdSemEnv *semEnv )
  15.142 + {
  15.143 +      //free any semantic data allocated to the virt procr
  15.144 +   VMS__free( requestingVP->semanticData );
  15.145 +
  15.146 +      //Now, call VMS to free_all AppVP state -- stack and so on
  15.147 +   VMS__dissipate_procr( requestingVP );
  15.148 +
  15.149 +   semEnv->numVP -= 1;
  15.150 +   if( semEnv->numVP == 0 )
  15.151 +    {    //no more work, so shutdown
  15.152 +      VMS__shutdown();
  15.153 +    }
  15.154 + }
  15.155 +
  15.156 +inline void
  15.157 +handleCreate( VMSReqst *req, SlaveVP *requestingVP, VPThdSemEnv *semEnv  )
  15.158 + { VPThdSemReq *semReq;
  15.159 +   SlaveVP    *newVP;
  15.160 +    
  15.161 +    //========================= MEASUREMENT STUFF ======================
  15.162 +    Meas_startCreate
  15.163 +    //==================================================================
  15.164 +     
  15.165 +   semReq = VMS__take_sem_reqst_from( req );
  15.166 +   
  15.167 +   newVP = VPThread__create_procr_helper( semReq->fnPtr, semReq->initData, 
  15.168 +                                          semEnv, semReq->coreToScheduleOnto);
  15.169 +
  15.170 +      //For VPThread, caller needs ptr to created processor returned to it
  15.171 +   requestingVP->dataRetFromReq = newVP;
  15.172 +
  15.173 +   resume_procr( newVP,        semEnv );
  15.174 +   resume_procr( requestingVP, semEnv );
  15.175 +
  15.176 +     //========================= MEASUREMENT STUFF ======================
  15.177 +         Meas_endCreate
  15.178 +     #ifdef MEAS__TIME_PLUGIN
  15.179 +     #ifdef MEAS__SUB_CREATE
  15.180 +         subIntervalFromHist( startStamp, endStamp,
  15.181 +                                        _VMSMasterEnv->reqHdlrHighTimeHist );
  15.182 +     #endif
  15.183 +     #endif
  15.184 +     //==================================================================
  15.185 + }
  15.186 +
  15.187 +
  15.188 +//=========================== Helper ==============================
  15.189 +void inline
  15.190 +resume_procr( SlaveVP *procr, VPThdSemEnv *semEnv )
  15.191 + {
  15.192 +   writePrivQ( procr, semEnv->readyVPQs[ procr->coreAnimatedBy] );
  15.193 + }
  15.194 +
  15.195 +//===========================================================================
  15.196 \ No newline at end of file
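The scheduler and `resume_procr` above together implement a simple per-core FIFO ready-queue discipline: resuming a VP appends it to the queue of the core that animates it, and scheduling pops the head of that core's queue (or returns NULL when the core has no ready work). A sketch with simplified stand-in types -- the real plugin uses VMS's `PrivQueueStruc` and `SlaveVP`, so only the queue discipline is reproduced here:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for VMS's SlaveVP and private queue. */
enum { QCAP = 16, NUM_CORES_SK = 2 };

typedef struct { int coreAnimatedBy; } VP;
typedef struct { VP *slot[QCAP]; int head, tail; } Q;

static VP *readQ( Q *q )            /* non-blocking: NULL when empty */
 { if( q->head == q->tail ) return NULL;
   return q->slot[ q->head++ % QCAP ];
 }
static void writeQ( VP *vp, Q *q ) { q->slot[ q->tail++ % QCAP ] = vp; }

static Q readyVPQs[NUM_CORES_SK];   /* one ready queue per core */

/* mirrors VPThread__schedule_virt_procr */
VP *schedule_for_core( int coreNum )
 { return readQ( &readyVPQs[coreNum] ); }

/* mirrors resume_procr: a resumed VP goes to its own core's queue */
void resume_vp( VP *vp )
 { writeQ( vp, &readyVPQs[ vp->coreAnimatedBy ] ); }
```

Keeping one queue per core means the scheduler never migrates a VP between cores; placement is decided once, at creation time, by the helper that sets `coreAnimatedBy`.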
    16.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    16.2 +++ b/Vthread_Request_Handlers.c	Thu Mar 01 13:20:51 2012 -0800
    16.3 @@ -0,0 +1,444 @@
    16.4 +/*
    16.5 + * Copyright 2010  OpenSourceCodeStewardshipFoundation
    16.6 + *
    16.7 + * Licensed under BSD
    16.8 + */
    16.9 +
   16.10 +#include <stdio.h>
   16.11 +#include <stdlib.h>
   16.12 +#include <malloc.h>
   16.13 +
   16.14 +#include "VMS_Implementations/VMS_impl/VMS.h"
   16.15 +#include "C_Libraries/Queue_impl/PrivateQueue.h"
   16.16 +#include "C_Libraries/Hash_impl/PrivateHash.h"
   16.17 +#include "Vthread.h"
   16.18 +
   16.19 +
   16.20 +
   16.21 +//===============================  Mutexes  =================================
   16.22 +/*The semantic request has a mutexIdx value, which acts as index into array.
   16.23 + */
   16.24 +inline void
   16.25 +handleMakeMutex( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
   16.26 + { VPThdMutex  *newMutex;
   16.27 +   SlaveVP   *requestingVP;
   16.28 +
   16.29 +   requestingVP = semReq->requestingVP;
   16.30 +   newMutex = VMS__malloc( sizeof(VPThdMutex)  );
   16.31 +   newMutex->waitingQueue = makeVMSPrivQ( requestingVP );
   16.32 +   newMutex->holderOfLock = NULL;
   16.33 +
   16.34 +      //The mutex struc contains an int that identifies it -- use that as
   16.35 +      // its index within the array of mutexes.  Add the new mutex to array.
   16.36 +   newMutex->mutexIdx = addToDynArray( newMutex, semEnv->mutexDynArrayInfo );
   16.37 +
   16.38 +      //Now communicate the mutex's identifying int back to requesting procr
   16.39 +   semReq->requestingVP->dataRetFromReq = (void*)newMutex->mutexIdx; //mutexIdx is 32 bit
   16.40 +
   16.41 +      //re-animate the requester
   16.42 +   resume_procr( requestingVP, semEnv );
   16.43 + }
   16.44 +
   16.45 +
   16.46 +inline void
   16.47 +handleMutexLock( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
   16.48 + { VPThdMutex  *mutex;
   16.49 +   //===================  Deterministic Replay  ======================
   16.50 +   #ifdef RECORD_DETERMINISTIC_REPLAY
   16.51 +   
   16.52 +   #endif
   16.53 +   //=================================================================
   16.54 +         Meas_startMutexLock
   16.55 +      //lookup mutex struc, using mutexIdx as index
   16.56 +   mutex = semEnv->mutexDynArray[ semReq->mutexIdx ];
   16.57 +
   16.58 +      //see if mutex is free or not
   16.59 +   if( mutex->holderOfLock == NULL ) //none holding, give lock to requester
   16.60 +    {
   16.61 +      mutex->holderOfLock = semReq->requestingVP;
   16.62 +      
   16.63 +         //re-animate requester, now that it has the lock
   16.64 +      resume_procr( semReq->requestingVP, semEnv );
   16.65 +    }
   16.66 +   else //queue up requester to wait for release of lock
   16.67 +    {
   16.68 +      writePrivQ( semReq->requestingVP, mutex->waitingQueue );
   16.69 +    }
   16.70 +         Meas_endMutexLock
   16.71 + }
   16.72 +
   16.73 +/*
   16.74 + */
   16.75 +inline void
   16.76 +handleMutexUnlock( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
   16.77 + { VPThdMutex  *mutex;
   16.78 +
   16.79 +         Meas_startMutexUnlock
   16.80 +      //lookup mutex struc, using mutexIdx as index
   16.81 +   mutex = semEnv->mutexDynArray[ semReq->mutexIdx ];
   16.82 +
   16.83 +      //set new holder of mutex-lock to be next in queue (NULL if empty)
   16.84 +   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
   16.85 +
   16.86 +      //if have new non-NULL holder, re-animate it
   16.87 +   if( mutex->holderOfLock != NULL )
   16.88 +    {
   16.89 +      resume_procr( mutex->holderOfLock, semEnv );
   16.90 +    }
   16.91 +
   16.92 +      //re-animate the releaser of the lock
   16.93 +   resume_procr( semReq->requestingVP, semEnv );
   16.94 +         Meas_endMutexUnlock
   16.95 + }
   16.96 +
   16.97 +//===========================  Condition Vars  ==============================
   16.98 +/*The semantic request has the cond-var value and mutex value, which are the
   16.99 + * indexes into the array.  Not worrying about having too many mutexes or
  16.100 + * cond vars created, so using array instead of hash table, for speed.
  16.101 + */
  16.102 +
  16.103 +
   16.104 +/*Make-cond has to be called with the mutex that the cond is paired to.
   16.105 + *It doesn't have to be implemented this way, but cond vars were confusing
   16.106 + * to learn until we deduced that each cond var owns a mutex that is used
   16.107 + * only for interacting with that cond var.  So, make this pairing explicit.
  16.108 + */
  16.109 +inline void
  16.110 +handleMakeCond( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
  16.111 + { VPThdCond   *newCond;
  16.112 +   SlaveVP  *requestingVP;
  16.113 +
  16.114 +   requestingVP  = semReq->requestingVP;
  16.115 +   newCond = VMS__malloc( sizeof(VPThdCond) );
  16.116 +   newCond->partnerMutex = semEnv->mutexDynArray[ semReq->mutexIdx ];
  16.117 +
  16.118 +   newCond->waitingQueue = makeVMSPrivQ();
  16.119 +
  16.120 +      //The cond struc contains an int that identifies it -- use that as
  16.121 +      // its index within the array of conds.  Add the new cond to array.
  16.122 +   newCond->condIdx = addToDynArray( newCond, semEnv->condDynArrayInfo );
  16.123 +
  16.124 +      //Now communicate the cond's identifying int back to requesting procr
  16.125 +   semReq->requestingVP->dataRetFromReq = (void*)newCond->condIdx; //condIdx is 32 bit
  16.126 +   
  16.127 +      //re-animate the requester
  16.128 +   resume_procr( requestingVP, semEnv );
  16.129 + }
  16.130 +
  16.131 +
  16.132 +/*Mutex has already been paired to the cond var, so don't need to send the
  16.133 + * mutex, just the cond var.  Don't have to do this, but want to bitch-slap
   16.134 + * the designers of the Posix standard  ; )
  16.135 + */
  16.136 +inline void
  16.137 +handleCondWait( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
  16.138 + { VPThdCond   *cond;
  16.139 +   VPThdMutex  *mutex;
  16.140 +
  16.141 +         Meas_startCondWait
  16.142 +      //get cond struc out of array of them that's in the sem env
  16.143 +   cond = semEnv->condDynArray[ semReq->condIdx ];
  16.144 +
  16.145 +      //add requester to queue of wait-ers
  16.146 +   writePrivQ( semReq->requestingVP, cond->waitingQueue );
  16.147 +    
  16.148 +      //unlock mutex -- can't reuse above handler 'cause not queuing releaser
  16.149 +   mutex = cond->partnerMutex;
  16.150 +   mutex->holderOfLock = readPrivQ( mutex->waitingQueue );
  16.151 +
  16.152 +   if( mutex->holderOfLock != NULL )
  16.153 +    {
  16.154 +      resume_procr( mutex->holderOfLock, semEnv );
  16.155 +    }
  16.156 +         Meas_endCondWait
  16.157 + }
  16.158 +
  16.159 +
   16.160 +/*Note that this has to be implemented such that the waiter is guaranteed
   16.161 + * to be the one that gets the lock next
  16.162 + */
  16.163 +inline void
  16.164 +handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv)
  16.165 + { VPThdCond   *cond;
  16.166 +   VPThdMutex  *mutex;
  16.167 +   SlaveVP  *waitingVP;
  16.168 +
  16.169 +         Meas_startCondSignal;
  16.170 +      //get cond struc out of array of them that's in the sem env
  16.171 +   cond = semEnv->condDynArray[ semReq->condIdx ];
  16.172 +   
  16.173 +      //take next waiting procr out of queue
  16.174 +   waitingVP = readPrivQ( cond->waitingQueue );
  16.175 +
  16.176 +      //transfer waiting procr to wait queue of mutex
  16.177 +      // mutex is guaranteed to be held by signalling procr, so no check
  16.178 +   mutex = cond->partnerMutex;
  16.179 +   pushPrivQ( waitingVP, mutex->waitingQueue ); //is first out when read
  16.180 +
  16.181 +      //re-animate the signalling procr
  16.182 +   resume_procr( semReq->requestingVP, semEnv );
  16.183 +         Meas_endCondSignal;
  16.184 + }
  16.185 +
  16.186 +
  16.187 +
  16.188 +//============================================================================
  16.189 +//
  16.190 +/*
  16.191 + */
  16.192 +void inline
  16.193 +handleMalloc(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv)
  16.194 + { void *ptr;
  16.195 +
  16.196 +         //========================= MEASUREMENT STUFF ======================
  16.197 +         #ifdef MEAS__TIME_PLUGIN
  16.198 +         int32 startStamp, endStamp;
  16.199 +         saveLowTimeStampCountInto( startStamp );
  16.200 +         #endif
  16.201 +         //==================================================================
  16.202 +   ptr = VMS__malloc( semReq->sizeToMalloc );
  16.203 +   requestingVP->dataRetFromReq = ptr;
  16.204 +   resume_procr( requestingVP, semEnv );
  16.205 +         //========================= MEASUREMENT STUFF ======================
  16.206 +         #ifdef MEAS__TIME_PLUGIN
  16.207 +         saveLowTimeStampCountInto( endStamp );
  16.208 +         subIntervalFromHist( startStamp, endStamp,
  16.209 +                                        _VMSMasterEnv->reqHdlrHighTimeHist );
  16.210 +         #endif
  16.211 +         //==================================================================
  16.212 +  }
  16.213 +
  16.214 +/*
  16.215 + */
  16.216 +void inline
  16.217 +handleFree( VPThdSemReq *semReq, SlaveVP *requestingVP, VPThdSemEnv *semEnv)
  16.218 + {
  16.219 +         //========================= MEASUREMENT STUFF ======================
  16.220 +         #ifdef MEAS__TIME_PLUGIN
  16.221 +         int32 startStamp, endStamp;
  16.222 +         saveLowTimeStampCountInto( startStamp );
  16.223 +         #endif
  16.224 +         //==================================================================
  16.225 +   VMS__free( semReq->ptrToFree );
  16.226 +   resume_procr( requestingVP, semEnv );
  16.227 +         //========================= MEASUREMENT STUFF ======================
  16.228 +         #ifdef MEAS__TIME_PLUGIN
  16.229 +         saveLowTimeStampCountInto( endStamp );
  16.230 +         subIntervalFromHist( startStamp, endStamp,
  16.231 +                                        _VMSMasterEnv->reqHdlrHighTimeHist );
  16.232 +         #endif
  16.233 +         //==================================================================
  16.234 + }
  16.235 +
  16.236 +
  16.237 +//===========================================================================
  16.238 +//
  16.239 +/*Uses ID as index into array of flags.  If flag already set, resumes from
  16.240 + * end-label.  Else, sets flag and resumes normally.
  16.241 + */
  16.242 +void inline
  16.243 +handleStartSingleton_helper( VPThdSingleton *singleton, SlaveVP *reqstingVP,
  16.244 +                             VPThdSemEnv    *semEnv )
  16.245 + {
  16.246 +   if( singleton->hasFinished )
  16.247 +    {    //the code that sets the flag to true first sets the end instr addr
  16.248 +      reqstingVP->dataRetFromReq = singleton->endInstrAddr;
  16.249 +      resume_procr( reqstingVP, semEnv );
  16.250 +      return;
  16.251 +    }
  16.252 +   else if( singleton->hasBeenStarted )
  16.253 +    {    //singleton is in-progress in a diff slave, so wait for it to finish
  16.254 +      writePrivQ(reqstingVP, singleton->waitQ );
  16.255 +      return;
  16.256 +    }
  16.257 +   else
  16.258 +    {    //hasn't been started, so this is the first attempt at the singleton
  16.259 +      singleton->hasBeenStarted = TRUE;
  16.260 +      reqstingVP->dataRetFromReq = 0x0;
  16.261 +      resume_procr( reqstingVP, semEnv );
  16.262 +      return;
  16.263 +    }
  16.264 + }
  16.265 +void inline
  16.266 +handleStartFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
  16.267 +                      VPThdSemEnv *semEnv )
  16.268 + { VPThdSingleton *singleton;
  16.269 +
  16.270 +   singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
  16.271 +   handleStartSingleton_helper( singleton, requestingVP, semEnv );
  16.272 + }
  16.273 +void inline
  16.274 +handleStartDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
  16.275 +                      VPThdSemEnv *semEnv )
  16.276 + { VPThdSingleton *singleton;
  16.277 +
  16.278 +   if( *(semReq->singletonPtrAddr) == NULL )
  16.279 +    { singleton                 = VMS__malloc( sizeof(VPThdSingleton) );
  16.280 +      singleton->waitQ          = makeVMSPrivQ();
  16.281 +      singleton->endInstrAddr   = 0x0;
  16.282 +      singleton->hasBeenStarted = FALSE;
  16.283 +      singleton->hasFinished    = FALSE;
  16.284 +      *(semReq->singletonPtrAddr)  = singleton;
  16.285 +    }
  16.286 +   else
  16.287 +      singleton = *(semReq->singletonPtrAddr);
  16.288 +   handleStartSingleton_helper( singleton, requestingVP, semEnv );
  16.289 + }
  16.290 +
  16.291 +
  16.292 +void inline
  16.293 +handleEndSingleton_helper( VPThdSingleton *singleton, SlaveVP *requestingVP,
  16.294 +                           VPThdSemEnv    *semEnv )
  16.295 + { PrivQueueStruc *waitQ;
  16.296 +   int32           numWaiting, i;
  16.297 +   SlaveVP      *resumingVP;
  16.298 +
  16.299 +   if( singleton->hasFinished )
  16.300 +    { //by definition, only one slave should ever be able to run end singleton
   16.301 +      // so if this is true, it is an error
  16.302 +      //VMS__throw_exception( "singleton code ran twice", requestingVP, NULL);
  16.303 +    }
  16.304 +
  16.305 +   singleton->hasFinished = TRUE;
  16.306 +   waitQ = singleton->waitQ;
  16.307 +   numWaiting = numInPrivQ( waitQ );
  16.308 +   for( i = 0; i < numWaiting; i++ )
  16.309 +    {    //they will resume inside start singleton, then jmp to end singleton
  16.310 +      resumingVP = readPrivQ( waitQ );
  16.311 +      resumingVP->dataRetFromReq = singleton->endInstrAddr;
  16.312 +      resume_procr( resumingVP, semEnv );
  16.313 +    }
  16.314 +
  16.315 +   resume_procr( requestingVP, semEnv );
  16.316 +
  16.317 + }
  16.318 +void inline
  16.319 +handleEndFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
  16.320 +                        VPThdSemEnv *semEnv )
  16.321 + {
  16.322 +   VPThdSingleton   *singleton;
  16.323 +
  16.324 +   singleton = &(semEnv->fnSingletons[ semReq->singletonID ]);
  16.325 +   handleEndSingleton_helper( singleton, requestingVP, semEnv );
  16.326 + }
  16.327 +void inline
  16.328 +handleEndDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
  16.329 +                        VPThdSemEnv *semEnv )
  16.330 + {
  16.331 +   VPThdSingleton   *singleton;
  16.332 +
  16.333 +   singleton = *(semReq->singletonPtrAddr);
  16.334 +   handleEndSingleton_helper( singleton, requestingVP, semEnv );
  16.335 + }
  16.336 +
  16.337 +
   16.338 +/*This executes the function in the masterVP: take the function
   16.339 + * pointer out of the request, call it, then resume the VP.
  16.340 + */
  16.341 +void inline
  16.342 +handleAtomic(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv)
  16.343 + {
  16.344 +   semReq->fnToExecInMaster( semReq->dataForFn );
  16.345 +   resume_procr( requestingVP, semEnv );
  16.346 + }
  16.347 +
   16.348 +/*First, it looks at the VP's semantic data, to see the highest
   16.349 + * transactionID that VP has already entered.  If the current ID is
   16.350 + * smaller than that, the transactions are mis-ordered, so it throws an
   16.351 + * exception stating a bug in the code.
   16.352 + *Otherwise it records the current ID
   16.353 + * as the highest, and adds the ID to a linked list of IDs entered -- the
   16.354 + * list is used to check that exits are properly nested.
   16.355 + *Next it uses transactionID as index into an array of transaction
  16.356 + * structures.
  16.357 + *If the "VP_currently_executing" field is non-null, then put requesting VP
  16.358 + * into queue in the struct.  (At some point a holder will request
  16.359 + * end-transaction, which will take this VP from the queue and resume it.)
  16.360 + *If NULL, then write requesting into the field and resume.
  16.361 + */
  16.362 +void inline
  16.363 +handleTransStart( VPThdSemReq *semReq, SlaveVP *requestingVP,
  16.364 +                  VPThdSemEnv *semEnv )
  16.365 + { VPThdSemData *semData;
  16.366 +   TransListElem *nextTransElem;
  16.367 +
  16.368 +      //check ordering of entering transactions is correct
  16.369 +   semData = requestingVP->semanticData;
  16.370 +   if( semData->highestTransEntered > semReq->transID )
  16.371 +    {    //throw VMS exception, which shuts down VMS.
  16.372 +      VMS__throw_exception( "transID smaller than prev", requestingVP, NULL);
  16.373 +    }
  16.374 +      //add this trans ID to the list of transactions entered -- check when
  16.375 +      // end a transaction
  16.376 +   semData->highestTransEntered = semReq->transID;
  16.377 +   nextTransElem = VMS_PI__malloc( sizeof(TransListElem) );
  16.378 +   nextTransElem->transID = semReq->transID;
  16.379 +   nextTransElem->nextTrans = semData->lastTransEntered;
  16.380 +   semData->lastTransEntered = nextTransElem;
  16.381 +
  16.382 +      //get the structure for this transaction ID
   16.383 +   VPThdTrans *transStruc;
   16.384 +   transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
  16.385 +
  16.386 +   if( transStruc->VPCurrentlyExecuting == NULL )
  16.387 +    {
  16.388 +      transStruc->VPCurrentlyExecuting = requestingVP;
  16.389 +      resume_procr( requestingVP, semEnv );
  16.390 +    }
  16.391 +   else
  16.392 +    {    //note, might make future things cleaner if save request with VP and
  16.393 +         // add this trans ID to the linked list when gets out of queue.
  16.394 +         // but don't need for now, and lazy..
  16.395 +      writePrivQ( requestingVP, transStruc->waitingVPQ );
  16.396 +    }
  16.397 + }
  16.398 +
  16.399 +
  16.400 +/*Use the trans ID to get the transaction structure from the array.
  16.401 + *Look at VP_currently_executing to be sure it's same as requesting VP.
  16.402 + * If different, throw an exception, stating there's a bug in the code.
  16.403 + *Next, take the first element off the list of entered transactions.
  16.404 + * Check to be sure the ending transaction is the same ID as the next on
  16.405 + * the list.  If not, incorrectly nested so throw an exception.
  16.406 + *
   16.407 + *Next, take the next waiting VP from the queue in the structure.
   16.408 + *If the queue is empty, set the VP_currently_executing field to NULL and
   16.409 + * resume the requesting VP.
   16.410 + *If it gets something, set VP_currently_executing to the VP from the
   16.411 + * queue, then resume both.
  16.412 + */
  16.413 +void inline
  16.414 +handleTransEnd( VPThdSemReq *semReq, SlaveVP *requestingVP,
  16.415 +                VPThdSemEnv *semEnv)
  16.416 + { VPThdSemData    *semData;
  16.417 +   SlaveVP       *waitingVP;
  16.418 +   VPThdTrans      *transStruc;
  16.419 +   TransListElem   *lastTrans;
  16.420 +
  16.421 +   transStruc = &(semEnv->transactionStrucs[ semReq->transID ]);
  16.422 +
  16.423 +      //make sure transaction ended in same VP as started it.
  16.424 +   if( transStruc->VPCurrentlyExecuting != requestingVP )
  16.425 +    {
  16.426 +      VMS__throw_exception( "trans ended in diff VP", requestingVP, NULL );
  16.427 +    }
  16.428 +
  16.429 +      //make sure nesting is correct -- last ID entered should == this ID
  16.430 +   semData = requestingVP->semanticData;
  16.431 +   lastTrans = semData->lastTransEntered;
  16.432 +   if( lastTrans->transID != semReq->transID )
  16.433 +    {
  16.434 +      VMS__throw_exception( "trans incorrectly nested", requestingVP, NULL );
  16.435 +    }
  16.436 +
  16.437 +   semData->lastTransEntered = semData->lastTransEntered->nextTrans;
  16.438 +
  16.439 +
  16.440 +   waitingVP = readPrivQ( transStruc->waitingVPQ );
  16.441 +   transStruc->VPCurrentlyExecuting = waitingVP;
  16.442 +
  16.443 +   if( waitingVP != NULL )
  16.444 +      resume_procr( waitingVP, semEnv );
  16.445 +
  16.446 +   resume_procr( requestingVP, semEnv );
  16.447 + }
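The transaction handlers above keep, per VP, the highest transaction ID entered plus a linked list of currently-open transactions; `handleTransEnd` requires the ending ID to match the head of that list, which enforces proper nesting. A stripped-down sketch of just that bookkeeping, where `-1` return values stand in for `VMS__throw_exception` and the type names are simplified stand-ins:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for VPThdSemData / TransListElem. */
typedef struct TransElem { int transID; struct TransElem *next; } TransElem;
typedef struct { int highestTransEntered; TransElem *lastTransEntered; } SemData;

int enter_trans( SemData *d, int transID )
 { if( d->highestTransEntered > transID )
      return -1;                         /* "transID smaller than prev" */
   d->highestTransEntered = transID;
   TransElem *e = malloc( sizeof *e );   /* push onto list of open trans */
   e->transID = transID;
   e->next    = d->lastTransEntered;
   d->lastTransEntered = e;
   return 0;
 }

int end_trans( SemData *d, int transID )
 { TransElem *last = d->lastTransEntered;
   if( last == NULL || last->transID != transID )
      return -1;                         /* "trans incorrectly nested" */
   d->lastTransEntered = last->next;     /* pop the matching entry */
   free( last );
   return 0;
 }
```

The LIFO pop in `end_trans` is what makes overlapping (rather than nested) transaction pairs an error, exactly as the exception messages in the handlers describe.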
    17.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    17.2 +++ b/Vthread_Request_Handlers.h	Thu Mar 01 13:20:51 2012 -0800
    17.3 @@ -0,0 +1,57 @@
    17.4 +/*
    17.5 + *  Copyright 2009 OpenSourceStewardshipFoundation.org
    17.6 + *  Licensed under GNU General Public License version 2
    17.7 + *
    17.8 + * Author: seanhalle@yahoo.com
    17.9 + *
   17.10 + */
   17.11 +
   17.12 +#ifndef _VPThread_REQ_H
   17.13 +#define	_VPThread_REQ_H
   17.14 +
   17.15 +#include "VPThread.h"
   17.16 +
   17.17 +/*This header defines everything specific to the VPThread semantic plug-in
   17.18 + */
   17.19 +
   17.20 +inline void
   17.21 +handleMakeMutex(  VPThdSemReq *semReq, VPThdSemEnv *semEnv);
   17.22 +inline void
   17.23 +handleMutexLock(  VPThdSemReq *semReq, VPThdSemEnv *semEnv);
   17.24 +inline void
   17.25 +handleMutexUnlock(VPThdSemReq *semReq, VPThdSemEnv *semEnv);
   17.26 +inline void
   17.27 +handleMakeCond(   VPThdSemReq *semReq, VPThdSemEnv *semEnv);
   17.28 +inline void
   17.29 +handleCondWait(   VPThdSemReq *semReq, VPThdSemEnv *semEnv);
   17.30 +inline void
   17.31 +handleCondSignal( VPThdSemReq *semReq, VPThdSemEnv *semEnv);
   17.32 +void inline
   17.33 +handleMalloc(VPThdSemReq *semReq, SlaveVP *requestingVP,VPThdSemEnv *semEnv);
   17.34 +void inline
   17.35 +handleFree( VPThdSemReq *semReq, SlaveVP *requestingVP, VPThdSemEnv *semEnv);
   17.36 +inline void
   17.37 +handleStartFnSingleton( VPThdSemReq *semReq, SlaveVP *reqstingVP,
   17.38 +                      VPThdSemEnv *semEnv );
   17.39 +inline void
   17.40 +handleEndFnSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
   17.41 +                    VPThdSemEnv *semEnv );
   17.42 +inline void
   17.43 +handleStartDataSingleton( VPThdSemReq *semReq, SlaveVP *reqstingVP,
   17.44 +                      VPThdSemEnv *semEnv );
   17.45 +inline void
   17.46 +handleEndDataSingleton( VPThdSemReq *semReq, SlaveVP *requestingVP,
   17.47 +                    VPThdSemEnv *semEnv );
   17.48 +void inline
   17.49 +handleAtomic( VPThdSemReq *semReq, SlaveVP *requestingVP,
   17.50 +              VPThdSemEnv *semEnv);
   17.51 +void inline
   17.52 +handleTransStart( VPThdSemReq *semReq, SlaveVP *requestingVP,
   17.53 +                  VPThdSemEnv *semEnv );
   17.54 +void inline
   17.55 +handleTransEnd( VPThdSemReq *semReq, SlaveVP *requestingVP,
   17.56 +                VPThdSemEnv *semEnv);
   17.57 +
   17.58 +
   17.59 +#endif	/* _VPThread_REQ_H */
   17.60 +
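Of the handlers declared in this header, the condition-variable pair has the subtlest behavior (implemented in the request-handlers file): `handleCondWait` parks the VP on the cond's queue and releases the partner mutex, while `handleCondSignal` moves one waiter to the front of the mutex's waiting queue, guaranteeing the waiter gets the lock next. A sketch of that handoff with simplified stand-in types -- the capacity-8 array deques here replace VMS's private queues:

```c
#include <assert.h>
#include <stddef.h>

typedef struct { void *slot[8]; int n; } Deque;     /* slot[0] is the front */

static void pushBack( Deque *q, void *v ) { q->slot[ q->n++ ] = v; }
static void pushFront( Deque *q, void *v )
 { int i;
   for( i = q->n; i > 0; i-- ) q->slot[i] = q->slot[i-1];
   q->slot[0] = v;  q->n += 1;
 }
static void *popFront( Deque *q )
 { int i;  void *v;
   if( q->n == 0 ) return NULL;
   v = q->slot[0];  q->n -= 1;
   for( i = 0; i < q->n; i++ ) q->slot[i] = q->slot[i+1];
   return v;
 }

typedef struct { void *holderOfLock; Deque waiters; } Mutex;
typedef struct { Mutex *partnerMutex; Deque waiters; } Cond;

/* wait: park the VP on the cond, then release the partner mutex */
void cond_wait( Cond *c, void *vp )
 { pushBack( &c->waiters, vp );
   c->partnerMutex->holderOfLock = popFront( &c->partnerMutex->waiters );
 }

/* signal: the woken waiter jumps to the FRONT of the mutex queue,
 * so it is guaranteed to acquire the lock next */
void cond_signal( Cond *c )
 { void *vp = popFront( &c->waiters );
   if( vp != NULL ) pushFront( &c->partnerMutex->waiters, vp );
 }
```

Queue-jumping on signal is the design choice the implementation comments call out: unlike Posix, where a signaled thread competes for the mutex, here the waiter's priority over ordinary lock requesters is built in.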
    18.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    18.2 +++ b/Vthread_helper.c	Thu Mar 01 13:20:51 2012 -0800
    18.3 @@ -0,0 +1,48 @@
    18.4 +
    18.5 +#include <stddef.h>
    18.6 +
    18.7 +#include "VMS/VMS.h"
    18.8 +#include "VPThread.h"
    18.9 +
   18.10 +/*Re-use this in the entry-point fn
   18.11 + */
   18.12 +inline SlaveVP *
   18.13 +VPThread__create_procr_helper( TopLevelFnPtr fnPtr, void *initData,
   18.14 +                          VPThdSemEnv *semEnv,    int32 coreToScheduleOnto )
   18.15 + { SlaveVP      *newVP;
   18.16 +   VPThdSemData   *semData;
   18.17 +
   18.18 +      //This is running in master, so use internal version
   18.19 +   newVP = VMS__create_procr( fnPtr, initData );
   18.20 +
   18.21 +   semEnv->numVP += 1;
   18.22 +
   18.23 +   semData = VMS__malloc( sizeof(VPThdSemData) );
   18.24 +   semData->highestTransEntered = -1;
   18.25 +   semData->lastTransEntered    = NULL;
   18.26 +
   18.27 +   newVP->semanticData = semData;
   18.28 +
   18.29 +   //=================== Assign new processor to a core =====================
   18.30 +   #ifdef SEQUENTIAL
   18.31 +   newVP->coreAnimatedBy = 0;
   18.32 +
   18.33 +   #else
   18.34 +
   18.35 +   if(coreToScheduleOnto < 0 || coreToScheduleOnto >= NUM_CORES )
   18.36 +    {    //out-of-range, so round-robin assignment
   18.37 +      newVP->coreAnimatedBy = semEnv->nextCoreToGetNewVP;
   18.38 +
   18.39 +      if( semEnv->nextCoreToGetNewVP >= NUM_CORES - 1 )
   18.40 +          semEnv->nextCoreToGetNewVP  = 0;
   18.41 +      else
   18.42 +          semEnv->nextCoreToGetNewVP += 1;
   18.43 +    }
   18.44 +   else //core num in-range, so use it
   18.45 +    { newVP->coreAnimatedBy = coreToScheduleOnto;
   18.46 +    }
   18.47 +   #endif
   18.48 +   //========================================================================
   18.49 +
   18.50 +   return newVP;
   18.51 + }
   18.52 \ No newline at end of file
    19.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    19.2 +++ b/Vthread_helper.h	Thu Mar 01 13:20:51 2012 -0800
    19.3 @@ -0,0 +1,19 @@
    19.4 +/* 
    19.5 + * File:   VPThread_helper.h
    19.6 + * Author: msach
    19.7 + *
    19.8 + * Created on June 10, 2011, 12:20 PM
    19.9 + */
   19.10 +
   19.11 +#include "VMS/VMS.h"
   19.12 +#include "VPThread.h"
   19.13 +
   19.14 +#ifndef VPTHREAD_HELPER_H
   19.15 +#define	VPTHREAD_HELPER_H
   19.16 +
   19.17 +inline SlaveVP *
   19.18 +VPThread__create_procr_helper( TopLevelFnPtr fnPtr, void *initData,
   19.19 +                          VPThdSemEnv *semEnv,    int32 coreToScheduleOnto );
   19.20 +
   19.21 +#endif	/* VPTHREAD_HELPER_H */
   19.22 +
    20.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    20.2 +++ b/Vthread_lib.c	Thu Mar 01 13:20:51 2012 -0800
    20.3 @@ -0,0 +1,626 @@
    20.4 +/*
    20.5 + * Copyright 2010  OpenSourceCodeStewardshipFoundation
    20.6 + *
    20.7 + * Licensed under BSD
    20.8 + */
    20.9 +
   20.10 +#include <stdio.h>
   20.11 +#include <stdlib.h>
   20.12 +#include <malloc.h>
   20.13 +
   20.14 +#include "VMS/VMS.h"
   20.15 +#include "VPThread.h"
   20.16 +#include "VPThread_helper.h"
   20.17 +#include "VMS/Queue_impl/PrivateQueue.h"
   20.18 +#include "VMS/Hash_impl/PrivateHash.h"
   20.19 +
   20.20 +
   20.21 +//==========================================================================
   20.22 +
   20.23 +void
   20.24 +VPThread__init();
   20.25 +
   20.26 +void
   20.27 +VPThread__init_Seq();
   20.28 +
   20.29 +void
   20.30 +VPThread__init_Helper();
   20.31 +
   20.32 +
   20.33 +//===========================================================================
   20.34 +
   20.35 +
   20.36 +/*These are the library functions *called in the application*
   20.37 + * 
   20.38 + *There's a pattern for the outside sequential code to interact with the
   20.39 + * VMS_HW code.
   20.40 + *The VMS_HW system is inside a boundary..  every VPThread system is in its
   20.41 + * own directory that contains the functions for each of the processor types.
   20.42 + * One of the processor types is the "seed" processor that starts the
   20.43 + * cascade of creating all the processors that do the work.
   20.44 + *So, in the directory is a file called "EntryPoint.c" that contains the
   20.45 + * function, named appropriately to the work performed, that the outside
   20.46 + * sequential code calls.  This function follows a pattern:
   20.47 + *1) it calls VPThread__init()
   20.48 + *2) it creates the initial data for the seed processor, which is passed
   20.49 + *    in to the function
   20.50 + *3) it creates the seed VPThread processor, with the data to start it with.
   20.51 + *4) it calls startVPThreadThenWaitUntilWorkDone
    20.52 + *5) it gets the returnValue from the transfer struct and returns that
   20.53 + *    from the function
   20.54 + *
   20.55 + *For now, a new VPThread system has to be created via VPThread__init every
   20.56 + * time an entry point function is called -- later, might add letting the
   20.57 + * VPThread system be created once, and let all the entry points just reuse
    20.58 + * it -- we want to keep this as simple as possible now, and see through
    20.59 + * use what makes sense later..
   20.60 + */
   20.61 +
   20.62 +
   20.63 +
   20.64 +//===========================================================================
   20.65 +
   20.66 +/*This is the "border crossing" function -- the thing that crosses from the
   20.67 + * outside world, into the VMS_HW world.  It initializes and starts up the
   20.68 + * VMS system, then creates one processor from the specified function and
    20.69 + * puts it into the readyQ.  From that point, that one function is
    20.70 + * responsible for creating all the other processors, which then create
    20.71 + * others, and so forth.
   20.72 + *When all the processors, including the seed, have dissipated, then this
   20.73 + * function returns.  The results will have been written by side-effect via
   20.74 + * pointers read from, or written into initData.
   20.75 + *
   20.76 + *NOTE: no Threads should exist in the outside program that might touch
   20.77 + * any of the data reachable from initData passed in to here
   20.78 + */
   20.79 +void
   20.80 +VPThread__create_seed_procr_and_do_work( TopLevelFnPtr fnPtr, void *initData )
   20.81 + { VPThdSemEnv *semEnv;
   20.82 +   SlaveVP *seedVP;
   20.83 +
   20.84 +   #ifdef SEQUENTIAL
   20.85 +   VPThread__init_Seq();  //debug sequential exe
   20.86 +   #else
   20.87 +   VPThread__init();      //normal multi-thd
   20.88 +   #endif
   20.89 +   semEnv = _VMSMasterEnv->semanticEnv;
   20.90 +
   20.91 +      //VPThread starts with one processor, which is put into initial environ,
   20.92 +      // and which then calls create() to create more, thereby expanding work
   20.93 +   seedVP = VPThread__create_procr_helper( fnPtr, initData, semEnv, -1 );
   20.94 +
   20.95 +   resume_procr( seedVP, semEnv );
   20.96 +
   20.97 +   #ifdef SEQUENTIAL
   20.98 +   VMS__start_the_work_then_wait_until_done_Seq();  //debug sequential exe
   20.99 +   #else
  20.100 +   VMS__start_the_work_then_wait_until_done();      //normal multi-thd
  20.101 +   #endif
  20.102 +
  20.103 +   VPThread__cleanup_after_shutdown();
  20.104 + }
  20.105 +
  20.106 +
  20.107 +inline int32
  20.108 +VPThread__giveMinWorkUnitCycles( float32 percentOverhead )
  20.109 + {
  20.110 +   return MIN_WORK_UNIT_CYCLES;
  20.111 + }
  20.112 +
  20.113 +inline int32
  20.114 +VPThread__giveIdealNumWorkUnits()
  20.115 + {
  20.116 +   return NUM_SCHED_SLOTS * NUM_CORES;
  20.117 + }
  20.118 +
  20.119 +inline int32
  20.120 +VPThread__give_number_of_cores_to_schedule_onto()
  20.121 + {
  20.122 +   return NUM_CORES;
  20.123 + }
  20.124 +
  20.125 +/*For now, use TSC -- later, make these two macros with assembly that first
  20.126 + * saves jump point, and second jumps back several times to get reliable time
  20.127 + */
  20.128 +inline void
  20.129 +VPThread__start_primitive()
  20.130 + { saveLowTimeStampCountInto( ((VPThdSemEnv *)(_VMSMasterEnv->semanticEnv))->
  20.131 +                              primitiveStartTime );
  20.132 + }
  20.133 +
  20.134 +/*Just quick and dirty for now -- make reliable later
  20.135 + * will want this to jump back several times -- to be sure cache is warm
  20.136 + * because don't want comm time included in calc-time measurement -- and
  20.137 + * also to throw out any "weird" values due to OS interrupt or TSC rollover
  20.138 + */
  20.139 +inline int32
  20.140 +VPThread__end_primitive_and_give_cycles()
  20.141 + { int32 endTime, startTime;
  20.142 +   //TODO: fix by repeating time-measurement
  20.143 +   saveLowTimeStampCountInto( endTime );
  20.144 +   startTime=((VPThdSemEnv*)(_VMSMasterEnv->semanticEnv))->primitiveStartTime;
  20.145 +   return (endTime - startTime);
  20.146 + }
  20.147 +
  20.148 +//===========================================================================
  20.149 +//
  20.150 +/*Initializes all the data-structures for a VPThread system -- but doesn't
  20.151 + * start it running yet!
  20.152 + *
  20.153 + * 
  20.154 + *This sets up the semantic layer over the VMS system
  20.155 + *
  20.156 + *First, calls VMS_Setup, then creates own environment, making it ready
  20.157 + * for creating the seed processor and then starting the work.
  20.158 + */
  20.159 +void
  20.160 +VPThread__init()
  20.161 + {
  20.162 +   VMS__init();
  20.163 +   //masterEnv, a global var, now is partially set up by init_VMS
  20.164 +   
  20.165 +   //Moved here from VMS.c because this is not parallel construct independent
  20.166 +   MakeTheMeasHists();
  20.167 +
  20.168 +   VPThread__init_Helper();
  20.169 + }
  20.170 +
  20.171 +#ifdef SEQUENTIAL
  20.172 +void
  20.173 +VPThread__init_Seq()
  20.174 + {
  20.175 +   VMS__init_Seq();
  20.176 +   flushRegisters();
  20.177 +      //masterEnv, a global var, now is partially set up by init_VMS
  20.178 +
  20.179 +   VPThread__init_Helper();
  20.180 + }
  20.181 +#endif
  20.182 +
  20.183 +void
  20.184 +VPThread__init_Helper()
  20.185 + { VPThdSemEnv       *semanticEnv;
  20.186 +   PrivQueueStruc **readyVPQs;
  20.187 +   int              coreIdx, i;
  20.188 + 
  20.189 +      //Hook up the semantic layer's plug-ins to the Master virt procr
  20.190 +   _VMSMasterEnv->requestHandler = &VPThread__Request_Handler;
  20.191 +   _VMSMasterEnv->slaveScheduler = &VPThread__schedule_virt_procr;
  20.192 +
  20.193 +      //create the semantic layer's environment (all its data) and add to
  20.194 +      // the master environment
  20.195 +   semanticEnv = VMS__malloc( sizeof( VPThdSemEnv ) );
  20.196 +   _VMSMasterEnv->semanticEnv = semanticEnv;
  20.197 +
  20.198 +      //create the ready queue
  20.199 +   readyVPQs = VMS__malloc( NUM_CORES * sizeof(PrivQueueStruc *) );
  20.200 +
  20.201 +   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
  20.202 +    {
  20.203 +      readyVPQs[ coreIdx ] = makeVMSPrivQ();
  20.204 +    }
  20.205 +   
  20.206 +   semanticEnv->readyVPQs          = readyVPQs;
  20.207 +   
  20.208 +   semanticEnv->numVP          = 0;
  20.209 +   semanticEnv->nextCoreToGetNewVP = 0;
  20.210 +
  20.211 +   semanticEnv->mutexDynArrayInfo  =
  20.212 +      makePrivDynArrayOfSize( (void*)&(semanticEnv->mutexDynArray), INIT_NUM_MUTEX );
  20.213 +
  20.214 +   semanticEnv->condDynArrayInfo   =
  20.215 +      makePrivDynArrayOfSize( (void*)&(semanticEnv->condDynArray),  INIT_NUM_COND );
  20.216 +   
  20.217 +   //TODO: bug -- turn these arrays into dyn arrays to eliminate limit
  20.218 +   //semanticEnv->singletonHasBeenExecutedFlags = makeDynArrayInfo( );
  20.219 +   //semanticEnv->transactionStrucs = makeDynArrayInfo( );
  20.220 +   for( i = 0; i < NUM_STRUCS_IN_SEM_ENV; i++ )
  20.221 +    {
  20.222 +      semanticEnv->fnSingletons[i].endInstrAddr      = NULL;
  20.223 +      semanticEnv->fnSingletons[i].hasBeenStarted    = FALSE;
  20.224 +      semanticEnv->fnSingletons[i].hasFinished       = FALSE;
  20.225 +      semanticEnv->fnSingletons[i].waitQ             = makeVMSPrivQ();
  20.226 +      semanticEnv->transactionStrucs[i].waitingVPQ   = makeVMSPrivQ();
  20.227 +    }   
  20.228 + }
  20.229 +
  20.230 +
  20.231 +/*Frees any memory allocated by VPThread__init() then calls VMS__shutdown
  20.232 + */
  20.233 +void
  20.234 +VPThread__cleanup_after_shutdown()
  20.235 + { /*VPThdSemEnv *semEnv;
  20.236 +   int32           coreIdx,     idx,   highestIdx;
  20.237 +   VPThdMutex      **mutexArray, *mutex;
  20.238 +   VPThdCond       **condArray, *cond; */
  20.239 + 
  20.240 + /* It's all allocated inside VMS's big chunk -- that's about to be freed, so
  20.241 + *  nothing to do here
  20.242 +  semEnv = _VMSMasterEnv->semanticEnv;
  20.243 +
  20.244 +//TODO: double check that all sem env locations freed
  20.245 +
  20.246 +   for( coreIdx = 0; coreIdx < NUM_CORES; coreIdx++ )
  20.247 +    {
  20.248 +      free( semEnv->readyVPQs[coreIdx]->startOfData );
  20.249 +      free( semEnv->readyVPQs[coreIdx] );
  20.250 +    }
  20.251 +   
  20.252 +   free( semEnv->readyVPQs );
  20.253 +
  20.254 +   
  20.255 +   //==== Free mutexes and mutex array ====
  20.256 +   mutexArray = semEnv->mutexDynArray->array;
  20.257 +   highestIdx = semEnv->mutexDynArray->highestIdxInArray;
  20.258 +   for( idx=0; idx < highestIdx; idx++ )
  20.259 +    { mutex = mutexArray[ idx ];
  20.260 +      if( mutex == NULL ) continue;
  20.261 +      free( mutex );
  20.262 +    }
  20.263 +   free( mutexArray );
  20.264 +   free( semEnv->mutexDynArray );
  20.265 +   //======================================
  20.266 +   
  20.267 +
  20.268 +   //==== Free conds and cond array ====
  20.269 +   condArray  = semEnv->condDynArray->array;
  20.270 +   highestIdx = semEnv->condDynArray->highestIdxInArray;
  20.271 +   for( idx=0; idx < highestIdx; idx++ )
  20.272 +    { cond = condArray[ idx ];
  20.273 +      if( cond == NULL ) continue;
  20.274 +      free( cond );
  20.275 +    }
  20.276 +   free( condArray );
  20.277 +   free( semEnv->condDynArray );
  20.278 +   //===================================
  20.279 +
  20.280 +   
  20.281 +   free( _VMSMasterEnv->semanticEnv );
  20.282 +  */
  20.283 +   VMS__cleanup_at_end_of_shutdown();
  20.284 + }
  20.285 +
  20.286 +
  20.287 +//===========================================================================
  20.288 +
  20.289 +/*
  20.290 + */
  20.291 +inline SlaveVP *
  20.292 +VPThread__create_thread( TopLevelFnPtr fnPtr, void *initData,
  20.293 +                          SlaveVP *creatingVP )
  20.294 + { VPThdSemReq  reqData;
  20.295 +
  20.296 +      //the semantic request data is on the stack and disappears when this
  20.297 +      // call returns -- it's guaranteed to remain in the VP's stack for as
  20.298 +      // long as the VP is suspended.
   20.299 +   reqData.reqType            = 0; //type known because this is a VMS create req
  20.300 +   reqData.coreToScheduleOnto = -1; //means round-robin schedule
  20.301 +   reqData.fnPtr              = fnPtr;
  20.302 +   reqData.initData           = initData;
  20.303 +   reqData.requestingVP       = creatingVP;
  20.304 +
  20.305 +   VMS__send_create_procr_req( &reqData, creatingVP );
  20.306 +
  20.307 +   return creatingVP->dataRetFromReq;
  20.308 + }
  20.309 +
  20.310 +inline SlaveVP *
  20.311 +VPThread__create_thread_with_affinity( TopLevelFnPtr fnPtr, void *initData,
  20.312 +                           SlaveVP *creatingVP,  int32  coreToScheduleOnto )
  20.313 + { VPThdSemReq  reqData;
  20.314 +
  20.315 +      //the semantic request data is on the stack and disappears when this
  20.316 +      // call returns -- it's guaranteed to remain in the VP's stack for as
  20.317 +      // long as the VP is suspended.
   20.318 +   reqData.reqType            = 0; //type known because this is a VMS create req
  20.319 +   reqData.coreToScheduleOnto = coreToScheduleOnto;
  20.320 +   reqData.fnPtr              = fnPtr;
  20.321 +   reqData.initData           = initData;
  20.322 +   reqData.requestingVP       = creatingVP;
  20.323 +
   20.324 +   VMS__send_create_procr_req( &reqData, creatingVP );
   20.325 +
   20.326 +   return creatingVP->dataRetFromReq;
   20.327 + }
  20.326 +
  20.327 +inline void
  20.328 +VPThread__dissipate_thread( SlaveVP *procrToDissipate )
  20.329 + {
  20.330 +   VMS__send_dissipate_req( procrToDissipate );
  20.331 + }
  20.332 +
  20.333 +
  20.334 +//===========================================================================
  20.335 +
  20.336 +void *
  20.337 +VPThread__malloc( size_t sizeToMalloc, SlaveVP *animVP )
  20.338 + { VPThdSemReq  reqData;
  20.339 +
  20.340 +   reqData.reqType      = malloc_req;
  20.341 +   reqData.sizeToMalloc = sizeToMalloc;
  20.342 +   reqData.requestingVP = animVP;
  20.343 +
  20.344 +   VMS__send_sem_request( &reqData, animVP );
  20.345 +
  20.346 +   return animVP->dataRetFromReq;
  20.347 + }
  20.348 +
  20.349 +
  20.350 +/*Sends request to Master, which does the work of freeing
  20.351 + */
  20.352 +void
  20.353 +VPThread__free( void *ptrToFree, SlaveVP *animVP )
  20.354 + { VPThdSemReq  reqData;
  20.355 +
  20.356 +   reqData.reqType      = free_req;
  20.357 +   reqData.ptrToFree    = ptrToFree;
  20.358 +   reqData.requestingVP = animVP;
  20.359 +
  20.360 +   VMS__send_sem_request( &reqData, animVP );
  20.361 + }
  20.362 +
  20.363 +
  20.364 +//===========================================================================
  20.365 +
  20.366 +inline void
  20.367 +VPThread__set_globals_to( void *globals )
  20.368 + {
  20.369 +   ((VPThdSemEnv *)
  20.370 +    (_VMSMasterEnv->semanticEnv))->applicationGlobals = globals;
  20.371 + }
  20.372 +
  20.373 +inline void *
  20.374 +VPThread__give_globals()
  20.375 + {
  20.376 +   return((VPThdSemEnv *) (_VMSMasterEnv->semanticEnv))->applicationGlobals;
  20.377 + }
  20.378 +
  20.379 +
  20.380 +//===========================================================================
  20.381 +
  20.382 +inline int32
  20.383 +VPThread__make_mutex( SlaveVP *animVP )
  20.384 + { VPThdSemReq  reqData;
  20.385 +
  20.386 +   reqData.reqType      = make_mutex;
  20.387 +   reqData.requestingVP = animVP;
  20.388 +
  20.389 +   VMS__send_sem_request( &reqData, animVP );
  20.390 +
  20.391 +   return (int32)animVP->dataRetFromReq; //mutexid is 32bit wide
  20.392 + }
  20.393 +
  20.394 +inline void
  20.395 +VPThread__mutex_lock( int32 mutexIdx, SlaveVP *acquiringVP )
  20.396 + { VPThdSemReq  reqData;
  20.397 +
  20.398 +   reqData.reqType      = mutex_lock;
  20.399 +   reqData.mutexIdx     = mutexIdx;
  20.400 +   reqData.requestingVP = acquiringVP;
  20.401 +
  20.402 +   VMS__send_sem_request( &reqData, acquiringVP );
  20.403 + }
  20.404 +
  20.405 +inline void
  20.406 +VPThread__mutex_unlock( int32 mutexIdx, SlaveVP *releasingVP )
  20.407 + { VPThdSemReq  reqData;
  20.408 +
  20.409 +   reqData.reqType      = mutex_unlock;
  20.410 +   reqData.mutexIdx     = mutexIdx;
  20.411 +   reqData.requestingVP = releasingVP;
  20.412 +
  20.413 +   VMS__send_sem_request( &reqData, releasingVP );
  20.414 + }
  20.415 +
  20.416 +
  20.417 +//=======================
  20.418 +inline int32
   20.419 +VPThread__make_cond( int32 ownedMutexIdx, SlaveVP *animVP )
  20.420 + { VPThdSemReq  reqData;
  20.421 +
  20.422 +   reqData.reqType      = make_cond;
  20.423 +   reqData.mutexIdx     = ownedMutexIdx;
  20.424 +   reqData.requestingVP = animVP;
  20.425 +
  20.426 +   VMS__send_sem_request( &reqData, animVP );
  20.427 +
  20.428 +   return (int32)animVP->dataRetFromReq; //condIdx is 32 bit wide
  20.429 + }
  20.430 +
  20.431 +inline void
   20.432 +VPThread__cond_wait( int32 condIdx, SlaveVP *waitingVP )
  20.433 + { VPThdSemReq  reqData;
  20.434 +
  20.435 +   reqData.reqType      = cond_wait;
  20.436 +   reqData.condIdx      = condIdx;
  20.437 +   reqData.requestingVP = waitingVP;
  20.438 +
  20.439 +   VMS__send_sem_request( &reqData, waitingVP );
  20.440 + }
  20.441 +
   20.442 +inline void
  20.443 +VPThread__cond_signal( int32 condIdx, SlaveVP *signallingVP )
  20.444 + { VPThdSemReq  reqData;
  20.445 +
  20.446 +   reqData.reqType      = cond_signal;
  20.447 +   reqData.condIdx      = condIdx;
  20.448 +   reqData.requestingVP = signallingVP;
  20.449 +
  20.450 +   VMS__send_sem_request( &reqData, signallingVP );
  20.451 + }
  20.452 +
  20.453 +
  20.454 +//===========================================================================
  20.455 +//
  20.456 +/*A function singleton is a function whose body executes exactly once, on a
   20.457 + * single core, no matter how many times the function is called and no
  20.458 + * matter how many cores or the timing of cores calling it.
  20.459 + *
  20.460 + *A data singleton is a ticket attached to data.  That ticket can be used
  20.461 + * to get the data through the function exactly once, no matter how many
  20.462 + * times the data is given to the function, and no matter the timing of
  20.463 + * trying to get the data through from different cores.
  20.464 + */
  20.465 +
  20.466 +/*asm function declarations*/
  20.467 +void asm_save_ret_to_singleton(VPThdSingleton *singletonPtrAddr);
  20.468 +void asm_write_ret_from_singleton(VPThdSingleton *singletonPtrAddr);
  20.469 +
  20.470 +/*Fn singleton uses ID as index into array of singleton structs held in the
  20.471 + * semantic environment.
  20.472 + */
  20.473 +void
  20.474 +VPThread__start_fn_singleton( int32 singletonID,   SlaveVP *animVP )
  20.475 + {
  20.476 +   VPThdSemReq  reqData;
  20.477 +
  20.478 +      //
  20.479 +   reqData.reqType     = singleton_fn_start;
  20.480 +   reqData.singletonID = singletonID;
  20.481 +
  20.482 +   VMS__send_sem_request( &reqData, animVP );
  20.483 +   if( animVP->dataRetFromReq ) //will be 0 or addr of label in end singleton
  20.484 +    {
  20.485 +       VPThdSemEnv *semEnv = VMS__give_sem_env_for( animVP );
  20.486 +       asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
  20.487 +    }
  20.488 + }
  20.489 +
  20.490 +/*Data singleton hands addr of loc holding a pointer to a singleton struct.
  20.491 + * The start_data_singleton makes the structure and puts its addr into the
  20.492 + * location.
  20.493 + */
  20.494 +void
  20.495 +VPThread__start_data_singleton( VPThdSingleton **singletonAddr,  SlaveVP *animVP )
  20.496 + {
  20.497 +   VPThdSemReq  reqData;
  20.498 +
  20.499 +   if( *singletonAddr && (*singletonAddr)->hasFinished )
  20.500 +      goto JmpToEndSingleton;
  20.501 +      
  20.502 +   reqData.reqType       = singleton_data_start;
  20.503 +   reqData.singletonPtrAddr = singletonAddr;
  20.504 +
  20.505 +   VMS__send_sem_request( &reqData, animVP );
  20.506 +   if( animVP->dataRetFromReq ) //either 0 or end singleton's return addr
  20.507 +    {    
  20.508 +       JmpToEndSingleton:
  20.509 +       asm_write_ret_from_singleton(*singletonAddr);
  20.510 +
  20.511 +    }
  20.512 +   //now, simply return
  20.513 +   //will exit either from the start singleton call or the end-singleton call
  20.514 + }
  20.515 +
  20.516 +/*Uses ID as index into array of flags.  If flag already set, resumes from
  20.517 + * end-label.  Else, sets flag and resumes normally.
  20.518 + *
  20.519 + *Note, this call cannot be inlined because the instr addr at the label
  20.520 + * inside is shared by all invocations of a given singleton ID.
  20.521 + */
  20.522 +void
  20.523 +VPThread__end_fn_singleton( int32 singletonID, SlaveVP *animVP )
  20.524 + {
  20.525 +   VPThdSemReq  reqData;
  20.526 +
  20.527 +   //don't need this addr until after at least one singleton has reached
  20.528 +   // this function
  20.529 +   VPThdSemEnv *semEnv = VMS__give_sem_env_for( animVP );
  20.530 +   asm_write_ret_from_singleton(&(semEnv->fnSingletons[ singletonID]));
  20.531 +
  20.532 +   reqData.reqType     = singleton_fn_end;
  20.533 +   reqData.singletonID = singletonID;
  20.534 +
  20.535 +   VMS__send_sem_request( &reqData, animVP );
  20.536 + }
  20.537 +
  20.538 +void
  20.539 +VPThread__end_data_singleton(  VPThdSingleton **singletonPtrAddr, SlaveVP *animVP )
  20.540 + {
  20.541 +   VPThdSemReq  reqData;
  20.542 +
  20.543 +      //don't need this addr until after singleton struct has reached
  20.544 +      // this function for first time
  20.545 +      //do assembly that saves the return addr of this fn call into the
  20.546 +      // data singleton -- that data-singleton can only be given to exactly
  20.547 +      // one instance in the code of this function.  However, can use this
  20.548 +      // function in different places for different data-singletons.
  20.549 +
  20.550 +   asm_save_ret_to_singleton(*singletonPtrAddr);
  20.551 +
  20.552 +   reqData.reqType          = singleton_data_end;
  20.553 +   reqData.singletonPtrAddr = singletonPtrAddr;
  20.554 +
  20.555 +   VMS__send_sem_request( &reqData, animVP );
  20.556 + }
  20.557 +
  20.558 +
  20.559 +/*This executes the function in the masterVP, so it executes in isolation
  20.560 + * from any other copies -- only one copy of the function can ever execute
  20.561 + * at a time.
  20.562 + *
  20.563 + *It suspends to the master, and the request handler takes the function
  20.564 + * pointer out of the request and calls it, then resumes the VP.
  20.565 + *Only very short functions should be called this way -- for longer-running
  20.566 + * isolation, use transaction-start and transaction-end, which run the code
  20.567 + * between as work-code.
  20.568 + */
  20.569 +void
  20.570 +VPThread__animate_short_fn_in_isolation( PtrToAtomicFn ptrToFnToExecInMaster,
  20.571 +                                    void *data, SlaveVP *animVP )
  20.572 + {
  20.573 +   VPThdSemReq  reqData;
  20.574 +
  20.575 +      //
  20.576 +   reqData.reqType          = atomic;
  20.577 +   reqData.fnToExecInMaster = ptrToFnToExecInMaster;
  20.578 +   reqData.dataForFn        = data;
  20.579 +
  20.580 +   VMS__send_sem_request( &reqData, animVP );
  20.581 + }
  20.582 +
  20.583 +
  20.584 +/*This suspends to the master.
  20.585 + *First, it looks at the VP's data, to see the highest transactionID that VP
  20.586 + * already has entered.  If the current ID is not larger, it throws an
  20.587 + * exception stating a bug in the code.  Otherwise it puts the current ID
  20.588 + * there, and adds the ID to a linked list of IDs entered -- the list is
  20.589 + * used to check that exits are properly ordered.
   20.590 + *Next it uses transactionID as index into an array of transaction
  20.591 + * structures.
  20.592 + *If the "VP_currently_executing" field is non-null, then put requesting VP
  20.593 + * into queue in the struct.  (At some point a holder will request
  20.594 + * end-transaction, which will take this VP from the queue and resume it.)
   20.595 + *If NULL, then write the requesting VP into the field and resume.
  20.596 + */
  20.597 +void
  20.598 +VPThread__start_transaction( int32 transactionID, SlaveVP *animVP )
  20.599 + {
  20.600 +   VPThdSemReq  reqData;
  20.601 +
  20.602 +      //
  20.603 +   reqData.reqType     = trans_start;
  20.604 +   reqData.transID     = transactionID;
  20.605 +
  20.606 +   VMS__send_sem_request( &reqData, animVP );
  20.607 + }
  20.608 +
  20.609 +/*This suspends to the master, then uses transactionID as index into an
  20.610 + * array of transaction structures.
  20.611 + *It looks at VP_currently_executing to be sure it's same as requesting VP.
  20.612 + * If different, throws an exception, stating there's a bug in the code.
  20.613 + *Next it looks at the queue in the structure.
  20.614 + *If it's empty, it sets VP_currently_executing field to NULL and resumes.
  20.615 + *If something in, gets it, sets VP_currently_executing to that VP, then
  20.616 + * resumes both.
  20.617 + */
  20.618 +void
  20.619 +VPThread__end_transaction( int32 transactionID, SlaveVP *animVP )
  20.620 + {
  20.621 +   VPThdSemReq  reqData;
  20.622 +
  20.623 +      //
  20.624 +   reqData.reqType     = trans_end;
  20.625 +   reqData.transID     = transactionID;
  20.626 +
  20.627 +   VMS__send_sem_request( &reqData, animVP );
  20.628 + }
  20.629 +//===========================================================================
    21.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
    21.2 +++ b/__brch__default	Thu Mar 01 13:20:51 2012 -0800
    21.3 @@ -0,0 +1,1 @@
    21.4 +The default branch for Vthread -- the language libraries will have fewer branches than VMS does..  some might be used for feature development, or similar..
    21.5 \ No newline at end of file