Sharing Data With HP VEE
In many instrument-control applications, it
is necessary to maintain constant communications with your instruments,
record measured data, and perform some "on-the-fly"
data analysis. To do all of this concurrently in a single program
may be difficult, due to the rate at which you need to acquire
and store measured data, the processing involved in data analysis,
or the application complexity that results from the need to coordinate
the instrument control and data analysis parts of your program.
This white paper presents a technique, using shared memory on
Win32®-based systems, to split your application into two (or
more) easily-implemented processes to achieve the instrument control
and data analysis performance your application demands.
Assigning the task of instrument control and
data storage to one process, while using another for data analysis,
allows you to encapsulate the relatively straightforward implementations
of both into a set of easily-managed, logically-grouped programs.
Shared memory provides an efficient and portable communications
channel between the related programs that will comprise your
application.
The remainder of this white paper presents the implementation of an application which uses a multi-process model to divide instrument control and data analysis responsibilities between two programs. The instrument control program is written in C and uses a dynamic-link library (DLL) to write measured data into a named shared memory segment. The data analysis program is an HP VEE application which uses the same DLL to read the shared memory segment.
The application we will present uses a C program to produce two data sets. This program would normally be the entity responsible for instrument control, but, because we want to be able to run this example regardless of the instruments or IO systems you use, we have chosen to just dummy up some data. The other part of this sample application is an HP VEE program that displays the data the C program produces. You may simultaneously run as many copies of either application as you wish. As we will develop in this paper, one of the benefits of our shared memory library is the prevention of data corruption.
This application achieves inter-process communication (IPC) by having the two associated programs link against a DLL that implements our shared memory capability. The use of a DLL makes it especially convenient to add functionality to the application as the requirements expand. As long as you retain the library's original functionality, you may expand it as you wish, without the need to change the applications that depend on the library.
Shared memory in Win32® is implemented
as memory-mapped files. In essence, memory-mapped files allow
you to read from and write to files by referencing a data pointer
within your application's data space. You map the file into your
process, then whenever you reference the mapped location, any
changes you make to the associated data are reflected in the file.
The "shared" part of memory-mapped
files comes in when you consider what happens when two or more
processes map the same file concurrently. The file's data is held
in virtual memory and is committed to the disk file at system-defined
intervals. So, when two or more processes attach the same file,
what they are really doing is assigning a process-relative access
point to the file's virtual memory.
If there is no file attached to the mapped memory, the operating system never needs to commit any data changes to a disk file. This technique is how we achieve the memory sharing. One process, usually the data producer (in our case, an instrument control program) will create a memory-mapped file that doesn't reference a disk file. The data consumer program, in our case the analysis and display program, will open that mapped file. Because each program has a process-relative view of the virtual memory created when we created the mapped file, when they reference the data space that has been assigned to the mapped file, they are really referencing a piece of virtual memory common to both processes.
Because there is no disk file associated with
our shared data segment, we need to provide a way to allow processes
to identify the mapped segment they are interested in opening.
We do this by assigning each segment we create a user-definable
tag. When the instrument control program creates a segment, it
will assign that segment a tag name. When the data analysis program
wants to open the shared segment, it must reference the segment
by using the name assigned when we created that shared mapping.
Our library allows you to create as many named segments as you wish. The library creates a list of named segments as you create or open them. The library also allows you to read from and write to a particular shared segment by referencing its name. In this way you can parcel up your data into logically-grouped pieces.
Because Win32®-compliant operating systems
use preemptive multitasking, we need to provide a mechanism that
prevents data corruption. Consider what would result if our instrument
control program started writing data into a shared memory segment
then got preempted by our data analysis program before it had
the opportunity to finish writing its data. The preempting process
would be viewing a partially-updated data set. Consider further
what would happen if two processes tried to write data to the same segment
concurrently. The data could contain parts of both write operations.
To account for these possibilities, we have
implemented a mechanism that allows any number of reading processes
to simultaneously view shared segments, while allowing only a
single concurrent writing process. The library prohibits a writing
process from altering data when there are any processes reading
it or when there are any other processes trying to write. We also
prohibit data access to reading processes until an active write
is finished.
In this way, the library provides maximum availability of shared data, while preventing data corruption. In a later section of this white paper, we will illustrate a situation where you can make very effective use of this technique to implement a networked instrument-control system.
We said earlier that a process wishing to open a shared segment needs to know the segment name. Our library provides a mechanism to allow a process to enumerate all of the shared segments it has created or opened. The creating process will typically make the segment identifier names available to any other interested processes through some other form of IPC. The creating process and processes wishing to open the named segments must agree upon a location or mechanism to store this segment identifier list. In our example, we use the library to allow our creating process to enumerate the shared segments, then we write the information to a previously-agreed-upon file.
There are a couple of other uses for this type
of architecture, though all of the situations we discuss share
the common thread of making data available to external processes
without disrupting instrument control functions. Both of these
architectures realize the benefit of simplified implementation
when you want distributed access to data but don't want to interfere
with instrument control or product testing.
We won't go through a detailed implementation of the other two types of application architectures, but the application we do discuss in detail, coupled with a brief overview of the others, will hopefully give you enough information should you decide to implement an application yourself.
The first architecture is one that many people
have expressed an interest in. In fact, the entire inspiration
for this white paper is based on a distributed data acquisition
system I built for a customer who was testing turbochargers.
This architecture allows many separate workstations
to access data that a single set of instruments provides. One
of the workstations has the responsibility of controlling the
instruments and making the measured data available to other workstations
on a network. We'll call this workstation the "server".
The instrument control process typically has some mechanism for
allowing client workstations to tell the server what information
they need to collect. For instance, several clients might contact
the server to ask for a given type of measurement on a given channel
at a given rate. The server then would aggregate all of the measurements
and make a subset available to the requesting client.
The server workstation implements a two-process application, where one process provides instrument control and stores data in shared memory. The other process provides a network server capability. Clients can connect to the network server to request the latest measurement data. This relieves the instrument control process of the network communication burden and allows for the best possible acquisition rates.
In this architecture, there are typically many
workstations involved in instrument control. They may represent
a parallel production line, where each set of instruments and
workstations is replicated across a manufacturing facility. We
will again refer to the instrument-control workstations as "servers".
There would be an additional client workstation that periodically
asks each of the servers for the latest measurement data or manufacturing
statistics. This allows the client to get a cross-section composite
view of the manufacturing lines.
The server would again implement a two-process
application. Again, one process is responsible for performing
the actual product testing and for writing results into shared
memory. The second process is responsible for providing network
access to the data in shared memory.
Note: This white paper is a response to requests to port to Win32® a shared memory library I developed a few years back for use in UNIX. One thing that you must always consider, when you allow multiple simultaneous accesses to a shared resource, is the need to prevent data corruption. One of the prototypical requirements is to implement a 'Single Writer, Multiple Reader Guard'. This allows many entities to get 'read' access to a shared resource, while preventing anything from changing the resource's state or content while a 'read' operation is in progress.

I did this in UNIX through the use of semaphores. The UNIX implementation of semaphores allows you to create multiple semaphores and group them into a 'set'. When you want to do something with a semaphore, you supply an array of actions you wish to apply to any given semaphore in the set. It takes only two semaphores in a set to implement a 'Single Writer, Multiple Reader Guard'. What makes this possible is that one of the actions you can ask of a semaphore is to wait until its value becomes zero. By using one semaphore in the set as an 'exclusive access' coordination point and the other as a 'counting semaphore', you can write a function to get 'write' access to a shared resource only when there are no active 'readers' or 'writers'.

In doing the research necessary to port this library from UNIX to Win32®, I started looking for reference material that discussed thread synchronization. The need to prevent data corruption is equally applicable to multi-threaded and multi-process applications. One of the references I ran across was a book titled "Advanced Windows: The Developer's Guide to the Win32® API for Windows NT™ 3.5 and Windows 95", written by Jeffrey Richter and published by Microsoft Press. In the section that discusses thread synchronization, Mr. Richter presents a Win32® implementation of a 'Single Writer, Multiple Reader Guard'. Because I have always disdained reinventing wheels, I just used his implementation in my port to Win32®. The explanation of this guard's inner workings is a synopsis of the material presented in Advanced Windows.
This section discusses the implementation of an application that uses two processes to cooperate using shared memory. One process is a C program that generates data and stores that data in shared memory. The second process is an HP VEE program that reads the shared memory and displays it graphically. We discuss in detail how we implemented the shared memory library that provides the IPC mechanism for this application.
Sharing memory space between two or more processes
on Win32®-based systems is accomplished through the use of
memory-mapped files. Memory-mapped files provide a way for your
process to read and write the data in a disk-based file by mapping
that file into your process's data space. When you reference that
mapped memory space, you are actually looking at the file system's
data cache. To improve performance, the file system will store
data that has just been read from a file or data that has just
been written to a file in a chunk of memory. The file system will,
at certain intervals, commit any changes made to the data held
in the cache to the disk file. This file buffering greatly improves
file system performance.
Because a file's data is stored in a cache,
it is possible for more than one process to look at the contents
of a data file by simply mapping the file's cache memory into
the address spaces of two or more processes. If there is no disk
file associated with the cache, then there is no need to commit
data changes to a physical storage device. However, two or more
processes may map this type of cache, just as they would had the
cache been intended for use with a disk file. This is the technique
we will use to implement our shared memory architecture.
One property of mapped files that you should
be aware of as we proceed through this paper is that the size
of the mapped segment is fixed when you map it. Reading or writing
outside the bounds of the segment can cause an access violation
that will terminate your application (with extreme prejudice).
Your shared segment will be lost from further use if your process
should get terminated after having acquired a lock on the shared
segment but before having had the opportunity to release the lock.
It is for this reason that the library accesses shared segments
by using accessor functions of the data type that segment is intended
to hold. By knowing the data type and number of elements in a
shared segment, the library can prevent a process from "coloring
outside the lines".
To create a shared segment, you use the function
'CreateFileMapping'. This function expects an argument that is
a file handle. Supplying the hex value 0xffffffff causes our shared
segment to be created independent of a disk-based file.
There are two related arguments which determine
the segment size. Each of these is a long word. The combination
of these two values allows you to create a mapped segment whose
size can be expressed in a 64-bit integer. Our implementation
sets the high-order argument to zero, so that only the low order
argument influences the segment size. We determine the data size
by knowing the size of the data type and the number of data elements
you wish to store in this segment.
We also assign to each segment a name which allows us to reference this particular shared segment from another process. This name is held in a data structure that describes each shared segment. We will describe this data structure in the section of this paper titled 'A List of Named Segments'.
hMappedFile=CreateFileMapping(
               (HANDLE)0xffffffff,        /* create a virtual memory segment */
               (LPSECURITY_ATTRIBUTES)0,  /* default security */
               PAGE_READWRITE,
               0,                         /* high order mapping size */
               dataSize,                  /* low order mapping size */
               segmentEntry->segmentName);
if (!hMappedFile) {
    return(vErrorMapFailed);
}
Once the shared segment is created, you 'map'
it into your process' data space. This allows you to reference
segment contents by simply copying data into a pointer or by using
the pointer as the source for data you wish to read. You map a
segment with the function 'MapViewOfFile'.
You can use MapViewOfFile to 'window in' on your mapped segment. We showed earlier that the size of the mapped segment is held in a 64-bit data structure. The mapped view allows you to specify a starting offset into the shared segment. The offset is also a 64-bit data structure with the same format used in CreateFileMapping. You can have as many views into a mapped segment as you wish. You can also specify how big a chunk of the shared segment you wish to map. By controlling the starting offset and the size of the mapping, you can create a window into your data. In our library, we map our segment starting at offset zero and set the map size to be the entire segment size.
baseAddr=MapViewOfFile(hMappedFile,
                       FILE_MAP_READ | FILE_MAP_WRITE,
                       0,   /* high order mapping offset */
                       0,   /* low order mapping offset */
                       0);  /* map the entire segment */
if (!baseAddr) {
    CloseHandle(hMappedFile);
    return(vErrorMapFailed);
}
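For contrast, here is a minimal sketch of a 'windowed' view. One caveat we should note: Win32® requires the starting offset to be a multiple of the system's allocation granularity (64KB on most systems; see GetSystemInfo).

LPVOID windowAddr;

windowAddr=MapViewOfFile(hMappedFile,
                         FILE_MAP_READ,
                         0,          /* high order mapping offset */
                         0x10000,    /* low order mapping offset: 64KB in */
                         0x10000);   /* map only a 64KB window */
if (windowAddr) {
    /* ... examine this piece of the segment ... */
    UnmapViewOfFile(windowAddr);
}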
We have already mentioned the need to provide a mechanism to prevent data corruption. Win32® provides a number of different types of synchronization objects to accomplish this. The synchronization code uses the following structure:
/*************************************************************
Module name: SWMRG.H
Notices: Copyright (c) 1995 Jeffrey Richter
*************************************************************/

// The single-writer/multiple-reader guard
// compound synchronization object
typedef struct SingleWriterMultiReaderGuard {

   // This mutex guards access to the other objects
   // managed by this data structure and also indicates
   // whether any writer threads are writing.
   HANDLE hMutexNoWriter;

   // This manual-reset event is signaled when
   // no reader threads are reading.
   HANDLE hEventNoReaders;

   // This semaphore is used simply as a counter that is
   // accessible between multiple processes.  It is NOT
   // used for thread synchronization.
   // The count is the number of reader threads reading.
   HANDLE hSemNumReaders;

} SWMRG, *PSWMRG;
This structure uses a combination of three
types of synchronization objects to implement a single writer,
multiple reader guard: a mutex, a semaphore, and a manual reset
event. By grouping these into a structure, you can act on all
three at once. It is important that access to these guard fields
be 'atomic'; that is, you either get access to everything or nothing.
Gaining partial access while another, preempting entity gets partial
access leads to data corruption. That's what we're trying to prevent.
The other benefit of grouping these objects into a structure is that you can create them with a common name and associate the structure with the shared data the guards protect. This makes it much easier to deal with multiple named segments.
'Mutex' is a contraction for 'mutual exclusion'.
As the name indicates, whenever something owns a mutex object,
nothing else can gain access to it. This is a good choice for
use as our writer guard, in that only a single writer is allowed
to access our data and only then if there are no other writers
or active readers.
A mutex is created with a call to 'CreateMutex'.
Like most of the other object creation functions, CreateMutex
returns a 'handle' that can be used to refer to an object. You
can name a mutex when you create it, so, if we give our mutex
a name similar to the name we assign our shared segment, we can
easily associate a mutex object with our named shared memory.
Two or more processes may synchronize their
access to a shared resource by referring to the same mutex. To
open an already-created mutex, you use the function 'OpenMutex'.
Mutexes and many other Win32® objects have the notion of a
reference count. When a process creates a mutex, the new object
has its reference count set to 1. When additional processes try
to create a mutex of the same name, the operating system does
not create a new object. It simply increments the reference count
and returns a handle to the already-existing object. This is also
true when you open an existing mutex with a call to OpenMutex.
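To make the naming scheme concrete, here is a minimal sketch of both sides, assuming a hypothetical segment named 'sineData'. The prefix matches the one our guard code will construct in SWMRG.C, which simply concatenates a fixed prefix and the segment name.

HANDLE hMutex, hSameMutex;

/* Process A creates the named mutex... */
hMutex=CreateMutex((LPSECURITY_ATTRIBUTES)0, /* default security */
                   FALSE,                    /* not initially owned */
                   TEXT("SWMRGMutexNoWritersineData"));

/* ...and process B attaches to the same object by name. */
hSameMutex=OpenMutex(SYNCHRONIZE, FALSE,
                     TEXT("SWMRGMutexNoWritersineData"));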
Synchronization objects have the concept of
a 'signaled' state. Only when a mutex is in this state can another
entity gain ownership of it. Until the owning entity releases the
mutex, nothing else can gain access to it. You gain access to
a mutex by calling one of the functions 'WaitForSingleObject'
or 'WaitForMultipleObjects'. Both of these functions accept a
handle (or array of handles) to synchronization objects. Passing
a mutex to one of these functions automatically sets the mutex's
state to non-signaled, indicating that nothing else can gain access
until you release the mutex. You release a mutex, returning it
to its signaled state, by calling 'ReleaseMutex'.
Both of the functions WaitForSingleObject and WaitForMultipleObjects accept a time-out argument. This argument specifies a time in milliseconds to wait to gain access to an object before giving up. You can tell these functions to wait forever by specifying a time-out value of 'INFINITE'. Our library allows you to specify a time-out value that applies to all accesses to shared segments with the function 'setTimeOut'. The default, if you don't specify otherwise, is to wait indefinitely.
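A minimal sketch of a bounded wait, using only the Win32® calls just described:

DWORD waitResult;

/* Try for up to five seconds to gain ownership of the mutex. */
waitResult=WaitForSingleObject(hMutex, 5000);
if (waitResult == WAIT_OBJECT_0) {
    /* ... access the shared resource ... */
    ReleaseMutex(hMutex);  /* return the mutex to its signaled state */
}
else if (waitResult == WAIT_TIMEOUT) {
    /* we gave up; the resource stayed busy */
}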
You delete a synchronization object by calling 'CloseHandle'. This function will disassociate the handle identifying the given object from the object itself. It will also decrement the object's reference count. When the object's reference count falls to zero, the operating system destroys the object.
Semaphores are typically used to limit access
to a shared resource by implementing a reference counting scheme.
When you create a semaphore, you specify the maximum number of
references a semaphore will allow. Semaphores are signaled when
their reference count is greater than zero and non-signaled when
the reference count is equal to zero.
Unlike mutexes, Win32® doesn't know who
owns a semaphore, so when you call WaitForSingleObject or
WaitForMultipleObjects, Win32® checks to see if the reference count
is greater than zero. If so, it decrements the count and allows the
calling process or thread to continue.
The ReleaseSemaphore function increments a
semaphore's reference count. In our case, we use a semaphore in
conjunction with a manual reset event to let us know when there
are readers accessing our shared segment.
As with the other synchronization objects we have discussed, you can name a semaphore. You create and open them with calls to CreateSemaphore and OpenSemaphore and destroy them with a call to CloseHandle.
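The following sketch shows a semaphore used purely as a cross-process counter, the way our guard code will use it; the name 'myReaderCount' is ours for illustration.

HANDLE hSem;
LONG previousCount;

/* Initial count of zero, effectively unlimited maximum. */
hSem=CreateSemaphore((LPSECURITY_ATTRIBUTES)0, 0, 0x7FFFFFFF,
                     TEXT("myReaderCount"));

/* A reader arrives: increment the count. */
ReleaseSemaphore(hSem, 1, &previousCount);

/* A reader leaves: decrement the count.  A zero timeout makes
   the wait fail immediately if the count is already zero. */
WaitForSingleObject(hSem, 0);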
Events are used to signal the fact that some
operation has completed. Manual reset events can signal several
processes or threads that something has completed. We use a manual
reset event to signal whether or not there are any
readers.
You create a manual reset event with a call
to the function 'CreateEvent', and, as with all the other Win32®
objects we will examine, you can uniquely identify it by giving
it a name. You can direct that a manual reset event be set to
an initial state when you create it. In our case, we set its initial
state to be signaled. You can open an existing reset event with
a call to OpenEvent.
Unlike mutexes and semaphores, the WaitForSingleObject and WaitForMultipleObjects calls do not automatically set an event's state. Instead, you must set the state by calling SetEvent or ResetEvent. SetEvent sets a reset event object to its signaled state, while ResetEvent sets it to its non-signaled state. We set our reset event object's state to signaled when the last reader is done reading, opening access to writers. We set our reset event object's state to non-signaled when the first reader accesses our shared segment. This prevents writers from gaining concurrent access.
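In sketch form (the event name is ours for illustration):

HANDLE hEvent;

/* Manual reset event, initially signaled ("no readers yet"). */
hEvent=CreateEvent((LPSECURITY_ATTRIBUTES)0,
                   TRUE,   /* manual reset */
                   TRUE,   /* initially signaled */
                   TEXT("myNoReadersEvent"));

ResetEvent(hEvent);  /* first reader arrived: hold off writers */
SetEvent(hEvent);    /* last reader finished: admit a writer */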
To gain a write lock on a shared segment, we
wait for our writer mutex and our reader event. Waiting for the
mutex has the effect of denying access to it until we release
it. If there are active readers, the manual reset event will be
non-signaled, and our wait call will block until all the
readers have finished.
To release our write lock, all we need to do
is release the mutex. Manual reset event states are set by explicit
calls, so it is not necessary for us to do anything further.
To gain a read lock, we wait for the writer
mutex. When we get it, we increment our semaphore count and, if
this is the first reader, set the event to its non-signaled state.
We then release the mutex. Releasing the mutex is a very important
step. Because each reader must get a hold of this object in order
to gain access to the shared data, failure to release the mutex
would prevent multiple reader access. Likewise, the writer grabs
and holds the mutex, preventing any other access while it is writing.
In effect, the mutex acts as a 'gatekeeper'.
To release the read lock, we wait for the mutex and the semaphore. Semaphores are signaled when their reference counts are greater than zero. If this is the last reader, we set the manual reset event to its signaled state, thereby allowing write access. In the case where we are not the last reader, we decrement the reference count. In either case, we release the mutex.
/************************************************************
Module name: SWMRG.C
Notices: Copyright (c) 1995 Jeffrey Richter
************************************************************/
#include "AdvWin32.H"            /* See Appendix B for details. */
#pragma warning(disable: 4001)   /* Single-line comment */
#include <windows.h>             /* The angle-bracket include names were */
#include <windowsx.h>            /* lost in conversion; these are the    */
#include <tchar.h>               /* likely originals.                    */
#include "SWMRG.H"               // The header file

/////////////////////////////////////////////////////////////

LPCTSTR ConstructObjName (
   LPCTSTR lpszPrefix, LPCTSTR lpszSuffix,
   LPTSTR lpszFullName, size_t cbFullName, PBOOL fOk) {

   *fOk = TRUE;   // Assume success.
   if (lpszSuffix == NULL)
      return(NULL);

   if ((_tcslen(lpszPrefix) + _tcslen(lpszSuffix)) >= cbFullName) {
      // If the strings will overflow the buffer,
      // indicate an error.
      *fOk = FALSE;
      return(NULL);
   }

   _tcscpy(lpszFullName, lpszPrefix);
   _tcscat(lpszFullName, lpszSuffix);
   return(lpszFullName);
}

/////////////////////////////////////////////////////////////

BOOL SWMRGInitialize(PSWMRG pSWMRG, LPCTSTR lpszName) {
   TCHAR szFullObjName[100];
   LPCTSTR lpszObjName;
   BOOL fOk;

   // Initialize all data members to NULL so that we can
   // accurately check whether an error has occurred.
   pSWMRG->hMutexNoWriter = NULL;
   pSWMRG->hEventNoReaders = NULL;
   pSWMRG->hSemNumReaders = NULL;

   // This mutex guards access to the other objects
   // managed by this data structure and also indicates
   // whether any writer threads are writing.
   // Initially no thread owns the mutex.
   lpszObjName = ConstructObjName(
      __TEXT("SWMRGMutexNoWriter"), lpszName,
      szFullObjName, ARRAY_SIZE(szFullObjName), &fOk);
   if (fOk)
      pSWMRG->hMutexNoWriter =
         CreateMutex(NULL, FALSE, lpszObjName);

   // Create the manual-reset event that is signalled when
   // no reader threads are reading.
   // Initially no reader threads are reading.
   lpszObjName = ConstructObjName(
      __TEXT("SWMRGEventNoReaders"), lpszName,
      szFullObjName, ARRAY_SIZE(szFullObjName), &fOk);
   if (fOk)
      pSWMRG->hEventNoReaders =
         CreateEvent(NULL, TRUE, TRUE, lpszObjName);

   // Initialize the variable that indicates the number of
   // reader threads that are reading.
   // Initially no reader threads are reading.
   lpszObjName = ConstructObjName(
      __TEXT("SWMRGSemNumReaders"), lpszName,
      szFullObjName, ARRAY_SIZE(szFullObjName), &fOk);
   if (fOk)
      pSWMRG->hSemNumReaders =
         CreateSemaphore(NULL, 0, 0x7FFFFFFF, lpszObjName);

   if ((NULL == pSWMRG->hMutexNoWriter) ||
       (NULL == pSWMRG->hEventNoReaders) ||
       (NULL == pSWMRG->hSemNumReaders)) {
      // If a synchronization object could not be created,
      // destroy any created objects and return failure.
      SWMRGDelete(pSWMRG);
      fOk = FALSE;
   } else {
      fOk = TRUE;
   }

   // Return TRUE upon success, FALSE upon failure.
   return(fOk);
}

/////////////////////////////////////////////////////////////

void SWMRGDelete(PSWMRG pSWMRG) {
   // Destroy any synchronization objects that were
   // successfully created.
   if (NULL != pSWMRG->hMutexNoWriter)
      CloseHandle(pSWMRG->hMutexNoWriter);
   if (NULL != pSWMRG->hEventNoReaders)
      CloseHandle(pSWMRG->hEventNoReaders);
   if (NULL != pSWMRG->hSemNumReaders)
      CloseHandle(pSWMRG->hSemNumReaders);
}

/////////////////////////////////////////////////////////////

DWORD SWMRGWaitToWrite(PSWMRG pSWMRG, DWORD dwTimeout) {
   DWORD dw;
   HANDLE aHandles[2];

   // We can write if the following are true:
   // 1. The mutex guard is available and
   //    no other threads are writing.
   // 2. No threads are reading.
   aHandles[0] = pSWMRG->hMutexNoWriter;
   aHandles[1] = pSWMRG->hEventNoReaders;
   dw = WaitForMultipleObjects(2, aHandles, TRUE, dwTimeout);
   if (dw != WAIT_TIMEOUT) {
      // This thread can write to the shared data.
      // Because a writer thread is writing, the mutex should
      // not be released. This stops other writers and readers.
   }
   return(dw);
}

/////////////////////////////////////////////////////////////

void SWMRGDoneWriting(PSWMRG pSWMRG) {
   // Presumably, a writer thread calling this function has
   // successfully called WaitToWrite. This means that we
   // do not have to wait on any synchronization objects
   // here because the writer already owns the mutex.

   // Allow other writer/reader threads to use
   // the SWMRG synchronization object.
   ReleaseMutex(pSWMRG->hMutexNoWriter);
}

/////////////////////////////////////////////////////////////

DWORD SWMRGWaitToRead(PSWMRG pSWMRG, DWORD dwTimeout) {
   DWORD dw;
   LONG lPreviousCount;

   // We can read if the mutex guard is available
   // and no threads are writing.
   dw = WaitForSingleObject(pSWMRG->hMutexNoWriter, dwTimeout);
   if (dw != WAIT_TIMEOUT) {
      // This thread can read from the shared data.
      // Increment the number of reader threads.
      ReleaseSemaphore(pSWMRG->hSemNumReaders, 1, &lPreviousCount);
      if (lPreviousCount == 0) {
         // If this is the first reader thread,
         // set our event to reflect this.
         ResetEvent(pSWMRG->hEventNoReaders);
      }

      // Allow other writer/reader threads to use
      // the SWMRG synchronization object.
      ReleaseMutex(pSWMRG->hMutexNoWriter);
   }
   return(dw);
}

/////////////////////////////////////////////////////////////

void SWMRGDoneReading(PSWMRG pSWMRG) {
   BOOL fLastReader;
   HANDLE aHandles[2];

   // We can stop reading if the mutex guard is available,
   // but when we stop reading we must also decrement the
   // number of reader threads.
   aHandles[0] = pSWMRG->hMutexNoWriter;
   aHandles[1] = pSWMRG->hSemNumReaders;
   WaitForMultipleObjects(2, aHandles, TRUE, INFINITE);
   fLastReader = (WaitForSingleObject(
      pSWMRG->hSemNumReaders, 0) == WAIT_TIMEOUT);
   if (fLastReader) {
      // If this is the last reader thread,
      // set our event to reflect this.
      SetEvent(pSWMRG->hEventNoReaders);
   } else {
      // If this is NOT the last reader thread, we successfully
      // waited on the semaphore. We must release the semaphore
      // so that the count accurately reflects the number
      // of reader threads.
      ReleaseSemaphore(pSWMRG->hSemNumReaders, 1, NULL);
   }

   // Allow other writer/reader threads to use
   // the SWMRG synchronization object.
   ReleaseMutex(pSWMRG->hMutexNoWriter);
}

//////////////////////// End Of File ////////////////////////
In the above description, we said that we can't write while we're reading, and we can't read while we're writing. What happens if we never stop reading or writing? That situation is known as 'starving a thread'. It is important to try to keep your accesses to shared data short and to use a time-out mechanism. Otherwise, you could get into the situation where a writer process or (less likely) a reader process cannot gain access to the shared segment.
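For example, a defensive writer might bound every lock wait and check the result, rather than blocking forever. This is a sketch only; 'sineData', 'dataBuffer', and 'N_ELEMENTS' are illustrative names, and we assume 'setTimeOut' takes milliseconds.

setTimeOut(1000);  /* applies to all subsequent segment accesses */
if (writeLongData("sineData", dataBuffer, N_ELEMENTS) < 0) {
    /* the lock wait timed out or the write failed;
       log the miss and try again on the next pass */
}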
In an application, you may wish to have more
than one shared segment. You might wish to have one segment per
digitizer card in a VXI-based system, so that the time you spend
writing to a particular segment can be separated from the amount
of total time the application spends in reading the data. To accomplish
this, our shared memory library has the capability to create and
manage a list of shared segments. Note that this discussion is
for explanatory purposes only: the shared memory library uses
the list internally and exposes a set of functions that does the
list management for you.
The list is actually a stack and is implemented
as a doubly linked list. The list library provides functions that
make it easy to create entries, add and remove entries, and
iterate over a list's contents.
You create entries, add entries to, and remove entries from a list using the following functions:
struct node *createNewNode(void *data)
{
    struct node *newNode;

    newNode=calloc(sizeof(struct node), 1);
    if(newNode){
        newNode->data=data;
    }
    return(newNode);
}

struct node *push(struct node **head, struct node *newNode)
{
    struct node *headPtr;

    headPtr = *head;
    if(headPtr){
        headPtr->prev=newNode;
    }
    newNode->next=headPtr;
    *head=newNode;
    return(newNode);
}

void *pop(struct node **head)
{
    return(removeFromList(head, *head));
}

void *removeFromList(struct node **head, struct node *nodeToRemove)
{
    struct node *prev;
    struct node *next;
    struct node *headPtr;
    void *nodeData;

    headPtr = *head;
    nodeData=nodeToRemove->data;
    prev=nodeToRemove->prev;
    next=nodeToRemove->next;
    if(prev){
        prev->next=next;
    }
    if(next){
        next->prev=prev;
    }
    /*
    ** if we are removing the node that is the head of the list, which
    ** could by definition be the last node in the list, adjust the
    ** head pointer's prev and next pointers.
    */
    if(nodeToRemove == headPtr){
        *head=nodeToRemove->next;
        headPtr = *head;
        if(*head){
            headPtr->prev=nodeToRemove->prev;
        }
    }
    free(nodeToRemove);
    return(nodeData);
}
The functions you will use most often are 'push' and 'pop'. 'push' puts the data pointed to by 'newNode' onto the front of the list pointed to by 'head'. 'pop' removes the first entry from the start of the list. Both 'newNode' and 'head' are pointers to structures defined like this:
struct node {
    struct node *prev;
    struct node *next;
    void *data;
};
'prev' and 'next' are pointers to the previous and next entries in the list. The data you are interested in is pointed to by 'data'.
To make the list processing more manageable,
we have added two iteration functions to the list library. These
functions remove the burden of traversing the list when you want
to do something with the data in the list. One of the list library
functions applies a user-defined function to all the data in your
list. The other applies a user-defined function to the data in
a list until the user-defined function indicates that it wishes
to stop traversing the list.
The function named 'iterateOver' applies a user-defined function to every data element in the list. You might use this to make a list of all the shared segment names your application has currently defined or to count up the number of segments currently allocated. 'iterateOver' is defined like this:
struct node *iterateOver(struct node *head, iterateFn iterate)
{
    struct node *listPtr;

    listPtr=head;
    while(listPtr){
        iterate(listPtr->data);
        listPtr=listPtr->next;
    }
    return(listPtr);
}
It takes a pointer to the start of a list and a pointer to a function. The function you supply must return void and accept a void * as the single argument. The void * points at the data you put into the list. The function you supply is known as a 'call back'. The 'iterateOver' list library function goes through each element in the list and calls your function with the data contained in each successive list entry. A call back function that counts the number of shared segments currently allocated might look like this:
void numSegments(void *nodeData)
{
    nSegments++;
    return;
}
'nSegments' is a static or global variable
that will retain changes as your function gets called multiple
times. Note that we don't use the 'nodeData' here.
To get the list library to call your function for each list entry, you could write something like this:
long numberOfSegments(void)
{
    struct node *listPtr;

    nSegments=0;
    listPtr=iterateOver(veeSharedMemoryList, numSegments);
    return(nSegments);
}
When the list library has finished the traversal,
'nSegments' holds the total number of list entries.
The list library function named 'iterateUntil' traverses a list, calling a user-defined function with the data in each list entry until the user-defined function indicates to the library that it wishes to stop the list traversal. This might be useful when you want to find out if your current list of shared segments contains a segment of a given name. 'iterateUntil' is defined like this:
struct node *iterateUntil(struct node *head, void *cmpData, compareFn cmp)
{
    struct node *listPtr;

    listPtr=head;
    while(listPtr){
        if(!cmp(listPtr->data, cmpData)){
            break;
        }
        listPtr=listPtr->next;
    }
    return(listPtr);
}
This function also takes a pointer to the start
of a list and a pointer to a user-defined function. There is one
additional piece of data it requires. That is the value you wish
to compare your list entry data to. In the specific case of searching
for a shared segment of a specific name, the comparison data would
be the shared segment name you wish to find. This function will
return a pointer to the list entry where a match was found or
a NULL pointer if no match was found.
A call back function that compares segment names to a name you are searching for might look like this:
BOOL compareSegName(void *nodeData, void *cmpData)
{
    struct VeeSharedDataInfo *segmentInfo=
        (struct VeeSharedDataInfo *)nodeData;
    char *segmentName;

    segmentName=segmentInfo->segmentName;
    if(!strcmp(segmentName, (char *)cmpData)){
        /* we found a match */
        return(0);
    }
    /* keep searching */
    return(1);
}
Notice that, when we find a match, we return
zero, indicating to the list library that we no longer need to
continue the search.
To get the list library to perform our search, we might write something like:
short segmentIsInList(char *segmentName, enum DataType type, long nElements)
{
    struct node *listPtr;
    struct VeeSharedDataInfo *segmentEntry;
    short returnVal=0;

    listPtr=iterateUntil(veeSharedMemoryList, (void *)segmentName,
                         compareSegName);
    if(listPtr){
        segmentEntry=(struct VeeSharedDataInfo *)listPtr->data;
        if((segmentEntry->dataType == type) &&
           (segmentEntry->nDataElements == nElements)){
            returnVal=1;
        }
        else{
            returnVal=vErrorAlreadyExists;
        }
    }
    return(returnVal);
}
The library contains a set of functions intended for use in an application that 'produces' data which another application will 'consume'. In our example, the data producer is the process that performs instrument control.
One of the properties of a memory-mapped file
is that its size, specified when you first create it, is fixed.
Writing beyond the end of a memory-mapped file results in an access
violation. Because of this, our library provides four functions,
one for each supported data type, to allow you to create shared
segments of the appropriate size. We do this by parameterizing the
memory-mapped file size by the type of data and the number of
elements we want to store.
You can assign a descriptive name to each segment
you create. If you try to create a segment that is already defined,
the library just returns as if it had created a new one for you.
Once we have successfully created a new segment, we fill in the information needed to manage the segment, then add this information to the list of shared segments. The following function does the bulk of the work in adding a new segment:
short openSharedMemory(char *segmentName, enum DataType type,
                       long nElements, BOOL openExisting)
{
    short searchResult=0;
    short mapReturn;
    struct VeeSharedDataInfo *segmentEntry;
    struct node *newNode;
    char *newName;

    if(veeSharedMemoryList){
        searchResult=segmentIsInList(segmentName, type, nElements);
        if(searchResult < 0){
            return(searchResult);
        }
    }
    if (!searchResult) {
        newName=strdup(segmentName);
        if (!newName) {
            return(vErrorAllocFailed);
        }
        segmentEntry=calloc(sizeof(struct VeeSharedDataInfo), 1);
        if (!segmentEntry) {
            free(newName);
            return(vErrorAllocFailed);
        }
        newNode=createNewNode((void *)segmentEntry);
        if (!newNode) {
            free(newName);
            free(segmentEntry);
            return(vErrorAllocFailed);
        }
        segmentEntry->segmentName=newName;
        segmentEntry->nDataElements=nElements;
        segmentEntry->dataType=type;
        if (!veeSharedMemoryList) {
            veeSharedMemoryList=newNode;
        }
        else {
            push(&veeSharedMemoryList, newNode);
        }
        if (openExisting) {
            mapReturn=openMappedSegment(segmentEntry);
        }
        else {
            mapReturn=createMappedSegment(segmentEntry);
        }
        if (mapReturn < 0) {
            pop(&veeSharedMemoryList);
            free(newName);
            free(segmentEntry);
            return(mapReturn);
        }
    }
    return(0);
}
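The names of the four type-specific creation functions did not survive in this paper's conversion, but they are plausibly thin wrappers around 'openSharedMemory', each fixing the data type. A hedged sketch; the wrapper name and the enum constant are our guesses, not the library's actual identifiers:

short createLongSharedMemory(char *segmentName, long nElements)
{
    /* 'vLongType' stands in for whatever enum DataType
       constant the library defines for longs. */
    return(openSharedMemory(segmentName, vLongType, nElements, FALSE));
}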
Another of the functions generally associated
with a data producer is the enumeration of allocated resources.
In a typical situation, the data producer will construct a list
of its shared segments, then store that list in a location that
a data consumer can read. The data consumer will use this information
when it wants to gain access to the shared data.
When you wish to enumerate the list of shared
segments, you first allocate an array of strings that the library
will fill in with the names of each of the segments in your list.
You then ask the library to fill in this array up to the number
of entries you have allocated. The library returns the
number of entries it actually filled, which may be fewer
than you allocated space for. We chose this technique over
the technique where the library dynamically allocates and returns
you a list, because the latter has so much potential to create
memory leaks.
The calling application also provides an array
of longs that holds the number of elements of a given type that
each individual shared segment may contain. With this, the calling
application provides an array of types that corresponds to the
type specified when we first created each segment.
With this information, we can completely enumerate
all the pertinent information about our list of shared segments.
A consuming application can tell how many segments we have, how
to identify each segment, and how to know the size and type of
each segment.
We employ our list iteration functions to accomplish
the enumeration. Note that we use static variables to communicate
information to our callback function. This is an artifact of using
our iteration functions. If we didn't use this technique, we would
be forced to provide a unique form of iteration function for each
possible combination of callback function return and argument
type we might dream up.
The following functions implement our enumeration capability:
long enumerateSharedMemory(char **segmentNames, long *nElements,
                           short *dataType, long nEntries)
{
    struct node *listPtr;

    nIterations=0;
    nReturnEntries=nEntries;
    returnSegNames=segmentNames;
    returnElements=nElements;
    returnTypes=dataType;
    listPtr=iterateOver(veeSharedMemoryList, enumerateSegments);
    return(nIterations);
}

void enumerateSegments(void *nodeData)
{
    struct VeeSharedDataInfo *segmentEntry;

    if (nIterations < nReturnEntries) {
        segmentEntry=(struct VeeSharedDataInfo *)nodeData;
        /*
        ** Copy at most the length of the caller's buffer, less one.
        ** This assumes the caller pre-filled each name buffer to
        ** mark how much room the library may use.
        */
        strncpy(returnSegNames[nIterations],
                segmentEntry->segmentName,
                strlen(returnSegNames[nIterations]) -1);
        returnElements[nIterations]=segmentEntry->nDataElements;
        returnTypes[nIterations]=segmentEntry->dataType;
        nIterations++;
    }
    return;
}
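A producer might use this enumeration capability like so. This is a sketch: the buffer sizes, the file-name argument, and the one-line-per-segment file format are our own choices. Note that, because 'enumerateSegments' copies at most strlen(destination)-1 characters, we pre-fill each name buffer (here, with spaces) so the library knows how much room it has.

#include <stdio.h>
#include <string.h>
#include "sharedMemory.h"   /* our library's header; name assumed */

#define MAX_SEGMENTS 16
#define MAX_NAME_LEN 64

void publishSegmentList(char *listFileName)
{
    static char nameStorage[MAX_SEGMENTS][MAX_NAME_LEN];
    char *names[MAX_SEGMENTS];
    long elements[MAX_SEGMENTS];
    short types[MAX_SEGMENTS];
    long nSegments, i;
    FILE *listFile;

    /* Pre-size each caller-allocated name buffer. */
    for (i=0; i<MAX_SEGMENTS; i++) {
        memset(nameStorage[i], ' ', MAX_NAME_LEN-1);
        nameStorage[i][MAX_NAME_LEN-1]='\0';
        names[i]=nameStorage[i];
    }
    nSegments=enumerateSharedMemory(names, elements, types, MAX_SEGMENTS);

    /* One line per segment: name, type, element count. */
    listFile=fopen(listFileName, "w");
    if (listFile) {
        fprintf(listFile, "%ld\n", nSegments);
        for (i=0; i<nSegments; i++) {
            fprintf(listFile, "%s %d %ld\n", names[i], (int)types[i],
                    elements[i]);
        }
        fclose(listFile);
    }
}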
When you write to a shared segment, you need
to be aware of a couple things. First, you can't write beyond
the end of the mapped segment. Well, you can try, but you'll likely
find yourself on a one-way trip to GPF city. Second, as we have
already discussed, you want the system to prevent you from writing
to a shared segment that any other process is trying to concurrently
read or write. Lastly, you don't want an attempt to write to hang
the writing process indefinitely.
Our library provides a set of functions to
allow you to write to shared memory, while observing the above
criteria. There are four functions to choose from, depending
upon the type of data you wish to use. Specifying the data type,
along with the number of elements you wish to write, allows the
library to check the size of your data against the allocated segment
size. The library will copy as much data as will fit into the
allocated storage space.
The writing functions, like the creation functions, are parameterized
by data type; 'writeLongData', listed with its supporting functions below, is representative.
The library also attempts to optimize your
access to any individual block of memory in the shared segment
list. You would rather not need to search the list each time you
want to access it. In general, memory accesses refer to the same
segment repetitively. With this in mind, our library keeps track
of the last segment you referenced and caches its location. If
a successive memory access refers to the same segment we have
cached, we jump straight to that location, eliminating the overhead
of a linear search.
The relevant functions are listed below:
long writeLongData(char *segmentName, long *data, long nLongs)
{
    size_t dataSize;

    dataSize=nLongs * sizeof(long);
    return(writeSegment(segmentName, data, dataSize));
}
long writeSegment(char *segmentName, void *data, size_t dataSize)
{
    struct node *listPtr;
    long canWrite;
    size_t allocatedDataSize;

    /*
    ** We don't want to search the list of shared memory segments
    ** every time we read or write. If we haven't yet written to
    ** or read from any memory, or the last action referenced a
    ** different segment, find it in the list. This way we can
    ** remember it for future use.
    */
    if ((!lastNameReferenced) ||
        (strcmp(segmentName, lastNameReferenced))) {
        listPtr=iterateUntil(veeSharedMemoryList, (void *)segmentName,
                             compareSegName);
        if (!listPtr) {
            return(vErrorWriteFailed);
        }
        lastSegmentReferenced=(struct VeeSharedDataInfo *)listPtr->data;
        lastNameReferenced=segmentName;
    }

    /*
    ** If we are trying to write more data than we have allocated
    ** room for in the shared memory segment, we can crash.
    */
    allocatedDataSize=(size_t)allocatedSize(lastSegmentReferenced);
    if (allocatedDataSize < dataSize) {
        dataSize=allocatedDataSize;
    }

    /*
    ** Acquire a write lock & write the data. Then release the
    ** write lock.
    */
    canWrite=getWriteLock(lastSegmentReferenced);
    if (canWrite < 0) {
        return(canWrite);
    }
    memcpy(lastSegmentReferenced->baseAddr, data, dataSize);
    SWMRGDoneWriting(&lastSegmentReferenced->sharedMemoryGuard);
    return(0);
}
long getWriteLock(struct VeeSharedDataInfo *segmentEntry)
{
    DWORD waitResult;

    waitResult=SWMRGWaitToWrite(&segmentEntry->sharedMemoryGuard,
                                timeToWait);
    return(lockResult(waitResult));
}
Once the data-producing application has created the list of shared segments, it (the producing application) and the consumer need to set up some form of protocol to communicate the information necessary for the consumer to identify the shared memory. Once the producer and consumer have agreed upon an exchange protocol, it is then up to the consumer to open the existing shared segments.
We do this by using the same library function
we used to initially create the segments. The last argument to
our library function 'openSharedMemory' is 'openExisting'. If
this flag is true, we will call the Win32® API function 'OpenFileMapping'
instead of the call to 'CreateFileMapping' we used before.
We could have opted to use a call to 'CreateFileMapping'
for both situations. An argument allows you to specify that 'CreateFileMapping'
should return you a handle to an existing segment, if it already
exists.
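Had we taken that route, we would have relied on the documented behavior that 'CreateFileMapping' returns a handle to the existing object and sets the last-error code to ERROR_ALREADY_EXISTS. A minimal sketch:

hMappedFile=CreateFileMapping((HANDLE)0xffffffff,
                              (LPSECURITY_ATTRIBUTES)0,
                              PAGE_READWRITE,
                              0, dataSize,
                              segmentEntry->segmentName);
if (hMappedFile && (GetLastError() == ERROR_ALREADY_EXISTS)) {
    /* we attached to a segment another process had
       already created */
}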
Calling 'OpenFileMapping' causes the operating system to increment a reference count associated with the mapped file. The operating system uses these reference counts to know when an object is no longer used and can be safely removed from the system. In this way, the operating system prevents resource leaks that would otherwise lead to deteriorating system performance.
We can apply all of the considerations we listed
when discussing writing to a shared segment to a discussion of
reading shared segments. We again provide four functions, parameterized
by data type and number of elements; 'readLongData' is representative.
The function implementations are listed below:
long readLongData(char *segmentName, long *data, long nLongs)
{
    size_t dataSize;

    dataSize=nLongs * sizeof(long);
    return(readSegment(segmentName, data, dataSize));
}
long readSegment(char *segmentName, void *data, size_t dataSize)
{
    struct node *listPtr;
    long canRead;
    size_t allocatedDataSize;

    if ((!lastNameReferenced) ||
        (strcmp(lastNameReferenced, segmentName))) {
        listPtr=iterateUntil(veeSharedMemoryList, (void *)segmentName,
                             compareSegName);
        if (!listPtr) {
            return(vErrorReadFailed);
        }
        lastSegmentReferenced=(struct VeeSharedDataInfo *)listPtr->data;
        lastNameReferenced=segmentName;
    }
    allocatedDataSize=(size_t)allocatedSize(lastSegmentReferenced);
    if (allocatedDataSize < dataSize) {
        dataSize=allocatedDataSize;
    }
    canRead=getReadLock(lastSegmentReferenced);
    if (canRead < 0) {
        return(canRead);
    }
    memcpy(data, lastSegmentReferenced->baseAddr, dataSize);
    SWMRGDoneReading(&lastSegmentReferenced->sharedMemoryGuard);
    return(0);
}
long getReadLock(struct VeeSharedDataInfo *segmentEntry)
{
    DWORD waitResult;

    waitResult=SWMRGWaitToRead(&segmentEntry->sharedMemoryGuard,
                               timeToWait);
    return(lockResult(waitResult));
}
When you no longer need the shared segments,
it is a good practice to get rid of them. Leaving unused objects
hanging around needlessly consumes system resources.
You remove an object from the system by calling
the Win32® API function 'UnmapViewOfFile', followed by a call
to 'CloseHandle'. The first of these functions causes the operating
system to remove the reference to a mapped file from your process
space. Any future attempt to access the pointer previously associated
with the mapped file will result in an access violation. 'CloseHandle'
decrements the reference count associated with the mapped file. When
the reference count drops to zero, the operating system removes
the file mapping from its virtual memory pool.
The library provides two variants of functions
to destroy shared segments: one that allows you to remove a single,
named segment; and one that gets rid of all the named segments.
We need to be careful when we are asked to remove the segment our
read/write cache points to. If we remove that segment, we must also
invalidate the cache.
The relevant library functions are listed below. Notice the use of the list iteration function to point us at the correct list entry.
long destroySharedMemory(char *segmentName)
{
    struct VeeSharedDataInfo *segmentEntry;
    struct node *listPtr;
    long returnVal=0;
    BOOL result;

    listPtr=iterateUntil(veeSharedMemoryList, (void *)segmentName,
                         compareSegName);
    if (listPtr) {
        segmentEntry=(struct VeeSharedDataInfo *)removeFromList(
                         &veeSharedMemoryList, listPtr);
        result=removeMappedSegment(segmentEntry);
        returnVal=(long)((result == TRUE) ? 0 : -1);
    }
    return(returnVal);
}
long destroyAllSharedMemory(void)
{
    BOOL result=TRUE;   /* report success if the list is already empty */

    while (veeSharedMemoryList) {
        result=removeMappedSegment(pop(&veeSharedMemoryList));
    }
    return((long)((result == TRUE) ? 0 : -1));
}
BOOL removeMappedSegment(struct VeeSharedDataInfo *segmentEntry)
{
    BOOL result;

    if (lastNameReferenced &&
        (!strcmp(lastNameReferenced, segmentEntry->segmentName))) {
        lastNameReferenced=(char *)0;
        lastSegmentReferenced=(struct VeeSharedDataInfo *)0;
    }
    result=UnmapViewOfFile(segmentEntry->baseAddr);
    CloseHandle(segmentEntry->segmentHandle);
    destroyReadWriteGuard(segmentEntry);
    free(segmentEntry->segmentName);
    free(segmentEntry);
    return(result);
}
This example illustrates the use of a C program
and an HP VEE program communicating through shared memory. The
C program fulfills the role of the data producer, while the VEE
program dynamically displays the shared memory contents.
The data producer fills two floating-point
arrays with sine and cosine data, then creates a shared segment
to hold the data. It then enumerates the list of shared segments
and writes that data to a file for the consumer to read. The producer
then goes into a loop, alternating between writing the sine and
cosine data into shared memory.
The producer application is listed below:
/*
** $RCSfile: master.c $
** $Revision: 1.6 $
** $Author: doomer $
** $Date: 1996/02/08 11:44:14 $
** Copyright (c) 1996 John Dumais
*/
#include "sharedMemory.h"
#include <stdio.h>    /* The original angle-bracket include names were */
#include <stdlib.h>   /* lost in conversion; these are what the        */
#include <math.h>     /* surviving code requires.                      */

#define NUM_ARRAY_ELEMENTS 1440
#define PI 3.1415927
#define N_SEGMENT_ENTRIES 4
#define DEFAULT_ENTRY "defaultEntry"

static double cosineArray[NUM_ARRAY_ELEMENTS];
static double sineArray[NUM_ARRAY_ELEMENTS];

void initializeSineArray(void)
{
    long i;
    double sampleInterval;

    sampleInterval=2*PI/(NUM_ARRAY_ELEMENTS -1);
    /* The loop's bound and body were garbled in conversion;
       this is the obvious reconstruction. */
    for (i=0; i<NUM_ARRAY_ELEMENTS; i++) {
        sineArray[i]=sin(i*sampleInterval);
    }
}

/*
** [A span of the original listing was lost in conversion: the cosine
** initialization, the creation of the shared segments, and the
** enumeration of the segment list into 'nSegments' entries.  The
** surviving fragment below writes the enumerated list to the
** agreed-upon file; the program then loops, alternately writing the
** sine and cosine data into shared memory.]
*/
    if (nSegments > 0) {
        segmentFile=fopen("\\tmp\\segment.dat", "w");
        if (segmentFile) {
            fprintf(segmentFile, "%ld\n", nSegments);
            for (i=0; i<nSegments; i++) {
                /* ...one entry per segment (lost)... */
            }
        }
    }
    /* ...remainder of listing lost... */

Figure 27: The Data Producer
The Consumer App
Our VEE program reads, from a previously-agreed-upon file, the number of shared segments our data producer created. For each, we read the segment name, type, and number of elements. Then we open those segments. We loop indefinitely, displaying the shared segment contents.
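Although our consumer is an HP VEE program, its logic is easy to express in C. The sketch below assumes a 'readDoubleData' counterpart to the 'readLongData' function shown earlier, along with the one-line-per-segment file format we sketched in the enumeration section; for brevity, it only attaches to and reads the first segment.

#include <stdio.h>
#include "sharedMemory.h"   /* our library's header; name assumed */

#define NUM_ARRAY_ELEMENTS 1440

void consumeFirstSegment(void)
{
    FILE *segmentFile;
    char segmentName[64];
    long nSegments=0, nElements=0;
    int type=0;
    static double data[NUM_ARRAY_ELEMENTS];

    /* Read the previously-agreed-upon file the producer wrote. */
    segmentFile=fopen("\\tmp\\segment.dat", "r");
    if (!segmentFile) {
        return;
    }
    fscanf(segmentFile, "%ld", &nSegments);
    if (nSegments > 0) {
        fscanf(segmentFile, "%63s %d %ld",
               segmentName, &type, &nElements);
    }
    fclose(segmentFile);
    if (nSegments <= 0) {
        return;
    }
    if (nElements > NUM_ARRAY_ELEMENTS) {
        nElements=NUM_ARRAY_ELEMENTS;   /* don't overrun our buffer */
    }

    /* Attach to the producer's existing segment... */
    openSharedMemory(segmentName, (enum DataType)type, nElements, TRUE);

    /* ...and loop indefinitely, displaying its contents. */
    for (;;) {
        /* 'readDoubleData' is our assumed name for the
           double-typed read function. */
        readDoubleData(segmentName, data, nElements);
        /* ... display or process the data ... */
    }
}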
Figure 28: The Consumer
Source Code

- Shared Memory
- List Management
- Data Producer Program
- Data Consumer Program
References
- Jeffrey Richter, Advanced Windows: The Developer's Guide to the Win32® API for Windows NT™ 3.5 and Windows 95, Microsoft Press, 1995.