long dRespExch,
long *pdRqHndlRet,
long dnpSend,
char *pData1SR,
long dcbData1SR,
char *pData2SR,
long dcbData2SR,
long dData0,
long dData1,
long dData2 );
At first glance, you'll see that the call for making a request is a little more complicated than SendMsg. But, as the plot unfolds, you will see just how powerful this messaging mechanism can be. Let's look at each of the parameters:
pSvcName - Pointer to the service name you are requesting.
wSvcCode - Number of the specific function you are requesting. These are documented independently by each service.
dRespExch - Exchange that the service will respond to with the results (an exchange you have allocated).
*pdRqHndlRet - Pointer to a variable where the OS will return a handle. You use this handle to identify the request when you receive the response at your exchange.
dnpSend - The number (0, 1 or 2) of the two data pointers that are moving data from you to the service. The service already knows this, but network transport mechanisms do not. If pData1 pointed to data the service was reading from your memory area, and pData2 was not used or pointed to memory for the service to fill, dnpSend would be 1. If both data items were being sent to you from the service, this would be 0.
*pData1 - Pointer to memory in your address space that the service must access (either to read or write as defined by the service code). For instance, consider the File System's Open File function. For this function, pData1 points to the file name (data being sent to the service). This may even point to a structure or an array depending on how much information is needed for this particular function (service code).
dcbData1 - How many bytes pData1 points to. Using the Open File example above, this would be the size (length) of the filename.
*pData2 - This is a second pointer exactly like pData1 described above.
dcbData2 - This is the same as dcbData1.
dData0
dData1
dData2 - These are 3 DWords that provide additional information for the service. In many functions you will not even use pData1, or pData2 to send data to the service, but will simply fill in a value in one or more of these 3 DWords. These can never be pointers to data. This will be explained later in memory management.
The Respond primitive is much less complicated than Request. This doesn't mean the system service has it easy. There's still a little work to do. Here is the C prototype:
Respond(long dRqHndl, long dStatRet);
dRqHndl - the handle to the request block that the service is responding to.
dStatRet - the status or error code returned to the requestor.
A job (your program) calls Request and asks a service to do something, then it calls WaitMsg and sits at an exchange waiting for a reply. If you remember, the message that comes in to an exchange is an 8 byte (2 DWords) message. Two questions arise:
1) How do we know that this is a response or just a simple message sent here by another task?
2) Where is the response data and status code?
First, the content of the 8 byte message tells you whether it is a plain message or a response. The convention is based on the value of the first DWord in the message: if it is 80000000h or above, it is NOT a response. If it is a response, the first DWord will match the Request Handle you were provided when you made the Request call (remember pdRqHndlRet?), and the second DWord is the status code or error from the service. Zero (0) usually indicates no error, although its exact meaning is left up to the service.
Second, if the request was serviced (with no errors), the data has already been placed into your memory space where pData1 or 2 was pointing. This is possible because the kernel provides alias pointers to the service into your data area to read or write data. Also, if you were sending data into the service via pData1 or 2, the kernel has aliased this pointer for the service as well, and the service has already read your data and (hopefully) done something with it.
Not as difficult as you expected, right? Let me guess: this aliasing business with memory addresses is still a bit muddy. A little further into this section we cover memory management completely, which should clear it up entirely.
Link Blocks
I keep referring to how a message or a task just "sits" or "waits" at an exchange. An exchange is a small structure in memory. We need a way to attach these things to the exchange. We could use Elmer's Glue, but this could slow things down a bit. Instead, we opted to use something called a Link Block (LB). A Link Block is a little structure (smaller than an exchange) that becomes a link in a linked list of items that are connected to an exchange. Not very big, but still very important. You will find out how important they are if you run out of them!
There is one link block in use for every outstanding request, one for every message waiting at an exchange, and one for every task waiting at an exchange or on the ready queue. This can add up to hundreds in a real hurry.
Task Scheduling
MMURTL task switches occur for only three reasons:
1) An outside event (an interrupt) caused a message to be sent to a task that has an equal or higher priority than the currently running task.
2) The currently running task can't continue because it needs more information from the outside world such as keystrokes, mouse position, file access, timer services (or whatever). In this case it sends a "request" and goes into a "waiting" state and the next highest priority task executes. SendMsg, WaitMsg, Request, and Respond will cause a reevaluation of the Ready Queue. CheckMsg, and ISendMsg do not.
3) The OS has detected a task on the Ready Queue with the same or higher priority than the task that is currently running, and a predetermined amount of time has elapsed. This is the only "time-slicing" that MMURTL does.
This means that there are only 3 states a task can be in:
1) Waiting (at an exchange)
2) Ready to Run (on the Ready Queue)
3) Running
In order to respond to messages properly and be fair to all the tasks running on the system, MMURTL uses a prioritized task scheduler. When each of the tasks is created they are assigned a priority.
Assigning priorities may be the hardest part of writing system services. Applications have it easy. They all run at a much lower priority, and all applications run at about the same priority. Tasks that handle data for time critical functions like servicing communications buffers and overall device management will have higher priorities.
MMURTL has 32 priorities with 0 being the highest and 31 the lowest. General purpose applications (editors, word processors, compilers, etc.) all run at 25. As you can see this leaves 0 through 24 for more important things, and only 26 through 31 for less important things (spoolers, etc.).
The Ready Queue is like 32 exchanges where only tasks can wait.
MMURTL does do some time slicing, but it's only between tasks that have equal priorities. All applications running would therefore get an equal shot at the CPU. The time slicing is accomplished by the timer interrupt function which serves several purposes in the system. The quick check of the Ready Queue adds very little overhead to the timer function.
MMURTL tasks are managed with the 386 task management structure called a Task State Segment (TSS). It is well documented in the Intel literature, but system builders can add fields to the TSS for support of additional items that are relevant to a particular implementation. We have added a few. If you look at the TSS structure defined in the code, you will see we have added things like pJCB (pointer to Job Control Block) to keep track of who owns the task, and also a place to hold a default exchange that is used by the operating system for some functions.
You now have a pretty good idea about MMURTL's tasking model. The next important item on the agenda is memory management.
---------------------------------------
MMURTL Memory Management and Protection
MMURTL uses the 386/486 hardware based paging for memory allocation and management. The concept of hardware paging is not as complicated as it first seems (it only took half my natural life to get it straight). Before we dive into this complicated topic, we need to get some more terms out of the way. These are not disputed at all because they are 100 percent Intel defined in their documentation. You simply need to know what I'm talking about.
Physical memory is the memory chips and their addresses as accessed by the hardware. If I put address 00001 on the address bus of the processor, I am addressing the second byte of physical memory (address 0 is the first. Sorry if I insulted you...).
Linear memory is what programs use as they run. This memory is actually translated by the paging hardware to physical addresses that MMURTL manages. Programs running in MMURTL have no idea where they are physically running in the machine's hardware address space, nor would they want to. These are "Fake" addresses, but very real to every task on the system.
Logical memory is the memory that programs deal with and is based around a "selector." A protected mode program's memory is always referenced to a selector which is mapped (in a table) to linear memory by the OS and is translated by the processor. The selectors are managed in a table called the Global Descriptor Table (GDT). It is read by the processor where an additional address translation takes place. The GDT allows you to set up a zero based address space that really isn't at linear address zero. This means you can locate code or data anywhere you want in the 4 Gb of linear memory, and still reference it as if it were at offset zero. This is a very handy feature, even though we only use it in two places. The effect of the BASE offset in a GDT entry can be ignored by simply setting the base to zero, which means the entire address space of the processor is equal to the address space in this selector. This is how we have the Data segment set up for all MMURTL jobs.
If you are familiar with segmented programming, you know that with MS-DOS, programs generally had one data segment which was usually shared with the stack, and one or more code segments. This was commonly referred to as a "Medium Memory Model" program. In the 80x86 Intel world there are Tiny, Small, Medium, Large, and Huge models to accommodate the variety of segmented programming needs. In MMURTL there is only ONE memory model. It is most analogous to the small memory model where you have two segments. One is for code and the other is for data and stack. This may sound like a restriction until you consider that a single segment can be as large as all physical memory (larger with demand page virtual memory, which MMURTL may have someday if I can clone myself).
MMURTL really doesn't provide memory management in the sense that compilers and language systems provide a Heap or an area that is managed and cleaned up for the caller. MMURTL is a Paged memory system. MMURTL hands out (allocates) pages of memory as they are requested, and returns them to the pool of free pages when they are turned in (DeAllocated). MMURTL manages all the memory in the processor's address space as pages.
A Page is Four Kilobytes (4Kb) of contiguous memory. It is always on a 4Kb boundary of physical as well as linear addressing.
Segmentation
Before we jump into paging, let's get segmentation completely out of the way. Segmentation was great when you had to live with 64K segments. We don't. We use almost no segmentation. The OS and all applications use only 3 defined segments: The OS Code segment, the Application Code segment, and one data segment for everyone. MMURTL has its own code segment to make things easier for the OS programmer, and for protection within the OS pages of memory. Making the OS code zero based from its own selector is not a necessity, but nice to have.
The "selectors" (segment numbers for those coming from real mode programming) are fixed. These will never change in MMURTL as long as they are legal on Intel and work-alike processors.
The OS code segment is 08h.
The User code segment is 18h.
The Common Data segment is 10h.
MMURTL's memory management scheme allows us to use 32 bit data pointers exclusively. This greatly simplifies every program we write. It also speeds up the code by maintaining the same selectors throughout most of the program's execution. The only selector that will change is the code selector as it goes through a call gate into the OS and back again. This means the ONLY 48 bit pointers you will ever use in MMURTL are for an OS call address (16 bit selector, 32 bit offset).
Paging
Paging allows us to manage physical and linear memory address with simple table entries. These table entries are used by the hardware to translate (or map) PHYSICAL memory to what is called LINEAR memory. Linear memory is what applications see as their own address space. For instance, we can take the very highest 4K page in physical memory and map it into the application's linear space as the second page of its memory. This 4K page of memory becomes addresses 4096 through 8191 even though it's really sitting up at a physical 16 megabyte address (if we had 16 megs of RAM). No, it's not magic, but it's close...
Page Tables (PTs)
The tables that hold these translations are called Page Tables (PTs). Each entry in a PT is called a Page Table Entry (PTE). There are 1024 PTEs in every PT. Each PTE is four bytes long. Aren't acronyms fun? Sure, right up there with CTS (Carpal Tunnel Syndrome).
With 1024 entries (PTEs) each representing 4K, one 4K Page Table can manage 4 megabytes of linear/physical memory. That's not too much overhead for what we get out of it.
Here's the tricky part (like the rest was easy?). The OS itself is technically not a JOB. Sure, it has code and data and a task or two. But most of the OS code (specifically the kernel) runs in the task of the job that called it. The kernel itself is never scheduled for execution (sounds like a "slacker" to me). Because of this, the OS really doesn't own any of its memory. The OS is SHARED by all the other jobs running on the system. The Page Tables that show where the OS code and data are located get mapped into EVERY job's memory space. Yes, that's right, MMURTL is nothing but a Lady of the Evening... sharing everything it has with every job that comes along. What a way to live.
Now, a few more acronyms just to ensure you're pushed over the edge I've got you perched on.