What is interprocess communication (IPC), and what types are there?

The interprocessor communication (IPC) peripheral is used to send and receive events between MCUs in the system.

The following figure illustrates the IPC peripheral in a multi-MCU system, where each MCU has one dedicated IPC peripheral. The IPC peripheral can be used to send and receive events to and from other IPC peripherals.

Figure 1. IPC block diagram

An instance of the IPC peripheral can have multiple SEND tasks and RECEIVE events. A single SEND task can be configured to signal an event on one or more IPC channels, and a RECEIVE event can be configured to listen on one or more IPC channels. The IPC channels that are triggered by a SEND task are configured through the SEND_CNF registers, and the IPC channels that trigger a RECEIVE event are configured through the RECEIVE_CNF registers. The figure below illustrates how the SEND_CNF and RECEIVE_CNF registers work. Both the SEND task and the RECEIVE event can be connected to all IPC channels.

Figure 2. IPC registers SEND_CNF and RECEIVE_CNF

A SEND task can be viewed as broadcasting events onto one or more IPC channels, and a RECEIVE event can be seen as subscribing to a subset of IPC channels. It is possible for multiple IPC peripherals to trigger events onto the same IPC channel at the same time. When two or more events on the same channel occur within tIPC, the events may be merged into a single event as seen from the IPC receiver, so one of the events can be lost. To prevent this, the user must ensure that events on the same IPC channel do not occur within tIPC of each other. When implementing firmware data structures, such as queues or mailboxes, this can be done by using a separate IPC channel for acknowledgements.

An IPC event often does not contain any data itself, it is used to signal other MCUs that something has occurred. Data can be shared through shared memory, for example in the form of a software implemented mailbox, or command/event queues. It is up to software to assign a logical functionality to an IPC channel. For instance, one IPC channel can be used to signal that a command is ready to be executed, and any processor in the system can subscribe to that particular IPC channel and decode/execute the command.

General purpose memory

The GPMEM registers can be used freely to store information. These registers are accessed like any other of the IPC peripheral's registers. Note that the contents of the GPMEM registers are not shared between instances of the peripheral; writing the GPMEM register of one peripheral does not change the value in another.

Inter-Process Communication (IPC) allows isolated processes to communicate securely and is key to building more complex applications.

Tauri uses a particular style of Inter-Process Communication called Asynchronous Message Passing, where processes exchange requests and responses serialized using some simple data representation. Message Passing should sound familiar to anyone with web development experience, as this paradigm is used for client-server communication on the internet.

Message passing is a safer technique than shared memory or direct function access because the recipient is free to reject or discard requests as it sees fit. For example, if the Tauri Core process determines a request to be malicious, it simply discards the request and never executes the corresponding function.

In the following, we explain Tauri's two IPC primitives - Events and Commands - in more detail.

Events

Events are fire-and-forget, one-way IPC messages that are best suited to communicate lifecycle events and state changes. Unlike Commands, Events can be emitted by both the Frontend and the Tauri Core.

sequenceDiagram
    participant F as Frontend
    participant C as Tauri Core
    F-)+C: IPC request
    note over C: Perform computation, write to file system, etc.
    C-)-F: Response

This document describes interprocess communication (IPC) and is applicable to AIX versions 4 and 5.


What is interprocess communication?

Official definition: Interprocess communication (IPC) is used for programs to communicate data to each other and to synchronize their activities. Semaphores, shared memory, and internal message queues are common methods of interprocess communication.

What it means: IPC is a method for two or more separate programs or processes to communicate with each other. This avoids using real disk-based files and the associated I/O overhead to pass information. Like a file, you must first create or open the resource, use it and close it. Like real files, the resources have an owner, a group, and permissions. Until you remove the resource it continues to exist. Unlike real disk-based files, semaphores, message queues and shared memory do not persist across reboots.


Reasons to use interprocess communication

Use IPCs when you need to talk between programs, you want the talking to be fast, and you do not want to write the code to manage the low-level details of communication between the processes. Since these are kernel routines, the kernel will take care of the details of the communication. For example, when you are waiting for a resource that is protected by a semaphore to become available, if you request access and the resource is currently in use, the kernel will place you in a waiting queue. When the resource becomes available, the kernel unblocks your process and you can continue. The kernel also ensures that operations are atomic, which means that a test and increment operation to set a semaphore cannot be interrupted.


Similarities and differences in IPC calls

All of the resources in the interprocess communication routines act in a similar manner and the function calls used are very similar:

Functionality                                        Message Queue    Semaphore    Shared Memory

Allocate an IPC; gain access to an IPC               msgget           semget       shmget
Control an IPC; obtain/modify status                 msgctl           semctl       shmctl
  information, remove an IPC
IPC operations: send/receive messages,               msgsnd, msgrcv   semop        shmat, shmdt
  perform semaphore operations,
  attach/free a shared memory segment

IPC routines differ in the amount of data that is manipulated:

  • A semaphore is a long integer, that is, a single number.
  • A message queue can contain up to 4096 characters.
  • A shared memory region can be up to 1 TB long (AIX 5.2, 64-bit application).

How to see what is currently active

Two command line utilities are available:

ipcrm removes semaphores, message queues and shared memory areas from the system.

ipcs shows status of semaphores, message queues and shared memory.

The ipcrm command is a front end for the shmctl, semctl, and msgctl system calls. Depending on what flags are passed to the command, and if the caller has the proper permissions, it marks the proper resource for deletion. As with disk-based files, if a process is currently using the resource, the resource remains available to that process until it detaches from it.

The ipcs command is used to view current status:

ipcs -am (shared memory)

ipcs -aq (message queues)

ipcs -as (semaphores)

Fields common to each of the IPC types

Some of the fields are common to each of the IPC types:

T      Type: m (shared memory), s (semaphore), or q (message queue).
ID     The identifier for the entry, similar to a file descriptor. It is used by the operations function calls to access the resource after a get is performed on it.
KEY    Similar to a file name, this is what the routine uses to get, or open, the resource. When you get this name, the return value is the ID. If the key is 0xFFFFFFFF (IPC_PRIVATE), the entry can only be used by related (parent/child) processes.
MODE   Permissions and status. The meaning of the field differs for each of the IPC types.

The first two characters of the MODE field can be one of R, S, C, or D:

R and S   These apply to message queues and indicate a process is waiting on a message send (S) or receive (R) call.
D         Indicates the shared memory segment has been deleted, but it will not disappear until the last process attached to it releases it.
          NOTE: The key is changed to IPC_PRIVATE, that is, all zeros (0), when the segment has been marked for removal but still has attached processes.
C         This shared memory segment will be cleared when the first process attaches to it.
-         Flag not set.

The next nine characters are permission bits: r indicates read access for that position; w indicates either write or alter access, depending on the IPC facility; - indicates no permission for that operation.

IPCs need permissions for owner, group and other.

OWNER    The login name of the owner
GROUP    The name of the group that owns the entry
CREATOR  The login name of the creator of the entry
CGROUP   The group of the creator of the entry
CTIME    The time when the associated entry was created or changed

Headings that are special to message queues

CBYTES   The current total byte count of all messages in the queue
QNUM     The number of messages currently in the queue
QBYTES   The maximum number of bytes allowed for all messages in the queue
LSPID    The ID of the last process that sent a message to the queue
LRPID    The ID of the last process that received a message from the queue
STIME    The time the last message was sent to the queue
RTIME    The time the last message was received, or read, from the queue

Headings that are special to semaphores

NSEMS    The number of semaphores in the set for this entry
OTIME    The time the last operation was done on this semaphore entry

Headings that are special to shared memory

NATTCH   The current number of processes attached to this segment
SEGSZ    The size of the segment in bytes
CPID     The process ID of the creator of the segment
LPID     The process ID of the last process to either attach to or detach from this segment
ATIME    The time of the last attach to the segment
DTIME    The time of the last detach from the segment


Semaphores

Semaphores are specialized data structures used to coordinate access to a non-sharable resource. Cooperating, or possibly competing, processes use semaphores to determine if a specific resource is available. If a resource is unavailable, by default, the system will place the requesting process in an associated queue. The system will notify the waiting process when the resource is available. This alleviates the process from using polling to determine the availability of the resource.

Most often, semaphores are used for process synchronization, or as software locks. Semaphores are normally either binary or counting, depending on how they are used. A binary semaphore controls a single resource and is either 0, indicating that the resource is in use, or 1, indicating that the resource is available. A counting semaphore increments and decrements a counter (a non-negative integer) to track how many instances of the controlled resource are currently available.

The system assures that the test and increment operation is atomic, which means it cannot be divided or interrupted.


Message queues

A message queue is used for passing small amounts of information between processes in a structured manner. Information to be communicated is placed in a predefined message structure. The process generating the message specifies its type (user-defined) and places the message in a system-maintained message queue. Processes accessing the message queue can use the message type to selectively read messages of specific types in a first-in-first-out manner. Message queues provide the user with a means of multiplexing data from multiple producers. Non-related processes executing at different times can use a message queue to pass information.


Shared memory

Shared memory allows multiple processes to share virtual memory space. This method is the fastest, but not necessarily the easiest, way for processes to communicate with one another. In general, one process creates or allocates the shared memory segment. The size and access permissions for the segment are set when it is created. The process then attaches, or opens, the shared segment, causing it to be mapped into its current data space. If needed, the creating process then initializes the shared memory. Once created, and if permissions permit, other processes can gain access to the shared memory segment and map it into their data space.

Ordinarily, semaphores are used to coordinate access to a shared memory segment. When a process is finished with the shared memory segment, it can detach from it without deleting it. The creator of the segment may grant ownership of the segment to another process. When all processes are finished with the shared memory segment, the process that created it is usually responsible for removing it.

Information is communicated by accessing shared process data space. This is the fastest method of interprocess communication. Shared memory allows participating processes to randomly access the shared memory segment.


Understanding memory mapping

The speed with which application instructions are processed on a system is proportionate to the number of access operations required to obtain data outside of program-addressable memory. The system provides two methods for reducing the transactional overhead associated with these external read and write operations: you can map file data into the process address space, or you can map processes to anonymous memory regions that may be shared by cooperating processes.

Memory-mapped files provide a mechanism for a process to access files by directly incorporating file data into the process address space. The use of mapped files can significantly reduce I/O data movement since the file data does not have to be copied into process data buffers, as is done by the read and write subroutines. When more than one process maps the same file, its contents are shared among them, providing a low-overhead mechanism by which processes can synchronize and communicate.

Mapped memory regions, also called shared memory areas, can serve as a large pool for exchanging data among processes. The available subroutines do not provide locks or access control among the processes. Therefore, processes using shared memory areas must set up a signal or semaphore control method to prevent access conflicts and to keep one process from changing data that another is using. Shared memory areas can be most beneficial when the amount of data to be exchanged between processes is too large to transfer with messages, or when many processes maintain a common large database.


Mapping files with the shmat subroutine

Mapping can be used to reduce the overhead involved in writing and reading the contents of files. Once the contents of a file are mapped to an area of user memory, the file may be manipulated as if it were data in memory, using pointers to that data instead of input/output calls. The copy of the file on disk also serves as the paging area for that file, which saves paging space.

A program can use any regular file as a mapped data file. You can also extend the features of mapped data files to files containing compiled and executable object code. Because mapped files can be accessed more quickly than regular files, the system can load a program more quickly if its executable object file is mapped to a file.

For more information on using any regular file as a mapped data file, see "Creating a mapped data file with the shmat subroutine" in your online documentation.
