Dynamic Interconnection Networks
- Dynamic interconnection networks connect processing nodes and memory nodes through switching element nodes.
- They are scalable because the connections can be reconfigured before or even during the execution of a parallel program.
- Instead of fixed connections, switches or arbiters are used.
- Dynamic networks are normally used in shared memory (SM) multiprocessors.
- These networks use configurable paths and do not have a processor associated with each node.
- Switches or arbiters must be placed along the connecting paths to provide dynamic connectivity instead of using fixed connections.
- The cost of dynamic interconnection networks is attributed to the wires, switches, arbiters, and connectors required.
Shared Path Networks: Shared Bus
The shared bus organization is simply an extension of the buses employed in uniprocessors. It contains the same bus lines (address, data, control, interrupt) and some additional ones to resolve contention when several processors simultaneously want to use the shared bus.
These additional lines are called arbitration lines and play a crucial role in the implementation of shared buses. Secondly, the shared bus is a very cost-effective interconnection scheme: raising the number of processors does not increase the price of the shared bus. However, contention on the shared bus places a strong limitation on the number of applicable processors.
Obviously, as the number of processors on the bus increases, the probability of contention also increases proportionally, reaching a point where the whole bandwidth of the bus is exhausted by the processors; hence, adding a new processor will not cause any potential speed-up in the multiprocessor. One of the main design issues in shared bus multiprocessors is increasing the number of applicable processors by different methods.
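The saturation effect described above can be illustrated with a minimal model (an assumption for illustration, not from the text): if each processor requests the bus with some probability per cycle, the offered load grows linearly with the processor count, but a single bus can serve at most one transaction per cycle.

```python
# Illustrative model (hypothetical numbers): each of n processors requests
# the shared bus in a given cycle with probability p. Offered load n*p grows
# linearly, but the single bus serves at most one request per cycle, so
# useful throughput saturates at 1 -- adding processors beyond that point
# yields no further speed-up.

def bus_throughput(n, p):
    """Useful bus transactions per cycle for n processors with request rate p."""
    offered_load = n * p
    return min(offered_load, 1.0)  # the single shared bus serves at most 1/cycle

if __name__ == "__main__":
    p = 0.2  # assumed per-cycle request probability per processor
    for n in (1, 2, 5, 10, 20):
        print(f"{n:2d} processors -> throughput {bus_throughput(n, p):.2f}")
```

With p = 0.2 the bus is already saturated at 5 processors; the next 15 processors add offered load but no throughput, which is exactly the limitation the techniques below try to push back.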
The three most important techniques are as follows -
• Introducing private memory.
• Introducing coherent cache memory.
• Introducing multiple buses.
Without these improvements, the applicable number of processors is in the range of 3-5. By introducing private memory and coherent cache memory, the number of processors can be increased by an order of magnitude, up to about 30 processors. Bus hierarchies open the way to constructing scalable shared memory systems based on bus interconnection.
According to the state of the bus request lines and the applied bus allocation policy, the arbiter grants the bus to one of the requesters via the grant lines.
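The arbitration step can be sketched with a simple round-robin policy (one possible allocation policy; the text does not fix a particular one). The `requests` list mirrors the bus request lines, one entry per processor, and the returned index corresponds to asserting that processor's grant line.

```python
# Minimal round-robin bus arbiter sketch (assumed policy, for illustration).
# requests[i] is True when processor i asserts its bus request line;
# last_granted is the index of the most recently served processor.

def round_robin_grant(requests, last_granted):
    """Grant the first requester after `last_granted`, wrapping around."""
    n = len(requests)
    for offset in range(1, n + 1):
        candidate = (last_granted + offset) % n
        if requests[candidate]:
            return candidate  # assert this processor's grant line
    return None  # no processor is currently requesting the bus

# Example: processors 1 and 3 request; 1 was served last, so 3 is granted next.
print(round_robin_grant([False, True, False, True], last_granted=1))
```

Round-robin guarantees that no requester is starved; a fixed-priority arbiter would be simpler in hardware but can starve low-priority processors under heavy contention.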
Although uniprocessor and multiprocessor buses are very similar, there is an important difference in their mode of operation. Uniprocessor and first-generation multiprocessor systems (e.g., MultiBus, VMEbus) use locked buses, whereas second-generation multiprocessors use pended buses.
A memory write access needs two phases, as follows -
Phase 1: The address and data are transferred via the bus to the memory controller.
Phase 2: The memory write operation, including parity check, error correction, and so on, is executed by the memory controller.
The exploitation of the fast bus can be further improved by optimizing memory read access. A memory read access consists of three phases, as follows -
• The address is transferred via the bus to the memory controller.
• The memory read operation is executed by the memory controller.
• The data is transferred via the bus to the requesting processor.
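The benefit of a pended bus over a locked bus can be sketched with hypothetical cycle counts: on a pended bus, only the address and data transfer phases occupy the bus, while the memory controller's internal operation proceeds with the bus released for other processors.

```python
# Sketch with assumed timings (not from the text) contrasting locked and
# pended buses for the three-phase memory read described above.

ADDRESS_XFER = 1   # cycles the bus is held to send the address
MEMORY_OP    = 3   # cycles the controller works internally
DATA_XFER    = 1   # cycles the bus is held to return the data

def bus_busy_cycles(reads, pended):
    """Total bus-occupied cycles for `reads` memory read accesses."""
    if pended:
        # Pended bus: released during the memory operation phase,
        # so only the two transfer phases occupy it.
        return reads * (ADDRESS_XFER + DATA_XFER)
    # Locked bus: held for the whole three-phase access.
    return reads * (ADDRESS_XFER + MEMORY_OP + DATA_XFER)

print(bus_busy_cycles(10, pended=False))  # locked bus: 50 cycles held
print(bus_busy_cycles(10, pended=True))   # pended bus: 20 cycles held
```

Under these assumed timings the pended bus is occupied for less than half as many cycles, which is why second-generation multiprocessors could support more processors on the same bus.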



