Ethernet plays an important role as a communication platform for automation systems. The key strength of Ethernet is its ability to run many protocols simultaneously on the same network. Ethernet brings flexibility, scalability, and performance to industrial networks in a way not previously seen in automated systems.
Principle of Operation of Ethernet Networks
Ethernet is a random access (RA) network, also referred to as a carrier sense multiple access (CSMA) network. Each node listens to the network and can start transmitting at any time the network is free. Typically, once the network becomes clear, a node must wait a specified amount of time (the interframe time) before sending a message. To reduce collisions, nodes wait an additional random amount of time, called the backoff time, before they start transmitting. Some types of messages, such as medium access control (MAC) layer acknowledgement packets (ACKs), may be sent after a shorter interframe time. Priorities can be implemented by allowing shorter interframe times for higher-priority traffic. Nevertheless, if two nodes start sending messages at exactly the same time, or if the second node starts transmitting before the first message reaches it, there will be a collision on the network. Collisions in an Ethernet network are destructive: the data is corrupted, and the messages must be resent.
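To make the carrier-sense and interframe-gap behaviour concrete, here is a minimal Python sketch of the decision a node makes before transmitting. The gap values, the backoff range, and the `time_until_transmit` helper are illustrative assumptions, not values taken from the standard.

```python
import random

# Illustrative interframe gaps in microseconds; the exact values depend on the
# Ethernet variant and are assumptions for this sketch, not standard figures.
IFG_HIGH_PRIORITY_US = 10   # e.g. MAC-layer ACKs may wait a shorter gap
IFG_NORMAL_US = 96

def time_until_transmit(idle_time_us, high_priority=False):
    """Return how much longer a node should wait before transmitting.

    The node must first see the medium idle for the full interframe gap,
    then defer an additional random backoff so that two waiting nodes are
    unlikely to start at exactly the same instant.
    """
    ifg = IFG_HIGH_PRIORITY_US if high_priority else IFG_NORMAL_US
    remaining_gap = max(0, ifg - idle_time_us)
    random_backoff = random.uniform(0, 50)  # arbitrary illustrative range
    return remaining_gap + random_backoff

# A normal-priority frame after the medium has been idle for 40 us:
print(time_until_transmit(40))
# A high-priority ACK may be sent after a shorter gap:
print(time_until_transmit(40, high_priority=True))
```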
Wired Ethernet Networks: Types & Operations
There are several types of wired Ethernet network; below we look at each type and its basic principle of operation:
Hub-Based Ethernet
Hub-based Ethernet employs hub(s) to interconnect the devices on a network. This Ethernet type is commonly used in the office environment.
When a packet comes into one hub interface, the hub simply broadcasts the packet to all the other hub interfaces. Therefore, all the devices on the same network receive the same packet simultaneously, and message collisions are possible.
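Conceptually, a hub is just a multi-port repeater; the following one-function sketch (port numbering assumed for illustration) captures its flooding rule:

```python
def hub_forward(num_ports, in_port):
    """A hub repeats an incoming frame on every port except the one it
    arrived on, so every attached device sees all traffic."""
    return [port for port in range(num_ports) if port != in_port]

print(hub_forward(8, in_port=2))  # -> [0, 1, 3, 4, 5, 6, 7]
```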
Collisions are dealt with using the carrier sense multiple access with collision detection (CSMA/CD) protocol specified in the IEEE 802.3 standard. The protocol operates as follows: when a node wants to transmit, it listens to the network. If the network is busy, the node waits until the network is free; otherwise, it can transmit immediately (assuming an interframe delay has elapsed since the last message on the network). If two or more nodes listen to the idle network and decide to transmit simultaneously, their messages collide and are corrupted. While transmitting, a node must keep listening so that it can detect a collision. On detecting a collision, a transmitting node sends 32 jam bits and then waits a random length of time before retrying its transmission. This random time is determined by the standard binary exponential backoff algorithm: the retransmission delay is randomly selected between 0 and 2^i − 1 slot times, where i denotes the i-th collision event detected by the node and one slot time is the minimum time needed for a round-trip transmission. After 10 collisions, the interval is capped at a maximum of 1023 slot times. After 16 collisions, the node stops attempting to transmit and reports the failure back to the node's microprocessor; further recovery may be attempted in higher layers.
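The binary exponential backoff calculation described above can be sketched as follows; the slot time constant and the helper names are assumptions chosen for illustration.

```python
import random

SLOT_TIME_US = 51.2        # classic 10 Mbit/s slot time, assumed for illustration
MAX_BACKOFF_EXPONENT = 10  # the interval stops growing after 10 collisions
MAX_ATTEMPTS = 16          # give up and report failure after 16 collisions

def backoff_delay_us(collision_count):
    """Return the random retransmission delay after `collision_count` collisions,
    or None if the node should give up (binary exponential backoff)."""
    if collision_count >= MAX_ATTEMPTS:
        return None  # failure is reported back to the node's processor
    exponent = min(collision_count, MAX_BACKOFF_EXPONENT)
    slots = random.randint(0, 2 ** exponent - 1)  # at most 1023 slots
    return slots * SLOT_TIME_US

# Delays grow (on average) as collisions accumulate, then the node gives up.
for i in (1, 3, 10, 16):
    print(i, backoff_delay_us(i))
```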
The Ethernet data payload size is between 46 and 1500 bytes. There is a non-zero minimum data size because the standard requires valid frames to be at least 64 bytes long, which includes 18 bytes of header and trailer overhead. If the data portion of the frame is less than 46 bytes, a pad field is used to fill the frame out to the minimum size.
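A minimal sketch of this padding rule (the 46- and 1500-byte limits come from the frame format above; the function itself is illustrative):

```python
MIN_PAYLOAD = 46    # bytes; 64-byte minimum frame minus 18 bytes of header/FCS
MAX_PAYLOAD = 1500  # bytes

def pad_payload(data: bytes) -> bytes:
    """Pad short payloads up to the 46-byte minimum; reject oversized ones."""
    if len(data) > MAX_PAYLOAD:
        raise ValueError("payload exceeds 1500 bytes")
    if len(data) < MIN_PAYLOAD:
        data = data + bytes(MIN_PAYLOAD - len(data))  # zero-filled pad field
    return data

print(len(pad_payload(b"hello")))  # -> 46
```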
Switched Ethernet
Switched Ethernet utilizes switches to segment the network into separate collision domains, thereby avoiding collisions, increasing network efficiency, and improving determinism. This type of Ethernet is widely employed in manufacturing applications.
The key difference between switch-based and hub-based Ethernet networks is the intelligence applied when forwarding packets. Hubs simply pass incoming traffic from any port to all other ports, whereas switches learn the topology of the network and forward packets to the destination port only.
In a star-like network layout, every node is connected to the switch by a single cable forming a full-duplex point-to-point link. Hence, collisions can no longer occur on any network cable; switched Ethernet relies on this star topology to achieve its collision-free property.
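To illustrate the learning and forwarding behaviour described above, here is a minimal sketch of a switch's MAC address table; the class, port numbering, and MAC strings are assumptions for illustration, not any vendor's API.

```python
class LearningSwitch:
    """Minimal sketch of how a switch learns source MACs and forwards frames."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC is reachable via the port the frame arrived on.
        self.mac_table[src_mac] = in_port
        # Forward: to the known destination port only; flood if unknown
        # (a hub, by contrast, always floods to every other port).
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"))  # unknown -> flood
print(sw.handle_frame(1, "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:01"))  # learned -> [0]
```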
Switches use the cut-through or store-and-forward techniques to forward packets from one port to another, using per-port buffers for packets waiting to be sent on that port.
Switches employing the cut-through technique read only the destination MAC (medium access control) address and immediately forward the packet to the destination port according to that address and the forwarding table on the switch. Switches that employ the store-and-forward technique instead receive the complete packet and check it using the CRC (cyclic redundancy check) code; the switch verifies that the frame was transmitted correctly before forwarding it to the destination port, and discards the frame if an error is found. Store-and-forward switches are slower, but they will not forward corrupted packets.
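The store-and-forward check can be sketched roughly as follows, using Python's zlib.crc32 (the same CRC-32 polynomial as the Ethernet frame check sequence, bit-ordering details aside); the function and frame layout are illustrative assumptions.

```python
import zlib

def store_and_forward(frame_bytes: bytes, received_fcs: int) -> bool:
    """Buffer the whole frame, verify its CRC-32, and only then forward it.

    Returns True if the frame may be forwarded, False if it must be discarded.
    A cut-through switch would instead begin forwarding as soon as the
    destination MAC address (the first 6 bytes) has been read.
    """
    computed_fcs = zlib.crc32(frame_bytes) & 0xFFFFFFFF
    return computed_fcs == received_fcs

payload = b"\xaa" * 64
good_fcs = zlib.crc32(payload) & 0xFFFFFFFF
print(store_and_forward(payload, good_fcs))            # True  -> forward
print(store_and_forward(payload + b"\x00", good_fcs))  # False -> discard
```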
Though there are no message collisions on the network, congestion may occur inside the switch when one port suddenly receives a large number of packets from other ports. If the buffers inside the switch overflow, messages will be lost. Three main queuing principles are implemented inside the switch to handle this:
- First in first out (FIFO) queue
- Priority queue
- Per flow queue
The FIFO queue is a conventional method that is simple and fair. However, when network traffic is heavy, the quality of service in terms of timely and fair delivery cannot be guaranteed.
In the priority queueing scheme, the switch inspects fields in the data frames to determine which packets are more important, so the packets can be classified into queues of different priority levels. Queues with high priority are processed first, followed by queues with lower priority, until the buffers are empty.
With the per-flow queueing technique, each traffic flow is given its own queue and the queues are assigned different levels of priority. The queues are then processed one by one according to priority; therefore, flows in higher-priority queues generally see better performance and can potentially block the low-priority queues.
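The difference between FIFO and strict-priority servicing can be sketched as follows; the queue contents and frame labels are illustrative assumptions.

```python
from collections import deque

def dequeue_fifo(queue):
    """Plain FIFO: frames leave the port strictly in arrival order."""
    return queue.popleft() if queue else None

def dequeue_strict_priority(queues):
    """Strict priority: always drain the highest-priority non-empty queue first,
    which can starve (block) the lower-priority queues under heavy load."""
    for q in queues:  # queues[0] is the highest priority in this sketch
        if q:
            return q.popleft()
    return None

# The same four frames held in a single FIFO buffer and in per-priority queues.
fifo = deque(["low-1", "high-1", "low-2", "high-2"])
prio = [deque(["high-1", "high-2"]), deque(["low-1", "low-2"])]

print(dequeue_fifo(fifo))             # -> "low-1"  (arrival order)
print(dequeue_strict_priority(prio))  # -> "high-1" (priority order)
```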
Therefore, even though switched Ethernet avoids the extra delays caused by collisions and retransmissions, it can introduce delays associated with buffering and forwarding.
Industrial Ethernet
There are a number of Ethernet protocols designed specifically for industrial applications. They include:
- EtherNet/IP
- Modbus/TCP
- PROFINET
Even though these three protocol specifications vary to some extent at all levels of the OSI model, they all fundamentally employ or recommend the switched Ethernet described above. Therefore, the differences in performance between industrial Ethernet technologies lie more with the devices than with the protocols.
Real-time Ethernet Networks
In networked control systems, communication networks that can guarantee the delivery of a transmitted message before a preset operational deadline, or within a specified time interval, are termed real-time networks.
Examples of industrial applications that employ real-time Ethernet (RTE) are:
- Motion control e.g. servo motors or drives
- Automotive manufacturing
- Dynamic chemical processing control
RTE protocols first try to address the nondeterminism in communication delay introduced by the CSMA/CD (carrier sense multiple access with collision detection) arbitration algorithm. The common method employed to eliminate the need for collision arbitration is to designate a single management node (master) for the network and to impose a master-slave, poll-response schedule on all connected nodes, e.g. in EtherNet/IP and PROFINET real-time networks. In addition, many real-time protocols mandate the use of specialized network switches to minimize packet collisions. Modern managed switches can significantly improve network determinism and speed by prioritizing Ethernet packets that are generated by a real-time software application, are part of a high-priority communication session, or contain time-sensitive data.
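A minimal sketch of the master-slave, poll-response idea follows; the node names, the `poll` placeholder, and the deadline value are invented for illustration and are not the actual EtherNet/IP or PROFINET mechanisms.

```python
import time

SLAVES = ["drive-1", "drive-2", "io-block-1"]  # hypothetical node names
RESPONSE_DEADLINE_S = 0.001                    # assumed per-node deadline

def poll(slave_id):
    """Placeholder for sending a poll frame and waiting for the reply."""
    return {"slave": slave_id, "status": "ok"}  # hypothetical response

def run_poll_cycle(slaves):
    """Only the master initiates traffic, so nodes never contend for the medium;
    each slave answers only when it is polled."""
    results = {}
    for slave in slaves:
        start = time.monotonic()
        results[slave] = poll(slave)
        if time.monotonic() - start > RESPONSE_DEADLINE_S:
            results[slave] = {"slave": slave, "status": "deadline missed"}
    return results

print(run_poll_cycle(SLAVES))
```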
In order to further ensure that the network delay is deterministic and uniform for all the nodes in the network, some RTE protocols use isochronous communication based on either a passed token or a preset communication table, e.g. EtherCAT, SERCOS, Ethernet Powerlink, TCnet, and CIP Motion.
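The preset communication table approach can be sketched as a repeating, time-slotted cycle; the cycle time, slot assignments, and node names below are invented purely for illustration.

```python
# Hypothetical isochronous cycle: every node owns a fixed time slot that
# repeats each cycle, so transmission times are known in advance.
CYCLE_TIME_US = 1000  # assumed 1 ms cycle

COMMUNICATION_TABLE = [
    # (slot start in us, slot length in us, transmitting node)
    (0,   100, "master-sync"),
    (100, 200, "drive-1"),
    (300, 200, "drive-2"),
    (500, 300, "io-block-1"),
    (800, 200, "reserved"),
]

def transmitter_at(offset_us):
    """Return which node is allowed to transmit at a given offset in the cycle."""
    for start, length, node in COMMUNICATION_TABLE:
        if start <= offset_us < start + length:
            return node
    return None

print(transmitter_at(350))                   # -> "drive-2"
print(transmitter_at(1350 % CYCLE_TIME_US))  # the schedule repeats every cycle
```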