Concepts for Structuring a Network

Designing and Planning a Network

To obtain optimum performance in a network, it is essential to plan the network beforehand. This applies both to the initial installation and to its later expansion. Hastily installed networks not only cause poor network performance; they can even lead to data loss, since limits defined by the standard may not be observed. At first glance, designing a network causes additional costs, but it later reduces maintenance expenditure during operation.

The following sections explain some basic methods for determining a suitable network structure and give hints on how to determine network utilization and performance.

Models

With regard to the network technology, the following three general models are distinguished:

  • Hierarchy model
  • Redundant model
  • Safe model

The selection of the suitable model as a basis for planning a network depends on the specific requirements of the installation. Office networks are typically built on the hierarchy model, since the individual clients rarely exchange data with each other and only periodically contact the server. Installation-internal networks that have no connection to the company network often consist only of automation devices and have no server; the connected controllers transmit data to each other directly and at short intervals. Furthermore, operational safety is more important in installation-internal networks, since faults in data transmission can result in incorrect behavior of the installation or even in production stops. In such cases it is more suitable to choose the redundant model or a safe model.

In the end, all three models are based on the use of switching hubs (switches). Whereas in the past simple hubs were frequently used to set up a network wherever the requirements permitted it, today almost exclusively switches are used. With switches, historic Ethernet rules such as the length restrictions of a collision domain no longer have to be observed, which considerably simplifies the network design. Even though the use of switches might suggest that networks can be expanded indefinitely, it has to be considered that each switch involved in a data transfer adds a delay. The IEEE 802.1D bridging standard therefore recommends limiting the number of switches passed between two terminal devices to a maximum of seven.
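
As a rough plausibility check during planning, the number of switches on the path between two terminal devices can be counted in a simple topology model. The following Python sketch is only an illustration of this idea; the topology, the device names and the helper function are invented for the example and are not part of any standard.

    from collections import deque

    # Hypothetical topology as an adjacency list. Switch names start with "SW";
    # all other nodes are terminal devices.
    TOPOLOGY = {
        "PLC1": ["SW1"],
        "PC1":  ["SW3"],
        "SW1":  ["PLC1", "SW2"],
        "SW2":  ["SW1", "SW3"],
        "SW3":  ["SW2", "PC1"],
    }

    def switches_between(src, dst, topology):
        """Number of switches on the shortest path from src to dst."""
        queue = deque([(src, [src])])
        visited = {src}
        while queue:
            node, path = queue.popleft()
            if node == dst:
                return sum(1 for hop in path if hop.startswith("SW"))
            for neighbor in topology[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append((neighbor, path + [neighbor]))
        raise ValueError(f"no path between {src} and {dst}")

    hops = switches_between("PLC1", "PC1", TOPOLOGY)
    print(f"{hops} switches between PLC1 and PC1")
    if hops > 7:
        print("Warning: exceeds the recommended maximum of 7 switches")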

Hierarchy Model

The hierarchy model subdivides the network into several levels and grades the data rate between the individual levels. For this purpose, normally at least two grades are used, e.g. by connecting the server to the network with a data rate of 100 Mbit/s and the clients with 10 Mbit/s. The advantage of this design is that the server has ten times the bandwidth of the clients available, which enables it to provide sufficient bandwidth and response time for several clients. Although ten times the bandwidth does not mean that ten clients can access the server simultaneously, the data transmitted to or from a client occupies the server link for only one tenth of the time. In total, this reduces the response time for each individual client.
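
The effect of grading the data rate can be estimated with a short calculation. The following Python sketch compares how long the server link and a client link are occupied by the same frame; the frame size and the number of clients are arbitrary example values, and the preamble and inter-frame gap are ignored here for simplicity.

    # Example values (chosen for illustration only)
    frame_bits = 1518 * 8        # one maximum-size Ethernet frame, without preamble/gap
    client_rate = 10e6           # client link: 10 Mbit/s
    server_rate = 100e6          # server link: 100 Mbit/s
    clients = 10

    t_client = frame_bits / client_rate   # time the frame occupies a client link
    t_server = frame_bits / server_rate   # time the same frame occupies the server link

    print(f"client link busy for {t_client * 1e6:.1f} us per frame")
    print(f"server link busy for {t_server * 1e6:.1f} us per frame")
    # Even if all ten clients send one frame each, the server link has forwarded
    # them in roughly the time a single client link needs for one frame:
    print(f"{clients} frames on the server link take {clients * t_server * 1e6:.1f} us")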

Figure: Hierarchy model
1 Server   2 Switch   3 Client

When dimensioning the individual levels, the utilization of each level has to be considered. Devices connected to each other via hubs can only be operated in half-duplex mode and consequently have to share the commonly used network (shared medium). If the utilization of such a shared medium is higher than 40 % over a longer period of time, a switch should be used instead of a hub in order to subdivide the collision domain and thus relieve the load on it. The utilization threshold within such a switched medium is 80 %. If this value is exceeded, the utilization should be reduced by forming smaller groups.
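
The two utilization thresholds mentioned above can be expressed as a simple rule of thumb. The following Python sketch only restates this rule; the function name and the way the utilization value is obtained are assumptions made for the example.

    def segmentation_advice(utilization_percent, shared_medium):
        """Rule of thumb: 40 % threshold for a shared medium (hub),
        80 % threshold for a switched medium."""
        if shared_medium and utilization_percent > 40:
            return "replace the hub with a switch to subdivide the collision domain"
        if not shared_medium and utilization_percent > 80:
            return "form smaller groups (further segmentation) to relieve the load"
        return "utilization is within the recommended range"

    print(segmentation_advice(55, shared_medium=True))
    print(segmentation_advice(85, shared_medium=False))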

Redundant Model

The meshed Ethernet structure is a typical example of a redundant network model. To obtain fault tolerance, several connections are established between switches or nodes. This way, data exchange can continue via another (redundant) connection if one connection fails.

Figure: Redundant model
1 Server   2 Switch   3 Client

However, this meshed constellation leads to loops, which would make well-ordered data exchange impossible: broadcast or multicast packets would stray endlessly in the network. In order to suppress such loops, the spanning tree mechanism (see Network Components) is used, which activates only one unique connection and deactivates all other possible connections. When a fault occurs (e.g. caused by an interruption of the network line), a redundant connection is re-activated and maintains communication between the switches. However, switching over is not interruption-free; the time needed for the switchover depends on the size and structure of the network.

Link aggregation, often also called “trunking”, likewise provides increased transmission reliability. Link aggregation is the parallel connection of several data lines: the bandwidths of the individual lines are bundled in order to increase the total bandwidth. In addition, the parallel connection provides redundancy. If one data line fails, the data can still be transmitted via the remaining lines, although with reduced bandwidth.

Safe Models

To protect the transmitted data against unauthorized access to a certain degree, or to optimize network utilization, it is advisable to set up so-called Virtual Bridged Local Area Networks (VLANs). In a VLAN the data flow is grouped. The simplest variant is obtained by port-based grouping, which means that particular ports of a switch are assigned to a VLAN and data exchange then only takes place within this VLAN. A VLAN can be considered as a group of terminal stations that communicate as in a usual LAN, although they can be located in different physical segments. In the end, establishing VLANs limits the broadcast domains: all subscribers of a VLAN only receive packets that have been sent by subscribers of the same VLAN, and independent of their physical location, all subscribers of a VLAN are logically combined into one broadcast domain. Limiting the broadcast domains relieves load on the network and provides safety, since only the members of the VLAN are able to receive the packets.

Figure: Safe models

In order to enable a terminal device connected to a switch to exchange data beyond the borders of its VLAN, the corresponding switch port has to be assigned to several VLANs. Apart from the simple variant of the port-based VLAN, it is also possible to establish VLANs by evaluating additional information contained in the Ethernet frames.
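
The port-based grouping can be pictured as a table that assigns each switch port to one or more VLANs. The following Python sketch is only a conceptual model of this assignment; the port numbers and VLAN IDs are invented for the example, and this is not switch configuration syntax.

    # Hypothetical port-to-VLAN assignment of an 8-port switch.
    # Port 1 is a member of both VLANs, so it can exchange data beyond one VLAN.
    port_vlans = {
        1: {10, 20},
        2: {10}, 3: {10}, 4: {10},
        5: {20}, 6: {20}, 7: {20}, 8: {20},
    }

    def may_exchange(port_a, port_b):
        """Two ports may exchange frames only if they share at least one VLAN."""
        return bool(port_vlans[port_a] & port_vlans[port_b])

    def broadcast_domain(port):
        """All ports that receive a broadcast sent into this port's VLANs."""
        return {p for p, vlans in port_vlans.items() if vlans & port_vlans[port]}

    print(may_exchange(2, 3))           # True  - both ports are in VLAN 10
    print(may_exchange(2, 5))           # False - different VLANs
    print(sorted(broadcast_domain(5)))  # [1, 5, 6, 7, 8]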

Utilization and Performance

In the description of the network models it has already been mentioned that hubs should be replaced by switches in order to subdivide the collision domain and thus relieve load if the utilization of a shared medium is higher than 40 % over a longer period of time. If the utilization within such a switched medium is permanently above 80 %, it is recommended to relieve the load further by forming smaller groups.

However, a network should generally not be dimensioned for peak (burst) utilization. During normal operation, many small packets are usually transmitted rather than large data streams, so the load on the available bandwidth is not as high. Nevertheless, if bottlenecks occur, the simplest way to eliminate them is to increase the data rate (e.g. from 10 Mbit/s to 100 Mbit/s). In existing networks, however, this is not always possible without problems, since the cable infrastructure may not be suitable for the higher data rate and the expense of new cabling may not be justifiable. The only solution in such cases is to segment the network, which reduces the number of devices within the network or collision domain and thus provides more bandwidth for the remaining devices.

A network can be segmented with routers, bridges or switches. However, segmentation is only meaningful if the 80/20 rule is considered and observed. The 80/20 rule states that 80 % of the data traffic should take place within the segment and only 20 % should be forwarded to another segment. A prior analysis of the network traffic is therefore required to enable meaningful grouping. This analysis has to determine which station communicates with which other stations in the network and how much data flows in this communication. For shared media, if a division based on the communication paths is not possible, the network should be divided such that stations producing roughly the same load are grouped in one collision domain. This ensures that stations with low data traffic can meet the typical requirements regarding short response times; stations with permanently high data traffic generally cause a drastic increase of the response times.
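
Whether a proposed grouping satisfies the 80/20 rule can be checked on a measured or estimated traffic matrix. The following Python sketch uses an invented byte-count matrix and an invented grouping purely as an example of the kind of calculation such an analysis implies.

    # Assumed traffic matrix: bytes per second exchanged between station pairs.
    traffic = {
        ("PLC1", "PLC2"): 40_000,
        ("PLC1", "HMI1"): 10_000,
        ("PLC2", "HMI1"): 8_000,
        ("PLC1", "Server"): 5_000,
        ("HMI1", "Server"): 7_000,
    }

    # Proposed segmentation: station -> segment name.
    segments = {"PLC1": "cell", "PLC2": "cell", "HMI1": "cell", "Server": "office"}

    def local_traffic_share(traffic, segments):
        """Fraction of the traffic that stays inside its segment (target: >= 0.8)."""
        local = sum(b for (a, c), b in traffic.items() if segments[a] == segments[c])
        return local / sum(traffic.values())

    share = local_traffic_share(traffic, segments)
    print(f"{share:.0%} of the traffic stays within its segment")
    if share < 0.8:
        print("80/20 rule violated - reconsider the grouping")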

The best performance increase is obtained by using switches and connecting each station directly to a switch. This way, each station has its own connection to a switch and can use the full bandwidth of a port in full-duplex mode. This subdivision and the provision of dedicated connections is called micro-segmentation. For micro-segmentation the 80/20 rule no longer applies; it only has to be ensured that the switch provides sufficient internal bandwidth.

Figure: Direct connection of all stations to switches

In order to plan a network with optimum performance, we first have to consider what a network can achieve at all. Based on the standards, it can be determined how much data can theoretically be transmitted over a network per unit of time. The smallest Ethernet frame is 64 bytes long and contains 46 bytes of user data; the maximum frame is 1518 bytes long with 1500 bytes of user data. Each frame additionally requires 64 bits for the preamble and 96 bits for the inter-frame gap. This results in a minimum length of 672 bits (64 x 8 + 64 + 96) and a maximum length of 12 304 bits (1518 x 8 + 64 + 96). The transmission of one bit takes 10 ns for Fast Ethernet (100 Mbit/s) and 100 ns for Ethernet (10 Mbit/s). Using these values, we can calculate how many frames of minimum and maximum length can theoretically be transmitted per second (see tables). Calculating the corresponding amount of user data that can be transmitted (without taking into account the additional overhead of the higher protocols) then shows the considerably higher protocol overhead caused by small frames.
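
The values in the following tables can be reproduced with a few lines of Python. The sketch below only restates the arithmetic described above (frame length including preamble and inter-frame gap, bit time, frames per second, user data per second and the resulting net bandwidth); minor deviations from the table values are possible because the tables round the frame rate before multiplying.

    PREAMBLE_BITS = 64
    INTER_FRAME_GAP_BITS = 96

    def throughput(frame_bytes, user_bytes, bit_time_ns):
        """Frames/s, user bytes/s and net bandwidth for one frame size."""
        frame_bits = frame_bytes * 8 + PREAMBLE_BITS + INTER_FRAME_GAP_BITS
        frame_time_ns = frame_bits * bit_time_ns
        frames_per_s = 1e9 / frame_time_ns
        user_bytes_per_s = frames_per_s * user_bytes
        line_rate_bit_s = 1e9 / bit_time_ns            # 10 or 100 Mbit/s
        net_bandwidth = user_bytes_per_s * 8 / line_rate_bit_s
        return frames_per_s, user_bytes_per_s, net_bandwidth

    for label, frame, user in [("min. frame", 64, 46), ("max. frame", 1518, 1500)]:
        for rate, bit_ns in [("10 Mbit/s", 100), ("100 Mbit/s", 10)]:
            f, u, n = throughput(frame, user, bit_ns)
            print(f"{label} at {rate}: {f:9.0f} frames/s, "
                  f"{u:12.0f} bytes/s user data, net bandwidth {n:.1%}")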

Data rate at 10 Mbit/s:

              Length   Time/bit   Time/frame    Frames   User data/frame    User data
              [bits]       [ns]         [ns]     [1/s]           [bytes]    [bytes/s]
  min. frame     672        100       67 200    14 880                46      684 480
  max. frame  12 304        100    1 230 400       813             1 500    1 219 500
Data rate at 100 Mbit/s:

              Length   Time/bit   Time/frame    Frames   User data/frame    User data
              [bits]       [ns]         [ns]     [1/s]           [bytes]    [bytes/s]
  min. frame     672         10        6 720   148 800                46    6 844 800
  max. frame  12 304         10      123 040     8 127             1 500   12 190 500

The corresponding net bandwidth can be calculated as the ratio of the amount of user data transmitted per second to the available network bandwidth. This percentage is independent of the transmission rate; in the following table it is calculated for a transmission rate of 100 Mbit/s as an example.

Net bandwidth at 100 Mbit/s:

                 User data   Network bandwidth   Net bandwidth
                  [bits/s]            [bits/s]             [%]
  min. frame    54 758 400         100 000 000            54.7
  max. frame    97 524 000         100 000 000            97.5

These calculations show that the usable share of the network bandwidth is considerably higher when larger frames are transmitted. The efficiency of the data transmission, which is independent of the transmission rate, is shown in the following table for some selected frame sizes. However, the values given in the table only consider the protocol overhead of the MAC and network layers; the user data is reduced further by the additional overhead of the corresponding higher layers.

Efficiency of data transmission:

  User data   Frame size   Overhead   Efficiency
    [bytes]      [bytes]        [%]          [%]
      1 500        1 518        1.2         98.8
        982        1 000        1.8         98.2
        494          512        3.6         96.4
         46           64       39.1         60.9

For small, closed networks inside an installation to which only automation devices are connected, it may still be possible to calculate the typically transmitted frame sizes. But if, for instance, PCs are additionally connected to the network (even only temporarily), the frame sizes can vary considerably. This makes it impossible to calculate the bandwidth exactly or to make a precise statement about the performance. However, the following guide values have been determined in various studies on network performance.

  • For low utilization (0 to 50 % of the available bandwidth), short response times can be expected. The stations are typically able to send frames with a delay of less than 1 ms.
  • For medium utilization (50 to 80 %), the response times can increase to values between 10 and 100 ms.
  • For high utilization (over 80 %), long response times with a wide spread must be expected. Sending a frame can take up to 10 seconds.

This is why the following principles should be observed when designing an Ethernet network.

  • Mixed operation of stations that have to transmit high data volumes and stations that have to operate with short response times (real time) should be avoided. Due to the wide spread of response times, short response times cannot be guaranteed in such combinations.
  • As few stations as possible should be located inside one collision domain. For this purpose, collision domains should be subdivided using switching hubs (switches).
