In this brief article, we will discuss what internet generations are: the evolution of cellular mobile telecommunication systems from the first-generation network through the ongoing fifth-generation rollout to the forthcoming sixth-generation networks. We will cover some of the features, limitations, and fundamental technologies of each generation of telecommunication system.
What are Internet Generations?
I am Sourav Khanna, and my area of work is electronics and communications. Before moving to first-generation networks, let us look at the timeline and how these generations are stacked together. The second generation arrived around 1991, so anything before 1991 was the first-generation (1G) network. About ten years later we got 3G, approximately ten years after that we got 4G, and we are presently on the verge of 5G deployment.
After that, there is a forecast time frame for 6G networks. This timeline does not mean that between 1991 and 2001 there was no change in the network; there was considerable change, but it was more of an upgrade than an infrastructural change. For example, in 1991 you had 2G, in between you had 2.5G, also called GPRS, and then you got 3G, and so on. The significant infrastructural change comes roughly every ten years. Let us now look into the features of the first-generation network: the first experience of wireless telephone communication was introduced with 1G.
It was launched under several brand names, including the Advanced Mobile Phone System (AMPS) in the USA in 1983; Europe and Japan had their own protocols. The cell phones were bulky, but the main thing was that they allowed us to communicate over wireless channels. 1G was built on analogue signals and used frequency division multiple access (FDMA).
This is similar to the FM broadcast receiver in your vehicle: different stations are allocated different frequency bands, so if you tune to 88 MHz, it is always the 88 MHz station that transmits the intended data. With this scheme you could make calls freely within a network; for example, if you were in the USA you would be using AMPS, and within AMPS you could have handovers, but not beyond it, since the same infrastructure is needed to perform a handover. The speed was also quite low, 2.4 kilobits per second, but at least good enough for voice communication.
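The FDMA idea above can be sketched in a few lines of Python. This is only an illustration, not an AMPS implementation; the caller names are made up, though the 30 kHz channel width and the band starting near 824 MHz reflect how AMPS channels were laid out.

```python
# Illustrative FDMA sketch: each caller is assigned its own fixed
# frequency band, like FM stations on a radio dial.
CHANNEL_WIDTH_KHZ = 30.0   # AMPS used 30 kHz channels
BASE_FREQ_MHZ = 824.0      # start of an illustrative uplink band

def channel_frequency(channel_index):
    """Centre frequency (MHz) of the n-th FDMA channel."""
    return BASE_FREQ_MHZ + channel_index * CHANNEL_WIDTH_KHZ / 1000.0

# Each active call occupies a distinct channel for its whole duration,
# which is also why an eavesdropper tuned to that band can listen in.
calls = {caller: channel_frequency(i)
         for i, caller in enumerate(["alice", "bob", "carol"])}
print(calls)
```

Because the channel is held for the entire call, capacity is limited by the number of bands, which is one reason later generations moved to time- and code-based sharing.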
Since it was a seminal technology, there were several limitations. The size and weight of the device were large, so it was not very handy. There were battery drainage issues, because many components were not based on modern semiconductor devices and therefore consumed a lot of power. The signal-to-noise ratio (SNR) was low, giving bad voice quality, and encryption and security were poor.
Why did it have poor encryption and security? Because each user was given a specific frequency band, there was a possibility of eavesdropping: somebody else could tap a given channel, hence the security concern. Last but not least, there were real issues with call drops during mobility; that is, performance was terrible during handover, also known as handoff. Why was that?
Assume one base transceiver station is transmitting a signal to a cell phone, the mobile station. As the mobile station moves away, the quality of the signal from the original (old) base station drops. At the same time, the SNR from the second base station increases, so we need algorithms by which the mobile station can be connected to either of the two base stations. If the algorithm is not efficient, we get a lot of call drops, specifically in the boundary region between the two cells. This issue of call drops was fixed later, in the following generations of networks.
GSM Network | What is a GSM Network?
Citing all these limitations, second-generation networks emerged. Second-generation networks also go by the name Global System for Mobile Communications, that is, GSM, launched in 1991 in Finland. Note that a Finnish company, Nokia, has given us a lot of communication infrastructure and cell phones.
Most of us have seen the Nokia 3310. The significant difference between the first and second generations was the move to digital signals: in the second generation, we had digital signals.
For that, we had QPSK (quadrature phase-shift keying), with an in-phase and a quadrature component giving four possible symbols. In any given time frame, you could send a symbol carrying two bits rather than one bit, a major advantage that increased the data rate, as we will see next for 2G.
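The two-bits-per-symbol idea can be made concrete with a small sketch. This is a minimal illustrative mapper, not a full modem: each pair of bits is mapped to one of four points on the in-phase (I) / quadrature (Q) plane, so the same symbol rate carries twice the bits of a one-bit-per-symbol scheme.

```python
import numpy as np

# Minimal QPSK sketch (illustrative Gray-coded constellation): each
# symbol carries 2 bits, mapped to one of four unit-energy points.
QPSK_MAP = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def modulate(bits):
    """Group bits into pairs and map each pair to a QPSK symbol."""
    pairs = zip(bits[0::2], bits[1::2])
    return [QPSK_MAP[p] for p in pairs]

symbols = modulate([0, 0, 1, 1, 1, 0])
print(len(symbols))  # 6 bits become 3 symbols: 2 bits per symbol
```

The doubling of bits per symbol, together with digital error detection, is part of why 2G supported higher data rates than 1G.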
Short message service (SMS) was also enabled, and digital encryption was enforced, so we had more security and encryption in 2G networks compared with the 1G network.
Furthermore, 2G was based on time division multiple access (TDMA) rather than frequency division: time was divided into slots, and each user was given a specific time slot in which to communicate. Moreover, uplink and downlink could transmit in different time slots on the same frequency channel. An essential feature here is the concept of cellular technology.
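The TDMA sharing described above can be sketched as a simple round-robin slot schedule. This is an assumption-laden illustration (the user names and round-robin policy are made up for clarity), though the choice of 8 slots per frame matches the GSM TDMA frame.

```python
# Hypothetical TDMA sketch: a frame is divided into slots, and each
# user may transmit only during its assigned slot (round-robin here).
NUM_SLOTS = 8  # a GSM TDMA frame has 8 time slots

def slot_owner(slot_index, users):
    """Return which user may transmit in a given slot of the frame."""
    return users[slot_index % len(users)]

users = ["user_a", "user_b", "user_c"]
schedule = [slot_owner(i, users) for i in range(NUM_SLOTS)]
print(schedule)
```

Because users take turns in time instead of occupying a private band forever, many users share one frequency channel, unlike FDMA in 1G.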
If you look at this figure, we have different cells. Say one cell uses frequency band f1, and the neighbouring cells shown around it use f2, f3, and f4. All the adjacent cells use different frequency bands so as not to interfere with the given cell. At the same time, you have the capability of reusing f1 in a cell farther away: the signals from that distant base station will have minimal effect on the original one. Note that this is a straightforward structure based on a hexagonal grid, yet co-channel interference is mitigated to a reasonable extent.
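How far apart the two f1 cells must be can be estimated with the classic hexagonal reuse relation D = R·√(3N), where R is the cell radius and N is the cluster size (the four-band f1..f4 pattern above corresponds to N = 4). The numbers below are purely illustrative.

```python
import math

# Sketch of the hexagonal frequency-reuse distance: for a cluster of
# N cells with cell radius R, co-channel cells sit D = R * sqrt(3N)
# apart, which keeps co-channel interference low.
def reuse_distance(cell_radius_km, cluster_size):
    return cell_radius_km * math.sqrt(3 * cluster_size)

# With 2 km cells and the 4-frequency cluster from the figure:
print(round(reuse_distance(2.0, 4), 2))  # ~6.93 km between f1 cells
```

A larger cluster pushes co-channel cells further apart (less interference) but leaves fewer channels per cell, which is the basic capacity trade-off of cellular planning.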
2G networks provide better quality simply because with digital signals we now have ones and zeros, and threshold detection is relatively easy compared with analogue signals. We also have reliable horizontal handoff (handoff is the same as handover, as I mentioned previously). What do we mean by horizontal handoff?
Horizontal handoff is simply a handoff between the same technologies: if you are using a cellular network and you move to another cell of the same cellular network, that kind of switch is a horizontal handoff. There is also vertical handoff: if you are on a cellular network loading some data, but at the same time broadband Wi-Fi is available at your location, you could have a handoff between the base station and the Wi-Fi or broadband technology. In 2G, handoff is more reliable because of specific algorithms, including one based on a hysteresis margin called HO_Margin. Again, suppose the mobile station is moving from one cell to another: initially it is connected to the old base station, and eventually it will be connected to the new one. As it moves, the handoff does not take place immediately; it happens only once a threshold is met, and that threshold level is based on HO_Margin. Until the difference between the two signal levels exceeds this margin, there is no handover. Once the handoff occurs, the mobile station connects to the new base transceiver station. Such algorithms give us a more reliable handoff, and the possibility of a call drop during handoff is minimized. Importantly, the speed improved to 64 kilobits per second from the 2.4 kilobits per second of the 1G network.
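The HO_Margin decision described above can be sketched in a few lines. This is a simplified illustration of the hysteresis idea, not a GSM implementation; the 3 dB margin value is an assumption for the example.

```python
# Hysteresis-based handoff sketch: the mobile switches to the new base
# station only when the new signal exceeds the old one by at least the
# handover margin, which prevents rapid ping-ponging at cell edges.
HO_MARGIN_DB = 3.0  # illustrative hysteresis margin in dB

def should_handoff(old_rssi_db, new_rssi_db, margin_db=HO_MARGIN_DB):
    """Hand off only if the new cell is stronger by more than the margin."""
    return new_rssi_db - old_rssi_db > margin_db

print(should_handoff(-80.0, -78.0))  # False: 2 dB gain is below the margin
print(should_handoff(-80.0, -75.0))  # True: 5 dB gain exceeds the margin
```

Without the margin, a mobile at a cell boundary, where the two signals are nearly equal, would flip back and forth between base stations, causing exactly the call drops that plagued 1G.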