Monday, January 10, 2011

Radio Signaling Channels

 

In this post I explain how radio signaling channels work. Below are the main types of channels used for radio communication in GSM.




Traffic Channels (TCH)
  • TCHF - Full rate traffic channel.
  • TCHH - Half rate traffic channel.


Common Control Channels (CCH)
Used for signaling between the BTS and the MS and to request and grant access to the network.

Broadcast Channels (BCH)

Transmitted by the BTS to the MS. This channel carries system parameters needed to identify the network, synchronize time and frequency with the network, and gain access to the network.

Standalone Dedicated Control Channels (SDCCH)

Used for call setup.

Associated Control Channels (ACCH)

Used for signaling associated with calls and call setup. An ACCH is always allocated in conjunction with a TCH or an SDCCH.
The above signaling channels can be further divided into the following logical channels:

Broadcast Channels (BCH)
     Broadcast Control Channel (BCCH)
     Frequency Correction Channel (FCCH)
     Synchronization Channel (SCH)
     Cell Broadcast Channel (CBCH)



Common Control Channels (CCCH)
     Paging Channel (PCH)
     Random Access Channel (RACH)
     Access Grant Channel (AGCH)

Standalone Dedicated Control Channel (SDCCH)

Associated Control Channels (ACCH)
     Fast Associated Control Channel (FACCH)
     Slow Associated Control Channel (SACCH)


Let's discuss each type of logical channel individually.


Broadcast Channels (BCH)

Broadcast Control Channel (BCCH)

BCCH is a downlink channel. This channel contains system parameters needed to identify the network and gain access. These parameters include the Location Area Code (LAC), the Mobile Network Code (MNC), the frequencies of neighboring cells, and access parameters.

Frequency Correction Channel (FCCH)
FCCH is a downlink channel.  This channel is used by the MS as a frequency reference. This channel contains frequency correction bursts.




Synchronization Channel (SCH)

SCH is a downlink channel. This channel is used by the MS to learn the Base Station Identity Code (BSIC) as well as the TDMA frame number (FN). This lets the MS know which TDMA frame it is on within the hyperframe.



Cell Broadcast Channel (CBCH)

CBCH is a downlink channel. It is not truly its own type of logical channel; the CBCH is used for point-to-multipoint messages. It broadcasts specific information to network subscribers, such as weather, traffic, sports, or stock reports. Messages can be of any nature depending on the service provided, but they are normally public-service announcements. The CBCH is not allocated a slot of its own; it is mapped onto an SDCCH, usually occupying the second sub-slot, and it occurs only on the downlink. The mobile does not acknowledge any of these messages.



Common Control Channels (CCCH)



Paging Channel (PCH)  

PCH is a downlink channel. This channel is used to inform the MS that it has incoming traffic. The traffic could be a voice call, SMS, or some other form of traffic.



Random Access Channel (RACH)

RACH is an uplink channel. It is used by an MS to request an initial dedicated channel from the BTS. This is the first transmission made by an MS to access the network and request radio resources; the MS sends an Access Burst on this channel in order to request access.

Access Grant Channel (AGCH)

AGCH is a downlink channel. This channel is used by a BTS to notify the MS of the assignment of an initial SDCCH for initial signaling.



Standalone Dedicated Control Channel (SDCCH)  



SDCCH is used as both uplink and downlink. This channel is used for signaling and call setup between the MS and the BTS.



Associated Control Channels (ACCH)



Fast Associated Control Channel (FACCH)

Used as both uplink and downlink. This channel is used for control requirements such as handoffs. There is no timeslot or frame allocation dedicated to the FACCH; it is a burst-stealing channel that steals bursts from a Traffic Channel (TCH).

Slow Associated Control Channel (SACCH)

Used as both uplink and downlink. This channel is a continuous-stream channel that is used for control and supervisory signals associated with the traffic channels.



Signaling Channel Mapping

Normally the first two timeslots are allocated to signaling channels.

Remember that the control-channel multiframe is composed of 51 TDMA frames.

Within a timeslot of this multiframe, the 51 TDMA frames are divided up and allocated to the various logical channels.

There are several channel combinations allowed in GSM. Some of the more common ones are:


FCCH + SCH + BCCH + CCCH
BCCH + CCCH
FCCH + SCH + BCCH + CCCH + SDCCH/4(0..3) + SACCH/C4(0..3)
SDCCH/8(0..7) + SACCH/C8(0..7)
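
To make the mapping concrete, here is a small Python sketch (illustrative only) that lays out the 51-frame control multiframe for the combined FCCH + SCH + BCCH + CCCH configuration on timeslot 0, following the commonly documented frame positions (FCCH/SCH every ten frames, BCCH on frames 2-5, the remaining frames CCCH, and the last frame idle).

# Illustrative layout of the 51-frame control-channel multiframe on timeslot 0
# for the FCCH + SCH + BCCH + CCCH combination (frame positions are the commonly
# documented ones and are shown here for illustration only).

def build_51_multiframe():
    mapping = {}
    for start in range(0, 50, 10):          # blocks of 10 frames: 0, 10, 20, 30, 40
        mapping[start] = "FCCH"             # frequency correction burst
        mapping[start + 1] = "SCH"          # synchronization burst
    for fn in range(2, 6):                  # frames 2-5 carry the BCCH
        mapping[fn] = "BCCH"
    for fn in range(51):
        mapping.setdefault(fn, "CCCH")      # remaining frames: PCH/AGCH blocks
    mapping[50] = "IDLE"                    # last frame of the multiframe is idle
    return mapping

if __name__ == "__main__":
    mf = build_51_multiframe()
    for fn in range(51):
        print(fn, mf[fn])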

Sunday, January 9, 2011

Difference between Modulation(Analog and Digital), multiplexing and multiple access


 The contents of this article are taken from different sources.

Modulation, keying, multiplexing, and multiple access are basic terms used in any type of network, and some of us often get confused by them. In this post, let's discuss what exactly these terms mean and how they differ from each other.

MODULATION:

Usually, the signal that we want to transmit, say a speech signal with a frequency of 4000 Hz, would require a very big antenna. For any signal, the frequency f is related to the wavelength L as

c = L * f          (i)

where c is the velocity of light. The antenna length is generally taken as L/2; for our case the wavelength works out to 75,000 m, so even a half-wave antenna would be about 37,500 m long, which is obviously far too big for day-to-day use. That is why we take our speech signal (the desired signal) together with another high-frequency signal known as the carrier (the carrier can be any signal, but it should have a high frequency, and in practice we use a simple continuous-wave signal), and we alter one or more parameters of this carrier signal in accordance with our desired signal. These parameters can be any one, or a combination, of the basic parameters: the amplitude, frequency, and phase of the signal. The result of this alteration is known as the modulated signal; the desired signal that we wanted to transmit is known as the modulating signal or baseband signal, and the modulated signal is also known as the bandpass signal. The whole process is known as MODULATION.
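
As a quick check of the antenna-length argument above, the following Python sketch computes the wavelength and half-wave antenna length for the 4 kHz speech signal and for a much higher carrier frequency, and then forms a basic amplitude-modulated signal. The carrier frequency, sample rate, and modulation index are arbitrary values chosen only for illustration.

import numpy as np

C = 3e8  # speed of light in m/s

def half_wave_antenna_length(freq_hz):
    """Wavelength L = c / f; a half-wave antenna is roughly L / 2."""
    wavelength = C / freq_hz
    return wavelength / 2

print(half_wave_antenna_length(4e3))    # ~37,500 m for a 4 kHz baseband signal
print(half_wave_antenna_length(900e6))  # ~0.17 m for a 900 MHz carrier

# Basic amplitude modulation: vary the carrier amplitude with the message.
fs = 48_000                                  # sample rate (arbitrary for illustration)
t = np.arange(0, 0.01, 1 / fs)               # 10 ms of signal
message = np.sin(2 * np.pi * 1e3 * t)        # 1 kHz "speech-like" tone
carrier = np.cos(2 * np.pi * 10e3 * t)       # 10 kHz carrier (illustrative)
am_signal = (1 + 0.5 * message) * carrier    # modulation index 0.5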

Two forms of modulation are generally distinguished, although they have many properties in common: If the modulating signal's amplitude varies continuously with time, it is said to be an analog signal and the modulation is referred to as analog. In the case where the modulating signal may vary its amplitude only between a finite number of values and the change may occur only at discrete moments in time, the modulating signal is said to be a digital signal and the modulation is referred to as digital or keying.

In most applications of modulation the carrier signal is a sine wave, which is completely characterized by its amplitude, its frequency, and its phase relative to some point in time. Modulating the carrier then amounts to varying one or more of these parameters in direct proportion to the amplitude of the modulating signal. In analog modulation systems, varying the amplitude, frequency, or phase of the carrier signal results in amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM), respectively. Since the frequency of a sine wave expressed in radians per second equals the derivative of its phase, frequency modulation and phase modulation are sometimes subsumed under the general term “angle modulation” or “exponential modulation.”

If the modulating signal is digital, the modulation is termed amplitude-shift keying (ASK), frequency-shift keying (FSK), or phase-shift keying (PSK), since in this case the discrete amplitudes of the digital signal can be said to shift the parameter of the carrier signal between a finite number of values. For a modulating signal with only two amplitudes, “binary” is sometimes added before these terms.
Digital modulating signals with more than two amplitudes are sometimes encoded into both the amplitude and phase of the carrier signal. For example, if the amplitude of the modulating signal can vary between four different values, each such value can be encoded as a combination of one of two amplitudes and one of two phases of the carrier signal. Quadrature amplitude modulation (QAM) is an example of such a technique.
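
To make the idea of encoding bits into the amplitude and phase of the carrier concrete, the sketch below maps 2-bit symbols onto a four-point constellation (essentially QPSK / 4-QAM); the Gray-coded mapping shown is just one common illustrative choice.

import numpy as np

# Map each 2-bit symbol to one of four carrier states (amplitude 1, four phases).
# Gray-coded QPSK constellation, shown here purely for illustration.
CONSTELLATION = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def modulate(bits):
    """Group bits in pairs and map each pair to a complex constellation point."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([CONSTELLATION[p] for p in pairs])

symbols = modulate([0, 0, 1, 1, 1, 0, 0, 1])
print(symbols)  # four complex symbols, each encoding two bits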

In certain applications of modulation the carrier signal, rather than being a sine wave, consists of a sequence of electromagnetic pulses of constant amplitude and duration which occur at regular points in time. Changing one of these parameters gives rise to three modulation schemes known as pulse-position modulation (PPM), pulse-duration modulation (PDM), and pulse-amplitude modulation (PAM), in which the time of occurrence of a pulse relative to its nominal position, the duration of a pulse, or its amplitude, respectively, is determined by the amplitude of the modulating signal.

MULTIPLEXING:

Basically there are two types of systems, time domain and frequency domain. In the time domain we transmit frames, and in the frequency domain we transmit in accordance with frequency. Now if there is more than one source of signal and we want to transmit them together, we use multiplexing. In multiplexing we mix the source signals (of course with some precautions): if we want to mix them in the time domain, our frame will contain some packets from source A and some packets from source B, and so on, depending upon the constraints of the channel and the time frame. The signals the sources generate can either be modulated signals, or we can send our multiplexed signal to the modulator and then modulate it. At the receiving end we de-multiplex the signals. In multiplexing we do not provide a dedicated resource to a single source, i.e. we do not dedicate the complete time frame to a single source (in our case the resource is the time frame). Multiplexing can also be pictured as travelling on a four-lane road that suddenly gets narrower and turns into a single lane: at that point the traffic police will allow one car from each lane to drive through the narrow single lane. This is what we call MULTIPLEXING.
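
A minimal sketch of the time-division multiplexing idea described above: packets from two sources are interleaved into a single stream and separated again at the receiving end. The packet labels and handling are made up purely for illustration.

from itertools import zip_longest

def multiplex(source_a, source_b):
    """Interleave packets from two sources into a single stream (simple TDM)."""
    stream = []
    for pkt_a, pkt_b in zip_longest(source_a, source_b):
        if pkt_a is not None:
            stream.append(("A", pkt_a))
        if pkt_b is not None:
            stream.append(("B", pkt_b))
    return stream

def demultiplex(stream):
    """At the receiving end, separate the stream back into per-source packets."""
    out = {"A": [], "B": []}
    for src, pkt in stream:
        out[src].append(pkt)
    return out

muxed = multiplex(["a1", "a2", "a3"], ["b1", "b2"])
print(muxed)                # [('A', 'a1'), ('B', 'b1'), ('A', 'a2'), ...]
print(demultiplex(muxed))   # {'A': ['a1', 'a2', 'a3'], 'B': ['b1', 'b2']}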

MULTIPLE ACCESS:

As the name suggests, multiple access means multiple users can access the channel or link. Multiple access provides dedicated resources to the user (with a time constraint), in comparison to multiplexing, which does not dedicate any resources. There are many types of multiple access schemes, like FDMA (frequency division multiple access), TDMA (time division multiple access), CDMA (code division multiple access), SDMA (space division multiple access), etc. Take the example of FDMA: the whole frequency band is divided into small frequency bands called channels, and each channel has a certain capacity to carry traffic. Say a channel can accommodate a single user at a time; then the whole frequency bandwidth can be accessed by as many users as there are channels. Mathematically, if we have a bandwidth of 200 kHz and the channel bandwidth is 50 kHz, it means we can accommodate 4 users at a time by giving a 50 kHz channel to each. This is what is called multiple access, i.e. multiple users can access the bandwidth simultaneously, and we do not require any additional hardware at the receiving end to separate the desired user from the other users as we do in multiplexing. In reality the concept of multiple access is more complicated: in GSM each channel has 200 kHz bandwidth and can accommodate 8 users at a time.
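
The FDMA arithmetic above (and the GSM figure at the end of the paragraph) can be written out in a few lines; the 10 MHz total bandwidth in the second example is an arbitrary figure chosen for illustration.

def fdma_users(total_bandwidth_khz, channel_bandwidth_khz, users_per_channel=1):
    """Number of simultaneous users = number of channels x users per channel."""
    channels = total_bandwidth_khz // channel_bandwidth_khz
    return channels * users_per_channel

# Example from the text: a 200 kHz shared band, 50 kHz channels, 1 user per channel.
print(fdma_users(200, 50))                            # -> 4

# GSM-style combination of FDMA and TDMA: 200 kHz carriers, 8 timeslots each.
print(fdma_users(10_000, 200, users_per_channel=8))   # 10 MHz of spectrum -> 400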

Orthogonal Frequency Division Multiplexing (OFDM)

The contents of this article are taken from different sources.

 OFDM

Introduction

Digital multimedia applications, as they become more and more common, create an ever increasing demand for broadband communication systems. Although the technical requirements for the related products are very high, the solutions must be cheap to implement, since we are basically talking about consumer products.
Whereas cost-efficient solutions already exist for the satellite channel and for the cable channel, for the terrestrial link (i.e. classical TV broadcasting) the requirements are so high that the 'standard' solutions are no longer feasible or lead to sub-optimal results. Orthogonal Frequency Division Multiplexing (OFDM) is a method that allows high data rates to be transmitted over extremely hostile channels at comparably low complexity. OFDM has been chosen as the transmission method for the European digital radio (DAB) and TV (DVB-T) standards. Due to its numerous advantages it is under discussion for future broadband applications such as wireless ATM as well.

OFDM and the orthogonality principle

The general problem: Data transmission over multipath channels

Unlike satellite communication, where we have a single direct path from transmitter to receiver, in the classical terrestrial broadcasting scenario we have to deal with a multipath channel: the transmitted signal arrives at the receiver over various paths of different lengths (see figure 1). Since multiple versions of the signal interfere with each other (inter-symbol interference, ISI), it becomes very hard to extract the original information.
  
Figure 1: Multipath transmission in a broadcasting application
The common representation of the multipath channel is its channel impulse response (cir), which is the signal at the receiver if a single pulse is transmitted (figure 2).
  
Figure 2: Effective length of cir
Let's assume a system transmitting discrete information in time intervals T. The critical measure concerning the multipath channel is the delay of the longest path with respect to the earliest path. A received symbol can therefore be influenced by several previous symbols, and this influence has to be estimated and compensated for in the receiver, a task which may become very challenging.

Single carrier approach

In figure 3 the general structure of a single-carrier transmission system is depicted. The transmitted symbols are pulse-formed by a transmitter filter. After passing through the multipath channel, a filter matched to the channel is used in the receiver to maximize the signal-to-noise ratio, followed by a decision device used to extract the data.
  
Figure 3: Basic structure of a single carrier system
The scenario we are dealing with in DVB-T is characterized by the following conditions:
           Transmission Rate:
           Maximum channel delay:
For the single carrier system this results in an ISI of: 
The complexity involved in removing this interference in the receiver is tremendous. In the scenario under consideration here, using such an approach will only lead to sub-optimal results. This is the main reason why the multi carrier approach is used.

Multi carrier approach

Figure 4 shows the general structure of a multicarrier system. 
  
Figure 4: Basic structure of a multicarrier system
The original data stream of rate R is multiplexed into N parallel data streams of rate R/N. Each of these data streams is modulated with a different frequency and the resulting signals are transmitted together in the same band. Correspondingly, the receiver consists of N parallel receiver paths. Due to the N-times longer duration of the transmitted symbols, the ISI for each sub-system is reduced by a factor of N.
In the case of DVB-T we have N = 8192, so the remaining ISI is only a small fraction of a symbol. Such little ISI can often be tolerated and no extra countermeasure such as an equalizer is needed. However, as far as the complexity of a receiver is concerned, a system with 8192 parallel paths still isn't feasible. This calls for a slight modification of the approach, which leads us to the concept of OFDM.

Orthogonal Frequency Division Multiplexing

In OFDM the subcarrier pulse used for transmission is chosen to be rectangular. This has the advantage that the task of pulse forming and modulation can be performed by a simple Inverse Discrete Fourier Transform (IDFT), which can be implemented very efficiently as an Inverse Fast Fourier Transform (IFFT). Accordingly, in the receiver we only need an FFT to reverse this operation. According to the theorems of the Fourier Transform, the rectangular pulse shape leads to a sin(x)/x type of spectrum for the subcarriers (see figure 5).
  
Figure 5: OFDM and the orthogonality principle
Obviously the spectra of the subcarriers are not separated but overlap. The reason why the information transmitted over the carriers can still be separated is the so-called orthogonality relation that gives the method its name. By using an IFFT for modulation, we implicitly choose the spacing of the subcarriers in such a way that at the frequency where we evaluate the received signal (indicated by arrows) all other signals are zero. In order for this orthogonality to be preserved, the following must be true:
  1. The receiver and the transmitter must be perfectly synchronized. This means they both must assume exactly the same modulation frequency and the same time-scale for transmission (which usually is not the case).
  2. The analog components, part of transmitter and receiver, must be of very high quality.
  3. There should be no multipath channel.
In particular the last point is quite a pity, since we have chosen this approach precisely to combat the multipath channel. Fortunately, there is an easy solution to this problem: each OFDM symbol is artificially prolonged by periodically repeating the 'tail' of the symbol and preceding the symbol with it (see figure 5). At the receiver this so-called guard interval is removed again. As long as this interval is longer than the maximum channel delay, all reflections of previous symbols are removed and the orthogonality is preserved. Of course this is not for free, since by preceding the useful part of the symbol with the guard interval we lose some transmission time that cannot be used for transmitting information. Taking all this into account, the signal model for OFDM transmission over a multipath channel becomes very simple: the transmitted symbol S(l,k) at time-slot l and subcarrier k is only disturbed by a factor H(k), the channel transfer function (the Fourier transform of the cir) at the subcarrier frequency, and by additive white Gaussian noise N(l,k):

R(l,k) = H(k) * S(l,k) + N(l,k)

The influence of the channel can easily be removed by dividing by H(k).
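
The whole chain described above (IFFT at the transmitter, a guard interval longer than the channel delay, then an FFT and a per-subcarrier division by H(k) at the receiver) can be sketched in a few lines of NumPy. The sizes below (64 subcarriers, a 16-sample guard interval, a 3-tap channel) are arbitrary illustrative values, not DVB-T parameters.

import numpy as np

N, GUARD = 64, 16                     # subcarriers and guard-interval length (illustrative)
rng = np.random.default_rng(0)

# Random QPSK data on the N subcarriers.
data = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# Transmitter: the IFFT does pulse forming + modulation; prepend the cyclic prefix.
time_symbol = np.fft.ifft(data)
tx = np.concatenate([time_symbol[-GUARD:], time_symbol])

# Multipath channel (shorter than the guard interval) plus a little noise.
cir = np.array([1.0, 0.4 + 0.2j, 0.15])
rx = np.convolve(tx, cir)[: len(tx)] + 0.01 * rng.standard_normal(len(tx))

# Receiver: remove the guard interval, FFT, divide by the channel transfer function.
H = np.fft.fft(cir, N)                 # channel transfer function at the subcarriers
equalized = np.fft.fft(rx[GUARD:]) / H

print(np.max(np.abs(equalized - data)))   # small residual error caused by the noise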
As far as the analog components are concerned, experience has shown that in the broadcasting applications under consideration here they are not that critical. What remains is to establish 'perfect' synchronization, which requires a very sophisticated receiver. The general structure of such a receiver, which we have developed for the DVB-T application, is described below.

An OFDM receiver for DVB-T

Tasks of the inner receiver and receiver structure

As mentioned before in order for a digital transmission system to work, receiver and transmitter have to be synchronized. This involves the following tasks:
  • Timing synchronization: Since it is unknown to the receiver at which exact (absolute) time instant the symbol was transmitted and how long the dispersion of the channel is, one essential task is to find the 'beginning' of a received OFDM symbol. Thus the time scales of transmitter and receiver are synchronized and the removal of the guard interval can be done with the required accuracy.
  • Frequency synchronization: The signal is usually not transmitted in baseband but modulated onto a radio carrier at a frequency assigned by the standard. Though this frequency is known to the receiver, the tolerance of the RF components usually applied is so large that there will be a frequency deviation. In many cases this deviation will be too large for reliable data transmission. It therefore must be estimated and compensated at the receiver.
  • Sampling-clock synchronization: The signal produced by the IFFT in the transmitter is converted into an analog signal assuming a certain span of time between two values. At the receiver the down-converted RF signal is sampled in order to obtain a discrete-time signal for further (digital) processing. The sampling times assumed in the receiver must match very accurately in order to avoid a degradation of the performance. A possible deviation between transmitter and receiver must again be estimated and compensated.
  • Channel estimation: If a coherent modulation scheme is used (which need not necessarily be the case), the channel transfer function in the signal model above must be estimated and compensated.
A receiver structure that allows all the required parameters to be estimated and compensated is depicted in figure 6.
  
Figure 6: Receiver structure for a DVB-T receiver
In addition to the elementary tasks also found in single-carrier receivers, two further tasks can be identified for the receiver under consideration here:
  1. TPS detection: So-called TPS (transmission parameter signaling) data is provided in DVB-T to inform the receiver about the modulation and coding scheme used. This information is provided via selected subcarriers that are modulated with robust differential BPSK.
  2. CPE detection (and correction): The common phase error (CPE) is a phenomenon that results from imperfections of the oscillators used for modulation and demodulation. Instead of providing a stable frequency, real oscillators tend to provide a frequency that is slowly changing in time. This change in time leads to an additional modulation of the OFDM signal which in some cases must be estimated and compensated. For the constellations used in DVB-T it can be shown that, due to other reasons, the quality of the oscillators must be so high that this effect can be neglected.
We will not go into detail as far as the implementation of the individual components is concerned. The component that proves most critical in the receiver is the channel estimation unit, so we will discuss it in a little more detail.

Channel estimation for OFDM

The method of channel estimation implied by the frame structure of DVB-T is channel estimation via interpolation. The basic principle is depicted in figure 7. 
  
Figure 7: Principle of channel estimation via interpolation
Embedded into the OFDM data stream are training symbols (depicted as arrows) that can be used to obtain samples of the channel transfer function.
The values of the channel in between the samples can then be obtained via an interpolation procedure. Generally we have a two-dimensional interpolation problem. Fortunately the problem can be separated into an interpolation in time and an interpolation in frequency. The most critical task is the design of the interpolation filters used. Both interpolations must obey the sampling theorem:
  • The interpolation in time is bandlimited by the time-variant behavior of the channel. This is caused by movement of the receiver and by uncompensated synchronization errors. The maximum allowable bandwidth of these disturbances is determined by the number of training symbols in one subcarrier.
  • Due to the duality of time and frequency, the interpolation in frequency is bandlimited by the length of the cir. The maximum allowable cir length is thus determined not only by the length of the guard interval but also by the number of training symbols in one OFDM symbol. If we use fixed filters for the implementation, where the maximum dispersion to be assumed is given by the length of the guard interval, this implies that for short guard intervals the channel can be estimated with a higher accuracy than for a larger guard interval.
For the interpolation in frequency an interpolation filter optimized according to Wiener filter theory is used. For the interpolation in time a linear interpolation is sufficient.
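
A minimal sketch of channel estimation via interpolation in frequency: known training (pilot) symbols on every few subcarriers give samples of the channel transfer function, and the values in between are interpolated. Plain linear interpolation is used here in place of the Wiener-optimized filter mentioned above, and all sizes are illustrative.

import numpy as np

N, PILOT_SPACING = 64, 4
pilot_idx = np.arange(0, N, PILOT_SPACING)
pilot_value = 1 + 0j                           # known training symbol

# "True" channel transfer function (from some short channel impulse response).
H = np.fft.fft(np.array([1.0, 0.5, 0.2j]), N)

# Received pilots = channel x known pilot symbol (noise omitted for clarity).
rx_pilots = H[pilot_idx] * pilot_value

# Channel samples at the pilot positions ...
H_at_pilots = rx_pilots / pilot_value
# ... interpolated (linearly, real and imaginary parts separately) to all subcarriers.
H_est = (np.interp(np.arange(N), pilot_idx, H_at_pilots.real)
         + 1j * np.interp(np.arange(N), pilot_idx, H_at_pilots.imag))

print(np.max(np.abs(H_est - H)))   # interpolation error, small for a short cir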

 Performance of the complete receiver

Figure 8 shows the results achievable with the channel-estimator described in the previous section. The application is a DVB-T receiver according to the European standard operating in 8k mode.
  
Figure 8: Achievable performance for different channel estimators
As we can see, the achievable system performance depends very much on the quality of the channel estimate. Since this quality is higher for short cirs, the performance of the receiver is correspondingly better. The loss with respect to the performance with ideal channel estimation ranges from about 0.5 dB for the smallest guard interval up to 1.6 dB for the largest guard interval. Also included are the results for a dynamic channel. Using linear interpolation in time does not further degrade the system; however, if we try to do without any interpolation in time, the additional loss in performance is significant.

Key Terms and Details for Reference

DAB
Digital Audio Broadcasting (DAB) is a digital radio technology for broadcasting radio stations, used in several countries, particularly in Europe. As of 2006, approximately 1,000 stations worldwide broadcast in the DAB format.
The DAB standard was initiated as a European research project in the 1980s, and the BBC launched the first DAB digital radio in 1995. DAB receivers have been available in many countries since the end of the nineties. DAB may offer more radio programmes over a specific spectrum than analogue FM radio. DAB is more robust with regard to noise and multipath fading for mobile listening, since DAB reception quality first degrades rapidly when the signal strength falls below a critical threshold, whereas FM reception quality degrades slowly with the decreasing signal.
An "informal listening test" by Professor Sverre Holm has shown that for stationary listening the audio quality on DAB is lower than FM stereo, due to most stations using a bit rate of 128 kbit/s or less, with the MP2 audio codec, which requires 160 kbit/s to achieve perceived FM quality. 128 kbit/s gives better dynamic range or signal-to-noise ratio than FM radio, but a more smeared stereo image, and an upper cutoff frequency of 14 kHz, corresponding to 15 kHz of FM radio. However, "CD sound quality" with MP2 is possible "with 256..192 kbps".

DAB+
An upgraded version of the system was released in February 2007, which is called DAB+. DAB is not forward compatible with DAB+, which means that DAB-only receivers will not be able to receive DAB+ broadcasts. DAB+ is approximately twice as efficient as DAB due to the adoption of the AAC+ audio codec, and DAB+ can provide high quality audio with as low as 64kbit/s. Reception quality will also be more robust on DAB+ than on DAB due to the addition of Reed-Solomon error correction coding.
More than 20 countries provide DAB transmissions, and several countries, such as Australia, Italy, Malta and Switzerland, have started transmitting DAB+ stations. See Countries using DAB/DMB. However, DAB radio has still not replaced the old FM system in popularity.

DVB-T
DVB-T is an abbreviation for Digital Video Broadcasting — Terrestrial; it is the DVB European-based consortium standard for the broadcast transmission of digital terrestrial television that was first published in 1997 and first broadcast in the UK in 1998. This system transmits compressed digital audio, video and other data in an MPEG transport stream, using coded orthogonal frequency-division multiplexing (COFDM or OFDM) modulation.
FFT
A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. There are many distinct FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory.
A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields (see the discrete Fourier transform entry below for properties and applications of the transform) but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing a DFT of N points in the naive way, using the definition, takes O(N²) arithmetical operations, while an FFT can compute the same result in only O(N log N) operations. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions—in practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N / log(N). This huge improvement made many DFT-based algorithms practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers.
The most well known FFT algorithms depend upon the factorization of N, but (contrary to popular misconception) there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^(-2πi/N) is a primitive Nth root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms.
Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
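
To make the O(N²) versus O(N log N) point concrete, the sketch below evaluates the DFT directly from its definition and with NumPy's FFT, and checks that the two agree (the input length is arbitrary).

import numpy as np

def naive_dft(x):
    """Direct evaluation of the DFT definition: O(N^2) operations."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N matrix of twiddle factors
    return W @ x

x = np.random.default_rng(1).standard_normal(256)
X_slow = naive_dft(x)
X_fast = np.fft.fft(x)                      # O(N log N)

print(np.allclose(X_slow, X_fast))          # True: same transform, computed faster
print(np.allclose(np.fft.ifft(X_fast), x))  # the inverse recovers the original sequence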
DFT

The discrete Fourier transform (DFT) is a specific kind of Fourier transform, used in Fourier analysis. It transforms one function into another, which is called the frequency domain representation, or simply the DFT, of the original function (which is often a function in the time domain). But the DFT requires an input function that is discrete and whose non-zero values have a limited (finite) duration. Such inputs are often created by sampling a continuous function, like a person's voice. Unlike the discrete-time Fourier transform (DTFT), it only evaluates enough frequency components to reconstruct the finite segment that was analyzed. Using the DFT implies that the finite segment that is analyzed is one period of an infinitely extended periodic signal; if this is not actually true, a window function has to be used to reduce the artifacts in the spectrum. For the same reason, the inverse DFT cannot reproduce the entire time domain, unless the input happens to be periodic (forever). Therefore it is often said that the DFT is a transform for Fourier analysis of finite-domain discrete-time functions. The sinusoidal basis functions of the decomposition have the same properties.
The input to the DFT is a finite sequence of real or complex numbers (with more abstract generalizations discussed below), making the DFT ideal for processing information stored in computers. In particular, the DFT is widely employed in signal processing and related fields to analyze the frequencies contained in a sampled signal, to solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers. A key enabling factor for these applications is the fact that the DFT can be computed efficiently in practice using a fast Fourier transform (FFT) algorithm.
FFT algorithms are so commonly employed to compute DFTs that the term "FFT" is often used to mean "DFT" in colloquial settings. Formally, there is a clear distinction: "DFT" refers to a mathematical transformation or function, regardless of how it is computed, whereas "FFT" refers to a specific family of algorithms for computing DFTs. The terminology is further blurred by the (now rare) synonym finite Fourier transform for the DFT, which apparently predates the term "fast Fourier transform" (Cooley et al., 1969) but has the same initialism.

ISI (intersymbol interference)

Intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon, as the previous symbols have a similar effect to noise, making the communication less reliable. ISI is usually caused by multipath propagation or the inherent non-linear frequency response of a channel causing successive symbols to "blur" together. The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to fight intersymbol interference include adaptive equalization and error correcting codes.

Long Term Evolution (LTE)

The contents of this article are taken from different sources.

LTE:

LTE (both radio and core network evolution) is now on the market. Release 8 was frozen in December 2008 and this has been the basis for the first wave of LTE equipment. LTE specifications are very stable, with the added benefit of small enhancements being introduced in Release 9, a Release that will be functionally frozen in December 2009.
Motivation for 3GPP Release 8 - The LTE Release
  • Need to ensure the continuity of competitiveness of the 3G system for the future
  • User demand for higher data rates and quality of service
  • Packet Switch optimised system
  • Continued demand for cost reduction (CAPEX and OPEX)
  • Low complexity
  • Avoid unnecessary fragmentation of technologies for paired and unpaired band operation
LTE Release 8 Key Features
• High spectral efficiency
     - OFDM in Downlink: robust against multipath interference and high affinity to advanced techniques such as frequency-domain channel-dependent scheduling and MIMO
     - DFTS-OFDM ("Single-Carrier FDMA") in Uplink: low PAPR, user orthogonality in the frequency domain
     - Multi-antenna application
• Very low latency
     - Short setup time and short transfer delay
     - Short HO latency and interruption time; short TTI, RRC procedures, simple RRC states
• Support of variable bandwidth (see the sketch after this list)
     - 1.4, 3, 5, 10, 15 and 20 MHz
• Simple protocol architecture
     - Shared-channel based
     - PS mode only, with VoIP capability
• Simple architecture
     - eNodeB as the only E-UTRAN node
     - Smaller number of RAN interfaces: eNodeB ↔ MME/SAE-Gateway (S1), eNodeB ↔ eNodeB (X2)
• Compatibility and inter-working with earlier 3GPP Releases
• Inter-working with other systems, e.g. cdma2000
• FDD and TDD within a single radio access technology
• Efficient Multicast/Broadcast
     - Single-frequency network by OFDM
• Support of Self-Organising Network (SON) operation
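
As a rough illustration of what the variable channel bandwidths in the feature list mean in practice, the sketch below maps each Release 8 bandwidth to the commonly quoted number of resource blocks (12 subcarriers of 15 kHz each per resource block). The mapping is included for orientation only.

# Commonly quoted LTE Release 8 channel bandwidths and resource-block counts
# (12 subcarriers x 15 kHz = 180 kHz per resource block); for orientation only.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

SUBCARRIER_SPACING_KHZ = 15
SUBCARRIERS_PER_RB = 12

for bw_mhz, n_rb in RESOURCE_BLOCKS.items():
    occupied_mhz = n_rb * SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_KHZ / 1000
    print(f"{bw_mhz} MHz channel -> {n_rb} resource blocks "
          f"(~{occupied_mhz:.2f} MHz occupied)")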
LTE Release 8 Major Parameters



LTE-Release 8 User Equipment Categories



LTE Historical Information

Initiated in 2004, the Long Term Evolution (LTE) project focused on enhancing the Universal Terrestrial Radio Access (UTRA) and optimizing 3GPP’s radio access architecture.
Targets were an average user throughput of three to four times the Release 6 HSDPA level in the Downlink (with a peak rate of 100 Mbps), and two to three times the HSUPA level in the Uplink (peak 50 Mbps).
In 2007, the LTE of the 3rd generation radio access technology – "E-UTRA" – progressed from the feasibility study stage to the first issue of approved Technical Specifications. By the end of 2008, the specifications were sufficiently stable for commercial implementation.
Orthogonal Frequency Division Multiplexing (OFDM) was selected for the Downlink and Single Carrier-Frequency Division Multiple Access (SC-FDMA) for the Uplink. The Downlink supports the data modulation schemes QPSK, 16QAM, and 64QAM, and the Uplink BPSK, QPSK, 8PSK and 16QAM.
LTE's E-UTRA uses a number of defined channel bandwidths between 1.25 and 20 MHz (contrasted with UTRA's fixed 5 MHz channels).

4 x Increased Spectral Efficiency, 10 x Users Per Cell

Spectral efficiency is increased by up to four-fold compared with UTRA, and improvements in architecture and signalling reduce round-trip latency. Multiple Input / Multiple Output (MIMO) antenna technology should enable 10 times as many users per cell as 3GPP's original W-CDMA radio access technology.
To suit as many frequency band allocation arrangements as possible, both paired (FDD) and unpaired (TDD) band operation is supported. LTE can co-exist with earlier 3GPP radio technologies, even in adjacent channels, and calls can be handed over to and from all 3GPP’s previous radio access technologies.
In the same time frame as the development of LTE, 3GPP’s core network has been undergoing System Architecture Evolution (SAE), optimizing it for packet mode and in particular for the IP-Multimedia Subsystem (IMS) which supports all access technologies.

Saturday, January 8, 2011

Evolution of the Mobile Technology

The contents of this article are taken from different sources.


The first radiotelephone service was introduced in the US at the end of the 1940s, and was meant to connect mobile users in cars to the public fixed network. In the 1960s, a new system launched by Bell Systems, called the Improved Mobile Telephone Service (IMTS), brought many improvements like direct dialing and higher bandwidth. The first analog cellular systems were based on IMTS and developed in the late 1960s and early 1970s. The systems were "cellular" because coverage areas were split into smaller areas or "cells", each of which is served by a low-power transmitter and receiver.

First generation:-
1G analog systems for mobile communications saw two key improvements during the 1970s: the invention of the microprocessor and the digitization of the control link between the mobile phone and the cell site. AMPS (Advanced Mobile Phone System), first launched in the US, was a 1G mobile system. It is based on FDMA technology, which allows users to make voice calls within one country.

Second generation:-

2G digital cellular systems were first developed at the end of the 1980s. These systems digitized not only the control link but also the voice signal. The new system provided better quality and higher capacity at lower cost to consumers. GSM (Global system for mobile communication) was the first commercially operated digital cellular system which is based on TDMA.

Third generation:-

3G systems promise faster communications services, including voice, fax and Internet, anytime and anywhere with seamless global roaming. ITU's IMT-2000 global standard for 3G has opened the way to enabling innovative applications and services (e.g. multimedia entertainment, infotainment and location-based services, among others). The first 3G network was deployed in Japan in 2001. 2.5G networks, such as GPRS (General Packet Radio Service), are already available in some parts of Europe.
3G technology supports 144 kbps at high-speed movement (e.g. in vehicles), 384 kbps for pedestrian use (e.g. on campus), and 2 Mbps for stationary or in-building use.

Fourth generation:-

At present the download speed for mobile data is limited to 9.6 kbit/s, which is about 6 times slower than an ISDN (Integrated Services Digital Network) fixed-line connection. Recently, with 504i handsets, the download data rate was increased 3-fold to 28.8 kbps. However, in actual use the data rates are usually slower, especially in crowded areas, or when the network is "congested". For third generation mobile (3G, FOMA) data rates are 384 kbps (download) maximum, typically around 200 kbps, and 64 kbps upload since spring 2001. Fourth generation (4G) mobile communications will have higher data transmission rates than 3G. 4G mobile data transmission rates are planned to be up to 20 megabits per second.

Before understanding 4G, we must know what 3G is. The 3G initiative came from device manufacturers, not from operators. In 1996 the development was initiated by Nippon Telephone & Telegraph (NTT) and Ericsson; in 1997 the Telecommunications Industry Association (TIA) in the USA chose CDMA as a technology for 3G; in 1998 the European Telecommunications Standards Institute (ETSI) did the same thing; and finally, in 1998 wideband CDMA (W-CDMA) and cdma2000 were adopted for the Universal Mobile Telecommunications System (UMTS).

W-CDMA and CDMA 2000 are the two major proposals for 3G. In this form of CDMA the information-bearing signal is multiplied with another faster-rate, wider-bandwidth digital signal that may carry a unique orthogonal code. W-CDMA uses dedicated time division multiplexing (TDM) whereby channel estimation information is collected from another signal stream. CDMA 2000 uses common code division multiplexing (CDM) whereby channel estimation information can be collected with the signal stream.

Access Technologies (FDMA, TDMA, CDMA) -

FDMA:

Frequency Division Multiple Access (FDMA) is the most common analog system. It is a technique whereby spectrum is divided up into frequencies and then assigned to users. With FDMA, only one subscriber at any given time is assigned to a channel. The channel therefore is closed to other conversations until the initial call is finished, or until it is handed-off to a different channel. A "full-duplex" FDMA transmission requires two channels, one for transmitting and the other for receiving. FDMA has been used for first generation analog systems.

TDMA:

Time Division Multiple Access (TDMA) improves spectrum capacity by splitting each frequency into time slots. TDMA allows each user to access the entire radio frequency channel for the short period of a call. Other users share this same frequency channel at different time slots. The base station continually switches from user to user on the channel. TDMA is the dominant technology for the second generation mobile cellular networks.

CDMA:

Code Division Multiple Access is based on "spread" spectrum technology. Since it is suitable for encrypted transmissions, it has long been used for military purposes. CDMA increases spectrum capacity by allowing all users to occupy all channels at the same time. Transmissions are spread over the whole radio band, and each voice or data call is assigned a unique code to differentiate it from the other calls carried over the same spectrum. CDMA allows for a "soft hand-off", which means that terminals can communicate with several base stations at the same time.
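
The 'unique code per call' idea can be illustrated with orthogonal Walsh codes: each user's bits are spread by its own code, the spread signals share the channel at the same time, and correlating with a particular user's code recovers that user's bits. The code length and bit patterns below are arbitrary illustrative choices.

import numpy as np

# Length-4 Walsh codes: mutually orthogonal spreading sequences.
WALSH = np.array([
    [ 1,  1,  1,  1],
    [ 1, -1,  1, -1],
    [ 1,  1, -1, -1],
    [ 1, -1, -1,  1],
])

def spread(bits, code):
    """Spread each data bit (+/-1) over the chips of the user's code."""
    return np.concatenate([b * code for b in bits])

def despread(signal, code):
    """Correlate the shared signal with one user's code to recover its bits."""
    chips = signal.reshape(-1, len(code))
    return np.sign(chips @ code)

user0_bits = np.array([ 1, -1,  1])
user1_bits = np.array([-1, -1,  1])

# Both users transmit over the same band at the same time.
channel = spread(user0_bits, WALSH[1]) + spread(user1_bits, WALSH[2])

print(despread(channel, WALSH[1]))   # [ 1. -1.  1.]  -> user 0's bits
print(despread(channel, WALSH[2]))   # [-1. -1.  1.]  -> user 1's bits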

Beyond 3G

In the field of mobile communication services, the 4G mobile services are the advanced version of the 3G mobile communication services. The 4G mobile communication services are expected to provide broadband, large capacity, high speed data transmission, providing users with high quality color video images, 3D graphic animation games, audio services in 5.1 channels. We have been researching the vision of 4G mobile communication systems, services, and architectures. We also have been developing the terminal protocol technology for high capacity, high speed packet services, public software platform technology that enables downloading application programs, multimode radio access platform technology, and high quality media coding technology over mobile networks.

Reasons To Have 4G -

1. Support interactive multimedia services: teleconferencing, wireless Internet, etc.
2. Wider bandwidths, and higher bit rates.
3. Global mobility and service portability.
4. Low cost.
5. Scalability of mobile networks.


Wireless communications have become very pervasive. The number of mobile phones and wireless Internet users has increased significantly in recent years. Traditionally, first-generation wireless networks were targeted primarily at voice and data communications occurring at low data rates.
Recently, we have seen the evolution of second- and third-generation wireless systems that incorporate the features provided by broadband. In addition to supporting mobility, broadband also aims to support multimedia traffic, with quality of service (QoS) assurance. We have also seen the presence of different air interface technologies, and the need for interoperability has increasingly been recognized by the research community.
Wireless networks include local, metropolitan, wide, and global areas. Below, we will cover the evolution of such networks, their basic principles of operation, and their architectures.

Evolution of Mobile Cellular Networks

First-Generation Mobile Systems

The first generation of analog cellular systems included the Advanced Mobile Phone System (AMPS), which was made available in 1983. A total of 40 MHz of spectrum was allocated from the 800 MHz band by the Federal Communications Commission (FCC) for AMPS. It was first deployed in Chicago, with a service area of 2100 square miles. AMPS offered 832 channels, with a data rate of 10 kbps. Although omnidirectional antennas were used in the earlier AMPS implementation, it was realized that using directional antennas would yield better cell reuse. In fact, the smallest reuse factor that would fulfill the 18 dB signal-to-interference ratio (SIR) requirement using 120-degree directional antennas was found to be 7. Hence, a 7-cell reuse pattern was adopted for AMPS. Transmissions from the base stations to mobiles occur over the forward channel using frequencies between 869-894 MHz. The reverse channel is used for transmissions from mobiles to base stations, using frequencies between 824-849 MHz.
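
The reuse-factor argument can be sanity-checked with the standard first-order co-channel interference estimate S/I ≈ (√(3N))^n / i0, where N is the cluster size, n the path-loss exponent, and i0 the number of co-channel interferers (6 for omnidirectional antennas, roughly 2 with 120-degree sectoring). The sketch below assumes a typical path-loss exponent of 4.

import math

def sir_db(cluster_size, path_loss_exponent=4, interferers=6):
    """First-order co-channel S/I estimate: (sqrt(3N))^n / i0, in dB."""
    sir = (math.sqrt(3 * cluster_size) ** path_loss_exponent) / interferers
    return 10 * math.log10(sir)

print(sir_db(7))                 # ~18.7 dB with 6 omnidirectional interferers
print(sir_db(7, interferers=2))  # ~23.4 dB with 120-degree sectoring (2 interferers)
print(sir_db(4))                 # ~13.8 dB -> a cluster size of 4 misses the 18 dB target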
In Europe, TACS (Total Access Communications System) was introduced with 1000 channels and a data rate of 8 kbps. AMPS and TACS use the frequency modulation (FM) technique for radio transmission. Traffic is multiplexed onto an FDMA (frequency division multiple access) system. In Scandinavian countries, the Nordic Mobile Telephone is used.

Second-Generation Mobile Systems

Compared to first-generation systems, second-generation (2G) systems use digital multiple access technology, such as TDMA (time division multiple access) and CDMA (code division multiple access). Global System for Mobile Communications, or GSM, uses TDMA technology to support multiple users.
Examples of second-generation systems are GSM, Cordless Telephone (CT2), Personal Access Communications Systems (PACS), and Digital European Cordless Telephone (DECT). A new design was introduced into the mobile switching center of second-generation systems. In particular, the use of base station controllers (BSCs) lightens the load placed on the MSC (mobile switching center) found in first-generation systems. This design allows the interface between the MSC and BSC to be standardized. Hence, considerable attention was devoted to interoperability and standardization in second-generation systems so that carriers could employ different manufacturers for the MSC and BSCs.
In addition to enhancements in MSC design, the mobile-assisted handoff mechanism was introduced. By sensing signals received from adjacent base stations, a mobile unit can trigger a handoff by performing explicit signalling with the network.
Second generation protocols use digital encoding and include GSM, D-AMPS (TDMA) and CDMA (IS-95). 2G networks are in current use around the world. The protocols behind 2G networks support voice and some limited data communications, such as Fax and short messaging service (SMS), and most 2G protocols offer different levels of encryption, and security. While first-generation systems support primarily voice traffic, second-generation systems support voice, paging, data, and fax services.

2.5G Mobile Systems

The move into the 2.5G world will begin with General Packet Radio Service (GPRS). GPRS is a radio technology for GSM networks that adds packet-switching protocols, shorter setup time for ISP connections, and the possibility to charge by the amount of data sent, rather than connection time. Packet switching is a technique whereby the information (voice or data) to be sent is broken up into packets, of at most a few Kbytes each, which are then routed by the network between different destinations based on addressing data within each packet. Use of network resources is optimized as the resources are needed only during the handling of each packet.
The next generation of data heading towards third generation and personal multimedia environments builds on GPRS and is known as Enhanced Data rate for GSM Evolution (EDGE). EDGE will also be a significant contributor in 2.5G. It will allow GSM operators to use existing GSM radio bands to offer wireless multimedia IP-based services and applications at theoretical maximum speeds of 384 kbps with a bit-rate of 48 kbps per timeslot and up to 69.2 kbps per timeslot in good radio conditions. EDGE will let operators function without a 3G license and compete with 3G networks offering similar data services. Implementing EDGE will be relatively painless and will require relatively small changes to network hardware and software as it uses the same TDMA (Time Division Multiple Access) frame structure, logic channel and 200 kHz carrier bandwidth as today's GSM networks. As EDGE progresses to coexistence with 3G WCDMA, data rates of up to ATM-like speeds of 2 Mbps could be available.
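
The EDGE figures quoted above follow from simple per-timeslot arithmetic, since a GSM carrier has 8 timeslots; a quick sketch (the per-timeslot rates simply restate the numbers in the text):

TIMESLOTS_PER_CARRIER = 8

def aggregate_rate_kbps(rate_per_timeslot_kbps, timeslots=TIMESLOTS_PER_CARRIER):
    """Aggregate rate if every timeslot of the carrier runs at the given rate."""
    return rate_per_timeslot_kbps * timeslots

print(aggregate_rate_kbps(48))    # 384 kbps: the theoretical EDGE maximum quoted above
print(aggregate_rate_kbps(69.2))  # ~553.6 kbps if all 8 slots ran at the best-case rate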
GPRS will support flexible data transmission rates as well as continuous connection to the network. GPRS is the most significant step towards 3G.

Third-Generation Mobile Systems

Third-generation mobile systems are faced with several challenging technical issues, such as the provision of seamless services across both wired and wireless networks and universal mobility. In Europe, there are three evolving networks under investigation: (a) UMTS (Universal Mobile Telecommunications Systems), (b) MBS (Mobile Broadband Systems), and (c) WLAN (Wireless Local Area Networks).
The use of hierarchical cell structures is proposed for IMT2000. The overlaying of cell structures allows different rates of mobility to be serviced and handled by different cells. Advanced multiple access techniques are also being investigated, and two promising proposals have evolved, one based on wideband CDMA and another that uses a hybrid TDMA/CDMA/FDMA approach.
Figure 1. The architecture of a cellular wireless network based on ATM.


Global System for Mobile Communications (GSM)

GSM is commonly referred to as the second-generation mobile cellular system. GSM has its own set of communication protocols, interfaces, and functional entities. It is capable of supporting roaming, and carrying speech and data traffic.
The GSM network architecture (see Figure 2) comprises several base transceiver stations (BTS), which are clustered and connected to a base station controller (BSC). Several BSCs are then connected to an MSC. The MSC has access to several databases, including the visiting location register (VLR), home location register (HLR), and equipment identity register (EIR). It is responsible for establishing, managing, and clearing connections, as well as routing calls to the proper radio cell. It supports call rerouting at times of mobility. A gateway MSC provides an interface to the public telephone network.
The HLR provides identity information about a GSM user, its home subscription base, and service profiles. It also keeps track of mobile users registered within its home area that may have roamed to other areas. The VLR stores information about subscribers visiting a particular area within the control of a specific MSC.

Table 1.1. The IMSI in GSM

Mobile Country Code | Mobile Network Code | Mobile Subscriber Identification Code

The authentication center (AuC) is used to protect subscribers from unauthorized access. It checks and authenticates a user when the device powers up and registers with the network. The EIR is used for equipment registration so that the hardware in use can be identified. Hence if a device is stolen, service access can be denied by the network. Also, if a device has not been previously approved by the network vendor (perhaps subject to the payment of fees by the user), EIR checks can prevent the device from accessing the network.
In GSM, each mobile device is uniquely identified by an IMSI (international mobile subscriber identity). It identifies the country in which the mobile system resides, the mobile network, and the mobile subscriber. The IMSI is stored on a subscriber identity module (SIM), which can exist in the form of a plug-in module or an insertable card. With a SIM, a user can practically use any mobile phone to access network services.
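
As a small illustration of the IMSI structure summarized in Table 1.1 (a 3-digit mobile country code, a 2- or 3-digit mobile network code, and the mobile subscriber identification code), here is a hypothetical parsing sketch; the example IMSI and the assumed MNC length are made up for illustration.

def parse_imsi(imsi, mnc_digits=2):
    """Split an IMSI string into MCC / MNC / MSIN.

    The MCC is always 3 digits; the MNC is 2 or 3 digits depending on the
    network, so the caller has to say which applies (assumed to be 2 here).
    """
    if not (imsi.isdigit() and len(imsi) <= 15):
        raise ValueError("an IMSI is at most 15 decimal digits")
    mcc = imsi[:3]
    mnc = imsi[3:3 + mnc_digits]
    msin = imsi[3 + mnc_digits:]
    return {"mcc": mcc, "mnc": mnc, "msin": msin}

# Hypothetical example IMSI, for illustration only.
print(parse_imsi("262024203552334"))
# {'mcc': '262', 'mnc': '02', 'msin': '4203552334'}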


Figure 2. The network architecture of GSM.

General Packet Radio Service (GPRS)

The GSM general packet radio service (GPRS) is a data overlay over the voice-based GSM cellular network. It consists of a packet wireless access network and an IP-based backbone. GPRS is designed to transmit small amounts of frequently sent data or large amounts of infrequently sent data. GPRS has been seen as an evolution toward UMTS (Universal Mobile Telecommunications Systems). Users can access IP services via GPRS/GSM networks.
GPRS services include both point-to-point and point-to-multipoint communications. The network architecture of GPRS is shown in Figure 3. Gateway GSN (GGSN) nodes provide interworking functions with external packet-switched networks. A serving GPRS support node (SGSN), on the other hand, keeps track of an individual mobile station's location and provides security and access control. As shown in Figure 3, base station subsystems (BSSs) are connected to SGSNs, which are subsequently connected to the backbone network. SGSNs interact with MSCs and various databases to support mobility management functions. The BSSs provide wireless access through a TDMA MAC protocol. Both the mobile station (MS) and the SGSNs execute the SNDCP (Subnetwork-Dependent Convergence Protocol), which is responsible for compression/decompression and segmentation and reassembly of traffic. The SGSNs and GGSNs execute the GTP (GPRS Tunnelling Protocol), which allows the forwarding of packets between an external public data network (PDN) and a mobile unit (MU). It also allows multiprotocol packets to be tunneled through the GPRS backbone.
Figure 3. Architecture of GSM general packet radio service.


Personal Communications Services (PCSs)

The FCC defines PCS as "Radio communications that encompass mobile and ancillary fixed communication that provides services to individuals and business and can be integrated with a variety of competing networks." However, the Telecommunications Industry Association (TIA) has a different definition for PCS:
A mobile radio voice and data service for the provision of unit-to-unit communications, which can have the capability of public switched telephone network access, and which is based on microcellular or other technologies that enhance spectrum capacity to the point where it will offer the potential of essentially ubiquitous and unlimited, untethered communications.
PCS can also be defined in a broader sense as a set of capabilities that allows some combination of personal mobility and service management. In short, PCS is a commonly used term that defines the next generation of advanced wireless networks providing personalized communication services. In Europe, the term "personal communication networks (PCNs)" is used instead of PCS.
The basic requirements for a PCS are:
  • Users must be able to make calls wherever they are
  • Offered services must be reliable and of good quality
  • Provision of multiple services such as voice, fax, video, paging, etc., must be available.
Unlike AMPS, PCS is aimed at the personal consumer industry for mass consumption. The FCC's view of PCS is one where the public switched telephone network (PSTN) is connected to a variety of other networks, such as CATV (cable television), AMPS cellular systems, etc.

1.5 Wireless LANs (WLANS)

Wireless LAN technology has evolved as an extension to existing wired networks. Local area networks (LANs) are mostly based on Ethernet media access technology and consist of an interconnection of hosts and routers. LANs are restricted by distance and are commonly found in offices and inside buildings. Interconnection using wires can be expensive when it comes to relocating servers, printers, and hosts.
More and more wireless LANs (WLANs) are now being deployed in offices. Most WLANs are compatible with Ethernet, so no protocol conversion is needed. The IEEE has standardized the 802.11 protocols to support WLAN media access. A radio base station (access point) can be installed in a network to serve multiple wireless hosts over a range of roughly 100-200 m. A host (for example, a laptop) can be wirelessly enabled by installing a wireless adapter and the appropriate communication driver. A user can perform all network-related functions as long as he or she is within the coverage area of the radio base station, which allows work to continue beyond the user's office space.
As shown in Figure 4, several overlapping radio cells can be used to provide wireless connectivity over a desired region. If a wireless host migrates from one radio cell to another within the same subnet, no network-layer handoff is required. The move is handled by bridging, since the host's packets are still delivered over the same Ethernet backbone.
Figure 4. A WLAN with an Ethernet wired backbone.
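Here is a rough sketch of why intra-subnet roaming needs no network-layer handoff, assuming a hypothetical learning bridge on the Ethernet backbone; the class and method names are invented for illustration.

# Hypothetical model of intra-subnet WLAN roaming handled by bridge learning.
class BackboneBridge:
    """Learning bridge sitting on the wired Ethernet backbone."""
    def __init__(self):
        self.mac_table = {}                    # host MAC address -> access point (bridge port)

    def learn(self, mac, access_point):
        # Called whenever a frame from `mac` arrives via `access_point`.
        self.mac_table[mac] = access_point

    def forward(self, dst_mac):
        # Unknown destinations are flooded; known ones go out the learned port.
        return self.mac_table.get(dst_mac, "flood on all ports")

bridge = BackboneBridge()
bridge.learn("aa:bb:cc:dd:ee:ff", "AP-1")      # laptop first associates with AP-1
bridge.learn("aa:bb:cc:dd:ee:ff", "AP-2")      # laptop roams; re-association relearns the port
print(bridge.forward("aa:bb:cc:dd:ee:ff"))     # frames now reach the laptop via AP-2; its IP address is unchanged

Because the host keeps the same IP address and subnet, only the bridge's forwarding table changes when it moves between access points.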
WLANs support existing TCP/IP-based applications. There has been considerable debate in the past about the low throughput of WLANs compared to high-speed wired networks, particularly since switched Ethernet technology has pushed wired throughput into the gigabit range.
The desire to support higher throughput and ad hoc mobile communications prompted ETSI (the European Telecommunications Standards Institute) to produce a standard for High Performance Radio LAN (HIPERLAN), offering 20 Mbps throughput with a self-organizing, distributed-control network architecture. HIPERLAN Type 2 (HiperLAN/2) is a wireless ATM system operating in the 5 GHz band.


Universal Mobile Telecommunications System (UMTS)

The Universal Mobile Telecommunications System (UMTS) is commonly referred to as a third-generation system and was initially targeted for deployment around 2002. UMTS employs an ATM-based switching network architecture and aims to provide services to both mobile and fixed subscribers through common call-processing procedures. The UMTS architecture is split into core (switching) networks, control (service) networks, and access networks. The core network performs switching and transmission functions. The control network supports roaming through its mobility management functions. Finally, the radio access network provides channel access to mobile users and performs radio resource management and signaling. UMTS will include both terrestrial and global satellite components.
The UMTS network comprises: (a) the mobile terminal, (b) the base transceiver station (BTS), (c) the cell site switch (CSS), (d) mobile service control points (MSCPs), and (e) the UMTS mobility service (UMS). UMTS employs a hierarchical cell structure, with macrocells overlaying microcells and picocells. Highly mobile traffic is carried on the macrocells to reduce the number of handoffs required, as sketched below. UMTS also aims to support roaming across different networks.
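One way to picture the hierarchical cell structure is a simple speed-based layer-selection rule, sketched below in Python; the speed thresholds are invented and purely illustrative.

# Illustrative layer selection for a hierarchical cell structure.
# The thresholds are invented; the idea is that fast-moving users stay on
# macrocells so they cross cell boundaries (and hence hand off) less often.
def select_cell_layer(speed_kmh: float) -> str:
    if speed_kmh > 60:
        return "macrocell"   # vehicular users: wide coverage, few handoffs
    if speed_kmh > 5:
        return "microcell"   # pedestrians: medium coverage
    return "picocell"        # indoor or stationary users: high capacity

print(select_cell_layer(100))  # macrocell
print(select_cell_layer(3))    # picocell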
The UMTS Terrestrial Radio Access (UTRA) system will provide at least 144 kbps for full-mobility applications, 384 kbps for limited-mobility applications, and 2.048 Mbps for low-mobility applications. UMTS terminals will be multiband and multimode so that they can interwork with different standards.
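A minimal sketch of those UTRA rate targets as a lookup table follows; the mobility-class names are invented for illustration.

# Illustrative lookup of the UTRA target bit rates quoted above (class names are invented).
UTRA_TARGET_RATE_KBPS = {
    "full_mobility": 144,       # e.g., vehicular, wide-area coverage
    "limited_mobility": 384,    # e.g., pedestrian, urban coverage
    "low_mobility": 2048,       # e.g., indoor or stationary use
}

def min_bearer_rate(mobility_class: str) -> int:
    """Return the minimum UTRA data rate (in kbps) promised for a mobility class."""
    return UTRA_TARGET_RATE_KBPS[mobility_class]

print(min_bearer_rate("low_mobility"), "kbps")  # 2048 kbps, i.e., about 2.048 Mbps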
UMTS is also designed to offer data rates on demand. The network reacts to a user's needs based on his or her profile and the resources currently available in the network. UMTS supports the virtual home environment (VHE) concept, whereby a mobile user continues to experience a consistent set of services even when roaming from the home network to other UMTS operators. VHE provides a consistent working environment regardless of the user's location or mode of access. UMTS will also adapt to the different data rates available in different environments, so that users can continue to use their communication services.
To support universal roaming and global coverage, UMTS will include both terrestrial and satellite systems. It will enable roaming with other networks, such as GSM. UMTS will provide a flexible broadband access technology that supports both IP and non-IP traffic in a variety of modes, such as packet, circuit-switched, and virtual circuit.

 IMT2000

The ITU (International Telecommunication Union) has introduced a new framework of standards under the name IMT2000, a federation of systems for third-generation mobile telecommunications. IMT2000 aims to provide: (a) high-speed access, (b) support for broadband multimedia services, and (c) universal mobility. Frequency spectrum has been allocated to IMT2000 by the ITU, and several code-division multiple-access protocols have been proposed by different countries. The ITU has approved the CDMA2000 radio access system as the CDMA multicarrier member of the IMT2000 family of standards. CDMA2000 is capable of supporting both IS-41 and GSM-MAP to ensure backward compatibility. IS-41 is a network protocol standard that supports inter-operator roaming; it allows the MSCs of different service providers to exchange information about their subscribers on demand.
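As a rough illustration of the kind of on-demand exchange IS-41 enables, the hypothetical sketch below shows a home network handing a subscriber profile to a visited MSC at registration time; the message flow and names are invented and do not follow the actual IS-41 operations.

# Hypothetical sketch of IS-41-style roaming: a visited MSC pulls subscriber data
# from the home network on demand. Message and field names are invented.
class HomeNetwork:
    def __init__(self, subscribers):
        self.subscribers = subscribers              # IMSI -> service profile

    def registration_request(self, imsi, visited_msc):
        profile = self.subscribers.get(imsi)
        if profile is None:
            return {"result": "unknown subscriber"}
        profile["current_msc"] = visited_msc        # remember where to route incoming calls
        return {"result": "ok", "profile": profile}

home = HomeNetwork({"310150123456789": {"services": ["voice", "sms"]}})
answer = home.registration_request("310150123456789", visited_msc="MSC-B")
print(answer["result"], answer["profile"]["services"])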

IS-95, cdmaOne and cdma2000 Evolution

The IS-95 air interface was standardized by the TIA in July 1993. Networks that use the IS-95 CDMA air interface together with the ANSI-41 network protocol are known as cdmaOne networks. IS-95 networks use one or more 1.25 MHz carriers and operate in the 800 and 1900 MHz frequency bands.
Following the launch of the first cdmaOne network in Hong Kong in 1995, the number of cdmaOne subscribers has grown into the millions. cdmaOne networks provide soft handoffs and higher capacity than traditional AMPS networks, with data rates of up to 14.4 kbps. cdmaOne is based on IS-95A technology; IS-95B improves on it by supporting higher rates for packet- and circuit-switched CDMA data, up to 115 kbps.
This evolution continues with cdma2000, the third-generation version of IS-95, developed to support third-generation services as defined by the ITU. cdma2000 is divided into two parts: (a) IS-2000/cdma2000 1X and (b) IS-2000A/cdma2000 3X. The cdma2000 1X standard delivers twice the voice capacity of cdmaOne with a data rate of 144 kbps. The term 1X, derived from 1XRTT (radio transmission technology), signifies that the standard carrier on the air interface is 1.25 MHz wide, the same as in IS-95A and IS-95B. In cdma2000 3X, the term 3X, derived from 3XRTT, signifies three times 1.25 MHz, i.e., 3.75 MHz. cdma2000 3X offers greater capacity than 1X, with data rates of up to 2 Mbps, while retaining backward compatibility with earlier 1X and cdmaOne deployments.
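The NX naming is simply a multiple of the basic 1.25 MHz carrier, as the small calculation below shows.

# The NX in NXRTT is a multiple of the basic 1.25 MHz IS-95/cdmaOne carrier.
BASE_CARRIER_MHZ = 1.25

def rtt_bandwidth_mhz(multiple: int) -> float:
    """Carrier bandwidth implied by an NXRTT designation."""
    return multiple * BASE_CARRIER_MHZ

print(rtt_bandwidth_mhz(1))  # 1.25 MHz for cdma2000 1X (same as IS-95A/B)
print(rtt_bandwidth_mhz(3))  # 3.75 MHz for cdma2000 3X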
More recently, the 3GPP (Third Generation Partnership Project) was formed to define standards for third-generation all-IP networks. It is responsible for producing globally applicable technical specifications and reports for a 3G mobile system based on evolved GSM core networks and the radio access technologies they support (i.e., Universal Terrestrial Radio Access (UTRA) in both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) modes).