
UDR56k-4 Sensitivity & BER

Darren Long <darren.long@...>
 

Hi all.

I was just reading the datasheet for the UDR56k-4 and noticed the receiver sensitivity figures:

Receiver Sensitivity, BER 10^-3
•    4k8 -113 dBm
•    9k6 -110 dBm
•    56k -100 dBm

That BER of 10^-3 is pretty severe. According to my calcs in octave:

octave:10> ber = 10^-3
ber =  0.0010000
octave:11> ploss = (1-(1-ber)^(128*8))
ploss =  0.64103
octave:12> ploss = (1-(1-ber)^(256*8))
ploss =  0.87114
octave:13> ploss = (1-(1-ber)^(1500*8))
ploss =  0.99999

That's a 64% packet loss rate for 128-byte frames, 87% for 256-byte frames, and almost 100% for a typical TCP/IP MTU.  Not a particularly good reference for planning link budgets, I wouldn't have thought.

A BER of 10^-4 looks to be a more useful baseline for a sensitivity figure:

octave:19> ber =  0.00010000
ber =  1.0000e-04
octave:20> ploss = (1-(1-ber)^(128*8))
ploss =  0.097336
octave:21> ploss = (1-(1-ber)^(256*8))
ploss =  0.18520
octave:22> ploss = (1-(1-ber)^(1500*8))
ploss =  0.69882
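
The same calculation as a one-liner, for anyone who wants to plug in other frame sizes.  It assumes independent bit errors (no burst fading or flat fades), so treat it as a rough planning figure:

ploss = @(ber, nbytes) 1 - (1 - ber).^(nbytes*8);   % P(frame corrupted)
ploss(1e-3, [128 256 1500])   % ~0.64, ~0.87, ~1.00, as above
ploss(1e-4, [128 256 1500])   % ~0.097, ~0.185, ~0.699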

A ~20% loss rate would be manageable for, say, a reliable AX.25 connected-mode link, but would kill TCP performance over an unreliable link.  NORM would cope OK; I've regularly seen it do well with >50% loss rates.

I suppose the BER vs. Eb/N0 curves for the modems in question would show how much margin one needed to get down to a BER of 10^-4.  Would it be more useful to have the sensitivity figures quoted for a better BER?
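
To get a feel for the scale of that margin, here's a sketch using the textbook noncoherent 2FSK curve, BER = 0.5*exp(-Eb/N0/2).  That curve is just my assumption for illustration; the actual UDR56k-4 modem curves haven't been published:

ebn0_db = @(ber) 10*log10(-2*log(2*ber));   % invert BER = 0.5*exp(-EbN0/2)
ebn0_db(1e-3)                   % ~10.9 dB
ebn0_db(1e-4)                   % ~12.3 dB
ebn0_db(1e-4) - ebn0_db(1e-3)   % margin: only ~1.4 dB

If the real modems behave anything like that, quoting sensitivity at 10^-4 would only cost a dB or two on paper.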

What do you think?

Cheers,

Darren Long, G0HWW

Bryan Hoyer <bhhoyer@...>
 

Hi Darren,

You are quite right. 10^-3 is basically where AX.25 falls apart, as 1 error in 1000 bits means a 128-byte (1024-bit) packet rarely gets through intact. So in a sense it is the limit of a usable channel for AX.25.

Those numbers are both preliminary and conservative. We will publish Eb/No curves after final characterization.

On the other hand, even a little FEC would make this a usable channel.

We will be releasing our plans for future protocols at DCC in September.

Cheers,
Bryan K7UDR



"Michael E. Fox - N6MEF" <n6mef@...>
 

Darren,

 

It’s even worse than that, especially in the real world.  A 20% loss rate will cause enough retransmits that a cascade failure will occur.  In other words, of those 20% that must be retransmitted, 20% will need to be retransmitted again, and so on.  If the link carries more than a single user occasionally checking his BBS for short messages, it becomes worthless pretty quickly.  For example, if there are 3-4 systems on the frequency, even with only 128-byte packets, the channel becomes hopelessly clogged in very short order.
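
Even the generous back-of-the-envelope version (independent losses, unlimited retries) shows the extra load each retransmission adds:

tx = @(p) 1 ./ (1 - p);   % expected transmissions per delivered packet (geometric series)
tx([0.10 0.20 0.50])      % 1.11, 1.25, 2.00

And that’s the floor.  On a shared frequency, every retry is more channel occupancy, which means more collisions, which pushes the effective loss rate even higher.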

 

Even your 10^-4 example shows 10% loss of just 128-byte packets.  Again, way too high for anything more than a single user talking to a single station.  So, 20% loss is NOT manageable for AX.25.  Not even 10%.

 

When we first deployed our BBS network, we performed real testing with 9600 baud TNCs (no FEC).  We were getting somewhere in the range of 5% to 10% packet loss at 9600 baud and 0% packet loss at 1200 baud.  (This was not a lab test.  This was between real sites using real antennas.)  Two different TNC brands were used and, yes, deviation was verified to be per manufacturer’s specs.  As a result, even with the higher baud rate, the effective throughput was lower on 9600 than on 1200 baud. 

 

To think that one could go higher in speed and not make things even worse is just not facing reality.  In *real* environments, with more than just one user talking to one other station at a time, the BER must be much better (10^-5 to 10^-6) in order for the channel to not rapidly degrade due to cascading retransmits.  For any *real* environment, FEC is going to be essential.

 

The standard level for measuring receiver sensitivity in digital radios is 10^-5 BER.  Anyone who is familiar with P25 or DMR testing will be familiar with modulation fidelity vs. BER.  Those systems have error correction.  The modulation fidelity (a measure of how accurately the symbols are being received) can degrade quite severely while still maintaining a 0% BER.  Without forward error correction, these systems would be unusable in any real environment.

 

I’d love to deploy the UDR56K on our BBSs that share a common forwarding frequency.  But without FEC, there’s just no way.  The result would be predictably terrible.

 

Michael

N6MEF

 

 

 


Darren Long <darren.long@...>
 

On 06/08/13 16:30, Bryan Hoyer wrote:

Hi Darren,


You are quite right. 10^-3 is basically where AX.25 falls apart, as 1 error in 1000 bits means a 128-byte (1024-bit) packet rarely gets through intact. So in a sense it is the limit of a usable channel for AX.25.

Those numbers are both preliminary and conservative. We will publish Eb/No curves after final characterization.

Jolly good.

On the other hand, even a little FEC would make this a usable channel.
Indeed.


We will be releasing our plans for future protocols at DCC in September.

I'll make sure to keep an eye on the proceedings.

Cheers,

Darren, G0HWW

Darren Long <darren.long@...>
 

On 06/08/13 17:12, Michael E. Fox - N6MEF wrote:
 
It’s even worse than that, especially in the real world.  A 20% loss rate will cause enough retransmits that a cascade failure will occur.  In other words, of those 20% that must be retransmitted, 20% will need to be retransmitted again, and so on.  If the link carries more than a single user occasionally checking his BBS for short messages, it becomes worthless pretty quickly.  For example, if there are 3-4 systems on the frequency, even with only 128-byte packets, the channel becomes hopelessly clogged in very short order.

 

Even your 10^-4 example shows 10% loss of just 128-byte packets.  Again, way too high for anything more than a single user talking to a single station.  So, 20% loss is NOT manageable for AX.25.  Not even 10%.


Around here, in order to test anything I have to buy more radios and computers; there's no-one else to talk to.  That said, I'm interested in getting more radios :)

When we first deployed our BBS network, we performed real testing with 9600 baud TNCs (no FEC).  We were getting somewhere in the range of 5% to 10% packet loss at 9600 baud and 0% packet loss at 1200 baud.  (This was not a lab test.  This was between real sites using real antennas.)  Two different TNC brands were used and, yes, deviation was verified to be per manufacturer’s specs.  As a result, even with the higher baud rate, the effective throughput was lower on 9600 than on 1200 baud. 

Interesting, thanks for the info.  This was using Connected Mode AX.25 or TCP/IP over Disconnected Mode?

 

To think that one could go higher in speed and not make things even worse is just not facing reality.  In *real* environments, with more than just one user talking to one other station at a time, the BER must be much better (10^-5 to 10^-6) in order for the channel to not rapidly degrade due to cascading retransmits.  For any *real* environment, FEC is going to be essential.


Agreed.  John Ronan, EI7IG, and I have been experimenting with Delay Tolerant Networks over ham radio links.  There is convergence layer support for Connected Mode AX.25 (implemented by yours truly) in the DTN2 reference implementation, and a NACK-Oriented Reliable Multicast (NORM) convergence layer too, both of which seem useful.  We've never tried NORM over UDP/IP/UI-frames on AX.25 yet, but John's been dabbling with it on D-STAR DD.  NORM's packet-level erasure codes work well, and with some modest link-layer FEC too I should think it would work very well.
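
To put a toy number on why packet-level erasure coding rides out that kind of loss (parameters invented for illustration, not NORM's actual defaults): send k data packets plus m repair packets, and the block decodes so long as any k of the k+m arrive.

% P(block decodes): at least k of k+m packets survive per-packet loss p
psucc = @(k, m, p) sum(arrayfun(@(i) nchoosek(k+m, i) * (1-p)^i * p^(k+m-i), k:k+m));
psucc(8, 16, 0.5)   % ~0.97 even at 50% packet loss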

The standard level for measuring receiver sensitivity in digital radios is 10^-5 BER.  Anyone who is familiar with P25 or DMR testing will be familiar with modulation fidelity vs. BER.  Those systems have error correction.  The modulation fidelity (a measure of how accurately the symbols are being received) can degrade quite severely while still maintaining a 0% BER.  Without forward error correction, these systems would be unusable in any real environment.


Sure.  This is why I thought quoting the sensitivity for a BER of 10^-3 was a bit odd.  I assume that post-correction residual BER figures for modems with FEC would be used in the sensitivity figures when presented in the specs.

I’d love to deploy the UDR56K on our BBSs that share a common forwarding frequency.  But without FEC, there’s just no way.  The result would be predictably terrible.

I hope that any new modems developed for the UDR56k are prototyped in GNU Radio and/or implemented in a userspace soundmodem so that I can experiment with them.  I don't currently have a car and I'm not too tempted to buy two UDR56k units to run at home, but I was thinking of getting one in the hope that all my old AX.25 kit would interoperate with it, and that I could use my USRP or a soundmodem to try out any fancy new waveforms.

I wonder what the minimum tx power level achievable with the UDR56k is?


Cheers,

Darren, G0HWW



"Michael E. Fox - N6MEF" <n6mef@...>
 


Around here, in order to test anything I have to buy more radios and computers; there's no-one else to talk to.  That said, I'm interested in getting more radios :)

Absolutely!  I’m all for that!

Interesting, thanks for the info.  This was using Connected Mode AX.25 or TCP/IP over Disconnected Mode?

We tried both.  It doesn’t matter.  The errored packet still needs to be retransmitted, regardless of whether it is AX.25 or TCP/IP that makes the decision.




Sure.  This is why I thought quoting the sensitivity for a BER of 10^-3 was a bit odd.  I assume that post-correction residual BER figures for modems with FEC would be used in the sensitivity figures when presented in the specs.

I believe that’s the case.  BER is certainly important, and one needs to know where that break point is.  But it’s also important to understand that you can’t design a system to operate right at the edge of the 0% BER point.  The reason is that, with FEC, BER stays at 0% until the signal is so dirty that error correction can’t help it any more.  At that point, BER jumps up very quickly from 0% to 5% or more with just a dB or two of reduced C/I (Carrier to Interference ratio).  In other words, a 0% BER signal and a totally unusable signal can be very, very close in C/I.  So, at the point where 0% BER is lost, you have little to no fade margin and your system will not be reliable.

In practice, the modulation fidelity value is monitored during such things as drive tests because it changes gradually as the signal quality gets worse.  That can help you determine the fade margin you need to build into your link budget.  Then, depending on the fade margin you need for your environment, you can determine how much signal you need to stay away from the danger point.
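
Here’s a toy illustration of how sharp that cliff is, using a simple Hamming(7,4) code as a stand-in (the actual P25/DMR codes are different and stronger, so read this for shape, not spec).  Hamming(7,4) corrects any single bit error per 7-bit block, so a block fails only when two or more channel errors land in it:

pblk = @(p) 1 - (1-p).^7 - 7.*p.*(1-p).^6;   % P(block has 2+ errors)
pblk([0.001 0.01 0.03 0.1])                  % ~2.1e-5, 0.0020, 0.017, 0.15

A 10x change in raw channel BER moves the post-FEC error rate by roughly 100x, which is exactly why the link goes from clean to unusable in a dB or two.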

 

On another point, I note that the UDR56K datasheet says the modulation types are FSK and GMSK.  I also know that the digital radio community is moving to PSK as a more reliable way to transmit higher data rates within the same spectrum.  I’m not a modulation engineer, but as I understand it, it’s evidently possible to switch more quickly and accurately between phases than between frequencies, making for fewer bit/symbol errors.  I wonder if PSK, QPSK, etc. emission types are even allowed by our FCC Part 97 rules which govern ham radio.  Hmm… something to check.
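
For what it’s worth, the textbook AWGN curves back that up.  These are the idealized formulas with no implementation losses, so real hardware will land somewhere worse:

Q = @(x) 0.5*erfc(x/sqrt(2));   % Gaussian tail function
x = 10^(10/10);                 % Eb/N0 of 10 dB as a linear ratio
ber_bpsk  = Q(sqrt(2*x))        % ~3.9e-6 (ideal coherent BPSK)
ber_ncfsk = 0.5*exp(-x/2)       % ~3.4e-3 (noncoherent 2FSK)

Roughly three orders of magnitude at the same Eb/N0, though coherent PSK demands tighter frequency and phase tracking in the receiver.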

 

Michael

N6MEF

Bryan Hoyer <bhhoyer@...>
 

On the topic of modulation:

The datasheet lists the modulation types implemented in the first release. These types support existing digital standards for compatibility.

The UDR uses an IQ XCVR and as such is capable of doing other modulation types, limited by the regs and the linearity of the PA. The modem is implemented in software and can be updated in the field.

We will release a full briefing on our software architecture at DCC.

Cheers,
Bryan K7UDR
