Edits to section 1 and corrections to n-k+1 stuff.

git-svn-id: svn+ssh://svn.code.sf.net/p/wsjt/wsjt/branches/wsjtx@6321 ab8295b8-cf94-4d9e-aec4-7959e3be5d79
Steven Franke 2015-12-28 03:02:05 +00:00
parent f416a52def
commit fc06ec952f
4 changed files with 49 additions and 40 deletions


@@ -1,19 +1,17 @@
-# gnuplot script for AWGN vs Rayleigh figure
+# gnuplot script for "Percent copy" figure
 # run: gnuplot fig_wer3.gnuplot
 # then: pdflatex fig_wer3.tex
 #
-set term epslatex standalone size 16cm,8cm
+set term epslatex standalone size 6in,6*2/3in
 set output "fig_wer3.tex"
-set xlabel "$E_s/N_o$ (dB)"
-set ylabel "WER"
+set xlabel "SNR in 2500 Hz Bandwidth (dB)"
+set ylabel "Percent Copy"
 set style func linespoints
-set key on top outside nobox
+set key off
 set tics in
 set mxtics 2
 set mytics 10
 set grid
-set logscale y
-#set format y "10^{%L}"
-plot "ftdata-1000-rf.dat" using ($1+29.7):(1-$2) every ::1 with linespoints pt 7 title "FT-1K-RF", \
-"ftdata-10000-rf.dat" using ($1+29.7):(1-$2) every ::1 with linespoints pt 7 title "FT-10K-RF", \
-"bmdata-rf.dat" using ($1+29.7):(1-$2) every ::1 with linespoints pt 5 title 'BM-RF', \
-"ftdata-10000.dat" using ($1+29.7):(1-$2) every ::1 with linespoints pt 7 title 'FT-10K-AWGN', \
-"bmdata.dat" using ($1+29.7):(1-$2) with linespoints pt 5 title 'BM-AWGN'
+plot [-27:-22] [0:110] \
+"ftdata-100000.dat" using 1:(100*$3) with linespoints lt 1 pt 7 title 'FT-100K', \
+"ftdata-100000.dat" using 1:(100*$2) with linespoints lt 1 pt 7 title 'FT-100K'


@@ -1,9 +1,9 @@
 snr psuccess ntrials 10000 r6315
 -26.5 0.004 x
 -26.0 0.03 x
--25.5 0.107
--25.0 0.353
+-25.5 0.107 0.19
+-25.0 0.353 0.40 (2)
 -24.5 0.653
 -24.0 0.913
 -23.5 0.983
 -23.0 0.9987 x


@@ -2,9 +2,9 @@ snr psuccess 100000 trials r6315
 -27.0 0.0 x
 -26.5 0.007 x
 -26.0 0.057
--25.5 0.207
+-25.5 0.207 0.35
 -25.0 0.531 0.67
--24.5 0.822
+-24.5 0.822 0.878
 -24.0 0.953
 -23.5 0.99423
 -23.0 0.99967 302956/303056


@@ -127,18 +127,27 @@ The following paragraph may not belong here - feel free to get rid of it,
 \end_layout
 \begin_layout Standard
-The Franke-Taylor (FT) decoder described herein is a probabilistic list-decoder
-that has been optimized for use in the short block-length, low-rate Reed-Solomo
-n code used in JT65.
-The particular approach that we have developed has a number of desirable
+The Franke-Taylor (FT) decoder is a probabilistic list-decoder that we have
+developed for use in the short block-length, low-rate Reed-Solomon code
+used in JT65.
+JT65 provides a unique sandbox for playing with decoding algorithms.
+Several seconds are available for decoding a single 63-symbol message.
+This is a long time! The luxury of essentially unlimited time allows us
+to experiment with decoders that have high computational complexity.
+The payoff is that we can extend the decoding threshold by many dB over
+the hard-decision, Berlekamp-Massey decoder on a typical fading channel,
+and by a meaningful amount over the KV decoder, long considered to be the
+best available soft-decision decoder.
+In addition to its excellent performance, the FT algorithm has other desirable
 properties, not the least of which is its conceptual simplicity.
-The decoding performance and complexity scale in a useful way, providing
-steadily increasing soft-decision decoding gain as a tunable computational
-complexity parameter is increased over more than 5 orders of magnitude.
-The fact that the algorithm requires a large number of independent decoding
-trials should also make it possible to obtain significant performance gains
-through parallelization.
+Decoding performance and complexity scale in a useful way, providing steadily
+increasing soft-decision decoding gain as a tunable computational complexity
+parameter is increased over more than 5 orders of magnitude.
+This means that appreciable gain should be available from our decoder even
+on very simple (and slow) computers.
+On the other hand, because the algorithm requires a large number of independent
+decoding trials, it should be possible to obtain significant performance
+gains through parallelization on high-performance computers.
 \end_layout
 \begin_layout Section
@@ -378,14 +387,16 @@ probabilistic
 decoding methods
 \begin_inset CommandInset citation
 LatexCommand cite
+after "Chapter 10"
 key "key-1"
 \end_inset
 .
-These algorithms generally involve some amount of educating guessing about
-which received symbols are in error.
-The guesses are informed by quality metrics, also known as
+Such algorithms involve some amount of educated guessing about which received
+symbols are in error or, alternatively, about which received symbols are
+correct.
+The guesses are informed by
 \begin_inset Quotes eld
 \end_inset
@@ -393,11 +404,11 @@ soft-symbol
 \begin_inset Quotes erd
 \end_inset
-metrics, associated with the received symbols.
-To illustrate why it is absolutely essential to use such soft-symbol informatio
-n to identify symbols that are most likely to be in error it helps to consider
-what would happen if we tried to use completely random guesses, ignoring
-any available soft-symbol information.
+quality metrics associated with the received symbols.
+To illustrate why it is absolutely essential to use such soft-symbol informatio
+n in these algorithms it helps to consider what would happen if we tried
+to use completely random guesses, ignoring any available soft-symbol informatio
+n.
 \end_layout
 \begin_layout Standard
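To put a rough number on how hopeless unguided guessing is for a 63-symbol JT65 codeword, here is a minimal Python sketch; the error count e = 25 is a hypothetical example value, not a figure taken from the paper.

from math import comb

n = 63   # JT65 codeword length in symbols
e = 25   # hypothetical number of symbols received in error

# Probability that a single random guess picks exactly the e error positions
patterns = comb(n, e)          # roughly 2.4e17 possible error patterns
p_correct = 1 / patterns       # roughly 4e-18 per random trial
print(f"{patterns:.2e} patterns, P(correct guess) = {p_correct:.1e}")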
@@ -997,11 +1008,11 @@ The correct JT65 codeword produces a value for
 bins containing both signal and noise power.
 Incorrect codewords have at most
-\begin_inset Formula $k=12$
+\begin_inset Formula $k-1=11$
 \end_inset
 such bins and at least
-\begin_inset Formula $n-k=51$
+\begin_inset Formula $n-k+1=52$
 \end_inset
 bins containing noise only.
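The corrected counts follow from the minimum-distance property of the (63,12) Reed-Solomon code; a one-line check of the bound behind the k-1 and n-k+1 figures:
\[
d_{min}=n-k+1=63-12+1=52,\qquad n-d_{min}=k-1=11,
\]
so an incorrect codeword must differ from the transmitted codeword in at least 52 symbol positions and can agree with it in at most 11.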
@@ -1033,7 +1044,7 @@ d-deviation uncertainty range assumes Gaussian statistics.
 \begin_layout Standard
 \begin_inset Formula
 \[
-u=\frac{n-k\pm\sqrt{n-k}}{n}+\frac{k\pm\sqrt{k}}{n}(1+y).
+u=\frac{n-k+1\pm\sqrt{n-k+1}}{n}+\frac{k-1\pm\sqrt{k-1}}{n}(1+y).
 \]
 \end_inset
@@ -1044,7 +1055,7 @@ For JT65 this expression evaluates to
 \begin_layout Standard
 \begin_inset Formula
 \[
-u\approx1\pm0.13+(0.19\pm0.06)\, y.
+u\approx1\pm0.11+(0.17\pm0.05)\, y.
 \]
 \end_inset
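As a quick check of the updated coefficients, the following Python sketch evaluates the expression with the JT65 parameters n = 63, k = 12 (a verification sketch, not code from the repository):

from math import sqrt

n, k = 63, 12                  # JT65 Reed-Solomon code parameters
noise_bins = n - k + 1         # 52 noise-only bins for an incorrect codeword
signal_bins = k - 1            # at most 11 bins that also contain signal

a, da = noise_bins / n, sqrt(noise_bins) / n     # 0.825 +/- 0.114
b, db = signal_bins / n, sqrt(signal_bins) / n   # 0.175 +/- 0.053

# u ~ (a +/- da) + (b +/- db)(1 + y); rounding the central values and
# uncertainties reproduces u ~ 1 +/- 0.11 + (0.17 +/- 0.05) y
print(f"noise term : {a:.3f} +/- {da:.3f}")
print(f"signal term: ({b:.3f} +/- {db:.3f})(1 + y)")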
@@ -1068,7 +1079,7 @@ As a specific example, consider signal strength
 \end_inset
 0.6, while incorrect codewords will give
-\begin_inset Formula $u\approx2.0\pm0.3$
+\begin_inset Formula $u\approx1.7\pm0.3$
 \end_inset
 or less.