import os # Configure which GPU
if os.getenv("CUDA_VISIBLE_DEVICES") is None:
    gpu_num = 0 # Use "" to use the CPU
    os.environ["CUDA_VISIBLE_DEVICES"] = f"{gpu_num}"
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# Import Sionna
try:
    import sionna as sn
except ImportError as e:
    # Install Sionna if package is not already installed
    import os
    os.system("pip install sionna")
    import sionna as sn

# Configure the notebook to use only a single GPU and allocate only as much memory as needed
# For more details, see https://www.tensorflow.org/guide/gpu
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_memory_growth(gpus[0], True)
    except RuntimeError as e:
        print(e)
# Avoid warnings from TensorFlow
tf.get_logger().setLevel('ERROR')
import numpy as np
# For plotting
%matplotlib inline
# also try %matplotlib widget
import matplotlib.pyplot as plt
# for performance measurements
import time
# For the implementation of the Keras models
from tensorflow.keras import Model
This tutorial will guide you through Sionna, from its basic principles to the implementation of a point-to-point link with a 5G NR-compliant code and a 3GPP channel model. You will also learn how to write custom trainable layers by implementing a state-of-the-art neural receiver, and how to train and evaluate end-to-end communication systems.
The tutorial is structured in four notebooks:
Part I: Getting started with Sionna
Part II: Differentiable Communication Systems
Part III: Advanced Link-level Simulations
Part IV: Toward Learned Receivers
The official documentation provides key material on how to use Sionna and how its components are implemented.
- Imports & Basics
- A note on random number generation
- Sionna Data-flow and Design Paradigms
- Hello, Sionna!
- Communication Systems as Keras Models
- Forward Error Correction
- Eager vs. Graph Mode
- Exercise
Imports & Basics
We can now access Sionna functions within the sn namespace.
Hint: In Jupyter notebooks, you can run bash commands with !.
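For example, the following cell commands could be used (illustrative only):

!nvidia-smi        # show the available GPUs and their current load
!pip show sionna   # print the installed Sionna version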
A note on random number generation
When Sionna is loaded, it instantiates random number generators (RNGs) for Python, NumPy, and TensorFlow. You can optionally set a seed which will make all of your results deterministic, as long as only these RNGs are used. In the cell below, you can see how this seed is set and how the different RNGs can be used.
sn.config.seed = 40
# Python RNG - use instead of
# import random
# random.randint(0, 10)
print(sn.config.py_rng.randint(0,10))
# NumPy RNG - use instead of
# import numpy as np
# np.random.randint(0, 10)
print(sn.config.np_rng.integers(0,10))
# TensorFlow RNG - use instead of
# import tensorflow as tf
# tf.random.uniform(shape=[1], minval=0, maxval=10, dtype=tf.int32)
print(sn.config.tf_rng.uniform(shape=[1], minval=0, maxval=10, dtype=tf.int32))
7
5
tf.Tensor([2], shape=(1,), dtype=int32)
Sionna Data-flow and Design Paradigms
Sionna inherently parallelizes simulations via batching, i.e., each element in the batch dimension is simulated independently.
This means that the first tensor dimension is always used for inter-frame parallelization, similar to an outer for-loop in Matlab/NumPy simulations, but the operations are executed in parallel.
To keep the dataflow efficient, Sionna follows a few simple design principles:
- Signal-processing components are implemented as individual Keras layers.
- tf.float32 is used as the preferred datatype and tf.complex64 for complex-valued datatypes, respectively. This allows simpler re-use of components (e.g., the same scrambling layer can be used for binary inputs and LLR values). tf.float64/tf.complex128 are available when high precision is needed.
- Models can be developed in eager mode, allowing simple (and fast) modification of system parameters.
- Number-crunching simulations can be executed in the faster graph mode, and even (experimental) XLA acceleration is available for most components.
- Whenever possible, components are automatically differentiable via auto-grad to simplify the deep learning design-flow.
- Code is structured into sub-packages for different tasks such as channel coding, mapping,… (see API documentation for details).
These paradigms improve the re-usability and reliability of our components for a wide range of communications-related applications.
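As a small illustration of the datatype paradigm (a sketch based on the Mapper signature shown later in this notebook, not part of the original tutorial), a high-precision mapper could be instantiated as follows:

# Sketch: a double-precision Mapper; the dtype options are listed in the Mapper docstring below
mapper_dp = sn.mapping.Mapper(constellation_type="qam",
                              num_bits_per_symbol=2,
                              dtype=tf.complex128) # high-precision complex output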
Hello, Sionna!
Let’s start with a very simple simulation: Transmitting QAM symbols over an AWGN channel. We will implement the system shown in the figure below.
We will use upper-case names for simulation parameters that are used throughout this notebook.
Every layer needs to be initialized once before it can be used.
Tip: Use the API documentation to find an overview of all existing components. You can directly access the signature and the docstring within Jupyter via Shift+TAB.
Remark: Most layers are defined to be complex-valued.
We first need to create a QAM constellation.
NUM_BITS_PER_SYMBOL = 2 # QPSK
constellation = sn.mapping.Constellation("qam", NUM_BITS_PER_SYMBOL)

constellation.show(figsize=(4,4));
Task: Try to change the modulation order, e.g., to 16-QAM.
NUM_BITS_PER_SYMBOL = 4 # 16-QAM
constellation = sn.mapping.Constellation("qam", NUM_BITS_PER_SYMBOL)

constellation.show(figsize=(4,4));
We then need to set up a mapper that maps bits to constellation points. The mapper takes the constellation as a parameter.
We also need to set up a corresponding demapper to compute log-likelihood ratios (LLRs) from the received noisy samples.
mapper = sn.mapping.Mapper(constellation=constellation)

# The demapper uses the same constellation object as the mapper
demapper = sn.mapping.Demapper("app", constellation=constellation)
Tip: You can access the signature and docstring via the ? command and print the complete class definition via the ?? operator.
Obviously, you can also access the source code via https://github.com/nvlabs/sionna/.
# print the class definition of the Mapper class
sn.mapping.Mapper?
Init signature: sn.mapping.Mapper( constellation_type=None, num_bits_per_symbol=None, constellation=None, return_indices=False, dtype=tf.complex64, **kwargs, ) Docstring: Mapper(constellation_type=None, num_bits_per_symbol=None, constellation=None, return_indices=False, dtype=tf.complex64, **kwargs) Maps binary tensors to points of a constellation. This class defines a layer that maps a tensor of binary values to a tensor of points from a provided constellation. Parameters ---------- constellation_type : One of ["qam", "pam", "custom"], str For "custom", an instance of :class:`~sionna.mapping.Constellation` must be provided. num_bits_per_symbol : int The number of bits per constellation symbol, e.g., 4 for QAM16. Only required for ``constellation_type`` in ["qam", "pam"]. constellation : Constellation An instance of :class:`~sionna.mapping.Constellation` or `None`. In the latter case, ``constellation_type`` and ``num_bits_per_symbol`` must be provided. return_indices : bool If enabled, symbol indices are additionally returned. Defaults to `False`. dtype : One of [tf.complex64, tf.complex128], tf.DType The output dtype. Defaults to tf.complex64. Input ----- : [..., n], tf.float or tf.int Tensor with with binary entries. Output ------ : [...,n/Constellation.num_bits_per_symbol], tf.complex The mapped constellation symbols. : [...,n/Constellation.num_bits_per_symbol], tf.int32 The symbol indices corresponding to the constellation symbols. Only returned if ``return_indices`` is set to True. Note ---- The last input dimension must be an integer multiple of the number of bits per constellation symbol. File: ~/anaconda3/envs/sionna/lib/python3.9/site-packages/sionna/mapping.py Type: type Subclasses:
As can be seen, the Mapper class inherits from Layer, i.e., it implements a Keras layer.
This allows complex systems to be built simply by stacking layers with the Keras functional API.
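As a quick illustration (not part of the original example), Sionna layers can be chained like any other Keras layer. A minimal sketch using the functional API, with illustrative names, could look as follows:

# Sketch: stacking a Sionna layer with the Keras functional API (illustrative)
inp = tf.keras.Input(shape=(1024,))                          # batch of bit vectors
syms = sn.mapping.Mapper(constellation=constellation)(inp)   # bits -> constellation symbols
toy_model = tf.keras.Model(inputs=inp, outputs=syms)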
Sionna provides a binary source utility to sample uniform i.i.d. bits.
binary_source = sn.utils.BinarySource()
Finally, we need the AWGN channel.
awgn_channel = sn.channel.AWGN()
Sionna provides a utility function to compute the noise power spectral density \(N_0\) from the energy per bit to noise power spectral density ratio \(E_b/N_0\) in dB and a variety of parameters such as the coderate and the number of bits per symbol.
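For a unit-energy constellation and without any resource-grid overhead (a simplified view of what the utility computes), this conversion amounts to

\[ N_0 = \left( \frac{E_b}{N_0}\bigg\rvert_{\text{lin}} \cdot r \cdot m \right)^{-1}, \qquad \frac{E_b}{N_0}\bigg\rvert_{\text{lin}} = 10^{\frac{E_b/N_0\,[\text{dB}]}{10}}, \]

where \(m\) denotes the number of bits per symbol and \(r\) the coderate.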
no = sn.utils.ebnodb2no(ebno_db=10.0,
                        num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                        coderate=1.0) # Coderate set to 1 as we do uncoded transmission here
We now have all the components we need to transmit QAM symbols over an AWGN channel.
Sionna natively supports multi-dimensional tensors.
Most layers operate on the last dimension and can have arbitrary input shapes (which are preserved at the output).
BATCH_SIZE = 16 # How many examples are processed by Sionna in parallel

bits = binary_source([BATCH_SIZE,
                      1024]) # Blocklength
print("Shape of bits: ", bits.shape)

x = mapper(bits)
print("Shape of x: ", x.shape)

y = awgn_channel([x, no])
print("Shape of y: ", y.shape)

llr = demapper([y, no])
print("Shape of llr: ", llr.shape)
Shape of bits: (16, 1024)
Shape of x: (16, 256)
Shape of y: (16, 256)
Shape of llr: (16, 1024)
In Eager mode, we can directly access the values of each tensor. This simplifies debugging.
num_samples = 8 # how many samples shall be printed
num_symbols = int(num_samples/NUM_BITS_PER_SYMBOL)

print(f"First {num_samples} transmitted bits: {bits[0,:num_samples]}")
print(f"First {num_symbols} transmitted symbols: {np.round(x[0,:num_symbols], 2)}")
print(f"First {num_symbols} received symbols: {np.round(y[0,:num_symbols], 2)}")
print(f"First {num_samples} demapped llrs: {np.round(llr[0,:num_samples], 2)}")
First 8 transmitted bits: [1. 0. 0. 0. 0. 1. 0. 0.]
First 2 transmitted symbols: [-0.32+0.32j 0.32-0.32j]
First 2 received symbols: [-0.42+0.25j 0.28-0.36j]
First 8 demapped llrs: [ 21.14 -12.66 -10.86 -19.34 -14.41 17.98 -17.59 -14.02]
Let’s visualize the received noisy samples.
plt.figure(figsize=(4,4))
plt.axes().set_aspect(1)
plt.grid(True)
plt.title('Channel output')
plt.xlabel('Real Part')
plt.ylabel('Imaginary Part')
plt.scatter(tf.math.real(y), tf.math.imag(y))
plt.tight_layout()
Task: One can play with the SNR to visualize the impact on the received samples.
Answer:
no = sn.utils.ebnodb2no(ebno_db=20.0,
                        num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                        coderate=1.0) # Coderate set to 1 as we do uncoded transmission here

BATCH_SIZE = 16 # How many examples are processed by Sionna in parallel

bits = binary_source([BATCH_SIZE,
                      1024]) # Blocklength
print("Shape of bits: ", bits.shape)

x = mapper(bits)
print("Shape of x: ", x.shape)

y = awgn_channel([x, no])
print("Shape of y: ", y.shape)

llr = demapper([y, no])
print("Shape of llr: ", llr.shape)

num_samples = 8 # how many samples shall be printed
num_symbols = int(num_samples/NUM_BITS_PER_SYMBOL)

print(f"First {num_samples} transmitted bits: {bits[0,:num_samples]}")
print(f"First {num_symbols} transmitted symbols: {np.round(x[0,:num_symbols], 2)}")
print(f"First {num_symbols} received symbols: {np.round(y[0,:num_symbols], 2)}")
print(f"First {num_samples} demapped llrs: {np.round(llr[0,:num_samples], 2)}")

plt.figure(figsize=(4,4))
plt.axes().set_aspect(1)
plt.grid(True)
plt.title('Channel output')
plt.xlabel('Real Part')
plt.ylabel('Imaginary Part')
plt.scatter(tf.math.real(y), tf.math.imag(y))
plt.tight_layout()
Shape of bits: (16, 1024)
Shape of x: (16, 256)
Shape of y: (16, 256)
Shape of llr: (16, 1024)
First 8 transmitted bits: [0. 0. 1. 0. 1. 1. 0. 1.]
First 2 transmitted symbols: [ 0.95+0.32j -0.32-0.95j]
First 2 received symbols: [ 1.03+0.32j -0.29-0.97j]
First 8 demapped llrs: [-717.86 -161.2 198.93 -158.8 146.28 665.31 -173.72 172.65]
Advanced Task: Compare the LLR distribution for “app” demapping with “maxlog” demapping. The Bit-Interleaved Coded Modulation example notebook can be helpful for this task.
Answer:
demapper = sn.mapping.Demapper("maxlog", constellation=constellation)

no = sn.utils.ebnodb2no(ebno_db=20.0,
                        num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                        coderate=1.0) # Coderate set to 1 as we do uncoded transmission here

BATCH_SIZE = 16 # How many examples are processed by Sionna in parallel

bits = binary_source([BATCH_SIZE,
                      1024]) # Blocklength
print("Shape of bits: ", bits.shape)

x = mapper(bits)
print("Shape of x: ", x.shape)

y = awgn_channel([x, no])
print("Shape of y: ", y.shape)

llr = demapper([y, no])
print("Shape of llr: ", llr.shape)

num_samples = 8 # how many samples shall be printed
num_symbols = int(num_samples/NUM_BITS_PER_SYMBOL)

print(f"First {num_samples} transmitted bits: {bits[0,:num_samples]}")
print(f"First {num_symbols} transmitted symbols: {np.round(x[0,:num_symbols], 2)}")
print(f"First {num_symbols} received symbols: {np.round(y[0,:num_symbols], 2)}")
print(f"First {num_samples} demapped llrs: {np.round(llr[0,:num_samples], 2)}")

# plt.subplot(1,2,1)
plt.figure(figsize=(4,4))
plt.axes().set_aspect(1)
plt.grid(True)
plt.title('Channel output')
plt.xlabel('Real Part')
plt.ylabel('Imaginary Part')
plt.scatter(tf.math.real(y), tf.math.imag(y))
plt.tight_layout()
Shape of bits: (16, 1024)
Shape of x: (16, 256)
Shape of y: (16, 256)
Shape of llr: (16, 1024)
First 8 transmitted bits: [1. 0. 0. 0. 1. 0. 0. 0.]
First 2 transmitted symbols: [-0.32+0.32j -0.32+0.32j]
First 2 received symbols: [-0.34+0.29j -0.32+0.31j]
First 8 demapped llrs: [ 174.13 -147.75 -145.87 -172.25 160.32 -155.21 -159.68 -164.79]
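The printed values only show a few LLRs. To actually compare the two demapping methods, one could additionally plot the LLR histograms side by side; a minimal sketch (not part of the original answer):

demapper_app = sn.mapping.Demapper("app", constellation=constellation)
llr_app = demapper_app([y, no])   # same received samples, APP demapping
llr_maxlog = llr                  # maxlog LLRs computed above

plt.figure(figsize=(8,4))
plt.hist(llr_app.numpy().flatten(), bins=100, alpha=0.5, label="app")
plt.hist(llr_maxlog.numpy().flatten(), bins=100, alpha=0.5, label="maxlog")
plt.title("LLR distribution")
plt.xlabel("LLR value")
plt.ylabel("Count")
plt.legend()
plt.tight_layout()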
app vs maxlog (demapping method)
A Posteriori Probability (APP) demapper
With the “app” demapping method, the LLR for the \(i\)-th bit is computed according to
\[ LLR(i) = \ln\left(\frac{\Pr\left(b_i=1\lvert y,\mathbf{p}\right)}{\Pr\left(b_i=0\lvert y,\mathbf{p}\right)}\right) = \ln\left(\frac{ \sum_{c\in\mathcal{C}_{i,1}} \Pr\left(c\lvert\mathbf{p}\right) \exp\left(-\frac{1}{N_o}\left|y-c\right|^2\right) }{ \sum_{c\in\mathcal{C}_{i,0}} \Pr\left(c\lvert\mathbf{p}\right) \exp\left(-\frac{1}{N_o}\left|y-c\right|^2\right) }\right) \]
where \(\mathcal{C}_{i,1}\) and \(\mathcal{C}_{i,0}\) are the sets of constellation points for which the \(i\)-th bit is equal to 1 and 0, respectively. \(\mathbf{p} = \left[p_0,\dots,p_{K-1}\right]\) is the vector of LLRs that serves as prior knowledge on the \(K\) bits that are mapped to a constellation point and is set to \(\mathbf{0}\) if no prior knowledge is assumed to be available, and \(\Pr\left(c\lvert\mathbf{p}\right)\) is the prior probability on the constellation symbol \(c\):
\[ \Pr\left(c\lvert\mathbf{p}\right) = \prod_{k=0}^{K-1} \text{sigmoid}\left(p_k \ell(c)_k\right) \]
where \(\ell(c)_k\) is the \(k^{\text{th}}\) bit label of \(c\), where 0 is replaced by -1. The definition of the LLR has been chosen such that it is equivalent to that of logits. This is different from many textbooks in communications, where the LLR is defined as
\[ LLR(i) = \ln\left(\frac{\Pr\left(b_i=0\lvert y\right)}{\Pr\left(b_i=1\lvert y\right)}\right). \]
Maximum Logarithmic Approximation (maxlog) demapper
With the "maxlog" demapping method, the LLR for the \(i\)-th bit is approximated as
\[\begin{align} LLR(i) &\approx\ln\left(\frac{ \max_{c\in\mathcal{C}_{i,1}} \Pr\left(c\lvert\mathbf{p}\right) \exp\left(-\frac{1}{N_o}\left|y-c\right|^2\right) }{ \max_{c\in\mathcal{C}_{i,0}} \Pr\left(c\lvert\mathbf{p}\right) \exp\left(-\frac{1}{N_o}\left|y-c\right|^2\right) }\right)\\ &= \max_{c\in\mathcal{C}_{i,1}} \left(\ln\left(\Pr\left(c\lvert\mathbf{p}\right)\right)-\frac{|y-c|^2}{N_o}\right) - \max_{c\in\mathcal{C}_{i,0}}\left( \ln\left(\Pr\left(c\lvert\mathbf{p}\right)\right) - \frac{|y-c|^2}{N_o}\right) . \end{align}\]
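To connect these formulas with the Demapper output, the small sketch below evaluates both expressions by hand for a single received sample without prior knowledge (so the \(\Pr\left(c\lvert\mathbf{p}\right)\) terms cancel). It assumes that the constellation points can be read via constellation.points and that the point at index i carries the bit label given by the binary expansion of i; treat it as a numerical illustration only.

# Sketch: computing "app" and "maxlog" LLRs by hand for one received sample (no priors)
points = np.array(constellation.points)   # constellation points (assumed accessible like this)
y0 = complex(y.numpy()[0, 0])              # one received sample
n0 = float(no)
m = NUM_BITS_PER_SYMBOL

# bit labels of the points (index i is assumed to carry the binary expansion of i)
labels = np.array([[(i >> (m-1-b)) & 1 for b in range(m)] for i in range(2**m)])

metrics = -np.abs(y0 - points)**2 / n0     # -|y-c|^2 / N_o for every point c
for i in range(m):
    set1 = metrics[labels[:, i] == 1]
    set0 = metrics[labels[:, i] == 0]
    llr_app_i = np.logaddexp.reduce(set1) - np.logaddexp.reduce(set0)   # log-sum-exp
    llr_maxlog_i = np.max(set1) - np.max(set0)                          # max-log
    print(f"bit {i}: app LLR = {llr_app_i:.2f}, maxlog LLR = {llr_maxlog_i:.2f}")
print(f"Demapper ('maxlog') output for the first symbol: {np.round(llr[0,:m], 2)}")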
Communication Systems as Keras Models
It is typically more convenient to wrap a Sionna-based communication system into a Keras model.
These models can be simply built by using the Keras functional API to stack layers.
The following cell implements the previous system as a Keras model.
The key methods that need to be defined are __init__(), which instantiates the required components, and __call__(), which performs the forward pass through the end-to-end system.
class UncodedSystemAWGN(Model): # Inherits from Keras Model
    def __init__(self, num_bits_per_symbol, block_length):
        """
        A keras model of an uncoded transmission over the AWGN channel.

        Parameters
        ----------
        num_bits_per_symbol: int
            The number of bits per constellation symbol, e.g., 4 for QAM16.

        block_length: int
            The number of bits per transmitted message block (will be the codeword length later).

        Input
        -----
        batch_size: int
            The batch_size of the Monte-Carlo simulation.

        ebno_db: float
            The `Eb/No` value (=rate-adjusted SNR) in dB.

        Output
        ------
        (bits, llr):
            Tuple:

        bits: tf.float32
            A tensor of shape `[batch_size, block_length]` of 0s and 1s
            containing the transmitted information bits.

        llr: tf.float32
            A tensor of shape `[batch_size, block_length]` containing the
            received log-likelihood-ratio (LLR) values.
        """

        super().__init__() # Must call the Keras model initializer

        self.num_bits_per_symbol = num_bits_per_symbol
        self.block_length = block_length
        self.constellation = sn.mapping.Constellation("qam", self.num_bits_per_symbol)
        self.mapper = sn.mapping.Mapper(constellation=self.constellation)
        self.demapper = sn.mapping.Demapper("app", constellation=self.constellation)
        self.binary_source = sn.utils.BinarySource()
        self.awgn_channel = sn.channel.AWGN()

    # @tf.function # Enable graph execution to speed things up
    def __call__(self, batch_size, ebno_db):

        # no channel coding used; we set coderate=1.0
        no = sn.utils.ebnodb2no(ebno_db,
                                num_bits_per_symbol=self.num_bits_per_symbol,
                                coderate=1.0)

        bits = self.binary_source([batch_size, self.block_length]) # Blocklength set to 1024 bits
        x = self.mapper(bits)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y, no])
        return bits, llr
We first need to instantiate the model.
model_uncoded_awgn = UncodedSystemAWGN(num_bits_per_symbol=NUM_BITS_PER_SYMBOL, block_length=1024)
Sionna provides a utility to easily compute and plot the bit error rate (BER).
EBN0_DB_MIN = -3.0 # Minimum value of Eb/N0 [dB] for simulations
EBN0_DB_MAX = 5.0 # Maximum value of Eb/N0 [dB] for simulations
BATCH_SIZE = 2000 # How many examples are processed by Sionna in parallel

ber_plots = sn.utils.PlotBER("AWGN")
ber_plots.simulate(model_uncoded_awgn,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=100, # simulate until 100 block errors occurred
                   legend="Uncoded",
                   soft_estimates=True,
                   max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                   show_fig=True);
EbNo [dB] | BER | BLER | bit errors | num bits | block errors | num blocks | runtime [s] | status
---------------------------------------------------------------------------------------------------------------------------------------
-3.0 | 2.1092e-01 | 1.0000e+00 | 431959 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-2.579 | 2.0002e-01 | 1.0000e+00 | 409638 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-2.158 | 1.9044e-01 | 1.0000e+00 | 390028 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-1.737 | 1.8067e-01 | 1.0000e+00 | 370014 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-1.316 | 1.7111e-01 | 1.0000e+00 | 350425 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-0.895 | 1.6151e-01 | 1.0000e+00 | 330766 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-0.474 | 1.5165e-01 | 1.0000e+00 | 310572 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
-0.053 | 1.4232e-01 | 1.0000e+00 | 291464 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
0.368 | 1.3300e-01 | 1.0000e+00 | 272377 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
0.789 | 1.2365e-01 | 1.0000e+00 | 253240 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
1.211 | 1.1391e-01 | 1.0000e+00 | 233283 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
1.632 | 1.0591e-01 | 1.0000e+00 | 216901 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
2.053 | 9.6549e-02 | 1.0000e+00 | 197733 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
2.474 | 8.7900e-02 | 1.0000e+00 | 180020 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
2.895 | 7.9493e-02 | 1.0000e+00 | 162801 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
3.316 | 7.1242e-02 | 1.0000e+00 | 145903 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
3.737 | 6.3266e-02 | 1.0000e+00 | 129569 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
4.158 | 5.5907e-02 | 1.0000e+00 | 114498 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
4.579 | 4.8395e-02 | 1.0000e+00 | 99113 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
5.0 | 4.1891e-02 | 1.0000e+00 | 85792 | 2048000 | 2000 | 2000 | 0.0 |reached target block errors
The sn.utils.PlotBER object stores the results and allows adding additional simulations to the previous curves.
Remark: In Sionna, a block error is defined to occur if, for two tensors, at least one position in the last dimension differs (i.e., at least one bit is wrongly received per codeword). The bit error rate is the total number of erroneous positions divided by the total number of transmitted bits.
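As a small sketch of these definitions (using plain TensorFlow rather than a Sionna utility), the metrics could be computed from hard decisions on the bits and LLRs of the uncoded example above:

# Sketch: computing BER and BLER by hand from bits and LLRs (hard decisions)
bits_hat = tf.cast(llr > 0, tf.float32)                 # hard decision on the LLRs
bit_errors = tf.not_equal(bits, bits_hat)
ber = tf.reduce_mean(tf.cast(bit_errors, tf.float32))   # errors / transmitted bits
bler = tf.reduce_mean(                                   # block = last dimension
    tf.cast(tf.reduce_any(bit_errors, axis=-1), tf.float32))
print(f"BER: {ber.numpy():.4f}, BLER: {bler.numpy():.4f}")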
Forward Error Correction (FEC)
We now add channel coding to our transceiver to make it more robust against transmission errors. For this, we will use 5G compliant low-density parity-check (LDPC) codes and Polar codes. You can find more detailed information in the notebooks Bit-Interleaved Coded Modulation (BICM) and 5G Channel Coding and Rate-Matching: Polar vs. LDPC Codes.
k = 12
n = 20

encoder = sn.fec.ldpc.LDPC5GEncoder(k, n)
decoder = sn.fec.ldpc.LDPC5GDecoder(encoder, hard_out=True)
Let us encode some random input bits.
BATCH_SIZE = 1 # one codeword in parallel
u = binary_source([BATCH_SIZE, k])
print("Input bits are: \n", u.numpy())

c = encoder(u)
print("Encoded bits are: \n", c.numpy())
Input bits are:
[[1. 1. 0. 1. 0. 0. 1. 0. 1. 0. 0. 0.]]
Encoded bits are:
[[0. 0. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 0. 1. 0. 1. 1. 0. 0.]]
One of the fundamental paradigms of Sionna is batch-processing. Thus, the example above could be executed for arbitrary batch sizes to simulate batch_size codewords in parallel.
However, Sionna can do more: it supports N-dimensional input tensors and thereby allows the processing of multiple samples for multiple users and several antennas in a single command. Let's say we want to encode batch_size codewords of length n for each of the num_users connected to each of the num_basestations. This means that in total we transmit batch_size * n * num_users * num_basestations bits.
BATCH_SIZE = 10 # samples per scenario
num_basestations = 4
num_users = 5 # users per basestation
n = 1000 # codeword length per transmitted codeword
coderate = 0.5 # coderate

k = int(coderate * n) # number of info bits per codeword

# instantiate a new encoder for codewords of length n
encoder = sn.fec.ldpc.LDPC5GEncoder(k, n)

# the decoder must be linked to the encoder (to know the exact code parameters used for encoding)
decoder = sn.fec.ldpc.LDPC5GDecoder(encoder,
                                    hard_out=True, # binary output or provide soft-estimates
                                    return_infobits=True, # or also return (decoded) parity bits
                                    num_iter=20, # number of decoding iterations
                                    cn_type="boxplus-phi") # also try "minsum" decoding

# draw random bits to encode
u = binary_source([BATCH_SIZE, num_basestations, num_users, k])
print("Shape of u: ", u.shape)

# We can immediately encode u for all users, basestations, and samples
# This all happens with a single line of code
c = encoder(u)
print("Shape of c: ", c.shape)

print("Total number of processed bits: ", np.prod(c.shape))
Shape of u: (10, 4, 5, 500)
Shape of c: (10, 4, 5, 1000)
Total number of processed bits: 200000
This works for arbitrary dimensions and allows a simple extension of the designed system to multi-user or multi-antenna scenarios.
Let us now replace the LDPC code by a Polar code. The API remains similar.
k = 64
n = 128

encoder = sn.fec.polar.Polar5GEncoder(k, n)
decoder = sn.fec.polar.Polar5GDecoder(encoder,
                                      dec_type="SCL") # you can also use "SC"
Advanced Remark: The 5G Polar encoder/decoder class directly applies rate-matching and the additional CRC concatenation. This is all done internally and transparent to the user.
In case you want to access low-level features of the Polar codes, please use sionna.fec.polar.PolarEncoder and the desired decoder (sionna.fec.polar.PolarSCDecoder, sionna.fec.polar.PolarSCLDecoder, or sionna.fec.polar.PolarBPDecoder).
Further details can be found in the tutorial notebook on 5G Channel Coding and Rate-Matching: Polar vs. LDPC Codes.
class CodedSystemAWGN(Model): # Inherits from Keras Model
    def __init__(self, num_bits_per_symbol, n, coderate):
        super().__init__() # Must call the Keras model initializer

        self.num_bits_per_symbol = num_bits_per_symbol
        self.n = n
        self.k = int(n*coderate)
        self.coderate = coderate
        self.constellation = sn.mapping.Constellation("qam", self.num_bits_per_symbol)

        self.mapper = sn.mapping.Mapper(constellation=self.constellation)
        self.demapper = sn.mapping.Demapper("app", constellation=self.constellation)

        self.binary_source = sn.utils.BinarySource()
        self.awgn_channel = sn.channel.AWGN()

        self.encoder = sn.fec.ldpc.LDPC5GEncoder(self.k, self.n)
        self.decoder = sn.fec.ldpc.LDPC5GDecoder(self.encoder, hard_out=True)

    #@tf.function # activate graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = sn.utils.ebnodb2no(ebno_db,
                                num_bits_per_symbol=self.num_bits_per_symbol,
                                coderate=self.coderate)

        bits = self.binary_source([batch_size, self.k])
        codewords = self.encoder(bits)
        x = self.mapper(codewords)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y, no])
        bits_hat = self.decoder(llr)
        return bits, bits_hat
CODERATE = 0.5
BATCH_SIZE = 2000

model_coded_awgn = CodedSystemAWGN(num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                                   n=2048,
                                   coderate=CODERATE)

ber_plots.simulate(model_coded_awgn,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 15),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=500,
                   legend="Coded",
                   soft_estimates=False,
                   max_mc_iter=15,
                   show_fig=True,
                   forward_keyboard_interrupt=False);
EbNo [dB] | BER | BLER | bit errors | num bits | block errors | num blocks | runtime [s] | status
---------------------------------------------------------------------------------------------------------------------------------------
-3.0 | 3.2092e-01 | 1.0000e+00 | 657241 | 2048000 | 2000 | 2000 | 3.1 |reached target block errors
-2.429 | 3.0930e-01 | 1.0000e+00 | 633438 | 2048000 | 2000 | 2000 | 1.9 |reached target block errors
-1.857 | 2.9661e-01 | 1.0000e+00 | 607461 | 2048000 | 2000 | 2000 | 2.0 |reached target block errors
-1.286 | 2.8256e-01 | 1.0000e+00 | 578692 | 2048000 | 2000 | 2000 | 1.9 |reached target block errors
-0.714 | 2.6800e-01 | 1.0000e+00 | 548857 | 2048000 | 2000 | 2000 | 1.9 |reached target block errors
-0.143 | 2.5296e-01 | 1.0000e+00 | 518058 | 2048000 | 2000 | 2000 | 1.9 |reached target block errors
0.429 | 2.3576e-01 | 1.0000e+00 | 482837 | 2048000 | 2000 | 2000 | 2.0 |reached target block errors
1.0 | 2.1625e-01 | 1.0000e+00 | 442873 | 2048000 | 2000 | 2000 | 1.9 |reached target block errors
1.571 | 1.9429e-01 | 1.0000e+00 | 397912 | 2048000 | 2000 | 2000 | 1.8 |reached target block errors
2.143 | 1.6284e-01 | 1.0000e+00 | 333497 | 2048000 | 2000 | 2000 | 1.8 |reached target block errors
2.714 | 1.0383e-01 | 9.8100e-01 | 212644 | 2048000 | 1962 | 2000 | 1.8 |reached target block errors
3.286 | 2.2932e-02 | 4.9900e-01 | 46965 | 2048000 | 998 | 2000 | 1.9 |reached target block errors
3.857 | 5.6611e-04 | 2.7050e-02 | 11594 | 20480000 | 541 | 20000 | 15.2 |reached target block errors
4.429 | 3.9063e-07 | 6.6667e-05 | 12 | 30720000 | 2 | 30000 | 18.8 |reached max iter
5.0 | 0.0000e+00 | 0.0000e+00 | 0 | 30720000 | 0 | 30000 | 29.1 |reached max iter
Simulation stopped as no error occurred @ EbNo = 5.0 dB.
As can be seen, the PlotBER class uses multiple stopping conditions and stops the simulation after no error occurred at a specific SNR point.
Task: Replace the coding scheme by a Polar encoder/decoder or a convolutional code with Viterbi decoding.
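A minimal sketch of the required change (only the two FEC layers in CodedSystemAWGN.__init__ are swapped; note that the 5G Polar code supports shorter codeword lengths than the LDPC code, so n may need to be reduced):

# Sketch: replacing the LDPC code by a 5G Polar code inside CodedSystemAWGN.__init__
self.encoder = sn.fec.polar.Polar5GEncoder(self.k, self.n)
self.decoder = sn.fec.polar.Polar5GDecoder(self.encoder, dec_type="SCL")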
Eager vs. Graph Mode
So far, we have executed the example in eager mode. This allows running TensorFlow ops as if they were written in NumPy and simplifies development and debugging.
However, to unleash Sionna’s full performance, we need to activate graph mode which can be enabled with the function decorator @tf.function().
We refer to TensorFlow Functions for further details.
@tf.function() # enables graph-mode of the following function
def run_graph(batch_size, ebno_db):
    # all code inside this function will be executed in graph mode, also calls of other functions
    print(f"Tracing run_graph for values batch_size={batch_size} and ebno_db={ebno_db}.") # print whenever this function is traced
    return model_coded_awgn(batch_size, ebno_db)

batch_size = 10 # try also different batch sizes
ebno_db = 1.5

# run twice - how does the output change?
run_graph(batch_size, ebno_db)
Tracing run_graph for values batch_size=10 and ebno_db=1.5.
(<tf.Tensor: shape=(10, 1024), dtype=float32, numpy=
array([[0., 0., 0., ..., 1., 0., 1.],
[0., 1., 1., ..., 0., 0., 0.],
[1., 1., 0., ..., 0., 0., 1.],
...,
[0., 1., 0., ..., 1., 1., 1.],
[1., 1., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 1., 1., 0.]], dtype=float32)>,
<tf.Tensor: shape=(10, 1024), dtype=float32, numpy=
array([[0., 0., 0., ..., 1., 0., 1.],
[0., 1., 1., ..., 0., 0., 0.],
[1., 1., 0., ..., 0., 0., 1.],
...,
[0., 1., 0., ..., 1., 1., 1.],
[1., 1., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 1., 1., 0.]], dtype=float32)>)
In graph mode, Python code (i.e., non-TensorFlow code) is only executed whenever the function is traced. This happens whenever the input signature changes.
As can be seen above, the print statement was executed, i.e., the graph was traced again.
To avoid this re-tracing for different inputs, we now pass tensors as inputs. You can see that the function is then traced only once for input tensors of the same dtype and shape.
See TensorFlow Rules of Tracing for details.
Task: change the code above such that tensors are used as input and execute the code with different input values. Understand when re-tracing happens.
Remark: if the input to a function is a tensor, the function is only re-traced when the input signature changes, i.e., when the tensor has a different size or datatype, and not when only its value changes. For efficient code execution, we usually want to avoid re-tracing of the code if not required.
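One possible way to check this (illustrative values):

# Sketch: with tensor inputs, changing only the value does not trigger re-tracing
run_graph(tf.constant(200, tf.int32), tf.constant(3.0, tf.float32)) # traced for this signature
run_graph(tf.constant(400, tf.int32), tf.constant(1.0, tf.float32)) # re-uses the same graph
run_graph(400, 1.0) # Python scalars: a new input signature, so the function is traced again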
# You can print the cached signatures with
print(run_graph.pretty_printed_concrete_signatures())
run_graph(batch_size=10, ebno_db=1.5)
Returns:
(<1>, <2>)
<1>: float32 Tensor, shape=(10, 1024)
<2>: float32 Tensor, shape=(10, 1024)
We now compare the throughput of the different modes.
repetitions = 4 # average over multiple runs
batch_size = BATCH_SIZE # try also different batch sizes
ebno_db = 1.5

# --- eager mode ---
t_start = time.perf_counter()
for _ in range(repetitions):
    bits, bits_hat = model_coded_awgn(tf.constant(batch_size, tf.int32),
                                      tf.constant(ebno_db, tf.float32))
t_stop = time.perf_counter()
# throughput in Mbit/s
throughput_eager = np.size(bits.numpy())*repetitions / (t_stop - t_start) / 1e6

print(f"Throughput in Eager mode: {throughput_eager :.3f} Mbit/s")

# --- graph mode ---
# run once to trace graph (ignored for throughput)
run_graph(tf.constant(batch_size, tf.int32),
          tf.constant(ebno_db, tf.float32))

t_start = time.perf_counter()
for _ in range(repetitions):
    bits, bits_hat = run_graph(tf.constant(batch_size, tf.int32),
                               tf.constant(ebno_db, tf.float32))
t_stop = time.perf_counter()
# throughput in Mbit/s
throughput_graph = np.size(bits.numpy())*repetitions / (t_stop - t_start) / 1e6

print(f"Throughput in graph mode: {throughput_graph :.3f} Mbit/s")
Throughput in Eager mode: 3.130 Mbit/s
Tracing run_graph for values batch_size=Tensor("batch_size:0", shape=(), dtype=int32) and ebno_db=Tensor("ebno_db:0", shape=(), dtype=float32).
Throughput in graph mode: 14.483 Mbit/s
Let’s run the same simulation as above in graph mode.
ber_plots.simulate(run_graph,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 12),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=500,
                   legend="Coded (Graph mode)",
                   soft_estimates=True,
                   max_mc_iter=100,
                   show_fig=True,
                   forward_keyboard_interrupt=False);
EbNo [dB] | BER | BLER | bit errors | num bits | block errors | num blocks | runtime [s] | status
---------------------------------------------------------------------------------------------------------------------------------------
-3.0 | 2.7972e-01 | 1.0000e+00 | 572860 | 2048000 | 2000 | 2000 | 0.2 |reached target block errors
-2.273 | 2.5935e-01 | 1.0000e+00 | 531157 | 2048000 | 2000 | 2000 | 0.2 |reached target block errors
-1.545 | 2.3590e-01 | 1.0000e+00 | 483122 | 2048000 | 2000 | 2000 | 0.1 |reached target block errors
-0.818 | 2.0800e-01 | 1.0000e+00 | 425979 | 2048000 | 2000 | 2000 | 0.2 |reached target block errors
-0.091 | 1.6746e-01 | 1.0000e+00 | 342949 | 2048000 | 2000 | 2000 | 0.2 |reached target block errors
0.636 | 7.5977e-02 | 9.1400e-01 | 155600 | 2048000 | 1828 | 2000 | 0.2 |reached target block errors
1.364 | 1.6699e-03 | 7.1250e-02 | 13680 | 8192000 | 570 | 8000 | 0.6 |reached target block errors
2.091 | 6.5918e-07 | 4.0000e-05 | 135 | 204800000 | 8 | 200000 | 15.0 |reached max iter
2.818 | 0.0000e+00 | 0.0000e+00 | 0 | 204800000 | 0 | 200000 | 14.9 |reached max iter
Simulation stopped as no error occurred @ EbNo = 2.8 dB.
Task: TensorFlow allows compiling graphs with XLA. Try to further accelerate the code with XLA (@tf.function(jit_compile=True)).
Remark: XLA is still an experimental feature and not all TensorFlow (and, thus, Sionna) functions support XLA.
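A possible starting point for this task (a sketch; whether it runs depends on the XLA support of the involved layers, as noted above):

# Sketch: XLA-compiled variant of run_graph (experimental)
@tf.function(jit_compile=True)
def run_graph_xla(batch_size, ebno_db):
    return model_coded_awgn(batch_size, ebno_db)

# warm-up/tracing run, then measure the throughput as before
run_graph_xla(tf.constant(BATCH_SIZE, tf.int32), tf.constant(1.5, tf.float32))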
Task 2: Check the GPU load with !nvidia-smi. Find the best tradeoff between batch-size and throughput for your specific GPU architecture.
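One possible way to explore this tradeoff (illustrative batch sizes; for reliable numbers, average over several runs as above):

# Sketch: sweep the batch size and measure the graph-mode throughput for each value
for bs in [100, 500, 1000, 2000, 4000]:
    t_start = time.perf_counter()
    bits, bits_hat = run_graph(tf.constant(bs, tf.int32), tf.constant(1.5, tf.float32))
    t_stop = time.perf_counter()
    print(f"batch_size={bs}: {np.size(bits.numpy()) / (t_stop - t_start) / 1e6:.3f} Mbit/s")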
Exercise
Simulate the coded bit error rate (BER) for a Polar-coded transmission with 64-QAM modulation. Assume a codeword length of n = 200 and a coderate of 0.5.
Hint: For Polar codes, successive cancellation list decoding (SCL) gives the best BER performance. However, successive cancellation (SC) decoding (without a list) is less complex.
n = 200
coderate = 0.5

# *You can implement your code here*
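A possible skeleton for the exercise, re-using the CodedSystemAWGN structure with the 5G Polar code and 64-QAM (a sketch under assumptions, not a reference solution). Note that the Mapper requires the codeword length to be a multiple of the number of bits per symbol; since 200 is not a multiple of 6, the codeword length is rounded up to 204 here (a judgment call; zero-padding would be an alternative).

# Sketch: Polar-coded 64-QAM over AWGN (codeword length rounded up to a multiple of 6)
NUM_BITS_PER_SYMBOL_64QAM = 6 # 64-QAM
n_polar = 204                  # nearest multiple of 6 to the requested n = 200

class PolarCodedSystemAWGN(Model):
    def __init__(self, num_bits_per_symbol, n, coderate):
        super().__init__()
        self.num_bits_per_symbol = num_bits_per_symbol
        self.coderate = coderate
        self.k = int(n*coderate)
        self.constellation = sn.mapping.Constellation("qam", num_bits_per_symbol)
        self.mapper = sn.mapping.Mapper(constellation=self.constellation)
        self.demapper = sn.mapping.Demapper("app", constellation=self.constellation)
        self.binary_source = sn.utils.BinarySource()
        self.awgn_channel = sn.channel.AWGN()
        self.encoder = sn.fec.polar.Polar5GEncoder(self.k, n)
        self.decoder = sn.fec.polar.Polar5GDecoder(self.encoder, dec_type="SCL")

    # for speed, this could additionally be decorated with @tf.function
    def __call__(self, batch_size, ebno_db):
        no = sn.utils.ebnodb2no(ebno_db,
                                num_bits_per_symbol=self.num_bits_per_symbol,
                                coderate=self.coderate)
        bits = self.binary_source([batch_size, self.k])
        codewords = self.encoder(bits)
        x = self.mapper(codewords)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y, no])
        bits_hat = self.decoder(llr)
        return bits, bits_hat

model_polar = PolarCodedSystemAWGN(NUM_BITS_PER_SYMBOL_64QAM, n_polar, coderate)
ber_plots.simulate(model_polar,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 15),
                   batch_size=2000,
                   num_target_block_errors=500,
                   legend="Polar-coded 64-QAM",
                   soft_estimates=False,
                   max_mc_iter=100,
                   show_fig=True);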