\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry} % Adjust margins
\usepackage{caption}
\usepackage{subcaption}
\usepackage{parskip} % don't indent after paragraphs, figures
\usepackage{xcolor}
%\usepackage{csquotes} % Recommended for biblatex
\usepackage{tikz}
\usepackage{float}
\usepackage{amsmath}
\PassOptionsToPackage{hyphens}{url}
\usepackage{hyperref} % allows urls to follow line breaks of text
\usepackage[style=ieee, backend=biber, maxnames=1, minnames=1]{biblatex}
\addbibresource{entropy.bib}

\title{Entropy as a measure of information}
\author{Erik Neller}
\date{\today}

\begin{document}
\maketitle

\section{What is entropy?}

Across disciplines, entropy is a measure of uncertainty or randomness. Originating in classical thermodynamics, the concept has since been applied in other sciences such as chemistry and information theory.
%As the informal concept of entropy gains popularity, its specific meaning can feel far-fetched and ambiguous.

The name \textit{entropy} was coined by the German physicist \textit{Rudolf Clausius} in 1865 while formulating the second law of thermodynamics, one of the three (or, counting the zeroth law, four) laws of thermodynamics, which are based on universal observations regarding heat and the conversion of energy. Specifically, the second law states that not all thermal energy can be converted into work in a cyclic process. In other words, the entropy of an isolated system cannot decrease, because such systems always tend toward a state of thermodynamic equilibrium, in which entropy is highest for a given internal energy. Another consequence of this observation is the irreversibility of natural processes, also referred to as the \textit{arrow of time}. Even though the first law (conservation of energy) allows for a cup falling off a table and breaking as well as the reverse process of the cup reassembling itself and jumping back onto the table, the second law permits only the former and forbids the latter, requiring the state with higher entropy to occur later in time.

In the decades that followed, \textit{Ludwig Boltzmann} and \textit{J. Willard Gibbs} developed the statistical definition of entropy whose form is still in use in information theory today:
\begin{equation}
S = -k_B \sum_i p_i \ln(p_i)
\end{equation}
It gives statistical meaning to the macroscopic phenomenon of classical thermodynamics by defining the entropy $S$ of a macrostate in terms of the probabilities $p_i$ of all its constituent microstates. $k_B$ denotes the Boltzmann constant, whose numerical value Boltzmann himself never determined.

\section{Shannon's axioms}

\textit{Claude Shannon} adapted the concept of entropy to information theory. In an era of advancing communication technologies, the question he addressed was of increasing importance: how can messages be encoded and transmitted efficiently? As its unit, Shannon's measure uses the \textit{bit}, quantifying the efficiency of codes and media for transmission and storage. According to his axioms, a measure of information $I(p)$, where $p$ is the probability of the observed event, has to comply with the following criteria:
\begin{enumerate}
  \item $I(1) = 0$: events that always occur do not communicate information.
  \item $I(p)$ is monotonically decreasing in $p$: an increase in the probability of an event decreases the information gained from observing it, and vice versa.
  \item $I(p_1 \cdot p_2) = I(p_1) + I(p_2)$: the information learned from independent events is the sum of the information learned from each event.
  \item $I(p)$ is a twice continuously differentiable function of $p$.
\end{enumerate}
In information theory, entropy can then be understood as the expected information of a message:
\begin{equation}
H = E(I) = - \sum_i p_i \log_2(p_i)
\end{equation}
Comparing with this expectation, the information of a single outcome is $I(p_i) = \log_2(1/p_i) = -\log_2(p_i)$, implying that an unexpected message (low probability) carries more information than one with a higher probability.
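A brief check, added here purely for illustration, confirms that $I(p) = -\log_2(p)$ satisfies the axioms above, and that the entropy of a fair coin is exactly one bit while a heavily biased coin carries less uncertainty:
\begin{align*}
I(1) &= -\log_2(1) = 0, \\
I(p_1 \cdot p_2) &= -\log_2(p_1 p_2) = -\log_2(p_1) - \log_2(p_2) = I(p_1) + I(p_2), \\
H\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right) &= -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 \text{ bit}, \\
H(0.9, 0.1) &= -0.9 \log_2(0.9) - 0.1 \log_2(0.1) \approx 0.47 \text{ bits}.
\end{align*}
Monotonicity (axiom 2) follows because $\log_2$ is an increasing function, and the differentiability required by axiom 4 holds for all $p > 0$.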
Intuitively, we can imagine David A. Johnston, a volcanologist, reporting day after day that there is no activity on Mount St.\ Helens. After a while, we grow to expect this message, because it is statistically very likely that tomorrow's message will be the same. When one day we receive the message ``Vancouver! Vancouver! This is it!'', it carries a lot of information not only semantically (because it announces the eruption of a volcano) but also statistically, because it was very unlikely given the transmission history.

The base 2 is chosen for the logarithm because our computers rely on a number system of the same base; in principle, arbitrary bases can be used, as logarithms to different bases are proportional according to $\log_a b = \frac{\log_c b}{\log_c a}$. Furthermore, the $\log_2$ can be understood intuitively for an event source with $2^n$ possible outcomes: using standard binary coding, a message has to contain $\log_2(2^n) = n$ bits in order to encode all possible outcomes. For a source with $a$ outcomes where $a$ is not a power of two, such as $a = 10$, we can encode blocks of $k$ outcomes at a time: a block has $a^k$ possibilities and therefore requires $\lceil \log_2(a^k) \rceil = \lceil k \log_2(a) \rceil$ bits, so the cost per outcome approaches $\log_2(a)$ as $k$ grows. For example, three decimal digits ($10^3 = 1000$ possibilities) fit into $10$ bits ($2^{10} = 1024$), i.e.\ about $3.33$ bits per digit, close to $\log_2(10) \approx 3.32$.
%- conditional entropy
%- redundancy
%- source entropy

\section{Applications}

\subsection{Decision Trees}

A decision tree is a supervised learning approach commonly used in machine learning. The goal is to create an algorithm, i.e.\ a series of questions to pose to new data (input variables), in order to predict the target variable, a class label. Graphically, each question can be visualized as a node in a tree, splitting the dataset into two or more groups. This process is applied to the source set and then to its resulting subsets in a process called \textit{recursive partitioning}. Once a leaf is reached, the class of the input has been determined.

In order to build the shallowest possible trees, we want to split on input variables that minimize uncertainty. While other measures for choosing the best split exist, such as the \textit{Gini impurity}, entropy is a popular criterion in decision trees. Using what we learned about entropy, we want the maximum decrease in the entropy of our target variable, as explained in~\autoref{ex:decisiontree}.

\begin{figure}[H]
  \centering
  \begin{minipage}{.3\textwidth}
    \begin{tabular}{c|c|c}
      & hot & cold \\ \hline
      rain & 4 & 5 \\ \hline
      no rain & 3 & 2 \\
    \end{tabular}
  \end{minipage}
  \begin{minipage}{.6\textwidth}
    When choosing rain as the target variable, the entropy prior to partitioning is $H_{prior} = H(\frac{9}{14},\frac{5}{14})$; after partitioning by temperature (hot/cold), $H_{hot} = H(\frac{4}{7}, \frac{3}{7})$ and $H_{cold} = H(\frac{5}{7}, \frac{2}{7})$ remain. This leaves us with an expected entropy of $E[H] = p_{hot} \cdot H_{hot} + p_{cold} \cdot H_{cold}$. The \textbf{information gain} can then be calculated as the difference between the entropy prior to and after partitioning. Since $H_{prior}$ is constant in this equation, it is sufficient to minimize the post-partitioning expected entropy $E[H]$.
  \end{minipage}
  \caption{Example of information gain in decision trees}
  \label{ex:decisiontree}
\end{figure}
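Evaluating these quantities for the counts in \autoref{ex:decisiontree} (values added here purely as an illustration, rounded to three decimal places, all in bits) gives:
\begin{align*}
H_{prior} &= -\tfrac{9}{14}\log_2\tfrac{9}{14} - \tfrac{5}{14}\log_2\tfrac{5}{14} \approx 0.940, \\
H_{hot} &\approx 0.985, \qquad H_{cold} \approx 0.863, \\
E[H] &= \tfrac{7}{14} \cdot 0.985 + \tfrac{7}{14} \cdot 0.863 \approx 0.924, \\
H_{prior} - E[H] &\approx 0.016.
\end{align*}
The small gain shows that, in this toy dataset, splitting on temperature removes little of the uncertainty about rain; when growing a tree, an attribute with a larger information gain would be chosen first.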
Advantages of decision trees over other machine learning approaches include low computational cost and interpretability, making them a popular choice for many applications. However, drawbacks include overfitting and poor robustness, where minimal alterations to the training data can lead to a change in tree structure.

\subsection{Cross-Entropy}

The cross-entropy $H(p,q) = -\sum_i p_i \log_2(q_i)$ measures the expected information when events distributed according to $p$ are encoded with a code optimized for a distribution $q$. Its difference from the entropy, the Kullback-Leibler divergence $D_{KL}(p \| q) = H(p,q) - H(p)$, vanishes only for $q = p$ and is therefore widely used as a cost function in machine learning, where a model distribution $q$ is fitted to an empirical distribution $p$.

\subsection{Coding}
%Coding of a source of an information and communication channel
% https://www.youtube.com/watch?v=ErfnhcEV1O8
% relation to hamming distance and efficient codes

\subsection{Noisy communication channels}

The noisy channel coding theorem was stated by \textit{Claude Shannon} in 1948, but the first rigorous proof was provided by Amiel Feinstein in 1954. One of the important issues Shannon wanted to tackle with his \textit{Mathematical Theory of Communication} was the lack of means for transmitting discrete data through a noisy channel that were more efficient than the telegraph. Until then, error correction had been limited to very basic techniques. Analogue connections such as the first telephone lines bypassed the issue altogether and relied on the communicating parties' ability to filter human voices from the noise that was inevitably transmitted along with the intended signal. After some development, the telegraph in its final form used Morse code, a series of long and short signals that, together with letter and word gaps, encode text messages. Even though the long-short coding might appear similar to today's binary coding, the means of error correction were lacking: for a long time, they consisted of simply repeating the message multiple times, which is highly inefficient. The destination would then have to determine the most likely intended message by performing a majority vote. One might also propose simply increasing the transmission power, thereby decreasing the error rate of the associated channel. However, the noisy channel coding theorem provides a more elegant solution. It is of foundational importance to information theory, stating that for a noisy channel with capacity $C$ and information transmitted at a rate $R < C$, there exist codes that allow the probability of error at the receiver to be made arbitrarily small. \autoref{fig:noisy-channel} shows the general model of such a communication channel; \autoref{fig:binary-channel} shows the special case of a binary symmetric channel, in which each transmitted bit is flipped with probability $p$.

\begin{figure}[H]
  \centering
  \begin{tikzpicture}
    \def\boxw{2.2cm}
    \node (A) at (0,0) [draw, align=center, text width=\boxw] {Information source};
    \node (B) at (3,0) [draw, align=center, text width=\boxw] {Transmitter};
    \node (C) at (6,0) [draw, align=center, text width=\boxw] {Channel};
    \node (D) at (9,0) [draw, align=center, text width=\boxw] {Receiver};
    \node (E) at (12,0) [draw, align=center, text width=\boxw] {Destination};
    \node (N) at (6,-2) [draw, align=center, text width=\boxw] {Noise source};
    \draw[->] (A) -- (B);
    \draw[->] (B) -- (C);
    \draw[->] (C) -- (D);
    \draw[->] (D) -- (E);
    \draw[->] (N) -- (C);
  \end{tikzpicture}
  \caption{Model of a noisy communication channel}
  \label{fig:noisy-channel}
\end{figure}

\begin{figure}[H]
  \begin{tikzpicture}
    \def\boxw{2.5cm}
    \def\n{4}
    \pgfmathsetmacro{\gap}{(\textwidth - \n*\boxw)/(\n-1)}
    \node (S) at (0,0) [draw, align=center, text width=\boxw] {Transmitter};
    \node (S0) at (\boxw + \gap,1) [draw, circle] {0};
    \node (S1) at (\boxw + \gap,-1) [draw, circle] {1};
    \node (D0) at ({2*(\boxw + \gap)},1) [draw, circle] {0};
    \node (D1) at ({2*(\boxw + \gap)},-1) [draw, circle] {1};
    \node (D) at ({3*(\boxw + \gap)},0) [draw, align=center, text width=\boxw] {Receiver};
    \draw[->] (S) -- (S0);
    \draw[->] (S) -- (S1);
    \draw[->,dashed] (S0) -- (D0) node[midway, above] {$1-p$};
    \draw[->,dashed] (S0) -- (D1) node[pos=0.8, above] {$p$};
    \draw[->,dashed] (S1) -- (D0) node[pos=0.2, above] {$p$};
    \draw[->,dashed] (S1) -- (D1) node[midway, below] {$1-p$};
    \draw[->] (D0) -- (D);
    \draw[->] (D1) -- (D);
  \end{tikzpicture}
  \caption{Model of a binary symmetric channel}
  \label{fig:binary-channel}
\end{figure}

\printbibliography
\end{document}