\section{Effect of Noise on the TCS Curve}

Taking the derivative of discrete data is problematic. Consider a function $f(x)$ and the centred-difference approximation to its derivative:
\begin{align*}
\der{f}{x} &= \frac{f(x + h) - f(x - h)}{2h} + O(h^2)
\end{align*}
The accuracy of this approximation increases as $h \to 0$\footnote{Ignoring any effects due to rounding of floating point numbers.}. If $f_s(x)$ is the result of sampling $f(x)$, with $\Delta f$ the uncertainty in a single measurement, then we can propagate this uncertainty through the finite-difference approximation to $\der{f}{x}$:
\begin{align*}
f_s(x) &= f(x) \pm \Delta f \\
\der{f_s}{x} &\approx \der{f}{x} \\
&= \frac{f(x + h) - f(x - h)}{2h} + O(h^2) \pm \frac{\Delta f}{h}
\end{align*}
The uncertainty in the sampled derivative has a pole at $h = 0$: as the step size shrinks, the noise term $\Delta f / h$ grows without bound. \emph{Note: I now suspect that this is a major reason why Komolov used lock-in amplifiers.}

Smoothing the sampled points $f_s(x)$ (by applying a moving average) reduces the deviation of the points from the smooth curve which best fits the data; we can think of the smoothed points as sampling a \emph{different} function to $f(x)$, but with smaller uncertainties. The cost is that smoothing removes fine structure from the original data. The alternative is to increase $h$, which suppresses the $\Delta f / h$ noise term at the expense of a larger $O(h^2)$ truncation error. As shown in Figures \ref{siI.eps} and \ref{siI_tcs.eps}, smoothing $f_s(x)$ has a far greater effect on the derivative of $f_s$ than on $f_s$ itself.

\emph{TODO: Calculate MSE for both curves}

\emph{TODO: Show curves created with large $h$}

\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth, angle=270]{figures/tcs/plots/siI.eps}
\caption{An unprocessed and smoothed I(E) curve for a Si sample.}
\label{siI.eps}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth, angle=270]{figures/tcs/plots/siI_tcs.eps}
\caption{The effect of smoothing the original I(E) curve on its derivative.}
\label{siI_tcs.eps}
\end{figure}

\section{Sources of Noise}
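The $\Delta f / h$ noise amplification described in the previous section can be checked numerically. The following is a minimal sketch, not part of the original analysis: the test function $\sin x$, the uncertainty value, and the step sizes are all hypothetical choices made for illustration.

```python
import math

SIGMA = 1e-4  # hypothetical measurement uncertainty (Delta f)


def centred_diff(f_plus, f_minus, h):
    """Centred difference: (f(x+h) - f(x-h)) / (2h), truncation error O(h^2)."""
    return (f_plus - f_minus) / (2 * h)


x = 1.0
true_derivative = math.cos(x)  # exact d/dx sin(x)

for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    # Worst-case noise: +SIGMA on one sample, -SIGMA on the other, so the
    # noise contribution to the derivative estimate is exactly SIGMA / h.
    estimate = centred_diff(math.sin(x + h) + SIGMA,
                            math.sin(x - h) - SIGMA, h)
    error = abs(estimate - true_derivative)
    print(f"h = {h:.0e}   |error| = {error:.2e}   sigma/h = {SIGMA / h:.2e}")
```

For large $h$ the $O(h^2)$ truncation error dominates; below an optimum step size the $\Delta f / h$ term takes over and the total error grows as $h$ shrinks, which is the divergence at $h = 0$ noted above.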