X-Git-Url: https://git.ucc.asn.au/?p=matches%2Fhonours.git;a=blobdiff_plain;f=thesis%2Fappendices%2Ftcs_noise.tex;h=9b6b1e0c40bc878b5c300fbcef8554b98dd12adc;hp=50f3c041d863c310c4e6b0ce91caf334b4b3c8e4;hb=ddcdd43bf4077f35beefc59eb8568e13b6c5b3cd;hpb=1c9618e52bd8dad6a84c698967800beee8378f24

diff --git a/thesis/appendices/tcs_noise.tex b/thesis/appendices/tcs_noise.tex
index 50f3c041..9b6b1e0c 100644
--- a/thesis/appendices/tcs_noise.tex
+++ b/thesis/appendices/tcs_noise.tex
@@ -1,12 +1,12 @@
-\chapter*{Appendix - Effect of Noise on the TCS Curve}
+\section{Effect of Noise on the TCS Curve}
 
-Taking the derivative of discrete data is problematic. Using a centred difference finite derivative approximation:
+Taking the derivative of discrete data is problematic. Consider a function $f(x)$. Using the centred finite difference approximation to the derivative:
 \begin{align*}
 	\der{f}{x} &= \frac{f(x + h) - f(x - h)}{2h} + O(h^2)
 \end{align*}
 The accuracy of this approximation increases as $h \to 0$\footnote{Ignoring any effects due to rounding of floating point numbers.}.
 
-However, if $f_s(x)$ is the result of sampling $f(x)$, with $\Delta f$ the uncertainty in a measurement:
+If $f_s(x)$ is the result of sampling $f(x)$, with $\Delta f$ the uncertainty in each measurement, then we can calculate the resulting uncertainty when finite differences are used to approximate $\der{f}{x}$:
 \begin{align*}
 	f_s(x) &= f(x) \pm \Delta f \\
 	\der{f_s}{x} &\approx \der{f}{x} \\
@@ -14,11 +14,18 @@
 \end{align*}
 
 The uncertainty in the sampled derivative has a pole at $h = 0$.
+
 \emph{Note: I now suspect that this is a major reason why Komolov has used lock-in amplifiers.}
 
-The problem may be fixed [dodged?] by increasing $h$ (in which case the resolution of the derivative is decreased dramatically), or application of smoothing averages (which also decrease the resolution, but not as much). \emph{Needs rephrasing}
-Smoothing of the sampled curve $f_s(x)$ (by application of a moving average) will reduce the deviation of points the smooth curve which best fits the data. As shown in Figures \ref{siI.eps} and \ref{siI_tcs.eps}, smoothing of $f_s(x)$ has a far greater effect on the derivative of $f_s$ than on $f_s$ itself.
+Smoothing the sampled points $f_s(x)$ (by applying a moving average) reduces the deviation of the points from the smooth curve which best fits the data; we can think of the smoothed points as sampling a \emph{different} function to $f(x)$, but with smaller uncertainties. Smoothing the original sampled points does, however, remove fine structure.
+
+The alternative is to increase $h$, at the cost of a dramatic decrease in the resolution of the derivative.
+
+As shown in Figures \ref{siI.eps} and \ref{siI_tcs.eps}, smoothing of $f_s(x)$ has a far greater effect on the derivative of $f_s$ than on $f_s$ itself.
+
+\emph{TODO: Calculate MSE for both curves}
+\emph{TODO: Show curves created with large $h$}
 
 \begin{figure}[H]
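
The error propagation behind the pole at $h = 0$, and the gain from a moving average, can be made explicit. The following is a sketch rather than part of the committed file; it assumes the noise in each sample is uncorrelated, and the sample spacing $\delta$ and window half-width $m$ are notation introduced here, not taken from the thesis:
\begin{align*}
	\der{f_s}{x} = \frac{f_s(x + h) - f_s(x - h)}{2h}
	             = \der{f}{x} \pm \frac{\Delta f}{h} + O(h^2)
\end{align*}
so halving $h$ doubles the noise in the derivative. A $(2m+1)$-point moving average shrinks the per-point uncertainty,
\begin{align*}
	\bar{f}_s(x) = \frac{1}{2m + 1} \sum_{k=-m}^{m} f_s(x + k\delta)
	\quad\Rightarrow\quad
	\Delta \bar{f} = \frac{\Delta f}{\sqrt{2m + 1}},
\end{align*}
at the price of blurring any feature narrower than the window.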
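The same behaviour can be checked numerically. This is a minimal, self-contained Python sketch, illustrative only: the test function, noise level, step sizes and window are arbitrary choices, not the thesis's measured data or analysis code.
\begin{verbatim}
# Demonstrates that the centred-difference derivative of noisy samples
# has error ~ Delta_f / h, and that a moving average or a larger h
# reduces it at the cost of resolution.
import numpy as np

rng = np.random.default_rng(0)

dx = 0.01                        # sample spacing
x = np.arange(0.0, 10.0, dx)
delta_f = 0.01                   # measurement uncertainty Delta_f
f_s = np.sin(x) + rng.normal(0.0, delta_f, x.size)   # noisy samples

def centred_diff(y, n):
    """Centred difference with step h = n*dx."""
    return (y[2 * n:] - y[:-2 * n]) / (2 * n * dx)

def moving_average(y, window):
    """Boxcar moving average over `window` points (shortens the array)."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

true_deriv = np.cos(x)           # exact derivative of the noiseless curve

# (label, derivative estimate, index offset of estimate[0] within x)
cases = [
    ("raw samples, h = dx    ", centred_diff(f_s, 1), 1),
    ("raw samples, h = 10 dx ", centred_diff(f_s, 10), 10),
    ("11-point MA, h = dx    ", centred_diff(moving_average(f_s, 11), 1), 6),
]
for label, est, off in cases:
    rms = np.sqrt(np.mean((est - true_deriv[off:off + est.size]) ** 2))
    print(f"{label} RMS error of derivative = {rms:.3f}")
\end{verbatim}
The raw $h = \mathrm{d}x$ case should come out dominated by noise (error of order $\Delta f / h = 1$), while the larger step and the smoothed samples should each cut the error by roughly an order of magnitude, mirroring the behaviour described above and shown in Figures \ref{siI.eps} and \ref{siI_tcs.eps}.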