+\subsection{Required software}
+A number of packages are required to compile the code:
+\texttt{nginx spawn-fcgi libfcgi-dev gcc libssl-dev make libopencv-dev valgrind libldap2-dev mysql-server libmysqlclient-dev php5 php5-gd php5-fpm php5-mysqlnd}
+
+These packages should be installed with the command \texttt{apt-get install}.
+
+\subsection{Required configurations}
+Many components must be configured correctly for the server to work; in particular the web server, nginx, and the logging software, rsyslog. These configurations are installed automatically by a script in the git repository.
+
+The repository contains a folder labelled \texttt{server-configs}. Executing \gitref{server-configs}{install.sh} as root installs all the configuration files required to run the server correctly.
+
+% END Jeremy's section
+
+\subsection{Logging and Debugging}
+
+The function \funct{Log}, located in \gitref{server}{log.c}, is used extensively throughout the server program for debugging, warnings and error reporting. It uses syslog to simultaneously print messages to the \texttt{stderr} stream of the program and log them to a file, providing a wealth of information about the (mal)functioning of the program. As discussed in Section \ref{API}, the logs may also be viewed by a client using the server API.
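As an illustration, a minimal sketch of such a logging helper is given below. The real implementation lives in \gitref{server}{log.c}; the signature, message format and program name used here are assumptions. The \texttt{LOG\_PERROR} option makes \texttt{syslog()} copy each message to \texttt{stderr} as well as the system log, giving the dual output described above.

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <syslog.h>

/* Map a syslog level to a short human-readable name. */
static const char *LevelName(int level)
{
    switch (level) {
        case LOG_ERR:     return "error";
        case LOG_WARNING: return "warning";
        default:          return "debug";
    }
}

/* Format "level: function: message" into out; returns out. */
char *LogFormat(char *out, size_t n, int level, const char *func,
                const char *fmt, ...)
{
    char msg[256];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(msg, sizeof(msg), fmt, ap);
    va_end(ap);
    snprintf(out, n, "%s: %s: %s", LevelName(level), func, msg);
    return out;
}

/* Send the message to the system log and, via LOG_PERROR, to stderr. */
void Log(int level, const char *func, const char *fmt, ...)
{
    char msg[256], line[512];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(msg, sizeof(msg), fmt, ap);
    va_end(ap);
    openlog("server", LOG_PID | LOG_PERROR, LOG_USER);
    syslog(level, "%s", LogFormat(line, sizeof(line), level, func, "%s", msg));
    closelog();
}
```

A call such as \texttt{Log(LOG\_WARNING, "main", "sensor \%d out of range", 3)} then appears on \texttt{stderr} and in the system log simultaneously.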
+
+For lower-level debugging, i.e.\ detecting memory leaks, uninitialised values, bad memory accesses and the like, the program \texttt{valgrind}\cite{valgrind} was used frequently.
+
+
+
+\section{Image Processing}\label{Image Processing}
+
+
+% BEGIN Callum's section
+
+
+The system contains two USB cameras: the Logitech C170\cite{logitechC170} and the Kaiser Baas KBA03030 microscope\cite{kaiserbaasKBA03030}. The Logitech camera is used to record and stream the can being pressurized to explode. The microscope is used to measure the change in width of the can.
+
+\subsection{OpenCV}
+
+For everything related to image acquisition and processing we decided to use the OpenCV library\cite{OpenCV}. OpenCV connects to cameras through a capture structure and stores image data in \type{IplImage} structures or the newer \type{CvMat} structure. Since the C API cannot capture straight into a \type{CvMat}, we need to convert from \type{IplImage} to \type{CvMat}. Two main camera functions are required: streaming images to the user interface, and using the microscope as a dilatometer that returns the rate of expansion of the can.
+
+
+\subsection{Image Streaming}
+The image streaming is implemented in \gitref{server}{image.c} and the header \gitref{server}{image.h}. There are only two functions in \gitref{server}{image.c}, both of which are externally accessible to the rest of the system.
+
+The \funct{Image_Handler} function handles requests from the server. The parameters required for taking the image, such as the camera ID, width and height, are determined by calling \funct{FCGI_ParseRequest} (see \gitref{server}{fastcgi.h} and \gitref{server}{fastcgi.c}) with the parameter string passed to the function.
+
+The function \funct{Camera_GetImage} in \gitref{server}{image.c} captures a frame from the camera with the ID given by \var{num}. As we cannot have two camera structures open at once, we use a mutex to ensure the function cannot execute concurrently. We check whether \var{num} is the same as the previous camera ID; if so, we do not need to close the capture and recreate the connection with the new camera, which takes time. These considerations are currently redundant, as the decision was made to connect only one camera at a time, mainly due to power and bandwidth issues. However, the code was implemented to allow for further development. If more than two cameras are ever connected, the allowable upper bound for \var{num} will need to be increased to $n-1$ (where $n$ is the number of cameras connected to the system).
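The locking and camera-reuse logic can be sketched as follows. The OpenCV capture calls are replaced by stubs here, and all names other than \funct{Camera_GetImage} and \var{num} are assumptions, not the real implementation.

```c
#include <pthread.h>
#include <stdbool.h>

/* Only one caller may use a camera at a time. */
static pthread_mutex_t g_camera_mutex = PTHREAD_MUTEX_INITIALIZER;
static int g_current_cam = -1;   /* ID of the currently open camera */

/* Stub "capture" handles standing in for OpenCV's CvCapture calls. */
static bool OpenCamera(int num)  { g_current_cam = num; return true; }
static void CloseCamera(void)    { g_current_cam = -1; }

/* Grab a (stubbed) frame from camera `num`; returns true on success. */
bool Camera_GetImage(int num)
{
    pthread_mutex_lock(&g_camera_mutex);
    if (g_current_cam != num) {          /* reopening is slow, so reuse */
        if (g_current_cam != -1)
            CloseCamera();               /* only one capture open at once */
        if (!OpenCamera(num)) {
            pthread_mutex_unlock(&g_camera_mutex);
            return false;
        }
    }
    /* ...here the real code would query and convert a frame... */
    pthread_mutex_unlock(&g_camera_mutex);
    return true;
}
```

The mutex serialises access, while the cached camera ID avoids the costly close/reopen cycle when consecutive requests target the same camera.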
+
+After capturing the image we encode the \type{IplImage}, which gives back an encoded \type{CvMat}. The image is then returned to the web browser via \funct{FCGI_WriteBinary}, where it can be displayed.
+
+\subsection{Dilatometer}
+
+The dilatometer algorithm determines the rate of expansion of the can. The relevant functions are defined in \gitref{server}{sensors/dilatometer.c} and declared in \gitref{server}{sensors/dilatometer.h}. When an experiment is started, \funct{Dilatometer_Init} is executed. This creates all the necessary structures and sets the initial value of \var{lastPosition}, a static variable that stores the last edge found.
+
+As the \funct{Camera_GetImage} function in \gitref{server}{image.c} is external, it can be accessed from \gitref{server}{sensors/dilatometer.c}. This was done so that both the dilatometer and the image stream can access the camera. The \type{IplImage} returned is converted to the \type{CvMat} structure \var{g_srcRGB}, which is then passed to the function \funct{CannyThreshold}. This function takes a series of steps to extract an image containing only the edges. First we use \funct{cvCvtColor} to convert the \type{CvMat} image to grayscale. The image is then blurred using the \funct{cvSmooth} function, to which we pass the parameters \var{CV_GAUSSIAN} and \var{BLUR}, giving a Gaussian blur with a kernel of size \var{BLUR} (defined in \gitref{server}{sensors/dilatometer.h}). The blurred image is then passed to the OpenCV Canny edge detector.
+
+The Canny edge algorithm\cite{OpenCV_Canny} determines which pixels are ``edge'' pixels through a series of steps. It first applies the Sobel operator in the x and y directions, using \var{KERNELSIZE} as the kernel size, which gives the gradient strength and direction at each pixel; the direction is rounded to 0, 45, 90 or 135 degrees. Non-maximum suppression then removes any pixels not considered to be part of an edge. The remaining pixels are put through the hysteresis step: if the gradient of a pixel is higher than the upper threshold (in our algorithm \var{LOWTHRESHOLD*RATIO}), the pixel is accepted as an edge; if it is below the lower threshold (i.e.\ \var{LOWTHRESHOLD}), it is discarded; pixels in between are kept only if they are connected to a pixel above the upper threshold\cite{OpenCV_Canny}. The values defined in the header file can be altered to improve accuracy.
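The double-threshold classification at the heart of the hysteresis step can be sketched in a few lines. Treating 35 as \var{LOWTHRESHOLD} (the value quoted later in this report) and 3 as \var{RATIO} are assumptions; the real constants live in \gitref{server}{sensors/dilatometer.h}.

```c
/* Assumed values mirroring the constants in dilatometer.h. */
#define LOWTHRESHOLD 35
#define RATIO        3

enum EdgeClass { REJECTED, WEAK, STRONG };

/* Classify one gradient magnitude with Canny's double threshold. */
enum EdgeClass ClassifyPixel(double gradient)
{
    if (gradient > LOWTHRESHOLD * RATIO)
        return STRONG;      /* above upper threshold: accepted outright */
    if (gradient < LOWTHRESHOLD)
        return REJECTED;    /* below lower threshold: discarded */
    return WEAK;            /* kept only if connected to a STRONG pixel */
}
```

The connectivity check for \texttt{WEAK} pixels is done in a separate pass over the image, which is omitted here.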
+
+The \funct{CannyThreshold} function fills the \type{CvMat} structure \var{g_edges} with the current image's edges (i.e.\ an image containing only the pixels considered to be edges; see Appendix \ref{appendix_imageedge}). The code then finds the location of the line. It samples a number of rows, determined by the number of samples and the height of the image and evenly spaced over that height, and in each row finds the pixels considered to be edges and takes their average position. The average position over all sampled rows then gives the actual edge position. If a row does not contain an edge, it is not included in the average. If a blank image goes through, or the sample count is too low for an edge to be picked up, the function returns false and no data point is recorded.
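The row-sampling and averaging step can be sketched in plain C, independent of OpenCV. The function name and the binary edge-image representation (one byte per pixel, non-zero meaning "edge") are assumptions for illustration.

```c
#include <stdbool.h>

/* Average the edge-pixel columns of evenly spaced sample rows.
   Returns false if no sampled row contained an edge (e.g. a blank
   image), in which case no data point should be recorded. */
bool FindEdgePosition(const unsigned char *edges, int width, int height,
                      int samples, double *position)
{
    if (samples <= 0 || samples > height)
        return false;

    double sum = 0.0;
    int rows_with_edge = 0;
    int step = height / samples;          /* rows evenly spaced over height */

    for (int s = 0; s < samples; ++s) {
        int y = s * step;
        double row_sum = 0.0;
        int count = 0;
        for (int x = 0; x < width; ++x) {
            if (edges[y * width + x]) {   /* edge pixel in this row */
                row_sum += x;
                ++count;
            }
        }
        if (count > 0) {                  /* rows without an edge are skipped */
            sum += row_sum / count;
            ++rows_with_edge;
        }
    }

    if (rows_with_edge == 0)
        return false;
    *position = sum / rows_with_edge;     /* average over sampled rows */
    return true;
}
```

For a vertical edge at column 2 of a small test image, the function returns position 2.0; for a blank image it returns false.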
+
+Once the edge is found, the function returns one of two values. If the \var{DIL_POS} ID is set, it returns the position of the edge; note that this only shows the change in position of one side of the can. If the \var{DIL_DIFF} ID is set, the value is the difference between the current position and the last position, multiplied by \var{SCALE} and by 2. We multiply by 2 because we only measure the change in width on one side of the can, and must assume that the expansion is symmetrical. The scale will be used to convert from pixels to $\mu$m (or a more suitable unit); currently it is set to 1, as the dilatometer has not been calibrated, so we are only measuring the rate of change in pixels (which is arbitrary). The static variable \var{lastPosition} is then updated for determining the next change in size. If the difference is negative, the can is being compressed or depressurized.
+The rate of expansion can then be determined from the data set. The system does not have a fixed refresh rate, but each data point is time-stamped; if the data is the edge position, plotting its derivative with respect to time shows the rate of expansion over time.
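The two output modes can be summarised in a short sketch. The IDs \var{DIL_POS} and \var{DIL_DIFF} and the constant \var{SCALE} follow the text above; the function name, their numeric values and the update rule's exact placement are assumptions.

```c
#define SCALE 1.0                 /* uncalibrated: pixels, not micrometres */

enum { DIL_POS, DIL_DIFF };

static double lastPosition = 0.0; /* set by Dilatometer_Init in the real code */

/* Convert a detected edge position into the requested reading. */
double Dilatometer_Read(int id, double edgePosition)
{
    double value;
    if (id == DIL_POS) {
        value = edgePosition; /* raw position of one side of the can */
    } else {
        /* Only one side is measured, so double the change, assuming
           symmetrical expansion; negative means compression. */
        value = (edgePosition - lastPosition) * SCALE * 2.0;
    }
    lastPosition = edgePosition;  /* reference for the next reading */
    return value;
}
```

For example, a reading at position 10.0 followed by a \var{DIL_DIFF} reading at 12.5 yields a width change of 5.0 (2.5 pixels of one-sided movement, doubled).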
+
+\subsection{Design Considerations}
+
+\subsubsection{OpenCV}
+
+OpenCV was chosen as the image processing library primarily due to it being open source and widely used in image processing tasks.
+One thing to note, however, is that the OpenCV documentation for C is quite difficult to follow. Although the library was originally written in C, it is now written primarily for C++, so the documentation and some of the newer functionality are tailored to C++. This caused some difficulty when writing the code in C, as not all C++ functionality was available in C, or was included in a different or outdated fashion.
+
+\subsubsection{Memory Management}
+
+An initial problem I faced when coding with OpenCV was memory leaks. My simple program to take an image and save it to file was losing approximately 18\,MB, which is unacceptable and would cause issues in the long term. After researching the issue I found that I was not properly releasing the structure storing the image data, \type{IplImage}. For example, I was using:
+\begin{lstlisting}
+ cvReleaseImage(&frame);
+\end{lstlisting}
+whereas the correct release function is actually:
+\begin{lstlisting}
+ cvReleaseImageHeader(&frame);
+\end{lstlisting}
+
+Another thing to note: when releasing one of the \type{CvMat} structures (\verb/g_srcRGB/) during the cleanup of the dilatometer module, a \verb/NULL/ pointer dereference occurred and program execution stopped. The reason for this is unknown, but the other \type{CvMat} structures appear to be released properly. For now I have simply removed this release; however, the cause should be investigated.
+
+\subsubsection{Dilatometer}
+The dilatometer code went through a few iterations. Originally we were informed by the Sensors Team that the camera would be watching the can itself, rather than an object attached to the can. Thus my original algorithms revolved around finding the actual width, and change in width, of the can.
+
+Originally I designed the algorithm to find the edge of the can via pixel thresholds, taking the average position of the pixels below a certain threshold (ideally a dark can on a light background creates a contrast at the edge). This already gives a fairly inaccurate result, as it assumes a relatively sharp intensity gradient; even with little noise the system would have accuracy issues.
+
+To increase the accuracy in finding the edge, I considered the Canny edge algorithm. I wrote my algorithm to find all points above a certain threshold and take their average, treating this as an edge. I then scanned through the rest of the image until the next edge was found and did the same; the width of the can is the difference between the two locations. I also wrote an algorithm to generate test edges so I could test the detection. The function (\funct{Dilatometer_TestImage}, which is still located in \gitref{server}{sensors/dilatometer.c}) generated two edges with an amount of noise. The edges were created by taking an exponential decay around the edge and adding (or subtracting) random noise from the expected decay; the edges were then moved outwards using a loop. The results can be seen in Figure \ref{canny_edges.png}. From the graphs it can be seen how effective the algorithm was for a system with negligible noise, as it gave negligible percentage error. However, with increasing levels of noise there is a considerable increase in inaccuracy (Figure \ref{canny_edges_noise.png}).
+
+
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.9\textwidth]{figures/canny_edges.png}
+ \caption{Output of canny edge algorithm applied to generated edges}
+ \label{canny_edges.png}
+\end{figure}
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.9\textwidth]{figures/canny_edges_noise.png}
+ \caption{Output of canny edge algorithm applied to generated edges with generated noise}
+ \label{canny_edges_noise.png}
+\end{figure}
+
+After the Sensors Team relayed that they would now attach something to the can in order to measure the change in position, I decided to stick with the Canny edge algorithm and implement something similar to what I had tested previously. The figures in Appendix A show the progression of the image through the algorithm. Figure 2A shows the original image, 2B the blurred grayscale image (with a \var{BLUR} value of 5), and 2C the image after the Canny edge algorithm with a low threshold of 35. Figures 3A and 3B have the same input image but different input values; it can be seen how tweaking the values removes outliers, as Figure 3B is skewed to the right by them. Figure 4 shows that, despite there being no edge points in the top half of the image, the edge has still been accurately determined.
+
+The testing shows that, given a rough edge with few outliers, an edge can be determined; however, inaccuracy grows with the variance of the edge. The best solution to this does not lie in software: if the object provided an edge that was straight at this magnification, with good contrast, the results would be much more accurate (i.e.\ the accuracy of the dilatometer currently depends more on the object used than on the software).
+
+\subsubsection{Interferometer}
+Earlier in the semester we were informed by the Sensors Team that instead of a dilatometer we would be using an interferometer. The algorithm for this was written and tested; it is still located in \gitref{server}{interferometer.c} and the header \gitref{server}{interferometer.h}. However, development ceased after the Sensors Team informed us that the interferometer would no longer be implemented.
+
+\subsection{Further Design Considerations}
+
+\begin{itemize}
+ \item During testing we noted a considerable degree of lag between the image stream and reality. Further testing can be done to determine the causes and any possible solutions.
+	\item A function to help calibrate the dilatometer should be created.
+ \item The algorithm should be tested over an extended period of time checking for memory leak issues caused by OpenCV.
+	\item The code could be modified to allow the parameters used in the Canny edge algorithm to be changed in real time, so the user can try to maximize the accuracy of the results. The image with the edge superimposed on it could also be streamed to the client in the same manner as the image stream, giving the user feedback.
+
+	\item The algorithm could be improved to reject outliers in the edge image; however, this is less necessary if the object used gives a sufficiently smooth and straight edge.
+\end{itemize}
+
+% END Callum's section
+
+\subsection{Results}
+
+Figure \ref{image_in_api.png} shows an image obtained from one of the two dilatometers used in the full system setup, in collaboration with all teams. The image is of a white Lego tile attached to the can. This image was successfully streamed using the server software, and the dilatometer readings were monitored using the same software. Unfortunately we were unable to maintain a constant value for a stationary can, indicating that the algorithm needs further development. Due to a leak in the can seal we were unable to pressurize the can sufficiently to see a noticeable change in the edge position.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.6\textwidth]{figures/image_in_api.png}
+	\caption{Microscope image of the Lego tile attached to the can in the experimental setup}
+ \label{image_in_api.png}
+\end{figure}
+
+
+\section{Human Computer Interaction and the Graphical User Interface}
+
+% BEGIN James' section
+\subsection{Design Considerations}
+
+Many considerations must be taken into account to successfully create a Graphical User Interface (GUI) that supports Human Computer Interaction. A poorly designed GUI can make a system difficult and frustrating to use, and a GUI made with no consideration of the underlying software can make a system inoperable or block key features. Without a well designed GUI, Human Computer Interaction becomes difficult and discourages any interaction with the system at all.
+
+One of the key considerations made during the design of the GUI was the functionality it required. Originally this was limited to simple control of the system, including start/stop and a display of system pressures; however, as the project progressed this was expanded to include a user login, limited admin functionality, graphing, image streaming and live server logs. These features were added as a result of changing requirements from the initial brief, as well as logical progression of the GUI's capabilities. This gradual progression represents a continual improvement in Human Computer Interaction for the system.
+
+Ease of use is the most important consideration in a well designed GUI; accessibility and user friendliness are key aspects of web development. Burying key functionality inside menus makes it difficult to find and discourages its use, whereas making things obvious and accessible encourages use and makes the software quicker to learn, which in turn means that the user can start doing what they want faster. However, there are limits: care has to be taken that a first-time user is not bombarded with so many options that the interface becomes overwhelming. Eventually a system of widgets in a sidebar was designed to satisfy the ease-of-use requirements by allowing functionality to be grouped and easily accessible.
+
+Due to the limits of the BeagleBone, such as available memory and processing power, it was important that the code, images and all libraries used were both small and efficient. This meant careful consideration every time a library was proposed for use. It also meant that, where possible, processing should be offloaded onto the client hardware rather than the server, which already runs the server-side code. Consequently, large libraries were ruled out, and actions such as graphing were performed by the GUI on the client machine.
+
+The final consideration is extensibility. An extensible code base allows easy addition of new features: a good extensible interface makes adding a feature a simple case of dropping the extra code in, whereas a GUI that does not take this into account can require deleting and recoding large chunks of the previous code. This means that the interface code must be structured coherently and conform to a ``standard'' across the GUI. Code must be laid out in the same way from page to page and, where possible, sections of code serving specific goals should be separated from the main page code. The latter was achieved through the jQuery \verb/.load()/ function, allowing whole widgets to be placed in their own separate files. This feature alone lets the developer add new widgets simply by creating a widget file conforming to the GUI's standard and then using \verb/.load()/ to pull it into the actual page.
+
+\subsection{Libraries used in GUI construction}
+
+These are libraries that we evaluated, deemed sufficiently useful, and chose for the final GUI design.
+
+\subsubsection{jQuery} \label{jQuery}
+
+jQuery\cite{jQuery} is an open source library designed to make web coding easier and more effective. It is cross-platform and supports all of the most common browsers. Features such as full CSS3 compatibility, overall versatility and extensibility, combined with its light weight, made including this library in the GUI an easy decision.
+
+\subsubsection{Flot}
+
+Flot\cite{flot} is a JavaScript plotting library built for jQuery. It is lightweight and easy to use, allows easy production of attractive graphs, includes advanced support for interactive features, and supports $\text{IE} < 9$. The Flot library provided an easy but powerful way to graph the data sent by the server.
+
+
+\subsection{Libraries trialed but not used in GUI construction}
+
+These are libraries that were considered for use in the GUI software but were not used in the final product.
+
+\subsubsection{jQuery UI}
+
+jQuery UI\cite{jQueryUI} is a library that provides numerous widgets and user interface interactions built on the jQuery JavaScript library. Targeted at both web design and web development, it allows easy and rapid construction of web applications and interfaces with many pre-built interface elements. However, it is a larger library and provides many features that are unnecessary here, and as such it was unfit for use in the GUI.
+
+\subsubsection{chart.js}
+chart.js\cite{chart.js} is an object-oriented JavaScript library that provides graphing capabilities on the client side. The library uses HTML5 elements to present data in a variety of ways, including line graphs, bar charts, doughnut charts and more. It is a lightweight, dependency-free library; however, it lacks features compared to Flot and was not used.
+
+\subsection{Design Process for the Graphical User Interface}
+
+As with any coding, following a reasonably strict design process improves efficiency and results in a better end product with more relevant code. Proper planning and testing prevent writing large amounts of code that is later scrapped, and provide a more focused direction than can be gleaned from a project brief.
+
+
+Producing test GUIs with simple functionality allows the developer to experiment and test features without investing a large amount of time and code in something that may not work or solve the required problem. Test GUIs can be both functional and aesthetic, and throughout the project a large number of both types were produced. Aesthetic test GUIs are great for experimenting with the look and feel of the software and let the developer experience first hand how the page handles; functional test GUIs let the developer try out new features and investigate whether the client-server interaction is functioning properly.
+
+While producing test GUIs, a design document was drawn up. This document encompassed the design goals and specifications for the final Human Computer Interface and provided what was essentially a master plan. Included in the document were the separate pages to be provided, the overall look of the system and the final functionality desired.
+
+
+Once the design document was completed, a master template was created. First a draft was created in PowerPoint using SmartArt, as seen in Figure \ref{draftGUI.png}. After reviewing and accepting the draft, an HTML template with CSS elements was produced. This template mimics the draft with some added features and improvements, as seen in Figure \ref{templateGUI.png}. This was also reviewed and accepted, and formed the base code for the GUI.
+
+With the template completed, functionality was then added. By copying the template exactly for each individual page, the look of the software is kept consistent throughout. Adding functionality is then a simple case of substituting functional code into the demonstration panels and adding the necessary JavaScript for the pages to function. Effort was made to keep functional code separated from the template itself and to load it into the page from an external file, in order to facilitate cleaner code with better expandability.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/draftGUI.png}
+ \caption{Draft GUI designed in Microsoft Powerpoint}
+ \label{draftGUI.png}
+\end{figure}
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/templateGUI.png}
+ \caption{Screenshot of a GUI using templates to form each panel}
+ \label{templateGUI.png}
+\end{figure}
+
+% END James' section
+
+% BEGIN Rowan's section
+
+\section{GUI Design Process}
+
+\subsection{Creation}
+
+The first iteration of the GUI was relatively simple and almost purely text based. It held a graph along with the basic image stream we had developed, formatted down the left-hand side of the page.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_creation.png}
+ \caption{First Test GUI}
+
+\end{figure}
+
+\subsection{Testing}
+
+Secondly, we decided to test the FastCGI protocol, which can be used to interface programs with a web server. This was the first test in which sensors and actuators theoretically collected data through the server.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_creation.png}
+ \caption{Testing GUI}
+
+\end{figure}
+
+This GUI ran on a free domain name, which allowed us to experiment with command and control.
+
+\subsection{Iterations}
+
+After the basic testing of the initial GUIs, we started exploring GUI design ideas that would be aesthetic, easy to use and reflect positively on UWA. To do this we looked into how professional websites were made by opening their source code and investigating their layout, structure and style techniques. We then completed some GUI design trees, establishing a clear flow between pages.
+
+\subsection{Parallel GUI Design}
+
+During the GUI development phase, several GUIs were created. Some used graphical development software, while others used hard-coded HTML, JavaScript and CSS. Due to a lack of organization and communication within the group, a ``final GUI'' was made by several of the team members. Some of these are shown below.
+
+\subsection{GUI Aesthetics}
+
+Once we had decided on our core GUI design we sought Dr.\ Adrian Keating's opinion of it, although it was not yet complete. While the design was simple and functional, Dr.\ Keating pointed out that it was bland. He encouraged us to release our artistic flair onto the GUI and make it more graphical and easy to use. Taking this into account, we began work on another final GUI, designed almost from scratch. We kept our GUI design flow, but worked largely on the look and feel of the GUI rather than the functionality it needed.
+
+\subsection{HTML Structure}
+
+In a nutshell, our GUI uses basic HTML code to lay out what each page needs, with CSS styles on top to lay out and format that HTML. We then include JavaScript files in the HTML code so that graphs and images can be streamed. In our GUI we have chosen to use jQuery for requesting information from the server, and Flot for JavaScript graphing functionality.
+
+\subsection{Graphical Development VS Hard Coding}
+
+From the multiple GUIs we had accidentally created during the GUI design phase, we noticed a large variety in the styles that came out (which should not have happened). GUIs were created by hard coding HTML, CSS and JavaScript, with development software like Dreamweaver, and with various Java-based development platforms.
+
+\subsection{Final Design}
+
+The final concept consists of widgets and a navigation bar to the left. For the maximum functionality achievable in the time remaining, we decided to have pages for Control, Graphs, Data, Data streaming, Pin debugging, and a help screen, shown below.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_final.png}
+ \caption{Final GUI}
+\end{figure}
+
+This is the ``home screen''; it shows the layout of the experiment, the subsystems and a welcome message.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_experiment.png}
+	\caption{The Experiment page (shown disconnected from the server) displays warnings and the experiment state, allowing device use by only one student at a time and avoiding conflicting control}
+\end{figure}
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_results.png}
+ \caption{The Experimental Results page (also currently disconnected)}
+\end{figure}
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_data.png}
+	\caption{The experimental data page shows the state the sensors and actuators are reading, useful for checking the condition of the experiment and taking measurements}
+\end{figure}
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_pintest.png}
+	\caption{The BBB pin test page is for the software team only, so that we can test and debug the experiment when errors are found in the GUI or software}
+\end{figure}
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/gui_help.png}
+	\caption{The help page links to the wiki information from all the teams, allowing new users to explore all aspects of the project for further development}
+\end{figure}