\section{Hardware Interfacing}\label{Hardware Interfacing}

Figure \ref{pinout.pdf} shows the pinout diagram of the BeagleBone Black. There are many contradictory pinout diagrams available on the internet; this figure was initially created by the software team after trial and error testing with an oscilloscope to determine the correct location of each pin. Port labels correspond with those marked on the BeagleBone PCB. The choice of pin allocations was made by the electrical team after discussion with software when it became apparent that some pins could not be controlled reliably.

\subsection{Sensors}

Code to read sensor values is located in the \gitref{server}{sensors} subdirectory. With the exception of the dilatometer (discussed in Section \ref{Image Processing}), all sensors used in this project produce an analogue output. After conditioning and signal processing, this arrives at an analogue input pin on the BeagleBone as a signal in the range $0\to1.8\text{V}$. The sensors currently controlled by the software are:

\begin{itemize}
 \item {\bf Strain Gauges} (x4) \gitref{server}{sensors/strain.c}

 To simplify the amplifier electronics, a single ADC is used to read all strain gauges. GPIO pins are used to select the appropriate strain gauge output from a multiplexer. A mutex is used to ensure that no two strain gauges can be read simultaneously.

 \item {\bf Pressure Sensors} (x3) \gitref{server}{sensors/pressure.c}

 There are two high range pressure sensors and a single low range pressure sensor; all three are read independently.

 \item {\bf Microphone} (x1) \gitref{server}{sensors/microphone.c}

 The microphone's purpose is to detect the explosion of a can. This sensor was given a low priority, but has been tested with a regular clicking tone and found to register spikes at the predicted frequency ($\sim$1.5Hz).

 \item {\bf Dilatometer} (x2) - \gitref{server}{sensors/dilatometer.c} (see Section \ref{Image Processing})
\end{itemize}
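The multiplexed strain gauge read described above can be sketched in C as follows. This is an illustrative fragment only: \funct{Mux_Select} and \funct{ADC_Read} are hypothetical stand-ins for the real GPIO and ADC code in \gitref{server}{sensors/strain.c}.

```c
#include <pthread.h>

/* Sketch of a mutex-protected, multiplexed strain gauge read.
 * Mux_Select and ADC_Read are illustrative stand-ins (stubbed here),
 * not the project's actual functions. */
static pthread_mutex_t mux_mutex = PTHREAD_MUTEX_INITIALIZER;
static int selected_channel = 0;

/* Drive the GPIO select lines of the multiplexer (stubbed) */
static void Mux_Select(int channel) { selected_channel = channel; }

/* Read the single shared ADC (stubbed to return a per-channel value) */
static int ADC_Read(void) { return 1000 + selected_channel; }

/* Read one strain gauge; the mutex guarantees that no two gauges
 * are selected and read at the same time. */
int Strain_Read(int channel)
{
    pthread_mutex_lock(&mux_mutex);
    Mux_Select(channel);
    int value = ADC_Read();
    pthread_mutex_unlock(&mux_mutex);
    return value;
}
```

Because the select and read happen inside one critical section, a second thread can never observe the ADC while the multiplexer is pointed at another gauge.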

Additional sensors can be added and enabled through use of the \funct{Sensor_Add} function in \funct{Sensor_Init} in the file \gitref{server}{sensors.c}.

The function \funct{Data_Calibrate} located in \gitref{server}{data.c} performs interpolation between calibration points to convert raw sensor readings into physical values. The pressure sensors and microphone have been calibrated in collaboration with the Sensors Team; however, only a small number of data points were taken and the calibration was not tested in detail. We would recommend a more detailed calibration of the sensors in future work.
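The interpolating calibration can be sketched as follows. This is an illustrative analogue of \funct{Data_Calibrate}; the real function's signature and data layout may differ.

```c
/* Sketch of interpolation-based calibration: given calibration points
 * (xs[i], ys[i]) with xs in ascending order, linearly interpolate a
 * raw reading x onto the calibrated scale.  Readings outside the
 * calibrated range are clamped to the end points. */
double Calibrate(double x, const double *xs, const double *ys, int n)
{
    if (x <= xs[0]) return ys[0];
    if (x >= xs[n-1]) return ys[n-1];
    for (int i = 1; i < n; ++i) {
        if (x <= xs[i]) {
            double t = (x - xs[i-1]) / (xs[i] - xs[i-1]);
            return ys[i-1] + t * (ys[i] - ys[i-1]);
        }
    }
    return ys[n-1]; /* unreachable when xs is ascending */
}
```

This is why only a small number of calibration points still yields usable readings: accuracy between points depends on how linear the sensor response actually is.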

\subsection{Actuators}

Code to set actuator values is located in the \gitref{server}{actuators} subdirectory. The following actuators are (as of writing) controlled by the software and have been successfully tested in collaboration with the Electronics and Pneumatics teams. Additional actuators can be added and enabled through use of the \funct{Actuator_Add} function in \funct{Actuator_Init} in the file \gitref{server}{actuators.c}.

\subsubsection{Relay Controls} \gitref{server}{actuators/relay.c}

The electrical team employed three relays for control over digital devices. The relays are switched using the GPIO outputs of the BeagleBone Black.

\begin{itemize}
 \item Can select - chooses which can is pressurised (0 for the strain can, 1 for the explode can)
 \item Can enable - allows the selected can to be pressurised (0 for vent, 1 for enable)
 \item Main enable - allows pressure to flow to the system (0 for vent, 1 for enable); can also be used for emergency venting
\end{itemize}

The use of separate ``can select'' and ``can enable'' relays means that the hardware itself prevents both cans from being pressurised simultaneously, so the software does not need to enforce this. This both simplifies the software and avoids potential safety issues if the software were to fail.
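Switching a relay reduces to writing ``0'' or ``1'' through the BeagleBone's sysfs GPIO interface. The following hypothetical helper sketches this; it takes a stream argument so the logic can be shown (and tested) without hardware, whereas on the board the stream would be a file such as /sys/class/gpio/gpioN/value.

```c
#include <stdio.h>

/* Sketch of a sysfs-style GPIO write: the kernel interface accepts
 * the characters "0" or "1" written to a gpio value file.
 * Hypothetical helper, not the project's actual function.
 * Returns 0 on success, -1 on failure. */
int GPIO_Set(FILE *f, int value)
{
    if (fprintf(f, "%d", value ? 1 : 0) < 0)
        return -1;
    return fflush(f) == 0 ? 0 : -1;
}
```

Any non-zero value is collapsed to 1, matching the two-state nature of the relays.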

\subsubsection{PWM Outputs}

A single PWM output is used to control a pressure regulator (\gitref{server}{actuators/pregulator.c}). The electrical team constructed an RC filter circuit which effectively averages the PWM signal to produce an almost constant analogue output. The frequency of the PWM signal is $2\text{kHz}$. This actuator has been calibrated, which allows the user to input the pressure value in kPa rather than having to control the PWM duty cycle directly.
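The translation from a duty-cycle fraction to the values used by the PWM hardware can be sketched as follows. This is illustrative only (the real code lives in \gitref{server}{actuators/pregulator.c}); it assumes the sysfs PWM convention of specifying period and on-time in nanoseconds.

```c
/* Sketch of mapping a duty-cycle fraction onto the PWM on-time.
 * At 2 kHz the period is 500000 ns; the sysfs PWM interface takes
 * both the period and the on-time in nanoseconds.  Illustrative
 * helper, not the project's actual function. */
long PWM_DutyNs(double fraction)
{
    const long period_ns = 500000; /* 1 / 2 kHz */
    if (fraction < 0.0) fraction = 0.0; /* clamp to valid range */
    if (fraction > 1.0) fraction = 1.0;
    return (long)(fraction * period_ns);
}
```

With the RC filter averaging the output, the analogue voltage seen by the regulator is simply the supply voltage scaled by this fraction, which is what makes a single linear calibration from kPa to duty cycle workable.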

\begin{figure}[H]
 \centering
 \includegraphics[angle=90,width=1.0\textwidth]{figures/pinout.pdf}
 \caption{Pinout Table}
 \label{pinout.pdf}
\end{figure}

\section{Authentication Mechanisms}\label{Authentication Mechanisms}

The \funct{Login_Handler} function (\gitref{server}{login.c}) is called in the main thread when an HTTP request for authentication is received (see Section \ref{Server/Client Communication}). This function checks the user's credentials and grants access to the system if they are valid. Whilst we had originally planned to include only a single username and password, changing client requirements forced us to investigate alternative authentication methods that could cope with multiple users.

Several authentication methods are supported by the server; the method to use can be specified as an argument when the server is started.
\begin{enumerate}
 \item {\bf Unix style authentication}

 Unix-like operating systems store a plain text file (/etc/shadow) of usernames and encrypted passwords\cite{shadow}. To check that a password is valid, it is encrypted and then compared to the stored encrypted password; the actual password is never stored anywhere. The /etc/shadow file must be maintained by shell commands run directly on the BeagleBone. Alternatively, a web based system to upload a similar file could be created.

 \item {\bf Lightweight Directory Access Protocol (LDAP)}

 LDAP\cite{ldap, ldap_man} is a widely used database for storing user information. A central server is required to maintain the LDAP database; programs running on the same network can query the server for authentication purposes.

 The UWA user management system (Pheme) employs an LDAP server for storing user information and passwords. The software has been designed so that it can interface with an LDAP server configured similarly to the server on UWA's network. Unfortunately we were unable to gain permission to query this server. However, an alternative server could be set up to provide this authentication mechanism for our system.

 \item {\bf MySQL Database}

 MySQL\cite{mysql} is a popular and free database system that is widely used in web applications. The ability to search for a user in a MySQL database and check their encrypted password was added late in the design as an alternative to LDAP. There are several existing online user management systems which interface with a MySQL database, so it is feasible to employ one of these to maintain the list of users authorised to access the experiment. UserCake\cite{UserCake} is recommended, as it is both minimalistic and open source, and so can be modified to suit future requirements. We have already begun integrating the UserCake system into the project; however, a great deal of work is still required.

 MySQL and other databases are vulnerable to many different security issues which we did not have sufficient time to fully explore. Care should be taken to ensure that these issues are addressed before deploying the system.
\end{enumerate}
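The shadow-file lookup in the first option can be sketched as follows. \funct{Shadow_Parse} is a hypothetical helper: it only splits one line of an /etc/shadow style file, and the real code would then pass the stored hash (which embeds its own salt) to crypt(3) and compare the result against it.

```c
#include <string.h>

/* Sketch of parsing one /etc/shadow style line ("name:hash:...").
 * Hypothetical helper: splits the line in place and returns pointers
 * to the username and the encrypted password field.  Verifying a
 * login would then mean calling crypt(3) with the candidate password
 * and this hash (whose prefix carries the salt), and comparing.
 * Returns 0 on success, -1 if the line is malformed. */
int Shadow_Parse(char *line, char **name, char **hash)
{
    char *sep = strchr(line, ':');
    if (!sep)
        return -1;
    *sep = '\0';          /* terminate the username */
    *name = line;
    *hash = sep + 1;
    sep = strchr(*hash, ':');
    if (sep)
        *sep = '\0';      /* terminate the hash field */
    return 0;
}
```

Because the hash carries its own salt, no separate salt storage is needed; the comparison is purely between the stored string and the freshly encrypted candidate.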

% END Sam's section

\section{Server/Client Communication}\label{Server/Client Communication}

% BEGIN Jeremy's section

This section describes the methods and processes used to communicate between the server and client. Client-server interaction is achieved entirely over the internet, via standard HTTP web requests with TLS encryption. In other words, the system has been designed to be controlled {\bf completely through a standard web browser} (Figure \ref{client_request_flowchart.png}); no extra software is required from the client. Detailed reasons for this choice are outlined in Section \ref{Alternative Communication Technologies}.

\begin{figure}[H]
 \centering
 \includegraphics[width=0.8\textwidth]{figures/client_request_flowchart.png}
 \caption{High level flow chart of a client request to server response}
 \label{client_request_flowchart.png}
\end{figure}

\subsection{Web server}

Web requests from a user have to be handled by a web server. For this project, the nginx\cite{nginx} web server has been used, and acts as the frontend of the remote interface for the system. As shown in Figure \ref{client_request_flowchart.png}, all requests to the system from a remote client are passed through nginx, which then delegates each request to the required subsystem as necessary.

In particular, nginx has been configured to:
\begin{enumerate}
 \item Use TLS encryption (HTTPS)
 \item Redirect all HTTP requests to HTTPS (forcing TLS encryption)
 \item Display the full server program logs if given \api{log} as the address
 \item Display the warning and error logs if given \api{errorlog} as the address
 \item Forward all other requests that start with \api{} to the server program (FastCGI)
 \item Process and display PHP files (via PHP-FPM) for UserCake
 \item Serve all other files as normal (static content; e.g.\ the GUI)
\end{enumerate}

Transport Layer Security (TLS) encryption, better known as SSL or HTTPS, has been enabled to ensure secure communications between the client and server. This is primarily important when user credentials (username/password) are supplied, as it prevents ``man-in-the-middle'' attacks; that is, it prevents unauthorised persons from viewing the credentials as they are transmitted from the client to the server.

As mentioned in Section \ref{Authentication Mechanisms}, this system also runs a MySQL server for the user management system, UserCake. This kind of server setup is commonly referred to as a LAMP (Linux, Apache, MySQL, PHP) configuration\cite{lamp}, except that in this case nginx has been used in preference to the Apache web server.

Nginx was chosen as the web server because it is well established, lightweight and performance oriented. It also supports FastCGI by default, which is how nginx interfaces with the server program. Realistically, any well known web server would have sufficed, such as Apache or Lighttpd, given that this is not a large scale service.

\subsection{FastCGI}

Nginx has no issue serving static content --- that is, just normal files to the user. Where dynamic content is required, as is the case for this system, another technology has to be used; in this case that technology is FastCGI.

FastCGI is the technology that interfaces the server program written for this project with the web server (nginx). As illustrated in Figure \ref{client_request_flowchart.png}, a ``FastCGI layer'' translates web requests from a user into something the server program can understand, and vice versa for the response.

\subsection{Server API - Making Requests}\label{API}

From the client side, the server interface is accessed through an Application Programming Interface (API). The API forms a contract between the client and server: by requesting a URL of a predetermined format, the response will also be of a predetermined format that the client can use.

\begin{figure}[H]
 \centering
 \includegraphics[width=0.9\textwidth]{figures/fastcgi-flow-chart.png}
 \caption{Flow chart of a client request being processed (within the server program). Relevant files are \gitref{server}{fastcgi.c} and \gitref{server}{fastcgi.h}.}
 \label{fastcgi-flow-chart.png}
\end{figure}

In the case of the server API designed, requests are formatted as follows:

\url{https://host/api/module?key1=value1&key2=value2...&keyN=valueN} (where \verb/host/ is replaced with the IP address or hostname of the server).

The API consists of modules accepting arguments (specified as key-value pairs), depending on what each module (Figure \ref{modules}) does. For example, to query the API for basic information (running state, whether the user is logged in, etc.), the following query is used:

\url{https://host/api/identify}

The server will then respond with this information. In this case, the identify module does not require any arguments. However, it can accept two optional arguments, \texttt{sensors} and \texttt{actuators}, which make the server return extra information on the available sensors and actuators. This makes the following queries possible:

\begin{itemize}
 \item \url{https://host/api/identify?sensors=1}
 \item \url{https://host/api/identify?actuators=1}
 \item \url{https://host/api/identify?sensors=1&actuators=1}
\end{itemize}

These give information on the sensors, the actuators, or both, respectively. For other modules, some parameters are required rather than optional.
This form of API was chosen because it is simple to use and extremely easy to debug, given that these requests can simply be entered into any web browser to see the result. The requests remain fairly human readable, which was another benefit when debugging the server code.

Keeping the API format simple also made it easier to write the code that parses these requests. All API parsing and response logic lies in \gitref{server}{fastcgi.c}. The framework in \gitref{server}{fastcgi.c} parses a client request and delegates it to the relevant module handler. Once the module handler has sufficiently processed the request, it creates a response, using functions provided by \gitref{server}{fastcgi.c} to do so.

This request handling code went through a number of iterations before the final solution was reached. Changes were made primarily as the number of modules grew, and as the code was used more.

One of the greatest changes to request handling concerned how parameters were parsed. Given a request of \url{http://host/api/actuators?name=pregulator\&start_time=0\&end_time=2}, the module handler would receive \texttt{name=pregulator\&start_time=0\&end_time=2} as the parameters. This string had to be split into its key/value pairs, which is why the function \funct{FCGI_KeyPair} was made.

With increased usage, this was found to be insufficient. \funct{FCGI_ParseRequest} was created in response, and internally uses \funct{FCGI_KeyPair}, but abstracts request parsing greatly. In essence, it validates the user input, rejecting anything that does not match a specified format. If the input passes this test, the function automatically populates variables with the supplied values. The \funct{IdentifyHandler} module handler in \gitref{server}{fastcgi.c} is a good example of how this works.
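The key/value splitting performed by \funct{FCGI_KeyPair} can be sketched as follows. This is an illustrative reimplementation of the idea, not the project's exact code.

```c
#include <string.h>

/* Illustrative sketch of FCGI_KeyPair style parsing: repeatedly pull
 * one "key=value" pair off the front of a query string, splitting it
 * in place (the '&' and '=' separators are overwritten with NULs).
 * Returns 1 while pairs remain, 0 once the string is exhausted. */
int KeyPair_Next(char **query, char **key, char **value)
{
    if (*query == NULL || **query == '\0')
        return 0;
    *key = *query;
    char *amp = strchr(*query, '&');
    if (amp) {
        *amp = '\0';
        *query = amp + 1;          /* advance to the next pair */
    } else {
        *query = NULL;             /* this was the last pair */
    }
    char *eq = strchr(*key, '=');
    if (eq) {
        *eq = '\0';
        *value = eq + 1;
    } else {
        *value = *key + strlen(*key); /* key with no '=': empty value */
    }
    return 1;
}
```

Splitting in place avoids any allocation, which keeps the per-request overhead of parsing negligible.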

\begin{figure}[H]
 \centering
 \begin{tabular}{llll}
 {\bf API} & {\bf File} & {\bf Function} & {\bf Purpose} \\
 \api{identify} & \gitref{server}{fastcgi.c} & \funct{IdentifyHandler} & Provide system information \\
 \api{sensors} & \gitref{server}{sensors.c} & \funct{Sensor_Handler} & Query sensor data points or set sampling rate\\
 \api{actuators} & \gitref{server}{actuators.c} & \funct{Actuator_Handler} & Set actuator values or query past history \\
 \api{image} & \gitref{server}{image.c} & \funct{Image_Handler} & Return image from a camera (see Section \ref{Image Processing}) \\
 \api{control} & \gitref{server}{control.c} & \funct{Control_Handler} & Start/Stop/Pause/Resume the experiment \\
 \api{bind} & \gitref{server}{login.c} & \funct{Login_Handler} & Attempt to log in to the system (see Section \ref{Cookies})\\
 \api{unbind} & \gitref{server}{login.c} & \funct{Logout_Handler} & If logged in, log out.
 \end{tabular}
 \caption{Brief description of the modules currently implemented by the server.}
 \label{modules}
\end{figure}

\subsection{Server API - Response Format}

The server API responds to most requests in the JSON format. This choice was heavily influenced by the language the GUI is programmed in, JavaScript: JSON is parsed trivially in JavaScript, and is easily parsed in most other languages too.

A standard JSON response looks like this:

\begin{figure}[H]
 \centering
\begin{verbatim}
{
 "module" : "identify",
 "status" : 1,
 "start_time" : 614263.377670876,
 "current_time" : 620591.515903585,
 "running_time" : 6328.138232709,
 "control_state" : "Running",
 "description" : "MCTX3420 Server API (2013)",
 "build_date" : "Oct 24 2013 19:41:04",
 "api_version" : 0,
 "logged_in" : true,
 "user_name" : "_anonymous_noauth"
}
\end{verbatim}
 \caption{A standard response to querying the \api{identify} module}
 \label{fastcgi.c-flow-chart.pdf}
\end{figure}

A JSON response is the direct representation of a JavaScript object, which is what makes this format so useful. For example, if the JSON response were parsed and stored in the object \var{data}, the elements would be accessible in JavaScript through \var{data.module} or \var{data.status}.

To generate the JSON response from the server program, \gitref{server}{fastcgi.c} contains a framework of helper functions. Most of the functions help to ensure that the generated output is in a valid JSON format, although only a subset of the JSON syntax is supported. Supporting the full syntax would overcomplicate the framework while being of little benefit. Modules can still respond in whatever format they like, using \funct{FCGI_JSONValue} (aka \funct{FCGI_PrintRaw}), but lose the guarantee that the output will be valid JSON.
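The idea behind the helper framework can be sketched as follows. These are illustrative analogues of the real functions in \gitref{server}{fastcgi.c} (whose names and signatures differ), writing to an arbitrary stream rather than the FastCGI socket.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative analogue of the fastcgi.c JSON helpers: the helpers
 * insert the quoting, commas and braces themselves, so any output
 * built exclusively through them is always valid JSON. */
static int json_first = 1;

/* Open the JSON object */
void JSON_Begin(FILE *f)
{
    fputs("{", f);
    json_first = 1;
}

/* Emit one "key" : "value" pair, prefixing a comma for all but the
 * first pair so the separator logic cannot be gotten wrong */
void JSON_Pair(FILE *f, const char *key, const char *value)
{
    fprintf(f, "%s\n\t\"%s\" : \"%s\"", json_first ? "" : ",", key, value);
    json_first = 0;
}

/* Close the JSON object */
void JSON_End(FILE *f)
{
    fputs("\n}\n", f);
}
```

Because a module handler never writes braces or commas directly, it cannot produce a response that fails to parse on the client, which is the guarantee referred to above.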

Additionally, not all responses are in the JSON format. In specific cases, some module handlers will respond in a more suitable format. For example, the image handler will return an image (using \funct{FCGI_WriteBinary}); it would make no sense to return anything else. Similarly, the sensor and actuator modules will return data as tab-separated values if the user specifically asks for it (e.g.\ using \url{https://host/api/sensors?id=X&format=tsv}).

\subsection{Server API - Cookies}\label{Cookies}

The system makes use of HTTP cookies to keep track of who is logged in at any point. A cookie is a small token of information that is sent by the server and stored automatically by the web browser, which then sends it back automatically on subsequent requests to the server. If the cookie sent back matches what is expected, the user is `logged in'. Almost all web sites that have some form of login use cookies to keep track of this information, so this method is standard practice.
In the server code, this information is referred to as the `control key'. A control key is only provided to a user if they provide valid login credentials and no one else is logged in at that time.

The control key used is the SHA-1 hash of some randomly generated data, in hexadecimal format. In essence, this is just a string of random numbers and letters that uniquely identifies the current user.
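Generation of such a token can be sketched as follows. Note the simplification: the real server hashes random data with SHA-1, whereas this hypothetical helper hex-encodes bytes read from /dev/urandom directly, producing a 40-character token of the same shape.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of control key generation.  The real server produces the
 * SHA-1 hash of random data; here 20 bytes from /dev/urandom are
 * hex-encoded directly, giving a 40 character token of the same
 * shape.  Hypothetical helper; returns 0 on success, -1 on failure. */
int ControlKey_Generate(char out[41])
{
    unsigned char raw[20];
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL)
        return -1;
    size_t n = fread(raw, 1, sizeof(raw), f);
    fclose(f);
    if (n != sizeof(raw))
        return -1;
    for (size_t i = 0; i < sizeof(raw); ++i)
        sprintf(out + 2 * i, "%02x", raw[i]); /* two hex digits per byte */
    out[40] = '\0';
    return 0;
}
```

With 160 bits of randomness, the chance of an attacker guessing a live control key is negligible, which is what makes the cookie an acceptable stand-in for re-sending credentials.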

Initially, users had to pass this information as another key-value pair among the module parameters. However, this was difficult to handle, both for the client and the server, which precipitated the change to HTTP cookies.

\subsection{Client - JavaScript and AJAX Requests}

JavaScript forms the backbone of the web interface that clients use. JavaScript drives the interactivity behind the GUI and enables the web interface to be updated in real time. Without JavaScript, interactivity would be severely limited, which would be a large hindrance to the learning aspect of the system.

To maintain interactivity and to keep information up to date with the server, the API needs to be polled at a regular interval. Polling is necessary due to the design of HTTP; a server cannot ``push'' data to a client, so the client must request it first. To achieve this, JavaScript code was written to periodically perform what are termed AJAX requests.

AJAX requests are essentially web requests made in JavaScript that occur ``behind the scenes'' of a web page. By making such requests in JavaScript, the web page can be updated without the user having to refresh it, thus allowing for interactivity and a pleasant user experience.

Whilst AJAX requests are possible with plain JavaScript, the use of the jQuery library (see Section \ref{jQuery}) greatly simplifies the way in which requests can be made and interpreted.

\section{Alternative Communication Technologies}\label{Alternative Communication Technologies}

This section explains the reasoning behind the communication method chosen. This choice was not trivial, as it had to allow anyone to remotely control the experiment while imposing as few requirements on the user as possible. These requirements can be summarised as:
\begin{enumerate}
 \item A widely available, highly accessible service should be used, to reach as many users as possible
 \item Communication between client and server should be fairly reliable, to maintain responsiveness of the remote interface
 \item Communication should be secured against access by unauthorised persons, to maintain the integrity of the system
\end{enumerate}

To satisfy the first criterion, remote control via some form of internet access was the natural choice. Internet access is widely established and highly accessible, both globally and locally, where it can be (and is) used for a multitude of remote applications. One need only look as far as the UWA Telelabs project for such an example, having been successfully run since 1994\cite{telelabs}.

Internet communication itself is complex, and there is more than one way to approach the issue of remote control. A number of internet protocols exist, where the protocol chosen is based on the needs of the application. Arguably the most prevalent is the Hypertext Transfer Protocol (HTTP)\cite{rfc2616}, used in conjunction with the Transmission Control Protocol (TCP) to distribute web pages and related content across the internet. Other protocols exist but are less widely used. Custom protocols can even be used, but at the cost of having to build, test and maintain an extra component of software that likely has no benefit over pre-existing systems.

As a result, being able to control the system via a web page and standard web browser seemed the most logical choice, which is why it was used in the final design. Firstly, by designing the system to be controlled from a web page, the system becomes highly accessible: wherever internet access is present, the presence of a web browser is almost guaranteed. Nothing else is required from the client.

Secondly, setup and maintenance of the server is less involved, given that there is a wide range of pre-existing software made for just this purpose. Many features of the web browser can also be leveraged to the advantage of the system --- for example, communications between the client and server can quite easily be secured using Transport Layer Security (TLS, previously known as Secure Sockets Layer or SSL).

Thirdly, reliability of the communications is better guaranteed by using such existing technology, which has been well tested and proven to work. While internet access itself may not always be fully reliable, the use of such protocols and correct software design allows for a fair margin of robustness against this issue. For example, TCP has error checking methods built into the protocol to ensure the correct delivery of content. Similarly, HTTP has been designed with intermittent communications to the client in mind\cite{rfc2616}.

\subsection{Server Interface} \label{Server Interface}
Other options apart from FastCGI were explored to implement the server interface. Primarily, the interface had to allow for continuous sensor/actuator control independent of user requests, which may be intermittent.

\begin{figure}[H]
 \centering
 \includegraphics[width=0.8\textwidth]{figures/cgi.png}
 \caption{Block Diagram of a request to a CGI Application}
 \label{cgi.png}
\end{figure}

Initially, a system known as the ``Common Gateway Interface'' (CGI) was explored. However, CGI based software is only executed when a request is received (Figure \ref{cgi.png}), which makes continuous control and logging of the sensors and actuators unfeasible.

\begin{figure}[H]
 \centering
 \includegraphics[width=0.8\textwidth]{figures/custom_webserver.png}
 \caption{Block Diagram of a request to a custom web server}
 \label{custom_webserver.png}
\end{figure}

Another option considered was to build a custom web server (Figure \ref{custom_webserver.png}) that used threading, integrating both the control and web components. This option was primarily discarded because it was inflexible with regard to supporting extended services like PHP and TLS encryption. See \href{https://github.com/szmoore/MCTX3420/issues/6}{Issue 6}\cite{github_issues} on GitHub for more information.

\begin{figure}[H]
 \centering
 \includegraphics[width=0.8\textwidth]{figures/fastcgi.png}
 \caption{Block Diagram of a request to a FastCGI application}
 \label{fastcgi.png}
\end{figure}

In comparison, FastCGI (Figure \ref{fastcgi.png}) can be seen as the ``best of both worlds''. As mentioned previously, it is a variant of CGI in that it allows software to respond to web requests. The key difference is that with FastCGI, the program runs continuously, independent of any web requests. This overcomes the issues faced with either CGI or a custom web server: continuous control can be achieved without having to worry about the low-level implementation details of a web server.