[1] P Tomlinson. Virtually Delayed: An investigation into delay tolerant networks and their emulation in a virtual environment. M801 MSC Dissertation 2009/01, October 2009. [ bib | .pdf ]
Delay tolerant networking (DTN) offers a novel way to transfer information successfully over severely delayed, disrupted or periodically disconnected networks. Such delays or disruption would cause the transmission control protocol (TCP) either to fail or to perform extremely poorly. The research presented here describes the creation of a virtualised test environment, including a channel emulator, which was used to demonstrate the effects of delays and errors on both TCP and two DTN implementations. The breakdown of two variants of TCP under increasing delay was shown, with TCP Hybla performing much better than the default TCP Reno. The theoretical and actual behaviour of TCP's retransmission timeouts was also investigated: in tests, retransmission timings neither corresponded to those used by simulation software nor matched the assumptions made in some other research. The test environment was then used to perform experiments on the DTN2 reference implementation and the Interplanetary Overlay Network (ION). These were run over both TCP and UDP and, as expected, showed resilience and better relative performance as delays increased. Some issues were found with DTN2 running over UDP. However, ION running the Licklider Transmission Protocol (LTP) successfully transferred an image file over delays representing the distance, in light-seconds, between Mars and the Earth, using the Moon as a router. When the same transfer was attempted using FTP over TCP it failed completely, despite adjustments to timing and retry settings in both FTP and TCP. A comprehensive literature review provides an up-to-date insight into DTN's short history, the state of current research, and the areas that still need addressing or where debate exists.
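The retransmission-timeout breakdown mentioned in this abstract can be illustrated with a short sketch. The Python snippet below is not the dissertation's test harness: it simply applies RFC 6298-style timer seeding and exponential backoff (the initial clock granularity, backoff cap and retry count are assumed values) to show how quickly TCP's retransmission schedule stretches once one-way delays approach interplanetary scales.

# Illustrative sketch only: RFC 6298-style retransmission timeout (RTO) growth
# under a long, fixed one-way delay. Granularity, cap and retry count are assumptions.
def rto_schedule(one_way_delay_s, max_retries=6, rto_max_s=120.0):
    """Times at which a segment is (re)sent when no ACK ever arrives."""
    rtt = 2 * one_way_delay_s            # round-trip time for the link
    srtt, rttvar = rtt, rtt / 2.0        # first RTT measurement seeds the estimators
    rto = min(srtt + max(1.0, 4 * rttvar), rto_max_s)   # clock granularity assumed 1 s
    times, t = [0.0], 0.0
    for _ in range(max_retries):
        t += rto
        times.append(t)                  # retransmission fires if no ACK by now
        rto = min(rto * 2, rto_max_s)    # exponential backoff, capped
    return times

# Earth-Moon one-way delay is roughly 1.3 light-seconds; Earth-Mars can exceed 180 s.
print(rto_schedule(1.3))
print(rto_schedule(240.0))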

[2] C Lukeman. Securing Cellular Access Networks against Fraud. M801 MSC Dissertation 2009/02, October 2009. [ bib | .pdf ]
Despite improvements made in cellular security since first-generation analogue networks, a number of weaknesses remain in UMTS networks. This is made more critical because the UMTS AKA is to be used in fourth-generation LTE networks. At present there are no known attacks against UMTS networks, but hackers develop new techniques all the time and computer processors are becoming more powerful, so this may not always remain the case. As mobile applications move into high-value areas such as mobile commerce and mobile banking, these networks will become more attractive to criminals. This research highlights a number of weaknesses in UMTS authentication. A number of research projects have been initiated, but to date these have not been satisfactory for use in a live network, mainly due to a lack of compatibility with GSM. The protocol developed here introduces two new ideas to cellular authentication. The first is two-factor authentication using a chip-and-PIN solution. The second is a novel way of achieving mutual authentication by using a secret authentication code. A simulation of the protocol was produced using a Java client/server architecture. A series of controlled experiments was then run, testing all known threats against cellular networks, including the highlighted weaknesses. The protocol successfully dealt with all threats and, by not altering the part of the UMTS AKA associated with interworking, ensured compatibility with GSM. Although successful in the tests conducted, the experiments would need re-running using a dedicated network software tool such as OPNET, together with a security assessment by an external party, to verify the claims.
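The abstract does not give the protocol's message formats, so the sketch below is a generic illustration of the underlying idea only: mutual authentication in which each side proves knowledge of a shared secret ("authentication code") via an HMAC challenge-response. Every name and parameter here is hypothetical and is not the dissertation's protocol.

# Generic mutual challenge-response sketch; not the dissertation's protocol.
import hmac, hashlib, os

SHARED_SECRET = b"pre-provisioned-authentication-code"   # assumed shared secret

def respond(challenge: bytes) -> bytes:
    """Prove knowledge of the shared secret without revealing it."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

# The network challenges the handset ...
net_challenge = os.urandom(16)
handset_response = respond(net_challenge)
assert hmac.compare_digest(handset_response, respond(net_challenge))   # network verifies

# ... and the handset challenges the network, so each side authenticates the other.
handset_challenge = os.urandom(16)
net_response = respond(handset_challenge)
assert hmac.compare_digest(net_response, respond(handset_challenge))   # handset verifies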

[3] T Rogenhofer. Model-based Testing: Transforming SDL-UML models to the Intermediated Format 2.0. M801 MSC Dissertation 2009/03, October 2009. [ bib | .pdf ]
In 2007, the International Telecommunication Union published an SDL-UML profile. This profile enables the modelling of software systems with the Unified Modelling Language (UML) 2.0 according to the semantics of the Specification and Description Language (SDL). Although it is possible to generate abstract test cases automatically from SDL models, this is not the case for SDL-UML models. This dissertation describes the work done to transform an SDL-UML model to an intermediate text-based language, the so-called Intermediate Format (IF) 2.0. This transformation allows an existing automatic testing suite to generate test cases from an IF 2.0 system specification. At the beginning of the research project a comparative analysis was performed to investigate to what extent the SDL-UML profile could be transformed to the Intermediate Format 2.0. The results of this analysis were used in the specification of an SDL-UML model that was then transformed to an IF 2.0 system specification, supported by an Eclipse-based transformation tool. An existing automatic test suite was then used to verify the SDL-UML to IF 2.0 transformation. The verification showed that all elements of the SDL-UML model could be transformed to IF 2.0 except those used to model communication channels. In an automatically generated test case, these communication channels are required for sending signals to, and receiving signals from, the system under test. As a result, the SDL-UML model could not be transformed to the Intermediate Format 2.0 for the purpose of automatic test case generation.

[4] A J Moore. Development of an Immersive Environment to Teach Problem Oriented Engineering. M801 MSC Dissertation 2009/04, October 2009. [ bib | .pdf ]
In this thesis I explore current trends in computer-based learning and evaluate the application of existing multimedia design principles to 3-D environments, or multi-user virtual environments (MUVEs), which are becoming increasingly popular for the situated learning opportunities they present. I look at how methods of learning in computer-mediated environments have changed, and how this has led to a set of design principles based on cognitive learning. I apply the design principles I have identified to the development of a learning environment intended to teach the basic principles of Problem Oriented Engineering. Using student tracking within the environment I create, together with post-experience student questionnaires, I assess the value of the principles used. I find that multimedia design principles have some value in the design of a Second Life learning environment for Problem Oriented Engineering. There is, however, evidence both from my own research and from that of Minocha and Reeves (2009) that Second Life users, like computer gamers becoming familiar with a new game, expect more from an environment as they become more experienced. In particular, the design guidelines identified do not address the issues of immersion or how to design interactions within a 3-D environment. As a result, further work is required to build on the multimedia design principles to help inform the design of 3-D virtual-world learning spaces.

[5] P Meiers. Assembling The Project Compendium. M801 MSC Dissertation 2009/05, October 2009. [ bib | .pdf ]
Projects have a history of failure, with many going over budget, finishing beyond their expected completion dates or not meeting requirements. A successful project is generally taken to be one that satisfies budget, schedule, scope and customer expectations. To reduce project failure there has been a movement towards using project methodologies accompanied by their tools, templates and instruments, referred to in this dissertation as project compendium components. Components help to facilitate the transfer of knowledge; however, much important knowledge is based on feelings and insights that cannot be captured by components. Further, face-to-face communication is often viewed as the best means of knowledge transfer, although it is not always viable for dispersed teams. The question arises: “Which project compendium components are perceived to contribute most towards project success in the minds of project managers (PMs)?” The research involved analysing data from seventy-nine surveys completed by software and information technology (IT) PMs. The results showed that all components were thought to add value to project success provided they are used appropriately. Specifications, business briefs and project initiation documents were perceived to be the most necessary, and benefit realisation plans the least necessary. Achievement of business objectives and delivery of business benefits were thought by more respondents to be highly relevant to project success than a project being on budget, on time and to scope. Many interviewees thought that high-quality components can be used to manage project knowledge effectively, as they help to ensure the transparency, availability and accessibility of information. For components to be most effective, they were seen to need to be used in conjunction with socialisation, or the personal exchange of knowledge, and in an environment where knowledge sharing is fostered. Collaborative software tools were thought to further aid the management of components.

[6] B Bond. Critical Success Factors for enabling Packaged Software to realise the potential Business Benefits. M801 MSC Dissertation 2009/06, October 2009. [ bib | .pdf ]
Pre-written Packaged Software that can be configured to meet requirements has become the first choice as a means of satisfying an organisation's software application needs. The short history of IT projects is characterised by implementations that are difficult and that never realise all of their intended benefits. Software development is one of the hardest of human endeavours, because it attempts to build rule-based models of behaviour based upon an infinitely complex world of humans and their social interactions. The attraction of Packaged Software for businesses is not only that software development is removed from the equation, the software having already been written, but, more importantly, that it has also been fully tested and proven to work. The move towards the use of Packaged Software started to gain pace in the mid-1990s and it has been growing in popularity for over ten years. However, despite avoiding the need for organisations to develop their own software, there are still numerous reports in the literature of IT projects failing. This study sets out to identify whether Packaged Software implementations exhibit any special problems and risks when compared to conventional bespoke software projects. Reports in the literature have concentrated on Enterprise Resource Planning (ERP) systems. These are the most expensive examples of Packaged Software: they are difficult to implement and carry high risks that have even resulted in bankruptcy for the implementing organisation. However, ERP systems alone do not completely fulfil all the software application needs that an organisation may have; there are many smaller, more specialised packaged systems that tend to be implemented more frequently. This study sets out to see whether lessons can be learned from the literature on ERP systems and whether they can be applied generally to all types of Packaged Software implementation. Studies in the literature have typically looked at implementations from two viewpoints: that of the Business as a whole and that of the Software Vendor/Consultants. However, a typical Packaged Software domain actually involves three groups: the Business, the IT department and the Software Vendor/Consultants. Using the Delphi technique, views from all three groups were gathered to assess the “Critical Success Factors” and risks associated with Packaged Software. The study also set out to identify the roles and responsibilities of the main stakeholders in a Packaged Software implementation. It found that for Packaged Software to be cost-effective, an organisation must understand its Business Processes and be willing to change their design to exploit the system's capabilities fully. This means that Packaged Software implementation is more about designing Business Processes to align with the best practices embodied within the software. If this is not recognised, it can lead to expensive software customisations or to inefficient Business Processes being put in place to support the software. This inevitably brings about more change within the Business and therefore requires more “change management”.

[7] Thein Than Tun, Michael Jackson, Robin Laney, Bashar Nuseibeh, and Yijun Yu. Are your lights off? Using problem frames to diagnose system failures. Technical Report 2009/07, 2009. [ bib | .pdf ]
This paper reports on our experience of investigating the role of software systems in the power blackout that affected parts of the United States and Canada on 14 August 2003. Based on a detailed study of the official report on the blackout, our investigation aimed to bring out requirements engineering lessons that can inform development practices for dependable software systems. Starting from the assumption that the causes of failures are rooted in the complex structures of software systems and their world contexts, we decided to deploy and evaluate a framework that looks beyond the scope of software and into its physical context, and directs attention to places in the system structures where failures are likely to occur. We report that (i) Problem Frames were effective in diagnosing the causes of failures and documenting those causes in a schematic and accessible way, and (ii) errors in addressing the concerns of biddable domains, model building problems, and monitoring problems had contributed to the blackout.

[8] C J Flynn. Cross-Validation of Fitness Scores During Co-evolution Using the 'Trap-the-Cap' Board Game as a Testbed. M801 MSC Dissertation 2009/08, October 2009. [ bib | .pdf ]
Games have always been a convenient way of testing AI techniques: they have well-defined rules and well-defined outcomes. The reinforcement learning method of co-evolution is investigated using the board game Trap-the-Cap. Co-evolution is used when no teacher is available for game-playing 'agents' to learn from. Essentially, two populations of agents take it in turns to rank each other before mutating and hence evolving. The populations start out as completely naïve Trap-the-Cap players and gradually increase in sophistication over the ensuing generations. A criticism of co-evolution is that each population of agents is used both for training and for testing the other population. This is normally to be avoided, but that is difficult for co-evolution, which was selected precisely because no teacher was available to provide training. This thesis investigates the technique of injecting independent test agents into the co-evolution cycle to provide 'Cross-Validation' of the ranking of a population. It asks the question: does the use of Cross-Validation provide measurable benefits in terms of speed of evolution, network complexity and performance? Neural networks were used as the agents in the two populations. The technique used to evolve and mutate them, NeuroEvolution of Augmenting Topologies (NEAT), allows neural networks to be evolved using the crossover operation as well as mutation. Cross-Validation was achieved by providing a source of independently evolved neural networks as an additional set of testers during the co-evolution cycle. Results were encouraging and showed that there was indeed an advantage in terms of performance, speed of evolution and network complexity. However, these effects were only present for the first one hundred or so generations, after which the advantage disappeared. This may be related to the game of Trap-the-Cap itself and the parameters used to evolve the players.
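The evaluation scheme described in this abstract can be sketched in a few lines. The Python snippet below is a deliberately minimal stand-in, assuming agents reduced to single numbers and a noisy comparison in place of a Trap-the-Cap match; the dissertation itself uses NEAT-evolved neural networks, so every detail here is an illustrative assumption about the loop structure only.

# Minimal, self-contained sketch of cross-validated co-evolution (not NEAT).
import random

def play_game(a, b):
    """Return 1 if agent a beats agent b in one noisy game, else 0."""
    return 1 if a + random.gauss(0, 0.5) > b + random.gauss(0, 0.5) else 0

def rank(population, opponents):
    """Order a population by wins against a fixed set of opponents."""
    return sorted(population,
                  key=lambda agent: sum(play_game(agent, o) for o in opponents),
                  reverse=True)

def next_generation(ranked, survivors=5):
    """Keep the best agents and refill the population with mutated copies."""
    parents = ranked[:survivors]
    return [random.choice(parents) + random.gauss(0, 0.1) for _ in range(len(ranked))]

pop_a = [random.random() for _ in range(20)]
pop_b = [random.random() for _ in range(20)]
validators = [random.random() + 0.5 for _ in range(10)]   # independently evolved testers

for generation in range(50):
    ranked_a = rank(pop_a, pop_b)           # each population is ranked by playing the other...
    ranked_b = rank(pop_b, pop_a)
    cv_ranked_a = rank(pop_a, validators)   # ...while injected testers cross-validate A's ranking
    pop_a, pop_b = next_generation(ranked_a), next_generation(ranked_b)

print(max(pop_a), max(pop_b))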

[9] B Wyse. Factive / non-factive predicate recognition within Question Generation systems. M801 MSC Dissertation 2009/09, September 2009. [ bib | .pdf ]
The research in this paper relates to Question Generation (QG), an area of computational and linguistic study with the goal of enabling machines to ask questions using human language. QG requires processing a sentence to generate one or more questions relating to that sentence. This research focuses on the sub-problem of generating questions whose answer can be obtained from the input sentence. One issue with generating such questions arises when a proposition in a declarative content clause is taken to be true when it might not actually be. Figure a.1 of the dissertation ('Predicate verbs', not reproduced here) shows two sentences with the same declarative content clause but different predicate verbs; the certainty that the proposition in the clause is true differs between them. A QG system unable to distinguish between such sentences might generate the question 'How many people were at the conference?' Whilst this is grammatically a valid question, it cannot be definitively answered given the first of the two sentences: from that sentence we are not absolutely certain how many people were at the conference, because the speaker is not absolutely certain. In a system designed to generate only questions that can be answered by the input sentence, this is a flaw. The verb 'know' is a factive verb; a factive verb "assigns the status of an established fact to its object" (Soanes and Stevenson, 2005a). The verb 'think' is non-factive; a non-factive verb is one "that takes a clausal object which may or may not designate a true fact" (Soanes and Stevenson, 2005b). This research asks: what is the impact of enabling a QG system to recognise sentences containing these factive or non-factive verbs? Impact was regarded both as the overall impact which such a system might have on QG as a whole and as the quality improvement which might be obtainable. A QG system was written as part of this research, and a sub-task was implemented within it as a software algorithm performing factive/non-factive recognition. This was done using a list of factive and non-factive verbs produced by Hooper (1974), which was expanded using a thesaurus. The expanded list allowed me to determine the frequency of occurrence of factive/non-factive indicators and thus analyse the overall impact. The same list was then used within the QG system to analyse the improvement in question quality. The analysis of factive/non-factive recognition was carried out using the Open University's online educational resource, OpenLearn. OpenLearn was chosen because it is educational material available in a well-marked XML format, which makes it easy to extract particular content. It was found that factive and non-factive verbs are common enough in educational discourse to justify further work on factivity recognition. The effect on precision when generating questions that must be answerable from the input sentence was quite good. It was found that whilst the module was successful in removing unwanted questions, it also removed some perfectly good questions. Previous research has concluded, however, that it is better to generate questions of higher precision, and I agree.
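The gating step described in this abstract can be illustrated with a short sketch. The Python snippet below assumes the question generator already exposes the main predicate verb of the input clause; the verb sets are tiny illustrative samples, not the expanded Hooper (1974) lists used in the dissertation.

# Illustrative factive/non-factive gate; verb lists are small assumed samples.
FACTIVE = {"know", "realise", "regret", "discover"}       # proposition asserted as fact
NON_FACTIVE = {"think", "believe", "suppose", "claim"}    # proposition may not be true

def answerable_from_sentence(predicate_verb: str) -> bool:
    """Keep only questions whose answer is asserted as fact by the source sentence."""
    verb = predicate_verb.lower()
    if verb in NON_FACTIVE:
        return False          # non-factive predicate: suppress the question
    return True               # factive or unlisted predicates pass through

for verb in ("know", "think"):
    print(verb, "->", "generate question" if answerable_from_sentence(verb) else "suppress question")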

[10] D Rizzo. Evaluating the influence of passenger behaviour on aircraft boarding strategies using multi-agent systems. M801 MSC Dissertation 2009/10, October 2009. [ bib | .pdf ]
The efficiency of passenger boarding, especially for short-haul flights, can have a large impact on airline profitability and passenger satisfaction. Several boarding techniques are in use or have been proposed, and simulations and analytical models have been used to compare their performance. The aim of this project was to explore how six boarding techniques are affected by “disturbances” caused by three types of passenger behaviour: choosing the wrong seat, boarding before or after the correct call, and trying to board with the other members of a travelling party. A boarding simulator based on intelligent agents was developed and used to test the influence of these behaviours. The simulator is built on the JADE multi-agent platform and models each passenger as an autonomous software agent running in a separate thread. The aeroplanes, all typical short-haul single-aisle models, are represented as a regular grid of locations, either seats or aisle segments. The simulator was tested against published boarding times drawn from other simulations (Van Landeghem and Beuselinck 2002) and from observations of actual boarding processes (Kimes and Young 1997). Although the exact boarding times were not reproduced, the relative performance of the different boarding methods generally agreed with the published data. The simulator was then used to measure the robustness of the selected boarding methods against varying degrees of disturbance from passenger behaviour, where robustness was defined as low sensitivity to the effects of a disturbance. The main result of this project was that, while no boarding method is fully robust against all the disturbances considered, the so-called longitudinal boarding strategies (boarding groups spanning a large portion of the fuselage, such as windows-middle-aisle and reverse pyramid) performed better than the other methods in every situation, and are therefore to be preferred. This agrees with previous results (Ferrari and Nagel 2005), but under a wider range of conditions.
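As a small illustration of how two of the strategies named in this abstract differ, the sketch below derives boarding groups from a seat grid for a back-to-front block strategy and for windows-middle-aisle. The cabin dimensions, block count and seat lettering are assumptions; the dissertation's JADE simulator models far more detail, including aisle congestion and per-passenger agents.

# Illustrative boarding-group assignment on an assumed single-aisle seat grid.
ROWS, LETTERS = 30, "ABCDEF"            # assumed cabin: letters A-C left of the aisle, D-F right

def back_to_front(row, letter, blocks=5):
    """Group 0 boards first; rear rows are called before front rows."""
    return (ROWS - row) * blocks // ROWS

def windows_middle_aisle(row, letter):
    """Window seats board first, then middle, then aisle, regardless of row."""
    return {"A": 0, "F": 0, "B": 1, "E": 1, "C": 2, "D": 2}[letter]

seats = [(row, letter) for row in range(1, ROWS + 1) for letter in LETTERS]
order_btf = sorted(seats, key=lambda s: back_to_front(*s))
order_wma = sorted(seats, key=lambda s: windows_middle_aisle(*s))
print(order_btf[:6])   # rear-block seats first under back-to-front
print(order_wma[:6])   # window seats first under windows-middle-aisle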

[11] L. Jedrzejczyk, B. A. Price, A. K. Bandara, and B. Nuseibeh. I know what you did last summer: risks of location data leakage in mobile and social computing. Technical Report 2009/11, November 2009. [ bib | .pdf ]
Advances in mobile and web technologies have brought unwanted access to often sensitive data, ranging from personal details such as where we work and where we live to our behavioural patterns. The increasing use of social networks and location-aware mobile applications raises a number of concerns, including the issue of ensuring users' privacy. To explore those concerns we conducted an exploratory study in re-identifying people based on their movements and publicly available information. We observed anonymous users of a location-based social networking application in their natural environment and demonstrated how to re-identify them from that data. In addition to discovering location-based private data, we were also able to uncover a number of facts about their private lives. This article reports on the methodology we used, the ethical issues related to informed consent, and users' reactions to being re-identified.

[12] Marian Petre. Representations for idea capture in early software and hardware development. Technical Report 2009/12, January 2009. [ bib | .pdf ]
This paper presents evidence of how professional software and hardware designers currently capture ideas early in the design process, both individually and in collaboration. The paper reports in detail on a corpus study of over 1000 examples of idea capture representations collected from 15 designers in various design teams over 6 years. Examples include informal notes and sketches which designers made for their personal use, as well as sketches made for discussion at meetings, on whiteboards, in the pub, etc. The paper characterises the corpus, discussing which representations designers use when allowed to choose freely, how designers' informal representations relate to the formal representations from their discipline, how the character of their informal representations facilitates design discussions, and why many of the functions afforded by their sketching are not well supported by existing CAD systems. It discusses what the observations and sketches reveal about requirements for an idea-capture tool that supports collaborative design.

[13] Brian Pluss. Towards a computational pragmatics for non-cooperative dialogue. PhD Probation Report 2009/13, January 2009. [ bib | .pdf ]
Most work in linguistics has approached dialogue on the assumption that participants share a common goal and cooperate to achieve it by means of conversation. In computational linguistics this assumption is even stronger. For instance, most dialogue systems rely on the interlocutor's full cooperation to model interaction. The research described here is aimed at the other cases, at those escaping the norms. Failure to cooperate can happen for many reasons. A non-native speaker trying to engage in a complex discussion might provide contributions which are not as clear and precise as would be expected. A student not quite sure about the topic he is supposed to elaborate on in an oral examination might provide information which is not entirely truthful or relevant. Someone suffering from dementia might produce utterances which are irrelevant or uninformative for the current exchange. These examples have to do with incompetence, ignorance and irrationality, all of which lie outside the scope of our study. We will focus on situations in which non-cooperative conversational behaviour is rational, competent and well-informed. This report is part of the first-year probation assessment for a full-time Ph.D. programme. It provides details about the proposed research question, a review of the relevant literature, the proposed research methodology and a work plan.


This file was generated by bibtex2html 1.98.