Delay-tolerant networking (DTN) offers a novel way to transfer information successfully over severely delayed, disrupted or periodically disconnected networks. Such delays or disruptions would cause the transmission control protocol (TCP) either to fail or to perform extremely poorly. The research presented here describes the creation of a virtualised test environment, including a channel emulator, which was used to show the effects of delays and errors on both TCP and two DTN implementations. The breakdown of two variants of TCP under increasing delay was demonstrated, with TCP Hybla performing much better than the default TCP Reno. The theoretical and actual performance of TCP's retransmission timeouts was also investigated. In tests, retransmission timings neither corresponded to those used by simulation software nor matched the assumptions made in some other research. The test environment was then used to perform experiments on the DTN2 reference implementation and the Interplanetary Overlay Network (ION). These were run over both TCP and UDP and, as expected, showed resilience and better performance as delays increased. Some issues were found with DTN2 running over UDP. However, ION running the Licklider Transmission Protocol was able to transfer an image file successfully over delays representing the distance, in light-seconds, between Mars and the Earth, using the Moon as a router. When the same transfer was attempted using FTP over TCP it failed completely, despite adjustments to timing and retry settings in both FTP and TCP. A comprehensive literature review provides an up-to-date insight into DTN's short history, the state of current research, and the areas that still need addressing or where debate exists.
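The "theoretical" retransmission timings the abstract compares against would typically follow the standard RFC 6298 rules: an initial RTO derived from the first round-trip-time sample, then doubling on each successive timeout. The sketch below is a minimal illustration of those rules, not code from the dissertation; the function names and the 60-second cap are my own illustrative choices.

```python
# Illustrative sketch of TCP's retransmission timeout (RTO) behaviour as
# specified in RFC 6298. Names and the backoff cap are assumptions for
# illustration, not taken from the dissertation's test environment.

def initial_rto(rtt_sample, k=4, g=0.1):
    """First RTO from the first RTT measurement (RFC 6298, section 2.2)."""
    srtt = rtt_sample          # smoothed RTT starts at the first sample
    rttvar = rtt_sample / 2    # RTT variance starts at half the sample
    return max(1.0, srtt + max(g, k * rttvar))

def backoff_sequence(rto, retries, cap=60.0):
    """Exponential backoff: the RTO doubles after each successive timeout."""
    seq = []
    for _ in range(retries):
        seq.append(rto)
        rto = min(rto * 2, cap)  # doubling, bounded by a common 60 s cap
    return seq
```

With a 0.5 s RTT sample the first RTO is 1.5 s, and the sequence 1.5, 3, 6, 12, 24, 48, 60 s follows; under interplanetary one-way delays of minutes, this doubling quickly exceeds any practical retry budget, which is consistent with the FTP-over-TCP failure reported above.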
C Lukeman. Securing Cellular Access Networks against Fraud. http://computing-reports.open.ac.uk/2009/TR2009-02.pdf. 2009. M801 MSc Dissertation, 2009/02.
Despite improvements made in cellular security since first-generation analogue networks, a number of weaknesses remain in UMTS networks. This is made more critical because the UMTS AKA is to be used in fourth-generation LTE networks. At the current time there are no known attacks against UMTS networks, but hackers develop new techniques all the time and computer processors are becoming more powerful, which means this may not always remain the case. As mobile applications move into high-value areas such as mobile commerce and mobile banking, these networks will become more attractive to criminals. Research has highlighted a number of weaknesses in UMTS authentication. A number of research projects have been initiated but, to date, these have not been satisfactory for use in a live network, mainly owing to a lack of compatibility with GSM. The protocol developed here introduced two new ideas to cellular authentication. The first is the use of two-factor authentication based on a chip-and-PIN solution. The second involves a novel way of achieving mutual authentication by using a secret authentication code. A simulation of the protocol was produced using a Java client/server architecture. A series of controlled experiments was then run, testing all known threats against cellular networks, including the highlighted weaknesses. The protocol successfully dealt with all threats and, by not altering the area of the UMTS AKA associated with interworking, ensured compatibility with GSM. Although successful in the tests conducted, the experiments would need re-running using a dedicated network simulation tool such as OPNET, together with a security assessment by an external party, to verify these claims.
T Rogenhofer. Model-based Testing: Transforming SDL-UML models to the Intermediated Format 2.0. http://computing-reports.open.ac.uk/2009/TR2009-03.pdf. 2009. M801 MSc Dissertation, 2009/03.
In 2007, the International Telecommunication Union published an SDL-UML profile. This profile enables the modelling of software systems with the Unified Modelling Language (UML) 2.0 according to the semantics of the Specification and Description Language (SDL). Although it is possible to automatically generate abstract test cases from SDL models, this is not the case for SDL-UML models. This dissertation describes the work done to transform an SDL-UML model to an intermediate text-based language, the so-called Intermediate Format (IF) 2.0. This transformation allows an existing automatic test suite to generate test cases from an IF 2.0 system specification. At the beginning of this research project a comparative analysis was performed to investigate to what extent the SDL-UML profile could be transformed to IF 2.0. The results of this analysis were used for the specification of an SDL-UML model that was transformed to an IF 2.0 system specification, a transformation supported by an Eclipse-based transformation tool. An existing automatic test suite was then used to verify the SDL-UML to IF 2.0 transformation. This verification showed that all elements of the SDL-UML model could be transformed to IF 2.0 except those used for modelling communication channels. For an automatically generated test case, these communication channels are required for sending signals to, and receiving signals from, a system under test. As a result, the SDL-UML model could not be transformed to IF 2.0 for the purpose of automatic test case generation.
A J Moore. Development of an Immersive Environment to Teach Problem Oriented Engineering. http://computing-reports.open.ac.uk/2009/TR2009-04.pdf. 2009. M801 MSc Dissertation, 2009/04.
In this thesis I explore current trends in computer-based learning, and evaluate the application of existing multimedia design principles to 3-D environments, or multi-user virtual environments (MUVEs), which are becoming increasingly popular for the situated learning opportunities they present. I look into how methods of learning in computer-mediated environments have changed, and how this has led to a set of design principles based on cognitive learning. I apply the design principles I have identified to the development of a learning environment for teaching the basic principles of Problem Oriented Engineering. Using student tracking within the environment I create, together with post-experience student questionnaires, I assess the value of the principles used. I find that multimedia design principles have some value in the design of a Second Life learning environment for Problem Oriented Engineering. There is, however, evidence both from my own research and from that of Minocha and Reeves (2009) that Second Life users, like computer gamers becoming familiar with a new game, expect more from an environment as they become more experienced. In particular, the design guidelines identified do not address the issues of immersion or how to design interactions within a 3-D environment. As a result, further work is required to build on the multimedia design principles to help inform the design of 3-D virtual-world learning spaces.
P Meiers. Assembling The Project Compendium. http://computing-reports.open.ac.uk/2009/TR2009-05.pdf. 2009. M801 MSc Dissertation, 2009/05.
Projects have a history of failure, with many going over budget, finishing beyond expected completion dates or not meeting requirements. A successful project is generally one that satisfies budget, schedule, scope and customer expectations. To reduce project failure there has been a movement towards using project methodologies accompanied by their tools, templates and instruments, or what this dissertation refers to as project compendium components. Components help to facilitate the transfer of knowledge; however, much important knowledge is based on feelings and insights that cannot be captured by components. Further, face-to-face communication is often viewed as the best means of knowledge transfer, although it is not always viable for dispersed teams. The question arises: "Which project compendium components are perceived to contribute most towards project success in the minds of project managers (PMs)?" The research involved analysing data from seventy-nine surveys completed by software and information technology (IT) PMs. The results showed that all components were thought to add value to project success provided they are used appropriately. Specifications, business briefs and project initiation documents were perceived to be most necessary, and benefit realisation plans were thought to be the least necessary. Achievement of business objectives and delivery of business benefits were thought by more respondents to be highly relevant to project success than a project being on-budget, on-time and to scope. Many interviewees thought that high-quality components can be used to manage project knowledge effectively, as they help to ensure transparency, availability and accessibility of information. For components to be most effective, they were seen as needing to be used in conjunction with socialisation, or the personal exchange of knowledge, and in an environment where knowledge sharing is fostered. Collaborative software tools were thought to further aid the management of components.
B Bond. Critical Success Factors for enabling Packaged Software to realise the potential Business Benefits. http://computing-reports.open.ac.uk/2009/TR2009-06.pdf. 2009. M801 MSc Dissertation, 2009/06.
Pre-written Packaged Software that can be configured to meet requirements has become the first choice as a means of satisfying an organisation's software application needs. The short history of IT projects is characterised by projects that are difficult to implement and that never realise all of their intended benefits. Software development is one of the hardest of human endeavours, because it attempts to build rule-based models of behaviour based upon an infinitely complex world of humans and their social interactions. The attraction of Packaged Software for businesses is not only that software development has been removed from the equation, the software having already been written, but, more importantly, that it has also been fully tested and proven to work. The move towards the use of Packaged Software started to gain pace in the mid-1990s, and it has been growing in popularity for over ten years. However, despite avoiding the need for organisations to develop their own software, there are still numerous reports in the literature of IT projects failing. This study sets out to identify whether Packaged Software implementations exhibit any special problems and risks when compared to conventional bespoke software projects. Reports in the literature have concentrated on Enterprise Resource Planning (ERP) systems. These are the most expensive examples of Packaged Software systems. They are difficult to implement and carry high risks that have even resulted in bankruptcy for the implementing organisation. However, ERP systems alone do not completely fulfil all the software application needs that an organisation may have. There are many smaller, more specialised packaged systems that tend to be implemented more frequently. This study sets out to see if lessons can be learned from the literature on ERP systems and whether they can be generally applied to all types of Packaged Software implementations.
Studies in the literature have typically looked at implementations from a two-dimensional viewpoint: that of the Business as a whole and that of the Software Vendor/Consultants. However, a typical Packaged Software domain actually involves three groups: the "Business", the IT department and the Software Vendor/Consultants. Using the Delphi technique, views from all three groups were gathered to assess the "Critical Success Factors" and risks associated with Packaged Software. The study also set out to identify the roles and responsibilities of the main stakeholders in a Packaged Software implementation. This study found that for Packaged Software to be cost effective, it requires an understanding of the Business Processes and a willingness to change their design to fully exploit the system's capabilities. This means that Packaged Software implementation is more about designing Business Processes to align with the best practices embodied within the software. If this is not recognised, it can lead to expensive software customisations or to inefficient Business Processes being put in place to support the software. This inevitably brings about more change within the Business, and therefore requires more "change management".
C J Flynn. Cross-Validation of Fitness Scores During Co-evolution Using the 'Trap-the-Cap' Board Game as a Testbed. http://computing-reports.open.ac.uk/2009/TR2009-08.pdf. 2009. M801 MSc Dissertation, 2009/08.
Games have always been a convenient way of testing AI techniques: they have well-defined rules and well-defined outcomes. The reinforcement learning method of co-evolution is investigated using the board game Trap-the-Cap. Co-evolution is used when no teacher is available for game-playing 'agents' to learn from. Essentially, two populations of agents take it in turns to rank each other before mutating and hence evolving. The populations start out as completely naïve Trap-the-Cap players and gradually increase in sophistication over the ensuing generations. A criticism of co-evolution is that each population of agents is used both for training and for testing the other population. This is normally to be avoided, but that is not easy for co-evolution, which was selected precisely because no teacher was available to provide training. This thesis investigates the technique of injecting independent test agents into the co-evolution cycle to provide 'Cross-Validation' of the ranking of a population. It asks the question: does the use of Cross-Validation provide measurable benefits in terms of speed of evolution, network complexity and performance? Neural networks were used as the agents in the two populations. The particular technique used to evolve and mutate them, Neuro-Evolution of Augmenting Topologies (NEAT), allows neural networks to use the cross-over operation as well as mutation. Cross-Validation was achieved by providing independently evolved neural networks as an additional source of testers during the co-evolution cycle. Results were encouraging and showed that there was indeed an advantage in terms of performance, speed of evolution and network complexity. However, these effects were only present for the first one hundred or so generations, after which the advantage disappeared. This may have been related to the game of Trap-the-Cap itself and to the parameters used to evolve players.
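The Cross-Validation idea described above can be sketched as a ranking step in which each agent is scored against both the co-evolving opponent population and a set of independently evolved testers. Everything here, from the stand-in `play` function to the weighting scheme, is an illustrative assumption; the dissertation used NEAT-evolved neural networks playing actual games of Trap-the-Cap.

```python
import random

# Minimal sketch of ranking with injected independent test agents. Agents
# are modelled as dicts with a single "skill" number and games as noisy
# skill comparisons; names and the scoring scheme are illustrative only.

def play(agent_a, agent_b):
    """Stand-in for one game of Trap-the-Cap: 1 if A wins, else 0."""
    return 1 if agent_a["skill"] + random.random() > agent_b["skill"] + random.random() else 0

def rank(population, opponents, testers, test_weight=0.5):
    """Score each agent against the opponent population plus the injected
    independent testers, and return the population sorted best-first."""
    scored = []
    for agent in population:
        co_score = sum(play(agent, opp) for opp in opponents)   # co-evolution score
        cv_score = sum(play(agent, t) for t in testers)         # cross-validation score
        scored.append((co_score + test_weight * cv_score, agent))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [agent for _, agent in scored]
```

The point of the second term is that an agent which merely exploits quirks of the current opponent population scores poorly against the independent testers, so its rank drops, which is one way to read the reported early-generation advantage.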
B Wyse. Factive / non-factive predicate recognition within Question Generation systems. http://computing-reports.open.ac.uk/2009/TR2009-09.pdf. 2009. M801 MSc Dissertation, 2009/09.
The research in this paper relates to Question Generation (QG), an area of computational and linguistic study with the goal of enabling machines to ask questions using human language. QG requires processing a sentence to generate a question or questions relating to that sentence. This research focuses on the sub-problem of generating questions whose answer can be obtained from the input sentence. One issue with generating such questions arises when a proposition in a declarative content clause in a sentence is taken to be true when it might not actually be. Two sentences with the same declarative content clause (underlined) but different predicate verbs (bold) are shown in Figure a.1 ('Predicate verbs'); the certainty that the proposition in the declarative content clause is true differs for each. A QG system without the ability to understand the difference between such sentences might generate the question 'How many people were at the conference?' Whilst this is, grammatically, a valid question, it cannot be definitively answered given sentence (1) of the figure: we are not absolutely certain how many people were at the conference because the speaker in the sentence is not absolutely certain. In a system designed to generate only questions that can be answered by the input sentence, this is a flaw. The verb 'know' is a factive verb; a factive verb "assigns the status of an established fact to its object" (Soanes and Stevenson, 2005a). The verb 'think' is a non-factive, a verb "that takes a clausal object which may or may not designate a true fact" (Soanes and Stevenson, 2005b). This research asks the question: what is the impact of enabling a QG system to recognise sentences containing these factive or non-factive verbs? Impact was regarded both as the overall impact which such a system might have on QG as a whole and as the quality improvements which might be obtainable.
A QG system was written as part of this research, and a sub-task was implemented in this system by writing a software algorithm to perform factive / non-factive recognition. This was done by using a list of factive and non-factive verbs produced by Hooper (1974), which was expanded using a thesaurus. The expanded list allowed me to determine the frequency of occurrence of factive/non-factive indicators and thus analyse overall impact. The same list was then used within the QG system to analyse the improvement in question quality. The analysis of factive / non-factive recognition was carried out using the Open University's online educational resource, OpenLearn. OpenLearn was chosen because it is educational material and is available in a well-marked XML format, which makes it easy to extract particular content. It was found that factive and non-factive verbs are common enough in educational discourse to justify further work on factivity recognition. The effect on precision when generating questions that must be answerable from the input sentence was quite good. It was found that whilst the module was successful in removing unwanted questions, it also removed some perfectly good questions. Previous research has concluded, however, that it is better to generate questions of higher precision, and I agree.
D Rizzo. Evaluating the influence of passenger behaviour on aircraft boarding strategies using multi-agent systems. http://computing-reports.open.ac.uk/2009/TR2009-10.pdf. 2009. M801 MSc Dissertation, 2009/10.
The efficiency of passenger boarding, especially for short-haul flights, can have a big impact on airline profitability and passenger satisfaction. Several boarding techniques are employed or have been proposed, and simulations and analytical models have been used to compare their performance. The aim of this project was to explore how six boarding techniques are affected by "disturbances" caused by three types of passenger behaviour: choosing the wrong seat, boarding before or after the correct call, and trying to board with the other members of a travelling party. A boarding simulator based on intelligent agents was developed and used to test the influence of these behaviours. The simulator is built on the JADE multi-agent platform and models each passenger as an autonomous software agent running in a separate thread. The aeroplanes, all typical short-haul single-aisle models, are represented as a regular grid of locations, either seats or aisle segments. The simulator was tested against published boarding times drawn from other simulations (Van Landeghem and Beuselinck 2002) and from observations of actual boarding processes (Kimes and Young 1997). Though the exact boarding times were not reproduced, the relative performance of different boarding methods generally agreed with the published data. The simulator was then used to measure the robustness of the selected boarding methods against varying degrees of disturbance from passenger behaviour, robustness being defined as low sensitivity to the effects of a disturbance. The main result of this project was that, while no boarding method is fully robust against all the disturbances considered, the so-called longitudinal boarding strategies (boarding groups spanning a large portion of the fuselage, such as windows-middle-aisle and reverse pyramid) performed better than the other methods in every situation, and are therefore to be preferred. This agrees with previous results (Ferrari and Nagel 2005), but under a wider range of conditions.
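The grid-of-locations idea described above can be illustrated with a toy single-aisle model: the aisle is a row of cells, passengers are released one at a time and each moves forward one cell per tick until reaching its seat row. This sketch is an assumption for illustration only; it ignores seat interference, bag stowage and the agent-per-thread JADE architecture, so it shows the simulator's structure rather than its results.

```python
# Toy single-aisle boarding model: aisle[i] holds the target row of the
# passenger currently standing in aisle cell i, or None if the cell is free.
# Row 0 is at the door. All names and rules are illustrative assumptions.

def board(boarding_order, rows):
    """Return the number of ticks needed to seat every passenger, where
    boarding_order is the sequence of passengers' target rows."""
    aisle = [None] * rows
    queue = list(boarding_order)   # passengers waiting at the door
    ticks = 0
    while queue or any(p is not None for p in aisle):
        # sweep from the far end so freed cells can be refilled this tick
        for i in reversed(range(rows)):
            p = aisle[i]
            if p is None:
                continue
            if p == i:                              # reached own row: sit
                aisle[i] = None
            elif i + 1 < rows and aisle[i + 1] is None:
                aisle[i + 1] = p                    # step one row forward
                aisle[i] = None
        if queue and aisle[0] is None:
            aisle[0] = queue.pop(0)                 # next passenger enters
        ticks += 1
    return ticks
```

Even this crude model reproduces the qualitative effect the project relies on: boarding back-to-front (`board([2, 1, 0], 3)`) finishes in fewer ticks than front-to-back (`board([0, 1, 2], 3)`), because front passengers block the aisle for everyone behind them.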