[1] Jeremy Annis. Zombie networks: An investigation into the use of anti-forensic techniques employed by botnets. M801 MSC Dissertation 2008/01, June 2008. [ bib | .pdf ]
The rise in the popularity of the digital marketplace has driven a rise in online crime, manifesting itself in many ways, including: the spread of virus software, websites that "phish" for personal information such as bank account details, malicious software that is capable of logging keystrokes, the theft of information through "ransomware", the sending of spam emails to solicit purchase of non-existent goods, and so on. This exploitation is often carried out by criminal communities with access to large networks of distributed computers, commonly referred to as "botnets". Law enforcement agencies regularly employ computer forensic techniques against these botnets and the criminal communities that control them. This battleground has become more sophisticated over time, and the software that powers a botnet now regularly deploys a growing library of anti-forensic techniques to make analysis harder. This research examines which anti-forensic techniques are in use by botnets throughout the botnet life-cycle. A number of botnets were analysed in a "safe" environment through a series of controlled experiments, using both static code analysis and dynamic execution of the malware. Throughout each experiment, the different types of anti-forensic techniques in use were recorded, and an attempt was made to identify the point in the botnet life-cycle when they were used. The experiments showed that a wide variety of anti-forensic techniques are indeed in use by botnets, posing a considerable challenge to the forensic investigator. A catalogue of these techniques was produced, with an indication of the difficulty each technique might present to the analyst. Program packing (obfuscating the executable code of the botnet) proved to be the most common anti-forensic technique in use; it also presented the greatest difficulty to the forensic analysis process. Many of the other anti-forensic techniques in use by the sample botnets were observed throughout the entire botnet life-cycle, suggesting that when protecting a botnet from forensic analysis, the author is not concerned with what stage of the life-cycle the botnet is in. A correlation was also observed between the quantity and overall difficulty level of the anti-forensic techniques in use and the criminal success a botnet has "in the wild".

[2] J. Dokler. Identification of User Information Needs Based on the Analysis of Local Web Search Queries. M801 MSC Dissertation 2008/02, June 2008. [ bib | .pdf ]
Together with the emergence of the World Wide Web some sixteen years ago there came the promise of instant feedback from users as to what information they want and when they want it. This information could then be used to determine the content and structure of web sites. As is usually the case, reality proved to be more complicated. Although user feedback was indeed instantaneous, the analysis was mostly limited to what people looked at as opposed to what they were looking for. Only in recent years has research focused on the analysis of the search queries that users submit to search engines, and such queries come close to representing what users are looking for. Still, the majority of this research is based on queries from general-purpose search engines. In this dissertation I explore the findings and ideas coming from the work on general search engines (and in parts on local search engines) and try to apply them in the context of a single web site to improve the structure and content of that web site. In particular I explore the idea that it is possible to determine the top-level content elements of a web site from the analysis of the local search queries. Based on this I then proceed to explore the periodic change of a web site’s content and how this information can be used to improve the web site. By implementing two different methods of search query analysis (manual and automatic clustering) and examining the results, I show that search query analysis is a viable potential source of information about a web site’s structure and that identification of periodic changes of content can be used to amend a web site in advance, before the next change occurs.

[3] Jon G. Hall and Lucia Rapanotti. Capturing knowledge through problem oriented engineering. Technical Report 2008/03, 2008. [ bib | .pdf ]
Problem Oriented Engineering (POE) is a formal system for engineering design. In previous work, we have successfully applied POE within the context of software engineering. This paper illustrates the application of POE beyond software to capturing knowledge in a socio-technical context. The problem we are considering is that of publishing. Through a POE analysis we capture the essential elements of the problem and show how scientific journals provide a tried and tested solution to this problem, exposing the rationale behind their success.

[4] Pete Thomas, Shailey Minocha, David King, Josie Taylor, Niall Sclater, and Mat Schencks. Collaborative learning in a wiki environment: Experiences from a software engineering course. Technical Report 2008/04, 2008. [ bib | .pdf ]
The post-graduate course, Software Requirements for Business Systems, in the Department of Computing of the Open University (OU) is one of the early adopters of OU's Virtual Learning Environment. The course involves teaching systematic elicitation and documentation of requirements of software systems. On a software development project, team members often work remotely from one another and increasingly use wikis to collaboratively develop the requirements specification. In order to emulate requirements engineering practice, the course has been enhanced to include group collaboration using a wiki. In this paper, we describe the wiki-based collaborative activities and the on-going evaluation of the pedagogical effectiveness of a wiki for collaborative learning.

[5] Mohammed Salifu, Yijun Yu, and Bashar Nuseibeh. Analysing monitoring and switching requirements using constraint satisfiability. Technical Report 2008/05, 2008. [ bib | .pdf ]
Context-aware applications monitor changes in their environment and switch their behaviour in order to continue satisfying requirements. Specifying monitoring and switching in such applications can be difficult due to their dependence on varying environmental properties. Two problems require analysis: the monitoring of environmental properties to assess their impact on continual requirements satisfaction, and the selection of appropriate behaviours that ensure requirements satisfaction. To address these problems, we provide concepts for refining contextual properties which we then use to formulate two theorems for monitoring and switching. These enable us to formally analyse the impact of context on monitoring and switching problems. We have instantiated our general approach by encoding monitoring and switching problems into propositional logic constraints, which we analyse automatically using a standard SAT solver. The approach is applied to an industrial case study to assess its efficacy.
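
As a rough illustration of the kind of propositional check the abstract describes, the sketch below encodes a toy switching problem directly in Python and tests it by brute force rather than with a SAT solver; the variables, the requirement, and the online/cached behaviours are invented for the example and are not taken from the report.

```python
from itertools import product

# Toy illustration (not the authors' encoding): propositional variables are
# hypothetical context and behaviour flags; the "switching problem" is solvable
# if, in every context, some behaviour assignment satisfies the requirement.

def requirement(a):
    """Hypothetical requirement: data is shown either from the network
    (online mode needs the network) or from a local cache."""
    online_ok = a["use_online_mode"] and a["net_available"]
    cached_ok = a["use_cached_mode"]
    return online_ok or cached_ok

def switching_choice_exists():
    """For each monitored context (net_available true/false), check that at
    least one behaviour assignment satisfies the requirement -- a brute-force
    stand-in for the SAT query described in the report."""
    for net in (True, False):
        ok = any(
            requirement({"net_available": net,
                         "use_online_mode": online,
                         "use_cached_mode": cached})
            for online, cached in product((True, False), repeat=2)
        )
        if not ok:
            return False
    return True

print(switching_choice_exists())  # True: cached mode covers the offline context
```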

[6] David Hardcastle and Richard Power. Generating conceptually aligned texts. Technical Report 2008/06, 2008. [ bib | .pdf ]
A conceptually aligned text is one in which spans are systematically linked to a logical encoding of their meanings. We describe the model of alignment used in WYSIWYM systems, which support knowledge editing through an automatically generated feedback text, and show that this model has a series of practical advantages through which it can support applications requiring semantic interactivity and inference, as envisaged by the Semantic Web community. Some results from an efficient recent implementation are presented, showing the benefits of having all levels of linguistic description available for interactive services.

[7] Jon G. Hall and Lucia Rapanotti. POElog: a Prolog-based engine for problem oriented engineering. Technical Report 2008/07, 2008. [ bib | .pdf ]
Problem Oriented Engineering (POE) is a formal system for engineering design. In previous work, we have successfully applied POE to a variety of problems in the context of software engineering. However, even for design problems of modest complexity, the need for an automated tool to keep track of all design artefacts has become apparent. For scalability to real-world problems a tool is imperative. This paper presents a first-generation tool for POE based on Prolog. It argues how the Gentzen-style basis of POE allows for a compact and elegant Prolog encoding, which we call POElog. A LaTeX-based front-end provides a convenient user interface to POElog for development and testing.

[8] Thein Than Tun, Yijun Yu, Robin Laney, and Bashar Nuseibeh. Recovering problem structures to support the evolution of software systems. Technical Report 2008/08, 2008. [ bib | .pdf ]
Software systems evolve in response to changes in stakeholder requirements. Lack of documentation about the original system requirements can make it difficult to analyse and implement new requirements. Although the recovery of requirements from an implementation is usually not possible, we suggest that the recovery of problem structures, which in turn inform the problem analysis of new requirements, is feasible and useful. In this paper, we propose a tool-supported approach to recover and maintain structures of problems, solutions, and their relationships, for specific new features in an existing system. We show how these recovered structures help with requirements assessment, as they highlight early in the evolutionary development whether it is feasible to implement a new requirement. We validate our approach using a case study of a medium-sized open-source software system.

[9] Ian Kenny. Dynamic, hierarchical particle swarm optimisation. Technical Report 2008/09, 2008. [ bib | .pdf ]
Particle Swarm Optimisation (PSO) is an optimisation technique based on the principle of social influence. It has been applied successfully to a wide range of optimisation problems. This paper considers the possibility of a dynamic hierarchical extension to the particle swarm technique, allowing the swarm to consider several related datasets. This provides the advantage of being able to consider several data scans and aggregate the results into a master swarm model.
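
For readers unfamiliar with the technique named above, the following is a minimal global-best PSO sketch in Python; the objective function, parameter values and bounds are conventional illustrative choices, not those of the report, and the hierarchical extension it investigates is not shown.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO sketch (illustrative only)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    gbest = min(zip(pbest_val, pbest))[1][:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive (personal) + social (global) pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Example: minimise the sphere function; the optimum is at the origin.
print(pso(lambda x: sum(v * v for v in x)))
```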

[10] Thein Than Tun, Yijun Yu, Robin Laney, and Bashar Nuseibeh. Recovering problem structures from execution traces. Technical Report 2008/10, 2008. [ bib | .pdf ]
Software systems evolve in response to changes in stakeholder requirements. Lack of documentation about the original system can make it difficult to analyze and implement new requirements. Although automatic recovery of all requirements from an implementation is usually not possible, we suggest that the recovery of problem structures, which in turn inform the problem analysis of new requirements, is feasible and useful. In this paper, we propose a tool-supported approach to recover and maintain structures of problems, solutions, and their relationships, by recovering causal control and data dependencies between components. Extracting low-level program structures is done fully automatically, while higher-level descriptions of problem structures are obtained interactively. We validate our approach using a case study of a medium-sized open-source software system. Keywords: Causality, Dynamic Data Dependency Analysis.

[11] Thein Than Tun, Rod Chapman, Charles Haley, Robin Laney, and Bashar Nuseibeh. Introducing new features to a critical software system. Technical Report 2008/11, 2008. [ bib | .pdf ]
In response to changing requirements and other environmental influences, software systems are increasingly developed incrementally. Successful implementation of new features in existing software is often difficult, whilst many software systems simply 'break' when features are introduced. Size and complexity of modern software, poor software design, and lack of appropriate tools are some of the factors that often confound the issue. In this paper, we report on a successful industrial experience of evolving a feature-rich program analysis tool for dependable software systems. The experience highlights the need for a development framework to maintain rich traceability between development artifacts, and to satisfy certain minimal necessary conditions of artifacts during and after the implementation of a new feature.

[12] Ian Kenny. A brief history and critique of the developments in particle swarm optimisation. Student Research Proposal 2008/12, 2008. [ bib | .pdf ]
Swarm Computing (often called Swarm Intelligence; my preferred term is Swarm Computing) is a relatively new optimisation paradigm. The basic premise is to model natural phenomena such as swarms, flocks and shoals in order to solve nonlinear problems. There are currently two main heuristic techniques: Ant Colony Optimisation (ACO), developed by Dorigo [81], and Particle Swarm Optimisation (PSO), developed by Kennedy and Eberhart [187]. Ant Colony Optimisation attempts to model the pheromone trails of ants whilst they search for food. Particle Swarm Optimisation attempts to model bird flocks or swarms of bees by modelling their collective social influence. I have decided to concentrate on Particle Swarm Optimisation in my research. I consider it has greater unexplored potential, especially as it does not require the problem to be graphable. Much of the current research on particle swarm optimisation concerns the correct selection of the runtime parameters to ensure convergence. I therefore want to explore my conjecture that it is a more flexible approach to solving optimisation problems. In the vast majority of real-world applications to which PSO has been applied, it is used as a data preprocessor for a neural network or similar post-processing system. Swarm Computing has not been applied to very many real-world applications. Whilst the travelling salesman problem and other similar standard optimisation problems all have applications, my intention is to explore an application of swarm computing, specifically particle swarm optimisation, to a real-world problem directly. Ideally I'd like to apply PSO to EMG or EEG data. My ideas centre on two Family or Cultural swarm questions: Can a hierarchy of swarms be introduced to PSO to increase diversity and encourage the spread of information across the solution space? Can that information be used effectively to guide the collective swarm to a better solution?

[13] Thein Than Tun, Robin Laney, Michael Jackson, and Bashar Nuseibeh. Tool support to derive specifications for conflict-free composition. Technical Report 2008/13, 2008. [ bib | .pdf ]
Finding specifications of pervasive systems is difficult because it requires making certain environmental assumptions explicit at design-time, and describing the software in a way that facilitates runtime composition. This paper describes how a systematic refinement of specifications from descriptions of the system's environment and requirements can be automated. Our notion of requirements allows individual features in the system to be inconsistent with each other. Resolution of conflicts at design-time is often over-restrictive because it uses the strongest possible conditions for conjunctions and rules out many possible interactions between features. In order to support runtime resolution, our tool examines specifications for potential conflicts and augments them with information to enable detection at runtime. We use a form of temporal logic, the Event Calculus, as our formalism, and characterize the refinement of requirements as a kind of abductive planning. This allows us to use an existing Event Calculus planning tool, implemented in Prolog, as a basis to develop a reasoning tool for obtaining specifications from potentially inconsistent requirements. We validate our tool by applying it to find specifications of smart home software.

[14] Debra Trusso Haley, Pete Thomas, Marian Petre, and Anne De Roeck. EMMA - a computer assisted assessment system based on latent semantic analysis. Technical Report 2008/14, 2008. [ bib | .pdf ]
We present EMMA (ExaM Marking Assistant), a Latent Semantic Analysis (LSA) based Computer Assisted Assessment (CAA) system we have developed as part of ELeGI (www.elegi.org), the European ELeGI Integrated Project whose vision is a semantic Grid for human learning: the implementation of future learning scenarios based on ubiquitous, collaborative, experiential-based and contextualized learning through the design, implementation and validation of the Learning Grid. Assessment is an important component of learning and can have a strong impact on student progress. EMMA can provide both formative and summative assessment that is unbiased and repeatable, as well as the almost instant feedback that is most useful for student learning. Our work has demonstrated that, even though the theory of LSA is over 15 years old, many of the details that make LSA a practical assessment technique are not known by the research community beyond the original LSA developers. In this paper, we summarise what we have learned about LSA, give an overview of how EMMA works, describe the types of questions that EMMA can assess, evaluate its results as compared to human markers, and outline a plan for further research.
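
To make the LSA machinery behind a system like EMMA concrete, here is a tiny illustrative sketch using numpy: a term-by-document matrix is reduced with SVD, and a student answer is compared to a model answer by cosine similarity in the reduced space. The corpus, term list, weighting and choice of k are invented for the example and do not reflect EMMA's actual configuration.

```python
import numpy as np

terms = ["requirement", "stakeholder", "elicit", "swarm", "particle"]
docs = np.array([  # term-by-document count matrix (rows = terms)
    [3, 2, 0, 0],
    [2, 1, 0, 0],
    [1, 2, 0, 0],
    [0, 0, 3, 1],
    [0, 0, 2, 2],
], dtype=float)

U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2                                   # reduced number of dimensions
Uk, sk = U[:, :k], np.diag(s[:k])

def fold_in(term_counts):
    """Project a new (e.g. student answer) term vector into the reduced space."""
    return term_counts @ Uk @ np.linalg.inv(sk)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model_answer = fold_in(np.array([2.0, 1.0, 1.0, 0.0, 0.0]))
student_answer = fold_in(np.array([1.0, 1.0, 0.0, 0.0, 0.0]))
print(cosine(model_answer, student_answer))   # similarity used as a marking signal
```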

[15] Debra Trusso Haley, Pete Thomas, Marian Petre, and Anne De Roeck. Using a new inter-rater reliability statistic. Technical Report 2008/15, 2008. [ bib | .pdf ]
This paper discusses methods to evaluate Computer Assisted Assessment (CAA) systems, including some commonly used metrics as well as unconventional ones. I found that most of the methods to measure automated assessment reported in the literature were not useful for my purposes. After much research, I found a new metric, the Gwet AC1 inter-rater reliability (IRR) statistic (Gwet, 2001), that is a good solution for evaluating CAAs. Section 1.6 discusses AC1, but first I describe other possible metrics to motivate why I think that AC1 is the best available for evaluating an automated assessment system. I focus on two types of metrics that I label external and internal metrics. External metrics can be used for reporting and sharing results. Internal metrics are used for comparing results within a research project. Producers of CAAs need an easily understandable external metric to report results to consumers of CAAs, i.e., those wishing to use a particular system. In addition to reporting results to potential consumers, researchers may wish to share their results with other researchers. Finally, and perhaps most important for this dissertation, producers need an internal metric to quickly compare the results of selecting different parameters of the assessment algorithm. Many choices need to be made when implementing an LSA-based marking system. The LSA literature frequently leaves many of these choices unspecified, including number of dimensions in the reduced matrix, amount and type of training data, types of pre-processing, and weighting functions. The choice of these parameters is an intrinsic aspect of building an LSA marking system. Therefore, researchers need an adequate way to measure and compare the results of the various selections, as I shall explore in this chapter.
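
As a concrete (and hedged) reading of the statistics mentioned above, the sketch below computes percent agreement, Cohen's kappa and Gwet's AC1 for two raters; the AC1 chance-agreement term follows my understanding of Gwet (2001), and the example labels are invented.

```python
from collections import Counter

def agreement_stats(rater1, rater2, categories):
    """Percent agreement, Cohen's kappa and Gwet's AC1 for two raters."""
    n = len(rater1)
    p_a = sum(a == b for a, b in zip(rater1, rater2)) / n

    c1, c2 = Counter(rater1), Counter(rater2)
    # Cohen's kappa: chance agreement from the product of the raters' marginals.
    p_e_kappa = sum((c1[q] / n) * (c2[q] / n) for q in categories)
    kappa = (p_a - p_e_kappa) / (1 - p_e_kappa)

    # Gwet's AC1: chance agreement from the average marginal proportion pi_q,
    # as I read Gwet (2001): p_e = sum_q pi_q * (1 - pi_q) / (Q - 1).
    p_e_ac1 = sum(
        ((c1[q] + c2[q]) / (2 * n)) * (1 - (c1[q] + c2[q]) / (2 * n))
        for q in categories
    ) / (len(categories) - 1)
    ac1 = (p_a - p_e_ac1) / (1 - p_e_ac1)
    return p_a, kappa, ac1

# Invented per-sentence labels from two hypothetical markers.
r1 = ["judgemental", "neutral", "judgemental", "neutral", "neutral", "neutral"]
r2 = ["judgemental", "neutral", "neutral",     "neutral", "neutral", "neutral"]
print(agreement_stats(r1, r2, ["judgemental", "neutral"]))
```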

[16] Lucia Rapanotti and Jon G. Hall. Problem oriented engineering in action: experience from the frontline of postgraduate education. Technical Report 2008/16, 2008. [ bib | .pdf ]
The paper reports on the early phases of a development project aimed at a highly innovative post-graduate research programme for the Open University, UK, a world leader in supported distance higher education. The new programme is a part-time MPhil to be delivered at a distance, supported by a blend of synchronous and asynchronous technologies. After a brief description of the project and the complexity of the task at hand, the paper discusses how Problem Oriented Engineering, an emergent engineering design framework, was adopted on the project, and its performance in this complex real-world setting. The outcome of the study will be of relevance to practitioners who wish to adopt the approach in tackling real-world development problems, as well as researchers who are striving to improve the framework for practical adoption.

[17] Patricia Roberts. Criteria for assessing object-relational quality. Technical Report 2008/17, 2008. [ bib | .pdf ]
Object-relational databases combine traditional relational database design with object-oriented features. Although object-relational features have been available to database designers for years, the best way to use these features has not been fully evaluated. The approach presented here for assessing the relative quality of object-relational database designs draws on approaches for measuring quality in software engineering and conceptual modelling. This paper proposes a set of four criteria to help assess whether a particular design displays the characteristics of quality: integrity, simplicity, flexibility and seamlessness. The criteria are set in the context of a framework for assessing quality that considers the viewpoints and priorities of different stakeholders in the design: for example, Systems Analysts, Business Analysts, Application Developers and Database Administrators. Future work will address the issue of whether these criteria are sufficient to distinguish between good and bad O-R designs. Once the framework has been used, the criteria will be revisited to examine whether they have been useful in deriving judgements about the overall quality of the designs. Furthermore, the criteria will be assessed to find whether they could be useful in a wider context.

[18] Yijun Yu, Michel Wermelinger, Haruhiko Kaiya, and Bashar Nuseibeh. Depiction of additional node-related elements in graph-based software visualisations. Technical Report 2008/18, 2008. [ bib | .pdf ]
Many of the ways to depict software are based on graphs, although what nodes and edges represent differs from visualisation to visualisation. In this paper we present a light-weight approach to enrich graph-based visualisations so that nodes can represent more information. The idea is to show in each node a rectangle of pixels, each representing a certain element associated with the node, and the colour of each pixel representing up to three attributes of that element. The order of the pixels is user defined and may convey additional information. The approach is generic and allows data obtained through completely different reverse engineering processes to be shown together in a compact way that preserves the meaning of the graph layout. We illustrate our approach by showing how software architecture and defects can be related: a graph depicting the high-level components and their dependencies is enriched with information about the bugs reported for each component.
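
A rough sketch of the pixel-rectangle idea described above, assuming numpy and matplotlib are available: each element associated with a node becomes one pixel whose RGB colour encodes up to three normalised attributes. The element names, attribute values and grid packing are invented for the illustration and are not the paper's actual encoding.

```python
import math
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical elements of one node: name -> (attr1, attr2, attr3) in [0, 1],
# e.g. normalised size, churn and bug count.
elements = {
    "parser.c": (0.9, 0.2, 0.7),
    "lexer.c":  (0.4, 0.1, 0.2),
    "eval.c":   (0.6, 0.8, 0.9),
    "gc.c":     (0.3, 0.5, 0.1),
    "main.c":   (0.1, 0.0, 0.0),
}

def node_rectangle(attrs):
    """Pack the elements of one node into a near-square RGB pixel rectangle."""
    n = len(attrs)
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    img = np.ones((rows, cols, 3))            # unused cells stay white
    for i, rgb in enumerate(attrs.values()):  # pixel order is user defined (here: insertion order)
        img[i // cols, i % cols] = rgb
    return img

plt.imshow(node_rectangle(elements), interpolation="nearest")
plt.title("One node of the dependency graph")
plt.show()
```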

[19] Richard Morris. Augmenting collaborative tasks with shareable interactive surfaces. Student Research Proposal 2008/19, 2008. [ bib | .pdf ]
This report describes my research topic: augmenting collaborative tasks with shareable interactive surfaces. It outlines the context of this research with respect to existing theoretical and empirical findings and details a work plan to completion. Here I briefly describe how the research question has been shaped through a review of the literature and interviews with professionals in fields with tasks that could potentially be augmented with the use of Shareable Interactive Digital SurfacES (SIDES).

[20] Nadia Pantidi. A study of the use and appropriation of multipurpose technology-rich spaces. Student Research Proposal 2008/20, 2008. [ bib | .pdf ]

[21] D. Foreman. Evaluating semantic and rhetorical features to determine author attitude in tabloid newspaper articles. M801 MSC Dissertation 2008/23, June 2008. [ bib | .pdf ]
This dissertation investigates the potential of families of machine learning features to improve the accuracy of a semantic orientation classifier that assesses the attitudes of tabloid journalists towards the subjects of their opinion piece articles. A category of language, 'language of judgement', is defined by which a journalist expresses an opinion matching his overall opinion of an article's subject matter. When the existence of 'language of judgement' was investigated, high inter-annotator agreement on per-document author attitude was found (values of Fleiss' and Cohen's kappa were both 0.845), along with moderate agreement on per-sentence classification of judgemental or non-judgemental language (Fleiss' kappa of 0.507 and Cohen's kappa of 0.499). Three families of feature sets were defined to detect this language. The first family, 'semantic features', motivated by consideration of the theory of journalism, tags repetitions of nouns that are either located in particular sections of the article or occur multiple times in the article as potential language of judgement. The second and third families, 'rhetorical features', draw on Mann and Thompson's Rhetorical Structure Theory. For the second family, rhetorical relations are tagged to indicate the presence of potential language of judgement. For the third family, rhetorical relations are considered to mark potential shifts into and out of language of judgement. Areas of articles between tags from the first family and tags from the second family are tagged with features from this third family, to indicate that the sentence is potentially within an area of language of judgement bounded by these rhetorical relations. The feature sets were not very productive in acquiring judgemental language, together or separately. Precision of 0.405 for combined features was low but exceeded the overall percentage of judgemental language (32.8 percent). Recall of 0.162 was very low. While experimentation with the testing corpus did not give strong evidence for the value of the feature sets, cross-validation tests on the training corpus showed greater potential, achieving precision of 0.520 and recall of 0.200. Inspection of learning curves created with the training corpus for the combination of all features showed that learning of judgemental language was taking place. This was also true for the 'rhetorical' second and third families when they were investigated separately, but was not seen for the first family of features. Weaknesses in corpora construction methodology are considered potentially responsible for differences in results between corpora; suggested changes to remedy this, if more opinion piece articles can be collected, are described. When classifying per-document author attitude, using human-annotated language of judgement was seen to improve the accuracy of a semantic orientation classifier that used Turney's PMI-IR algorithm (in comparison to use of all language in a document). However, classification using language selected by the machine learning method did not lead to a similar improvement. The low precision and recall for acquisition of language of judgement obtained on testing corpus data is considered a likely cause of this.
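
For context on the classifier mentioned above, here is a small sketch of a semantic-orientation score in the spirit of Turney's PMI-IR. In the original algorithm the hit counts come from search-engine queries using a NEAR operator; here they are passed in by the caller, and the example phrase and numbers are invented.

```python
import math

def semantic_orientation(hits_near_excellent, hits_near_poor,
                         hits_excellent, hits_poor, smoothing=0.01):
    """Semantic orientation of a phrase, Turney-style: positive if the phrase
    co-occurs more with "excellent" than with "poor". A small smoothing term
    avoids division by zero when a count is zero."""
    numerator = (hits_near_excellent + smoothing) * (hits_poor + smoothing)
    denominator = (hits_near_poor + smoothing) * (hits_excellent + smoothing)
    return math.log2(numerator / denominator)

# Hypothetical counts for the phrase "shameless hypocrisy" in a tabloid corpus.
print(semantic_orientation(hits_near_excellent=3, hits_near_poor=40,
                           hits_excellent=5000, hits_poor=4500))  # negative score
```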

[22] A. Kemble. Forensic Computing: Use of Linux Log Data in USB Portable Storage Device Artefact Analysis. M801 MSC Dissertation 2008/24, June 2008. [ bib | .pdf ]
Portable storage devices (PSDs) can be very useful but they pose a big security risk. News reports regularly describe companies and government departments losing personal and confidential data. The consequences can involve potential for identity fraud, contract termination and threats to national security. In the event of a suspected security breach, an organisation may investigate to determine the extent of the problem and find those responsible. Most computer use results in artefacts remaining on the computer long after the activity occurred. These artefacts may be used in a forensic investigation to identify the actions that took place. In an investigation of USB portable storage device usage, the user, storage device, time of use and purpose would need to be determined to identify a case of misuse. A series of experiments were performed to study the data available on a Linux computer with various logging configurations. A forensic investigation method was adopted from the current literature and evolved during the project. The results show that the default configuration of a given Linux distribution does not provide enough evidence to satisfy a forensic investigation into USB flash drive usage, but improvements can be made by modifying the logging software configuration. The project delivers an evaluation of the native Linux logging software and provides a recommendation of the most effective at recording PSD artefacts. The project also provides a tested investigation procedure that helps determine what PSD usage has taken place on a Linux computer.

[23] R. Livermore. A multi-agent system approach to a simulation study comparing the performance of aircraft boarding using pre-assigned seating and free-for-all strategies. M801 MSC Dissertation 2008/25, June 2008. [ bib | .pdf ]
Achieving true efficiency is an important commercial driver for airlines and can be of huge value in differentiating them in a competitive marketplace. The aircraft boarding process remains a relatively unstudied area in this regard and is perhaps one of the few remaining standard airline operations where significant improvements may still be delivered. Studies to date have focused on improving the process by applying varying levels of control to passenger ordering as they enter the aircraft. However, passenger actions and interactions are, by their nature, governed by an element of chance, and so the natural state of the boarding system tends towards randomness. In acknowledgement of this fact, this simulation-based study investigates the performance of the boarding process when controls are relaxed to a greater or lesser degree. It investigates whether multi-agent systems are appropriate for simulating stochastic processes, by comparison with baseline results, and whether they allow real conclusions to be drawn on the relative merits of different boarding systems. The results produced by this work cannot be statistically proven to be the same as the baseline, and thus it cannot be said in this context that multi-agent systems are appropriate for simulating stochastic processes. However, in relative terms, the findings of this work do appear to follow the patterns hypothesised in earlier studies - that is, that boarding using pre-assigned seating, but with no correlation between the order passengers enter the aircraft and the position of their seat, is preferable over a range of different scenarios to free-for-all boarding. This has allowed useful future work to be identified that will ensure that the results presented in this study are built upon in a more comprehensive manner, to develop a fuller picture of the types of passenger interaction and interference that cause differential performance across boarding strategies.

[24] A. Nkwocha. Design Rationale Capture with Problem Oriented Engineering: an Investigation into the Use of the POE Framework for the Capture of Design and Architectural Knowledge for Reuse within an Organisation. M801 MSC Dissertation 2008/26, June 2008. [ bib | .pdf ]
Design rationale in software engineering fills in the gaps between the original requirements of a system and the finished product, encompassing decisions, constraints and other information that influences the outcome. Existing research in this field corroborates the importance of design rationale for the evolution of existing systems and the creation of new systems. Despite this, the practice of design rationale capture and reuse is not as extensive as could be expected, for reasons which include time and budget constraints and a lack of standards and tools. The capture of design rationale during software design activities carried out using Problem Oriented Engineering (POE) was demonstrated with the use of a case study. POE is a formal system for engineering design that provides a framework for the resolution of software problems in a stepwise manner. A review of literature on design rationale, its capture and management yielded a list of elements used as the criteria for identifying design rationale in the information gathered during the case study. Examination of that information revealed that all the identified elements were recorded, and led to the conclusion that design rationale is captured when solving a software problem using POE. Examination of the flow of information that occurred during the execution of the case study led to the conjecture that design rationale recorded during the case study could be reused. Successful reuse would, however, depend on the effectiveness of the categorisation, storage and organisation of the information gathered.

[25] I. Ostacchini. Managing assumptions during agile software development. M801 MSC Dissertation 2008/27, June 2008. [ bib | .pdf ]
Software plays an increasingly critical role in our world, yet the assumptions that underlie software development often go unrecorded; these assumptions can fail at any time, with serious consequences. This research evaluates a lightweight approach to assumption management (AM), designed to complement the agile software development methods that are gaining in popularity. Key AM tasks were drawn from previous research, and implemented over three months within a small, agile software development team. A simple database was developed for recording and monitoring assumption information. Thirty-three assumptions were recorded during the three months; a further 17 failed assumptions were recovered from the preceding three months. Two key indicators were proposed for measuring whether AM had been successful. Only one of these indicators was detected in the research results; a longer research timeframe would be required for a more conclusive analysis. A number of strong correlations were found between properties of assumptions. While the data collected depended to a large degree on the subjective estimates of the author, these judgements were validated with some success by his colleagues. In some ways, assumption management was found to be a good fit for agile development; however, the AM process was not successfully integrated into the team's development process, due to a difficulty in adapting to the required 'assumption-aware' way of thinking. Advice is offered to researchers seeking to ease this transition, and to those looking to conduct further studies in assumption management.

[26] A. Thorpe. Synthesising Test-based justification of Problem Oriented Software Engineering. M801 MSC Dissertation 2008/28, September 2008. [ bib | .pdf ]
Problem Oriented Software Engineering (POSE) is a young framework supporting requirements and design specification. POSE allows a blend of formal and non-formal approaches. Much of POSE research has been concerned with safety-critical systems, where a justification case is required by legislation. Hall et al. (2007a) suggested that an alternative, and as yet undefined, method of justification based on testing may be cheaper than the existing approach. Also, to date there has been no research into the relationship between testing and POSE. The project identifies an approach to test-based justification of POSE. I arrived at this through a synthesis of observations (based on writing a POSE specification and associated test designs), professional experience, and literature review. My approach has three incremental levels of justification detail, with each representing a decrease in risk, and an increase in cost, from the previous one. These levels are framed to describe the relationship between quality assurance, project management, development methodology and POSE. A by-product of this work has been a clearer understanding of the relationship between POSE and testing within the software development life-cycle. This project is likely to be of interest to those using POSE for a development project, including quality assurance members, project managers, development managers, designers, testers, clients, and the POSE research community.

[27] J. Tredgold. An assessment of the analytical benefits afforded by a timeline visualisation of Semantic Web data with temporal properties. M801 MSC Dissertation 2008/29, September 2008. [ bib | .pdf ]
The vast amount of data on the World Wide Web is, for the most part, authored to be easy for humans to comprehend, rather than for machines to parse. This is good when humans want to read a page, but not so good when they want machines to search out information on their behalf, for instance, to find the best route and price for an upcoming trip. Activity characterised as the Semantic Web is attempting to create a web of structured data that can be utilised by network-based software applications. Data such as that found on Wikipedia is now available on that web. This project sought to evaluate some of the potential benefits of being able to build rich interactive applications to access and analyse data obtained from this new web. To do this, a prototype timeline application was built, backed by a subset of Semantic Web Wikipedia data. It was evaluated alongside traditional Wikipedia access, with results showing efficiency and accuracy gains. This suggests that, for a class of queries, the approach taken could provide a useful addition to the traditional routes of data discovery.


This file was generated by bibtex2html 1.98.