[1] Catalina Hallett, Donia Scott, and Richard Power. Evaluation of the CLEF query interface. Technical Report 2006/01, 2006. [ bib | .pdf ]
As part of the CLEF project, we have developed a method for allowing subject matter experts to pose complex queries to a database. The method makes use of natural language generation, whereby users compose queries through interaction with a dynamic text that is generated 'on the fly'. We present here some first-step evaluations that we have conducted with our modest pool of available testers, the numbers of which are constrained for practical reasons having to do with data security. The results allow us to draw some useful conclusions about the usability of the method, and its training requirements, for subjects that are representative of our 'typical' intended end-users.

[2] Lucia Rapanotti, Jon G. Hall, and Zhi Li. Problem reduction: a systematic technique for deriving specifications from requirements. Technical Report 2006/02, 2006. [ bib | .pdf ]
In this paper we explore the notion of problem reduction as a systematic transformation from requirements to specifications. We adopt the notion of problem as a requirement in a real-world context for which a software solution is sought, and view the process of software development as a problem-solving process, leading ultimately, and hopefully, to a solution which satisfies the requirement in its context. In this paper, we focus on how a solution specification can be derived from a requirement, and introduce problem reduction as a systematic transformation to achieve this. We reflect on how problem reduction captures requirements engineering practices, express it in the context of Problem Frames and provide a set of rules for its application. The intention of the work is to increase the understanding of the problem solving process as well as to provide techniques to support sound engineering practices.

[3] Hans Ahlfeldt, Lars Borin, Philipp Daumke, Natalia Grabar, Catalina Hallett, David Hardcastle, Dimitrios Kokkinakis, Clara Mancini, Kornel Marko, Magnus Merkel, Christian Pietsch, Richard Power, Donia Scott, Annika Silvervarg, Maria Toporowska Gronostaj, Sandra Williams, and Alistair Willis. Literature review on patient-friendly documentation systems. Technical Report 2006/04, 2006. [ bib | .pdf ]
This literature review forms a deliverable in the European Network of Excellence on Semantic Interoperability and Data Mining in Biomedicine. More specifically, it is part of a work package (WP27) which aims to develop and evaluate generic methods and tools for assisting patients to understand their health and healthcare by generating patient-friendly readable texts that paraphrase the content of their electronic health records. We have reviewed the literature in topics that we consider to be relevant to this work package. When appropriate, we cover variations in conditions in the four countries of the collaborating research groups (France, Germany, Sweden and the UK) and we cover corpora, tools and language technologies for the European languages of interest to these groups.

First, we consider legal issues involved in patients gaining access to their medical records. Who can view the records? What data do they have the right to access? Are there any data that patients cannot access? Who can access records of dead patients? What about security and data protection? See chapter 2 for brief surveys of the current state of affairs in France, Sweden and the UK.

Patient records are packed with jargon, acronyms and medical terms that clinical staff know and understand. Often there is a learning curve before patients become familiar with medical terms associated with their own particular illnesses, and they may require more familiar words and phrases to describe medical concepts in an accessible form. The development of large-scale medical term banks, thesauri and lexicons (e.g. UMLS Specialist and Metathesaurus) enables Language Technology developers to generate reports for medical staff, but how can we communicate the same concepts to patients? We review the current literature on communicating technical medical terms in everyday language for patients and related issues. See chapter 3.

Our survey on computational methodologies for generating patient-friendly texts included the following topics: extraction of terminologies from corpora, comparison of terms from different sources, automatic analysis of patient records, use of ontologies, logics to model terminological use and changes, text simplification and dialogue systems. See chapter 4.

There have been many past NLG systems that generated output aimed at patients or doctors. We present an overview of these systems and compare them in 5 dimensions: 1. application area, 2. knowledge used (domain knowledge, generic medical knowledge and linguistic knowledge), 3. user models and personalisation, 4. system evaluation, 5. use of hypertext. See chapter 5.

We have reviewed work, particularly in medical informatics, on automatic translation of technical medical language into language aimed at patients. Chapter 6 presents a survey of such systems.

A number of empirical studies with patients have focused on the question of whether the provision of personalised information for patients is superior in various ways to general information. Will personalising information help patients be better-informed? Will it help them manage their illnesses better and comply with medical guidelines? Will it help them to take their medication in the correct manner? Will such information ultimately reduce hospital admissions? See chapter 7 for a survey of this literature.

Our survey of existing corpus annotation tools, see chapter 8, describes existing tools and what they do. It also includes their availability, the languages they cover, their formats, platforms and locations. The tools are classified into: 1. tools for orthographic annotations (document information and document structure), 2. tools for linguistic annotations (e.g. tokenization, stemming, morphology, POS, syntax), 3. tools for semantic annotations (discourse-level, semantic tags and UMLS tags), 4. workbench and IDE tools.

Our survey of existing corpora of patient information addressing patient information needs includes corpora that language engineers have used in the past in building systems, as well as general linguistic corpora, medical corpora and others. See chapter 9.

Chapter 10 surveys the Internet as a corpus, including access to and potential use of the web, Usenet, email and Internet relay chat.

[4] Chris Mellish, Donia Scott, Lynne Cahill, Roger Evans, Daniel Paiva, and Mike Reape. A reference architecture for natural language generation systems. Technical Report 2006/05, 2006. [ bib | .pdf ]
We present the RAGS (Reference Architecture for Generation Systems) framework: a specification of an abstract Natural Language Generation (NLG) system architecture to support sharing, re-use, comparison and evaluation of NLG technologies. We argue that the evidence from a survey of actual NLG systems calls for a different emphasis in a reference proposal from that seen in similar initiatives in information extraction and multimedia interfaces. We introduce the framework itself, in particular the two-level data model that allows us to support the complex data requirements of NLG systems in a flexible and coherent fashion, and describe our efforts to validate the framework through a range of implementations.

[5] Paul Piwek, Richard Power, and Sandra Williams. Generating scripts for personalised medical dialogues for patients. Technical Report 2006/06, 2006. [ bib | .pdf ]
We propose an NLG system for communicating information to cancer patients about their medical records through scripted dialogues between animated agents. Building on past work, we discuss design issues and research questions, particularly the question of how to convey medical concepts in terms of everyday concepts. Our novel solution incorporates the information to be communicated into a personalised dialogue script between an expert medic and a novice medic. We discuss ongoing work to combine an existing NLG system that generates summaries of patient records with another existing NLG system that generates scripted dialogues for animated agents. We plan to evaluate the completed system with medical staff and patients.

[6] Lucia Rapanotti, Jon G. Hall, and Michael Jackson. Problem transformations in solving the package router control problem. Technical Report 2006/07, 2006. [ bib | .pdf ]
The paper describes a problem analysis, from early requirements through to design, to devise a controller for a package router. The analysis is based on the representation and systematic transformation of the problem and its parts. The intent of the paper is to provide an example of detailed and systematic analysis by using a problem-oriented approach that brings together ideas from the original problem frames approach with subsequent elaboration and extensions by the authors.

[7] John Brier, Lucia Rapanotti, and Jon G. Hall. Capturing change descriptions as patterns in an organization's changing socio-technical system. Technical Report 2006/08, 2006. [ bib | .pdf ]
In organisations, competitive advantage is increasingly reliant on the alignment of sociotechnical systems with business processes. These are complex and volatile due to the rapid pace of marketplace change. An organisation's success increasingly relies on its adaptability. Within this changing environment, our work aims at developing tools that encourage an employee-driven continuous process improvement opportunity when considering the deployment and use of computing. Our focus is the analysis and synthesis of change, captured as a codification of recurrent patterns of change in a suggested notation of Change Frame Diagrams. In this paper we extend our work by reinforcing, with description, the change context modelled between 'before' and 'after' change scenarios, and their comparison. We identify context-specific variables that support reasoning about the context in which change takes place. We exemplify our approach on a more complex real-world example than previously, resulting in a further, potential, Change Frame Diagram.

[8] Donia Scott and Johanna Moore. An NLG evaluation competition? Eight reasons to be cautious. Technical Report 2006/09, 2006. [ bib | .pdf ]
There is a move afoot within a section of the NLG community to push for a competitive comparative evaluation of generation systems, equivalent to similar initiatives within the message understanding, information retrieval, summarisation and word sense disambiguation communities, viz. MUC, TREC, DUC, Senseval, Communicator, etc. (Reiter and Belz, 2006). While we agree that evaluation is clearly a difficult issue for NLG, and efforts to develop relevant evaluation techniques would obviously be very helpful, it is our view that an evaluation competition of the type proposed may not be sensible for NLG and could be a misguided effort that would damage rather than help the field.

[9] Jon G. Hall, Lucia Rapanotti, and Michael Jackson. Problem-oriented software engineering. Technical Report 2006/10, 2006. [ bib | .pdf ]
This paper introduces a formal conceptual framework for software development, based on a problem-oriented perspective that stretches from requirements engineering through to program code. In a software problem the goal is to develop a machine (that is, a computer executing the software to be developed) that will ensure satisfaction of the requirement in the problem world. We regard development steps as transformations by which problems are moved towards software solutions. Adequacy arguments are built as problem transformations are applied: adequacy arguments both justify proposed development steps and establish traceability relationships between problems and solutions. The framework takes the form of a sequent calculus. Although itself formal, it can accommodate both formal and informal steps in development. A number of transformations are presented, and illustrated by application to small examples.

[10] Derek Mannering, Jon G. Hall, and Lucia Rapanotti. Relating safety requirements and system design through problem oriented software engineering. Technical Report 2006/11, 2006. [ bib | .pdf ]
Standards mandate the demonstration of safety properties for industrial software, starting at the initial requirements phase. The processes involved are iterative, with the choice of potential solution architecture being a driver for the discovery of system failure modes. Managing the resulting development is a complex task. Problem Oriented Software Engineering brings together many non-formal and formal aspects of software development, providing a structure within which the results of different development activities can be combined and reconciled. This paper illustrates how problem orientation can support the development task of a safety-critical system through its ability to elaborate, transform and analyse the project requirements, reason about the effect of partially detailed candidate architectures, and traceably audit design rationale through iterative development. The approach is validated through its application to an industrial case study.

[11] Pat Hill, Simon Holland, and Robin Laney. Tutorial: An introduction to AspectMusic. Technical Report 2006/12, 2006. [ bib | .pdf ]
This tutorial is intended to be read in conjunction with the paper 'An Introduction to Aspect-Oriented Music Representation'. In this tutorial we show, by means of concrete examples, some of the ways in which AspectMusic may be used to represent and compose CMUs through HyperMusic, and how CMUs may be arranged, and crosscutting musical concerns expressed and applied, using MusicSpace.

[12] Sarah Beecham, Nathan Baddoo, Tracy Hall, Hugh Robinson, and Helen Sharp. Protocol for a systematic literature review of motivation in software engineering. Technical Report 2006/13, 2006. [ bib | .pdf ]
Motivation is a crucial factor in software productivity and software failure (MoMSE 2005). The proposed study brings together published work in the field of software engineer motivation by following systematic literature review guidelines (Kitchenham 2004) for the first time. The literature review aims to summarise research studies related to our research questions in a way that is fair, rigorous and auditable.

[13] Derek Mannering, Jon G. Hall, and Lucia Rapanotti. A problem-oriented approach to normal design for safety-critical systems. Technical Report 2006/14, 2006. [ bib | .pdf ]
Normal design is, essentially, when an engineer knows that the design they are working on will work. Routine 'traditional' engineering works through normal design. Software engineering has more often been assessed as being closer to radical design, i.e., repeated innovation. One of the aims of the Problem Oriented Software Engineering framework (POSE) is to provide a foundation for software engineering to be considered an application of normal design. To achieve this, software engineering must mesh with traditional, normal forms of engineering, such as aeronautical engineering. The POSE approach for normalising software development, from early requirements through to code (and beyond), is to provide a structure within which the results of different development activities can be recorded, combined and reconciled. The approach elaborates, transforms and analyses the project requirements, reasons about the effect of (partially detailed) candidate architectures, and traceably audits design rationale through iterative development, to produce a justified (where warranted) fit-for-purpose solution. In this paper we show how POSE supports the development task of a safety-critical system. A normal 'pattern of development' for software safety under POSE is proposed and validated through its application to an industrial case study. Keywords: problem orientation, normal design, safety analysis, traceability.

[14] David J King. The role of information design on pedagogical effectiveness and user interface usability of web-based e-learning. Student Research Proposal 2006/15, 2006. [ bib | .pdf ]
This paper presents a PhD research proposal to explore the accuracy with which web-based e-learning materials can be designed to convey an educationalist's intentions to a learner. The work is based on exploring the matches and mismatches between educationalist intentions for, and learner perceptions of, the materials. This will be achieved through empirical studies using laddering to elicit educationalist intentions, and questionnaires and interviews to determine learner perceptions. From this, it is planned to produce design guidelines for educationalists to accurately convey their intentions to developers of web-based e-learning materials.

[15] Pete Thomas, Neil Smith, and Kevin Waugh. An approach to the automatic grading of imprecise diagrams. Technical Report 2006/16, 2006. [ bib | .pdf ]
In this paper we describe our approach to the grading (marking) of graph-based diagrams, for instance those produced by students in the area of Entity-Relationship (E-R) diagrams. This is an instance of a more general problem and is based on our framework for diagram understanding. We believe that techniques of NLP commonly used in understanding text have equivalents in the understanding of diagrams, and this paper shows how these techniques have been incorporated into a grading tool. The accuracy of the marking tool has been measured in two small-scale trials and the positive results from those trials are presented here. Our approach naturally allows the provision of diagrammatic feedback on student answers to E-R diagramming questions, which has been incorporated into a tool for practising E-R diagramming.

[16] Pete Thomas. A revision tool for learning data modelling with diagrams. Technical Report 2006/17, 2006. [ bib | .pdf ]
This article is extracted from a COLMSCT CETL (Centre of Excellence for Teaching and Learning) interim report on a project to produce a revision tool for learning data modelling that is intended for use on the forthcoming database course M359. The work was carried out as part of the author's COLMSCT teaching fellowship. This article provides the history of the project to date (January to October 2006) and discusses the evaluation of a prototype version of the tool. Further discussion of the tool itself and of the research that supports it can be found in the references.

[17] Mohammed Salifu, Bashar Nuseibeh, and Lucia Rapanotti. Towards context-aware product-family architectures. Technical Report 2006/18, 2006. [ bib | .pdf ]
Introducing variability in a product-line aims to address the needs of a large number of users. For example, Nokia phones form a product-line which can be configured into over 1000 different handsets operating in all continents. We sampled a subset of Nokia phones and report on this product-line in a detailed pilot study. We identified some underlying sources of the variability in these mobile phones, and also proposed necessary extensions to the current product-line approach in order to develop the phones for varying operating contexts.

[18] Yijun Yu, Markus Strohmaier, Greg McArthur, Jianguo Lu, and John Mylopoulos. Literature programming. Technical Report 2006/19, November 2006. [ bib | .pdf ]
When authoring or reviewing a scientific paper, it is tedious to avoid or to locate presentational errors. Errors such as spelling and grammar can be checked by existing tools, whereas structural errors for concepts are harder to detect. By converting a technical paper into a program, the `literature programming' proposed in this paper allows existing program analysis tools to be reused for detecting and resolving some of its writing problems. For example, a simple C/C++ parser can be reused to check type errors that cannot be captured by spelling and grammatical checkers; redundancies and false dependencies can be exposed and removed by the restructuring tool we developed for C/C++ programs. In general, an analogy between the literature (in terms of papers) and the software (in terms of programs) is made to reuse software engineering tools for literature programming (writing/reviewing/studying). The work has been applied to a recently published ICSE paper, showing a promising direction of software-engineering-aided literature programming.

[19] Gareth Bedford. Investigating the Attractors in Off-Line and On-Line (B2C) Music Shopping. M801 MSC Dissertation 2006/20, September 2006. [ bib | .pdf ]
The number of people worldwide who use the Internet has increased exponentially over the past few years. This has contributed to a major increase in the number of consumers who browse for products, compare prices and purchase goods from on-line stores. Consumers can move very quickly and effortlessly between channels such as mail-order, on-line and off-line stores when shopping; this makes it increasingly important for businesses with multi-channel strategies to become more aware of the consumer behaviour that takes place around any of the channels they offer. Factors that attract consumers towards a shopping channel or store can be called 'attractors'. Awareness about attractors can be used to enhance the competitiveness of a store and the quality of services available to consumers. On-line music stores, specifically the iTunes store, have enjoyed a huge amount of growth in recent years. This research project examines the attractors that influence consumer behaviour at the iTunes on-line store and at off-line music stores in general. The research employs a user-centred approach to elicit these attractors. The findings of this research include a catalogue of music store attractors; the attractors that are most significant are: convenience of a store (off-line attractor), the ability to find something interesting or new (e-commerce attractor), and the ability to buy single tracks (on-line attractor). The research presents some analysis of the relationships between these attractors and guidelines towards the discovery of attractors. This research goes beyond user-system interaction and usability, examining the customer experience to find that other factors besides usability are involved in attracting consumers to iTunes. It also finds that off-line consumer experiences influence on-line consumer behaviour and expectations.

[20] Ben Cleyndert. The Role of Ontology in an Extendible Tool for Capturing Statistical Information. M801 MSC Dissertation 2006/21, September 2006. [ bib | .pdf ]
Software systems are generally built by software development professionals who are remote from the intended user's domain. A problem with the development of systems is the successful transferral of domain knowledge from the user to the software development team. The Unified Modelling Language (UML) is an established common language used to facilitate this. The Semantic Web uses collaboration of agents for the completion of ad-hoc tasks. This collaboration of agents can be considered as a software system with its data and functionality being described by an ontology. To overcome the problem of domain knowledge transferral, there is potential to utilise the Model Driven Architecture (MDA) approach in the creation of these ontologies to provide users with the capability to develop software systems directly. This research provides an overview of MDA and the Semantic Web and details the integration of the two fields to define a unified methodology and architecture for the production of software through modelling. The MDA-based modelling methodology developed produces runtime artefacts that define data structures and orchestrate software components in the delivery of system solutions. A prototype is used to evaluate the capability of the derived methodology and architecture to deliver a well-defined system derived directly from modelling techniques familiar to users exposed to UML. Evaluation of the prototype demonstrates that bespoke software systems can be created using runtime artefacts derived from modelling.

[21] Richard Gorman. An Empirical Comparison of Subjective Evaluation and Metrics in the Maintenance of COBOL software. M801 MSC Dissertation 2006/22, September 2006. [ bib | .pdf ]
The cost of software maintenance, and in particular the maintenance of legacy software such as COBOL, has been widely reported. It is therefore important to be able to measure the maintainability of such software. This study investigates the two primary methods of measuring the maintainability of software: subjective evaluation using software developers, and the more formal and objective approach of software metrics. In addition to these primary methods, a taxonomy of COBOL code smells is developed for potential use in determining software maintainability. The two methods of measurement were applied to a sample of nine COBOL programs. A group of six developers gave a subjective evaluation of the software by answering a questionnaire which assessed their opinion of the software and of the taxonomy of smells, and how these might influence their views. In addition, a total of seven software metrics were used to gain an alternative view of the same software. These metrics were lines of code, McCabe's Cyclomatic Complexity and Halstead's Software Science Indicators. The results show good correlation between the individual metrics evaluated and an equally strong correlation between the metrics and the results of the subjective evaluation. Only one metric, Halstead's Difficulty, showed little relationship to the other metrics or to the results of the subjective evaluation. Interestingly, the subjective evaluation showed greater variation in the results than did the software metrics, and the subjective evaluation using the taxonomy of smells showed a good correlation with the initial subjective view, yet it did cause a change in opinion in 42.6% of the evaluations. Additionally, the results showed that whatever the method of measuring maintainability, the age of the software is important: the older the software, the harder it is likely to be to maintain. The results also showed that complexity and size both contribute to maintainability, and that developer experience leads to a better view of the maintainability of the software. This study concludes that software maintainability is influenced by a number of factors, including the source code, program age and developer experience. As a result, the subjective element of developer intuition is required to be able to include all of the factors relevant to measuring software maintainability. Metrics alone cannot easily capture all of these factors, and therefore a metric, or combination of metrics, should be used to corroborate the opinions of developers, especially where inexperienced developers are involved in the evaluation.

[22] Milan Hlousek. Suitability of a Problem Frames framework for Intelligent Building services requirements engineering. M801 MSC Dissertation 2006/23, September 2006. [ bib | .pdf ]
An intelligent building (IB) is a building that responds to varying occupants' needs and a changing environment whilst minimizing running cost and ecological impact. The demand for IBs is growing. IBs are equipped with embedded autonomous systems and artificial intelligence. These are becoming more sophisticated, complex and flexible. Building investors, operators and occupants have different expectations of the functions of the building. These need to be captured and implemented in the building design. The industry's current requirements engineering (RE) methods are not mature enough to capture IB requirements with the essential quality. This project employs a software engineering framework called Problem Frames (PF). The framework has already been demonstrated on a lighting system, which is one of the IB subsystems. It is used as a starting point to construct a pilot application of PF for IB systems RE. The work goes further and investigates whether it can be used by IB professionals to capture IB system requirements. First, general expectations of IB stakeholders are captured through literature research and textual analysis. Many IB-related articles and publications are studied in this part of the project in order to generate a fundamental list of IB 'must have' features. Second, the identified features are used as input data for the pilot application of PF for IB systems RE. Then the findings are summarized and presented to an IB specialist forum for evaluation in the form of a questionnaire. The aim of the survey is to find out what the IB professionals think about the proposed method. The responses were positive in all of the investigated areas, which indicates that PF for IB systems is worth exploring at full scale in the future.

[23] Roger Swaby. How Can Defect Analysis Help To Improve Risk Management Techniques In IT Projects? M801 MSC Dissertation 2006/24, September 2006. [ bib | .pdf ]
Historical data has shown us that a high proportion of IT projects do not meet the success criteria on which they are based. There are many reasons why a project may fail, some within the direct control of the project manager and others upon which he has less influence. One such reason for failure is poor preparation for circumstances that detrimentally affect the project. Risk management provides a powerful tool for improving project success rates, as good risk management techniques can counteract many of the root causes of project failure. Defects provide a quantitative measure for known problems that occurred during a project. The analysis of these defects is often used as feedback into the development cycle in an attempt to improve project management and process techniques. Much of the research surrounding defect analysis schemes is based on causal analysis, looking at how defect analysis can improve processes for future and current projects. It could be argued that risk management aims to reduce the impact of unforeseen circumstances by eliminating or decreasing the root causes of these unforeseen circumstances. Not much research has been performed on linking these two techniques; hence, this dissertation will investigate a possible link between them.

[24] Patricia Wilson. The use of computing technology in a challenging environment: an investigation in the emergency services in Northern Ireland. M801 MSC Dissertation 2006/25, September 2006. [ bib | .pdf ]
There is a large number of computing technologies found in the office and used in everyday routines, but their transition to the challenging domain of the emergency services is less apparent. The aim of this study is to address the lack of computing technology at the front line of emergency services operations that could save lives and improve the safety and efficiency of the emergency crews. The study concentrated on the Northern Ireland Fire and Rescue Service (NIFRS) and Northern Ireland Ambulance Service (NIAS). It was the interesting Open University course, User Interface and Design Evaluation (M873), which fostered my interest in computing outside the desktop and led to this choice of topic for the dissertation. There was also a family interest in selecting the emergency service aspect. Emergency crew and IT staff were interviewed to understand front line operations. Document studies of records, training artefacts and logs provided a natural source for result validation. All equipment employing computing technology was classified according to the task performed. The classification facilitated the identification of gaps in technology usage across the emergency organizations. A literature survey was performed to identify suitable technology, within this challenging context, to fill the gaps. The results were tested using a focus group. Analysis used techniques such as task scenarios, sequence diagrams and mind maps. A classification of the tasks is presented with six categories, including tracking and locating, mobile data, information collection, sensor application, security and communication. Ten technologies have been identified which will have a major impact on saving lives, and eight technologies with minor impacts. Ten technologies have been identified which will have a major impact on improving the efficiency of emergency services, and eight technologies with minor impacts. The results shall serve as input to prototypes for equipment in this hostile environment.

[25] Michele Doran. A comparison of Problem Frames, a problem-based method, and KAOS, a goal-based method, for Requirements Analysis within a financial environment. M801 MSC Dissertation 2006/26, September 2006. [ bib | .pdf ]
This research reviews the two Requirements Analysis methods of Problem Frames and KAOS and then applies them to a case study within a financial environment. Both methods focus on the problem domain and have techniques to ensure that the task of Requirements Analysis is carried out in a logical manner that should help to produce complete, consistent and unambiguous requirements. The intention of the research is to make recommendations and produce guidelines for use within financial environments to help to improve the quality of requirements specifications. Specific characteristics of financial environments are reviewed to determine what makes them require special attention in comparison to other, more mechanical environments. As well as general characteristics applicable to all environments, specific characteristics were found that do require special focus. These are complexity of functionality, adaptability to changing requirements to comply with laws and regulations, keeping a competitive edge by delivering quickly to meet the demands of the market and, finally, the existence of legacy systems. Existing literature covering the two methods is reviewed to analyse previous descriptions of their application and the advantages and disadvantages experienced. The application to a case study then allowed the comparison of these findings with first-hand experience in a financial environment. The similarities were many, showing that the advantages of a particular method did carry into the financial environment, although many disadvantages did as well. The requirements specifications produced during the two methods were reviewed by experienced designers/analysts and system testers, and their feedback incorporated into the conclusions of the study. Using the full set of conclusions, guidelines and recommendations have been produced, reviewed and then refined. The key findings, which aim to address the financial environment's specific characteristics, are that at specific stages the models and analysis of the requirements should be reviewed with the key stakeholders, to try to address the issue of complexity, and that the financial environment should keep a catalogued and classified store of past experience to encourage reuse in the early stages of the development lifecycle, to aid the delivery time of required software. Problem Frames and KAOS both support these processes, but it may be a combination of the two methods which provides the best approach to Requirements Analysis in a financial environment. Although the research is based on a single case study, the results and conclusions have been made for all financial environments by drawing on their general characteristics. It is hoped that the resulting guidelines will be of use in improving Requirements Analysis within financial environments.

[26] Sakib Supple. Evaluation of the Model-View-Controller design pattern when applied to a heterogeneous application to distribute newspaper textual content to mobile devices. M801 MSC Dissertation 2006/27, September 2006. [ bib | .pdf ]
The rise in mobile device usage is fuelling demand for access to news stories while on the move. The aim of this research is to suggest a suitable design pattern for applications that publish newspaper text content to many different mobile devices. The Model-View-Controller (MVC) pattern is often used to design applications of this type. Three competing MVC-based designs are compared using the multi-channel-access bridging pattern. This tests the designs for architectural compatibility with an existing server infrastructure. The selected design pattern, a modified Service to Worker pattern, is used to produce a BlackBerry application to publish news content. The user interface is designed using human-centred techniques. This interface is quantitatively evaluated and compared with the application performance requirements. It is qualitatively evaluated using a participatory heuristic inspection technique to test its usability. This evaluation identifies a number of defects in the user interface and recommendations are made to address these. To demonstrate the suitability of the design for heterogeneous applications, the code for the BlackBerry version is reused to implement a generic version for mobile phones. The result of the project is a heterogeneous mobile application to publish newspaper text content.

[27] Mark D Cornwall. Determining the Feasibility of Achieving Graceful System Degradation Through The Use of Automatic Reconfigurable Designs and Software. M801 MSC Dissertation 2006/28, September 2006. [ bib | .pdf ]
All systems can and do fail, and often a relatively 'small' failure has a disproportionate effect, as it prevents access to parts of a system that are operating correctly, or prevents their usage. Hence, the traditional method of maintaining the intended service is to provide sufficient redundancy, in combination with an appropriate repair strategy; although this is inherently expensive and is not always feasible. However, design and economic pressures, together with engineering's innate conservatism, effectively prevent this approach from being re-evaluated. The net result is, at best, a step drop in capability and, at worst, a total loss of a system. Allowing a system to undergo 'graceful' service degradation, instead of a step drop, significantly improves its 'resilience', with a direct impact on both availability and cost. A resilient system is therefore capable of absorbing the impact of an interruption, disruption or loss, and will continue to provide a minimum 'acceptable' level of service. This thesis is a literature survey, and represents a synthesis of ideas from a number of sources to suggest an inherently resilient system. It presents a new way of looking at system failures that occur within a traditional computer-based architecture, by considering:
· An architecture based upon the 'fractal concept', i.e. a number of identical components, capable of self-repair, self-configuration, self-optimisation, and self-protection, which through mutual interaction can provide high-level services equivalent to those of a 'normal' computer;
· An operating system which utilises the principles of evolutionary strategies, in the form of genetic algorithms, to prioritise and reconfigure the components which form the pool of available resources for a given operating scenario and environmental circumstances.
By combining these two concepts to create a 'heterogeneous adaptive system', this thesis outlines how the proposed approach creates an emergent self-optimising capability, which uses temporal-based structures for causal elimination and automatic 'work around'. This creates an inherently resilient system, capable of automatically adapting to internal and external environmental changes, i.e. failures, while maintaining its intended service provisions. This thesis consists of six sections: the introduction; research methods employed; current architecture resilience concepts and responses; the proposed fractal architecture; an examination of the proposed architecture's ability to address resilience issues; an analysis of the method employed and the results identified; conclusions and future research.

[28] Kieran Brennan. Pragmatic problems in designing large-scale location based mobile games. M801 MSC Dissertation 2006/29, September 2006. [ bib | .pdf ]
The mobile phone has undergone a number of evolutions by incorporating technologies such as digital cameras and music players. More recently, connectivity technologies such as Wi-Fi have been integrated, and Nokia, among others, is poised to launch a GPS smart phone early in 2007 (Nokia, 2007a). Connectivity coupled with user location may be about to open up a new range of location based services, including mobile location based games. This work sets out to investigate the problems associated with developing software for mobile phones, and specifically for large-scale mobile location based games. The research investigates the importance of the inherent limitations of mobile devices coupled with location sensing technology and the variable needs of users. The study also assesses the impact new technologies such as accelerometers and the Galileo satellite system are likely to have.

[29] Mark Vincent. Communicating Software Requirements: A Comparison of Problem Frames and the UML for Domain Modelling and Requirements Analysis in Commercial Business Projects. M801 MSC Dissertation 2006/30, September 2006. [ bib | .pdf ]
The communication about software requirements between software developers and business stakeholders continues to be an area of significant difficulty, and a contributing factor to the high number of projects failing to deliver their intended benefits. Both the Unified Modeling Language (UML) and Problem Frames are proposed as methods to aid this communication, but there appear to be problems in gaining their widespread adoption, especially among business stakeholders. In the case of Problem Frames, the research to date is still mostly limited to theoretical discussion, with only very few documented examples of industrial use. The UML has a wider commercial case base, but it appears from the literature that there are problems gaining widespread endorsement, especially on the business side. The research seeks to compare the two methods, to establish which better communicates the problem domain and software requirements to both software practitioners and non-specialist business stakeholders. The research focuses upon the notations used and how well they convey their intended meaning; this is achieved through basic comprehension testing of some relatively simple software models using a small test group of participants. The tests were conducted during structured interviews in order to elicit feedback from those who took part. The findings suggest there are problems with the technical nature of the models, especially on the business side, and this creates a barrier to effective communication. There remains a tendency to use ordinary English text to describe software requirements at a business domain level, and it seems this is due to the lack of business acceptance of the existing structured approaches. The way in which software requirements are presented is found to be an important factor in gaining business acceptance. It is proposed that greater consideration should be given to understanding the perspective of non-specialist stakeholders, and only by reconciling that perspective with the needs of the software development teams will it be possible to improve the overall communication between business and IT stakeholders.


This file was generated by bibtex2html 1.98.