Technical Reports at the Centre for Research in Computing


[1] Jon G Hall, Jens Bæk Jørgensen, and Lucia Rapanotti. On the use of Coloured Petri Nets in problem oriented software engineering: the package router example. Technical Report 2007/01, 2007. [ bib | .pdf ]
In this paper, we present an approach to specification of IT systems that combines the use of Coloured Petri Nets (CPN) and the Problem Oriented Software Engineering (POSE) framework, an extension and generalisation of Jackson's Problem Frames to the solution of software engineering problems. Through the case study of a package routing system, we demonstrate how a CPN model can be used to make appropriate POSE descriptions and support a POSE argument for the adequacy of a problem's software solution. The suitability of CPN as a description language for POSE is discussed and demonstrated in the study. We found that the ability to execute CPN models offers potential for showing adequacy of solutions, a key aspect of software engineering. Topics: System design using nets; relationship between net theory and other approaches; experience with using nets; higher-level net models (CPN); application of nets to real-time and embedded systems; requirements engineering; Problem Oriented Software Engineering.

[2] Jon G. Hall, Lucia Rapanotti, Karl Cox, and Steven J. Bleistein. Towards normal organisational problem solving. Technical Report 2007/02, 2007. [ bib | .pdf ]
The paper illustrates a sophisticated analysis, and retrospective justification as adequate, of the design of a solution to a real-world organisational problem. The approach integrates two extant frameworks defined by the authors, problem orientation and B-SCP, and is able to elaborate, transform and analyse the problem requirements, support the alignment of IT and business strategy, reason about the effect of partially detailed candidate solution architectures, and traceably audit design rationale through iterative development. The joint approach brings together many non-formal and formal aspects of organisational development, providing a structure within which the results of different development activities can be combined and reconciled. Keywords: Problem Oriented Software Engineering, B-SCP, organisational engineering

[3] Tracy Hall, Nathan Baddoo, Sarah Beecham, Hugh Robinson, and Helen Sharp. The motivation of software engineers: Developing a rigorous and usable model. Technical Report 2007/03, 2007. [ bib | .pdf ]
This report presents a summary of the work undertaken in the one-year EPSRC-funded Modelling Motivation in Software Engineering (MoMSE) project (2005). The aim of this work is to produce a model of motivation in software engineering. We give an overview of how we developed a model of motivation in Software Engineering (SE). Our model of motivation reflects three viewpoints: 1. motivation in the SE literature; 2. classic theories of motivation, including models tailored specifically to reflect motivation in software engineering; and 3. empirical investigations into the motivation phenomenon. The three viewpoints are represented in our model of motivation as follows:

[4] Jon G Hall, Derek Mannering, and Lucia Rapanotti. Arguing safety with problem oriented software engineering. Technical Report 2007/04, 2007. [ bib | .pdf ]
Standards demand that assurance cases support safety critical developments. It is widely acknowledged, however, that the current practice of post-hoc assurance, in which the product is built and only then argued for safety, leads to many engineering process deficiencies, extra expense, and poorer products. This paper shows how the Problem Oriented Software Engineering framework supports the concurrent design of a safe product and its safety case, by which these deficiencies can be addressed. The basis of the paper is a real development, undertaken by the second author of this paper, of safety-related sub-systems of systems flying in real aircraft. The case study retains all essential detail and complexity.

[5] Derek Mannering, Jon G Hall, and Lucia Rapanotti. SIL4 process improvement with POSE and Alloy. Technical Report 2007/05, 2007. [ bib | .pdf ]
Safety standards demand that industrial applications demonstrate they have the required safety integrity, and this starts with the initial requirements phase. This paper shows how the Problem Oriented Software Engineering (POSE) framework, in conjunction with the Alloy formal method, supports this task through its ability to elaborate, transform and analyse the project requirements and thus develop a solution for an avionics case study. In particular, this work reports on how the POSE/Alloy combination was used in conjunction with the POSE safety pattern to improve the requirements analysis capabilities of an existing, successful safety critical development process. The results of applying this combination to an existing design showed that it could detect anomalies early in the life cycle that had previously been detected by much later (and more costly) validation work.

[6] Alan Hayes, Pete Thomas, Neil Smith, and Kevin Waugh. A developmental framework for computer-based automated assessment. Technical Report 2007/06, 2007. [ bib | .pdf ]
In this paper, we present an investigation into the development of a framework for the automatic grading (marking) of student submitted course work. We discuss this framework, its structure and its subsystems. Our framework has been developed in the context of the student submission consisting of two components: a design (using the UML methodology) and source code (using the Java programming language). The focus of our framework is upon the consistency between the student code and design. We discuss its context and development and highlight how we can infer structure from the student submission and use this to inform the assessment process.

[7] Debra Trusso Haley, Pete Thomas, Marian Petre, and Anne De Roeck. Seeing the whole picture: Comparing computer assisted assessment systems using LSA-based systems as an example. Technical Report 2007/07, 2007. [ bib | .pdf ]
This paper presents a framework for evaluating computer assisted assessment (CAA) systems. It discusses why the framework can be useful for both producers and consumers of these automatic aids to assessing learners. The framework builds on previous work to analyse Latent Semantic Analysis (LSA)-based systems, a particular type of CAA, that produced a research taxonomy that could help LSA CAA developers to publish their results in a format that is comprehensive, relatively compact, and useful to other researchers. The paper contends that, in order to see a complete picture of a CAA system, certain pieces must be emphasised. It presents the framework as a jigsaw puzzle whose pieces join together to form the whole picture and provides an example of the utility of the framework by presenting some empirical results from our LSA-based CAA system that marks questions about HTML. Finally, the paper suggests that the framework is not limited to LSA-based systems. With slight modifications, it can be applied to any CAA system.

[8] Michael Ellims, Darrel Ince, and Marian Petre. AETG vs. man: an assessment of the effectiveness of combinatorial test data generation. Technical Report 2007/08, 2007. [ bib | .pdf ]
This paper reports on an industrial study of the effectiveness of test data generation. In the literature on the automatic generation of test data a number of techniques stand out as having received a significant amount of interest. One area that has achieved considerable attention is the use of combinatorial techniques to construct data-adequate test sets that ensure all pairs, triples etc. of input variables are included in at least one test vector. There has been some systematic evaluation of the technique as applied to unit testing and, while there are indications that the technique can be effective, very little work has been performed using industrial code. Moreover, there has been no comparison of the effectiveness of the technique for unit testing compared with tests that are generated by hand. In this paper we apply random and combinatorial (AETG) techniques to a number of functions drawn from industrial code with known faults and existing unit test suites. Results indicate that for simple cases combinatorial techniques can be as effective as the human-generated tests, but there are instances associated with complex code where the technique performs poorly, though no worse than randomly generated data.
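The pairwise criterion the abstract refers to, every pair of input-variable values appearing together in at least one test vector, can be stated compactly in code. The sketch below is illustrative only: it checks pairwise coverage of a given suite rather than implementing AETG itself, and the boolean domains and example suite are invented.

```python
from itertools import combinations, product

def pairwise_covered(test_vectors):
    """Return the set of (variable-pair, value-pair) combinations that the
    given test vectors cover. Each test vector is a tuple of input values."""
    covered = set()
    for vector in test_vectors:
        for (i, j) in combinations(range(len(vector)), 2):
            covered.add(((i, j), (vector[i], vector[j])))
    return covered

def all_pairs_required(domains):
    """All (variable-pair, value-pair) combinations that full pairwise
    coverage demands, given the domain of values of each variable."""
    required = set()
    for (i, j) in combinations(range(len(domains)), 2):
        for pair in product(domains[i], domains[j]):
            required.add(((i, j), pair))
    return required

# Three boolean inputs: four well-chosen vectors suffice for pairwise
# coverage, where exhaustive testing would need all eight combinations.
domains = [(0, 1)] * 3
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(all_pairs_required(domains) <= pairwise_covered(suite))  # True
```

That economy, covering every pair with far fewer vectors than exhaustive enumeration, is what combinatorial generation techniques such as AETG aim for.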

[9] Michael Ellims. The CSAW mutation tool user's manual. Technical Report 2007/09, 2007. [ bib | .pdf ]
Mutation is a technique that holds great promise in testing research; however, it can be difficult to get hold of tools that allow the application of the technique to programs written in the C programming language. One tool that is available is Proteum. Here we describe another small tool set that can be used for performing mutation on C functions and provide guidance for using the tool.

[10] Christopher J Ireland. Object-relational implementation constructs: The role of a structured user defined type in the resolution of an object-relational impedance mismatch. Student Research Proposal 2007/10, 2007. [ bib | .pdf ]
The success of a software system depends on the accurate representation of real-world concepts and their faithful transformation into software. The UML has become the language of choice for those designing and building object-based software systems. Object-relational impedance mismatch is a fact of life for those developing an object-oriented application which must use a relational database for persistence. Object-Relational SQL (OR-SQL) represents one attempt to resolve an impedance mismatch between an object-oriented application and a relational database schema. At the heart of this attempt is the Structured User Defined Type (SUDT). My hypothesis is that a semantic preserving transformation of a UML subclass to an OR-SQL representation using a SUDT does provide a viable solution to the problem of an object-relational impedance mismatch. My research proposal tests this hypothesis whilst recognising the interplay of requirements from a number of stakeholders in a software development process. My research will identify the possibilities for semantic preserving transformations of a UML subclass to an OR-SQL representation using a SUDT; and provide an understanding of the contribution of a SUDT to the resolution of an object-relational impedance mismatch.

[11] Antony R. Grinyer. Investigating the adoption of agile software development methodologies in organisations. Student Research Proposal 2007/11, 2007. [ bib | .pdf ]
Agile software development methodologies such as Extreme Programming continue to diffuse throughout the software engineering community, yet little is understood about how and why this diffusion is sustained. Moreover, the dearth of empirical evidence elucidating the benefits and constraints of agile methods raises questions as to what sources of information organisations use in order to decide whether to adopt agile methods; the factors which persuade organisations to adopt agile methods; and the type of evidence persuasive to practitioners in those organisations. It is this phenomenon that this research wishes to explore.

[12] Jose Abdelnour-Nocera and Helen Sharp. Adopting agile in a large organization: balancing the old with the new. Technical Report 2007/12, 2007. [ bib | .pdf ]
Much has been written about adopting agile software development within a large organisation. A key aspect of this significant organisational change is to ensure that a common understanding of the new technology emerges within all stakeholder groups. We propose that an analysis framework based on the concept of Technological Frames (TFs) can identify where understanding is in conflict across different stakeholder groups. We used TFs to analyse data collected in one organisation in the process of adopting an agile development approach. In doing so, we identified several dimensions (called 'elements' in TFs) which characterise a group's understanding of agility. In this paper, we present these elements and describe the TFs for four distinct groups. We suggest that these elements may be used by other organisations adopting agile methods to help understand the views of different stakeholder groups. Keywords: technological frame; human aspects; empirical study; qualitative study

[13] Helen Sharp, Tracy Hall, Nathan Baddoo, and Sarah Beecham. Exploring motivational differences between software developers and project managers. Technical Report 2007/13, 2007. [ bib | .pdf ]
In this paper, we describe our investigation of the motivational differences between project managers and developers. Motivation has been found to be a central factor in successful software projects. However, the motivation of software engineers is generally poorly understood and previous work done in the area is thought to be largely out-of-date. We present data collected from 6 software developers and 4 project managers at a workshop we organized at the XP2006 international conference. We collected this data using the Repertory Grid Technique (RGT). RGT originated from psycho-analysis and allows researchers to uncover the detailed building blocks of people's attitudes. In this investigation we elicit RGT data focused on attitudes to motivation. We compare the motivation attitudes of software developers to project managers. Our findings suggest that project managers and software developers think differently about motivation. It is very important for successful project outcomes that project managers understand that developers may be motivated differently to themselves and that they manage developers' motivations appropriately.

[14] Eva Banik. Generating parenthetical constructions. Student Research Proposal 2007/14, 2007. [ bib | .pdf ]
This paper is a research proposal for a dissertation in computational linguistics, in particular Natural Language Generation. The purpose of the research is to provide a principled account of the generation of embedded constructions (called parentheticals) and to implement the results in a natural language generation system. Parenthetical constructions are frequently used in texts written in a good writing style and have an important role in text understanding (they help the reader to differentiate between more and less important information). They have been much studied in the linguistics literature but have received no attention so far in computational linguistics. While the ability to signal the relative importance of various items in a sentence is clearly an important contributor to the effectiveness of the text, existing natural language generation systems currently do not have a principled way of handling parentheticals. The aim of the research proposed here is to create a framework to model the rhetorical properties of different types of parentheticals and the contexts that license their usage. We will develop a unified natural language generation architecture which integrates syntax, semantics, rhetorical structure and document structure into a complex representation in order to give a principled account of constraints on where and when parenthetical constructions are appropriate to generate. The system uses constraint-based reasoning to reduce computational complexity and rank the output texts. The expected results of this research will enable NLG systems to generate stylistically better output and give developers more control over the generation process and the user's interpretation of the generated text.

[15] Jon G. Hall and Lucia Rapanotti. Assurance-driven development in problem oriented engineering. Technical Report 2007/15, 2007. [ bib | .pdf ]
Problem Oriented Engineering (POE) is a Gentzen-style 'natural' framework for engineering design. As such, POE supports rather than guides its user as to the particular sequence of design steps that will be used; the sequencing is user-determined as that most appropriate to the context of application. In this paper, however, we suggest a sequencing of steps and interactions with stakeholders that is suitable for assurance-driven development, i.e., for developments in which the argument of fitness-for-purpose is produced during design.

[16] Debra Trusso Haley. Using a new inter-rater reliability statistic. Technical Report 2007/16, 2007. [ bib | .pdf ]
This paper discusses methods to evaluate Computer Assisted Assessment (CAA) systems, including some commonly used metrics as well as unconventional ones. I found that most of the methods to measure automated assessment reported in the literature were not useful for my purposes. After much research, I found a new metric, the Gwet AC1 inter-rater reliability (IRR) statistic (Gwet, 2001), that is a good solution for evaluating CAAs. Section 1.6 discusses AC1, but first I describe other possible metrics to motivate why I think that AC1 is the best available for evaluating an automated assessment system.

[17] E. Aitken. An assessment of un-structured knowledge management techniques in the project management of software development. M801 MSC Dissertation 2007/17, June 2007. [ bib | .pdf ]
Those involved in managing projects are acutely aware of the benefits that could be achieved if knowledge could be transferred between projects and teams. There are few organisations, however, that support structured organisation-level management of knowledge. This study looks at the knowledge management methods employed by individuals in organisations that do not have well-defined or well-embedded knowledge management processes. The study finds that the techniques used most often are those that are recommended by Project Management methodologies, such as post-project reviews and lessons learned reports. It was also found that there is an interest in using some of the recently available collaborative, de-centralised technologies such as blogs that can be implemented on a small scale without necessarily having to be run at a corporate level. Through responses to a questionnaire and the application of the Delphi method, the participants in this research agreed on a number of areas where their own current practice does not meet their needs for knowledge transfer. The techniques currently being employed were limited in their ability to share knowledge effectively and promote re-use. An analysis of existing research shows that the following techniques may improve current practice:
- the use of patterns, analogous to those used in software development;
- blogging and other forms of collaboration software;
- using metrics to get quantifiable information;
- a more narrative approach to knowledge capture, using storytelling to convey key points;
- the use of oral history capture by semi-structured interview.
All of these are found to provide the possibility of improved knowledge transfer across project teams.

[18] Martin Huw Ball. Evaluation and Improvement of an Automatic Marking Tool for Diagrams. M801 MSC Dissertation 2007/18, September 2007. [ bib | .pdf ]
Within the teaching environment, the ability to automatically grade student assignments and examination answers offers many potential advantages, including improving the speed and standardisation of the marking process, and the release of time spent marking by tutors which can then be utilised for teaching. A marking tool has been developed by the Open University as an investigation into the feasibility of automating the grading of student Entity Relationship (E-R) diagram answers. Two small scale trials of the marking tool were performed by the developers. These showed that automated marking was potentially feasible, and that detailed failure analysis of the incorrectly marked diagrams could be used to develop and implement improvements to the marking tool. The research described in this dissertation extends the work of this team, using 2 new corpora of graded E-R diagrams. The first of these contains 197 diagrams and the second contains 32 diagrams. The second corpus introduced additional complexity as the model E-R diagram answer used entity subtypes within the solution diagram. This dissertation describes the work undertaken to establish an experimental procedure to grade the diagrams using the marking tool; the establishment of statistical and graphical techniques to measure the performance of the marking tool; the root cause analysis of marking tool deficiencies where diagrams are incorrectly graded; and the design, implementation and testing of improvements to the marking tool.

[19] Stephen Cullum. The Effect of Automatic Code Generation on Developer Job Satisfaction. M801 MSC Dissertation 2007/19, September 2007. [ bib | .pdf ]
The aim of this paper is to determine the impact of code generation tools on the productivity of software developers, verifying whether the purported benefits exist and are sustainable long term. These software artefacts are capable of producing applications with minimal software developer involvement, potentially saving companies many hours of effort whilst enforcing a common approach. Productivity is defined as both performance and the psychological components that comprise enjoyment within a role. Performance factors are used to monitor how much faster processes become when using a code generator, whilst enjoyment defines the longevity of the approach. The Job Diagnostic Survey (JDS) (Hackman and Oldham, 1980) is an instrument designed to evaluate the effects of job changes on employees by focusing on a set of task orientated motivational characteristics. The JDS evaluates the effects of work redesign on satisfaction within the work context. Hackman and Oldham's work focuses on generic roles and therefore requires extension to record information pertinent to software development. To record information about the code generator the instrument was extended in two ways:
- questions were added to focus on the role of software development;
- answers were no longer simply quantitative, but could also be rated on a qualitative basis in terms of performance and enjoyment.
Code generator technologies were found to have an impact on productivity. Enjoyment and performance are linked via the enforced structure necessary for automation. In terms of performance the influence was highly positive. For enjoyment the results were mixed. Positive enjoyment issues do exist and are more numerous than the negative ones. However, negative enjoyment issues have the potential to cause significant conflict within any group of individuals. In conclusion, code generators are not a panacea. To acquire the productivity benefits that are associated with code generation, the right people and supporting culture are essential.

Multi-objective evolutionary algorithms use a population-based approach and the principles of evolution to find Pareto-optimal solutions to problems with multiple competing objectives. Unlike single-objective problems, in which there is often only one optimal solution (for example a global minimum for a given function), a multi-objective problem will have a set of Pareto-optimal solutions, all of which are equally optimal in the sense that they represent different trade-offs between competing objectives; and therefore no one solution can be said to be the ‘best’. Evolutionary algorithms (whether single- or multi-objective) frequently model solutions to a problem using a population of chromosomes, each of which represents a single encoded solution. These chromosomes are then subjected to repeated evolutionary operators to allow the population to iterate towards one or more optimal solutions. Such operators typically include crossover (sharing genetic material between chromosomes), mutation (allowing variation within a chromosome) and selection (choosing ‘good’ chromosomes from which to create the next generation). Evolutionary algorithms are also frequently implemented in parallel, and a common paradigm is to use a number of distributed sub-populations executing on different machines. In this so-called island paradigm, the migration operator is introduced. In a similar manner to biological evolution, two isolated sub-populations might diverge genetically. Migration is a process that occasionally allows genetic material to move between sub-populations. Research shows that this process may have a beneficial impact on the genetic diversity of both sub-populations, thus improving the overall quality of the results. The way in which genetic material moves between sub-populations is dictated by the links between the sub-populations (termed the migration topology). 
In addition to the benefits, there is also a cost to migration: solutions must be distributed over a network or between running processes. One class of problem that can be solved using an evolutionary algorithm is that of network optimisation – in particular the problem of finding an optimal topology of a network given certain criteria. Such problems can involve single or multiple objectives, depending on how the problem is defined. In this research I combine the concepts of network optimisation, migration topologies, diversity and cost; and examine the effectiveness of a multi-objective evolutionary algorithm in finding a Pareto-optimal migration topology for itself. In this way, an identical multi-objective algorithm is applied at both the problem domain-level and the meta-level. That is, the multi-objective algorithm searches for Pareto-optimal migration topologies for a selected domain problem which is itself being solved by a parallel distributed version of the same algorithm. To the best of my knowledge, this is a novel application of the same algorithm in this way; and demonstrates the generality of such algorithms. The primary hypothesis is therefore that it is possible to find such Pareto-optimal topologies using the same algorithm at two levels. The results obtained support this hypothesis.
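As a minimal illustration of the island paradigm described above (not the dissertation's algorithm: the single-objective fitness function, ring migration topology and all parameter values below are assumptions chosen for brevity):

```python
import random

random.seed(1)  # deterministic run for illustration

def evolve(pop, fitness, rate=0.3):
    """One generation: binary tournament selection plus Gaussian mutation."""
    next_pop = []
    for _ in range(len(pop)):
        parent = min(random.sample(pop, 2), key=fitness)  # tournament winner
        child = parent + random.gauss(0, 1) * rate        # mutate
        next_pop.append(child)
    return next_pop

def island_ea(n_islands=4, pop_size=20, generations=50, interval=10):
    """Island-model EA minimising f(x) = x**2 with a ring migration topology."""
    fitness = lambda x: x * x
    islands = [[random.uniform(-10, 10) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(generations):
        islands = [evolve(pop, fitness) for pop in islands]
        if gen % interval == 0:                           # migration step
            for i, pop in enumerate(islands):
                migrant = min(pop, key=fitness)           # best individual
                neighbour = islands[(i + 1) % n_islands]  # ring topology
                neighbour[random.randrange(pop_size)] = migrant
    return min((min(pop, key=fitness) for pop in islands), key=fitness)

print(island_ea())  # best solution found; converges towards 0
```

A multi-objective version would replace the scalar fitness with Pareto-dominance-based selection, and the fixed ring with whatever candidate migration topology is itself under evaluation.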

[21] A L Harper. The Impact Of Cultural Differences On Contracts Between Public Sector Police Forces And Private Sector Organisations. M801 MSC Dissertation 2007/21, September 2007. [ bib | .pdf ]
The last 30 years have seen many public sector organisations outsource their information systems to other suppliers. The initial driver behind this outsourcing, "Compulsory Competitive Tendering", focused on cost and efficiency. Latterly "Best Value" has replaced this, requiring instead continuous improvement in a combination of economy, efficiency and effectiveness. Despite a growth in outsourcing, both trade and academic publications report a significant number of failed contracts. While there is no single cause for these failures, a breakdown in relationships is reported as a frequent factor. While it is acknowledged that a relationship comprises a number of different elements, this dissertation concentrates on organisational culture. Specifically, this dissertation explores the impact and management of organisational cultural differences that are identified when public sector Police Forces engage in a contract process with the private sector. This investigation was achieved through the use of a questionnaire, interviews and the participant observation of a formal tender process. The results identify that the UK Police Forces do not experience the high percentage of failures reported generally. The formal tender processes used are seen to be adequate at ensuring a successful contract where success is measured against hard values such as cost and quality. The importance of understanding each other's values is recognised, as is the importance of a good customer-supplier relationship. It has been identified that where organisational cultural differences are identified early in the tender process, the contract is more likely to have a successful outcome.

[22] Y Jacques. Relation of Local Documents to Browsed Web Pages. M801 MSC Dissertation 2007/22, September 2007. [ bib | .pdf ]
The relation between browsed web pages and local documents is of increasing interest as the browser takes centre stage and web indexes grow into the billions. Users are storing and accessing their data both locally and online. The division between what is remote and what is local is breaking down and yet a huge gulf remains. There is a need for research on self-adapting interfaces that contextualise the user experience to reduce system complexity. Part of this larger task lies in reducing the distance between remote and local resources. The author developed a browser plug-in that presented users with local documents associated to browsed web pages to test the nature of the relation between remote and local resources. Users were profiled and tested in various contexts, performing work, research, search and personal tasks. They examined the relations between the pages they were browsing, their characterization as key phrases and the documents on their computers, rating the relevance and utility of contextualised local resources in a self-adapting interface. The application also logged user actions over a period of half a year as they used or ran the application in the background while browsing, providing a vital source of data about the relation between the web and desktop computers. Though the user group was too small to be considered statistically relevant, preliminary findings revealed that users indicated high relevance between local and remote resources when performing work, research and search tasks, and that contextualised interfaces could reduce cognitive load, minimizing the effort in finding local resources related to browsing activity.

[23] F Kavanagh. Analysis of a phonetic and rule based algorithm approach to determine rhyme categories and patterns in verse. M801 MSC Dissertation 2007/23, September 2007. [ bib | .pdf ]
This dissertation analyses the use of a rule-based algorithm incorporating a phonetic dictionary to identify the rhyme structure and pattern of English language poetic verse. Current methods of rhyme analysis incorporate the use of rhyming tables to identify rhyme. These rhyming tables require considerable manual effort to maintain and update, and are cumbersome to use to allow for the variety of pronunciation and accentuation used in English verse. The research conducted and outlined in this paper assesses the feasibility of using the rule-based algorithm to determine rhyme word pairs and hence rhyme structure and patterns. For this research a prototype software application was developed using the Java programming language. Various rhyme pattern matching rules were modelled to identify particular rhyme types, and words were identified and matched through these series of rules. By loading a phonetic dictionary it was possible to represent each word in multiple formats: the original 'base' word, the phonetic spelling representation of the word and the stressed vowel patterns of the word. A fourth representation was also possible by applying phonetic representation rules based on the Phonix/Editex method described by Zobel and Dart (1996). Using these four representations of a single word to facilitate comparison with another improved both the occurrence of a match and the confidence of the match. The results of the research show that it is possible to apply a rule-based algorithm to the problem of rhyme structure identification in English verse. Although time is required to model the rules and rule exceptions to improve the accuracy of the rhyme identification, this approach requires no manual effort once these are modelled. A single rule can correctly identify multiple rhyme types and word pairs, in comparison to a look-up rhyme table that can only identify a single rhyme pairing for each entry and must be updated for each new pairing regardless of its similarity to an existing one. By using a phonetic dictionary the variance of pronunciation and accentuation in English verse may be avoided by loading a dictionary with the appropriate phonetic representations (assuming a dictionary with the appropriate pronunciation exists). This means it is very simple to alternate between identifying rhyme patterns in eighteenth-century English verse and identifying rhyme patterns in twentieth-century American English verse, simply by loading the appropriate phonetic dictionary.
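As a minimal illustration of rule-based rhyme matching over phonetic representations (the miniature dictionary, the CMU-dict-style phoneme notation and the single perfect-rhyme rule are illustrative assumptions, not the dissertation's implementation):

```python
# Hypothetical dictionary entries in CMU-dict style: lists of phonemes,
# with primary stress marked by a trailing '1' on the vowel.
PHONETIC_DICT = {
    "light":   ["L", "AY1", "T"],
    "night":   ["N", "AY1", "T"],
    "delight": ["D", "IH0", "L", "AY1", "T"],
    "lamp":    ["L", "AE1", "M", "P"],
}

def rhyme_part(phonemes):
    """Phonemes from the last primary-stressed vowel onward."""
    for i in range(len(phonemes) - 1, -1, -1):
        if phonemes[i].endswith("1"):
            return tuple(phonemes[i:])
    return tuple(phonemes)

def rhymes(word_a, word_b):
    """One rule type, perfect rhyme: both words share their phoneme tail
    from the last stressed vowel onward."""
    a, b = PHONETIC_DICT.get(word_a), PHONETIC_DICT.get(word_b)
    if a is None or b is None:
        return False  # word not in the loaded dictionary
    return rhyme_part(a) == rhyme_part(b)

print(rhymes("light", "night"))    # True
print(rhymes("light", "delight"))  # True
print(rhymes("light", "lamp"))     # False
```

Further rules, for example for slant or eye rhyme, would compare different representations of the same word pair in the same way, and swapping the dictionary swaps the pronunciation model without changing the rules.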

[24] J Leach. The Persistence of Evolution. An Evolutionary Art Project: Creating artworks by evolving scripts for a ray-tracing program. M801 MSC Dissertation 2007/24, September 2007. [ bib | .pdf ]
Evolutionary art is art produced by computers using genetic algorithms and similar techniques based on natural evolution. This project involved the design, implementation and evaluation of a novel evolutionary art system for creating 2D or 3D graphical images. The system uses the principles of natural evolution to ‘evolve’ scripts for the popular ray-tracing program POVRay (POVRay, n.d.). A POVRay script contains instructions for drawing a scene; it describes the forms and textures that make up the scene and how they are to be viewed. In this system, ‘child’ scripts are generated from a ‘parent’ script by replication and random mutation. The child scripts are then rendered to produce images, and the user selects one of the resulting images to become the new parent. Once complete, the system was used to examine various live issues in the field, such as at which points in the creative process evolutionary systems are most useful, and what kinds of artwork are most amenable to evolution. The system was also evaluated by testers to gain feedback, both on the system itself and on attitudes to evolutionary art in general. Informal testing and interview sessions were used to see whether users liked the system and whether they thought it might be useful.
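The replicate-and-mutate step described above can be sketched as follows. This is an illustrative sketch, not the project's code: it treats a POVRay script as plain text and randomly perturbs its floating-point literals, standing in for whatever mutation operators the real system uses; the rendering and user-selection steps are only indicated in comments.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: "child" POVRay scripts derived from a parent by
// copying it and randomly perturbing its numeric literals.
public class ScriptMutator {
    private static final Pattern NUMBER = Pattern.compile("-?\\d+\\.\\d+");

    // Produce one child: each floating-point literal in the parent script
    // is nudged by up to +/-10% with the given mutation probability.
    static String mutate(String parent, double rate, Random rng) {
        Matcher m = NUMBER.matcher(parent);
        StringBuilder child = new StringBuilder();
        while (m.find()) {
            double v = Double.parseDouble(m.group());
            if (rng.nextDouble() < rate) {
                v *= 1.0 + (rng.nextDouble() - 0.5) * 0.2; // +/-10% nudge
            }
            m.appendReplacement(child,
                Matcher.quoteReplacement(String.format("%.4f", v)));
        }
        m.appendTail(child);
        return child.toString();
    }

    // One generation: replicate the parent into n mutated children. In the
    // real system each child would be rendered by POVRay and the user would
    // pick the image whose script becomes the next parent.
    static List<String> generation(String parent, int n, Random rng) {
        List<String> children = new ArrayList<>();
        for (int i = 0; i < n; i++) children.add(mutate(parent, 0.5, rng));
        return children;
    }
}
```

Because selection is done by a human looking at rendered images, the "fitness function" of a conventional genetic algorithm is replaced by the user's aesthetic judgement.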

[25] J G Lloyd. A Security Analysis of a Biometric Authentication System using UMLsec and the Java Modeling Language. M801 MSC Dissertation 2007/25, September 2007. [ bib | .pdf ]
The UMLsec approach is intended to be a simpler way for software designers to specify system security requirements and to verify their correctness. UMLsec is a set of Unified Modeling Language (UML) stereotypes with associated tags and constraints that can be added to UML diagrams to specify security requirements such as secrecy, integrity and authenticity. The approach includes the description of protocol security requirements using a cryptographic protocol notation. The UML diagrams can then be analysed by a set of tools to automatically verify these requirements for correctness. However, even if the specification is provably correct, security flaws might be introduced during the design, implementation and subsequent maintenance of the system through errors and omissions. The UMLsec approach includes a set of techniques and tools that seek to automatically verify that implemented code does not contain security flaws, but these techniques and tools do not yet relate back to the security requirements specified in UMLsec, so ways are needed to verify that the implemented system is correct with respect to this specification. For this dissertation, a prototype biometric authentication system was designed using UMLsec, both to evaluate how easy UMLsec is to use in this context and to investigate how readily a system can be implemented in Java from such a design. The use of the Java Modeling Language (JML) was then examined as a way of relating the code back to its specification, to verify that the implementation was secure. The UMLsec approach was effective in specifying security requirements succinctly and sufficiently precisely to avoid significant change during coding, although the capabilities of the implementation language need to be taken into account to avoid redundancy in the specification. The threat model was particularly useful in clarifying the extent of an adversary’s access to the system.
However, UMLsec is not a design or implementation approach and so does not assist with issues such as selecting the type or strength of security algorithms. The approach should contribute to reducing the high training and usage costs of formal methods, but it will need training, simpler documentation and CASE tool support to appeal to industry. The UMLsec specification would also need to be maintained throughout the life of a system so that all changes made could be verified, which would increase maintenance effort and cost. JML was applied to parts of the prototype code and helped to verify it by focusing attention on the consistency of the code with its UMLsec specification, which revealed a number of security flaws and weaknesses. However, a subsequent manual check revealed more flaws, design weaknesses and inconsistencies in the UMLsec specification, some of which would not have been revealed by JML even if it had been fully applied. The jmlc and ESC/Java 2 tools were used to compile and statically check JML against the Java code. However, there is no tool support for verifying the JML against a UML specification, so the Java code could not be verified against its UMLsec specification via JML. The JML specifications might therefore not completely reflect the UMLsec specification. The value of JML was limited because it was applied after code development to a software design that was not entirely suitable; it would have been more useful to have used it to specify methods during design, and to have made less use of the Object Constraint Language (OCL) to specify class operations. JML was also sometimes cumbersome to use in verifying security requirements. Any organisation adopting JML would therefore need to develop a design style guide and security patterns, and provide adequate training for designers and developers.
Using JML does not eliminate security flaws, since flaws may also be present in products, in design features not implemented in code, in infrastructure, and in associated business and operational processes. Future research should develop a tool to generate draft JML specifications from UMLsec sequence diagrams, to improve JML coding efficiency and reduce the risk of omissions. Other research might map UMLsec to features of implementation-language frameworks, develop UMLsec and JML security patterns, evaluate other JML tools in a security-requirements context, and integrate these techniques within a coherent security systems development method.
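As a flavour of how JML ties code back to a requirement, the hypothetical method below (not from the dissertation; the class name and threshold are invented for illustration) annotates a biometric-acceptance check with a JML precondition and postcondition. Tools such as ESC/Java 2 can check the method body against these annotations, while a plain Java compiler simply ignores them as comments.

```java
// Hypothetical example of JML method specification. The security
// requirement expressed here is that a biometric match is accepted
// if and only if its score reaches the configured threshold.
public class BiometricMatcher {
    public static final int THRESHOLD = 80; // invented acceptance score

    //@ requires 0 <= score && score <= 100;
    //@ ensures \result <==> score >= THRESHOLD;
    public /*@ pure @*/ boolean accept(int score) {
        return score >= THRESHOLD;
    }
}
```

The `ensures` clause is the traceability hook: if the requirement in the UMLsec model changes (say, a different threshold policy), the mismatch between annotation and code is what static checking can catch.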

[26] G McAleese. Improving Scansion with Syntax: an Investigation into the Effectiveness of a Syntactic Analysis of Poetry by Computer using Phonological Scansion Theory. M801 MSC Dissertation 2007/26, September 2007. [ bib | .pdf ]
Two thousand years ago, Quintilian (1922), a Roman poetry expert, described his method of finding the rhythm in lines of poetry (technically called ‘scansion’): “it is not so important for us to consider the actual feet (syllables paired up according to the poem’s meter and with one syllable emphasised) but rather the general rhythmical effect of the phrase”. English poetry experts have instead focused on the stressed and unstressed syllables in feet, producing scansions that they themselves admit are often inaccurate and subjective (Wright, 1994). Their theories are used in the latest computer scansion programs, the best of which, like Hartman’s Scandroid, scan barely as well as the undergraduates they are designed to help (Hartman, 2005). Figure 1 illustrates the inaccuracy and range of expert and computer scansions of a difficult line in Milton’s Paradise Lost: the closer to the ‘expected’ point, the better the scansion (Figure 1: accuracy of scansion systems). This dissertation evaluates a new computer scansion application, Calliope, which follows Quintilian’s phrase-based approach in two ways. Firstly, it addresses problems in identifying word stress by referencing syntactic data produced by the Antelope Natural Language Processing Parser (Proxem, 2007). Secondly, it implements a new scansion method based on research over the last twenty years into the influence of syntax on scansion (particularly Hayes and Kaun, 1996). This is the first time that syntax has been systematically integrated into a scansion program and the first time that some of these widely accepted research conclusions have been used to develop a scansion procedure. Calliope is assessed for speed and accuracy in producing stress assignments and scansions in lines of poetry by comparing it to Hartman’s program. Expert assessments serve as a benchmark, and non-expert assessments are used to identify acceptable alternatives.
Using the same criteria, Calliope is also assessed against the most popular scansion methods (including systems developed by Fabb, Groves and Plamondon). Compared to Scandroid, Calliope is far superior in assigning stresses. It is also much more effective in identifying meter, more accurate in predicting line scansion and identifies a wider range of meters. Compared to popular scansion methods, it equals or betters their performance in the same categories – see Figure 1. It seems that syntax makes a significant, but largely unexploited, contribution in determining both word stress and scansion in poetry. Suggestions are made for future research in this area.
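In outline, scansion of the kind discussed above amounts to mapping each syllable to a stressed (/) or unstressed (x) mark and comparing the result to a metrical template. The sketch below is hypothetical and is not Calliope: it hard-codes per-word stress patterns that a real system would derive from a phonetic dictionary and, in Calliope's case, from syntactic context.

```java
import java.util.Map;

// Illustrative sketch of scansion: per-word stress patterns (hard-coded
// here, dictionary- and syntax-derived in a real system) are concatenated
// into a line pattern and matched against a metrical template.
public class ScansionSketch {
    // Hypothetical lexicon: "x" = unstressed syllable, "/" = stressed.
    static final Map<String, String> STRESS = Map.of(
        "shall", "x", "i", "/", "compare", "x/", "thee", "x",
        "to", "/", "a", "x", "summers", "/x", "day", "/");

    // Concatenate per-word stress marks into a line-level pattern;
    // unknown words are marked "?".
    static String scan(String line) {
        StringBuilder sb = new StringBuilder();
        for (String w : line.toLowerCase().split("\\s+")) {
            sb.append(STRESS.getOrDefault(w, "?"));
        }
        return sb.toString();
    }

    // Iambic pentameter: five unstressed/stressed pairs.
    static boolean isIambicPentameter(String pattern) {
        return pattern.equals("x/".repeat(5));
    }
}
```

The hard part, and the dissertation's focus, is producing the stress marks in the first place: real words shift stress with syntactic role and phrasal context, which is exactly where a template-only approach breaks down.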

This file was generated by bibtex2html 1.95.