Those involved in managing projects are acutely aware of the benefits that could be achieved if knowledge could be transferred between projects and teams. There are few organisations, however, that support structured organisation-level management of knowledge. This study looks at the knowledge management methods employed by individuals in organisations that do not have well-defined or well-embedded knowledge management processes. The study finds that the techniques used most often are those that are recommended by Project Management methodologies, such as post-project reviews and lessons learned reports. It was also found that there is an interest in using some of the recently available collaborative, de-centralised technologies such as blogs that can be implemented on a small scale without necessarily having to be run at a corporate level. Through responses to a questionnaire and the application of the Delphi method, the participants in this research agreed on a number of areas where their own current practice does not meet their needs for knowledge transfer. The techniques currently being employed were limited in their ability to share knowledge effectively and promote re-use. An analysis of existing research shows that the following techniques may improve current practice:
- The use of patterns, analogous to those used in software development
- Blogging and other forms of collaboration software
- Using metrics to get quantifiable information
- A more narrative approach to knowledge capture, using storytelling to convey key points
- The use of oral history capture by semi-structured interview
All of these are found to provide the possibility of improved knowledge transfer across project teams.
Martin Huw Ball. Evaluation and Improvement of an Automatic Marking Tool for Diagrams. http://computing-reports.open.ac.uk/2007/TR2007-18.pdf. 2007. M801 MSc Dissertation, 2007/18.
Within the teaching environment, the ability to automatically grade student assignments and examination answers offers many potential advantages, including improving the speed and standardisation of the marking process, and the release of time spent marking by tutors which can then be utilised for teaching. A marking tool has been developed by the Open University as an investigation into the feasibility of automating the grading of student Entity Relationship (E-R) diagram answers. Two small-scale trials of the marking tool were performed by the developers. These showed that automated marking was potentially feasible, and that detailed failure analysis of the incorrectly marked diagrams could be used to develop and implement improvements to the marking tool. The research described in this dissertation extends the work of this team, using two new corpora of graded E-R diagrams. The first of these contains 197 diagrams and the second contains 32 diagrams. The second corpus introduced additional complexity as the model E-R diagram answer used entity subtypes within the solution diagram. This dissertation describes the work undertaken to establish an experimental procedure to grade the diagrams using the marking tool; the establishment of statistical and graphical techniques to measure the performance of the marking tool; the root cause analysis of marking tool deficiencies where diagrams are incorrectly graded; and the design, implementation and testing of improvements to the marking tool.
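The abstract does not give the marking tool's internals, but the core idea of awarding marks for model-answer elements found in a student diagram can be sketched minimally. All names and element encodings below are invented for illustration, not the dissertation's implementation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: award a fraction of the marks for each
// model-answer entity or relationship also present in the student diagram.
public class ErGrader {

    // Fraction of model-answer elements present in the student answer.
    public static double score(String[] model, String[] student) {
        Set<String> modelSet = new HashSet<>(Arrays.asList(model));
        if (modelSet.isEmpty()) return 1.0;
        Set<String> matched = new HashSet<>(modelSet);
        matched.retainAll(new HashSet<>(Arrays.asList(student)));
        return (double) matched.size() / modelSet.size();
    }

    public static void main(String[] args) {
        String[] model = {"Entity:Student", "Entity:Course",
                          "Rel:Student-enrols-Course"};
        String[] student = {"Entity:Student", "Entity:Course",
                            "Rel:Student-takes-Course"};
        // Two of three model elements matched exactly.
        System.out.printf("score = %.2f%n", score(model, student));
    }
}
```

A real tool must of course match elements approximately (synonyms, near-miss relationship names), which is exactly where the failure analysis described above comes in.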
Stephen Cullum. The Effect of Automatic Code Generation on Developer Job Satisfaction. http://computing-reports.open.ac.uk/2007/TR2007-19.pdf. 2007. M801 MSc Dissertation, 2007/19.
The aim of this paper is to determine the impact of code generation tools on the productivity of software developers, verifying whether the purported benefits exist and are sustainable long term. These software artefacts are capable of producing applications with minimal software developer involvement, potentially saving companies many hours of effort whilst enforcing a common approach. Productivity is defined as both performance and the psychological components that comprise enjoyment within a role. Performance factors are used to monitor how much faster processes become when using a code generator, whilst enjoyment defines the longevity of the approach. The Job Diagnostic Survey (JDS) (Hackman and Oldham, 1980) is an instrument designed to evaluate the effects of job changes on employees by focusing on a set of task-orientated motivational characteristics. The JDS evaluates the effects of work redesign on satisfaction within the work context. Hackman and Oldham’s work focuses on generic roles and therefore requires extension to record information pertinent to software development. To record information about the code generator the instrument was extended in two ways:
- Questions were added to focus on the role of software development;
- Answers were no longer simply quantitative, but could also be rated on a qualitative basis in terms of performance and enjoyment.
Code generator technologies were found to have an impact on productivity. Enjoyment and performance are linked via the enforced structure necessary for automation. In terms of performance the influence was highly positive. For enjoyment the results were mixed. Positive enjoyment issues do exist and are more numerous than the negative ones. However, negative enjoyment issues have the potential to cause significant conflict within any group of individuals. In conclusion, code generators are not a panacea.
To acquire the productivity benefits that are associated with code generation, the right people and a supporting culture are essential.
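The JDS summarises its core job dimensions in a single Motivating Potential Score (MPS), usually presented as ((skill variety + task identity + task significance) / 3) × autonomy × feedback. A minimal sketch of that calculation follows; the dimension values in `main` are invented illustrations on the JDS 1-7 scale, not data from this study:

```java
// Sketch of the Job Diagnostic Survey's Motivating Potential Score
// (Hackman and Oldham, 1980). Inputs are the five core job dimensions.
public class JdsScore {

    // MPS = ((skill variety + task identity + task significance) / 3)
    //       * autonomy * feedback
    public static double mps(double skillVariety, double taskIdentity,
                             double taskSignificance, double autonomy,
                             double feedback) {
        return ((skillVariety + taskIdentity + taskSignificance) / 3.0)
                * autonomy * feedback;
    }

    public static void main(String[] args) {
        // Hypothetical role profiles before and after adopting a generator:
        System.out.println(mps(6, 5, 5, 6, 4));  // hand-written code
        System.out.println(mps(4, 6, 5, 5, 5));  // generator-assisted
    }
}
```

The multiplicative form captures why the enjoyment results above matter: a generator that raises performance but depresses autonomy or skill variety can lower the overall score.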
J Erlank. Meta-Optimisation of Migration Topologies in Multi-Objective Evolutionary Algorithms. http://computing-reports.open.ac.uk/2007/TR2007-20.pdf. 2007. M801 MSc Dissertation, 2007/20.
Multi-objective evolutionary algorithms use a population-based approach and the principles of evolution to find Pareto-optimal solutions to problems with multiple competing objectives. Unlike single-objective problems, in which there is often only one optimal solution (for example a global minimum for a given function), a multi-objective problem will have a set of Pareto-optimal solutions, all of which are equally optimal in the sense that they represent different trade-offs between competing objectives; and therefore no one solution can be said to be the best. Evolutionary algorithms (whether single- or multi-objective) frequently model solutions to a problem using a population of chromosomes, each of which represents a single encoded solution. These chromosomes are then subjected to repeated evolutionary operators to allow the population to iterate towards one or more optimal solutions. Such operators typically include crossover (sharing genetic material between chromosomes), mutation (allowing variation within a chromosome) and selection (choosing good chromosomes from which to create the next generation). Evolutionary algorithms are also frequently implemented in parallel, and a common paradigm is to use a number of distributed sub-populations executing on different machines. In this so-called island paradigm, the migration operator is introduced. In a similar manner to biological evolution, two isolated sub-populations might diverge genetically. Migration is a process that occasionally allows genetic material to move between sub-populations. Research shows that this process may have a beneficial impact on the genetic diversity of both sub-populations, thus improving the overall quality of the results. The way in which genetic material moves between sub-populations is dictated by the links between the sub-populations (termed the migration topology). 
In addition to the benefits, there is also a cost to migration: solutions must be distributed over a network or between running processes. One class of problem that can be solved using an evolutionary algorithm is that of network optimisation, in particular the problem of finding an optimal topology for a network given certain criteria. Such problems can involve single or multiple objectives, depending on how the problem is defined. In this research I combine the concepts of network optimisation, migration topologies, diversity and cost, and examine the effectiveness of a multi-objective evolutionary algorithm in finding a Pareto-optimal migration topology for itself. In this way, an identical multi-objective algorithm is applied at both the problem-domain level and the meta-level. That is, the multi-objective algorithm searches for Pareto-optimal migration topologies for a selected domain problem which is itself being solved by a parallel distributed version of the same algorithm. To the best of my knowledge, this is a novel application of the same algorithm in this way, and it demonstrates the generality of such algorithms. The primary hypothesis is therefore that it is possible to find such Pareto-optimal topologies using the same algorithm at two levels. The results obtained support this hypothesis.
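Two of the building blocks described above can be sketched compactly: the Pareto-dominance test that defines "equally optimal" solutions, and a migration step on one simple topology. This is an invented illustration (a ring topology, minimised objectives), not the dissertation's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Pareto dominance and island migration (illustrative only).
public class ParetoIslands {

    // True if a dominates b: no worse on every (minimised) objective,
    // strictly better on at least one.
    public static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    // Migration on a ring topology: island i sends a copy of one of its
    // solutions to island (i + 1) mod n.
    public static void migrateRing(List<List<double[]>> islands) {
        int n = islands.size();
        List<double[]> emigrants = new ArrayList<>();
        for (List<double[]> pop : islands) emigrants.add(pop.get(0));
        for (int i = 0; i < n; i++) {
            islands.get((i + 1) % n).add(emigrants.get(i));
        }
    }

    public static void main(String[] args) {
        System.out.println(dominates(new double[]{1, 2}, new double[]{2, 2}));
        // A trade-off: better on one objective, worse on the other,
        // so neither solution dominates.
        System.out.println(dominates(new double[]{1, 3}, new double[]{2, 2}));
    }
}
```

The meta-level search described in the abstract replaces the fixed ring here with an evolved topology, trading migration benefit against communication cost.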
A L Harper. The Impact of Cultural Differences on Contracts Between Public Sector Police Forces and Private Sector Organisations. http://computing-reports.open.ac.uk/2007/TR2007-21.pdf. 2007. M801 MSc Dissertation, 2007/21.
The last 30 years have seen many public sector organisations outsource their information systems to other suppliers. The initial driver behind this outsourcing, “Compulsory Competitive Tendering”, focused on cost and efficiency. Latterly, “Best Value” has replaced this, requiring instead continuous improvement in a combination of economy, efficiency and effectiveness. Despite a growth in outsourcing, both trade and academic publications report a significant number of failed contracts. While there is no single cause for these failures, a breakdown in relationships is reported as a frequent factor. While it is acknowledged that a relationship comprises a number of different elements, this dissertation concentrates on organisational culture. Specifically, it explores the impact and management of the organisational cultural differences that are identified when public sector Police Forces engage in a contract process with the private sector. This investigation was achieved through the use of a questionnaire, interviews and the participant observation of a formal tender process. The results identify that UK Police Forces do not experience the high percentage of failures reported generally. The formal tender processes used are seen to be adequate for ensuring a successful contract where success is measured against hard values such as cost and quality. The importance of understanding each other’s values is recognised, as is the importance of a good customer–supplier relationship. Where organisational cultural differences are identified early in the tender process, the contract is more likely to have a successful outcome.
Y Jacques. Relation of Local Documents to Browsed Web Pages. http://computing-reports.open.ac.uk/2007/TR2007-22.pdf. 2007. M801 MSc Dissertation, 2007/22.
The relation between browsed web pages and local documents is of increasing interest as the browser takes centre stage and web indexes grow into the billions. Users are storing and accessing their data both locally and online. The division between what is remote and what is local is breaking down, and yet a huge gulf remains. There is a need for research on self-adapting interfaces that contextualise the user experience to reduce system complexity. Part of this larger task lies in reducing the distance between remote and local resources. The author developed a browser plug-in that presented users with local documents associated with browsed web pages, to test the nature of the relation between remote and local resources. Users were profiled and tested in various contexts, performing work, research, search and personal tasks. They examined the relations between the pages they were browsing, their characterisation as key phrases and the documents on their computers, rating the relevance and utility of contextualised local resources in a self-adapting interface. The application also logged user actions over a period of half a year as participants used the application, or ran it in the background while browsing, providing a vital source of data about the relation between the web and desktop computers. Though the user group was too small for the results to be statistically significant, preliminary findings revealed that users indicated high relevance between local and remote resources when performing work, research and search tasks, and that contextualised interfaces could reduce cognitive load, minimising the effort of finding local resources related to browsing activity.
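The abstract mentions characterising both web pages and local documents as key phrases and rating relevance between them. One simple way such a relation could be scored, sketched here with invented phrases and a Jaccard overlap (the plug-in's actual algorithm is not given in the abstract):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: rank a local document against a browsed page by the Jaccard
// overlap of their key-phrase sets (illustrative, not the plug-in's method).
public class KeyPhraseRelevance {

    public static double jaccard(String[] pagePhrases, String[] docPhrases) {
        Set<String> page = new HashSet<>(Arrays.asList(pagePhrases));
        Set<String> doc = new HashSet<>(Arrays.asList(docPhrases));
        Set<String> union = new HashSet<>(page);
        union.addAll(doc);
        if (union.isEmpty()) return 0.0;
        page.retainAll(doc);               // now holds the intersection
        return (double) page.size() / union.size();
    }

    public static void main(String[] args) {
        double r = jaccard(
            new String[]{"knowledge transfer", "delphi method"},
            new String[]{"knowledge transfer", "lessons learned"});
        System.out.printf("relevance = %.2f%n", r);  // one shared phrase of three
    }
}
```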
F Kavanagh. Analysis of a Phonetic and Rule Based Algorithm Approach to Determine Rhyme Categories and Patterns in Verse. http://computing-reports.open.ac.uk/2007/TR2007-23.pdf. 2007. M801 MSc Dissertation, 2007/23.
This dissertation analyses the use of a rule-based algorithm incorporating a phonetic dictionary to identify the rhyme structure and pattern of English language poetic verse. Current methods of rhyme analysis incorporate the use of rhyming tables to identify rhyme. These rhyming tables require considerable manual effort to maintain and update, and are cumbersome to use in allowing for the variety of pronunciation and accentuation used in English verse. The research conducted and outlined in this paper assesses the feasibility of using a rule-based algorithm to determine rhyme word pairs, and hence rhyme structure and patterns. For this research a prototype software application was developed using the Java programming language. Various rhyme pattern matching rules were modelled to identify particular rhyme types, and words were identified and matched through this series of rules. By loading a phonetic dictionary it was possible to represent each word in multiple formats: the original ‘base’ word, the phonetic spelling representation of the word and the stressed vowel patterns of the word. A fourth representation was also possible by applying phonetic representation rules based on the Phonix/Editex method described by Zobel and Dart (1996). Using these four representations of a single word to facilitate comparison with another improved both the occurrence of a match and the confidence in the match. The results of the research show that it is possible to apply a rule-based algorithm to the problem of rhyme structure identification in English verse. Although time is required to model the rules and rule exceptions to improve the accuracy of the rhyme identification, this approach requires no manual effort once these are modelled.
A single rule can correctly identify multiple rhyme types and word pairs, in comparison to a look-up rhyme table that can only identify a single rhyme pairing for each entry and must be updated for each new pairing regardless of its similarity to an existing one. By using a phonetic dictionary, the variation in pronunciation and accentuation in English verse may be accommodated by loading a dictionary with the appropriate phonetic representations (assuming a dictionary with the appropriate pronunciation exists). This means it is simple to alternate between identifying rhyme patterns in eighteenth-century English verse and identifying rhyme patterns in twentieth-century American English verse, simply by loading the appropriate phonetic dictionary.
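The dictionary-plus-rule idea can be sketched minimally. The dictionary entries below are invented toy values, and the "perfect rhyme" rule is simplified to comparing the final two phonemes, rather than matching from the stressed vowel onwards as a real rule would:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of rule-based rhyme matching over a phonetic dictionary
// (invented entries; not the dissertation's rules or dictionary).
public class RhymeMatcher {

    // word -> phonetic spelling, ARPAbet-like for illustration
    static final Map<String, String> PHONETIC = new HashMap<>();
    static {
        PHONETIC.put("night", "N AY T");
        PHONETIC.put("light", "L AY T");
        PHONETIC.put("laughter", "L AE F T ER");
    }

    // Simplified perfect-rhyme rule: the phonetic forms agree on their
    // final two phonemes.
    public static boolean rhymes(String a, String b) {
        String pa = PHONETIC.get(a), pb = PHONETIC.get(b);
        if (pa == null || pb == null) return false;
        return tail(pa).equals(tail(pb));
    }

    static String tail(String phonemes) {
        String[] p = phonemes.split(" ");
        return p.length < 2 ? phonemes : p[p.length - 2] + " " + p[p.length - 1];
    }

    public static void main(String[] args) {
        System.out.println(rhymes("night", "light"));     // true
        System.out.println(rhymes("night", "laughter"));  // false
    }
}
```

Swapping in a dictionary with different pronunciations, as the abstract notes, changes the matching behaviour without touching the rules.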
J Leach. The Persistence of Evolution. An Evolutionary Art Project: Creating Artworks by Evolving Scripts for a Ray-Tracing Program. http://computing-reports.open.ac.uk/2007/TR2007-24.pdf. 2007. M801 MSc Dissertation, 2007/24.
Evolutionary art is art produced by computers using genetic algorithms and similar techniques based on natural evolution. This project involved the design, implementation and evaluation of a novel evolutionary art system for creating 2D or 3D graphical images. The system uses the principles of natural evolution to ‘evolve’ scripts for the popular ray-tracing program POV-Ray (POVRay, n.d.). A POV-Ray script contains instructions for drawing a scene; it describes the forms and textures that make up the scene and how they are to be viewed. In this system, ‘child’ scripts are generated from a ‘parent’ script by replication and random mutation. The child scripts are then rendered to produce images, and the user selects one of the resulting images to become the new parent. Once complete, the system was used to examine various live issues in the field, such as at which points in the creative process evolutionary systems are most useful, and what kinds of artwork are most amenable to evolution. The system was also evaluated by testers to gain feedback, both on the system itself and on attitudes to evolutionary art in general. Informal testing and interview sessions were used to see whether users liked the system and whether they thought it might be useful.
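The replicate-and-mutate step can be sketched as a text transformation over a script: copy the parent and randomly perturb some of its numeric literals. This is an invented illustration (the dissertation's mutation operators are not given in the abstract), and the POV-Ray fragment is a toy scene:

```java
import java.util.Locale;
import java.util.Random;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: produce a 'child' script from a 'parent' by perturbing a random
// subset of its numeric literals (illustrative only).
public class ScriptMutator {

    static final Pattern NUMBER = Pattern.compile("-?\\d+(\\.\\d+)?");

    public static String mutate(String parent, double rate, long seed) {
        Random rng = new Random(seed);
        Matcher m = NUMBER.matcher(parent);
        StringBuffer child = new StringBuffer();
        while (m.find()) {
            double v = Double.parseDouble(m.group());
            if (rng.nextDouble() < rate) {
                v += rng.nextGaussian() * 0.5;   // small random drift
            }
            m.appendReplacement(child, String.format(Locale.ROOT, "%.2f", v));
        }
        m.appendTail(child);
        return child.toString();
    }

    public static void main(String[] args) {
        String parent = "sphere { <0, 1, 2>, 0.5 pigment { rgb <1, 0, 0> } }";
        System.out.println(mutate(parent, 0.3, 42));
    }
}
```

Each child script is then handed to the renderer, and the user's choice of image closes the selection loop.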
J G Lloyd. A Security Analysis of a Biometric Authentication System using UMLsec and the Java Modeling Language. http://computing-reports.open.ac.uk/2007/TR2007-25.pdf. 2007. M801 MSc Dissertation, 2007/25.
The UMLsec approach is intended to be a simpler way for software designers to specify system security requirements and to verify their correctness. UMLsec is a set of Unified Modeling Language (UML) stereotypes with associated tags and constraints that can be added to UML diagrams to specify security requirements such as secrecy, integrity and authenticity. The approach includes the description of protocol security requirements using a cryptographic protocol notation. The UML diagrams can then be analysed by a set of tools to automatically verify these requirements for correctness. However, even if the specification is provably correct, security flaws might be introduced during the design, implementation and subsequent maintenance of the system through errors and omissions. The UMLsec approach includes a set of techniques and tools that seek to automatically verify that implemented code does not contain security flaws, but these techniques and tools do not yet relate back to the security requirements specified in UMLsec, so ways are needed to verify that the implemented system is correct in relation to this specification. For this dissertation, a prototypical biometric authentication system was designed using UMLsec, to evaluate how easy UMLsec is to use in this context and to investigate how easy it is to implement a system in Java from this design. The use of the Java Modeling Language (JML) was then examined as a way to relate the code back to its specification and so verify that the implementation was secure. The UMLsec approach was effective in specifying security requirements succinctly and sufficiently precisely to avoid significant change during coding, although the capabilities of the implementation language need to be taken into account to avoid redundancy in the specification. The threat model was particularly useful in clarifying the extent of an adversary’s access to the system.
However, UMLsec is not a design or implementation approach and so does not assist with issues such as selecting the type or strength of security algorithms. The approach should contribute to reducing the high training and usage costs of formal methods, but it will need training, simpler documentation and CASE tool support to appeal to industry. The UMLsec specification would also need to be maintained throughout the life of a system so that all changes made could be verified, which would increase maintenance effort and cost. JML was applied to parts of the prototype code and helped to verify it by focusing attention on the consistency of the code with its UMLsec specification, which revealed a number of security flaws and weaknesses. However, a subsequent manual check revealed more flaws, design weaknesses and inconsistencies in the UMLsec specification, some of which would not have been revealed by JML even if it had been fully applied. The jmlc and ESC/Java2 tools were used to compile and statically check JML against the Java code. However, there is no tool support for verifying the JML against a UML specification, so the Java code could not be verified against its UMLsec specification via JML. The JML specifications might therefore not completely reflect the UMLsec specification. The value of JML was limited because it was applied after code development to a software design that was not entirely suitable; it would have been more useful to have used it to specify methods during design, and to have made less use of the Object Constraint Language (OCL) to specify class operations. JML was also sometimes cumbersome to use in verifying security requirements. Any organisation adopting JML would therefore need to develop a design style guide and security patterns, and provide adequate training for designers and developers.
Using JML does not eliminate security flaws, since flaws may be contained in products, in design features not implemented in code, in infrastructure, and in associated business and operational processes. Future research should develop a tool to generate draft JML specifications from UMLsec sequence diagrams, to improve JML coding efficiency and reduce the risk of omissions. Other research might map UMLsec to features of implementation language frameworks, develop UMLsec and JML security patterns, evaluate other JML tools in a security requirements context, and integrate these techniques within a coherent security systems development method.
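The way JML relates code back to a specification can be illustrated with a toy method (not the dissertation's biometric system): pre- and postconditions are written as `//@` annotation comments, which tools such as jmlc and ESC/Java2 can check against the implementation. The method and its contract are invented for illustration:

```java
// Toy sketch of JML contracts on a biometric-style decision method.
// The //@ lines are JML annotations: ordinary comments to the Java
// compiler, but checkable specifications to JML tools.
public class BiometricCheck {

    //@ requires 0.0 <= score && score <= 1.0;
    //@ requires 0.0 < threshold && threshold < 1.0;
    //@ ensures \result == (score >= threshold);
    public static boolean accept(double score, double threshold) {
        return score >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(accept(0.93, 0.85));  // true: match accepted
        System.out.println(accept(0.60, 0.85));  // false: rejected
    }
}
```

The verification gap discussed above is visible even here: nothing in the JML ties `threshold` to the value specified in the UMLsec model, which is exactly the traceability the proposed UMLsec-to-JML generation tool would address.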
G McAleese. Improving Scansion with Syntax: an Investigation into the Effectiveness of a Syntactic Analysis of Poetry by Computer using Phonological Scansion Theory. http://computing-reports.open.ac.uk/2007/TR2007-26.pdf. 2007. M801 MSc Dissertation, 2007/26.
Two thousand years ago, Quintilian (1922), a Roman poetry expert, described his method of finding the rhythm in lines of poetry (technically called ‘scansion’): “it is not so important for us to consider the actual feet (syllables paired up according to the poem’s meter and with one syllable emphasised) but rather the general rhythmical effect of the phrase”. English poetry experts have instead focused on the stressed and unstressed syllables in feet, producing scansions that they themselves admit are often inaccurate and subjective (Wright, 1994). Their theories are used in the latest computer scansion programs, the best of which, like Hartman’s Scandroid, scan barely as well as the undergraduates they are designed to help (Hartman, 2005). Figure 1 illustrates the inaccuracy and range of expert and computer scansions of a difficult line in Milton’s Paradise Lost: the closer to the ‘expected’ point, the better the scansion.

[Figure 1: accuracy of scansion systems]

This dissertation evaluates a new computer scansion application, Calliope, which follows Quintilian’s phrase-based approach in two ways. Firstly, it addresses problems in identifying word stress by referencing syntactic data produced by the Antelope Natural Language Processing Parser (Proxem, 2007). Secondly, it implements a new scansion method based on research over the last twenty years into the influence of syntax on scansion (particularly Hayes and Kaun, 1996). This is the first time that syntax has been systematically integrated into a scansion program, and the first time that some of these widely accepted research conclusions have been used to develop a scansion procedure. Calliope is assessed for speed and accuracy in producing stress assignments and scansions in lines of poetry by comparing it to Hartman’s program. Expert assessments serve as a benchmark, and non-expert assessments are used to identify acceptable alternatives.
Using the same criteria, Calliope is also assessed against the most popular scansion methods (including systems developed by Fabb, Groves and Plamondon). Compared to Scandroid, Calliope is far superior in assigning stresses. It is also much more effective in identifying meter, more accurate in predicting line scansion and identifies a wider range of meters. Compared to popular scansion methods, it equals or betters their performance in the same categories – see Figure 1. It seems that syntax makes a significant, but largely unexploited, contribution in determining both word stress and scansion in poetry. Suggestions are made for future research in this area.
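One small sub-task of scansion can be sketched to make the comparison concrete: scoring how well a line's syllable stress pattern fits a metrical template. This is a toy illustration (Calliope's phrase-based method is far richer); '0' marks an unstressed syllable and '1' a stressed one:

```java
// Toy sketch: fraction of syllable positions where a line's stress
// pattern agrees with a metrical template (not Calliope's method).
public class MeterFit {

    public static double fit(String stresses, String template) {
        if (template.isEmpty()) return 0.0;
        int n = Math.min(stresses.length(), template.length());
        int matches = 0;
        for (int i = 0; i < n; i++) {
            if (stresses.charAt(i) == template.charAt(i)) matches++;
        }
        return (double) matches / template.length();
    }

    public static void main(String[] args) {
        String iambicPentameter = "0101010101";
        System.out.printf("%.1f%n", fit("0101010101", iambicPentameter));
        // A line with an inverted first foot fits 8 of 10 positions:
        System.out.printf("%.1f%n", fit("1001010101", iambicPentameter));
    }
}
```

Position-by-position matching like this is exactly what produces the inaccurate, subjective scansions criticised above; the syntactic, phrase-level information Calliope adds is what the naive template cannot see.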