Authors: Shaw, Hazel Anne
Abstract: Computers and the software they run are pervasive, yet released software is often unreliable, which has many consequences. Loss of time and earnings can be caused by application software (such as word processors) behaving incorrectly or crashing. Serious disruption can occur, as in the 14th August 2003 blackouts in North East USA and Canada [1], and serious injury or death can result, as in the Therac-25 overdose incidents [2]. One way to improve the quality of software is to test it thoroughly. However, software testing is time consuming, the resources, capabilities and skills needed to carry it out are often not available, and the time required is often curtailed because of pressures to meet delivery deadlines [3]. Automation should allow more thorough testing in the time available and improve the quality of delivered software, but there are some problems with automation that this research addresses. Firstly, it is difficult to determine whether the system under test (SUT) has passed or failed a test. This is known as the oracle problem [4] and is often ignored in software testing research. Secondly, many software development organisations use an iterative and incremental process, known as evolutionary development, to write software. Following release, software continues evolving as customers demand new features and improvements to existing ones [5]. This evolution means that automated test suites must be maintained throughout the life of the software. A contribution of this research is a methodology that addresses automatic generation of test cases, execution of the test cases and evaluation of the outcomes from running each test. "Predecessor" software is used to solve the oracle problem. This is software that already exists, such as a previous version of evolving software, or software from a different vendor that solves the same, or similar, problems.
However, the resulting oracle is assumed not to be perfect, so rules are defined in an interface, which are used by the evaluator in the test evaluation stage to handle the expected differences. The interface also specifies functional inputs and outputs to the SUT. An algorithm has been developed that creates a Markov Chain Transition Matrix (MCTM) model of the SUT from the interface. Tests are then generated automatically by making a random walk of the MCTM. This means that instead of maintaining a large suite of tests, or a large model of the SUT, only the interface needs to be maintained.
1) NERC Steering Group (2004) Technical Analysis of the August 14, 2003, Blackout: What Happened, Why, and What Did We Learn? July 13th 2004. Available from: ftp://www.nerc.com/pub/sys/all_updl/docs/blackout/NERC_Final_Blackout_Report_07_13_04.pdf
2) Leveson, N. G., Turner, C. S. (1993) An investigation of the Therac-25 accidents. IEEE Computer, Vol 26, No 7, Pages 18-41.
3) LogicaCMG (2005) Testing Times for Board Rooms. Available from: http://www.logicacmg.com/pdf/tracked/testingTimesBoardRooms.pdf
4) Bertolino, A. (2003) Software Testing Research and Practice. ASM 2003, Lecture Notes in Computer Science, Vol 2589, Pages 1-21.
5) Sommerville, I. (2004) Software Engineering, 7th Edition. Addison Wesley. ISBN 0-321-21026-3.
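The methodology described above — a transition model built from an interface, a random walk over it to generate tests, and a rule-based evaluator comparing the SUT against a predecessor oracle — can be sketched in outline. This is a minimal illustration under stated assumptions: the interface format, all function and variable names, and the toy word-processor states are hypothetical, not the thesis's actual design.

```python
import random

# Hypothetical sketch only: the real thesis derives the MCTM from a
# richer interface specification; here the "interface" is simply a
# map from each operation to its legal successor operations.

def build_mctm(interface):
    """Build a Markov Chain Transition Matrix with uniform
    probabilities over the legal successors of each state."""
    mctm = {}
    for state, successors in interface.items():
        p = 1.0 / len(successors)
        mctm[state] = [(nxt, p) for nxt in successors]
    return mctm

def random_walk(mctm, start, length, rng=random):
    """Generate one test case as a random walk over the MCTM."""
    state, walk = start, [start]
    for _ in range(length):
        successors = mctm[state]
        r, cumulative = rng.random(), 0.0
        for nxt, p in successors:
            cumulative += p
            if r < cumulative:
                state = nxt
                break
        else:
            state = successors[-1][0]  # guard against float rounding
        walk.append(state)
    return walk

def evaluate(sut_out, oracle_out, rules):
    """Fail only on an output difference that no rule from the
    interface declares to be an expected difference."""
    for s, o in zip(sut_out, oracle_out):
        if s != o and not any(rule(s, o) for rule in rules):
            return "fail"
    return "pass"

# Toy interface: each operation of an imagined editor lists its
# legal successor operations.
interface = {
    "open": ["edit", "close"],
    "edit": ["edit", "save", "close"],
    "save": ["edit", "close"],
    "close": ["open"],
}

mctm = build_mctm(interface)
test_case = random_walk(mctm, "open", length=6)

# An expected difference: the predecessor stamps "v1:" where the
# new SUT stamps "v2:", so this mismatch should not fail the test.
rules = [lambda s, o: s.startswith("v2:") and o.startswith("v1:")]
verdict = evaluate(["v2:data"], ["v1:data"], rules)  # "pass"
```

The design point the abstract makes survives even in this toy form: the test suite itself is never stored or maintained — only the interface is, and arbitrarily many fresh test cases can be regenerated from it by further random walks.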
Citation: Shaw, H.A. (2005) 'Automated test of evolving software'. PhD thesis. University of Luton.
Publisher: University of Bedfordshire
Type: Thesis or dissertation
Description: A thesis submitted to the University of Luton, in partial fulfilment of the requirements for the degree of Doctor of Philosophy
Related items (by title, author, creator and subject):
A software definable MIMO testbed: architecture and functionality. Hong, Xuemin; Delannoy, Eric; Allen, Ben; Dohler, Mischa; King's College London (2009). Following the intensive theoretical studies of recently emerged MIMO technology, a variety of performance measures become important for investigating the challenges and trade-offs at various levels throughout the MIMO system design process. This paper presents a review of the MIMO testbed recently set up at King's College London. The architecture that distinguishes the testbed as a flexible and reconfigurable system is first presented. This covers both the hardware and software aspects, and is followed by a discussion of implementation methods and an evaluation of the system's research capabilities.
Design of software defined down-conversion and up-conversion: an overview. Zhang, Yue; Huang, Li-Ke; Maple, Carsten; Xuan, Qing (ZTE Corporation, China, 2011-12). In recent years, much attention has been paid to software-defined radio (SDR) technologies for multimode wireless systems. SDR can be defined as a radio communication system that uses software to modulate and demodulate radio signals. This article describes concepts, theory, and design principles for SDR down-conversion and up-conversion. Design issues in SDR down-conversion are discussed, and two different architectures, super-heterodyne and direct-conversion, are proposed. Design issues in SDR up-conversion are also discussed, and trade-offs in the design of filters, mixers, NCO, DAC, and signal processing are highlighted.
Identifying Mubasher software products through sentiment analysis of Arabic tweets. AL-Rubaiee, Hamed Saad; Qiu, Renxi; Li, Dayou; University of Bedfordshire (Institute of Electrical and Electronics Engineers Inc., 2016-05-02). Social media has recently become a rich resource for mining user sentiments. In this paper, Twitter has been chosen as a platform for opinion mining on trading strategy with Mubasher products; Mubasher is a leading stock analysis software provider in the Gulf region. This experiment proposes a model for sentiment analysis of Saudi Arabic (standard and Arabian Gulf dialect) tweets to extract feedback on Mubasher products. A hybrid of natural language processing and machine learning approaches is used to build models that classify tweets according to their sentiment polarity into one of the classes positive, negative and neutral. Firstly, document pre-processing is explored on the dataset. Secondly, Naive Bayes and Support Vector Machines (SVMs) are applied with different feature selection schemes such as TF-IDF (Term Frequency-Inverse Document Frequency) and BTO (Binary-Term Occurrence). Thirdly, the proposed model for sentiment analysis is expanded to obtain results for N-gram terms of tokens. Finally, the data was labelled by humans, which may introduce some mistakes into the labelling process; including a neutral class to generalise the classification therefore leads to different classification accuracies.