Browsing Centre for Research in Distributed Technologies (CREDIT) by Authors
Alignment method with application to gas chromatography / mass spectrometry screening
Hitchcock, Jonathan James; Li, Dayou; Maple, Carsten; Keech, Malcolm (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2012-09-08)
The paper presents a new spectrum-based alignment method that precisely adjusts the retention times of Gas Chromatography / Mass Spectrometry data so that corresponding scans of two samples can be chosen for comparison. It can align to within a fraction of a scan, which is equivalent to sub-pixel registration of images.
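The sub-scan precision described above is analogous to sub-pixel registration. A minimal sketch of one common way to achieve it, assuming the shift is estimated by cross-correlating total-ion chromatograms and refining the integer lag with parabolic peak interpolation (the abstract does not state which estimator the authors use):

```python
import numpy as np

def subscan_offset(ref, tgt):
    """Estimate the fractional scan shift between two chromatograms.

    Finds the integer lag maximising the cross-correlation, then
    refines it by parabolic (three-point) interpolation around the
    correlation peak, giving a sub-scan estimate.
    """
    n = len(ref)
    # Full cross-correlation after mean removal; index k corresponds
    # to a lag of k - (n - 1) scans of tgt relative to ref.
    corr = np.correlate(tgt - tgt.mean(), ref - ref.mean(), mode="full")
    k = int(np.argmax(corr))
    lag = float(k - (n - 1))
    # Fit a parabola through the peak and its two neighbours to
    # recover the fractional part of the shift.
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0:
            lag += 0.5 * (y0 - y2) / denom
    return lag
```

With two Gaussian peaks offset by 2.3 scans, the estimator recovers a lag close to 2.3 rather than the nearest integer.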
GC/MS data reduction using retention time alignment and spectral subtraction
Hitchcock, Jonathan James; Li, Dayou; Maple, Carsten; Keech, Malcolm; Teale, Phil; Hudson, Simon; University of Bedfordshire; HFL Ltd (British Mass Spectrometry Society, 2007-09-10)
The identification of chemical compounds in a complex mixture is a challenge. In the context of drug surveillance for the sporting world, large-scale screening of urine and blood samples is undertaken using methods such as GC/MS with low-resolution mass spectrometers that measure integer values of m/z ratio. The analysis of the GC/MS data can be automated using standard mass spectrometry software to detect peaks in the chromatograms and to search a library of mass spectra of known drugs. Because of noise and the presence of co-eluting compounds, the mass spectra are usually not exactly the same as those in the library. The match quality for a genuine match can be quite low, and the library search settings must be sufficiently sensitive so as not to miss positive samples. Therefore many false matches are reported for checking and validation by human analysts, and, since almost all the samples are negative, this process of checking is tedious, time-consuming and cost-inefficient. The usual technique to remove unwanted background is to subtract the spectrum of an adjacent scan of the same sample. Our proposed method instead subtracts the spectrum of a second similar sample. The intention is that any contributions from a substance common to the two samples will be eliminated, and that any substance that is in the first sample but not in the second will still be recorded in the subtracted dataset. Assuming a suitable second sample is available that does not contain banned substances, those that are present in the first sample can be more easily detected. The subtraction is applied to each scan of the test sample.
For this to work, it is essential that retention times are precisely aligned so that a corresponding scan of the second sample can be chosen. Although many methods of alignment are described in the literature, simple linear alignment based on a correlation measure is found to be sufficient. It is also necessary to scale the spectra being subtracted to allow for differences between the two samples in the concentration of the common compounds. Subtracting a similar dataset will reduce the number of peaks to be considered, and our hypothesis is that a library search of the resulting dataset will produce a smaller number of false matches than the same library search applied to the original data. An experiment was carried out to test this and the number of false matches was indeed found to be reduced. The more similar the second sample was to the first, the better was the result. It was also verified that true matches of compounds of interest are still reported by the library search of the subtracted data.
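The scan-by-scan subtraction with concentration scaling can be sketched as follows. The abstract does not say how the scale factor is chosen, so a least-squares fit of the reference spectrum to the test spectrum is assumed here, with negative residual intensities clipped to zero:

```python
import numpy as np

def subtract_reference(test_scan, ref_scan):
    """Subtract a scaled reference spectrum from a test spectrum.

    The scale factor is chosen by least squares so that peaks common
    to both samples cancel as far as possible; the residual keeps
    peaks present only in the test sample. (Least-squares scaling is
    an assumption, not taken from the paper.)
    """
    test = np.asarray(test_scan, dtype=float)
    ref = np.asarray(ref_scan, dtype=float)
    denom = np.dot(ref, ref)
    scale = np.dot(test, ref) / denom if denom > 0 else 0.0
    # Intensities cannot be negative, so clip the residual at zero.
    return np.clip(test - scale * ref, 0.0, None)
```

In this sketch a compound present in both samples cancels even when its concentration differs between them, while a compound unique to the test sample survives for the subsequent library search.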
A novel scalable parallel algorithm for finding optimal paths over heterogeneous terrain
Maple, Carsten; Hitchcock, Jonathan James (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2005-07-07)
The area of path planning has received a great deal of attention recently. Algorithms are required that can deliver optimal paths for robots over homogeneous or non-homogeneous terrain. Optimal paths may be those that involve the shortest distance travelled, the fewest turns, or the fewest ascents and descents. The often highly complex nature of terrains and the necessity for real-time solutions have led to a requirement for parallel algorithms. Such problems have been notoriously difficult to parallelise efficiently; indeed, it has been said that an efficiency of 25-60% should be considered a success. In this paper we present a parallel algorithm for finding optimal paths over non-homogeneous terrain that demonstrates superlinear speed-up.
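The underlying sequential problem can be illustrated with Dijkstra's algorithm on a grid of per-cell traversal costs; the paper's parallel decomposition is not reproduced here, and the 4-connected grid, cost model (entering a cell costs its weight), and function name are illustrative assumptions:

```python
import heapq

def shortest_path_cost(cost, start, goal):
    """Dijkstra's algorithm over a 4-connected cost grid.

    `cost[r][c]` is the price of entering cell (r, c), modelling
    non-homogeneous terrain; returns the minimum total cost of a
    path from `start` to `goal`, including both endpoint cells.
    """
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    sr, sc = start
    dist[sr][sc] = cost[sr][sc]
    heap = [(dist[sr][sc], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return INF
```

On a grid with an expensive central cell, the returned path skirts the obstacle, which is the behaviour the parallel algorithm must preserve while partitioning the terrain across processors.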