    • The refocusing distance of a standard plenoptic photograph

      Hahne, Christopher; Aggoun, Amar; Velisavljević, Vladan; University of Bedfordshire (IEEE, 2015-06-12)
      In recent years, the plenoptic camera has attracted increasing interest in the field of computer vision. Its capability of capturing three-dimensional image data is achieved by an array of micro lenses placed in front of a traditional image sensor. The acquired light field data allows for the reconstruction of photographs focused at different depths. Given the plenoptic camera parameters, the metric distance of refocused objects may be retrieved with the aid of geometric ray tracing. Until now, there has been a lack of experimental results using real image data to validate this conceptual solution. This paper presents the first such experimental work, based on a new ray tracing model that takes more accurate micro image centre positions into account. To evaluate the developed method, the blur metric of objects in a refocused image stack is measured and compared with the model's predictions. The results indicate an accurate approximation for distant objects, with deviations for objects closer to the camera.
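      As a rough illustration of the evaluation step (measuring a blur metric over a refocused image stack), the sketch below scores each slice with a variance-of-Laplacian sharpness measure and picks the sharpest slice for a region of interest. The metric, synthetic data and SciPy-based implementation are illustrative assumptions, not the paper's own evaluation code.

          import numpy as np
          from scipy.ndimage import gaussian_filter, laplace

          def sharpness(img):
              """Variance-of-Laplacian focus measure: higher means sharper."""
              return laplace(img.astype(np.float64)).var()

          def best_focus_slice(stack, roi):
              """Return the index of the refocused slice in which the region of
              interest appears sharpest. `stack` is a list of 2-D grayscale
              arrays, `roi` is (row_slice, col_slice)."""
              scores = [sharpness(img[roi]) for img in stack]
              return int(np.argmax(scores)), scores

          # Synthetic stand-in for a refocused image stack: the same texture
          # blurred by different amounts, as if focused at different depths.
          rng = np.random.default_rng(0)
          base = rng.random((128, 128))
          stack = [gaussian_filter(base, sigma=s) for s in (3.0, 2.0, 0.5, 1.0, 2.5)]
          idx, scores = best_focus_slice(stack, (slice(32, 96), slice(32, 96)))
          print("sharpest slice:", idx)   # expected: 2 (least blurred)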
    • Dynamic user equipment-based hysteresis-adjusting algorithm in LTE femtocell networks

      Xiao, Zhu; Zhang, Xu; Maple, Carsten; Allen, Ben; Liu, Enjie; Mahato, Shyam Babu; University of Bedfordshire; Southeast University, Nanjing; Hunan University, Changsha; University of Warwick (IEEE, 2014-09)
      In long-term evolution (LTE) femtocell networks, hysteresis is one of the main parameters affecting handover performance, as poorly chosen values lead to unnecessary handovers, including ping-pong, early, late and incorrect handovers. In this study, the authors propose a hybrid algorithm that aims to obtain an optimised, unique hysteresis for each individual mobile user moving at various speeds during the inbound handover process. The algorithm is proposed for two-tier scenarios comprising macrocells and femtocells. A centralised function evaluates the overall handover performance, and the handover aggregate performance indicator (HAPI) is then used to determine an optimal configuration. Based on the received reference signal-to-interference-plus-noise ratio, a distributed function residing on the user equipment (UE) obtains an optimal, unique hysteresis for the individual UE. A theoretical analysis with three indication boundaries is provided to evaluate the proposed algorithm. A system-level simulation is presented in which the proposed algorithm outperforms existing approaches in terms of handover failure, call-drop and redundant handover ratios, and also achieves better overall system performance.
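      The selection logic can be pictured with a toy aggregate indicator: the sketch below combines unnecessary-handover ratios into a weighted score and picks the hysteresis value that minimises it. The weighting, the statistics and the function names (hapi, best_hysteresis) are illustrative assumptions; the paper's HAPI definition may differ.

          def hapi(ping_pong_ratio, failure_ratio, wrong_cell_ratio,
                   weights=(1.0, 1.0, 1.0)):
              """Toy handover aggregate performance indicator: a weighted sum of
              the unnecessary-handover ratios (the paper's definition may differ)."""
              w1, w2, w3 = weights
              return w1 * ping_pong_ratio + w2 * failure_ratio + w3 * wrong_cell_ratio

          def best_hysteresis(measurements):
              """Pick the hysteresis value (dB) whose measured ratios minimise HAPI.
              `measurements` maps hysteresis -> (ping_pong, failure, wrong_cell)."""
              return min(measurements, key=lambda h: hapi(*measurements[h]))

          # Hypothetical per-hysteresis statistics collected by the centralised function
          stats = {
              1.0: (0.20, 0.02, 0.05),
              2.0: (0.09, 0.03, 0.04),
              3.0: (0.04, 0.07, 0.06),
          }
          print(best_hysteresis(stats))   # 2.0 for these made-up numbers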
    • Applying Cross-cultural theory to understand users’ preferences on interactive information retrieval platform design

      Chessum, Karen; Liu, Haiming; Frommholz, Ingo; University of Bedfordshire (University of Nottingham, School of Computer Science., 2014-09)
      In this paper we examine how culture can be used to group users and to model their preferences in cross-cultural information retrieval, in order to investigate the relationship between users' search preferences and their cultural background. We first briefly review website localisation. We then examine culture and Hofstede's cultural dimensions, and identify a link between Hofstede's five dimensions and user experience. We drew an analogy for each of the five dimensions and developed six hypotheses from these analogies, which were then tested by means of a user study. Whilst the key findings from the study suggest that cross-cultural theory can be used to model users' preferences for information retrieval, further work is still needed on how cultural dimensions can be applied to inform search interface design.
    • Healthcare-event driven semantic knowledge extraction with hybrid data repository

      Yu, Hong Qing; Zhao, Xia; Zhen, Xin; Dong, Feng; Liu, Enjie; Clapworthy, Gordon J.; University of Bedfordshire (IEEE, 2014-08)
      In this paper, we introduce a Healthcare-Event (H-Event) based knowledge extraction approach built on a hybrid data repository. The repository collects (with each individual user's permission) dynamic, large-volume healthcare-related data from various sources such as wearable sensors, social media Web APIs and our own application. The proposed extraction approach relies on two data processing stages. The first is a data integration stage that dynamically retrieves the large data sets using public data service APIs and generates a set of large knowledge bases stored in NoSQL storage. This paper focuses on the second stage: H-Event based ontological knowledge extraction for detecting and monitoring a user's healthcare-related situations, such as medical symptoms, treatments, conditions and daily activities, from the NoSQL knowledge bases. The second stage can be seen as a post-processing step that derives more explicit healthcare knowledge about personalised health conditions and represents it in RDF format in a semantic triple repository to support further data analytics.
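      A minimal sketch of the shape of the second stage's output: mapping one healthcare event (as it might be retrieved from the NoSQL store) to RDF-style triples. The vocabulary URIs, field names and plain-string serialisation are placeholder assumptions, not the project's ontology or code.

          def event_to_triples(event, base="http://example.org/health#"):
              """Map a healthcare event (e.g. pulled from a NoSQL store) to simple
              subject-predicate-object triples in an N-Triples-like syntax.
              The vocabulary URIs here are placeholders, not the project's ontology."""
              subj = f"<{base}event/{event['id']}>"
              triples = [
                  (subj, f"<{base}hasType>", f'"{event["type"]}"'),
                  (subj, f"<{base}observedAt>", f'"{event["time"]}"'),
                  (subj, f"<{base}hasValue>", f'"{event["value"]}"'),
                  (subj, f"<{base}relatesTo>", f"<{base}user/{event['user']}>"),
              ]
              return ["%s %s %s ." % t for t in triples]

          heart_rate_event = {"id": "42", "user": "u7", "type": "heart_rate",
                              "time": "2014-08-01T09:30:00Z", "value": 88}
          print("\n".join(event_to_triples(heart_rate_event)))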
    • An algorithm for accurate taillight detection at night

      Boonsim, Noppakun; Prakoonwit, Simant; University of Bedfordshire (Foundation of Computer Science, 2014-08)
      Vehicle detection is an important process in many advanced driver assistance systems (ADAS), such as forward collision avoidance, time-to-collision (TTC) estimation and intelligent headlight control (IHC). This paper presents a new algorithm to detect a vehicle ahead by using its taillight pair. First, the proposed method extracts taillight candidate regions by filtering taillight-coloured regions and applying morphological operations. Second, candidate pairing and pair-symmetry analysis steps are applied to determine the taillight positions. The aim of this work is to improve the accuracy of taillight detection at night, where many bright-spot candidates arise from streetlamps and other factors in complex scenes. Experiments on a still-image dataset show that the proposed algorithm improves the taillight detection accuracy rate and remains robust on low-light images.
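      The two stages can be sketched with standard OpenCV operations (assuming OpenCV 4.x); the HSV thresholds, kernel size and pairing tolerances below are illustrative guesses rather than the paper's tuned values.

          import cv2
          import numpy as np

          def taillight_candidates(bgr, min_area=30):
              """Extract bright red blob candidates (illustrative thresholds)."""
              hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
              red1 = cv2.inRange(hsv, (0, 80, 120), (10, 255, 255))
              red2 = cv2.inRange(hsv, (170, 80, 120), (180, 255, 255))
              mask = cv2.morphologyEx(red1 | red2, cv2.MORPH_CLOSE,
                                      np.ones((5, 5), np.uint8))
              contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                             cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x API
              return [cv2.boundingRect(c) for c in contours
                      if cv2.contourArea(c) >= min_area]

          def pair_symmetric(boxes, y_tol=10, size_tol=0.4):
              """Keep candidate pairs lying on a similar image row with similar width."""
              pairs = []
              for i, (x1, y1, w1, h1) in enumerate(boxes):
                  for x2, y2, w2, h2 in boxes[i + 1:]:
                      if abs(y1 - y2) < y_tol and abs(w1 - w2) < size_tol * max(w1, w2):
                          pairs.append(((x1, y1, w1, h1), (x2, y2, w2, h2)))
              return pairs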
    • Baseline of virtual cameras acquired by a standard plenoptic camera setup

      Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljević, Vladan; Fernandez, Juan C. J.; University of Bedfordshire (IEEE, 2014-07-03)
      Plenoptic cameras have been used to computationally generate viewpoint images from the captured light field. This paper aims to provide a prediction of the corresponding virtual camera positions based on the parameters of a standard plenoptic camera setup. Furthermore, by tracing light rays from the sensor into object space, a solution is proposed to estimate the baseline of the viewpoints. Based on geometrical optics, the suggested approach has been implemented in Matlab and assessed using Zemax, a real ray tracing simulation tool, and the impact of different main lens locations is investigated. Results of the baseline approximation indicate that estimates obtained by the proposed model deviate by less than 0.2% from the more complex real ray tracing method.
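      The underlying machinery (paraxial ray propagation with ABCD matrices and the intersection of two traced rays in object space) can be sketched as follows; the focal lengths, spacings and choice of rays are illustrative assumptions, not the authors' calibrated model.

          import numpy as np

          def thin_lens(f):
              """Paraxial (ABCD) matrix of a thin lens with focal length f (metres)."""
              return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

          def free_space(d):
              """Paraxial (ABCD) matrix of propagation over a distance d (metres)."""
              return np.array([[1.0, d], [0.0, 1.0]])

          def intersect(ray_a, ray_b):
              """Distance z (from the last surface) at which two rays (y, theta) cross."""
              (ya, ta), (yb, tb) = ray_a, ray_b
              return (ya - yb) / (tb - ta)

          # Illustrative numbers only -- not a calibrated camera.
          f_micro, f_main = 1.25e-3, 50e-3     # micro lens / main lens focal lengths
          gap, d_main = 1.25e-3, 60e-3         # sensor-to-MLA gap, MLA-to-main-lens
          system = thin_lens(f_main) @ free_space(d_main) @ thin_lens(f_micro)

          def sensor_ray(y_pixel, y_lens):
              """Trace a ray leaving a sensor pixel aimed at a chosen micro lens
              centre, through the micro lens and main lens; returns (height, angle)
              just after the main lens."""
              theta0 = (y_lens - y_pixel) / gap
              return system @ free_space(gap) @ np.array([y_pixel, theta0])

          r1 = sensor_ray(y_pixel=0.0, y_lens=0.0)
          r2 = sensor_ray(y_pixel=125e-6, y_lens=125e-6)  # same relative pixel, next lens (125 µm pitch)
          print("rays intersect %.2f m beyond the main lens" % intersect(r1, r2))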
    • Super depth-map rendering by converting holoscopic viewpoint to perspective projection

      Alazawi, E.; Abbod, M.; Aggoun, Amar; Swash, M.R.; Fatah, O. Abdul; Fernandez, Juan C. J.; University of Bedfordshire; Brunel University (IEEE, 2014-07)
      The expansion of 3D technology will enable observers to perceive 3D without any eyewear devices. Holoscopic 3D imaging technology offers natural 3D visualisation of real 3D scenes that can be viewed by multiple viewers independently of their position. However, the creation of a super depth-map and the reconstruction of 3D objects from a holoscopic 3D image are still in their infancy. The aim of this work is to build a high-quality depth map of a real 3D scene from a holoscopic 3D image through the extraction of multi-view, high-resolution Viewpoint Images (VPIs), compensating for the poor features of individual VPIs. To manage this, we propose a reconstruction method based on the perspective formula that converts sets of directional, orthographic, low-resolution VPIs into perspective projection geometry. Following that, we implement an auto-feature-point algorithm that partitions the synthesized VPIs into distinctive Feature-Edge (FE) blocks, providing a localized feature detector responsible for the integration of 3D information. Detailed experiments demonstrate the reliability and efficiency of the proposed method, which outperforms state-of-the-art methods for depth map creation.
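      The starting point, extracting orthographic viewpoint images from the holoscopic capture, amounts to sampling one pixel per micro image; a minimal sketch assuming square micro images of a known pixel pitch is shown below (the subsequent perspective conversion and FE-block processing are not reproduced).

          import numpy as np

          def extract_viewpoint_images(holoscopic, pitch):
              """Slice a holoscopic (lenslet) image into low-resolution orthographic
              viewpoint images: VPI (u, v) collects pixel (u, v) from every micro
              image. `holoscopic` is a 2-D array whose micro images are pitch x pitch."""
              vpis = {}
              for u in range(pitch):
                  for v in range(pitch):
                      vpis[(u, v)] = holoscopic[u::pitch, v::pitch]
              return vpis

          # Synthetic example: a 12x12-pixel capture with 3x3-pixel micro images
          # yields 9 viewpoint images of 4x4 pixels each.
          demo = np.arange(12 * 12).reshape(12, 12)
          vpis = extract_viewpoint_images(demo, pitch=3)
          print(len(vpis), vpis[(0, 0)].shape)   # 9 (4, 4)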
    • Performance analysis of a generalized and autonomous DRX scheme

      Liu, Enjie; Ren, Weili; University of Bedfordshire (IEEE, 2014-07)
      A generalized and autonomous DRX (discontinuous reception) scheme, applicable to both the 3GPP and IEEE 802.16e standards, is analyzed by two-level Markov chain modeling together with the ETSI packet traffic model. Numerical analysis shows that the scheme is capable of autonomously adjusting the DRX cycle to keep up with changing UE activity levels with no increase in signaling overhead, thus producing a better tuned DRX operation. Quantitative comparison with the power saving schemes of the 3GPP and 802.16e standards demonstrates that it is both a generalization of, and advantageous over, these schemes.
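      The trade-off the scheme adapts (longer DRX cycles save more power but add wake-up latency) can be illustrated with back-of-the-envelope relations; the sketch below assumes a packet arriving at a uniformly random instant within the cycle and does not reproduce the paper's two-level Markov chain or the ETSI traffic model.

          def drx_metrics(on_duration_ms, cycle_ms):
              """Back-of-the-envelope DRX trade-off for one cycle length, assuming a
              packet arrives at a uniformly random instant within the cycle:
              - sleep ratio: fraction of time the receiver can be switched off,
              - mean wake-up delay: P(arrive while asleep) * mean residual sleep."""
              sleep_ms = cycle_ms - on_duration_ms
              sleep_ratio = sleep_ms / cycle_ms
              mean_delay_ms = (sleep_ms / cycle_ms) * (sleep_ms / 2.0)
              return sleep_ratio, mean_delay_ms

          for cycle in (40, 80, 160, 320):             # candidate DRX cycles in ms
              s, d = drx_metrics(on_duration_ms=10, cycle_ms=cycle)
              print(f"cycle {cycle:4d} ms: sleep ratio {s:.2f}, mean delay {d:5.1f} ms")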
    • Reference based holoscopic 3D camera aperture stitching for widening the overall viewing angle

      Swash, M.R.; Fernandez, Juan C. J.; Aggoun, Amar; Fatah, O. Abdul; Tsekleves, Emmanuel; Brunel University; University of Bedfordshire (IEEE, 2014-07)
      Holoscopic 3D imaging, also known as integral imaging, is a promising technique for creating full-colour 3D optical models that exist in space independently of the viewer. The images exhibit continuous parallax throughout the viewing zone. In order to achieve robust, real-time depth control, a single-aperture holoscopic 3D imaging camera records the holoscopic 3D image through a regularly spaced array of small lenslets, each of which views the scene at a slightly different angle to its neighbour. However, the main problem the holoscopic 3D camera aperture faces is that, with existing 2D camera sensors, it is not large enough to record larger scenes. This paper proposes a novel reference-based holoscopic 3D camera aperture stitching method that enlarges the overall viewing angle of the holoscopic 3D camera in post-production, after capture.
    • Taxonomy of optimisation techniques and applications

      Maple, Carsten; Prakash, Edmond C.; Huang, Wei; Qureshi, Adnan Nabeel Abid; University of Bedfordshire (Inderscience Publishers, 2014-06)
      This paper presents a review of recent advances in optimisation techniques. Optimisation is a complex task, and it is nearly impossible to identify a single technique that can act as a silver bullet in all contexts where scarcity and limitation of resources and constraints exist. The list of individual optimisation methods, their combinations and hybridisations is endless and, hence, it is imperative to classify them based on common attributes and to highlight some of the salient industrial and research domains where they have been applied. This paper concentrates in particular on the application areas of the different optimisation techniques, with the objective of establishing a practical taxonomy based on the combination of the heuristic or non-heuristic nature of the algorithms, the nature of the design variables and the nature of the equations. A précis of research at the University of Bedfordshire is also given to highlight the contributions made towards optimising different industrial and engineering problems, exemplifying the latest trends and research arenas.
    • Scheduling and optimisation of batch plants: model development and comparison of approaches

      Tan, Yaqing; Huang, Wei; Sun, Yanming; Yue, Yong; University of Bedfordshire (Inderscience Publishers, 2014-06)
      The use of parallel machines and storage facilities provides flexibility but raises challenges for batch plants. This research proposes a scheduling model for batch plants that considers complex real-world constraints which have seldom been addressed together. Two optimisation approaches, a genetic algorithm (GA) and constraint programming (CP), are applied to solve the complex batch plant scheduling problem. A case study and scalability tests are conducted to investigate the differing performance of GA and CP on the problem, in preparation for further application in research. It is found that the CP approach performs better in solving batch plant scheduling problems with complex constraints, although it requires more computation time. The 'restart' search strategy performs better than the two other search strategies tested for the CP approach.
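      As an illustration of the GA side only, the sketch below evolves job permutations for a simplified parallel-machine makespan problem; the job data, operators and parameters are toy assumptions and do not reflect the paper's batch-plant model, storage constraints or CP formulation.

          import random

          JOBS = [7, 3, 9, 4, 6, 5, 8, 2]          # processing times (hours), made up
          MACHINES = 3

          def makespan(order):
              """Decode a job permutation with list scheduling: each job in turn goes
              to the machine that becomes free earliest; return the latest finish."""
              loads = [0] * MACHINES
              for j in order:
                  m = loads.index(min(loads))
                  loads[m] += JOBS[j]
              return max(loads)

          def crossover(a, b):
              """Permutation crossover: keep a slice of parent a, then append the
              remaining jobs in the order they appear in parent b."""
              i, k = sorted(random.sample(range(len(a)), 2))
              child = a[i:k]
              child += [g for g in b if g not in child]
              return child

          def genetic_algorithm(pop_size=30, generations=200, mutation=0.2):
              pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
              for _ in range(generations):
                  pop.sort(key=makespan)
                  elite = pop[: pop_size // 2]
                  children = []
                  while len(elite) + len(children) < pop_size:
                      child = crossover(*random.sample(elite, 2))
                      if random.random() < mutation:            # swap mutation
                          i, k = random.sample(range(len(child)), 2)
                          child[i], child[k] = child[k], child[i]
                      children.append(child)
                  pop = elite + children
              best = min(pop, key=makespan)
              return best, makespan(best)

          random.seed(1)
          print(genetic_algorithm())    # prints a job order and its makespan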
    • Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications

      Prakoonwit, Simant; University of Bedfordshire (IGI Global, 2014-06)
      This article discusses a possible method for using a small number (e.g. five) of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are then automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then fitted to the reconstructed optimal landmarks to reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
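      The landmark reconstruction step relies on multi-view geometry; a standard linear (DLT) triangulation of one 3-D point from two calibrated views is sketched below with made-up projection matrices. The statistical DPDM fitting stage is not reproduced here.

          import numpy as np

          def triangulate(P1, P2, x1, x2):
              """Linear (DLT) triangulation: recover the 3-D point X such that
              x1 ~ P1 X and x2 ~ P2 X, where P1, P2 are 3x4 projection matrices and
              x1, x2 are matching 2-D image points."""
              A = np.stack([
                  x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1],
              ])
              _, _, vt = np.linalg.svd(A)
              X = vt[-1]
              return X[:3] / X[3]

          # Quick self-check with two made-up views of the point (10, 20, 300).
          P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # view at the origin
          P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])   # laterally shifted view
          X_true = np.array([10.0, 20.0, 300.0, 1.0])
          x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
          x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
          print(triangulate(P1, P2, x1, x2))    # ~[10. 20. 300.]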
    • Parallel centerline extraction on the GPU

      Liu, Baoquan; Telea, Alexandru C.; Roerdink, Jos B.T.M.; Clapworthy, Gordon J.; Williams, David; Yang, Po; Dong, Feng; Codreanu, Valeriu; Chiarini, Alessandro; University of Bedfordshire; et al. (Elsevier, 2014-06)
      Centerline extraction is important in a variety of visualization applications including shape analysis, geometry processing, and virtual endoscopy. Centerlines allow accurate measurements of length along winding tubular structures, assist automatic virtual navigation, and provide a path-planning system to control the movement and orientation of a virtual camera. However, efficiently computing centerlines with the desired accuracy has been a major challenge. Existing centerline methods are either not fast enough or not accurate enough for interactive application to complex 3D shapes. Some methods based on distance mapping are accurate, but these are sequential algorithms which have limited performance when running on the CPU. To our knowledge, there is no accurate parallel centerline algorithm that can take advantage of modern many-core parallel computing resources, such as GPUs, to perform automatic centerline extraction from large data volumes at interactive speed and with high accuracy. In this paper, we present a new parallel centerline extraction algorithm suitable for implementation on a GPU to produce highly accurate, 26-connected, one-voxel-thick centerlines at interactive speed. The resulting centerlines are as accurate as those produced by a state-of-the-art sequential CPU method [40], while being computed hundreds of times faster. Applications to fly-through path planning and virtual endoscopy are discussed. Experimental results demonstrating centeredness, robustness and efficiency are presented.
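      For contrast with the GPU method, a compact CPU formulation of a distance-mapping centerline is sketched below: Dijkstra's algorithm over the 26-connected voxel grid, with step costs that fall as the distance to the boundary grows, so the cheapest path hugs the medial axis. The cost function and toy volume are illustrative assumptions, not the paper's algorithm.

          import heapq
          import itertools
          import numpy as np
          from scipy.ndimage import distance_transform_edt

          def centerline(mask, start, end, eps=1e-3):
              """Approximate centerline between two voxels of a binary mask.
              Classic CPU formulation: shortest path where stepping onto a voxel far
              from the boundary is cheap, pulling the path towards the medial axis."""
              dt = distance_transform_edt(mask)
              cost = 1.0 / (dt + eps)                    # cheap in the middle of the tube
              offsets = [o for o in itertools.product((-1, 0, 1), repeat=3) if any(o)]
              dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
              while heap:
                  d, v = heapq.heappop(heap)
                  if v == end:
                      break
                  if d > dist.get(v, np.inf):
                      continue
                  for off in offsets:
                      n = tuple(v[i] + off[i] for i in range(3))
                      if all(0 <= n[i] < mask.shape[i] for i in range(3)) and mask[n]:
                          nd = d + cost[n]
                          if nd < dist.get(n, np.inf):
                              dist[n], prev[n] = nd, v
                              heapq.heappush(heap, (nd, n))
              path, v = [end], end                       # backtrack from end to start
              while v != start:
                  v = prev[v]
                  path.append(v)
              return path[::-1]

          # Tiny demo on a short straight tube represented as a binary volume.
          vol = np.zeros((20, 20, 5), dtype=bool)
          vol[2:18, 8:12, 1:4] = True
          path = centerline(vol, start=(2, 9, 2), end=(17, 10, 2))
          print(len(path), path[0], path[-1])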
    • Low cost estimation of IQ imbalance for direct conversion transmitters

      Li, Wei; Zhang, Yue; Huang, Li-Ke; Cosmas, John; Maple, Carsten; Xiong, Jian; University of Bedfordshire (IEEE, 2014-06)
      A low-cost, frequency-dependent (FD) I/Q imbalance self-compensation scheme is investigated in this paper. Direct conversion transmitters are widely used in wireless systems; however, unwanted image frequencies and distortions are inevitably introduced by the direct conversion architecture, and the problem is even more severe in wideband systems. Therefore, accurate estimation and compensation of I/Q imbalance is crucial. Current compensation methods rely on external instruments or an internal feedback path, which introduce additional impairments and are expensive. This paper proposes a low-cost FD I/Q imbalance compensation scheme based on self-IQ-demodulation that requires no external calibration instruments. First, the impairments of the baseband and RF components are investigated and an I/Q imbalance model is developed. Then, the proposed self-IQ-demodulation based compensation scheme is presented: the overall FD I/Q imbalance parameters are estimated by a self-IQ-demodulation algorithm, realised with minor modifications to the existing power detector circuit and a specially designed training signal. Afterwards, the estimated parameters are applied in a baseband equivalent compensator. The algorithm offers low computational complexity and low cost. The compensation performance is evaluated in laboratory measurements.
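      The compensation side can be illustrated with the usual frequency-independent complex-baseband model z = K1*x + K2*conj(x); the sketch below impairs a test tone and inverts the model assuming K1 and K2 are already known (i.e. after estimation). It is not the paper's FD estimator or power-detector algorithm.

          import numpy as np

          def impair(x, gain_db=0.5, phase_deg=3.0):
              """Apply a frequency-independent transmitter I/Q imbalance:
              z = K1*x + K2*conj(x), the usual complex-baseband model."""
              g = 10 ** (gain_db / 20.0) * np.exp(1j * np.deg2rad(phase_deg))
              k1, k2 = (1 + g) / 2, (1 - g) / 2
              return k1 * x + k2 * np.conj(x), (k1, k2)

          def compensate(z, k1, k2):
              """Invert the model above given (estimated) K1, K2."""
              return (np.conj(k1) * z - k2 * np.conj(z)) / (abs(k1) ** 2 - abs(k2) ** 2)

          def image_rejection_db(sig, f, fs):
              """Power ratio (dB) between the wanted tone at +f and its image at -f."""
              spec = np.fft.fftshift(np.fft.fft(sig)) / len(sig)
              freqs = np.fft.fftshift(np.fft.fftfreq(len(sig), 1 / fs))
              p = lambda f0: np.abs(spec[np.argmin(np.abs(freqs - f0))]) ** 2
              return 10 * np.log10(p(f) / p(-f))

          fs, n = 1e6, 4096
          f = fs / 8                                    # tone on an exact FFT bin
          t = np.arange(n) / fs
          x = np.exp(2j * np.pi * f * t)                # clean complex baseband tone
          z, (k1, k2) = impair(x)
          print("image rejection before: %.1f dB" % image_rejection_db(z, f, fs))
          print("max error after compensation: %.2e" % np.max(np.abs(compensate(z, k1, k2) - x)))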
    • Diode-based IQ imbalance estimation in direct conversion transmitters

      Xiong, Jian; Zhang, Yue; Li, Wei; Wang, Jin; Huang, Li-Ke; Maple, Carsten; University of Bedfordshire; Aeroflex, Stevenage; Shanghai Jiao Tong University (IEEE, 2014-02)
      Direct conversion transmitters are widely used in wireless systems for their inherent simplicity and low cost. In this architecture, the in-phase (I branch) and quadrature (Q branch) signals are upconverted to the RF frequency band by quadrature modulation. However, the drawback of the direct conversion architecture is its sensitivity to IQ imbalance caused by impairments of the analogue devices in the I and Q branches, which results in interference at mirror frequencies and degrades signal quality. Therefore, accurate measurement of IQ imbalance is crucial. A diode-based method for measuring broadband IQ imbalance is proposed which does not require additional measurement instruments. Measurement results show the effectiveness of this method.
    • License plate localization based on statistical measures of license plate features

      Boonsim, Noppakun; Prakoonwit, Simant; University of Bedfordshire (Association of Computer Electronics and Electrical Engineers, 2014-01)
      License plate localization is considered the most important part of a license plate recognition system, as the accuracy of license plate recognition depends on the ability to detect the license plate. This paper presents a novel method for license plate localization based on license plate features. The proposed method consists of two main processes. First, in the candidate region extraction step, the Sobel operator is applied to obtain vertical edges and potential candidate regions are then extracted by applying mathematical morphology operations [5]. Second, the license plate verification step employs the standard deviation of license plate features to confirm the license plate position. The experimental results show that the proposed method achieves high-quality license plate localization with a high accuracy rate of 98.26%.
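      A hedged sketch of the two processes using standard OpenCV calls (assuming OpenCV 4.x); the thresholds, kernel size, aspect-ratio range and standard-deviation cut-off are illustrative guesses, not the values behind the reported 98.26%.

          import cv2
          import numpy as np

          def plate_candidates(bgr, min_area=400):
              """Candidate extraction: vertical Sobel edges, binarisation, then a wide
              morphological closing so dense character edges merge into plate-shaped
              blobs (thresholds and kernel size are illustrative)."""
              gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
              edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
              _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
              closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,
                                        cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3)))
              contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                             cv2.CHAIN_APPROX_SIMPLE)    # OpenCV 4.x
              return [cv2.boundingRect(c) for c in contours
                      if cv2.contourArea(c) >= min_area]

          def verify(gray, box, ratio_range=(2.0, 6.0), min_std=30.0):
              """Verification: plates are wide, and their alternating characters and
              background give a high intensity standard deviation inside the box."""
              x, y, w, h = box
              ratio_ok = ratio_range[0] <= w / float(h) <= ratio_range[1]
              return ratio_ok and gray[y:y + h, x:x + w].std() >= min_std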
    • Synopsis of an engineering solution for a painful problem - phantom limb pain

      Mousavi, A.; Cole, J.; Kalganova, T.; Stone, R.; Zhang, J.; Pettifer, S.; Walker, R.; Nikopoulou-Smyrni, P.; Henderson Slater, D.; Aggoun, Amar; et al. (Scitepress, 2014)
      This paper is a synopsis of a recently proposed solution for treating patients who suffer from Phantom Limb Pain (PLP). The underpinning approach of this research and development project is based on an extension of "mirror box" therapy, which has had some promising results in pain reduction. An outline is provided of an immersive, individually tailored environment that gives the patient a virtually realised limb presence as a means to pain reduction. The virtual 3D holographic environment is intended to produce immersive, engaging and creative environments and tasks to encourage and maintain patients' interest, an important aspect in two of the more challenging populations under consideration (over-60s and war veterans). It is hoped that the system will reduce PLP by more than 3 points on an 11-point Visual Analog Scale (VAS), given that a reduction of less than 3 points could be attributed to distraction alone.
    • Light field geometry of a standard plenoptic camera

      Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljević, Vladan; Fernández, Juan Carlos Jácome; University of Bedfordshire (OSA, 2014)
      The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to conventional cameras, the SPC comprises a micro lens array and a main lens, projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in Matlab. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance of 3.5 m deviates by less than 11% from the Zemax simulation, whereas baseline estimates show no significant difference. Applying the proposed methodology will enable an alternative to traditional depth map acquisition by disparity analysis.
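      The paraxial conjugate relation that underlies any such distance prediction can be shown in a few lines; the 50 mm focal length and image-plane position below are illustrative numbers, not the paper's ray-tracing model or camera design.

          def refocus_distance(f_main_m, image_plane_m):
              """Thin-lens relation 1/f = 1/a + 1/b solved for the object-side
              distance a, given the image-side plane b on which the refocused image
              is synthesised (paraxial, illustrative only)."""
              return 1.0 / (1.0 / f_main_m - 1.0 / image_plane_m)

          # With a 50 mm main lens, synthesising the image about 50.72 mm behind the
          # lens corresponds to refocusing on objects roughly 3.5 m away.
          print(refocus_distance(0.05, 0.05072))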
    • Distributed pixel mapping for refining dark area in parallax barriers based holoscopic 3D Display

      Swash, M.R.; Aggoun, Amar; Fatah, O. Abdul; Fernandez, Juan C. J.; Alazawi, E.; Tsekleves, Emmanuel; Brunel University; University of Bedfordshire (IEEE, 2013-12)
      Autostereoscopic 3D displays are well developed and available on the market for both home and professional users. However, achieving sufficient 3D resolution with acceptable 3D image quality remains a great challenge. This paper proposes a novel pixel mapping method for refining the dark areas between two pinholes in parallax-barrier-based holoscopic 3D displays by distributing them into dark areas three times smaller and creating micro-pinholes. The proposed method projects the RED, GREEN and BLUE subpixels separately through three different pinholes, distributing the dark spaces into spaces three times smaller, which become unnoticeable and significantly improve the quality of the reconstructed holoscopic 3D scene. Parallax barrier technology refers to a pinhole sheet or device placed in front of or behind a liquid crystal display that allows viewpoint pixels to be projected into space, reconstructing a holoscopic 3D scene. The holoscopic technology mimics the imaging system of insects such as the fly, utilizing a single camera equipped with a large number of micro-lenses or pinholes to capture a scene, offering rich parallax information and an enhanced 3D feeling without the need for specific eyewear.
    • Pre-processing of holoscopic 3D image for autostereoscopic 3D displays

      Swash, M.R.; Aggoun, Amar; Fatah, O. Abdul; Li, B.; Fernandez, Juan C. J.; Alazawi, E.; Tsekleves, Emmanuel; University of Bedfordshire; Brunel University (2013-12)
      Holoscopic 3D imaging, also known as integral imaging, is an attractive technique for creating full-colour 3D optical models that exist in space independently of the viewer. The constructed 3D scene exhibits continuous parallax throughout the viewing zone. In order to achieve robust, real-time depth control, a single-aperture holoscopic 3D imaging camera records the holoscopic 3D image through a regularly spaced microlens array, each lens of which views the scene at a slightly different angle to its neighbour. However, the main problem is that the microlens array introduces dark borders into the recorded image, which cause errors at playback on a holoscopic 3D display. This paper proposes a reference-based pre-processing of holoscopic 3D images for autostereoscopic holoscopic 3D displays. The proposed method uses the microlenses as reference points to detect the extent of the introduced dark borders and to reduce or remove them from the holoscopic 3D image.
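      A minimal sketch of the idea of locating and suppressing periodic dark borders, here for image columns only and on a 2-D greyscale lenslet image; the relative-brightness threshold and hold-last-column fill are illustrative assumptions rather than the paper's reference-based method.

          import numpy as np

          def dark_border_columns(img, lens_pitch, rel_threshold=0.35):
              """Flag columns whose mean brightness falls well below that of their
              micro-lens period -- a crude indicator of the dark borders introduced
              by the microlens array. `img` is a 2-D greyscale lenslet image."""
              col_mean = img.mean(axis=0).astype(float)
              period_mean = np.array([
                  col_mean[(c // lens_pitch) * lens_pitch:
                           (c // lens_pitch + 1) * lens_pitch].mean()
                  for c in range(img.shape[1])
              ])
              return col_mean < rel_threshold * period_mean

          def suppress_dark_columns(img, dark):
              """Replace flagged columns by the nearest bright column to their left
              (simple hold-last-value fill; real pre-processing would interpolate)."""
              out = img.astype(float).copy()
              last_good = None
              for c in range(out.shape[1]):
                  if dark[c] and last_good is not None:
                      out[:, c] = out[:, last_good]
                  elif not dark[c]:
                      last_good = c
              return out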