Background subtraction methods in video streams: A review <s> 1) Median Filtering: <s> We present a new approach to the tracking of very non rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a "disturbance map", which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a "head" at the present position of the motion and a historical "tail" that indicates the previous locations of that motion. These disturbances serve as loci of attraction for "tracking particles" that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing: a busy ant column, waterfalls, rapids and, flowing streams, shoppers in a mall, and cars in a traffic intersection. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> 1) Median Filtering: <s> Background subtraction methods are widely exploited for moving object detection in videos in many applications, such as traffic monitoring, human motion capture, and video surveillance. How to correctly and efficiently model and update the background model and how to deal with shadows are two of the most distinguishing and challenging aspects of such approaches. The article proposes a general-purpose method that combines statistical assumptions with the object-level knowledge of moving objects, apparent objects (ghosts), and shadows acquired in the processing of the previous frames. Pixels belonging to moving objects, ghosts, and shadows are processed differently in order to supply an object-based selective update. The proposed approach exploits color information for both background subtraction and shadow detection to improve object segmentation and background update. 
The approach proves fast, flexible, and precise in terms of both pixel accuracy and reactivity to background changes. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> 1) Median Filtering: <s> Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches. <s> BIB003 | Median filtering is one of the most commonly used background subtraction algorithms. It estimates the background model by computing the median of each pixel over a buffer of stored frames, so an object is absorbed into the background only after it has remained static for more than half of the buffered frames. The benefits of this method are its simple construction, very fast processing, and ease of use; the model and background are not fixed, but change over time. The drawbacks of this approach are two-fold: it fails to track targets against animated backgrounds, and its accuracy depends on the speed of the target and on the frame rate BIB002 - . 2) Frame Difference: One of the simplest background subtraction methods is frame difference, which treats the previous frame as the background; the target is determined by subtracting the current frame from this background model - BIB001 . The absolute frame difference at time t + 1 is computed against the background, which is assumed to be the frame at time t.
This difference image shows intensity only at pixel locations that have changed between the two frames. Although the background has seemingly been removed, this approach only works when all foreground pixels are moving and all background pixels are static BIB003 , . |
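The two schemes above can be sketched in a few lines of NumPy (an illustrative sketch with made-up frame sizes and thresholds, not code from any of the reviewed papers):

```python
import numpy as np

def median_background(frames):
    """Estimate the background as the per-pixel median of a frame buffer."""
    return np.median(np.stack(frames), axis=0)

def frame_difference(current, previous, threshold=25):
    """Frame difference: foreground where |I_t - I_(t-1)| > threshold."""
    diff = np.abs(current.astype(np.int32) - previous.astype(np.int32))
    return diff > threshold

# Synthetic data: five noisy background frames, then an object appears.
rng = np.random.default_rng(0)
frames = [rng.integers(90, 110, size=(8, 8)) for _ in range(5)]
frames[-1][2:4, 2:4] = 250                     # bright 2x2 object in last frame

model = median_background(frames[:-1])         # background from clean frames
mask = np.abs(frames[-1] - model) > 50         # median-based subtraction
motion = frame_difference(frames[-1], frames[-2])
```

Both masks flag only the 2x2 object here, since the object never wins the per-pixel median vote and moves between the last two frames.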
Background subtraction methods in video streams: A review <s> 4) MIN-MAX Filtering: <s> Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems including automated visual surveillance, human-machine interface, and very low-bandwidth telecommunications. A typical method is background subtraction. Many background models have been introduced to deal with different problems. One of the successful solutions to these problems is to use a multi-colour background model per pixel proposed by Grimson et al [1, 2,3]. However, the method suffers from slow learning at the beginning, especially in busy environments. In addition, it can not distinguish between moving shadows and moving objects. This paper presents a method which improves this adaptive background mixture model. By reinvestigating the update equations, we utilise different equations at different phases. This allows our system learn faster and more accurately as well as adapts effectively to changing environment. A shadow detection scheme is also introduced in this paper. It is based on a computational colour space that makes use of our background model. A comparison has been made between the two algorithms. The results show the speed of learning and the accuracy of the model using our update algorithm over the Grimson et al’s tracker. When incorporated with the shadow detection, our method results in far better segmentation than that of Grimson et al. <s> BIB001 | Three different values are used in this algorithm to determine whether a pixel belongs to the background model or to the target: over a certain training period, a target pixel appears with intensities that exceed the observed background maximum or fall below the observed background minimum BIB001 . Haritaoglu et al. proposed another technique with the goal of local adaptation to noise.
Here, every background pixel is described by a maximum Ms, a minimum ms, and a maximum consecutive-frame difference Ds, all observed over a training sequence. Most schemes use forgetting factors or exponential weighting to control the contribution of past observations; these can be used for both background subtraction and background estimation BIB001 . The four following methods are non-recursive techniques. |
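A simplified version of this min-max scheme can be sketched as follows. The decision rule here (foreground when a pixel leaves the learned [min, max] envelope by more than the largest observed inter-frame difference) is one common reading of the W4-style model, applied to synthetic training data; the original method includes further refinements:

```python
import numpy as np

def train_minmax(frames):
    """Per-pixel minimum (ms), maximum (Ms) and largest consecutive-frame
    difference (Ds) over a training sequence, as in the W4-style model."""
    stack = np.stack(frames).astype(np.int32)
    ms = stack.min(axis=0)
    Ms = stack.max(axis=0)
    Ds = np.abs(np.diff(stack, axis=0)).max(axis=0)
    return ms, Ms, Ds

def classify(frame, ms, Ms, Ds):
    """Foreground where the pixel leaves the [ms, Ms] envelope by more
    than Ds (a simplified reading of the min-max rule)."""
    f = frame.astype(np.int32)
    return (ms - f > Ds) | (f - Ms > Ds)

rng = np.random.default_rng(1)
train = [rng.integers(95, 106, size=(6, 6)) for _ in range(10)]
ms, Ms, Ds = train_minmax(train)

test_frame = train[0].copy()
test_frame[1, 1] = 240          # bright intruding object
mask = classify(test_frame, ms, Ms, Ds)
```

Only the intruding pixel falls outside its learned envelope; the noisy but stationary pixels remain inside [ms, Ms] and stay background.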
Background subtraction methods in video streams: A review <s> 7) <s> We present a new approach to the tracking of very non rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a "disturbance map", which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a "head" at the present position of the motion and a historical "tail" that indicates the previous locations of that motion. These disturbances serve as loci of attraction for "tracking particles" that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing: a busy ant column, waterfalls, rapids and, flowing streams, shoppers in a mall, and cars in a traffic intersection. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> 7) <s> Identifying moving objects from a video sequence is a fundamental and critical task in many computer-vision applications. A common approach is to perform background subtraction, which identifies moving objects from the portion of a video frame that differs significantly from a background model. There are many challenges in developing a good background subtraction algorithm. First, it must be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects such as swinging leaves, rain, snow, and shadow cast by moving objects. Finally, its internal background model should react quickly to changes in background such as starting and stopping of vehicles. In this paper, we compare various background subtraction algorithms for detecting moving vehicles and pedestrians in urban traffic video sequences.
We consider approaches varying from simple techniques such as frame differencing and adaptive median filtering, to more sophisticated probabilistic modeling techniques. While complicated techniques often produce superior performance, our experiments show that simple techniques such as adaptive median filtering can produce good results with much lower computational complexity. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> 7) <s> AbstractVarious tracking methods have been developed to track objects with different degrees or levels of tracking ability. The ability or performance of each tracking method is dependent on the feature or data that is being used for tracking purpose. The ability of a tracking method can be measured by utilizing tracking metrics to give an indication of the tracking ability of an algorithm. This paper offers some insights into the issues and similarities of performance measurement reporting of video tracking algorithms and proposes a method in assessing the robustness of a video tracking algorithm. The proposed metric introduces another measure to measure the consistency of a tracking algorithm. The work presented in this paper shows that using only one metric to measure the tracking performance is inadequate. The proposed metric presented in this paper shows that the utilization of multiple metrics such as tracking success rate and tracking consistency or robustness would give a better indication of the ... <s> BIB003 </s> Background subtraction methods in video streams: A review <s> 7) <s> Background subtraction is one of the key techniques for automatic video analysis, especially in the domain of video surveillance. Although its importance, evaluations of recent background subtraction methods with respect to the challenges of video surveillance suffer from various shortcomings. To address this issue, we first identify the main challenges of background subtraction in the field of video surveillance. 
We then compare the performance of nine background subtraction methods with post-processing according to their ability to meet those challenges. Therefore, we introduce a new evaluation data set with accurate ground truth annotations and shadow masks. This enables us to provide precise in-depth evaluation of the strengths and drawbacks of background subtraction methods. <s> BIB004 | 7) Kalman Filtering: This technique is one of the most well-known recursive methods. The intensity values of the pixels are assumed to follow a normal distribution, and simple adaptive filters update the mean and variance of the background model to compensate for illumination changes and to absorb objects that stop for a long time into the background. Background estimation using Kalman filtering has been explained in . The main difference between the variants is the state space used for the tracking process; the simplest ones are based only on the luminance BIB001 - , BIB003 . 8) Hidden Markov Models: All of the models mentioned so far can accommodate gradual changes in lighting. However, when significant intensity changes occur, they all encounter serious problems. Another method, the Hidden Markov Model (HMM), models the variations in pixel intensity as discrete states corresponding to modes of the environment, for instance cloudy/sunny skies or lights on/off. A three-state HMM has been used for modelling the intensity of a pixel in traffic-monitoring applications BIB002 , BIB004 . |
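The recursive update behind such Kalman-style background estimation can be sketched as below. This is a minimal sketch in the spirit of these methods, with hypothetical gain values; a full Kalman filter would also propagate the estimate covariance:

```python
import numpy as np

def kalman_background_update(bg, frame, mask, gain_bg=0.1, gain_fg=0.01):
    """One step of a simplified Kalman-style background update: the gain
    is small where a pixel is currently foreground, so stopped objects
    blend into the background only slowly."""
    gain = np.where(mask, gain_fg, gain_bg)
    return bg + gain * (frame - bg)

bg = np.full((4, 4), 100.0)               # current background estimate
frame = bg.copy()
frame[0, 0] = 200.0                       # one foreground pixel
mask = np.abs(frame - bg) > 30            # crude foreground detection
bg_next = kalman_background_update(bg, frame, mask)
```

After one step the foreground pixel has moved only one percent of the way toward the new value, while the unchanged background pixels stay put.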
Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> A common method for real-time segmentation of moving regions in image sequences involves "background subtraction", or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian, distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems including automated visual surveillance, human-machine interface, and very low-bandwidth telecommunications. A typical method is background subtraction. Many background models have been introduced to deal with different problems. One of the successful solutions to these problems is to use a multi-colour background model per pixel proposed by Grimson et al [1, 2,3]. However, the method suffers from slow learning at the beginning, especially in busy environments. In addition, it can not distinguish between moving shadows and moving objects. This paper presents a method which improves this adaptive background mixture model. 
By reinvestigating the update equations, we utilise different equations at different phases. This allows our system learn faster and more accurately as well as adapts effectively to changing environment. A shadow detection scheme is also introduced in this paper. It is based on a computational colour space that makes use of our background model. A comparison has been made between the two algorithms. The results show the speed of learning and the accuracy of the model using our update algorithm over the Grimson et al’s tracker. When incorporated with the shadow detection, our method results in far better segmentation than that of Grimson et al. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> Mixture of Gaussians is a widely used approach for background modeling to detect moving objects from static cameras. Numerous improvements of the original method developed by Stauffer and Grimson [1] have been proposed over the recent years and the purpose of this paper is to provide a survey and an original classification of these improvements. We also discuss relevant issues to reduce the computation time. Firstly, the original MOG are reminded and discussed following the challenges met in video sequences. Then, we categorize the different improvements found in the literature. We have classified them in term of strategies used to improve the original MOG and we have discussed them in term of the critical situations they claim to handle. After analyzing the strategies and identifying their limitations, we conclude with several promising directions for future research. <s> BIB003 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> Locating moving objects in a video sequence is the first step of many computer vision applications.
Among the various motion-detection techniques, background subtraction methods are commonly implemented, especially for applications relying on a fixed camera. Since the basic inter-frame difference with global threshold is often a too simplistic method, more elaborate (and often probabilistic) methods have been proposed. These methods often aim at making the detection process more robust to noise, background motion and camera jitter. In this paper, we present commonly-implemented background subtraction algorithms and we evaluate them quantitatively. In order to gauge performances of each method, tests are performed on a wide range of real, synthetic and semi-synthetic video sequences representing different challenges. <s> BIB004 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> In this paper we present a novel method for foreground segmentation. Our proposed approach follows a non-parametric background modeling paradigm, thus the background is modeled by a history of recently observed pixel values. The foreground decision depends on a decision threshold. The background update is based on a learning parameter. We extend both of these parameters to dynamic per-pixel state variables and introduce dynamic controllers for each of them. Furthermore, both controllers are steered by an estimate of the background dynamics. In our experiments, the proposed Pixel-Based Adaptive Segmenter (PBAS) outperforms most state-of-the-art methods. <s> BIB005 | Modelling the background with a single image, as in the basic methods, requires a rigorously fixed background free of noise and artifacts. Since this requirement cannot be satisfied in every real-life scenario, many models instead represent each background pixel by a probability density function (PDF) learned over a series of training frames. Statistical methods using a single Gaussian come in two variants: the Gaussian Average proposed by Wren BIB002 , and the Simple Gaussian of Benezeth and his colleagues.
A single Gaussian, however, does not cope with multimodal backgrounds BIB004 . Many researchers have therefore worked on statistical methods using multiple Gaussians, known as the Gaussian Mixture Model (GMM). Examples include the work of Stauffer and Grimson BIB001 , KaewTraKulPong and Bowden BIB003 , Zivkovic , and Baf et al. BIB005 . To account for backgrounds made of animated textures (such as waves on the water or trees shaken by the wind), some authors proposed the use of multimodal PDFs, such as Stauffer and Grimson's method BIB001 . |
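The single-Gaussian case can be sketched as a per-pixel running mean and variance. This is an illustrative sketch of a Wren-style running Gaussian, with an assumed initial variance and learning rate; library implementations such as OpenCV's MOG2 background subtractor provide the full mixture variant:

```python
import numpy as np

class RunningGaussian:
    """Per-pixel single-Gaussian background model (a running-average
    sketch, not the full Pfinder implementation)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mu = first_frame.astype(np.float64)
        self.var = np.full_like(self.mu, 100.0)   # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        f = frame.astype(np.float64)
        d2 = (f - self.mu) ** 2
        fg = d2 > (self.k ** 2) * self.var        # |I - mu| > k * sigma
        # Update the model (here: unconditionally, with learning rate alpha).
        self.mu += self.alpha * (f - self.mu)
        self.var += self.alpha * (d2 - self.var)
        return fg

model = RunningGaussian(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[0, 0] = 200.0                               # one outlier pixel
fg = model.apply(frame)
```

Only the outlier exceeds the k-sigma band; a mixture model would additionally keep several (mu, var, weight) triplets per pixel to cover multimodal backgrounds.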
Background subtraction methods in video streams: A review <s> C. Fuzzy Based Methods <s> AbstractIn this paper, reported algorithms for the removal of fog are reviewed. Fog reduces the visibility of scene and thus performance of various computer vision algorithms which use feature information. Formation of fog is the function of the depth. Estimation of depth information is under constraint problem if single image is available. Hence, removal of fog requires assumptions or prior information. Fog removal algorithms estimate the depth information with various assumptions, which are discussed in detail here. Fog removal algorithm has a wide application in tracking and navigation, consumer electronics, and entertainment industries. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> C. Fuzzy Based Methods <s> In this paper we present a novel method for foreground segmentation. Our proposed approach follows a non-parametric background modeling paradigm, thus the background is modeled by a history of recently observed pixel values. The foreground decision depends on a decision threshold. The background update is based on a learning parameter. We extend both of these parameters to dynamic per-pixel state variables and introduce dynamic controllers for each of them. Furthermore, both controllers are steered by an estimate of the background dynamics. In our experiments, the proposed Pixel-Based Adaptive Segmenter (PBAS) outperforms most state-of-the-art methods. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> C. Fuzzy Based Methods <s> Based on Type-2 Fuzzy Gaussian Mixture Model (T2-FGMM) and Markov Random Field (MRF), we propose a novel background modeling method for motion detection in dynamic scenes. The key idea of the proposed approach is the successful introduction of the spatial-temporal constraints into the T2-FGMM by a Bayesian framework. 
The evaluation results in pixel level demonstrate that the proposed method performs better than the sound Gaussian Mixture Model (GMM) and T2-FGMM in such typical dynamic backgrounds as waving trees and water rippling. <s> BIB003 | Fuzzy logic is grounded in fuzzy set theory, a generalisation of the classical set theory studied by Georg Cantor. Unlike classical logic, fuzzy logic can operate on linguistic terms of natural human language, such as "small", "large", or "approximately equal to" BIB001 . Fuzzy-based techniques fall into three categories. Zhang and Xu were the first to work on a Fuzzy Sugeno Integral with Adaptive-Selective Update . Next, Baf et al. BIB002 proposed a Fuzzy Choquet Integral with Adaptive-Selective Update. Finally, the Fuzzy Gaussian of Sigari et al. was proposed. In the same year, Baf et al. proposed both the Type-2 Fuzzy GMM-UM and GMM-UV methods. Zhao and his colleagues extended these to Type-2 Fuzzy GMM-UM and GMM-UV with MRF BIB003 . |
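As a toy illustration of the fuzzy idea (not the Sugeno or Choquet integral methods cited above), the hard foreground threshold can be replaced by a membership degree in [0, 1]; the ramp bounds below are arbitrary:

```python
import numpy as np

def fuzzy_foreground(frame, bg, low=10.0, high=40.0):
    """Map the absolute background difference to a fuzzy membership
    degree in [0, 1] via a linear ramp, instead of a hard 0/1 threshold.
    Illustrative only; the cited methods aggregate several features
    with fuzzy integrals."""
    d = np.abs(frame.astype(np.float64) - bg)
    return np.clip((d - low) / (high - low), 0.0, 1.0)

bg = np.full((3, 3), 100.0)
frame = bg.copy()
frame[0, 0] = 125.0    # difference 25: partially foreground
frame[1, 1] = 200.0    # difference 100: fully foreground
membership = fuzzy_foreground(frame, bg)
```

Pixels near the decision boundary receive intermediate degrees instead of flipping between 0 and 1, which is what makes fuzzy aggregation more tolerant of noise.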
Background subtraction methods in video streams: A review <s> D. Non-Parametric Methods <s> Background modeling is an important component of many vision systems. Existing work in the area has mostly addressed scenes that consist of static or quasi-static structures. When the scene exhibits a persistent dynamic behavior in time, such an assumption is violated and detection performance deteriorates. In this paper, we propose a new method for the modeling and subtraction of such scenes. Towards the modeling of the dynamic characteristics, optical flow is computed and utilized as a feature in a higher dimensional space. Inherent ambiguities in the computation of features are addressed by using a data-dependent bandwidth for density estimation using kernels. Extensive experiments demonstrate the utility and performance of the proposed approach. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> D. Non-Parametric Methods <s> Metrology of vehicle trajectories has several applications in the field of road safety, particularly in dangerous curves. Actually, it is of great interest to observe trajectories of vehicles with the aim of designing a real time driver warning device in dangerous areas. This paper addresses the first step of a work with a video system placed along the road with the objective of vehicle's position and speed estimation. This system has been totally developed for this project and can record simultaneously three cameras with 640 times 480 pixels up to 30 frames per second (fps) and rangefinder informations. The best contribution of this paper is an original probabilistic background subtraction algorithm, first step of a global method (calibration, tracking, ...) implemented to be able to measure vehicle trajectories. Kinematic GPS (in post-processing) has been extensively used to get ground truth <s> BIB002 </s> Background subtraction methods in video streams: A review <s> D. 
Non-Parametric Methods <s> For a responsive audio art installation in a skylit atrium, we introduce a single-camera statistical segmentation and tracking algorithm. The algorithm combines statistical background image estimation, per-pixel Bayesian segmentation, and an approximate solution to the multi-target tracking problem using a bank of Kalman filters and Gale-Shapley matching. A heuristic confidence model enables selective filtering of tracks based on dynamic data. We demonstrate that our algorithm has improved recall and F2-score over existing methods in OpenCV 2.1 in a variety of situations. We further demonstrate that feedback between the tracking and the segmentation systems improves recall and F2-score. The system described operated effectively for 5–8 hours per day for 4 months; algorithms are evaluated on video from the camera installed in the atrium. Source code and sample data is open source and available in OpenCV. <s> BIB003 | Elgammal and his co-workers proposed the Kernel Density Estimation (KDE) algorithm. An unstructured approach can also be used to model a multimodal PDF; in this perspective, Elgammal et al. BIB001 proposed a Parzen-window estimate at each background pixel. The drawbacks of this method are its memory requirement (n * size(frame)) and the time needed to compute the kernel values (mitigated by a look-up-table approach). More sophisticated methods can also be envisaged, such as that of Mittal and Paragios BIB002 , which is based on "Variable Bandwidth Kernels". Goyat et al. worked on VuMeter , Hofmann BIB003 proposed the Pixel-Based Adaptive Segmenter (PBAS), and Godbehere et al. studied GMG. |
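The core of the Parzen-window estimate can be sketched for a single pixel as follows. This is a minimal sketch with an assumed Gaussian kernel bandwidth; the original method also handles colour channels and adapts the bandwidth per pixel:

```python
import numpy as np

def kde_probability(pixel_history, value, sigma=10.0):
    """Non-parametric (Parzen/KDE) background probability: the average of
    Gaussian kernels centred on the last n observed values of the pixel."""
    h = np.asarray(pixel_history, dtype=np.float64)
    k = np.exp(-0.5 * ((value - h) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return k.mean()

history = [100, 102, 98, 101, 99]       # recent samples for one pixel
p_bg = kde_probability(history, 100.0)  # close to the samples: high density
p_fg = kde_probability(history, 200.0)  # far from the samples: near zero
is_foreground = p_fg < 1e-6             # threshold on the density
```

The memory cost mentioned above follows directly from this formulation: every pixel must keep its n most recent samples, and every classification touches all of them (hence the look-up-table mitigation for the kernel values).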
Background subtraction methods in video streams: A review <s> G. Methods Based on Eigen Features <s> A fast robust eigen-background update algorithm is proposed for foreground object detection. The update procedure involves no eigen decomposition, thus faster than former eigen-background based algorithms. Meanwhile, the algorithm can robustly maintain the desired background model, resistant to outlying objects. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> G. Methods Based on Eigen Features <s> The paper presents a neural network based segmentation method which can extract moving objects in video. This proposed neural network architecture is multilayer so as to match the complexity of the frames in a video stream and deal with the problems of segmentation. The neural network combines inputs that exploit spatio-temporal correlation among pixels. Each of these unit themselves produce imperfect results, but the neural network learns to combine their results for better overall segmentation, even though it is trained with noisy results from a simpler method. The proposed algorithm converges from an initial stage where all the pixels are considered to be part of the background to a stage where only the appropriate pixels are classified as background. Results are shown to demonstrate the efficacy of the method compared to a more memory intensive MoG method. <s> BIB002 | Eigen background / SL-PCA was proposed by Oliver BIB001 . The key element of this method lies in its ability to learn the background model from unconstrained video sequences, even when they contain moving foreground objects. Furthermore, PCA can be applied to a sequence of n frames to compute the Eigen backgrounds, and it is faster than a Mixture of Gaussians approach. III. CHALLENGES OF BACKGROUND SUBTRACTION FOR VIDEO SURVEILLANCE Background subtraction methods have to deal with various challenges due to the nature of video surveillance.
Besides the standard challenges, many of the background subtraction challenges have been studied in the literature before [51] . We refer to the work of Bouwmans et al. for a comprehensive study. For instance, we bring up the following challenges: • Gradual or sudden illumination changes: BS methods have to adapt to gradual changes of the environment. • Dynamic background: Some parts of the video may contain moving objects that should nevertheless be regarded as background; such movement can be irregular or periodical, like waving trees. • Bootstrapping: If initialization data free from foreground objects is not available, the background model has to be initialized using a bootstrapping strategy . • Video noise: The video signal is generally superimposed by noise; BS approaches for video surveillance have to cope with such degraded signals, affected by different types of noise such as sensor noise or compression artifacts . • Camouflage: Deliberately or not, some objects in a video may differ only slightly from the appearance of the background, leading to incorrect classifications; this is especially important in surveillance applications. IV. DISCUSSION Recently, Tian et al. BIB002 proposed a selective Eigen background modelling and subtraction method that remains robust in crowded scenes. Three "selectivity" mechanisms are integrated into their method: selective training, selective model initialization, and pixel-level selective reconstruction. They used three Eigen background algorithms (C-EigenBg, BS-EigenBg, and PS-EigenBgNVF) and compared the results with non-Eigen background algorithms such as GMM, Bayes, Codebook, PBAS, and ViBe. As can be seen in the video, the Luque method fails to segment the foreground objects effectively. MoG provides better results than the Luque method, but the proposed method gives the best overall results, as Fig. 5 illustrates. Y.
Benezeth and his co-workers tested the BS algorithms on groups of videos illustrating different scenarios and thus different challenges. As can be seen from those Precision / Recall curves, the MinMax method is slightly less effective than the others, mostly because it exclusively works on grayscale data, thus ignoring colour. |
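Pixel-level precision/recall comparisons like the ones mentioned above reduce to simple counts of hits, false alarms, and misses against a ground-truth mask; a minimal sketch:

```python
import numpy as np

def precision_recall(pred, gt):
    """Pixel-level precision and recall of a predicted foreground mask
    against a ground-truth mask."""
    tp = np.logical_and(pred, gt).sum()        # correctly detected pixels
    fp = np.logical_and(pred, ~gt).sum()       # false alarms
    fn = np.logical_and(~pred, gt).sum()       # missed foreground pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True                  # 4 true foreground pixels
pred = gt.copy()
pred[1, 1] = False                   # one miss
pred[0, 0] = True                    # one false alarm
p, r = precision_recall(pred, gt)
```

Sweeping the detection threshold of a method and plotting these two numbers against each other yields the precision/recall curves used in such comparisons.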
Recent advances in features extraction and description algorithms: A comprehensive survey <s> I. INTRODUCTION <s> Recent technology and market trends have demanded the significant need for feasible solutions to video/camera systems and analytics. This paper provides a comprehensive account on theory and application of intelligent video systems and analytics. It highlights the video system architectures, tasks, and related analytic methods. It clearly demonstrates that the importance of the role that intelligent video systems and analytics play can be found in a variety of domains such as transportation and surveillance. Research directions are outlined with a focus on what is essential to achieve the goals of intelligent video systems and analytics. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> I. INTRODUCTION <s> In this work, a hardware-software co-design is proposed to effectively utilize FPGA resources for a prototype of an automated video surveillance system on a programmable platform. Time-critical steps of a foreground object detection algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non time-critical tasks are achieved by executing a high level language program on an embedded Nios-II processor. Custom and parallel processing modules are integrated into the video processing chain by a streaming protocol that aggressively utilizes on-chip memory to increase the throughput of the system. A data forwarding technique is incorporated with an on-chip buffering scheme to reduce computations and resources in the window-based operations. Other data control interfaces are achieved by software drivers that communicate with hardware controllers using Altera's Memory-Mapped protocol. The proposed prototype has demonstrated real-time processing capability that outperforms other implementations. 
<s> BIB002 | Feature detection and description from static and dynamic scenes is an active area of research and one of the most studied topics in the computer vision literature. The concept of feature detection and description refers to the process of identifying points in an image (interest points) that can be used to describe the image's contents, such as edges, corners, ridges, and blobs. It primarily aims at object detection, analysis, and tracking in a video stream, in order to describe the semantics of the objects' actions and behavior . It also has a long list of potential applications, which include, but are not limited to: access control to sensitive buildings, crowd and population statistical analysis, human detection and tracking, detection of suspicious actions, traffic analysis, vehicular tracking, and detection of military targets. In the last few years, we have witnessed a remarkable increase in the amount of homogeneous and inhomogeneous visual inputs, mainly due to the availability of cheap capturing devices (such as the built-in cameras in smart phones) and of free image hosting applications, websites, and servers (such as Instagram and Facebook). This drives the research communities to propose a number of novel, robust, and automated feature detection and description algorithms that can adapt to the needs of an application in terms of accuracy and performance. Most of the proposed algorithms require intensive computations, especially when used with high-definition video streams or with high-resolution satellite imagery applications. Hardware accelerators with massive processing capabilities are required to accelerate their computations for real-time applications.
Digital Signal Processors (DSPs), Field-Programmable Gate Arrays (FPGAs), System-on-Chips (SoCs), Application-Specific Integrated Circuits (ASICs), and Graphics Processing Unit (GPU) platforms with smarter, parallelizable, and pipelinable hardware processing designs could be targeted to alleviate this issue. Porting feature detection and description algorithms onto hardware platforms speeds up their computation by orders of magnitude. However, hardware constraints such as memory, power, scalability and format interfacing constitute a major bottleneck when scaling to high resolutions. The typical solution for these hardware-related issues is to scale down the resolution or to sacrifice the accuracy of the detected features. The state-of-the-art in machine and robotic vision, on the other hand, has lately concluded that it is the processing algorithms that will make a substantial contribution to resolving these issues BIB001 [3]. That is, computer vision algorithms might be targeted to resolve most of the problems associated with memory- and power-demanding hardware requirements, and might yield a big revolution for such systems BIB002 . This challenge invites researchers to invent, implement and test new algorithms, which mainly fall in the feature detection and description category, and which are the fundamental tools of many visual computing applications. To ensure the robustness of vision algorithms, an essential prerequisite is that they are designed to cover a wide range of possible scenarios with a high level of repeatability and affine invariance. Ultimately, studying all of these scenarios and parameters is virtually impossible; however, a clear understanding of all these variables is critical for a successful design. 
Key factors influencing real-time performance include the processing platform (and its associated constraints on memory, power and frequency in FPGAs, SoCs, GPUs, etc., which can force algorithmic modifications that may impact the desired performance), the monitored environment (e.g. illumination, reflections, shadows, view orientation, angle, etc.), and the application of interest (e.g. targets of interest, tolerable miss detection/false alarm rates and the desired trade-offs, and allowed latency). As such, a careful study of computer vision algorithms is essential. This paper is dedicated to providing a comprehensive overview of the state-of-the-art and recent advances in feature detection and description algorithms. Specifically, the paper starts by overviewing fundamental concepts that constitute the core of feature detection and description algorithms. It then compares, reports and discusses their performance and capabilities. The Maximally Stable Extremal Regions (MSER) algorithm and the Scale Invariant Feature Transform (SIFT) algorithm, being two of the best of their type, are selected for a report on their recent algorithmic derivatives. The rest of the paper is organized as follows. Section II provides an overview of the recent state-of-the-art feature detection and description algorithms proposed in the literature. It also summarizes and compares their performance and accuracy under various transformations. In Section III, the MSER and SIFT algorithms are studied in detail in terms of their recent derivatives. Finally, Section IV concludes the paper with outlooks into future work. |
Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. Local features <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. Local features <s> Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. 
We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. <s> BIB002 | Local image features (also known as interest points, key points, and salient features) can be defined as specific patterns that are unique with respect to their immediately neighboring pixels, and that are generally associated with one or more image properties BIB001 BIB002 . Such properties include edges, corners, regions, etc. Figure 1 (a) below presents a summary of such local features. Indeed, these local features represent essential anchor points that can summarize the content of the frame (with the aid of feature descriptors) while searching an image (or a video). These local features are then converted into numerical descriptors, representing unique and compact summarizations of these local features. Local (descriptive and invariant) features provide a powerful tool that can be used in a wide range of computer vision and robotics applications, such as real-time visual surveillance, image retrieval, video mining, object tracking, mosaicking, target detection, and wide-baseline matching, to name a few . To illustrate the usefulness of such local features, consider the following example. Given an aerial image, a detected edge can represent a street, corners may be street junctions, and homogeneous regions can represent cars, roundabouts or buildings (of course, this is resolution dependent). The term detector (a.k.a. extractor) traditionally refers to the algorithm or technique that detects (or extracts) these local features and prepares them to be passed to another processing stage that describes their contents, i.e. a feature descriptor algorithm. 
That is, feature extraction plays the role of an intermediate image processing stage between different computer vision algorithms. In this work, the terms detector and extractor are used interchangeably. |
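The detector-then-descriptor pipeline described above can be sketched minimally in code. This is a toy illustration, not any surveyed algorithm: the detector keeps local maxima of gradient magnitude as interest points, and the descriptor summarizes each point as a contrast-normalized patch vector (all function names and thresholds here are our own assumptions):

```python
import numpy as np

def detect_interest_points(img, threshold=0.1):
    # Toy detector: keep pixels whose gradient magnitude is a 3x3 local maximum.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    points = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            window = mag[y - 1:y + 2, x - 1:x + 2]
            if mag[y, x] >= threshold and mag[y, x] == window.max():
                points.append((y, x))
    return points

def describe(img, points, radius=2):
    # Toy descriptor: flattened patch, normalized to zero mean and unit contrast,
    # which gives invariance to affine changes of image intensity.
    descriptors = []
    for y, x in points:
        patch = img[max(0, y - radius):y + radius + 1,
                    max(0, x - radius):x + radius + 1].astype(float)
        v = patch.flatten()
        v = (v - v.mean()) / (v.std() + 1e-8)
        descriptors.append(v)
    return descriptors
```

Matching across frames then reduces to comparing descriptor vectors (e.g. by Euclidean distance), which is exactly where the distinctiveness of the underlying intensity patterns matters.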
Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. Ideal Local Features <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. Ideal Local Features <s> Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. 
We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. <s> BIB002 | In general, a local feature typically has a spatial extent due to its local pixel neighborhood. That is, it represents a subset of the frame that is semantically meaningful, e.g. corresponds to an object (or a part of an object). Ultimately, it is infeasible to localize all such features, as this would require high-level frame (scene) understanding as a prerequisite BIB001 . As such, feature detection algorithms try to locate these features directly, based on the intensity patterns in the input frame. The selection of these local features can indeed greatly impact the overall system performance BIB002 . Ideal features (and hence feature detectors) should typically have the following important qualities BIB001 : (1) Distinctiveness: the intensity patterns underlying the detected features should be rich in variations that can be used for distinguishing features and matching them. (2) Locality: features should be local so as to reduce the chances of getting occluded, as well as to allow simple estimation of geometric and photometric deformations between two frames with different views. (3) Quantity: the total number of detected features (i.e. feature density) should be sufficiently (but not excessively) large to reflect the frame's content in a compact form. (4) Accuracy: detected features should be located accurately with respect to different scales, shapes and pixel locations in a frame. (5) Efficiency: features should be identified efficiently, in a time short enough for real-time (i.e. time-critical) applications. 
(6) Repeatability: given two frames of the same object (or scene) taken under different viewing settings, a high percentage of the features detected in the overlapping visible part should be found in both frames. Repeatability is greatly affected by the following two qualities. (7) Invariance: in scenarios where a large deformation is expected (scale, rotation, etc.), the detector algorithm should model this deformation mathematically as precisely as possible so as to minimize its effect on the extracted features. (8) Robustness: in scenarios where a small deformation is expected (noise, blur, discretization effects, compression artifacts, etc.), the detection algorithm should be made less sensitive to such deformations; that is, its accuracy may degrade, but not drastically. Intuitively, a given computer vision application may favor one quality over another BIB001 . Repeatability, arguably the most important quality, is directly dependent on the other qualities (that is, improving one will equally improve repeatability). Nevertheless, regarding the other qualities, compromises typically need to be made. For example, distinctiveness and locality are competing properties (the more local a feature, the less distinctive it becomes, making feature matching more difficult). Efficiency and quantity are another example of such competing qualities. Highly dense features are likely to improve the object/scene recognition task, but this will negatively impact the computation time. |
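Repeatability, singled out above as the most important quality, can be measured concretely: map frame A's interest points into frame B through the known homography and count how many land within a small tolerance of a feature detected in B. A minimal sketch (the (x, y) point format and the pixel tolerance are our assumptions):

```python
import numpy as np

def repeatability(pts_a, pts_b, H, tol=2.0):
    # Fraction of frame-A features whose projection under homography H
    # falls within `tol` pixels of some frame-B feature.
    if not pts_a or not pts_b:
        return 0.0
    A = np.hstack([np.asarray(pts_a, float), np.ones((len(pts_a), 1))])
    proj = (H @ A.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # back to inhomogeneous coordinates
    B = np.asarray(pts_b, float)
    hits = sum(np.min(np.linalg.norm(B - p, axis=1)) <= tol for p in proj)
    return hits / len(pts_a)
```

Benchmark protocols such as the one of Mikolajczyk additionally require the corresponding region overlap to exceed a threshold, rather than using point distance alone.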
Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. 
Feature Detectors <s> The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris (Mikolajczyk and Schmid, 2002; Schaffalitzky and Zisserman, 2002) and Hessian points (Mikolajczyk and Schmid, 2002), a detector of `maximally stable extremal regions', proposed by Matas et al. (2002); an edge-based region detector (Tuytelaars and Van Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van Gool, 2000), and a detector of `salient regions', proposed by Kadir, Zisserman and Brady (2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. ::: ::: The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework. <s> BIB002 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB003 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. 
Feature Detectors <s> Local feature detectors and descriptors are widely used in many computer vision applications and various methods have been proposed during the past decade. There have been a number of evaluations focused on various aspects of local features, matching accuracy in particular, however there has been no comparisons considering the accuracy and speed trade-offs of recent extractors such as BRIEF, BRISK, ORB, MRRID, MROGH and LIOP. This paper provides a performance evaluation of recent feature detectors and compares their matching precision and speed in randomized kd-trees setup as well as an evaluation of binary descriptors with efficient computation of Hamming distance. <s> BIB004 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Numerous techniques and algorithms have been developed and implemented, primarily in software, for object tracking, detection and recognition. A few attempts have been made to implement some of the algorithms in hardware. However, those attempts have not yielded optimal results in terms of accuracy, power and memory requirements. The aim of this paper is to explore and investigate a number of possible algorithms for real-time video surveillance, revealing their various theories, relationships, shortcomings, advantages and disadvantages, and pointing out their unsolved problems of practical interest in principled way, which would be of tremendous value to engineers and researchers trying to decide what algorithm among those many in literature is most suitable to specific application and the particular real-time System-on-Chip (SoC) implementation. <s> BIB005 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Feature detection is a fundamental and important problem in computer vision and image processing. 
It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. <s> BIB006 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. 
The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods. <s> BIB007 | The technical literature is rich with new feature detection and description algorithms, and with surveys that compare their performance and qualities, such as those mentioned in the earlier section. The reader is referred to some of the elegant surveys in the literature BIB003 BIB002 . However, no ideal detector exists to date. This is mainly due to the virtually infinite number of possible computer vision applications (that may require one or multiple features), the divergence of imaging conditions (changes in scale, viewpoint, illumination and contrast, image quality, compression, etc.) and possible scenes. The computational efficiency of such detectors becomes even more important when they are considered for real-time applications BIB006 [8] BIB005 . As such, the most important local features include: (1) Edges: pixel patterns at which the intensities abruptly change (with a strong gradient magnitude); (2) Corners: points at which two (or more) edges intersect in the local neighborhood; and (3) Regions: closed sets of connected points with a similar homogeneity criterion, usually the intensity value. One can intuitively note that there is a strong correlation between these local features. For example, multiple edges sometimes surround a region, i.e. tracking the edges defines the region boundaries. Similarly, the intersection of edges defines the corners BIB007 . A summary of the well-known feature detectors can be found in table 1. The performance of many of the state-of-the-art detectors is compared in table 2. 
As was reported in many performance comparison surveys in the computer vision literature BIB003 [10] BIB004 , both the MSER BIB001 and SIFT algorithms have shown excellent performance in terms of invariance and the other feature qualities (see table 2, the last two rows). Due to these facts, the MSER and SIFT algorithms were extended into several derivatives with different enhancements (which will be reported in later sections). As such, the following section of this paper reports the algorithmic derivatives of the MSER and SIFT algorithms. |
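To make the corner notion above concrete, the classical Harris measure (one of the well-known corner detectors of this family) scores each pixel from the local gradient covariance matrix M: two large eigenvalues indicate a corner, one indicates an edge, and none a flat region. A simplified sketch, using a 3x3 box window in place of the usual Gaussian weighting:

```python
import numpy as np

def harris_response(img, k=0.05):
    # R = det(M) - k * trace(M)^2, where M sums gradient products over a window.
    gy, gx = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box3(a):
        # 3x3 box filter via a padded sliding sum (zero padding at borders).
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2
```

Corners are then read off as local maxima of R above a threshold; edges yield negative R and flat regions yield R near zero, which matches the edge/corner relationship noted above.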
Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper introduces an efficient 3D segmentation concept, which is based on extending the well-known Maximally Stable Extremal Region (MSER) detector to the third dimension. The extension allows the detection of stable 3D regions, which we call the Maximally Stable Volumes (MSVs). We present a very efficient way to detect the MSVs in quasi-linear time by analysis of the component tree. Two applications - 3D segmentation within simulated MR brain images and analysis of the 3D fiber network within digitized paper samples - show that reasonably good segmentation results are achieved with low computational effort. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper introduces a novel colour-based affine co-variant region detector. Our algorithm is an extension of the maximally stable extremal region (MSER) to colour. The extension to colour is done by looking at successive time-steps of an agglomerative clustering of image pixels. The selection of time-steps is stabilised against intensity scalings and image blur by modelling the distribution of edge magnitudes. The algorithm contains a novel edge significance measure based on a Poisson image noise model, which we show performs better than the commonly used Euclidean distance. We compare our algorithm to the original MSER detector and a competing colour-based blob feature detector, and show through a repeatability test that our detector performs better. We also extend the state of the art in feature repeatability tests, by using scenes consisting of two planes where one is piecewise transparent. This new test is able to evaluate how stable a feature is against changing backgrounds. <s> BIB002 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. 
MSER Derivatives <s> In this paper we present a new algorithm for computing Maximally Stable Extremal Regions (MSER), as invented by Matas et al. The standard algorithm makes use of a union-find data structure and takes quasi-linear time in the number of pixels. The new algorithm provides exactly identical results in true worst-case linear time. Moreover, the new algorithm uses significantly less memory and has better cache-locality, resulting in faster execution. Our CPU implementation performs twice as fast as a state-of-the-art FPGA implementation based on the standard algorithm. ::: ::: The new algorithm is based on a different computational ordering of the pixels, which is suggested by another immersion analogy than the one corresponding to the standard connected-component algorithm. With the new computational ordering, the pixels considered or visited at any point during computation consist of a single connected component of pixels in the image, resembling a flood-fill that adapts to the grey-level landscape. The computation only needs a priority queue of candidate pixels (the boundary of the single connected component), a single bit image masking visited pixels, and information for as many components as there are grey-levels in the image. This is substantially more compact in practice than the standard algorithm, where a large number of connected components must be considered in parallel. The new algorithm can also generate the component tree of the image in true linear time. The result shows that MSER detection is not tied to the union-find data structure, which may open more possibilities for parallelization. <s> BIB003 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper presents a novel hardware accelerator architecture for the linear-time Maximally Stable Extremal Regions (MSER) detector algorithm. 
In contrast to the standard MSER algorithm, the linear-time MSER implementation is more suitable for real-time applications of image retrieval in large-scale and high resolution datasets (e.g. satellite images). The linear-time MSER accelerator design is optimized by enhancing its flooding process (which is one of the major drawbacks of the standard linear-time MSER) using a structure that we called stack of pointers, which makes it memory-efficient as it reduces the memory requirement by nearly 90%. The accelerator is configurable and can be integrated with many image processing algorithms, allowing a wide spectrum of potential real-time applications to be realized even on small and power-limited devices. <s> BIB004 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper presents a novel implementation of the Maximally Stable Extremal Regions (MSER) detector on system-on-chip (SoC) using 65nm CMOS technology. The novel SoC was developed following the Application Specific Integrated Circuit (ASIC) design flow which significantly enhanced its realization and fabrication, and overall performances. The SoC has very low area requirement (around 0.05 mm2) and is capable of detecting both bright and dark MSERs in a single run, while computing simultaneously their associated regions' moments, simplifying its interfacing with other image algorithms (e.g. SIFT and SURF). The novel MSER SoC is power-efficient (requires 2.25 mW) and memory-efficient as it saves more than 31% of the memory space reported in the state-of-the-art MSER implementation on FPGA, making it suitable for mobile devices. With 256×256 resolution and its operating frequency of 133 MHz, the SoC is expected to have a 200 frames/second processing rate, making it suitable (when integrated with other algorithms in the system) for time-critical real-time applications such as visual surveillance. 
<s> BIB005 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> Extremal Regions of Extremum Levels (EREL) are regions detected from a set of all extremal regions of an image. Maximally Stable Extremal Regions (MSER) which is a novel affine covariant region detector, detects regions from a same set of extremal regions as well. Although MSER results in regions with almost high repeatability, it is heavily dependent on the union-find approach which is a fairly complicated algorithm, and should be completed sequentially. Furthermore, it detects regions with low repeatability under the blur transformations. The reason for the latter shortcoming is the absence of boundaries information in stability criterion. To tackle these problems we propose to employ prior information about boundaries of regions, which results in a novel region detector algorithm that not only outperforms MSER, but avoids the MSER’s rather complicated steps of union-finding. To achieve that, we introduce Maxima of Gradient Magnitudes (MGMs) and use them to find handful of Extremum Levels (ELs). The chosen ELs are then scanned to detect their Extremal Regions (ER). The proposed algorithm which is called Extremal Regions of Extremum Levels (EREL) has been tested on the public benchmark dataset of Mikolajczyk [1]. Our experimental evaluations illustrate that, in many cases EREL achieves higher repeatability scores than MSER even for very low overlap errors. <s> BIB006 | The maximally stable extremal regions (MSER) algorithm was proposed by Matas et al. in 2002 . Since then, a number of region detection algorithms have been proposed based on the MSER technique. The following is a list of five MSER derivatives, presented in chronological order. 
(1) N-Dimensional Extension: The algorithm was first extended in 2006 for 3D segmentation BIB001 by extending the neighborhood search and stability criteria to 3D image data instead of 2D intensity data. Later on, in 2007, another extension to an N-dimensional data space was proposed by Vedaldi in , and later that year an extension to vector-valued functions that can be exploited with the three color channels was also provided in BIB002 . (2) Linear-Time MSER Algorithm: In 2008, Nister and Stewenius proposed a new processing flow that emulates real flood-filling in BIB003 . The new linear-time MSER algorithm has several advantages over the standard algorithm, such as better cache locality, linear complexity, etc. An initial hardware design was proposed in BIB004 . (3) The Extended MSER (X-MSER) Algorithm: The standard MSER algorithm searches for extremal regions in the input intensity frame only. However, in 2015, the authors of proposed an extension to the depth (space) domain, noting the correlation between depth images and intensity images, and introduced the extended MSER detector, which was patented in . (4) The Parallel MSER Algorithm: One of the major drawbacks of the MSER algorithm is the need to run it twice on every frame to detect both dark and bright extremal regions. To circumvent this issue, the authors proposed a parallel MSER algorithm BIB005 . Parallel in this context refers to the capability of detecting both types of extremal regions in a single run. This algorithmic enhancement showed great advantages over the standard MSER algorithm, such as a considerable reduction in the execution time, required hardware resources and power, etc. This parallel MSER algorithm has a few US patents associated with it (e.g. ). (5) Other MSER derivatives: Other algorithms that were inspired by the MSER algorithm include the Extremal Regions of the Extremal Levels BIB006 [28] algorithm and the Tree-based Morse Regions (TBMR) . |
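The stability idea shared by MSER and the derivatives above can be illustrated with a deliberately naive sketch: threshold the image at every gray level, track the largest dark connected component, and call a level stable when the component size barely changes across neighboring levels. The real algorithms avoid this per-level recomputation (via union-find, or the flood ordering of the linear-time variant); all function names below are ours:

```python
import numpy as np
from collections import deque

def components(mask):
    # 4-connected components of a boolean mask via BFS flood fill.
    lab = -np.ones(mask.shape, int)
    cur = 0
    for y, x in zip(*np.nonzero(mask)):
        if lab[y, x] != -1:
            continue
        q = deque([(y, x)]); lab[y, x] = cur
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and lab[ny, nx] == -1:
                    lab[ny, nx] = cur; q.append((ny, nx))
        cur += 1
    return lab, cur

def stable_region_sizes(img, delta=2):
    # For each threshold t, size of the largest dark extremal region (pixels <= t).
    sizes = {}
    for t in range(256):
        mask = img <= t
        if mask.any():
            lab, n = components(mask)
            sizes[t] = max(np.sum(lab == i) for i in range(n))
        else:
            sizes[t] = 0
    # Stability score: relative size change across +/- delta levels (lower = stabler).
    stability = {t: (sizes[min(255, t + delta)] - sizes[max(0, t - delta)])
                 / max(sizes[t], 1) for t in range(256)}
    return sizes, stability
```

A second pass over inverted intensities would find the bright regions, which is precisely the duplicated work the parallel MSER derivative above eliminates by handling both polarities in a single run.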
Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> SIFT has been proven to be the most robust local invariant feature descriptor. SIFT is designed mainly for gray images. However, color provides valuable information in object description and matching tasks. Many objects can be misclassified if their color contents are ignored. This paper addresses this problem and proposes a novel colored local invariant feature descriptor. Instead of using the gray space to represent the input image, the proposed approach builds the SIFT descriptors in a color invariant space. The built Colored SIFT (CSIFT) is more robust than the conventional SIFT with respect to color and photometrical variations. The evaluation results support the potential of the proposed approach. 
<s> BIB002 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derivatives <s> We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data. <s> BIB003 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derivatives <s> This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters.
We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision. <s> BIB004 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derivatives <s> If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced.
While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see the illustration of high transition tilts in the original paper). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine. <s> BIB005 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derivatives <s> We present a new method to extract scale-invariant features from an image by using a Cosine Modulated Gaussian (CM-Gaussian) filter. Its balanced scale-space atom with minimal spread in scale and space leads to an outstanding scale-invariant feature detection quality, albeit at reduced planar rotational invariance. Both sharp and distributed features like corners and blobs are reliably detected, irrespective of various image artifacts and camera parameter variations, except for planar rotation. The CM-Gaussian filters are approximated with the sum of exponentials as a single, fixed-length filter and equal approximation error over all scales, providing constant-time, low-cost image filtering implementations. The approximation error of the corresponding digital signal processing is below the noise threshold. It is scalable with the filter order, providing many quality-complexity trade-off working points. We validate the efficiency of the proposed feature detection algorithm on image registration applications over a wide range of testbench conditions. <s> BIB006 | The SIFT algorithm has a local feature detector and a local histogram-based descriptor.
It detects sets of interest points in an image and, for each point, computes a histogram-based descriptor with 128 values. Since the SIFT algorithm was proposed by Lowe in 2004, a number of algorithms have tried to reduce the width of the SIFT descriptor in order to reduce the descriptor computation and matching time. Other algorithms use different window sizes and histogram computation patterns around each interest point, either to speed up the computation process or to increase the descriptor's robustness against different transformations. One can note that SIFT is richer in derivatives than the MSER algorithm; the reason is that little can be changed in MSER's simple processing flow, unlike the more complicated SIFT. A brief overview of the SIFT algorithmic derivatives is given below. (1) ASIFT: Yu and Morel proposed an affine version of the SIFT algorithm in BIB005 , termed ASIFT. This derivative simulates all image views obtainable by varying the latitude and longitude angles, and then uses the standard SIFT method itself. ASIFT has been proven to outperform SIFT and to be fully affine invariant BIB005 . However, its major drawback is a dramatic increase in the computational load. The code of ASIFT can be found in . (2) CSIFT: Another variation of the SIFT algorithm, for colored space, is CSIFT BIB002 . It basically modifies the SIFT descriptor (in a color-invariant space) and is found to be more robust under blur and affine changes, and less robust under illumination changes, than the standard SIFT. (3) n-SIFT: The n-SIFT algorithm is simply a straightforward extension of the standard SIFT algorithm to images (or data) with multiple dimensions BIB003 . The algorithm creates feature vectors through using hyperspherical coordinates for gradients and multidimensional histograms. The features extracted by n-SIFT can be matched efficiently in 3D and 4D images compared to the traditional SIFT algorithm.
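The 128-value layout mentioned above (a 4x4 grid of cells, each holding an 8-bin orientation histogram) can be sketched as follows. This is a deliberately simplified, hypothetical version that omits the scale-space detection, Gaussian weighting, rotation normalization, trilinear interpolation, and clipping of the real SIFT descriptor:

```python
import numpy as np

def sift_like_descriptor(patch):
    """Simplified SIFT-style descriptor for a 16x16 grayscale patch
    centred on an interest point: 4x4 cells x 8 orientation bins = 128."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # gradient orientation
    hist = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            b = int(ang[i, j] * 8 / (2 * np.pi)) % 8
            hist[i // 4, j // 4, b] += mag[i, j]
    v = hist.ravel()                             # the 128 descriptor values
    n = np.linalg.norm(v)
    return v / n if n else v
```

Most of the derivatives below keep this cell-and-bin structure and change what surrounds it: the color space, the dimensionality of the data, or the post-processing of the resulting vector.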
(4) PCA-SIFT: The PCA-SIFT BIB001 adopts a substitute feature vector derived using principal component analysis (PCA), based on normalized gradient patches instead of the weighted and smoothed HoG used in the standard SIFT. More importantly, it uses a window of 41x41 pixels to generate a vector of length 39x39x2 = 3042, but then reduces the dimensionality of the descriptor from 3042 to 20 by using PCA, which may be preferable in memory-limited devices. (5) SIFT-SIFER Retrofit: The major difference between SIFT and SIFT with Error Resilience (SIFER) BIB006 is that SIFER (with an improvement in accuracy at the cost of computational load) has better scale-space management, using a higher-granularity image pyramid representation, and better scale-tuned filtering, using a cosine modulated Gaussian (CMG) filter. This algorithm improved the accuracy and robustness of the feature by 20 percent for some criteria. However, the accuracy comes at the cost of an execution time about two times slower than the SIFT algorithm. (6) Other derivatives: Other SIFT derivatives include SURF BIB004 , the SIFT CS-LBP Retrofit, the RootSIFT Retrofit, and the CenSurE and STAR algorithms, which are summarized in .
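The PCA-SIFT reduction just described can be sketched as below. The "training" patches here are random stand-ins for real gradient patches collected around keypoints; the 41x41 window, the 39x39x2 = 3042 gradient vector, and the projection to roughly 20 dimensions follow the description in the text:

```python
import numpy as np

def grad_vector(patch):
    """41x41 patch -> normalized gradient vector of length 39*39*2 = 3042."""
    gy, gx = np.gradient(patch.astype(float))
    v = np.concatenate([gx[1:-1, 1:-1].ravel(), gy[1:-1, 1:-1].ravel()])
    return v / (np.linalg.norm(v) + 1e-12)

def fit_pca(patches, k=20):
    """Offline stage: learn the mean and top-k principal axes from training patches."""
    X = np.stack([grad_vector(p) for p in patches])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_sift(patch, mean, components):
    """Project a new patch onto the learned subspace: 3042 -> k values."""
    return components @ (grad_vector(patch) - mean)
```

The projection matrix is computed once offline, so the per-keypoint cost at run time is a single matrix-vector product, which is where the memory and matching-time savings come from.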
Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Adaptivity <s> An efficient and perfectly invertible signal transform featuring a constant-Q frequency resolution is presented. The proposed approach is based on the idea of the recently introduced nonstationary Gabor frames. Exploiting the properties of the operator corresponding to a family of analysis atoms, this approach overcomes the problems of the classical implementations of constant-Q transforms, in particular, computational intensity and lack of invertibility. Perfect reconstruction is guaranteed by using an easy to calculate dual system in the synthesis step and computation time is kept low by applying FFT-based processing. The proposed method is applied to real-life signals and evaluated in comparison to a related approach, recently introduced specifically for audio signals. <s> BIB001 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Adaptivity <s> Signal analysis with classical Gabor frames leads to a fixed time-frequency resolution over the whole time-frequency plane. To overcome the limitations imposed by this rigidity, we propose an extension of Gabor theory that leads to the construction of frames with time-frequency resolution changing over time or frequency. We describe the construction of the resulting nonstationary Gabor frames and give the explicit formula for the canonical dual frame for a particular case, the painless case. We show that wavelet transforms, constant-Q transforms and more general filter banks may be modeled in the framework of nonstationary Gabor frames.
Finally, we elaborate on two applications of nonstationary Gabor frames in audio signal processing, namely a method for automatic adaptation to transients and an algorithm for an invertible constant-Q transform. <s> BIB002 | In classical Gabor frames, as introduced in the previous section, we obtain all samples of the STFT by applying the same window ϕ, shifted along a regular set of sampling points, and taking FFTs of the same length. Exploiting the concept of frames, we can achieve adaptivity of the resolution in either time or frequency. To do so, we relax the regularity of the classical Gabor frames, which leads to nonstationary Gabor frames (NSGT): For (k, m) ∈ I M × I M , we set (i) ϕ k,m = M mb k ϕ k for adaptivity in time. (ii) ϕ k,m = T kam ϕ m for adaptivity in frequency. A detailed mathematical analysis of NSGTs is beyond the scope of this contribution, but we wish to emphasize that both analysis and synthesis can be done in a similar manner as in the regular case; that is, a diagonal frame operator can be achieved, and perfect reconstruction is guaranteed by using either dual or tight windows. For all details, see BIB001 BIB002 .
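In the painless case, both analysis and synthesis are only a few lines of code. The sketch below is an illustrative toy with arbitrary window lengths and layout, not the full implementation of BIB001 BIB002: it adapts the resolution in time, each window occupies a contiguous block with FFT length equal to the window length, and synthesis divides by the diagonal frame operator S(t) = Σ_k M_k |ϕ_k(t)|², so reconstruction is exact wherever S(t) > 0:

```python
import numpy as np

def nsgt_time(f, windows):
    """Analysis: one FFT per window; windows = [(start, g), ...], with each
    window g supported on f[start:start+len(g)] and FFT length == len(g)."""
    return [np.fft.fft(f[s:s + len(g)] * g) for s, g in windows]

def insgt_time(coeffs, windows, L):
    """Synthesis: accumulate re-windowed inverse FFTs, then divide by the
    diagonal frame operator S(t) = sum_k M_k g_k(t)^2."""
    num, S = np.zeros(L, dtype=complex), np.zeros(L)
    for (s, g), c in zip(windows, coeffs):
        M = len(g)
        num[s:s + M] += np.fft.ifft(c) * M * g
        S[s:s + M] += M * g ** 2
    return (num / S).real

def layout(L, t0, t1, wide=128, narrow=32):
    """Toy adaptive layout: narrow windows around a transient in [t0, t1),
    wide windows elsewhere, with 50% overlap.  Hamming windows are strictly
    positive, so every sample is covered and S(t) > 0 everywhere."""
    wins, s = [], 0
    while s < L:
        M = min(narrow if t0 <= s < t1 else wide, L - s)
        wins.append((s, np.hamming(M)))
        s += M // 2 if s + M < L else M
    return wins
```

Placing the narrow windows around a percussive onset sharpens the time resolution exactly where it is needed, while reconstruction stays numerically exact thanks to the division by the diagonal frame operator.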
Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Examples and interpretation of adaptive transforms <s> We examine in some detail Mel Frequency Cepstral Coefficients (MFCCs) the dominant features used for speech recognition and investigate their applicability to modeling music. In particular, we examine two of the main assumptions of the process of forming MFCCs: the use of the Mel frequency scale to model the spectra; and the use of the Discrete Cosine Transform (DCT) to decorrelate the Mel-spectral vectors. We examine the first assumption in the context of speech/music discrimination. Our results show that the use of the Mel scale for modeling music is at least not harmful for this problem, although further experimentation is needed to verify that this is the optimal scale in the general case. We investigate the second assumption by examining the basis vectors of the theoretically optimal transform to decorrelate music and speech spectral vectors. Our results demonstrate that the use of the DCT to decorrelate vectors is appropriate for both speech and music spectra. MFCCs for Music Analysis Of all the human generated sounds which influence our lives, speech and music are arguably the most prolific. Speech has received much focused attention and decades of research in this community have led to usable systems and convergence of the features used for speech analysis. In the music community however, although the field of synthesis is very mature, a dominant paradigm has yet to emerge to solve other problems such as music classification or transcription. Consequently, many representations for music have been proposed (e.g. (Martin1998), (Scheirer1997), (Blum1999)). In this paper, we examine some of the assumptions of Mel Frequency Cepstral Coefficients (MFCCs) the dominant features used for speech recognition and examine whether these assumptions are valid for modeling music. 
MFCCs have been used by other authors to model music and audio sounds (e.g. (Blum1999)). These works however use cepstral features merely because they have been so successful for speech recognition without examining the assumptions made in great detail. MFCCs (e.g. see (Rabiner1993)) are short-term spectral features. They are calculated as follows (the steps and assumptions made are explained in more detail in the full paper): 1. Divide signal into frames. 2. For each frame, obtain the amplitude spectrum. 3. Take the logarithm. 4. Convert to Mel (a perceptually-based) spectrum. 5. Take the discrete cosine transform (DCT). We seek to determine whether this process is suitable for creating features to model music. We examine only steps 4 and 5 since, as explained in the full paper, the other steps are less controversial. Step 4 calculates the log amplitude spectrum on the so-called Mel scale. This transformation emphasizes lower frequencies which are perceptually more meaningful for speech. It is possible however that the Mel scale may not be optimal for music as there may be more information in say higher frequencies. Step 5 takes the DCT of the Mel spectra. For speech, this approximates principal components analysis (PCA) which decorrelates the components of the feature vectors. We investigate whether this transform is valid for music spectra. Mel vs Linear Spectral Modeling To investigate the effect of using the Mel scale, we examine the performance of a simple speech/music discriminator. We use around 3 hours of labeled data from a broadcast news show, divided into 2 hours of training data and 40 minutes of testing data. We convert the data to ‘Mel’ and ‘Linear’ cepstral features and train mixture of Gaussian classifiers for each class. We then classify each segment in the test data using these models. This process is described in more detail in the full paper. 
We find that for this speech/music classification problem, the results are (statistically) significantly better if Mel-based cepstral features rather than linear-based cepstral features are used. However, whether this is simply because the Mel scale models speech better or because it also models music better is not clear. At worst, we can conclude that using the Mel cepstrum to model music in this speech/music discrimination problem is not harmful. Further tests are needed to verify that the Mel cepstrum is appropriate for modeling music in the general case. Using the DCT to Approximate Principal Components Analysis We additionally investigate the effectiveness of using the DCT to decorrelate Mel spectral features. The mathematically correct way to decorrelate components is to use PCA (or equivalently the KL transform). This transform uses the eigenvectors of the covariance matrix of the data to be modeled as basis vectors. By investigating how closely these vectors approximate cosine functions we can get a feel for how well the DCT approximates PCA. By inspecting the eigenvectors for the Mel log spectra for around 3 hours of speech and 4 hours of music we see that the DCT is an appropriate transform for decorrelating music (and speech) log spectra. Future Work Future work should focus on a more thorough examination of the parameters used to generate MFCC features such as the sampling rate of the signal, the frequency scaling (Mel or otherwise) and the number of bins to use when smoothing. Also worthy of investigation are the windowing size and frame rate. Suggested Readings Blum, T., Keislar, D., Wheaton, J. and Wold, E., 1999, Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information, U.S. Patent 5,918,223. Martin, K., 1998, Toward automatic sound source recognition: identifying musical instruments, Proceedings NATO Computational Hearing Advanced Study Institute. Rabiner, L.
and Juang, B., 1993, Fundamentals of Speech Recognition, Prentice-Hall. Scheirer, E. and Slaney, M., 1997, Construction and evaluation of a robust multifeature speech/music discriminator, Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing. <s> BIB001 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Examples and interpretation of adaptive transforms <s> An efficient and perfectly invertible signal transform featuring a constant-Q frequency resolution is presented. The proposed approach is based on the idea of the recently introduced nonstationary Gabor frames. Exploiting the properties of the operator corresponding to a family of analysis atoms, this approach overcomes the problems of the classical implementations of constant-Q transforms, in particular, computational intensity and lack of invertibility. Perfect reconstruction is guaranteed by using an easy to calculate dual system in the synthesis step and computation time is kept low by applying FFT-based processing. The proposed method is applied to real-life signals and evaluated in comparison to a related approach, recently introduced specifically for audio signals. <s> BIB002 | We now illustrate the influence of adaptivity on the visual representation of audio signals. First, an analysis of a short excerpt of G. Ligeti's piano concerto is given. This signal has percussive onsets in the piano and Glockenspiel voices and some orchestral background. Figure 1 first shows a regular Gabor (STFT) analysis and, second, a representation in which the percussive parts are finely resolved by an adaptive NSGT. Our second example is an excerpt from a duet between violin and piano by J. Zorn. We can see three short segments: a vivid sequence of violin and piano notes, followed by a calm violin melody with accompanying piano, and finally an inharmonic part with a chirp component.
For this signal, we show an FFT-based Gabor transform (STFT) and an NSGT-based constant-Q transform in Figure 2 . In both cases the display of the frequency axis is logarithmic. It is obvious that the NSGT, with adaptivity in the frequency domain, provides more accurate resolution of the harmonic components, in particular in low-frequency regions. Note that MFCCs, very popular features used in speech and music processing BIB001 , are obtained from an FFT-based STFT, using a logarithmic spacing of the frequency bins, while the analysis windows are linearly spaced. Given the new opportunities offered by adaptive NSGTs, it may well be worth reconsidering the underlying basic analysis. Returning to the quest for salient "sound objects" that stand out from their background, these examples show well that the analysis tool influences, even by visual inspection, what may be considered as such. In particular, in the Ligeti example, zooming in onto the percussive onsets makes these components more distinguishable from their background. On the other hand, the harmonic parts require fewer coefficients, since they are represented by longer windows. It should be noted that, for further processing, e.g. the extraction of percussive components, this kind of representation is beneficial. Even more impressively, in the low-frequency components of the second example, the single harmonics are not resolved at all in the FFT-based transform, while the NSGT clearly separates them from a soft noise-floor background. Again, apart from pure visual evaluation, frequency separation of single components is necessary for applications such as transposition, cp. BIB002 . More visual and audio examples of adaptivity in both time and frequency can be found on http://www.univie.ac.at/nonstatgab/.
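Since the MFCC pipeline is mentioned here, the five steps listed in BIB001 can be sketched as follows. The frame length, hop, FFT size, and filter counts below are common illustrative choices rather than prescribed values, and we use the usual mel-then-log ordering of the middle steps:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced linearly on the mel scale."""
    mel = lambda f: 2595 * np.log10(1 + f / 700)   # Hz -> mel
    hz = lambda m: 700 * (10 ** (m / 2595) - 1)    # mel -> Hz
    pts = hz(np.linspace(0, mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def dct2(x):
    """Unnormalized DCT-II along the last axis."""
    N = x.shape[-1]
    n, k = np.arange(N), np.arange(N)[:, None]
    return x @ np.cos(np.pi * k * (2 * n + 1) / (2 * N)).T

def mfcc(sig, sr=16000, frame=400, hop=160, n_fft=512, n_mel=26, n_cep=13):
    frames = np.stack([sig[i:i + frame] * np.hamming(frame)   # 1. framing
                       for i in range(0, len(sig) - frame + 1, hop)])
    spec = np.abs(np.fft.rfft(frames, n_fft))                 # 2. amplitude spectrum
    melspec = spec @ mel_filterbank(n_mel, n_fft, sr).T       # 3. mel warping
    logmel = np.log(melspec + 1e-10)                          # 4. log compression
    return dct2(logmel)[:, :n_cep]                            # 5. DCT -> cepstra
```

Every stage after the framing operates on a fixed-resolution FFT spectrum; an NSGT-based front end would replace step 2 while leaving the mel warping and decorrelation untouched.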
Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Discussion and Future Work <s> In this paper the potential of using nonstationary Gabor transform for beat tracking in music is examined. Nonstationary Gabor transforms are a generalization of the short-time Fourier transform, which allow flexibility in choosing the number of bins per octave, while retaining a perfect inverse transform. In this paper, it is evaluated if these properties can lead to an improved beat tracking in music signals, thus presenting an approach that introduces recent findings in mathematics to music information retrieval. For this, both nonstationary Gabor transforms and short-time Fourier transform are integrated into a simple beat tracking framework. Statistically significant improvements are observed on a large dataset, which motivates to integrate the nonstationary Gabor transform into state of the art approaches for beat tracking and tempo estimation. <s> BIB001 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Discussion and Future Work <s> Most methods to compute content-based similarity between audio samples are based on descriptors representing the spectral envelope or the texture of the audio signal only. This paper describes an approach based on (i) the extraction of spectro-temporal profiles from audio and (ii) non-linear alignment of the profiles to calculate a distance measure. <s> BIB002 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Discussion and Future Work <s> Sparsity in redundant dictionaries has become a forceful paradigm in signal processing over the last two decades. Its basic idea is to represent a signal with as few coefficients as possible using overcomplete sets of expansion functions which are ideally well adapted to the signal class. 
In audio processing, different collections of windowed Fourier or cosine bases have proven to serve as well adapted dictionaries for most audio signals of relevance for humans, in particular speech and music. Furthermore, they are easy to interpret and reflect physical reality as they expand a signal with respect to the dimensions of time and frequency. <s> BIB003 | In this contribution we showed how the choice of representations that exploit prior knowledge about a signal (class) of interest can influence the resulting analysis, even by visual inspection. It will, and should, be the topic of further, and necessarily interdisciplinary, research to scrutinize the influence of these choices on the performance of higher-level processing steps. Some preliminary steps in this direction have been pursued within the research project Audio Miner, cf. http://www.ofai.at/research/impml/projects/audiominer.html and BIB001 BIB003 BIB002 , and have shown promising results. We strongly believe that using appropriate, yet concise, representations of the original data is important to avoid biased results in higher-level processing steps.
Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d ≡ 1 (mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor. It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties.
This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. We thus give the first examples of quantum cryptanalysis. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> An inexpensive faucet aerator is provided by three molded parts and a thin metal cup that holds two of the molded parts in assembled relationship. The first molded part is an elongated annulus provided with upstream and downstream recesses separated by an inner ring that helps to break up the liquid flow and serves as an abutment to support a second molded, jet-forming, part in the upstream recess and is arranged to be engaged by the third, molded, part located in the downstream recess and aiding in defining air intake means to the aerator. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> From the foundations of quantum theory to quantum technology - G. Alber quantum information theory, an invitation - R. Werner quantum communication - H. Weinfurter and A. Zeilinger quantum algorithms, applicable algebra and quantum physics - T.H. Beth and M. Rotteler mixed-state entanglement and quantum communication - M. Rotteler and R. Horodecki. <s> BIB004 | Quantum theory in its modern form dates back to the year 1926. Within the past eight decades, innumerable applications of this theory have been discovered, which have had a deep impact on all aspects of technology, and even on human life in general. Although this is a fairly long time, the potential of quantum theory for innovative applications apparently still remains inexhaustible.
During the past two decades, several completely new applications of quantum physics at the edge between computer science and the new area of quantum information theory BIB003 BIB004 have been discovered. These are based on the observation that certain genuine quantum properties of one or a few quantum particles open the way to technologies not amenable to classical physics. Quantum cryptography is the catchword characterizing one group of these applications. The one-time pad of cryptography requires the distribution of long keys consisting of a sequence of random bits. This protocol has been proven to be unconditionally secure, provided that the key can be transmitted securely. Quantum key distribution can guarantee that the presence of an eavesdropper will be detected with certainty, at least in principle. Quantum cryptography is now available as a commercial product. Certainly most spectacular has been the discovery by Peter Shor BIB002 that quantum systems can speed up the computational task of factorizing large integers into primes by many orders of magnitude. Building systems of this kind (which have been dubbed 'quantum computers') would make standard cryptographic protocols such as RSA BIB001 and ElGamal insecure, because these rely on the fact that no classical polynomial-time factoring algorithm is known. The activities of programming and designing algorithms require some sort of notation and a programming model. This applies to both classical and quantum computers. In particular, a notation adapted to the specific properties and peculiarities of programming quantum systems is called a 'quantum programming language' (QPL). Therefore, for several years now, the question of whether conventional programming models and languages are sufficient, or whether they should be replaced with new models and languages, has been discussed.
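The arithmetic behind the RSA scheme mentioned above fits in a few lines. The primes, exponent, and message below are deliberately tiny toy values chosen for illustration; the point is that recovering p and q from the public modulus n, which Shor's algorithm does in polynomial time, immediately yields the secret exponent d:

```python
# Toy RSA round trip -- insecure key sizes, for illustration only.
p, q = 61, 53                # the two secret primes
n = p * q                    # public modulus; factoring n breaks the scheme
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # secret exponent: e * d = 1 (mod phi)

m = 1234                     # message encoded as a number < n
c = pow(m, e, n)             # encrypt with the public key (e, n)
assert pow(c, d, n) == m     # decrypt with the secret key (d, n)
```

With realistic key sizes (n of 2048 bits or more), no known classical algorithm recovers p and q in reasonable time, which is exactly the assumption a large quantum computer would invalidate.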
It might be argued that this discussion is premature (it has, in fact, jestingly been called 'putting the cart before the horse' ), because sufficiently sized quantum computers which could outperform modern classical PCs in factorizing large integers do not exist and will not exist in the foreseeable future; some more pessimistic people even argue that they will never exist. Nevertheless, there are at least two good reasons to discuss the issue now. First, quantum computers can be simulated on classical computers, although in general not efficiently, of course. So, at least for small numbers of 'qubits', quantum algorithms can be run on a classical computer. Second, there do exist applications which could be realized on smaller-sized quantum computers, such as the simulation of complex systems [1, p. 204] . Some workers in the field argue that applications of this type might be realizable within a couple of years.
THE COMPUTER JOURNAL, VOL. 50, 2007
This article surveys discussions and current contributions to the young research area of QPLs, which might support the development of quantum algorithms and quantum programs. The rest of the article is organized as follows. Section 2 summarizes some terminology of quantum theory and explains some basic ideas behind the formalism. There is an ongoing debate on the interpretation of quantum theory. Although this is beyond the scope of the present article, we give some comments in Section 3 because questions of interpretation are touched upon in some of the publications on QPLs. General design aspects are discussed in Section 4. Section 5 surveys in detail some of the approaches, such as the use of pseudocode, a procedural approach, and an approach based on a conventional programming language. The section also discusses some more recent theoretical works related to lambda calculus, functional programming and linear logic. And, finally, Section 6 concludes the article with a summary.
The intended audience for this article is computer scientists who are interested in getting a general idea of present attempts to define programming languages for quantum computers. With the exception of Section 2, most parts of the article are kept non-technical; in particular, in Section 5.6, no formalized introduction into categorical terminology, linear logic or formal semantics is given. However, references have been provided for those readers who want to see more details of the issues treated in this article. The present article is a largely extended and updated version of a seminar report on QPLs; see also Refs. [9-11].
Quantum Programming Languages: An Introductory Overview
QUANTUM THEORY AND QUANTUM COMPUTATION

Quantum theory is the theory of physical processes at an atomic and subatomic scale. It is a state theory, which means that the basic notions are the state of a system, the evolution of a system's state in time, observables, and measurement, the process of measuring observables in a given system state. There are many up-to-date textbooks BIB005 BIB006 and tutorials [14-16] on quantum computation, including a collection of on-line articles on different levels of abstraction. Therefore, we restrict ourselves to a brief summary of terminology and notation, but discuss some points and problems of the physical background. The standard formalism underlying quantum theory defines a general framework which leaves room for empirical choices such as the system's number of degrees of freedom and the 'law of force' (technically: the Hamiltonian).
Moreover, quantum theory is a statistical theory: observational results are probabilistic, including the limiting cases of probability 0 or 1. Formally, the arena of quantum theory is a Hilbert space H, a complex vector space with an inner product which is complete with respect to this product. The traditional notation, due to Dirac, for elements of this vector space is |ψ⟩, where ψ is some label. This notation, which is quite popular in the physics community, has many advantages for practical calculations, a few disadvantages and, occasionally, some potential ambiguities. Readers who prefer an alternative presentation may consult Ref. [18, pp. 531-541], where vectors and matrices are written in block form. However, Mermin's tutorial, which is specifically aimed at readers with no prior familiarity with quantum mechanics, uses Dirac's notation even for classical bits. Usually, in the context of quantum computation, the state space is a finite collection of two-dimensional Hilbert spaces. So, the dimension of the Hilbert space is finite and, to a large extent, elementary linear algebra is all that is needed at this level of abstraction. The theorem of Riesz states that for every vector |φ⟩ ∈ H there exists exactly one continuous linear functional on H, denoted by ⟨φ|, such that the inner product ⟨φ|ψ⟩ may be regarded as an application of ⟨φ| to the vector |ψ⟩. The linear functionals on H also form a Hilbert space, the dual space H*. In component language, the operations |ψ⟩ ∈ H ↦ ⟨ψ| ∈ H* and vice versa (dual correspondence) are also known as 'lowering' and 'raising' of indices. The states of a system, more precisely mixed states, are positive linear operators ρ on H with tr ρ = 1 (tr = trace, the sum of the diagonal elements of the matrix ρ_ij representing the operator with respect to some basis). In a closed system, the time evolution of a state ρ is given by a unitary operator U according to ρ' = U ρ U*.
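As a concrete illustration (a plain NumPy sketch of our own, not code from the article), the Dirac bookkeeping above translates directly into linear algebra: a ket is a column vector, the corresponding bra its conjugate transpose, a pure density matrix is the dyad ρ = |ψ⟩⟨ψ| with trace 1, and conjugation by a unitary preserves that trace:

```python
import numpy as np

# A ket |psi> as a column vector in C^2; the bra <psi| is its conjugate transpose.
ket = np.array([[1], [1j]]) / np.sqrt(2)
bra = ket.conj().T

# The inner product <psi|psi> is 1 for a normalized state.
norm = (bra @ ket).item().real

# The corresponding pure density matrix rho = |psi><psi| has trace 1.
rho = ket @ bra
trace = np.trace(rho).real

# Unitary evolution rho' = U rho U* preserves the trace.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho_evolved = U @ rho @ U.conj().T
trace_after = np.trace(rho_evolved).real

print(round(norm, 10), round(trace, 10), round(trace_after, 10))  # 1.0 1.0 1.0
```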
Here, a norm-preserving invertible operator is called unitary. Particularly important is the operation of building larger state spaces from smaller ones. Two quantum systems A and B with Hilbert spaces H_A and H_B, respectively, can be joined into one system A&B. In Hilbert space terminology, the resulting bipartite system is represented by the tensor product H_{A&B} = H_A ⊗ H_B, so the dimension of the Hilbert space of the composite system A&B is given by dim H_{A&B} = dim H_A · dim H_B. Traditional textbooks usually identify system states with vectors. In fact, there are special states, called pure states, which informally could be paraphrased as 'states with as little randomness as possible'. Formally, a pure state ρ can be characterized by tr(ρ²) = 1 and may canonically be represented in the form of a dyad ρ = |ψ⟩⟨ψ|. The main drawback of exclusively using this notion of system state lies in the fact that a composite quantum system can be in a pure state whereas its subsystems can, at the same time, be in mixed states, which means that only partial information is available on the subsystems. Therefore, the notion of mixed states introduces a unifying view. A system with the property that maximal information is available on the system as a whole but no information at all is available on the subsystems is called (maximally) entangled. According to Schrödinger, this is the fundamental property setting quantum physics apart from classical physics. For many decades, entanglement has been considered a strange and bizarre feature of quantum physics; one of the fundamental new insights of modern quantum information theory is the observation that entanglement serves as a resource for potential applications such as fast factorization of integers into primes. The now generally accepted formal definition of entanglement is due to Werner BIB002; the underlying correlations go back to the famous paper by Einstein, Podolsky and Rosen BIB001.
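The pure-composite/mixed-subsystem phenomenon described above can be verified numerically. The following NumPy sketch (our own illustration, not code from the article) builds the Bell state (|00⟩ + |11⟩)/√2, checks that the composite state is pure, and traces out one subsystem to obtain the maximally mixed state:

```python
import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2) on H_A (x) H_B = C^2 (x) C^2.
ket00 = np.kron([1, 0], [1, 0])
ket11 = np.kron([0, 1], [0, 1])
phi_plus = (ket00 + ket11) / np.sqrt(2)
rho_AB = np.outer(phi_plus, phi_plus.conj())

# The composite state is pure: tr(rho^2) = 1.
purity_AB = np.trace(rho_AB @ rho_AB).real

# Partial trace over B gives the reduced state of subsystem A.
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# The subsystem is maximally mixed: rho_A = I/2, hence tr(rho_A^2) = 1/2.
purity_A = np.trace(rho_A @ rho_A).real

print(round(purity_AB, 10), round(purity_A, 10))  # 1.0 0.5
```

A purity of 1/2 is the minimum possible for a single qubit, which is exactly the "maximal information on the whole, none on the parts" situation described in the text.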
In many situations, observables can adequately be represented by self-adjoint operators whose eigenvalues are the potential measuring values. A measurement yields probabilistically one of these values and, additionally, projects the system state onto the eigenspace of the measured value. In the context of quantum information theory, some of these traditional postulates have turned out to be oversimplified. Generalizations using the notions of quantum operation or quantum channel require a more advanced formalism, which will not be treated here in detail. In-depth introductions to this formalism can be found in most advanced texts, see, for example, Refs. BIB005 BIB006 BIB008. There are three basic steps in a quantum process: system preparation, system transformation, i.e. unitary time evolution of a closed system, and measurement. A basic task for a physicist who faces the problem of modelling a concrete quantum system is to find a suitable Hilbert space H, representing the number of degrees of freedom, and the unitary operators U (or the Hamiltonian) representing the system's time evolution. Presently, in quantum computation, the most popular model is the qubit or gate model, which may, in the context of this paper, also serve as an example of the general formalism sketched earlier. In this model, a quantum network is a composite system consisting of n qubits. A one-qubit system is a two-level system, for example, a spin-1/2 particle such as an electron, or a photon with two polarization states (right/left or vertical/horizontal polarization). The Hilbert space modelling these systems is H_2 = C², and the Hilbert space of a composite system of n qubits is H_n = H_2^⊗n. So, in particular, adding one qubit to a system doubles its dimension. A three-qubit state, for instance, can be written as |ψ⟩ = a_0|0⟩ + a_1|1⟩ + ... + a_7|7⟩ with the obvious re-interpretation of bit sequences as integers.
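A small NumPy sketch (our illustration) of the dimension count: joining qubits by tensor products doubles the dimension per qubit, and a three-qubit state is simply a vector of eight amplitudes indexed by the integers 0-7:

```python
import numpy as np

# Single-qubit states joined by tensor products: the dimension doubles per qubit.
q = np.array([1, 0])                # |0> in H_2 = C^2
state1 = q                          # 1 qubit  -> dimension 2
state2 = np.kron(state1, q)         # 2 qubits -> dimension 4
state3 = np.kron(state2, q)         # 3 qubits -> dimension 8

# A general 3-qubit state |psi> = a_0|0> + ... + a_7|7> is a vector in C^8;
# the basis label |k> is the integer read off the bit string of the qubits.
amplitudes = np.zeros(8, dtype=complex)
amplitudes[5] = 1.0                 # the basis state |101> = |5>

print(len(state1), len(state2), len(state3),
      int(np.argmax(np.abs(amplitudes))))  # 2 4 8 5
```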
Applying a unitary operation to |ψ⟩ means to proceed one step in time or, to put it differently, to process all of the numbers 0-7 in one step. This capability of quantum systems of processing many integer values simultaneously has therefore been called 'quantum parallelism'. In an n-qubit system, an operator U is represented by a 2^n × 2^n matrix, which obviously gets extremely large even for modest values of n. So, an important question is how this matrix can be broken down into smaller parts. A number of theorems exist which give (partial) answers to this question BIB005: single-qubit and CNOT gates (discussed subsequently) can be used to implement an arbitrary unitary operation on n qubits. These gates are universal, but 'no straightforward method is known to implement these in a fashion which is resistant to errors' [1, p. 194]. However, there exist discrete sets of gates which can be used to perform universal quantum computation using quantum error-correcting codes; arbitrary unitary operations can be approximated by discrete sets of gates. One such set of gates is: Hadamard gate, phase gate, CNOT gate (controlled NOT, XOR) and T-gate. Figure 1 shows the graphical representations of these gates, their matrix form and their operation on states. More recent work on breaking up large unitaries into more elementary constituents can be found in Ref. BIB007 and references therein. In CLRS-style pseudocode notation BIB003, a quantum computation in its most basic form can be written as follows:

    repeat
        prepare the initial state
        apply a unitary transformation U
        measure
    until the desired level of statistical confidence has been reached

The traditional gate model relies on the assumption that at any given time the system is in a pure state. There are many situations, however, which cannot be described adequately, if at all, within this setting. A generalization of the gate model with mixed states has been given by Aharonov et al. BIB004.
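Before turning to the mixed-state generalization, the discrete gate set and the basic prepare-transform-measure loop above can be sketched in NumPy (an illustration of ours, not code from the article); the example prepares a Bell state with H and CNOT and samples measurement outcomes until statistics accumulate:

```python
import numpy as np

# The discrete gate set named in the text: Hadamard, phase, CNOT and T.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])                      # phase gate
T = np.diag([1, np.exp(1j * np.pi / 4)])  # T gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# All four gates are unitary: U U* = I.
for U in (H, S, T, CNOT):
    assert np.allclose(U @ U.conj().T, np.eye(len(U)))

# The basic computation loop: prepare, apply a unitary, measure, and
# repeat until enough statistics have been gathered.
rng = np.random.default_rng(0)
counts = {}
for _ in range(1000):
    psi = np.zeros(4); psi[0] = 1             # prepare |00>
    psi = CNOT @ np.kron(H, np.eye(2)) @ psi  # U = CNOT (H (x) I)
    probs = np.abs(psi) ** 2
    outcome = rng.choice(4, p=probs)          # projective measurement
    counts[outcome] = counts.get(outcome, 0) + 1

print(sorted(int(k) for k in counts))         # [0, 3]: only |00> and |11> occur
```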
R. RÜDIGER

In the article by Aharonov et al., a quantum circuit is defined as a directed acyclic graph, where each node represents one gate. The gate itself is represented by a so-called superoperator, a trace-preserving (in general, trace non-increasing), completely positive linear map from mixed states on k qubits to mixed states on l qubits, where k ≠ l in general. Situations which can thus be treated adequately include measurements in the middle of a computation, decoherence and noise, and the so-called subroutine problem. The notion of a superoperator (alternative or closely related notions are quantum operations and channels) is sufficiently general to deal with unitary and non-unitary evolution, such as measurement or quantum noise, in a unified formal framework. The physical idea in the background of this formalism is the question how quantum operations in an open system can be described intrinsically, i.e. without reference to the environment. The article by Aharonov et al. gives a readable account and motivation of this terminology; see also the introductory texts on quantum information theory cited at the beginning of this section.
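A minimal sketch of ours (assuming nothing beyond the standard Kraus-operator form of superoperators) of a trace-preserving, completely positive map that no unitary conjugation could reproduce: a computational-basis measurement, written as ρ ↦ Σ_k E_k ρ E_k†, sends the pure state |+⟩⟨+| to the maximally mixed state:

```python
import numpy as np

# A superoperator in Kraus form: rho -> sum_k E_k rho E_k*, trace preserving
# when sum_k E_k* E_k = I. Measurement in the computational basis is one example.
E0 = np.array([[1, 0], [0, 0]])   # projector onto |0>
E1 = np.array([[0, 0], [0, 1]])   # projector onto |1>
assert np.allclose(E0.conj().T @ E0 + E1.conj().T @ E1, np.eye(2))

def apply_channel(kraus, rho):
    """Apply the superoperator given by a list of Kraus operators."""
    return sum(E @ rho @ E.conj().T for E in kraus)

# Applied to the pure state |+><+| it yields the maximally mixed state I/2:
# a non-unitary evolution, since no U rho U* can turn a pure state mixed.
plus = np.array([[1], [1]]) / np.sqrt(2)
rho = plus @ plus.conj().T
rho_out = apply_channel([E0, E1], rho)

print(np.allclose(rho_out, np.eye(2) / 2), round(np.trace(rho_out).real, 10))
```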
In view of the loss of direct intuition compared with classical physics, the question is legitimate whether quantum theory is the definitive theory of microphysics or whether there might be a more subtle theory predicting more details than quantum theory does, a theory which is, in some way, 'closer to reality'. Of course, computer science can hardly solve this problem, and no one (at any rate, no physicist) expects it to. But computer science offers some terminology which is flexible enough to shed some light on the subject from a different perspective.
From a computer science perspective, quantum theory is a kind of automata theory: a system has to be initialized ('prepared'), the system dynamics is described by a sequence of states, and the final result will be output ('measured'). The state space is a kind of abstraction, and it can reasonably be asked in which way the automaton has been realized or implemented. In computer science, implementation is commonly seen as a kind of mapping to a real standard system, for example, a standard hardware, operating system or programming language. In physics, the situation is similar insofar as the Hilbert space is a highly abstracted way of describing experiments. (Peres, as quoted in Ref. [1, p. 112]: '. . . quantum phenomena do not occur in a Hilbert space, they occur in a laboratory.') It is, however, not clear whether the question 'Is it possible in the context of theoretical physics to talk about reality itself?' really makes sense. Physics always describes natural phenomena, although in classical physics, notably classical mechanics, the gap between formal description and sensual perception seems to be small. Amazingly, computer science offers some more subtle terminology which could help in clarifying the relation between abstract description and implementation. In the context of the specification language LOTOS, Bolognesi and Brinksma [27, p. 39] write in their tutorial: 'In LOTOS the words specification and implementation have a relative meaning, not an absolute one. Given two (syntactically homogeneous) LOTOS specifications S1 and S2, we will say that S2 is an implementation of the specification S1 when, informally, S2 gives a more structured and detailed description of the system specified in S1.' This definition can successfully be applied to physics. In the history of physics, there are many examples of successful refinements. For example, statistical mechanics can be seen as a proper refinement of thermodynamics.
All of the results of the latter are reproduced by statistical mechanics and, additionally, there are phenomena such as fluctuations which can be explained by statistical mechanics alone. The analogy between quantum mechanics and thermodynamics elucidates Einstein's position towards quantum mechanics; see the Einstein-Born letters [28, letter dated 50/09/15]. A detailed appreciation of Einstein's historical role in the development of quantum mechanics from the perspective of modern quantum information theory has been given by Werner. So, instead of looking for a 'realization' of physical phenomena which obviously are successfully described by the Hilbert space formalism, one should ask whether refinements of the theory exist which could explain the theory in much the same way as statistical mechanics explains thermodynamics. The so-called local hidden variable theories were one such attempt to explain the statistical nature of quantum phenomena in much the same way as the stochastic behaviour of classical probabilistic systems can be explained: throwing dice in the usual manner is influenced by innumerable parameters which cannot be controlled fully. Embodying this idea into a theory of quantum processes led to predictions which were substantially different from those of conventional quantum physics. In fact, the celebrated Bell inequalities state that these theories set stronger bounds on a certain parameter, the Bell correlation, than quantum theory does. The key feature setting physics apart from mathematics or computer science is the existence of a 'supreme referee': the experiment. And, in fact, experiments BIB001 BIB002 say that Bell's inequalities can be violated by quantum systems, thus ruling out the theories with local hidden parameters. Therefore, to summarize, it is an open question whether a proper refinement of quantum theory exists.
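The gap between the classical bound and the quantum prediction can be computed directly. The following NumPy sketch (our illustration, using the standard CHSH form of Bell's inequality) evaluates the combination of Bell correlations for the state (|00⟩ + |11⟩)/√2 at the usual optimal angles, reaching 2√2 and thus violating the local-hidden-variable bound of 2:

```python
import numpy as np

# CHSH form of Bell's inequality: local hidden variable theories bound
# S = E(a,b) + E(a,b') + E(a',b) - E(a',b') by 2; quantum theory reaches 2*sqrt(2).
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def obs(theta):
    """Spin observable at angle theta in the X-Z plane (eigenvalues +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def E(a, b):
    """Correlation <phi+| A(a) (x) B(b) |phi+>; equals cos(a - b) here."""
    return (phi_plus @ np.kron(obs(a), obs(b)) @ phi_plus).real

a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

print(round(S, 6), round(2 * np.sqrt(2), 6))     # 2.828427 2.828427: S > 2
```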
In this sense, one might say that there does not yet exist an entirely satisfactory explanation of how quantum phenomena are 'realized' in nature. This should not, however, obscure the overwhelming success of quantum theory in its present form: theoretical predictions agree perfectly with experimental results, and no contradictions between observational experience and the mathematical framework of the theory [1, p. 2] are known. This somewhat lengthy discussion should point out that re-formulating and possibly refining quantum theory appears to be a risky matter. Whether a discussion on QPLs can contribute anything to these issues is certainly an open question. Nevertheless, from the perspective of the foundations of quantum theory, the attempts at re-formulating the theory (Section 5.6) are certainly the most exciting aspect of this research.
QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement -which is essential for quantum parallelism. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We present the quantum programming language cQPL which is an extended version of QPL [Sel04b]. It is capable of quantum communication and it can be used to formulate all possible quantum algorithms. Additionally, it possesses a denotational semantics based on a partial order of superoperators and uses fixed points on a generalised Hilbert space to formalise (in addition to all standard features expected from a quantum programming language) the exchange of classical and quantum data between an arbitrary number of participants. Additionally, we present the implementation of a cQPL compiler which generates code for a quantum simulator. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We discuss the role of classical control in the context of reversible quantum cellular automata. Employing the structure theorem for quantum cellular automata, we give a general construction scheme to turn an arbitrary cellular automaton with external classical control into an autonomous one, thereby proving the computational equivalence of these two models. We use this technique to construct a universally programmable cellular automaton on a one-dimensional lattice with single cell dimension 12. <s> BIB008 | In computer science, language design is a highly controversial matter. On the one hand, a vast world of (classical) programming languages exists and, on the other hand, even the relevance of the subject itself is a matter of debate. 
Some people consider programming languages a marginal issue, just a means of getting a computer to do some useful work, whereas scientists involved in language and system design consider this issue central to the whole field of computer science. Just to cite one of the pioneers of language and system design, Wirth [33, p. 10]: '... I hope, I have clearly expressed my opinion that programming, programming style, programming discipline, and therewith programming languages are still not merely one of many issues in computer science, but pillars.' All of these controversial matters also have to be discussed in the context of QPLs and, of course, many more which are specifically related to quantum physics. In this section, QPL design will be considered from an informal perspective. Here, some general goals which should be achieved will be put into the foreground. The following is a non-exhaustive, subjectively commented list of some aspects which will play a certain role in designing QPLs. Some of these desiderata will certainly be controversial or might be considered marginal, and some might even turn out not to be realizable. In their article on Q language, Bettelli et al. BIB002 list several desiderata for a QPL. According to these authors, a QPL should fulfil the following requirements: completeness: it must be possible to code every quantum algorithm or, more generally, every quantum program; classical extension: the quantum language must include a high level 'classical computing paradigm'; separability: classical and quantum programming must be kept separated; expressivity: the language must provide high-level constructs; hardware independence: the language should not rely on details of the quantum hardware. There may be some other and more specific desiderata.
A QPL should, or should possibly, (i) run on top of a simulator as well as on a real system, (ii) help in discovering new efficient quantum algorithms, (iii) enable a layperson to write quantum programs, (iv) comply with the concept of abstract data types (ADTs), (v) provide high-level language constructs, (vi) support quantum data and quantum control, (vii) support programming in the large and programming communication processes, (viii) be as close as possible to classical language concepts for pragmatic reasons and (ix) support quantum processes completely, including measurement. In the sequel, we give some comments on this list. It should be possible to couple the language, more precisely, the run-time system, to a simulator and potentially replace the simulator with a real quantum computer without changing any part of the program. As stated earlier, quantum computers can (non-efficiently) be simulated by classical computers simply by integrating the basic equations for the time evolution of quantum systems. But here, a caveat should be added: this statement tacitly assumes that the quantum system is not composed of parts which are spatially separated. Simulating a spatially separated quantum system by a classical system, which is also spatially separated, requires additional resources (classical communication) and introduces additional timing constraints which would have no counterpart in reality. In contrast to the general opinion that programming languages are merely a means of getting a computer to do some useful work, language designers emphasize that programming languages also serve as a means of communication between humans. Therefore, QPLs should enable programmers to reason about the structures of quantum algorithms and programs. Ideally, so many people argue, a well-designed QPL should aid in discovering new quantum algorithms.
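What coupling a run-time system to a simulator minimally requires can be sketched in a few lines of ordinary Python. The function name and state-vector conventions below are our own illustration, not part of any particular QPL: a simulator must at least be able to apply an elementary gate to one qubit of a register.

```python
# Minimal state-vector simulation of one elementary step: applying a
# 2x2 unitary `gate` to qubit `target` of an n-qubit register.  The
# state is a list of 2**n complex amplitudes; qubit 0 is taken to be
# the most significant bit of the basis-state index.

def apply_single_qubit_gate(state, gate, target, n):
    new_state = [0j] * len(state)
    shift = n - 1 - target                  # bit position of the target qubit
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        bit = (i >> shift) & 1
        partner = i ^ (1 << shift)          # same basis state, target bit flipped
        new_state[i] += gate[bit][bit] * amp
        new_state[partner] += gate[1 - bit][bit] * amp
    return new_state

H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -(2 ** -0.5)]]

# Hadamards on both qubits of |00> give the uniform superposition:
state = [1 + 0j, 0j, 0j, 0j]
state = apply_single_qubit_gate(state, H, 0, 2)
state = apply_single_qubit_gate(state, H, 1, 2)
```

A real quantum back-end could replace this routine behind the same interface, which is exactly the substitution requirement stated above.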
However, a comparison with the situation of classical programming languages suggests that the idea of languages being helpful in this context should be regarded sceptically. An undergraduate student having finished a programming course in Pascal will presumably not be able to re-invent Quicksort, for example. In fact, up to now, none of the approaches seems to have led to a discovery of new quantum algorithms. One of the fundamental goals of classical programming languages is to enable a layperson to write complex programs without a detailed knowledge of the architecture of the hardware or the operating system. In the context of quantum programs, this means that a computer scientist could program quantum computers without a detailed knowledge of the underlying physics. This could possibly be achieved by encapsulating typical quantum operations in a library: this is the idea of ADTs. There remain, however, at least two problems with this approach. First, non-experts will not have an intuitive understanding of elementary quantum operations. So, they will hardly be able to combine segments of quantum programs in a reasonable manner without some kind of formalized support. Second, if an algorithm is encapsulated as a whole such that its effect (not its efficiency, of course) can be understood classically, some information on the probabilities involved must be provided. From this perspective, quantum algorithms look like extremely fast classical probabilistic algorithms. It is, however, very unlikely that within this setting, new quantum algorithms will be discovered. Knill BIB001 proposes a pseudocode notation for quantum programs and the model of a quantum random access machine (QRAM) in which the quantum system is being controlled by a classical computer; this model has been influential in the design of several QPLs, see Section 5.
For example, in Selinger's language QPL/QFC (quantum flow charts) BIB003 BIB004 this idea has been put into the slogan 'classical control, quantum data'. One could as well imagine a situation in which both data and control are quantum mechanical. In the proposed language QML BIB006 , this slogan has been modified to 'quantum data and quantum control'. This idea, which permits superposed instructions as well as superposed data, has already been put forward in the context of quantum cellular automata, see Refs. BIB008 BIB005 and references therein. QPLs should also support programming in the large, i.e. they should support some kind of modularization. This is a rather non-trivial point because when composing two quantum systems into one single system, the existence of nonclassical correlations has to be taken into consideration. Obviously, a classical modularization scheme will not work, because in this setting, global memory will be additive instead of being multiplicative. QPLs should also be able to express quantum communication protocols. In recent work by Mauerer BIB007 , the language cQPL, a variant of Selinger's language QPL, has been formulated, which extends QPL with communication capabilities. When designing a QPL, it is certainly a good idea to preserve as many classical language features as possible. Consequently, many languages introduce a quantum-if by means of the unitary two-qubit operation CNOT. Although there is nothing wrong with this, it might possibly suggest too close an analogy with classical languages. The point is that the roles of the target and the control bits are exchanged if the computational basis is replaced by the Bell basis, which consists of suitable linear superpositions of the basis vectors |00⟩, |01⟩, |10⟩ and |11⟩. Moreover, if the target qubit is put into an equally weighted superposition by applying a Hadamard operation, then the resulting two-qubit state is a maximally entangled state, i.e.
the state of both qubits is completely undefined. So, in these situations, the analogy to a classical 'If' is lost completely. This is again an example of the counterintuitiveness of quantum mechanics, see, for example, Ref. [1, p. 179] for more details. The area of QPLs is rapidly evolving and some of the approaches are certainly preliminary steps. In the final form of a QPL, the measurement process must certainly be incorporated because it is an integral constituent of quantum theory. It would, in fact, be very easy to compute efficiently the values of a function f : Z → Z with a quantum computer. The crux of the matter is that measuring one value irreversibly destroys the information on all the other values. Therefore, the extraction of information on a function is non-trivial. What can, in fact, be extracted is the information on properties of the function as a whole such as the period of a periodic function. This is one of the key ingredients of Shor's algorithm.
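The breakdown of the classical 'if' analogy is easy to check numerically. The sketch below is our own illustration (states are plain lists of four amplitudes over |00⟩, |01⟩, |10⟩, |11⟩); it verifies a closely related basis-change fact — conjugating CNOT by Hadamards on both qubits exchanges control and target — and constructs the maximally entangled Bell state discussed above.

```python
# Two-qubit states as amplitude lists over |00>, |01>, |10>, |11>.
h = 2 ** -0.5

def cnot(s):
    """CNOT, qubit 0 controlling qubit 1: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

def cnot_reversed(s):
    """CNOT with the roles exchanged: qubit 1 controls qubit 0."""
    return [s[0], s[3], s[2], s[1]]

def hh(s):
    """Hadamard applied to both qubits (the matrix H (x) H)."""
    return [0.5 * (s[0] + s[1] + s[2] + s[3]),
            0.5 * (s[0] - s[1] + s[2] - s[3]),
            0.5 * (s[0] + s[1] - s[2] - s[3]),
            0.5 * (s[0] - s[1] - s[2] + s[3])]

# Changing the basis with Hadamards exchanges control and target:
for basis in range(4):
    s = [0.0] * 4
    s[basis] = 1.0
    assert all(abs(a - b) < 1e-9
               for a, b in zip(hh(cnot(hh(s))), cnot_reversed(s)))

# A Hadamard on the control followed by CNOT yields the entangled
# Bell state (|00> + |11>)/sqrt(2), in which neither qubit has a
# definite value -- the classical "if" reading no longer applies:
bell = cnot([h, 0.0, h, 0.0])
```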
Quantum Programming Languages: An Introductory Overview <s> 140 <s> The number of steps any classical computer requires in order to find the prime factors of an l-digit integer N increases exponentially with l, at least using algorithms known at present1. Factoring large integers is therefore conjectured to be intractable classically, an observation underlying the security of widely used cryptographic codes1,2. Quantum computers3, however, could factor integers in only polynomial time, using Shor's quantum factoring algorithm4,5,6. Although important for the study of quantum computers7, experimental demonstration of this algorithm has proved elusive8,9,10. Here we report an implementation of the simplest instance of Shor's algorithm: factorization of N = 15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule as quantum bits11,12, which can be manipulated with room temperature liquid-state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to systems containing many quantum bits13, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects14 in our system. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> We develop a type theory and provide a denotational semantics for a simple fragment of the quantum lambda calculus, a formal language for quantum computation based on linear logic. In our semantics, variables inhabit certain Hilbert bundles, and computations are interpreted as the appropriate inner product preserving maps between Hilbert bundles. These bundles and maps form a symmetric monoidal closed category, as expected for a calculus based on linear logic. 
<s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> The paper develops a model of quantum computing from the perspective of functional programming. The model explains the fundamental ideas of quantum computing at a level of abstraction that is familiar to functional programmers. The model also illustrates some of the inherent difficulties in interpreting quantum mechanics and highlights the differences between quantum computing and traditional (functional or otherwise) computing models. <s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> Compilers and computer-aided design tools will be essential for quantum computing. We present a computer-aided design flow that transforms a high-level language program representing a quantum computing algorithm into a technology-specific implementation. 
We trace the significant steps in this flow and illustrate the transformations to the representation of the quantum program. The focus of this paper is on the languages and transformations needed to represent and optimize a quantum algorithm along the design flow. Our software architecture provides significant benefits to algorithm designers, tool builders, and experimentalists. Of particular interest are the trade-offs in performance and accuracy that can be obtained by weighing different optimization and error-correction procedures at given levels in the design hierarchy. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> These ‘lecture notes’ are based on joint work with Samson Abramsky. I will survey and informally discuss the results of [3, 4, 5, 12, 13] in a pedestrian not too technical way. These include: • ‘The logic of entanglement’, that is, the identification and abstract axiomatization of the ‘quantum information-flow’ which enables protocols such as quantum teleportation. 1 To this means we defined strongly compact closed categories which abstractly capture the behavioral properties of quantum entanglement. • ‘Postulates for an abstract quantum formalism’in which classical informationflow (e.g. token exchange) is part of the formalism. As an example, we provided a purely formal description of quantum teleportation and proved correctness in <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> Elaborating on our joint work with Abramsky in quant-ph/0402130 we further unravel the linear structure of Hilbert spaces into several constituents. Some prove to be very crucial for particular features of quantum theory while others obstruct the passage to a formalism which is not saturated with physically insignificant global phases. 
::: First we show that the bulk of the required linear structure is purely multiplicative, and arises from the strongly compact closed tensor which, besides providing a variety of notions such as scalars, trace, unitarity, self-adjointness and bipartite projectors, also provides Hilbert-Schmidt norm, Hilbert-Schmidt inner-product, and in particular, the preparation-state agreement axiom which enables the passage from a formalism of the vector space kind to a rather projective one, as it was intended in the (in)famous Birkhoff & von Neumann paper. ::: Next we consider additive types which distribute over the tensor, from which measurements can be built, and the correctness proofs of the protocols discussed in quant-ph/0402130 carry over to the resulting weaker setting. A full probabilistic calculus is obtained when the trace is moreover linear and satisfies the diagonal axiom, which brings us to a second main result, characterization of the necessary and sufficient additive structure of a both qualitatively and quantitatively effective categorical quantum formalism without redundant global phases. Along the way we show that if in a category a (additive) monoidal tensor distributes over a strongly compact closed tensor, then this category is always enriched in commutative monoids. <s> BIB007 | Unfortunately, the meaning of 'controlling quantum hardware' is not quite obvious. Many of the existing QPLs have been combined with a simulator based on strongly idealized models of hardware: the system is assumed to be perfectly isolated from the environment so that decoherence effects (i.e. effects destroying interference) will not come into play, unitary operations can be made arbitrarily exact and error correction is not an explicit part of the model. Of course, it is highly non-trivial and seems in fact impossible to incorporate all of these features into a working simulation model.
An article on the first implementation of an NMR-based quantum computer BIB001 , which could factorize the number 15 (into 3 and 5, as the article reports), also reports that a complete simulation of the experiment, involving 4^7 × 4^7 parameters, was not feasible because the state space (of the simulation) was too large. Although programming languages are a central subject in their own right, most computer scientists would agree that, even more importantly, they also form a part of a larger structure. As a historical example, C and Unix illustrate such a close relationship. The Oberon language and Oberon operating system are a highly remarkable and modern example of a symbiosis of this kind. In an interesting article, which addresses this problem in the context of quantum programming, Svore et al. BIB005 consider the problem of designing languages for a quantum computing system from a larger perspective. Some details will be discussed in Section 5.5. Another ambitious goal in designing QPLs can be described as an attempt to re-formulate quantum theory itself in such a way that the theory embodies high-level structures of theoretical computer science BIB006 BIB007 . Articles along these lines start with ideas of theoretical computer science by extending formal models such that formal reasoning on quantum processes should become possible. The quantum lambda calculus by van Tonder BIB002 BIB003 and qGCL by Sanders and Zuliani, an extension of Dijkstra's Guarded Command Language (GCL), are two examples of this kind. qGCL is an imperative language with a formal (operational) semantics. The language contains mechanisms for stepwise refinement, which make it particularly suitable as a specification language. Presently, research on QPLs seems to focus on concepts of functional programming. One argument in favour of this approach is that functional languages can express the algebraic structure of vector spaces in a natural way BIB004 .
Other formalisms of theoretical computer science could as well serve as a starting point for defining new QPLs. In Section 5, some of these approaches will be discussed in more detail. |
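The claim that functional languages fit the algebraic structure of vector spaces can be made concrete even in Python: represent a state as a finite map from basis labels to amplitudes, and extend any operation defined on basis states to superpositions by linearity. The helper names below are our own; this is a sketch of the general idea, not the formalism of any of the cited papers.

```python
def lift(basis_action):
    """Extend a map defined on basis states to a linear map acting on
    superpositions, represented as {basis_string: amplitude} dicts."""
    def linear_map(state):
        out = {}
        for basis, amp in state.items():
            for basis2, amp2 in basis_action(basis).items():
                out[basis2] = out.get(basis2, 0) + amp * amp2
        return out
    return linear_map

h = 2 ** -0.5

def hadamard_first(basis):
    """Hadamard on the first qubit of a basis label such as '01'."""
    rest = basis[1:]
    if basis[0] == '0':
        return {'0' + rest: h, '1' + rest: h}
    return {'0' + rest: h, '1' + rest: -h}

def cnot(basis):
    """CNOT on a two-qubit label: flip the second bit when the first is 1."""
    if basis[0] == '1':
        return {'1' + ('1' if basis[1] == '0' else '0'): 1}
    return {basis: 1}

# Linear maps compose like ordinary functions; here they build the
# Bell state (|00> + |11>)/sqrt(2) from |00>:
bell = lift(cnot)(lift(hadamard_first)({'00': 1}))
```

The point of the sketch is that linearity — the defining property of quantum operations — is obtained once and for all by the higher-order function `lift`, which is the kind of abstraction functional languages provide natively.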
Quantum Programming Languages: An Introductory Overview <s> First-step towards a QPL: pseudocode <s> From the Publisher: ::: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. ::: In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. ::: As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. ::: Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.
<s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> First-step towards a QPL: pseudocode <s> A few conventions for thinking about and writing quantum pseudocode are proposed. The conventions can be used for presenting any quantum algorithm down to the lowest level and are consistent with a quantum random access machine (QRAM) model for quantum computing. In principle a formal version of quantum pseudocode could be used in a future extension of a conventional language. <s> BIB002 | In computer science, algorithms are traditionally formulated in one or another form of pseudocode, for example, in a CLRS-like style BIB001 , which may be considered as a first step towards a programming language. Current textbooks on quantum information theory commonly use a form which mixes natural-language text with standard mathematical notation. In an early article, which has had a lot of influence on later work, particularly on the languages QCL and Q language, Knill BIB002 has proposed a form of pseudocode for quantum programming. In principle, it suffices to combine traditional classical control structures with quantum operations. As an illustration, Figure 2 shows Shor's algorithm in a form which will be easily accessible to computer scientists. The effect of this algorithm can be summarized as follows. For a given composite number N, FACTORIZE(N) returns a pair of non-trivial factors of N. The algorithm is probabilistic in two respects. First, in line 6, the value of a is drawn randomly from Z_N. This turns FACTORIZE(N) into a randomized
Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> In this paper a microscopic quantum mechanical model of computers as represented by Turing machines is constructed. It is shown that for each number N and Turing machine Q there exists a Hamiltonian H_NQ and a class of appropriate initial states such that if ψ is such an initial state, then ψ_QN(t) = exp(−iH_NQ t) ψ_QN(0) correctly describes at times t_3, t_6, ⋯, t_{3N} model states that correspond to the completion of the first, second, ⋯, Nth computation step of Q. The model parameters can be adjusted so that for an arbitrary time interval Δ around t_3, t_6, ⋯, t_{3N}, the “machine” part of ψ_QN(t) is stationary. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> It is argued that underlying the Church-Turing hypothesis there is an implicit physical assertion. Here, this assertion is presented explicitly as a physical principle: ‘every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means’. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above. A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the ‘universal quantum computer’ are compatible with the principle. Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any Turing machine. These do not include the computation of non-recursive functions, but they do include ‘quantum parallelism’, a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it.
The intuitive explanation of these properties places an intolerable strain on all interpretations of quantum theory other than Everett’s. Some of the numerous connections between the quantum theory of computation and the rest of physics are explored. Quantum complexity theory allows a physically more reasonable definition of the ‘complexity’ or ‘knowledge’ in a physical system than does classical complexity theory. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor: It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. We thus give the first examples of quantum cryptanalysis. > <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> A quantum computer promises efficient processing of certain computational tasks that are intractable with classical computer technology. While basic principles of a quantum computer have been demonstrated in the laboratory, scalability of these systems to a large number of qubits, essential for practical applications such as the Shor algorithm, represents a formidable challenge. 
Most of the current experiments are designed to implement sequences of highly controlled interactions between selected particles (qubits), thereby following models of a quantum computer as a (sequential) network of quantum logic gates. Here we propose a different model of a scalable quantum computer. In our model, the entire resource for the quantum computation is provided initially in form of a specific entangled state (a so-called cluster state) of a large number of qubits. Information is then written onto the cluster, processed, and read out form the cluster by one-particle measurements only. The entangled state of the cluster thus serves as a universal substrate for any quantum computation. Cluster states can be created efficiently in any system with a quantum Ising-type interaction (at very low temperatures) between two-state particles in a lattice configuration. <s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> An inexpensive faucet aerator is provided by three molded parts and a thin metal cup that holds two of the molded parts in assembled relationship. The first molded part is an elongated annulus provided with upstream and downstream recesses separated by an inner ring that helps to break up the liquid flow and serves as an abutment to support a second molded, jet-forming, part in the upstream recess and is arranged to be engaged by the third, molded, part located in the downstream recess and aiding in defining air intake means to the aerator. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> What resources are universal for quantum computation? In the standard model, a quantum computer consists of a sequence of unitary gates acting coherently on the qubits making up the computer. 
This paper shows that a very different model involving only projective measurements, quantum memory, and the ability to prepare the |0>state is also universal for quantum computation. In particular, no coherent unitary dynamics are involved in the computation. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> We define quantum cellular automata as infinite quantum lattice systems with discrete time dynamics, such that the time step commutes with lattice translations and has strictly finite propagation speed. In contrast to earlier definitions this allows us to give an explicit characterization of all local rules generating such automata. The same local rules also generate the global time step for automata with periodic boundary conditions. Our main structure theorem asserts that any quantum cellular automaton is structurally reversible, i.e., that it can be obtained by applying two blockwise unitary operations in a generalized Margolus partitioning scheme. This implies that, in contrast to the classical case, the inverse of a nearest neighbor quantum cellular automaton is again a nearest neighbor automaton. ::: We present several construction methods for quantum cellular automata, based on unitaries commuting with their translates, on the quantization of (arbitrary) reversible classical cellular automata, on quantum circuits, and on Clifford transformations with respect to a description of the single cells by finite Weyl systems. Moreover, we indicate how quantum random walks can be considered as special cases of cellular automata, namely by restricting a quantum lattice gas automaton with local particle number conservation to the single particle sector. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> We present the SQRAM architecture for quantum computing, which is based on Knill's QRAM model. 
We detail a suitable instruction set, which implements a universal set of quantum gates, and demonstrate the operation of the SQRAM with Deutsch's quantum algorithm. The compilation of high-level quantum programs for the SQRAM machine is considered; we present templates for quantum assembly code and a method for decomposing matrices for complex quantum operations. The SQRAM simulator and compiler are discussed, along with directions for future work. <s> BIB008 | algorithm, even if the order-finding algorithm in line 10 were implemented deterministically. Second, the result returned by the latter is probabilistic as well, if implemented on a quantum computer, because of the measurement action in line 4 of algorithm FIND-ORDER in Figure 3. The embedding of these functions into a classical control structure makes FACTORIZE(N) a kind of Las Vegas algorithm: if there are exactly two prime factors, the correct non-trivial factorization is determined deterministically, apart from the order in which the factors appear, but the run-time is a random variable. Function FIND-ORDER, the core part of the algorithm in Figure 2 and the exclusive task of the quantum computer, determines the order of a with respect to N, i.e. the least integer r such that a^r ≡ 1 (mod N). This is based on the (purely classical) function CONTINUED-FRACTION-EXPANSION(m, p, a, N), which returns the smallest r such that a^r ≡ 1 (mod N) if such r exists, otherwise −1, using the continued fraction expansion of m/p. Here, with regard to the subject of this article, two points deserve special attention: first, in general, quantum algorithms require some classical pre- and post-processing. Therefore, quantum programming languages should contain a mixture of classical and non-classical language elements. Second, with this notation, the classical language elements have to be interpreted intuitively just as in a conventional procedural language.
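A possible classical realization of CONTINUED-FRACTION-EXPANSION is sketched below: it walks through the convergents of m/p, whose denominators are the candidate orders, and tests each candidate r by modular exponentiation. The concrete numbers in the usage line — a measurement outcome m = 1536 for p = 2048, a = 7, N = 15 — are an illustrative example of ours, not taken from the article.

```python
def continued_fraction_expansion(m, p, a, N):
    """Classical post-processing of the order-finding measurement: scan
    the continued-fraction convergents of m/p for a denominator r with
    a**r == 1 (mod N); return -1 if no such r is found."""
    num, den = m, p
    q_prev, q_cur = 1, 0          # convergent denominators k_{-2}, k_{-1}
    while den:
        a_k = num // den          # next continued-fraction coefficient
        num, den = den, num - a_k * den
        q_prev, q_cur = q_cur, a_k * q_cur + q_prev
        if 0 < q_cur <= N and pow(a, q_cur, N) == 1:
            return q_cur
    return -1

# Illustrative example: a measured value m = 1536 with p = 2048 gives
# m/p = 3/4, and the convergent denominator 4 is indeed the order of
# a = 7 modulo N = 15 (7**4 = 2401 = 160*15 + 1).
r = continued_fraction_expansion(1536, 2048, 7, 15)
```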
In order to 'interpret' the quantum operations, however, which appear here in the shape of ADTs, one has to return to the standard formalism of quantum physics. Since these operations act on (quantum) registers, this kind of pseudocode notation may be regarded as a mixture of procedural language elements and some kind of machine language. Of course, operations like FIND-ORDER must further be decomposed into more elementary functions; Figure 3 shows one of the possibilities in pseudocode form. Here, MIX is the Hadamard operation, applied to a quantum register, U_{a,N} is a unitary operation which represents the function x ↦ xa mod N and QFT is the quantum Fourier transform which determines the period of this function. The QFT can be defined using the definition of the classical discrete Fourier transform (DFT). In a given basis, the transformation may be written as |c⟩ ↦ Σ_{k=0}^{2^n-1} y_k |k⟩. Here, n is the number of qubits and the coefficients (y_k) are obtained from the coefficients (x_j) by the usual classical DFT. An implementation of the quantum Fourier transform by means of more elementary operations is shown in Figure 4: the exterior for-loop shows a decomposition into n blocks of unitaries, each of which consists of a sequence of Hadamard and controlled two-qubit operations. Obviously, the complexity of this quantum algorithm is Θ(n^2). This shows the exponential speed-up of the QFT compared with the classical FFT, the complexity of which is Θ(n2^n). For details, the reader may consult a textbook on quantum information theory, e.g. Ref. [1, p. 217]. A complete presentation and analysis of Shor's algorithm, in particular the determination of the correctness probabilities, can be found in Refs. BIB005 BIB003 . The pseudocode of Figures 2-4 implicitly assumes that each register will be used in only one mode, either quantum or classical. In his article, Knill goes several steps further.
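Before turning to Knill's framework, the QFT gate count just discussed can be checked with a short sketch. The gate encoding below is an illustrative assumption, but the totals, n Hadamards plus n(n-1)/2 controlled rotations, are exactly the Θ(n^2) behaviour stated above.

```python
def qft_gate_sequence(n):
    """List the gates of the textbook QFT circuit on n qubits:
    for each qubit j one Hadamard, followed by a controlled phase
    rotation R_d (angle 2*pi/2**d) from every later qubit k."""
    gates = []
    for j in range(n):                  # exterior loop: n blocks
        gates.append(("H", j))          # Hadamard on qubit j
        for k in range(j + 1, n):       # controlled two-qubit gates
            gates.append(("R", k - j + 1, k, j))
    return gates

seq = qft_gate_sequence(5)
n_h = sum(1 for g in seq if g[0] == "H")
n_r = sum(1 for g in seq if g[0] == "R")
print(n_h, n_r, len(seq))  # 5 10 15
```

The total is n + n(n-1)/2, i.e. Θ(n^2) gates, whereas the classical FFT on a vector of length 2^n needs Θ(n2^n) operations.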
He suggests introducing a unifying framework which provides methods for handling quantum registers, annotations for specifying the extent of entanglement and methods for initializing, using and measuring quantum registers. In addition, the framework includes meta-operations such as reversing a quantum register, conditioning of quantum registers and converting a classical algorithm to a reversible one. Moreover, the article introduces a notation which makes it possible to indicate whether a register is possibly in a superposed state. If this is the case, then only restricted operations can be applied to the register, such as preparations, unitary operations and measurements. Otherwise, arbitrary operations are allowed, as is typical for classical processors. The article also provides a set of rules governing how registers are used. For example, an assignment with a quantum register on the right indicates a measurement, and a register appearing on the right of an assignment can experience side effects, i.e. registers are assumed to be passed by reference. Knill illustrates his pseudocode notation with some examples. The controlled two-qubit operation in line 5 of Figure 4 of the present article is denoted in his first variant of the QFT by an underlined if to indicate a quantum conditional. In a second variant of the QFT, a measurement of the amplitudes has been included in the algorithm. This is denoted by an assignment of a register which appears in its quantum form on the right and in its classical form on the left. Another idea in Knill's article is the QRAM model. According to this model, quantum computers are not stand-alone devices but form part of a larger architecture. A conventional classical PC performs the pre- and post-processing and controls the quantum device driver by building the required sequence of unitary operations as a classical data structure, which is then transmitted to the device driver: the quantum system is triggered by the classical PC, so to speak.
After the final measurement, the PC can initiate another round with parameters, possibly depending on previous measurement results. An essential point of this idea is that, in order to keep coherence times short, the PC should do all the processing that the quantum computer cannot speed up anyway. The article by Knill has been influential in the design of several QPLs, particularly QCL by Ömer and Q language by Bettelli et al. In a recent article, Nagarajan et al. BIB008 describe an elaborated variant of the QRAM model, which they call the sequential quantum random access machine (SQRAM). Some more details will be given in Section 5.6. As an aside, it can be mentioned that there are several other quantum computational models. Quantum Turing machines have been investigated since the very beginning of quantum computing research by Benioff BIB001 , Deutsch BIB002 and others (see, for example, [1, p. 214] for more references). Usually, these are considered adequate for questions of computability but too general as an underlying model of QPLs. More recently, several variants of the model of measurement-based quantum computation have been proposed BIB004 BIB006 . The relation of this conceptually new computational model to the conventional gate model is the subject of current research. Although there has been considerable work on quantum cellular automata (see Ref. BIB007 and references therein) and several languages for classical cellular automata have been defined [58-61] , no QPL based on this model seems to have been published up to now.
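The QRAM architecture described above can be sketched as a toy program: the classical PC assembles the instruction sequence as plain data and passes it to a (here: simulated) quantum device, which executes unitaries and measurements and reports the outcomes. The one-qubit simulator and the instruction encoding are illustrative assumptions only.

```python
import random

X = [[0, 1], [1, 0]]                                     # NOT gate
H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -2 ** -0.5]]    # Hadamard

def run_device(program):
    """Toy one-qubit 'quantum device': executes the instruction list
    built by the classical controller and reports measurement results."""
    state = [1.0, 0.0]                             # device starts in |0>
    results = []
    for op, *args in program:
        if op == "unitary":                        # apply U to the qubit
            u = args[0]
            state = [u[0][0] * state[0] + u[0][1] * state[1],
                     u[1][0] * state[0] + u[1][1] * state[1]]
        elif op == "measure":                      # measure, record outcome
            p0 = abs(state[0]) ** 2
            outcome = 0 if random.random() < p0 else 1
            state = [1.0, 0.0] if outcome == 0 else [0.0, 1.0]
            results.append(outcome)
    return results

# The classical controller builds the program as ordinary data and
# hands it to the device, just as in the QRAM picture.
program = [("unitary", X), ("measure",)]
print(run_device(program))  # [1]: X|0> = |1>, so the outcome is certain
```

Interleaving further `("unitary", ...)` and `("measure",)` entries, with later instructions chosen by the PC in response to earlier outcomes, is precisely the round-based interaction described in the text.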
Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> A method and apparatus for superimposing printed characters of any such nature as may be transmitted upon a received television image, at the will of the viewer at the receiver. The character information is incrementally transmitted during the vertical blanking interval of the television scanning format. The receiver is especially constructed to have a dynamic shift register, also means to manually select one or none of plural character programs; such as news, stock market, or weather. The characters may be made to crawl horizontally to present an extended message, which crawl may be halted by the viewer. The mandatory display of emergency messages is possible by a control located at the transmitter. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Abstract The λ-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be identified with λ-terms. However, if one goes further and uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from values to values) that may jeopardise the applicability of theoretical results. In this paper we introduce calculi, based on a categorical semantics for computations, that provide a correct basis for proving equivalence of programs for a wide range of notions of computation. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> This paper explores the use of monads to structure functional programs. No prior knowledge of monads or category theory is required. Monads increase the ease with which programs may be modified. They can mimic the effect of impure features such as exceptions, state, and continuations; and also provide effects not easily achieved with such features.
The types of a program reflect which effects occur. The first section is an extended example of the use of monads. A simple interpreter is modified to support various extra features: error messages, state, output, and non-deterministic choice. The second section describes the relation between monads and the continuation-passing style. The third section sketches how monads are used in a compiler for Haskell that is written in Haskell. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Abstract Monads have become very popular for structuring functional programs since Wadler introduced their use in 1990. In particular, libraries of combinators are often based on a monadic type. Such libraries share (in part) a common interface, from which numerous benefits flow, such as the possibility to write generic code which works together with any library. But, several interesting and useful libraries are fundamentally incompatible with the monadic interface. In this paper I propose a generalisation of monads, which I call arrows, with significantly wider applicability. The paper shows how many of the techniques of monadic programming generalise to the new setting, and gives examples to show that the greater generality is useful. In particular, three non-monadic libraries for efficient parsing, building graphical user interfaces, and programming active web pages fit naturally into the new framework. <s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> From the foundations of quantum theory to quantum technology - G. Alber quantum information theory, an invitation - R. Werner quantum communication - H. Weinfurter and A. Zeilinger quantum algorithms, applicable algebra and quantum physics - T.H. Beth and M. Rotteler mixed-state entanglement and quantum communication - M. Rotteler and R. Horodecki.
<s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We develop a type theory and provide a denotational semantics for a simple fragment of the quantum lambda calculus, a formal language for quantum computation based on linear logic. In our semantics, variables inhabit certain Hilbert bundles, and computations are interpreted as the appropriate inner product preserving maps between Hilbert bundles. These bundles and maps form a symmetric monoidal closed category, as expected for a calculus based on linear logic. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> The paper develops a model of quantum computing from the perspective of functional programming. The model explains the fundamental ideas of quantum computing at a level of abstraction that is familiar to functional programmers. 
The model also illustrates some of the inherent difficulties in interpreting quantum mechanics and highlights the differences between quantum computing and traditional (functional or otherwise) computing models. <s> BIB008 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> These ‘lecture notes’ are based on joint work with Samson Abramsky. I will survey and informally discuss the results of [3, 4, 5, 12, 13] in a pedestrian not too technical way. These include: • ‘The logic of entanglement’, that is, the identification and abstract axiomatization of the ‘quantum information-flow’ which enables protocols such as quantum teleportation. To this means we defined strongly compact closed categories which abstractly capture the behavioral properties of quantum entanglement. • ‘Postulates for an abstract quantum formalism’ in which classical information flow (e.g. token exchange) is part of the formalism. As an example, we provided a purely formal description of quantum teleportation and proved correctness in <s> BIB009 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> With a view towards models of quantum computation, we define a functional language where all functions are linear operators by construction. A small step operational semantics (and hence an interpreter/simulator) is provided for this language in the form of a term rewrite system. The linear-algebraic λ-calculus hereby constructed is linear in a different (yet related) sense to that, say, of the linear λ-calculus. These various notions of linearity are discussed in the context of quantum programming languages. <s> BIB010 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> This article is a brief and subjective survey of quantum programming language research. 1 Quantum Computation Quantum computing is a relatively young subject.
It has its beginnings in 1982, when Paul Benioff and Richard Feynman independently pointed out that a quantum mechanical system can be used to perform computations [11, p.12]. Feynman’s interest in quantum computation was motivated by the fact that it is computationally very expensive to simulate quantum physical systems on classical computers. This is due to the fact that such simulation involves the manipulation of extremely large matrices (whose dimension is exponential in the size of the quantum system being simulated). Feynman conceived of quantum computers as a means of simulating nature much more efficiently. The evidence to this day is that quantum computers can indeed perform certain tasks more efficiently than classical computers. Perhaps the best-known example is Shor’s factoring algorithm, by which a quantum computer can find the prime factors of any integer in probabilistic polynomial time [15]. There is no known classical probabilistic algorithm which can solve this problem in polynomial time. In the ten years since the publication of Shor’s result, there has been an enormous surge of research in quantum algorithms and quantum complexity theory. 2 Quantum Programming Languages Quantum physics involves phenomena, such as superposition and entanglement, whose properties are not always intuitive. These same phenomena give quantum computation its power, and are often at the heart of an interesting quantum algorithm. However, there does not yet seem to be a unifying set of principles by which quantum algorithms are developed; each new algorithm seems to rely on a unique set of “tricks” to achieve its particular goal. One of the goals of programming language design is to identify and promote useful “high-level” concepts — abstractions or paradigms which allow humans to think about a problem in a conceptual way, rather than focusing on the details of its implementation.
With respect to quantum programming, it is not yet clear what a useful set of abstractions would be. But the study of quantum programming languages provides a setting in which one can explore possible language features and test their usefulness and expressivity. Moreover, the definition of prototypical programming languages creates a unifying formal framework in which to view and analyze existing quantum algorithms. 2.1 Virtual Hardware Models Advances in programming languages are often driven by advances in compiler design, and vice versa. In the case of quantum computation, the situation is complicated by the fact that no practical quantum hardware exists yet, and not much is known about the detailed architecture of any future quantum hardware. To be able to speak of “implementations”, it is therefore necessary to fix some particular, “virtual” hardware model to work with. Here, it is understood that future quantum hardware may differ considerably, but the differences should ideally be transparent to programmers and should be handled automatically by the compiler or operating system. There are several possible virtual hardware models to work with, but fortunately all of them are equivalent, at least in theory. Thus, one may pick the model which fits one’s computational intuitions most closely. Perhaps the most popular virtual hardware model, and one of the easiest to explain, is the quantum circuit model. Here, a quantum circuit is made up from quantum gates in much the same way as a classical logic circuit is made up from logic gates. The difference is that quantum gates are always reversible, and they correspond to unitary transformations over a complex vector space. See e.g. [3] for a succinct introduction to quantum circuits. Of the two basic quantum operations, unitary transformations and measurements, the quantum circuit model emphasizes the former, with measurements always carried out as the very last step in a computation.
Another virtual hardware model, and one which is perhaps even better suited for the interpretation of quantum programming languages, is the QRAM model of Knill [9]. Unlike the quantum circuit model, the QRAM model allows unitary transformations and measurements to be freely interleaved. In the QRAM model, a quantum device is controlled by a universal classical computer. The quantum device contains a large, but finite number of individually addressable quantum bits, much like a RAM memory chip contains a multitude of classical bits. The classical controller sends a sequence of instructions, which are either of the form “apply unitary transformation U to qubits i and j” or “measure qubit i”. The quantum device carries out these instructions, and responds by making the results of the measurements available. A third virtual hardware model, which is sometimes used in complexity theory, is the quantum Turing machine. Here, measurements are never performed, and the entire operation of the machine, which consists of a tape, head, and finite control, is assumed to be unitary. While this model is theoretically equivalent <s> BIB011 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> The objective of this paper is to develop a functional programming language for quantum computers. We develop a lambda calculus for the classical control model, following the first author's work on quantum flow-charts. We define a call-by-value operational semantics, and we give a type system using affine intuitionistic linear logic. The main results of this paper are the safety properties of the language and the development of a type inference algorithm. <s> BIB012 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Elaborating on our joint work with Abramsky in quant-ph/0402130 we further unravel the linear structure of Hilbert spaces into several constituents.
Some prove to be very crucial for particular features of quantum theory while others obstruct the passage to a formalism which is not saturated with physically insignificant global phases. First we show that the bulk of the required linear structure is purely multiplicative, and arises from the strongly compact closed tensor which, besides providing a variety of notions such as scalars, trace, unitarity, self-adjointness and bipartite projectors, also provides Hilbert-Schmidt norm, Hilbert-Schmidt inner-product, and in particular, the preparation-state agreement axiom which enables the passage from a formalism of the vector space kind to a rather projective one, as it was intended in the (in)famous Birkhoff & von Neumann paper. Next we consider additive types which distribute over the tensor, from which measurements can be built, and the correctness proofs of the protocols discussed in quant-ph/0402130 carry over to the resulting weaker setting. A full probabilistic calculus is obtained when the trace is moreover linear and satisfies the diagonal axiom, which brings us to a second main result, characterization of the necessary and sufficient additive structure of a both qualitatively and quantitatively effective categorical quantum formalism without redundant global phases. Along the way we show that if in a category an (additive) monoidal tensor distributes over a strongly compact closed tensor, then this category is always enriched in commutative monoids. <s> BIB013 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We define a strongly normalising proof-net calculus corresponding to the logic of strongly compact closed categories with biproducts. The calculus is a full and faithful representation of the free strongly compact closed category with biproducts on a given category with an involution. This syntax can be used to represent and reason about quantum processes.
<s> BIB014 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We present the SQRAM architecture for quantum computing, which is based on Knill's QRAM model. We detail a suitable instruction set, which implements a universal set of quantum gates, and demonstrate the operation of the SQRAM with Deutsch's quantum algorithm. The compilation of high-level quantum programs for the SQRAM machine is considered; we present templates for quantum assembly code and a method for decomposing matrices for complex quantum operations. The SQRAM simulator and compiler are discussed, along with directions for future work. <s> BIB015 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We present the quantum programming language cQPL which is an extended version of QPL [Sel04b]. It is capable of quantum communication and it can be used to formulate all possible quantum algorithms. Additionally, it possesses a denotational semantics based on a partial order of superoperators and uses fixed points on a generalised Hilbert space to formalise (in addition to all standard features expected from a quantum programming language) the exchange of classical and quantum data between an arbitrary number of participants. Additionally, we present the implementation of a cQPL compiler which generates code for a quantum simulator. <s> BIB016 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We show that the model of quantum computation based on density matrices and superoperators can be decomposed into a pure classical (functional) part and an effectful part modelling probabilities and measurement. The effectful part can be modelled using a generalisation of monads called arrows. We express the resulting executable model of quantum computing in the Haskell programming language using its special syntax for arrow computations. 
However, the embedding in Haskell is not perfect: a faithful model of quantum computing requires type capabilities that are not directly expressible in Haskell. <s> BIB017 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We introduce the language QML, a functional language for quantum computations on finite types. Its design is guided by its categorical semantics: QML programs are interpreted by morphisms in the category FQC of finite quantum computations, which provides a constructive semantics of irreversible quantum computations realisable as quantum gates. QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement, which is essential for quantum parallelism. <s> BIB018 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Abstract In this paper we give a self-contained introduction to the conceptional and mathematical foundations of quantum information theory. In the first part we introduce the basic notions like entanglement, channels, teleportation, etc. and their mathematical description. The second part is focused on a presentation of the quantitative aspects of the theory. Topics discussed in this context include: entanglement measures, channel capacities, relations between both, additivity and continuity properties and asymptotic rates of quantum operations. Finally, we give an overview on some recent developments and open questions. <s> BIB019 | At present, research on QPLs focuses primarily on the functional programming paradigm rather than on imperative or object-oriented languages. Several reasons are given for this approach. First, operations in a Hilbert space are functions in the traditional mathematical sense.
It is therefore natural to map these onto functions in a functional language, i.e. onto language constructs which map inputs to outputs without side effects. Second, it is argued that type safety in functional languages is much higher than in imperative languages. This can open the way to systems in which the compiler, rather than the run-time system, detects violations of the laws of quantum mechanics. Much of the present work on this matter uses the terminology of category theory and linear logic as a technical framework. Good introductions to these fields can be found in Refs. . Another ambitious approach, closely related to QPLs but reaching far beyond this field, aims at establishing the new field of 'quantum informatics', a research area different from but related to the more traditional quantum information theory (see Refs. BIB009 BIB013 BIB014 and earlier references therein). During the last eight decades, the mathematical setting of quantum mechanics, originally due to von Neumann and others, has been extended to a rigorous theory, which contains the measurement process as well as a description of 'purely classical' systems in a common formal framework. C*-algebras form the basis of this framework (see, for example, the contribution by Werner in Ref. BIB005 and the article by Keyl BIB019 for detailed state-of-the-art introductions). There still remain at least two problems. First, many physicists feel that, from a physical point of view, neither the existence of two types of time evolution, unitary and measurement, nor the relation between the notions of classical and quantum has yet been satisfactorily 'explained' BIB001 . Second, from a computer scientist's point of view, the Hilbert space formalism describes systems on the level of bits and qubits (0s and 1s in folklore terms), which is far from what is commonly called high-level methods and structures in present-day computer science.
Nowadays, the main subjects of classical computer science rely on notions such as modules, ADTs, components, functional languages, process calculi, type systems and various theoretical foundations thereof. Apart from a few applications which directly interface with hardware devices, programming on the bit level is now of only marginal importance. The relation between the traditional mathematical treatment of quantum mechanics, with its operations on the level of qubits, and a classical assembler raises the question of whether there are high-level structures analogous to those of classical computer science which allow reasoning about quantum systems on this level. In particular, Coecke BIB013 argues that mappings of the kind f : H → H can have many different meanings, such as operators, (mixed) states, etc. In Coecke's article, this has been called 'the lack of types reflecting kinds'. So, the question is whether classical structures can be extended ('quantized') to high-level quantum structures which are not merely unitary operations acting on qubit states. Moreover, these should be manageable in such a way that useful work can be done, for example, the development of efficient algorithms. Although this work aims primarily at the foundations of quantum physics itself, there are pragmatic goals as well, such as protocol analysis and design, particularly applications to information security [70-72]. A detailed description of all QPLs which have been published so far is far beyond the scope of the present article. Therefore, only a small number of examples will be sketched in the following. We refer the reader to the original articles; a commented literature summary has recently been given by Gay. Computability with functions can be formalized by means of the lambda calculus, which accordingly forms the basis of functional programming.
van Tonder BIB006 BIB007 has developed a variant of this calculus for quantum programs, which may be regarded as an alternative model of the quantum Turing machine. In its present version, the λq-calculus is based on the vector formalism; classical data and measurements are not treated in the present form of the model. Arrighi and Dowek BIB010 give a formalization of vector spaces and describe an operational semantics for a formal tensor calculus based on term rewrite systems. A brief non-formal account of linear logic can also be found in their article. In linear logic, logical statements are re-interpreted as consumption of resources. In particular, two of the structural rules of classical logic (weakening and contraction) are not available in general. In the context of quantum programming, this is brought into connection with the peculiarities of discarding and cloning quantum states. Some remarks on the different notions of linearity in linear logic and in vector spaces are also made in the article by Arrighi and Dowek. Most influential has been Selinger's work [10, 18, 74]. He defines two variants of a first-order functional language, a textual form (named QPL) and, alternatively, a QPL in the form of quantum flow charts (named QFC). The language is based on the idea (the 'slogan') of 'classical control and quantum data', which is along the lines of Knill's QRAM model, although the language itself is not based on any special hardware model. Separating control and data in this way means that data can be in a superposed state, whereas the control flow cannot. One of the key points of QPL/QFC is that a superoperator is assigned to each program fragment, mapping input states to output states. Thus, the language is based on the established formalism, mentioned in Section 2, which describes mixed states and operations on states in a general unified setting.
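To make the superoperator view concrete, here is a minimal Python sketch (the helper names are mine, not Selinger's notation): a program fragment denotes a map on density matrices, and both unitary evolution and a measurement whose outcome is discarded fit into this single framework, even though only the former is unitary.

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply_unitary(u, rho):
    """Unitary evolution as a superoperator: rho -> U rho U^dagger."""
    u_dag = [[u[j][i].conjugate() for j in range(2)] for i in range(2)]
    return mat_mul(mat_mul(u, rho), u_dag)

def measure_and_forget(rho):
    """Projective measurement in the computational basis whose outcome
    is discarded: rho -> P0 rho P0 + P1 rho P1. Not unitary; it simply
    removes the off-diagonal (coherence) terms."""
    return [[rho[0][0], 0.0], [0.0, rho[1][1]]]

H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -2 ** -0.5]]
rho = apply_unitary(H, [[1.0, 0.0], [0.0, 0.0]])   # |+><+|, all entries ~1/2
print(measure_and_forget(rho))  # diagonal ~1/2: the maximally mixed state
```

Composing such maps, rather than composing unitaries on state vectors, is what allows unitary steps, measurements and discarding to live in one semantic framework.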
Therefore, unitary time evolution and measurements can be dealt with in a common framework, as can situations such as irreversibly discarding a qubit ('measuring a qubit without learning the result' [1, p. 187]). Another innovative feature of the language is its denotational semantics, which is based on complete partial orders of superoperators. QPL/QFC forms the basis for several other articles. One difficulty, mentioned by Selinger BIB011 , is the proper handling of linearity; combining classical and quantum structures in one system requires a linear and non-linear type system. In Ref. BIB012 , Valiron and Selinger propose a higher-order QPL based on a linearly typed lambda calculus. The language combines classical data types and measurements as a primitive feature, which is essential for algorithms in which unitary operations and measurements are interleaved. The semantics of the proposed language is operational, and the appropriate type system is affine intuitionistic linear logic. The authors also develop a type inference algorithm. Another article closely related to Selinger's QPL is the work by Nagarajan et al. BIB015 . The authors extend the QRAM model to a model called SQRAM by explicitly constructing instruction sets for the classical and the quantum component, and they also describe a compiler for a subset of QPL. As an example, they show how Deutsch's algorithm can be expressed in their formalism. The extension of QPL to cQPL by Mauerer BIB016 has already been mentioned. The most distinguishing feature of this language is its ability to describe quantum communication protocols. Therefore, the language, which has a denotational semantics, is suitable for security proofs of communication protocols. A compiler for the language has been developed, which can also be regarded as a QPL compiler. Several experiments with the functional language Haskell as a QPL have been described BIB008 .
There is a somewhat vague analogy to the work on the Q language insofar as an established standard language is being used as a QPL. However, the analogy ends here: programs written in C++ and Haskell do not have much in common. In Ref. BIB017 , superoperators are introduced as arrows BIB004 , which generalize monads BIB002 BIB003 (an algebraic structure which formalizes the notion of a computation). Vizzotto et al. BIB017 remark that the no-cloning property of quantum systems cannot adequately be represented within this framework, and they state that a better approach would be to continue the work with QML by Altenkirch and Grattage BIB018 . This QPL is a first-order functional language with a denotational semantics. In contrast to Selinger's QPL, the language is based on the idea of 'quantum data and quantum control'. Measurements will be included in a future version of the language. A QML compiler has been implemented in Haskell. Table 1 summarizes some of the features of those QPLs which have been discussed or mentioned in the preceding sections. The reader should be aware that research on quantum
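The 'classical control and quantum data' slogan can be illustrated with a small sketch — written in plain Python rather than in any of the surveyed languages: the control flow is ordinary classical code, while the data is a vector of complex amplitudes transformed by a unitary gate and read out via the Born rule.

```python
import math

# Illustrative only: a qubit as a list of two complex amplitudes
# (quantum data), manipulated by ordinary Python control flow.

def apply_hadamard(state):
    """Apply the Hadamard gate H to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def measure_probs(state):
    """Born-rule probabilities of observing |0> and |1>."""
    return [abs(amp) ** 2 for amp in state]

qubit = [1 + 0j, 0 + 0j]          # start in |0>
qubit = apply_hadamard(qubit)     # superposition (|0> + |1>)/sqrt(2)
p0, p1 = measure_probs(qubit)     # both probabilities are 0.5
```

A language in the QPL/QFC tradition would instead assign a superoperator to each such fragment, so that unitary steps and measurements live in one semantic framework; the sketch above only shows the pure-state special case.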
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Introduction <s> System designers can optimize Xtensa for their embedded application by sizing and selecting features and adding new instructions. Xtensa provides an integrated solution that allows easy customization of both hardware and software. This process is simple, fast, and robust. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Introduction <s> Lx is a scalable and customizable VLIW processor technology platform designed by Hewlett-Packard and STMicroelectronics that allows variations in instruction issue width, the number and capabilities of structures and the processor instruction set. For Lx we developed the architecture and software from the beginning to support both scalability (variable numbers of identical processing resources) and customizability (special purpose resources). In this paper we consider the following issues. When is customization or scaling beneficial? How can one determine the right degree of customization or scaling for a particular application domain? What architectural compromises were made in the Lx project to contain the complexity inherent in a customizable and scalable processor family? The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio. We also show how scaling machine resources scales performance, although not uniformly across all applications. Finally we show that customization on an application-by-application basis is today still very dangerous and much remains to be done for it to become a viable solution. <s> BIB002 | The ever-increasing demand for high performance at low power in the embedded domain is fueling the trend towards customized embedded processors . A customized processor is designed specifically for an application domain (e.g., network, multimedia),
enabling it to offer significantly higher performance than its general-purpose counterparts, while consuming much lower energy. This dual improvement in power-performance is achieved by eliminating certain structures (e.g., the floating-point unit) that are redundant for the particular application domain, while choosing appropriate dimensions for other structures (e.g., cache, TLB, register file). The elimination of redundant structures cuts down energy/area wastage, and tailor-made dimensioning of the required structures improves performance at a reduced power budget. A further step towards customization is instruction-set extensible processors, or extensible processors for short. An extensible processor opens up the opportunity to customize the Instruction-Set Architecture (ISA) through application-specific extension instructions or custom instructions. Each custom instruction encapsulates a frequently occurring complex pattern in the data-flow graph of the application(s). Custom instructions are implemented as Custom Functional Units (CFU) in the data-path of the processor core. As multiple instructions from the base ISA are folded into a single custom instruction, we save fetching/decoding costs and improve code size. More importantly, the CFU can typically achieve significantly lower latency through parallelization and chaining of basic operations (the latency is determined by the critical path in the dataflow graph of the corresponding custom instruction) compared to executing one operation per cycle sequentially in the original processor. On the other hand, as custom instructions are exposed to the programmer, extensible processors offer great flexibility, just like any software-programmable general-purpose processor. The large number of commercial extensible processors available in today's market (e.g., Xtensa BIB001 , Lx BIB002 , ARC configurable cores [2], OptimoDE , MIPS CorExtend [18] ) is a testament to their widespread popularity. 
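As a rough illustration of why a CFU reduces latency, the following sketch compares the sequential cost of a candidate custom instruction's dataflow graph with its critical-path length. The graph and the per-operation latencies are invented for illustration only.

```python
# Hypothetical dataflow graph of a candidate custom instruction
# (two multiplies feeding an add, then a shift): node -> (latency, deps).
dfg = {
    "mul1":  (2.0, []),
    "mul2":  (2.0, []),
    "add":   (1.0, ["mul1", "mul2"]),
    "shift": (0.5, ["add"]),
}

def critical_path(dfg):
    """Longest finish time over the DAG = CFU latency bound."""
    memo = {}
    def finish(node):
        if node not in memo:
            lat, deps = dfg[node]
            memo[node] = lat + max((finish(d) for d in deps), default=0.0)
        return memo[node]
    return max(finish(n) for n in dfg)

# Executing one operation at a time costs the sum of all latencies,
# while a CFU that parallelizes and chains operations is bounded by
# the critical path: mul -> add -> shift.
sequential = sum(lat for lat, _ in dfg.values())  # 5.5
cfu = critical_path(dfg)                          # 3.5
```

Here the two multiplies execute in parallel inside the CFU, so the custom instruction's latency drops from 5.5 to 3.5 units — exactly the critical-path argument made above.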
There are, however, some drawbacks of traditional extensible processors. First, we need to design and fabricate a different customized processor for each application domain. A processor customized for one application domain may fail to provide any tangible performance benefit for a different domain. Soft core processors with extensibility features that are synthesized in FPGAs (e.g., Altera Nios , Xilinx MicroBlaze [21] ) somewhat mitigate this problem, as the customization can be performed post-fabrication. Still, customizable soft cores suffer from lower frequency and higher energy consumption because the entire processor (and not just the CFUs) is implemented in FPGAs. Apart from cross-domain performance problems, extensible processors are also limited by the amount of silicon available for implementation of the CFUs. As embedded systems progress towards highly complex and dynamic applications (e.g., MPEG-4 video encoder/decoder, software-defined radio), the silicon area constraint becomes a primary concern. Moreover, for highly dynamic applications that can switch between different modes (e.g., runtime selection of the encryption standard) with unique custom instruction requirements, a customized processor catering to all scenarios will clearly be a sub-optimal design. Runtime adaptive extensible embedded processors offer a potential solution to all these problems. An adaptive extensible processor can be configured at runtime to change its custom instructions and the corresponding CFUs. Clearly, to achieve runtime adaptivity, the CFUs have to be implemented in some form of reconfigurable logic. However, the base processor is implemented in ASIC to provide high clock frequency and better energy efficiency. 
As CFUs are implemented in reconfigurable logic, these extensible processors offer full flexibility to adapt (post-fabrication) the custom instructions according to the requirements of the application running on the system, even midway through the execution of the application. Such adaptive extensible processors can be broadly classified into two categories: -Explicit Reconfigurability: This class of processors needs full compiler or programmer support to identify the custom instructions, synthesize them, and finally cluster them into one (or more) configurations that can be switched at runtime. In other words, custom instructions are generated off-line and the application is recompiled to use these custom instructions. -Transparent Reconfigurability: This class of processors does not expose the extensibility feature to the compiler or the programmer. In other words, the extensibility is completely transparent to the user. Instead, the runtime system identifies the custom instructions and synthesizes them while the application is running on the system. These systems are more complex, but may provide better performance as the decisions are taken at runtime. In this article, we will first provide a quick survey of the architecture of explicit runtime adaptive extensible processors, followed by the compiler support required for such processors. Next, we will discuss transparent reconfigurable processors and their runtime systems. Finally, we will conclude this survey by outlining the challenges and opportunities in this domain.
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> This paper explores a novel way to incorporate hardware-programmable resources into a processor microarchitecture to improve the performance of general-purpose applications. Through a coupling of compile-time analysis routines and hardware synthesis tools, we automatically configure a given set of the hardware-programmable functional units (PFUs) and thus augment the base instruction set architecture so that it better meets the instruction set needs of each application. We refer to this new class of general-purpose computers as PRogrammable Instruction Set Computers (PRISC). Although similar in concept, the PRISC approach differs from dynamically programmable microcode because in PRISC we define entirely-new primitive datapath operations. In this paper, we concentrate on the microarchitectural design of the simplest form of PRISC—a RISC microprocessor with a single PFU that only evaluates combinational functions. We briefly discuss the operating system and the programming language compilation techniques that are needed to successfully build PRISC and, we present performance results from a proof-of-concept study. With the inclusion of a single 32-bit-wide PFU whose hardware cost is less than that of a 1 kilobyte SRAM, our study shows a 22% improvement in processor performance on the SPECint92 benchmarks. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> A dynamic instruction set computer (DISC) has been developed that supports demand-driven modification of its instruction set. Implemented with partially reconfigurable FPGAs, DISC treats instructions as removable modules paged in and out through partial reconfiguration as demanded by the executing program. Instructions occupy FPGA resources only when needed and FPGA resources can be reused to implement an arbitrary number of performance-enhancing application-specific instructions. 
DISC further enhances the functional density of FPGAs by physically relocating instruction modules to available FPGA space. <s> BIB002 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> As custom computing machines evolve, it is clear that a major bottleneck is the slow interconnection architecture between the logic and memory. This paper describes the architecture of a custom computing machine that overcomes the interconnection bottleneck by closely integrating a fixed-logic processor, a reconfigurable logic array, and memory into a single chip, called OneChip-98. The OneChip-98 system has a seamless programming model that enables the programmer to easily specify instructions without additional complex instruction decoding hardware. As well, there is a simple scheme for mapping instructions to the corresponding programming bits. To allow the processor and the reconfigurable array to execute concurrently, the programming model utilizes a novel memory-consistency scheme implemented in the hardware. To evaluate the feasibility of the OneChip-98 architecture, a 32-bit MIPS-like processor and several performance enhancement applications were mapped to the Transmogrifier-2 field programmable system. For two typical applications, the 2-dimensional discrete cosine transform and the 64-tap FIR filter, we were capable of achieving a performance speedup of over 30 times that of a stand-alone state-of-the-art processor. <s> BIB003 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> Reconfigurable hardware has the potential for significant performance improvements by providing support for application-specific operations. We report our experience with Chimaera, a prototype system that integrates a small and fast reconfigurable functional unit (RFU) into the pipeline of an aggressive, dynamically-scheduled superscalar processor. Chimaera is capable of performing 9-input/1-output operations on integer data. 
We discuss the Chimaera C compiler that automatically maps computations for execution in the RFU. Chimaera is capable of: (1) collapsing a set of instructions into RFU operations, (2) converting control-flow into RFU operations, and (3) supporting a more powerful fine-grain data-parallel model than that supported by current multimedia extension instruction sets (for integer operations). Using a set of multimedia and communication applications we show that even with simple optimizations, the Chimaera C compiler is able to map 22% of all instructions to the RFU on the average. A variety of computations are mapped into RFU operations ranging from as simple as add/sub-shift pairs to operations of more than 10 instructions including several branches. Timing experiments demonstrate that for a 4-way out-of-order superscalar processor Chimaera results in average performance improvements of 21%, assuming a very aggressive core processor design (most pessimistic RFU latency model) and communication overheads from and to the RFU. <s> BIB004 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> System designers can optimize Xtensa for their embedded application by sizing and selecting features and adding new instructions. Xtensa provides an integrated solution that allows easy customization of both hardware and software. This process is simple, fast, and robust. <s> BIB005 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> This paper describes a new architecture for embedded reconfigurable computing, based on a very-long instruction word (VLIW) processor enhanced with an additional run-time configurable datapath. The reconfigurable unit is tightly coupled with the processor, featuring an application-specific instruction-set extension. 
Mapping computation intensive algorithmic portions on the reconfigurable unit allows a more efficient elaboration, thus leading to an improvement in both timing performance and power consumption. A test chip has been implemented in a standard 0.18-μm CMOS technology. The test of a signal processing algorithmic benchmark showed speedups ranging from 4.3× to 13.5× and energy consumption reduced up to 92%. <s> BIB006 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> In this paper, we present a polymorphic processor paradigm incorporating both general-purpose and custom computing processing. The proposal incorporates an arbitrary number of programmable units, exposes the hardware to the programmers/designers, and allows them to modify and extend the processor functionality at will. To achieve the previously stated attributes, we present a new programming paradigm, a new instruction set architecture, a microcode-based microarchitecture, and a compiler methodology. The programming paradigm, in contrast with the conventional programming paradigms, allows general-purpose conventional code and hardware descriptions to coexist in a program: In our proposal, for a given instruction set architecture, a onetime instruction set extension of eight instructions, is sufficient to implement the reconfigurable functionality of the processor. We propose a microarchitecture based on reconfigurable hardware emulation to allow high-speed reconfiguration and execution. To prove the viability of the proposal, we experimented with the MPEG-2 encoder and decoder and a Xilinx Virtex II Pro FPGA. We have implemented three operations, SAD, DCT, and IDCT. The overall attainable application speedup for the MPEG-2 encoder and decoder is between 2.64-3.18 and between 1.56-1.94, respectively, representing between 93 percent and 98 percent of the theoretically obtainable speedups. 
<s> BIB007 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> A software-configurable processor combines a traditional RISC processor with a field-programmable instruction extension unit that lets the system designer tailor the processor to a particular application. To add application-specific instructions to the processor, the programmer adds a pragma before a C or C++ function declaration, and the compiler then turns the function into a single instruction <s> BIB008 | Temporal Reconfiguration. We start with architectures that enable temporal reconfiguration, but only one custom instruction can exist at any point of time. That is, there is no spatial sharing of the reconfigurable logic among custom instructions. PRISC (PRogrammable Instruction Set Processor) BIB001 is one of the very first architectures to include temporal reconfigurability of the custom functional units. Temporal reconfiguration virtually enlarges the limited reconfigurable hardware, which is tightly attached to the datapath of core processor. PRISC supports a set of configurations, each of which contains a computation kernel or a custom instruction. At any point of time, there is only one active configuration for reconfigurable hardware. However, each of the configurations can become active at some point of time through time-multiplexing. Therefore, temporal reconfiguration can extend the computational ability of the reconfigurable hardware at the cost of reconfiguration overhead. Figure 1 shows the Programmable Functional Unit (PFU) in parallel with the other traditional functional units in the datapath of the PRISC processor. PFU data communication is similar to the other functional units. However, PFU can support only two input operands and one output operand. 
With the limitation on the number of input and output operands, PRISC cannot implement large custom instructions that could potentially provide more performance benefit through instruction-level parallelism as well as higher latency reduction. Moreover, as each configuration can include only one instruction, PRISC effectively restricts the number of custom instructions per loop body to one BIB001 ; otherwise, the temporal reconfiguration cost within the loop body will typically outweigh any benefit of custom instructions. OneChip BIB003 reduces reconfiguration overhead by allowing multiple configurations to be stored in the PFU, but only one configuration is active at any point of time. Moreover, OneChip comprises a superscalar pipeline with a PFU to achieve higher performance for streaming applications. However, OneChip lacks the details of how programmers specify or design the hardware that is mapped onto the reconfigurable logic. Spatial and Temporal Reconfiguration. Both PRISC and OneChip allow only one custom instruction per configuration, which can result in high reconfiguration cost, especially if two custom instructions in the same code segment are executed frequently, for example, inside a loop body. Our next set of architectures enables spatial reconfiguration, that is, the reconfigurable hardware can be shared among multiple custom instructions. The combination of spatial and temporal reconfiguration is a powerful feature that partitions the custom instructions into multiple configurations, each of which contains one or more custom instructions. This clustering of multiple custom instructions into a single configuration can significantly reduce the reconfiguration overhead. Chimaera BIB004 , which is inspired by PRISC, is one of the original works considering temporal plus spatial reconfiguration of the custom functional units. Chimaera tightly couples a Reconfigurable Functional Unit (RFU) with a superscalar pipeline. 
The main innovation of the Chimaera RFU is that it uses nine input registers to produce the result in one destination register. Simple compiler support is provided to automatically map groups of normal instructions into custom instructions. However, the Chimaera compiler lacks support for spatial and temporal reconfiguration of custom instructions so as to make runtime reconfiguration more efficient. The Stretch S6000 BIB008 commercial processor follows this research trend. Figure 2 shows the Stretch S6000 engine that incorporates the Tensilica Xtensa LX dual-issue VLIW processor BIB005 and the Stretch Instruction Set Extension Fabric (ISEF). The ISEF is a software-configurable datapath based on programmable logic. It consists of a plane of Arithmetic/logic Units (AU) and a plane of Multiplier Units (MU) embedded and interlinked in a programmable, hierarchical routing fabric. This configurable fabric acts as a functional unit to the processor. It is built into the processor's datapath, and resides alongside other traditional functional units. The programmer-defined application-specific instructions (Extension Instructions) are implemented in this fabric. When an extension instruction is issued, the processor checks to make sure the corresponding configuration (containing the extension instruction) is loaded into the ISEF. If the required configuration is not present in the ISEF, it is automatically loaded prior to the execution of the user-defined instruction. The ISEF provides high data bandwidth to the core processor through 128-bit wide registers. In addition, 64KB of embedded RAM is included inside the ISEF to store temporary results of computation. With all these features, a single custom instruction can potentially implement a complete inner loop of the application. The Stretch compiler fully unrolls any loop with constant iteration counts.
Fig. 2 . Stretch S6000 datapath BIB008
Partial Reconfiguration. With partial reconfiguration, existing custom instructions can be removed to make space for the new instructions. 
Moreover, as only a part of the fabric is reconfigured, it saves reconfiguration cost. DISC (Dynamic Instruction Set Computer) BIB002 is one of the earliest attempts for an extensible processor to provide a partial reconfiguration feature. DISC implements each instruction of the instruction set as an independent circuit module. It can page-in and page-out individual instruction modules onto the reconfigurable fabric in a demand-driven manner. DISC supports relocatable circuit modules such that an existing instruction module can be moved inside the fabric to generate enough contiguous space for the incoming instruction module. The drawback of the DISC system is that both standard and custom instructions are implemented in reconfigurable logic, causing significant performance overhead. On the other hand, the host processor is under-utilized as it only performs resource allocation and reconfiguration. Extended Instruction Set RISC (XiRisc) BIB006 follows this line of development to couple a VLIW datapath with pipelined run-time reconfigurable hardware. XiRisc has a five-stage pipeline with two symmetrical execution flows called Data Channels. The reconfigurable datapath supports up to four source operands and two destination operands for each custom instruction. Moreover, the reconfigurable hardware can hold internal states for several computations so as to reduce the register pressure. However, configuration caching is missing in XiRisc, leading to high reconfiguration overhead. Moreover, there is a lack of compiler support for designers to automatically generate custom instructions. The Molen BIB007 polymorphic processor incorporates an arbitrary number of reconfigurable functional units. Molen resolves the issue of opcode space explosion for custom functions as well as the data bandwidth limitation of the reconfigurable hardware. Moreover, the Molen architecture allows two or more independent functions to be executed in parallel in the reconfigurable logic. 
To achieve these features, Molen requires a new programming paradigm that enables general-purpose instructions and hardware descriptions of custom instructions to coexist in a program. A one-time instruction set extension of eight instructions is added to support the functionality of the reconfigurable hardware. The Molen compiler automatically generates optimized binary code for C applications with pragma annotations for custom instructions. The compiler can also generate appropriate custom instructions for each implementation of the reconfigurable logic. The reconfiguration cost is hidden by scheduling the instructions appropriately such that the configuration corresponding to a custom instruction can be prefetched before that custom instruction is scheduled to execute.
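The demand-driven configuration loading described above for Stretch and DISC can be sketched in software. The sketch below makes several simplifying assumptions that are not the actual hardware policies: a fixed number of configuration slots, a flat reconfiguration penalty, and least-recently-used eviction.

```python
from collections import OrderedDict

# Toy model of a demand-driven configuration manager: before a custom
# instruction issues, its configuration must be resident in the fabric.
# A miss pays a (hypothetical) reconfiguration penalty and evicts the
# least-recently-used configuration.
class ConfigManager:
    def __init__(self, slots, reconfig_cost):
        self.slots = slots
        self.reconfig_cost = reconfig_cost
        self.resident = OrderedDict()   # config id -> None, in LRU order
        self.stall_cycles = 0

    def issue(self, config):
        if config in self.resident:
            self.resident.move_to_end(config)      # hit: refresh LRU order
        else:
            if len(self.resident) >= self.slots:
                self.resident.popitem(last=False)  # evict LRU configuration
            self.resident[config] = None
            self.stall_cycles += self.reconfig_cost

mgr = ConfigManager(slots=2, reconfig_cost=1000)
for cfg in ["A", "B", "A", "C", "B"]:   # "B" is evicted when "C" loads
    mgr.issue(cfg)
# misses on A, B, C, and B again -> 4 * 1000 stall cycles
```

The model makes the cost argument of the surrounding text concrete: clustering mutually exclusive custom instructions into shared configurations, or prefetching as Molen does, is worthwhile exactly because each miss costs thousands of cycles.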
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Compiler Support <s> We present an efficient framework for dynamic reconfiguration of application-specific instruction-set customization. A key component of this framework is an iterative algorithm for temporal and spatial partitioning of the loop kernels. Our algorithm maximizes performance gain of an application while taking into consideration the dynamic reconfiguration cost. It selects the appropriate custom instruction-sets for the loops and maps them into appropriate configurations. We model the temporal partitioning problem as a k-way graph partitioning problem. A dynamic programming based solution is used for the spatial partitioning. Comprehensive experimental results indicate that our iterative algorithm is highly scalable while producing optimal or near-optimal (99% of the optimal) performance gain. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Compiler Support <s> This paper explores runtime reconfiguration of custom instructions in the context of multi-tasking real-time embedded systems. We propose a pseudo-polynomial time algorithm that minimizes processor utilization through customization and runtime reconfiguration, while satisfying all the timing constraints. Our experimental infrastructure consists of Stretch customizable processor supporting runtime reconfiguration as the hardware platform and realistic embedded benchmarks as applications. We observe that runtime reconfiguration of custom instructions can help to reduce the processor utilization by up to 64%. The experimental results also demonstrate that our algorithm is highly scalable and achieves optimal or near optimal (3% difference) processor utilization. <s> BIB002 | Most of the runtime adaptive extensible processors lack appropriate compiler support to automate the design flow. 
However, given the tight time-to-market constraints of embedded systems, compiler support is instrumental in developing greater acceptability of these architectures. Currently, the burden is entirely on the programmer to select appropriate custom instructions and cluster them into one or more configurations. Choosing an appropriate set of custom instructions for an application is itself a difficult problem. Significant research effort has been invested in developing automated selection techniques for custom instructions . Runtime reconfiguration has the additional complication of both temporal and spatial partitioning of the set of custom instructions in the reconfigurable fabric. We have recently developed an efficient framework BIB001 that starts with an application specified in ANSI-C, automatically selects appropriate custom instructions, and clusters them into one or more configurations (see Figure 3) . We first extract a set of compute-intensive candidate loop kernels from the application through profiling. For each candidate loop, one or more Custom Instruction Set (CIS) versions are generated, differing in performance gain and area tradeoffs.
Fig. 4 . A set of periodic task graphs and the corresponding schedule BIB002
These CIS versions are then fed to the partitioning algorithm. We model the temporal partitioning of the custom instructions into different configurations as a k-way graph partitioning problem. A dynamic programming based pseudo-polynomial time algorithm determines the spatial partitioning of the custom instructions within a configuration. The selected CIS versions to be implemented in hardware pass through a datapath synthesis tool. It generates the bitstream corresponding to each configuration (based on the outcome of the temporal partitioning). These bitstreams are used to configure the fabric at runtime. The remaining loops are implemented in software on the core processor. Finally, the source code is modified to exploit the new custom instructions. 
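A much-simplified sketch of the spatial-partitioning idea — choosing at most one CIS version per loop under an area budget, which is a multiple-choice knapsack — might look like the following. The areas and gains are invented for illustration, and the actual algorithm in BIB001 additionally handles temporal partitioning across configurations.

```python
# Spatial partitioning as a multiple-choice knapsack: for each loop
# kernel, pick at most one CIS version (area, gain) so that the total
# area fits the configuration's budget and total gain is maximized.
def spatial_partition(loops, area_budget):
    """loops: list of lists of (area, gain) CIS versions per loop."""
    best = [0.0] * (area_budget + 1)        # best[a] = max gain in area a
    for versions in loops:
        nxt = best[:]                       # "no version chosen" carries over
        for area, gain in versions:
            for a in range(area, area_budget + 1):
                nxt[a] = max(nxt[a], best[a - area] + gain)
        best = nxt                          # at most one version per loop
    return best[area_budget]

loops = [
    [(3, 40.0), (5, 55.0)],   # two hypothetical versions of loop 0
    [(2, 25.0), (4, 38.0)],   # two hypothetical versions of loop 1
]
# With budget 7: version (5, 55) for loop 0 plus (2, 25) for loop 1 -> 80
```

The pseudo-polynomial character of the real algorithm shows up here too: the running time grows with the area budget measured in discrete area units.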
We also extend our work to include runtime reconfiguration of custom instructions for multiple tasks along with timing constraints BIB002 . An application is modeled as a set of periodic task graphs, each associated with a period and a deadline. Multiple CIS versions are generated for each constituent task of a task graph. Each task has many instances in the static non-preemptive schedule over the hyper-period (the least common multiple of the task graph periods), as shown in Figure 4 . The objective is to minimize processor utilization by exploiting runtime reconfiguration of the custom instructions while satisfying the deadline constraints. To achieve this goal, temporal partitioning divides the schedule into a number of configurations, where an area constraint is imposed on each configuration. For example, Figure 4 illustrates an initial fragment of the schedule and its partitioning into three configurations. Note that each configuration contains a disjoint subsequence of task instances from the original schedule. Temporal partitioning allows a larger virtual area at the cost of reconfiguration overhead. The area within a configuration is spatially partitioned among the task instances assigned to it by choosing an appropriate CIS version for each task instance. A dynamic programming based algorithm is enhanced with various constraints to efficiently solve the problem.
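The basic quantities of this multi-task formulation — the hyper-period and the processor utilization being minimized — can be computed directly. The task execution times and periods below are hypothetical.

```python
from math import gcd
from functools import reduce

# Hypothetical periodic task set: (execution_time_with_chosen_CIS, period).
# Choosing a faster CIS version lowers C_i and hence the utilization term
# C_i / T_i, which is what the optimization trades against area.
tasks = [
    (2.0, 10),
    (3.0, 20),
    (4.0, 40),
]

# Hyper-period: least common multiple of the task periods; the static
# non-preemptive schedule repeats with this length.
hyper_period = reduce(lambda a, b: a * b // gcd(a, b), (t for _, t in tasks))

# Processor utilization: sum of C_i / T_i over all tasks.
utilization = sum(c / t for c, t in tasks)

# hyper_period == 40; each task i has hyper_period / T_i instances in it.
# utilization == 0.2 + 0.15 + 0.1 == 0.45
```

Each task instance inside the hyper-period can then be assigned a CIS version subject to the per-configuration area constraint, which is where the dynamic programming described above comes in.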
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> Application-specific instruction set extensions are an effective way of improving the performance of processors. Critical computation subgraphs can be accelerated by collapsing them into new instructions that are executed on specialized function units. Collapsing the subgraphs simultaneously reduces the length of computation as well as the number of intermediate results stored in the register file. The main problem with this approach is that a new processor must be generated for each application domain. While new instructions can be designed automatically, there is a substantial amount of engineering cost incurred to verify and to implement the final custom processor. In this work, we propose a strategy for transparent customization of the core computation capabilities of the processor without changing its instruction set. A configurable array of function units is added to the baseline processor that enables the acceleration of a wide range of data flow subgraphs. To exploit the array, the microarchitecture performs subgraph identification at run-time, replacing them with new microcode instructions to configure and utilize the array. We compare the effectiveness of replacing subgraphs in the fill unit of a trace cache versus using a translation table during decode, and evaluate the tradeoffs between static and dynamic identification of subgraphs for instruction set customization. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> Instruction set customization is an effective way to improve processor performance. Critical portions of application data-flow graphs are collapsed for accelerated execution on specialized hardware. Collapsing dataflow subgraphs compresses the latency along critical paths and reduces the number of intermediate results stored in the register file. 
While custom instructions can be effective, the time and cost of designing a new processor for each application is immense. To overcome this roadblock, this paper proposes a flexible architectural framework to transparently integrate custom instructions into a general-purpose processor. Hardware accelerators are added to the processor to execute the collapsed subgraphs. A simple microarchitectural interface is provided to support a plug-and-play model for integrating a wide range of accelerators into a pre-designed and verified processor core. The accelerators are exploited using an approach of static identification and dynamic realization. The compiler is responsible for identifying profitable subgraphs, while the hardware handles discovery, mapping, and execution of compatible subgraphs. This paper presents the design of a plug-and-play transparent accelerator system and evaluates the cost/performance implications of the design. <s> BIB002 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> We describe a new processing architecture, known as a warp processor, that utilizes a field-programmable gate array (FPGA) to improve the speed and energy consumption of a software binary executing on a microprocessor. Unlike previous approaches that also improve software using an FPGA but do so using a special compiler, a warp processor achieves these improvements completely transparently and operates from a standard binary. A warp processor dynamically detects the binary's critical regions, reimplements those regions as a custom hardware circuit in the FPGA, and replaces the software region by a call to the new hardware implementation of that region. While not all benchmarks can be improved using warp processing, many can, and the improvements are dramatically better than those achievable by more traditional architecture improvements. 
The hardest part of warp processing is that of dynamically reimplementing code regions on an FPGA, requiring partitioning, decompilation, synthesis, placement, and routing tools, all having to execute with minimal computation time and data memory so as to coexist on chip with the main processor. We describe the results of developing our warp processor. We developed a custom FPGA fabric specifically designed to enable lean place and route tools, and we developed extremely fast and efficient versions of partitioning, decompilation, synthesis, technology mapping, placement, and routing. Warp processors achieve overall application speedups of 6.3X with energy savings of 66% across a set of embedded benchmark applications. We further show that our tools utilize acceptably small amounts of computation and memory which are far less than those of traditional tools. Our work illustrates the feasibility and potential of warp processing, and we can foresee the possibility of warp processing becoming a feature in a variety of computing domains, including desktop, server, and embedded applications. <s> BIB003 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> Adaptation in embedded processing is key in order to address efficiency. The concept of extensible embedded processors works well if a few a-priori known hot spots exist. However, they are far less efficient if many, possibly at-design-time-unknown, hot spots need to be dealt with. Our RISPP approach advances the extensible processor concept by providing flexibility through runtime adaptation by what we call "instruction rotation". It allows sharing resources in a highly flexible scheme of compatible components (called atoms and molecules). As a result, we achieve high speed-ups at moderate additional hardware.
We present the main components of our platform and discuss them by means of an H.264 video codec. <s> BIB004 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> We are presenting a new concept of an application-specific processor that is capable of transmuting its instruction set according to non-predictive application behavior during run-time. In those scenarios, current (extensible) embedded processors are less efficient since they are not run-time adaptive. We have identified the instruction set selection to be a critical step to perform at run time and hence we focus this paper on that crucial part. Our paradigm conducts as many steps as possible at compile/design time and as little as necessary at run time with the constraint to provide sufficient flexibility to react to non-predictive application behavior efficiently. We provide an in-depth analysis of our scheme and achieve a speed-up of up to 7.19x (average: 3.63x) compared to state-of-the-art adaptive approaches (like [19]). As an application, we have employed a whole H.264 video encoder, though our scheme is in principle applicable to many other embedded applications. Our results are evaluated by an implementation of the instruction set selection for our transmutable processor on an FPGA platform. <s> BIB005 | We now proceed to describe extensible processors that are reconfigured transparently by the runtime system. Configurable Compute Accelerators (CCA): Transparent instruction-set customization supports a plug-and-play model for integrating a wide range of accelerators into a pre-designed and verified processor core. Moreover, instruction-set customization occurs at runtime. An architectural framework for transparent instruction-set customization has been proposed in BIB002 .
The framework comprises static identification of subgraphs for execution on the CCA BIB001 and runtime selection of custom instructions to be synthesized to the CCA, as shown in Figure 5 . First, the program is analyzed to identify the most frequent computation subgraphs (custom instructions) to be mapped onto the CCA. Figure 5 (a) shows that two subgraphs have been selected. They are considered as normal functions and will be replaced by function calls. At runtime, the first time a selected subgraph is encountered, it is executed in the core pipeline while a hardware engine determines the CCA configuration concurrently. From the second execution onwards, the subgraph is implemented in the CCA as shown in Figure 5(b) . Static subgraph extraction and replacement are achieved by adding a few steps into the conventional code generation process, which comprises prepass scheduling, register allocation and postpass scheduling of spill code, as shown in Figure 6 . These steps are shaded in gray in the figure. First, given a dataflow graph, subgraph identification selects a set of potential subgraphs, which will later be implemented on the CCA. Subgraph identification is a well-studied problem; interested readers can refer to for a detailed exposition of the solutions. Note that subgraph identification is performed before register allocation to avoid false dependencies within the data flow graph. After subgraph identification, selected subgraphs are collapsed into a single instruction. However, when collapsing subgraphs, code motion ensures correctness when the subgraph crosses branch boundaries. Before getting into register allocation, the collapsed instruction is expanded so that the register allocator can assign the registers to internal values. The advantage of this approach is that even a processor without a CCA can execute the subgraphs as well (because they are treated as normal functions).
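As an illustration of the frequency-driven identification step described above, the following is a minimal Python sketch. It is not the actual BIB001 / BIB002 tooling, which operates on compiler IR and enforces convexity and port constraints; here a toy dataflow graph is modeled as a dictionary, and producer-consumer opcode pairs are counted to rank candidates for collapsing into a custom instruction.

```python
# Hypothetical toy model: a dataflow graph as {node_id: (opcode, [predecessor_ids])}.
from collections import Counter

def identify_candidate_pairs(dfg, top_k=2):
    """Count producer->consumer opcode pairs; the most frequent pairs are
    candidates for collapsing into a single custom instruction."""
    freq = Counter()
    for node, (op, preds) in dfg.items():
        for p in preds:
            freq[(dfg[p][0], op)] += 1
    return [pattern for pattern, _ in freq.most_common(top_k)]

# (add -> shl) occurs twice below, so it tops the candidate list.
dfg = {
    0: ("ld",  []),
    1: ("add", [0]),
    2: ("shl", [1]),
    3: ("mul", [0]),
    4: ("add", [3]),
    5: ("shl", [4]),
}
print(identify_candidate_pairs(dfg))  # [('add', 'shl'), ...]
```

In the real flow, the selected subgraphs would then be collapsed, expanded for register allocation, and compacted again, as the section describes.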
More importantly, subgraph expansion ensures that register allocation remains relatively unchanged. After register allocation, each subgraph is compacted to an atomic node and passed on as input to postpass scheduling. When postpass scheduling completes, each subgraph is expanded once again and a function is created for each subgraph along with a function call. WARP: At the other end of the spectrum, we have WARP BIB003 , which has been developed with completely transparent instruction-set customization in mind. A WARP processor consists of a main processor with instruction and data caches, an on-chip profiler, a WARP-oriented FPGA and an on-chip computer-aided design (CAD) module. The execution of an application starts only on the main processor. During the execution, the profiler determines the critical kernels of the application. Then, the CAD module invokes the Riverside On-Chip CAD (ROCCAD) tool chain. The ROCCAD tool chain starts with decompilation of the application binary code of software loops into a high-level representation that is more suitable for synthesis. Next, the partitioning algorithm determines the most suitable loops to be implemented in the FPGA. For the selected kernels, ROCCAD uses behavioral and Register Transfer Level (RTL) synthesis to generate appropriate circuit descriptions. Then, ROCCAD configures the FPGA by using Just-In-Time (JIT) FPGA compilation tools. The JIT compiler performs logic synthesis to optimize the hardware circuit, followed by technology mapping to map the hardware circuit onto the reconfigurable logic fabric. Placement and routing are then performed to complete the JIT compilation. Finally, ROCCAD updates the application binary code to utilize the custom accelerators inside the FPGA. RISPP (Rotating Instruction Set Processing Platform) BIB004 is a recent architecture that offers a unique approach towards runtime customization. RISPP introduces the notion of atoms and molecules for custom instructions.
An atom is the basic datapath, while a combination of atoms forms a custom instruction molecule. Atoms can be reused across different custom instruction molecules. Compared to contemporary reconfigurable architectures, RISPP reduces the overhead of partial reconfiguration substantially through an innovative gradual transition of the custom instruction implementations from software into hardware. At compile time, only the potential custom instructions (molecules) are identified, but these molecules are not bound to any datapath in hardware. Instead, a number of possible implementation choices are available, including a purely software implementation. At runtime, the implementation of a molecule can gradually "upgrade" to hardware as and when the atoms it needs become available. If no atom is available for a custom instruction, it will be executed in the core pipeline using the software implementation. RISPP requires a fast design space exploration technique at runtime to combine appropriate elementary data paths and evaluate tradeoffs between performance and hardware area of the custom instructions BIB005 . A greedy heuristic is proposed to select the appropriate implementation for each custom instruction. |
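A greedy selection of this kind can be illustrated with a small Python sketch. The data model is hypothetical (per-molecule lists of (area, speedup) implementation options, with area 0 denoting the software fallback) and only mimics the performance/area tradeoff described in BIB005, not the actual RISPP algorithm:

```python
def select_molecules(candidates, area_budget):
    """Greedy sketch: candidates maps a molecule name to a list of
    (area, speedup) implementation options; area 0 is the software fallback.
    Repeatedly upgrade the molecule with the best speedup gain per extra
    area unit until no upgrade fits in the remaining budget."""
    chosen = {name: min(opts) for name, opts in candidates.items()}  # cheapest first
    used = sum(area for area, _ in chosen.values())
    while True:
        best = None
        for name, opts in candidates.items():
            cur_area, cur_speed = chosen[name]
            for area, speed in opts:
                extra = area - cur_area
                if speed > cur_speed and used + extra <= area_budget:
                    gain = (speed - cur_speed) / extra if extra > 0 else float("inf")
                    if best is None or gain > best[0]:
                        best = (gain, name, (area, speed))
        if best is None:
            return chosen
        _, name, impl = best
        used += impl[0] - chosen[name][0]
        chosen[name] = impl

# With a budget of 5 area units, "SAD" upgrades to its largest datapath
# while "DCT" stays in software.
chosen = select_molecules(
    {"SAD": [(0, 1.0), (2, 2.5), (4, 4.0)], "DCT": [(0, 1.0), (3, 3.0)]},
    area_budget=5,
)
print(chosen)
```

Because selection runs at runtime in RISPP, such a heuristic must be cheap; an exhaustive search over all implementation combinations would be far too slow on-chip.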
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Background <s> Before building the network or its components, first understand the home and the behavior of its human inhabitants. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Background <s> Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization. <s> BIB002 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Background <s> The idea of programmable networks has recently re-gained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a "radical new idea in networking", promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. 
Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm. <s> BIB003 | This section provides the necessary background to understand the general problem of home networking and the very nature of SDN that makes this new paradigm a very attractive solution for that problem. This is a brief treatment, and the interested reader is referred to BIB001 BIB002 (Goransson, Black, & Culver, 2016) BIB003 .
SDN in the home: A survey of home network solutions using Software Defined Networking <s> How can SDN help? <s> Networks and networked applications depend on several pieces of configuration information to operate correctly. Such information resides in routers, firewalls, and end hosts, among other places. Incorrect information, or misconfiguration, could interfere with the running of networked applications. This problem is particularly acute in consumer settings such as home networks, where there is a huge diversity of network elements and applications coupled with the absence of network administrators. To address this problem, we present NetPrints, a system that leverages shared knowledge in a population of users to diagnose and resolve misconfigurations. Basically, if a user has a working network configuration for an application or has determined how to rectify a problem, we would like this knowledge to be made available automatically to another user who is experiencing the same problem. NetPrints accomplishes this task by applying decision tree based learning on working and nonworking configuration snapshots and by using network traffic based problem signatures to index into configuration changes made by users to fix problems. We describe the design and implementation of NetPrints, and demonstrate its effectiveness in diagnosing a variety of home networking problems reported by users. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> How can SDN help? <s> We argue that heterogeneity is hindering technological innovation in the home: homes differ in terms of their devices and how those devices are connected and used. To abstract these differences, we propose to develop a home-wide operating system. A HomeOS can simplify application development and let users easily add functionality by installing new devices or applications. The development of such an OS is an inherently inter-disciplinary exercise.
Not only must the abstractions meet the usual goals of being efficient and easy to program, but the underlying primitives must also match how users want to manage and secure their home. We describe the preliminary design of HomeOS and our experience with developing applications for it. <s> BIB002 | As mentioned earlier, SDN separates the control plane from the data plane, providing the required abstraction of low-level layers into a logical view that can be understood and programmed by network developers. Providing access to the configuration of network hardware through software programming is essential to allow users to manage their networks via high-level applications that are developed for them by third-party developers. Alternatively, users can outsource network configuration and management to service providers. Viewing the target management functions of each possible application as a separate control slice of the home network, trusted third parties can programmatically control different slices to better manage different functions, such as WiFi configuration, improving routing and implementing access control (e.g. configure WiFi channel and power to minimize interference and/or set parental controls). Several other researchers have previously identified the need for applications and services within the home to cope with increasing complexity and heterogeneity. A few works have suggested solutions that are independent of the SDN concept, such as creating a new and separate operating system for the home in which users deal with applications and high-level policies to deal with integration and management of their network BIB002 or using an OSGI (Open Service Gateway Initiative)-based framework to install applications on a residential gateway (Valtchev & Frankov, 2002) . However, most of the recent works rely on SDN technology, and particularly OpenFlow-based solutions, to address the problem of network management. This is the main focus of this article.
Aside from the core management functions, and as a subset of those functions, significant work has been done to automate detection and diagnosis of faults in home networks, and to define the appropriate interaction and interfaces between the users and tools to manage and configure the home network BIB001 .
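As a toy illustration of slice-style control, the sketch below shows a time-of-day access-control policy of the kind a trusted third party could run for a household (e.g. parental controls). The policy model and device names are hypothetical; a real controller would enforce a "deny" decision by installing drop flow rules on the home gateway, for example via OpenFlow.

```python
from datetime import time

# Hypothetical per-device policy: allowed time windows. A real controller
# would translate a "deny" decision into drop flow rules on the gateway.
POLICY = {
    "kids-tablet": [(time(7, 0), time(20, 0))],  # allowed 07:00-20:00 only
}

def allow_flow(device, now):
    """Return True if a new flow from `device` is permitted at time `now`.
    Devices without a policy entry are allowed by default."""
    windows = POLICY.get(device)
    if windows is None:
        return True
    return any(start <= now <= end for start, end in windows)

print(allow_flow("kids-tablet", time(21, 30)))  # False: outside the window
```

The point of the SDN abstraction is precisely that a policy this simple can be written by a third-party application without touching router firmware.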
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> Wireless home networks are increasingly deployed in people's homes worldwide. Unfortunately, home networks have evolved using protocols designed for backbone and enterprise networks, which are quite different in scale and character to home networks. We believe this evolution is at the heart of widely observed problems experienced by users managing and using their home networks. In this paper we investigate redesign of the home router to exploit the distinct social and physical characteristics of the home. We extract two key requirements from a range of ethnographic studies: users desire greater understanding of and control over their networks' behaviour. We present our design for a home router that focuses on monitoring and controlling network traffic flows, and so provides a platform for building user interfaces that satisfy these two user requirements. We describe and evaluate our prototype which uses NOX and OpenFlow to provide per-flow control, and a custom DHCP implementation to enable traffic isolation and accurate measurement from the IP layer. It also provides finer-grained per-flow control through interception of wireless association and DNS resolution. We evaluate the impact of these modifications, and thus the applicability of flow-based network management in the home. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> Managing a home network is challenging because the underlying infrastructure is so complex. Existing interfaces either hide or expose the network's underlying complexity, but in both cases, the information that is shown does not necessarily allow a user to complete desired tasks. 
Recent advances in software defined networking, however, permit a redesign of the underlying network and protocols, potentially allowing designers to move complexity further from the user and, in some cases, eliminating it entirely. In this paper, we explore whether the choices of what to make visible to the user in the design of today's home network infrastructure, performance, and policies make sense. We also examine whether new capabilities for refactoring the network infrastructure - changing the underlying system without compromising existing functionality - should cause us to revisit some of these choices. Our work represents a case study of how co-designing an interface and its underlying infrastructure could ultimately improve interfaces for that infrastructure. <s> BIB002 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> As the number and variety of connected devices increase, most end-users find themselves unable to manage their home networks properly, not having enough time and/or knowledge to do so. In this paper, we propose a new approach to remove this burden from them, by fully virtualizing the home network and delegating its management and operations to the ISP, while keeping end-users in control. We furthermore define the architecture of our software-based Majord'Home solution. Acting as a majordomo of the home, it handles a representation of the home objects and network constraints, automates the connectivity between heterogeneous elements and thus meets the needs of end-users. We finally describe the first version of our on-going implementation as a proof of concept. <s> BIB003 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> Within the Internet service provider landscape, the residential gateway (RGW) plays a key role in service provision.
The RGW should be an enabler for the provision of new and better services for residential users, but it is often instead an obstacle to innovation. This paper discusses how to improve the provision of innovative services and to increase the usability of residential networks by upgrading the residential gateway in alignment with the current paradigms of software-defined networking (SDN) and network function virtualization. In this approach, SDN contributes by providing fine-grained control of the traffic, while network function virtualization contributes by outsourcing traditional and specialized network functions running inside the RGW, such as routing or network address translation, to the Internet service provider premises. Based on this approach, a management framework has been designed by considering two aspects: the involvement of the residential user in the management tasks through the provision of network management applications and the need to decouple network applications from the underlying SDN controller technology to encourage the development of innovative network applications. In addition, a virtualized management and networking domain has been defined to complement the approach and leverage cloud technologies. The advantages and challenges of this approach are analyzed based on a proof of concept development. <s> BIB004 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> The residential gateway is a key device in the provision of Internet access to a household or to a small office. Managing a residential network nowadays means configuring the functionality provided by the residential gateway, which is often a task that requires a certain level of technical expertise that most residential users lack.
Internet Service Providers sometimes address this usability problem by managing the residential gateway from a central location and offering a way of configuring simple functions such as the password of the Wi-Fi network through a web-based application. In this paper a new user-centric management architecture is proposed, to increase the active engagement of residential users in the management tasks of their own networks, improving the usability of the network and facilitating the provision of new services. In this approach, residential network management applications are split in two components: a front-end handling user interaction and running on the user's preferred device (PC, laptop, smartphone); and a back-end built on top of both the Software Defined Networking (SDN) and the Network Functions Virtualization (NFV) paradigms. The solution takes advantage of the fine-grained control of network traffic and the convenience to communicate network events provided by SDN and the outsourcing of traditional network functions like routing or NAT from the residential gateway to a cloud-based infrastructure managed by the Internet Service Provider. In this paper the advantages and challenges of this approach are discussed, based on the results obtained from a proof of concept system that has been developed to evaluate the feasibility and performance of the proposal. The residential network usability is improved by implementing a new user-centric management model. Residential network management applications (RENEMA apps) involve users in managing their own networks. Residential network services (RENESEs) expedite and simplify the development of RENEMA apps. The virtualized management and networking domain (vMANDO) concept hosts the SDN and NFV components. The architecture allows avoiding the manufacturer lock-in effect.
<s> BIB005 | Apart from the works that focus on a specific aspect of managing home networks, such as bandwidth allocation or security, several articles introduce their own approach to exploiting SDN in home networking from a general perspective. Two of the first works in this category were developed as part of the Homework project (The University of Nottingham, 2012), and aimed at redesigning existing home-network infrastructure (i.e. routers) based on the concepts of SDN to provide the user with better understanding and control as well as novel interfaces (Mortier et al., 2011) BIB001 . The authors in BIB002 take the home network as a case study to discuss how SDN can be used to refactor current networks and provide users with the correct level of network visibility and actionable information. The concept of virtualisation is suggested in the remaining works of this generic category BIB003 (Dillon & Winters, 2014) BIB004 BIB005 . These four works differ in their proposed architectures, but agree on virtualising the home network and delegating the management and control of the network to someone in the cloud, most probably the Internet Service Provider (ISP). This aims to remove the management burden from the user while preserving the usability of the network. |
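A minimal sketch of the kind of delegation these virtualised approaches advocate: a cloud-hosted controller translating high-level user preferences into low-level configuration for the gateway. The weighting scheme and device names below are hypothetical, used only to illustrate the high-level-to-low-level mapping.

```python
def bandwidth_shares(preferences, capacity_mbps):
    """Translate per-device weights (higher = more important) into bandwidth
    shares in Mbit/s, proportionally splitting the access-link capacity."""
    total = sum(preferences.values())
    return {dev: round(capacity_mbps * w / total, 2)
            for dev, w in preferences.items()}

# A 90 Mbit/s link split 5:3:1 across three device classes.
prefs = {"work-laptop": 5, "smart-tv": 3, "iot-sensors": 1}
print(bandwidth_shares(prefs, 90))
```

The user only ever states the weights; computing and installing the resulting rate limits is exactly the part that gets outsourced to the ISP or a third party.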
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> As Internet service providers increasingly implement and impose "usage caps", consumers need better ways to help them understand and control how devices in the home use up the available network resources or available capacity. Towards this goal, we will demonstrate a system that allows users to monitor and manage their usage caps. The system uses the BISMark firmware running on network gateways to collect usage statistics and report them to a logically centralized controller, which displays usage information. The controller allows users to specify policies about how different people, devices, and applications should consume the usage cap; it implements and enforces these policies via a secure OpenFlow control channel to each gateway device. The demonstration will show various use cases, such as limiting the usage of a particular application, visualizing usage statistics, and allowing users within a single household to "trade" caps with one another. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Despite the popularity of home networks, they face a number of systemic problems: (i) Broadband networks are expensive to deploy, and it is not clear how the cost can be shared by several service providers; (ii) Home networks are getting harder to manage as we connect more devices, use new applications, and rely on them for entertainment, communication and work; it is common for home networks to be poorly managed, insecure or just plain broken; and (iii) It is not clear how home networks will steadily improve, after they have been deployed, to provide steadily better service to home users. In this paper we propose slicing home networks as a way to overcome these problems.
As a mechanism, slicing allows multiple service providers to share a common infrastructure and supports many policies and business models for cost sharing. We propose four requirements for slicing home networks: bandwidth and traffic isolation between slices, independent control of each slice, and the ability to modify and improve the behavior of a slice. We explore how these requirements allow cost-sharing, outsourced management of home networks, and the ability to customize a slice to provide higher-quality service. Finally, we describe an initial prototype that we are deploying in homes. <s> BIB002 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Policy-makers, ISPs and content providers are locked in a debate about who can control the Internet traffic that flows into our homes. In this paper we argue that the user, not the ISP or the content provider, should decide how traffic is prioritized to and from the home. Home users know most about their preferences, and if they can express them well to the ISP, then both the ISP and user are better off. To test the idea we built a prototype that lets users express high-level preferences that are translated to low-level semantics and used to control the network. <s> BIB003 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Home networks are becoming increasingly complex, with many household devices (PCs, tablets, phones, media gateways, smart TVs) and diverse user applications (browsing, video streaming, peer-to-peer, VoIP, gaming) sharing the single broadband access link. In today's architecture the traffic streams compete for bandwidth on a best-effort basis, resulting in poor quality of experience for users.
In this paper, we leverage the emerging paradigm of software defined networking (SDN) to enable the ISP to expose some controls to the users to manage service quality for specific devices and applications in their household. Our contributions are to develop an architecture and interface for delegation of such control to the user, and to demonstrate its value via experiments in a laboratory test-bed using three representative applications: video, web-browsing, and large downloads. <s> BIB004 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> This paper considers SDN, and OpenFlow in particular, as technology to develop the next generation of more flexible, configurable and automated home networks. We identify the problems with the current state of the art in home networking, which include a lack of user engagement in home network maintenance and configuration, Internet bandwidth limitations, and a lack of ISP reconfiguration and troubleshooting tools. We propose HomeVisor, a novel remote home network management tool. In this paper, we evaluate HomeVisor's ability to outsource control to an entity outside the home network. This includes the overhead of multiple slices within the home, and the effect of controller latency on network performance. <s> BIB005 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software Defined Networking (SDN) offers the opportunity to drive down costs through increased operational efficiency (network utilization in particular), service creation velocity, and differentiated and personalized network services. In this way, the operator's CAPEX and OPEX costs are reduced, and costs for the end user are reduced in the same way.
In the context of the UNIFY project [1], one of the main objectives is to focus on enablers of such a unified production environment and to develop an automated, dynamic service creation platform, leveraging a fine-granular service chaining architecture. <s> BIB006 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Home networks are becoming increasingly rich in devices and applications, but continue to share the broadband link in a neutral way. We believe the time is ripe to personalize the home network experience, allowing a household to differentiate its users (e.g. father's laptop prioritized over kid's iPad) and services (e.g. video streaming prioritized over downloading). In this paper we argue that SDN provides a way to automate self-customization by households, while cloud-based delivery simplifies subscriber management. We develop an architecture comprising a cloud-based front-end portal and SDN-based back-end APIs, and show how these can be used by the subscriber to improve streaming-video (YouTube) quality and video conferencing (Skype) experience, and to permit device-specific parental controls (e.g. Facebook access). We prototype and validate our solution in a platform comprising the Floodlight controller and OVS switches. Lastly, we evaluate our solutions via experiments on realistic scenarios to quantify the benefits in terms of improved quality of experience and new features for the user. <s> BIB007 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In this paper we present an idea of a proprietary Software Defined residential Network (SDrN) and we show, as a use case, a multicast streaming service that can be hosted on such networks.
To verify the feasibility of the service in the context of quality of service, we offer providers of online streaming services (in some cases the ISPs themselves) APIs to control and validate the QoS of the users of the service. The QoS control APIs were tested on an SDN-based simulation environment. <s> BIB008 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software Defined Networking (SDN) has long been a research focus since its birth in the lab of Stanford University. Research on traditional home networks faces a series of challenges due to ever more complicated user demands. The application of SDN to the home network is an effective approach to coping with them. Research on SDN-based home networks is currently in a preliminary stage. Therefore, for a better user experience, it is essential to effectively manage and utilize the resources of the home network. General slicing strategies do not show much performance advantage within home networks due to increased user demands and applications. In this paper, we introduce an advanced SDN-based home network prototype and analyze its composition and application requirements. By implementing and comparing the properties of several slicing strategies, we arrive at an optimized slicing strategy for the specified home network circumstances and our preferences. <s> BIB009 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Existing home-networking protocols do not robustly incorporate universal connectivity among multiple homes, which leaves their use restricted to a single home. In addition, even in a single home network, new functional requirements ask for more diversified forms of networking control. This paper presents in-home consumer electronic devices that incorporate the emerging SDN (Software Defined Networking) paradigm.
The proposed devices enable on-demand provisioning for protocol-agnostic home networking and thus provide a high degree of flexibility for intra-home networking as well as wider connectivity for inter-home networking. The feasibility of the prototype devices is verified by realizing a multi-home visual-sharing scenario and by supporting diverse future scenarios. <s> BIB010 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In this paper, we propose to combine the emerging software defined networking (SDN) paradigm with the existing residential broadband infrastructure to enable home users to have dynamic control over their traffic flows. The SDN centralized control technology enables household devices to have virtualized services with quality of service (QoS) guarantees. SDN-enabled open application programming interfaces (APIs) allow Internet service providers (ISPs) to perform bandwidth slicing in home networks and implement time-dependent hybrid pricing. Given the requests from household devices for virtualized and non-virtualized services, we formulate a Stackelberg game to characterize the pricing strategy of the ISP as well as the bandwidth allocation strategy in home networks. In the Stackelberg game, the leader is the ISP and the followers are the home networks. We determine the optimal strategies which provide maximal payoff for the ISP. Numerical results show that our proposed SDN-enabled home network technology with the hybrid pricing scheme provides better performance than a usage-based pricing scheme tailored for best-effort home networks.
<s> BIB011 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> The increasing uptake of smart home appliances, such as lights, smoke-alarms, power switches, baby monitors, and weighing scales, raises privacy and security concerns at unprecedented scale, allowing legitimate and illegitimate entities to snoop and intrude into the family's activities. In this paper we first illustrate these threats using real devices currently available in the market. We then argue that as more such devices emerge, the attack vectors increase, and ensuring privacy/security of the house becomes more challenging. We therefore advocate that device-level protections be augmented with network-level security solutions that can monitor network activity to detect suspicious behavior. We further propose that software defined networking technology be used to dynamically block/quarantine devices, based on their network activity and on the context within the house such as time-of-day or occupancy-level. We believe our network-centric approach can augment device-centric security for the emerging smart-home. <s> BIB012 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Internet Service Providers (ISPs) have introduced "data caps", or quotas on the amount of data that a customer can download during a billing cycle. Under this model, Internet users who reach a data cap can be subject to degraded performance, extra fees, or even temporary interruption of Internet service. For this reason, users need better visibility into and control over their Internet usage to help them understand what uses up data and control how these quotas are reached. In this paper, we present the design and implementation of a tool, called uCap, to help home users manage Internet data.
We conducted a field trial of uCap in 21 home networks in three countries and performed an in-depth qualitative study of ten of these homes. We present the results of the evaluation and implications for the design of future Internet data management tools. <s> BIB013 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Today's residential Internet service is bundled and shared by a multiplicity of household devices and members, causing several performance problems. Customizing broadband sharing to the needs and usage patterns of each individual house has hitherto been difficult for ISPs and home router vendors. In this paper we design, implement, and evaluate a system that allows a third party to create new services by which subscribers can easily customize Internet sharing within their household. Our specific contributions are three-fold: (1) We develop an over-the-top architecture that enables residential Internet customization, and propose new APIs to facilitate service innovation. (2) We identify several use-cases where subscribers benefit from the customization, including: prioritizing quality-of-experience amongst family members; monitoring individual usage volumes in relation to the household quota; and filtering age-appropriate content for selected users. (3) We develop a fully-functional prototype of our system leveraging open-source SDN platforms, deploy it in selected households, and evaluate its usability and performance benefits to demonstrate feasibility and utility in the real world. <s> BIB014 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In recent years, there has been a rapid growth in the adoption and usage of WiFi-enabled networked devices at homes such as laptops, handheld devices and wireless entertainment devices.
In dense wireless deployments at homes, such as apartment buildings, neighboring home WLANs share the same unlicensed spectrum by deploying consumer-grade access points in their individual homes. In such environments, WiFi networks can suffer from intermittent performance issues such as wireless packet losses, interference from WiFi and non-WiFi sources due to the increasing diversity of devices that share the spectrum. In this paper, we propose a vendor-neutral cloud-based centralized framework called COAP to configure, co-ordinate and manage individual home APs using an open API implemented over the OpenFlow SDN framework. This paper describes the framework and motivates the potential benefits of the framework in home WLANs. <s> BIB015 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In dense wireless deployments at homes, such as apartment buildings, neighboring home WLANs share the same unlicensed spectrum by deploying consumer-grade access points in their individual homes. In such environments, WiFi networks can suffer from intermittent performance issues such as wireless packet losses, interference from WiFi and non-WiFi sources due to the rapid growth and increasing diversity of devices that share the spectrum. In this paper, we propose a vendor-neutral cloud-based centralized framework called COAP to configure, coordinate and manage individual home APs using an open API implemented by these commodity APs. The framework, implemented using OpenFlow extensions, allows the APs to share various types of information with a centralized controller — interference and traffic phenomena and various flow contexts, and in turn receive instructions — configuration parameters (e.g., channel) and transmission parameters (through coarse-grained schedules and throttling parameters).
This paper describes the framework and associated techniques, applications that motivate its potential benefits (such as up to 47% reduction in channel congestion), and our experiences from deploying it in actual home environments. <s> BIB016 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In recent years a lot of new consumer devices have been introduced to the home network. Modern home networks usually consist of multiple heterogeneous communication technologies such as Ethernet, Wi-Fi and power-line communications. Today, the user has to manually decide which transmission technology to use as there is no automated optimization across technologies. Load balancing algorithms can improve overall throughput while redundant links also provide the opportunity to switch flows in case of link failures. Current standards either lack real implementation in consumer devices or do not have the flexibility to support all necessary functionality towards creating a convergent hybrid home network. Therefore, we propose an alternative way by using Software-Defined Networking techniques to manage a heterogeneous home network. In this paper we specifically evaluate the ability of OpenFlow-enabled switches to perform link switching both under normal conditions and in case of link failures. Our results show that SDN-based management can be used to improve heterogeneous home networks by utilising redundant links for flow rerouting. However, they also show that improvements are still needed to reduce downtime during link failure or rerouting in case of TCP traffic. <s> BIB017 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Adaptive video streaming techniques were introduced to enable high quality video delivery over HTTP. These schemes propose to dynamically choose the appropriate video rate to match the operating conditions.
In home networks, wireless access is the predominant form of Internet access. Multiple clients/players with different link qualities compete over a limited wireless bandwidth to transfer their video streams. As a result, some users undergo unpredictable degradations of their Quality of Experience (QoE) while others benefit from these perturbations. In this paper we introduce a new technique to address this issue at the gateway without modifying either the client or the video server side. We design a framework, WNAVS (Wireless Network Assisted Video Streaming), that relies on the deployment of Software Defined Networking (SDN). WNAVS performs dynamic traffic shaping based on collected network traffic statistics and allocates bandwidth for the clients in real time. We evaluate WNAVS over several metrics: fairness, instability, average video quality as well as the video traffic utilization. Our results demonstrate an improvement for all these parameters. <s> BIB018 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software defined networking (SDN) provides a centralized control framework with real-time control of network components, residential customer routers in particular, that allows automated per-user bandwidth allocation. However, employing dynamic traffic shaping for efficient bandwidth utilization among residential users is a challenging task. In this context, understanding application usage requirements for each individual user and translating them into network policies requires expertise beyond most residential users. This paper proposes a user-centric traffic optimization scheme by profiling users based on their application trends recorded using generic NetFlow records, in order to provide a better view of per-user utilization.
We also propose an SDN traffic monitoring and management application for implementing Linux-based hierarchical token bucket (HTB) queues customized for individual user profiles in real time, according to user-defined priorities. The traffic management scheme scales well under both upstream and downstream network congestion by dynamically allocating dedicated bandwidth to users based on their profile priority, resulting in decreased packet loss and latency for a selected set of high-priority users. <s> BIB019 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In a home network, there are multiple users, each running different applications interacting with the network. To enhance the experience of each user, prioritization of various network applications is important. Previous solutions to this problem assigned priorities in a static manner. Even though there have been some efforts to assign priorities dynamically, these solutions only used the interactivity of the application to prioritize traffic. We present Contextual Router, which achieves better prioritization by detecting all the flows generated in a home network and assigning priorities in a dynamic manner using various features of flows collected from each user's machine. <s> BIB020 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> With the increasing number of IoT (Internet of Things) devices and the advance of smart home technology, we propose an innovative bandwidth allocation framework for IoT-enabled smart homes. The application scope of this research assumes a scenario in which an ISP (Internet Service Provider) should support thousands of IoT-enabled smart homes for a variety of services. Each smart home is equipped with tens of IoT devices with a wide spectrum of functional capabilities.
The proposed bandwidth allocation framework is based on the promising software defined networking (SDN) architecture and is responsible for optimizing bandwidth allocation on both internal home traffic and external Internet traffic. The overall system architecture is separated into SDN Smart Home Cloud and Massive Smart Homes, which are interconnected by the OpenFlow protocol. We modify the 3GPP LTE QoS Class Identifier (QCI) to adapt it to the services suitable for smart homes. The proposed bandwidth allocation algorithm considers fairness, delay, and service priority at the same time. With this framework, the ISP is able to optimize bandwidth allocation by aggregating thousands of classified services of smart homes and thus effectively enhance Quality of Service (QoS) and user experience (QoE). <s> BIB021 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> There has always been a perception gap between Internet Service Providers (ISPs) and their customers when considering the performance of network service. On one hand, ISPs invest to increase the downstream speed of the access network infrastructure. On the other hand, users cannot achieve the quality of experience (QoE) they expect. This paper addresses this problem by introducing a system, Conan, which enables content-aware flow scheduling to improve the QoE of users. Conan aims to satisfy users' requirements in the access network (LAN), which is often the actual performance bottleneck. By leveraging the technique of software defined networking (SDN), Conan is able to specify the expected network capacity for different applications. Automatic application identification is deployed at the home gateway to improve scalability, and flexible bandwidth allocation is realized in the LAN for specified applications. Using video streaming service optimization as an example, we demonstrate that our system can automatically allocate bandwidth for video flows.
<s> BIB022 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In recent years, the smart home field has caught wide attention and witnessed rapid development. Smart devices, continuously increasing in number, make user management and implementation more difficult while promoting the development of the smart home. How to design an efficient smart home management platform is one of the great challenges the current smart home field faces. This article draws on the core idea of SDN and proposes the software defined smart home platform, SDSH for short. The design features of virtualization, openness, and centralization can effectively integrate the heterogeneous network devices in the smart home platform, and flexibly adapt to the great differences between household scenarios and user demands. At the same time, this article presents the core technologies of SDSH, and discusses the application value of the four core technologies and the new challenges the current technology is facing in a smart home scenario. In the end, regarding the SDSH application scenarios, this article analyzes the household experience innovation brought by this kind of smart home management platform, and the opportunities and challenges the SDSH platform faces. <s> BIB023 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software-defined Home Networks (SDHN) are a key development trend of the smart home, proposed to realize multi-home visual sharing. With their improved openness and programmability, SDHN face greater network threats than traditional home networks. In particular, because of the diversity and heterogeneity of smart home products, multi-stage attacks are easier to perform in SDHN. To mitigate multi-stage attacks in SDHN, several significant problems need to be addressed. The first problem is security assessment along with attack events.
The second is the countermeasure selection problem, based on the security assessment result and the security policy. The third is the countermeasure deployment problem: deploying attack mitigation countermeasures according to the current network context so that countermeasure decisions take effect instantly. In this paper, a multi-stage attack mitigation mechanism is proposed for SDHN using Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Firstly, an evidence-driven security assessment method using SDN factors and NFV-based detection is designed to perform security assessment along with observed security events. Secondly, an attack mitigation countermeasure selection method is proposed. The evaluation shows that the proposed mechanism is effective for multi-stage attack mitigation in SDHN. <s> BIB024 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> While enterprise networks follow best practices and security measures, residential networks often lack these protections. Home networks have constrained resources and lack a dedicated IT staff that can secure and manage the network and systems. At the same time, homes must tackle the same challenges of securing heterogeneous devices when communicating with the Internet. In this work, we explore combining software-defined networking and proxies with commodity residential Internet routers. We evaluate a “whole home” proxy solution for the Skype video conferencing application to determine the viability of the approach in practice. We find that we are able to automatically detect when a device is about to use Skype and dynamically intercept all of the Skype communication and route it through a proxy while not disturbing unrelated network flows. Our approach works across multiple operating systems, form factors, and versions of Skype.
<s> BIB025 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Recent advances in wireless networking technologies are leading toward the proliferation of novel home network applications. However, the landscape of emerging scenarios is fragmented due to their varying technological requirements and the heterogeneity of current wireless technologies. We argue that the development of flexible software-defined wireless architectures, including such efforts as the wireless MAC processor, coupled with SDN concepts, will enable the support of both emerging and future home applications. In this article, we first identify problems with managing current home networks composed of separate network segments governed by different technologies. Second, we point out the flaws of current approaches to provide interoperability of these technologies. Third, we present a vision of a software-defined multi-technology network architecture (SDN@home) and demonstrate how a future home gateway (SDN controller) can directly and dynamically program network devices. Finally, we define a new type of flexibility enabled by SDN@home. Wireless protocols and features are no longer tied to specific technologies but can be used by general-purpose wireless SDN devices. This permits satisfaction of the requirements demanded by home owners and service providers under heterogeneous network conditions. <s> BIB026 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> We propose to leverage the virtualization possibilities of Network Functions Virtualization (NFV) together with the programmability of Software Defined Networking (SDN) in order to offer a portfolio of IoT-related functions to the residential users. 
The objectives are to reach economies of scale by offering reasonably inexpensive customer premises equipment supporting most IoT physical communication options, whereas all self-discovery and the rest of vendor-specific functionality is externalized and implemented by the ISP (Internet Service Provider) or third parties. <s> BIB027 | In this category, we can find a few different themes. The general purpose is still to control and manage the home network using ideas and tools from the SDN paradigm, but the emphasis is put on a particular aspect of home networking in each work. We recognised several specialised themes and summarised them into 10 different subcategories, some of which have only one paper each, but the theme is distinct enough to be highlighted and pointed out for further study and research. The most popular subject in this category is QoS and quality of experience (QoE) when using home network applications BIB018 BIB006 BIB019 BIB020 BIB011 BIB007 BIB021 BIB004 BIB008 BIB022 . The target application in these works is generally multimedia and video streaming, and the aim is to optimize bandwidth allocation for different network applications to improve the user experience. This optimisation is mostly based on user preferences or profiles, but can also be derived from dynamic traffic shaping based on collected traffic statistics BIB018 , automatic identification of applications BIB022 or a proposed bandwidth allocation algorithm BIB021 . Most of the works enable the ISP to control the service quality from the cloud, though a few works depend on a local solution using an in-home SDN controller BIB018 , BIB020 , BIB019 . One work also proposes a novel pricing scheme for ISPs, who can implement time-dependent hybrid pricing through SDN APIs BIB011 . Another distinct theme in this category is to address the issues related to IoT devices, in the context of the smart home BIB027 Nobakht, Sivaraman, & Boreli, 2016; BIB012 BIB023 .
This perspective is unique and new to home networking, but its relevance is increasing in modern homes with the rise of the IoT paradigm. All network-enabled devices in the home eventually form an internet of things, and their management can consequently be considered a networking problem; hence, SDN comes to mind. Within the papers on IoT home devices, some works focus on the problem of managing IoT devices, such as finding a device fault easily , integrating the heterogeneous network devices in smart home environments BIB023 , and offering a portfolio of IoT-related functions to home users BIB027 . Another focus point is to propose solutions for smart home and IoT device security (Nobakht et al., 2016 BIB012 . Apart from IoT, home network security is also a common target application. One of the earliest works in the complete set of the surveyed papers proposed that users outsource the management tasks related to security to a third party controller who has the required expertise and capacity to monitor coordinated activities over the Internet (Feamster, 2010) . Another work proposes a multi-stage attack mitigation mechanism for home networks using SDN BIB024 . A home-level security proxy solution for video conferencing applications (as a case study) is proposed in BIB025 . Finally, a community-based crowdsourced home cyber-security system is proposed in . Because Internet usage caps are an increasing concern for home users, several works specifically address the problem of managing Internet use through the SDN architecture BIB013 BIB014 BIB001 BIB003 . An early work BIB001 demonstrates a system to collect usage statistics and report them to a central controller, which displays usage information. The controller allows users to specify policies and enforces them, where policies dictate how different people, devices, and applications should consume the usage cap.
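The policy-enforcement pattern just described (a controller accumulates per-device usage statistics and acts when a device exceeds its share of the household cap) can be illustrated with a short sketch. Everything here is hypothetical: the class, the device names, and the throttle/forward actions are illustrations of the general idea, not taken from any of the cited systems.

```python
# Hypothetical sketch of an SDN usage-cap controller: it accumulates
# per-device byte counts (as a controller might poll them from flow
# counters) and checks them against user-defined policies that split
# a household data cap among devices.

class UsageCapController:
    def __init__(self, monthly_cap_bytes, policy):
        # policy maps device id -> fraction of the household cap it may use
        self.cap = monthly_cap_bytes
        self.policy = policy
        self.usage = {}  # device id -> bytes consumed this billing cycle

    def record(self, device, nbytes):
        """Accumulate usage reported for a device (e.g. from flow stats)."""
        self.usage[device] = self.usage.get(device, 0) + nbytes

    def action_for(self, device):
        """Decide what the data plane should do with this device's traffic."""
        allowed = self.cap * self.policy.get(device, 0.0)
        if self.usage.get(device, 0) >= allowed:
            return "throttle"   # e.g. push a rate-limiting flow rule
        return "forward"


ctrl = UsageCapController(monthly_cap_bytes=100_000_000,
                          policy={"dads-laptop": 0.5, "kids-ipad": 0.2})
ctrl.record("kids-ipad", 25_000_000)   # the iPad has exceeded its 20 MB share
print(ctrl.action_for("kids-ipad"))    # -> throttle
print(ctrl.action_for("dads-laptop"))  # -> forward
```

In a real deployment the "throttle" decision would translate into a rule or meter installed on the home gateway through the southbound API, rather than a returned string.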
The other works either depend on the ISP, allowing users to choose the relative priority of their applications and signal their preference to the ISP BIB003 , or allow a third party to control the Internet traffic usage BIB014 , BIB013 . Another group of papers addresses the specific issues arising from managing home WiFi access points BIB015 BIB016 , or in general all multi-technology wireless network devices BIB026 . A few papers adopt the concept of network slicing (Fratczak, BIB005 BIB009 BIB002 . Network slicing is a promising technique that creates different slices over the same physical home network, so that each slice is independently controllable and can be isolated for different services. The management of slices may be assigned to a third party. Finally, the last four papers in our collection are directed toward four special target applications. The first work proposes the instrumentation of home networks to enable their troubleshooting . This work presents the design requirements of a general-purpose home network logging platform that can record events supporting troubleshooting services for home network users. A second work discusses the idea of multi-home networking BIB010 , enabling on-demand provisioning of networked multi-home multimedia applications using SDN-based in-home consumer electronic devices. The automatic configuration of home networks is also addressed in , which proposes a method where the SDN controller performs auto-recognition and registration of home devices, then manages home devices according to the home network connection state. The final work addresses the problem of heterogeneity in home networks, and evaluates the ability of OpenFlow-enabled switches to manage heterogeneous home networks by utilising redundant links for flow rerouting and performing link switching between wired and wireless technologies both under normal conditions and in case of link failures BIB017 . |
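A recurring mechanism across the QoS-oriented works in this section, such as WNAVS and the HTB-based scheme, is to periodically re-divide the access bandwidth among flows in proportion to user-assigned priorities. The following is a minimal, hypothetical sketch of such weighted allocation; the flow names, weights, and rates are illustrative only and do not come from any of the cited systems.

```python
def allocate_bandwidth(total_kbps, flows):
    """Split the access link among flows in proportion to their priorities.

    flows: dict mapping flow id -> priority weight (higher = more bandwidth).
    Returns a dict of per-flow rate limits that a controller could install,
    e.g. as meter entries or HTB class rates.
    """
    total_weight = sum(flows.values())
    if total_weight == 0:
        return {f: 0 for f in flows}
    return {f: total_kbps * w / total_weight for f, w in flows.items()}

# A household with a 10 Mbps downlink: the video call outranks streaming,
# which outranks the bulk download.
limits = allocate_bandwidth(10_000, {"skype": 5, "youtube": 3, "download": 2})
print(limits)  # {'skype': 5000.0, 'youtube': 3000.0, 'download': 2000.0}
```

The surveyed systems differ mainly in where the weights come from (static user preferences, measured traffic statistics, or automatic application identification) and in how often the allocation is recomputed; the proportional split itself stays the same.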
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Discussion <s> Managing a home network is challenging because the underlying infrastructure is so complex. Existing interfaces either hide or expose the network's underlying complexity, but in both cases, the information that is shown does not necessarily allow a user to complete desired tasks. Recent advances in software defined networking, however, permit a redesign of the underlying network and protocols, potentially allowing designers to move complexity further from the user and, in some cases, eliminating it entirely. In this paper, we explore whether the choices of what to make visible to the user in the design of today's home network infrastructure, performance, and policies make sense. We also examine whether new capabilities for refactoring the network infrastructure - changing the underlying system without compromising existing functionality - should cause us to revisit some of these choices. Our work represents a case study of how co-designing an interface and its underlying infrastructure could ultimately improve interfaces for that infrastructure. <s> BIB001 | The purpose of this article is to explore the available works on using SDN in the realm of home networks. After a couple of years of research into the paradigm of SDN that commenced circa 2008, researchers soon realised the value of the new idea in contexts other than data centres and large enterprise networks. Indeed, the challenges of modern home networks and the complexity of their management push towards new approaches and remind everyone of the flexibility and power that SDN can bring in. The separation of control from data makes it possible to isolate the low-level details of how home network devices work and provide the home user (or a third party for that matter) with a clean interface to control network operation, exploiting the new programmability of networks.
Many network management functions that average home users cannot perform in current home networks become feasible, and even easy to perform, with the help of SDN. Based on the results of our survey, a couple of points may be worth highlighting in this discussion. First, it is apparent from the derived taxonomy that many individual tasks of network management can be the target of an SDN-based solution, such as Internet usage, security and QoE. Most of the surveyed works focused on these tasks and produced different architectures and prototypes to prove the concept of their design and demonstrate its implementation. Although these works have the SDN basis in common, they are independent of each other and, most probably, incompatible. Each one alone also fails to cover the whole range of network management tasks. This leads to the need for further studies to analyse, evaluate and combine these solutions into a unified framework, a sort of one-stop product for software defined home networking. We might eventually end up with a few such products, but the potential to integrate many of the proposed ideas is great. Second, many solutions aim to put control of managing the home network in the hands of the home user. Whatever the underpinning mechanism, all solutions would need to interface with the user at the front end. The role of the interface is crucial; it should make various functionalities and information on the underlying network visible to the user, and allow the user to make changes in response to that information. As pointed out in BIB001 , this raises questions on how to expose that kind of new functionality to users through intuitive interfaces that improve their awareness of the network status and enable them to take action.
For example, the downstream speed of the Internet broadband connection might be represented by the width of a pipe, and packet loss might be shown as certain traffic not making it all the way through the pipe. Sliders and icons can represent various devices, and drop-down lists can present available actions, and so on. That being said, it is also important to remember that while a novice user need not understand any details, an expert user might gain additional insight from knowing the underlying details, and hence exposing various levels of visibility may be required. Finally, an important point to notice is that most of the works suggest the involvement of a cloud-based third party (such as the ISP) in the management of the virtualised home network. As pointed out in BIB001 again, determining what information should be collected and presented to the operator is an area for future work. The challenge here is to balance the home user's privacy with the need for an ISP operator to see the configuration, topology, and devices on the home network. From a security perspective, exposure to external entities may also increase the attack surface on the home network, as more channels and more information are opened to parties outside the perimeter of the home network.
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. We extend our previous work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and the task of computer intrusion detection, we conducted a lesion study to analyze trade-offs in performance. Results showed that, although our partial-memory model decreased predictive accuracy by 2%, it also decreased memory requirements by 75%, learning time by 75%, and in some cases, concept complexity by 10%, an outcome consistent with earlier results using our partial-memory method and batch learning. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges: the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Bayesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification.
Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Learning concepts that change over time is important for a variety of applications in which an intelligent system must acquire and use a behavioral profile. Computer intrusion detection, calendar scheduling, and intelligent user interfaces are three examples. An interesting class of methods for learning such concepts consists of algorithms that maintain a portion of previously encountered examples. Since concepts change over time and these methods store selected examples, mechanisms must exist to identify and remove irrelevant examples of old concepts. In this paper, we describe an incremental rule learner with partial instance memory, called AQ11-PM+WAH, that uses Widmer and Kubat's heuristic to adjust dynamically the window over which it retains and forgets examples. We evaluated this learner using the STAGGER concepts and made direct comparisons to AQ-PM and to AQ11-PM, similar learners with partial instance memory. Results suggest that the forgetting heuristic is not restricted to FLORA2, the learner for which it was originally designed. Overall, results from this study and others suggest learners with partial instance memory converge more quickly to changing target concepts than algorithms that learn solely from new examples. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> We consider strategies for building classifier ensembles for non-stationary environments where the classification task changes during the operation of the ensemble. Individual classifier models capable of online learning are reviewed. The concept of "forgetting" is discussed.
Online ensembles and strategies suitable for changing environments are summarized. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Induction of a concept description given noisy instances is difficult and is further exacerbated when the concepts may change over time. This paper presents a solution which has been guided by psychological and mathematical results. The method is based on a distributed concept description which is composed of a set of weighted, symbolic characterizations. Two learning processes incrementally modify this description. One adjusts the characterization weights and another creates new characterizations. The latter process is described in terms of a search through the space of possibilities and is shown to require linear space with respect to the number of attribute-value pairs in the description language. The method utilizes previously acquired concept definitions in subsequent learning by adding an attribute for each learned concept to instance descriptions. A program called STAGGER fully embodies this method, and this paper reports on a number of empirical analyses of its performance. Since understanding the relationships between a new learning method and existing ones can be difficult, this paper first reviews a framework for discussing machine learning systems and then describes STAGGER in that framework. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Alexey Tsymbal Department of Computer Science Trinity College Dublin, Ireland [email protected] April 29, 2004 Abstract In the real world concepts are often not stable but change with time. Typical examples of this are weather prediction rules and customers’ preferences. The underlying data distribution may change as well. Often these changes make the model built on old data inconsistent with the new data, and regular updating of the model is necessary. 
This problem, known as concept drift, complicates the task of learning a model from data and requires special approaches, different from commonly used techniques, which treat arriving instances as equally important contributors to the final concept. This paper considers different types of concept drift, peculiarities of the problem, and gives a critical review of existing approaches to the problem. 1. Definitions and peculiarities of the problem A difficult problem with learning in many real-world domains is that the concept of interest may depend on some hidden context, not given explicitly in the form of predictive features. A typical example is weather prediction rules that may vary radically with the season. Another example is the patterns of customers' buying preferences that may change with time, depending on the current day of the week, availability of alternatives, inflation rate, etc. Often the cause of change is hidden, not known a priori, making the learning task more complicated. Changes in the hidden context can induce more or less radical changes in the target concept, which is generally known as concept drift (Widmer and Kubat, 1996). An effective learner should be able to track such changes and to quickly adapt to them. A difficult problem in handling concept drift is distinguishing between true concept drift and noise. Some algorithms may overreact to noise, erroneously interpreting it as concept drift, while others may be highly robust to noise, adjusting to the changes too slowly. An ideal learner should combine robustness to noise and sensitivity to concept drift (Widmer and Kubat, 1996). In many domains, hidden contexts may be expected to recur. Recurring contexts may be due to cyclic phenomena, such as seasons of the year, or may be associated with irregular phenomena, such as inflation rates or market mood (Harries and Sammut, 1998).
In such domains, in order to adapt more quickly to concept drift, concept <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Most of the work in machine learning assumes that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the distribution that generates the examples changes over time. We present a method for detection of changes in the probability distribution of examples. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error will decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared if, in a sequence of examples, the error increases reaching the warning level at example k_w, and the drift level at example k_d. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since k_w. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and with learning the new concept. We also observe that the method is independent of the learning algorithm. <s> BIB007 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Induction of decision rules within the dominance-based rough set approach to the multicriteria and multiattribute classification is considered.
Within this framework, we discuss two algorithms: Glance and an extended version of AllRules. The important characteristics of Glance is that it induces the set of all dominance–based rules in an incremental way. On the other hand, AllRules induces in a non–incremental way the set of all robust rules, i.e. based on objects from the set of learning examples. The main aim of this study is to compare both these algorithms. We experimentally evaluate them on several data sets. The results show that Glance and AllRules are complementary algorithms. The first one works very efficiently on data sets described by a low number of condition attributes and a high number of objects. The other one, conversely, works well on data sets characterized by a high number of attributes and a low number of objects. <s> BIB008 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift. <s> BIB009 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> An emerging problem in Data Streams is the detection of concept drift. 
This problem is aggravated when the drift is gradual over time. In this work we define a method for detecting concept drift, even in the case of slow gradual change. It is based on the estimated distribution of the distances between classification errors. The proposed method can be used with any learning algorithm in two ways: using it as a wrapper of a batch learning algorithm or implementing it inside an incremental and online algorithm. The experimental results compare our method (EDDM) with a similar one (DDM). The latter uses the error-rate instead of the distance-error-rate. <s> BIB010 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB011 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> We address adaptive classification of streaming data in the presence of concept change. An overview of the machine learning approaches reveals a deficit of methods for explicit change detection. Typically, classifier ensembles designed for changing environments do not have a bespoke change detector. Here we take a systematic look at the types of changes in streaming data and at the current approaches and techniques in online classification. Classifier ensembles for change detection are discussed.
An example is carried through to illustrate individual and ensemble change detectors for both unlabelled and labelled data. While this paper does not offer ready-made solutions, it outlines possibilities for novel approaches to classification of streaming data. <s> BIB012 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Sales prediction is an important problem for different companies involved in manufacturing, logistics, marketing, wholesaling and retailing. Different approaches have been suggested for food sales forecasting. Several researchers, including the authors of this paper, reported on the advantage of one type of technique over the others for a particular set of products. In this paper we demonstrate that besides an already recognized challenge of building accurate predictive models, the evaluation procedures themselves should be considered more carefully. We give illustrative examples to show that e.g. popular MAE and MSE estimates can be intuitive with one type of product and rather misleading with the others. Furthermore, averaging errors across differently behaving products can be also counter intuitive. We introduce new ways to evaluate the performance of wholesales prediction and discuss their biases with respect to different error types. <s> BIB013 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Fuel feeding and inhomogeneity of fuel typically cause process fluctuations in the circulating fluidized bed (CFB) boilers. If control systems fail to compensate the fluctuations, the whole plant will suffer from fluctuations that are reinforced by the closed-loop controls. Accurate estimates of fuel consumption among other factors are needed for control systems operation. In this paper we address a problem of online mass flow prediction. 
Particularly, we consider the problems of (1) constructing the ground truth, (2) handling noise and abrupt concept drift, and (3) learning an accurate predictor. Last but not least we emphasize the importance of having the domain knowledge concerning the considered case. We demonstrate the performance of OMPF using real data sets collected from the experimental CFB boiler. <s> BIB014 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Sales prediction is a complex task because of a large number of factors affecting the demand. We present a context aware sales prediction approach, which selects the base predictor depending on the structural properties of the historical sales. In the experimental part we show that there exist product subsets on which, using this strategy, it is possible to outperform naive methods. We also show the dependencies between product categorization accuracies and sales prediction accuracies. A case study of a food wholesaler indicates that moving average prediction can be outperformed by intelligent methods, if proper categorization is in place, which appears to be a difficult task. <s> BIB015 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. 
It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB016 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Concept drift refers to a non-stationary learning problem over time. The training and the application data often mismatch in real life problems. In this report we present a context of the concept drift problem. We focus on the issues relevant to adaptive training set formation. We present the framework and terminology, and formulate a global picture of concept drift learner design. We start with formalizing the framework for the concept drifting data in Section 1. In Section 2 we discuss the adaptivity mechanisms of the concept drift learners. In Section 3 we overview the principle mechanisms of concept drift learners. In this chapter we give a general picture of the available algorithms and categorize them based on their properties. Section 5 discusses the related research fields and Section 5 groups and presents major concept drift applications. This report is intended to give a bird's-eye view of the concept drift research field, provide a context of the research and position it within a broad spectrum of research fields and applications.
<s> BIB017 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB018 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with decision attribute. The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating a drift detector into the evolving ensemble. Its performance was evaluated experimentally on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier. The results show that BWE improves evaluation measures like processing time and memory used, and obtains competitive total accuracy. <s> BIB019 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Overload management has become very important in public safety systems that analyse high performance multimedia data streams, especially in the case of detection of terrorist and criminal dangers. Efficient overload management improves the accuracy of automatic identification of persons suspected of terrorist or criminal activity without requiring interaction with them.
We argue that in order to improve the quality of multimedia data stream processing in the public safety arena, the innovative concept of a Multimedia Data Stream Management System (MMDSMS) using load-shedding techniques should be introduced into the infrastructure to monitor and optimize the execution of multimedia data stream queries. In this paper, we present a novel content-centered load shedding framework, based on searching and matching algorithms, for analysing video tuples arriving within multimedia data streams. The framework tracks and registers all symptoms of overload, and either prevents overload before it occurs, or minimizes its effects. We have extended our Continuous Query Language (CQL) syntax to enable this load shedding technique. The effectiveness of the framework has been verified using both artificial and real data video streams collected from monitoring devices. <s> BIB020 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Three block based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB021 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. 
In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time-changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaptation to changes, and compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB022 | Data mining is a relatively young and interdisciplinary field of computing science. It is one of the steps in the Knowledge Discovery in Databases (KDD) process that tries to discover patterns and dependencies in large data sets. One subtask of data mining is the classification problem. It identifies the class label to which a new observation belongs, using knowledge extracted from labeled training examples. Most of the existing classifiers are created statically. They receive the whole learning set, from which knowledge is extracted. The knowledge is obtained only once and is not updated in the future. Such standard classifiers fail to answer modern challenges like processing streaming data. Data streams are characterized by a large, possibly infinite, volume of data. Processing streaming data may be very expensive due to multiple data accesses. That is why many classifiers try to minimize the number of reads. The second problem with data streams is how many examples to remember.
Classifiers may have a full memory (they remember all training data), a partial memory (they memorize some important learning examples), or no memory. Some of the algorithms remember only meta-data connected with learning examples. Data streams can be processed by online classifiers. Those classifiers should have the following qualities BIB004 :
• Single pass through the data. The classifier reads each example only once.
• Limited memory and processing time. Each example should be processed very fast and in a constant period of time.
• Any-time learning. The classifier should provide the best answer at every moment of time.
Processing of data streams is a very popular and interesting research topic. An example of a system designed for stream analysis can be found in BIB020 . While processing streaming data, one may encounter the problem that the environment and the classification task change over time. The concepts of interest may depend on some hidden context BIB009 , which is unknown. Changes in the hidden context can induce more or less radical changes in target concepts, producing what is generally known as concept drift BIB005 . One of the most common examples of changing environments is spam detection. The rules that assign e-mails to different groups change with time; they depend on user preferences and on active spammers, who invent new ways to trick the up-to-date classifier. The problem of concept drift is real and has a wide range of applications. According to Zliobaite BIB017 , application domains can be divided into four main groups: Monitoring and control, Assistance and information, Decision making, and AI and robotics. One of the typical monitoring problems is intrusion detection. The attackers invent new ways of overcoming current security systems, which is a source of concept drift. Other examples of concept drift from the Monitoring and control group are fraud detection in the financial sector and traffic management.
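The qualities listed above can be made concrete with a prequential (test-then-train) loop: every example is first used for testing and then, exactly once, for training. This is an illustrative sketch, not code from any of the surveyed systems; `MajorityClassifier`, `predict_one` and `learn_one` are hypothetical names standing in for any online learner.

```python
class MajorityClassifier:
    """Toy any-time model: predicts the majority label seen so far."""
    def __init__(self):
        self.counts = {}

    def predict_one(self, x):
        # Best available answer at this moment (any-time property).
        return max(self.counts, key=self.counts.get) if self.counts else None

    def learn_one(self, x, y):
        # Constant time and memory per example.
        self.counts[y] = self.counts.get(y, 0) + 1

def prequential(stream, model):
    """Single pass over a non-empty stream: test first, then train."""
    errors = 0
    for i, (x, y) in enumerate(stream, start=1):
        if model.predict_one(x) != y:
            errors += 1
        model.learn_one(x, y)
    return errors / i  # online error rate after one pass
```

Because the model is updated after every example, its current hypothesis is always available, which is exactly the any-time requirement.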
Applications from the Assistance and information domain mainly organize and/or personalize the flow of information. The cost of a mistake is relatively low. An example of such an application is customer profiling and direct marketing, where customers' needs and interests change with time. Smart home systems, which should adapt to the changing environment and users' needs, are an example from the AI and robotics domain. A wide range of occurrences of the concept drift problem was presented in BIB018 BIB017 . Systems designed for specific applications like food sales or CFB boilers were described in BIB013 BIB014 BIB015 . A more formal definition of concept drift is as follows. At each point in time t, every example is generated by a source S_t, which is a distribution over the data. Concepts are stable if all examples are sampled from the same source; otherwise concept drift exists BIB017 . Two main types of concept drift may be distinguished: sudden (abrupt) and gradual (incremental) BIB006 . When the source at time t is suddenly replaced with another one, a sudden concept drift occurs. For example, John was listening to pop music his whole teenage life, but when he graduated from university he changed his preferences and started to listen only to classical music. A gradual drift would occur if John started to listen to classical music while still enjoying pop music, but his interest in pop decreased with time. In this case, the probability of sampling from the first source decreases with time, while the probability of sampling from the second source increases. In some domains previously seen concepts may reappear after some period of time. This type of change is known as a recurring context or recurring concept. Periodic seasonality is not considered to be a concept drift problem. Recurring concepts differ from common seasonality because it is not known when they may reappear BIB017 .
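The two drift types can be sketched with a toy stream generator: a sudden drift swaps the generating source at one point in time, while a gradual drift increases the probability of sampling from the new source. The source labels, function names and the linear mixing schedule are illustrative assumptions, not part of the surveyed work.

```python
import random

def source1():
    return "pop"        # e.g. John's old preference

def source2():
    return "classical"  # e.g. John's new preference

def sudden_stream(n, change_at):
    # Source is replaced abruptly at time `change_at`.
    return [source1() if t < change_at else source2() for t in range(n)]

def gradual_stream(n, start, end, seed=0):
    # P(sample from S2) grows linearly from 0 to 1 between `start` and `end`.
    rng = random.Random(seed)
    out = []
    for t in range(n):
        if t < start:
            p2 = 0.0
        elif t >= end:
            p2 = 1.0
        else:
            p2 = (t - start) / (end - start)
        out.append(source2() if rng.random() < p2 else source1())
    return out
```

Before `start` only the old source is active and after `end` only the new one; in between, examples from both concepts are interleaved, which is what makes gradual drift harder to detect.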
Other examples of change worth mentioning are noise and blips BIB012 . Noise is a non-significant change, and a good online classifier should not react to it. A blip represents a rare event that should be treated as an outlier and discarded. Mining data streams in the presence of concept drift is a rather new topic in the machine learning world, but there already exist algorithms that attempt to solve this problem. For a taxonomy of available concept drift learners see BIB017 . In general, they can be divided into two main groups: trigger-based and evolving. The trigger-based model contains a change detector that indicates a need for model change. The change detection process is separate from classification. The standard actions of a classifier equipped with a detector are as follows: the classifier predicts a label for a received example e; then the true label and the predicted label are submitted to the change detector; if the detector detects a change, feedback is passed to the classifier; then the classifier is retrained according to the level of change BIB012 . One of the most popular drift detection methods is DDM, proposed by Gama et al. in BIB007 . This approach detects changes in the probability distribution of examples. The main idea of this method is to monitor the error rate produced by a classifier. Statistical theory affirms that the error decreases as long as the distribution is stable BIB007 . When the error increases, it signifies that the distribution has changed. DDM operates on labeled data that arrive one at a time. Another interesting detector, which performs better than DDM for slow gradual drift, is EDDM, proposed in BIB010 . It uses the distance between classification errors in order to detect a change. There is also a solution that detects change from data arriving in batches, called the Batch Drift Detection Method (BDDM). It was proposed in BIB019 and improved in BIB021 . Evolving methods operate in a different way than trigger-based solutions.
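The error-rate monitoring idea behind DDM can be sketched as follows: track the online error rate p and its standard deviation s, remember the best p + s seen so far, and raise a warning when p + s exceeds p_min + 2·s_min and a drift when it exceeds p_min + 3·s_min. The thresholds follow the usual description of the method, but the warm-up length and other details are simplifications; treat this as an illustration rather than the reference implementation.

```python
import math

class DDMSketch:
    def __init__(self):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, is_error):
        """Feed one prediction outcome; returns 'stable', 'warning' or 'drift'."""
        self.n += 1
        self.errors += int(is_error)
        p = self.errors / self.n                 # online error rate
        s = math.sqrt(p * (1 - p) / self.n)      # its standard deviation
        if self.n < 30:                          # warm-up: too few examples
            return "stable"
        if p + s < self.p_min + self.s_min:      # best performance so far
            self.p_min, self.s_min = p, s
        if p + s > self.p_min + 3 * self.s_min:
            return "drift"
        if p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"
```

In the full method, the examples seen since the warning level was reached are used to train the replacement model, so the detector's output also tells the learner which data still belongs to the new concept.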
They try to build the most accurate classifiers at each moment of time without explicit information about the occurrence of a change. The most popular evolving technique for handling concept drift is an ensemble of classifiers BIB017 . An example of such an ensemble is the Accuracy Weighted Ensemble (AWE) BIB002 . It is the best representative of block-based ensembles, where component classifiers are constructed from sequentially arriving blocks of training data. When a new block is available, a new classifier is built from it and the already existing component classifiers are evaluated. The new classifier usually replaces the worst component in the ensemble. For an overview of available complex methods see BIB016 BIB004 BIB012 BIB006 BIB017 . There also exist hybrid methods that incorporate an explicit drift detector into an ensemble of classifiers. An example of such an approach is the Batch Weighted Ensemble (BWE), introduced in BIB019 and improved in BIB021 . BWE uses the Batch Drift Detection Method (BDDM) to detect an occurrence of change and updates its ensemble according to the type of change. Another block ensemble that is combined with an online drift detector is the Adaptive Classifiers Ensemble (ACE) proposed in . Besides a drift detection mechanism and many batch learners, this system also contains an online learner. This paper focuses on incremental or online learning. A learning task is incremental if the training examples become available over time, usually one at a time . In this case learning may need to last indefinitely. This type of learning is similar to a human's acquisition of knowledge. People learn all the time and their knowledge is constantly revised based on newly gathered information. The term "incremental" is also applied to learning algorithms. An algorithm is online if, for given training examples, it produces a sequence of hypotheses such that the current hypothesis depends only on the previous one and on the current learning example e .
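A block-based ensemble in the spirit of AWE can be sketched as below: each incoming block trains a new component, all components are scored on the newest block, and only the k best are kept. AWE itself derives weights from mean squared error; plain accuracy and the toy majority-class learner used here are simplifications made for brevity, not the actual algorithm.

```python
def train_component(block):
    # Hypothetical learner: returns a classifier trained on one block.
    labels = [y for _, y in block]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority          # toy model: constant prediction

def accuracy(clf, block):
    return sum(clf(x) == y for x, y in block) / len(block)

def process_block(ensemble, block, k=5):
    # Train a new component on the block, then keep the k components
    # that perform best on this newest block (the worst ones are dropped).
    ensemble = ensemble + [train_component(block)]
    ranked = sorted(ensemble, key=lambda c: accuracy(c, block), reverse=True)
    return ranked[:k]

def predict(ensemble, x):
    # Simple majority vote of the surviving components.
    votes = {}
    for clf in ensemble:
        y = clf(x)
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)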
All learning algorithms are applicable to all learning tasks. However, the most natural and flexible way to handle incremental learning tasks is to use incremental learners. Unfortunately, incremental learning is a rather forgotten area in the machine learning world . Nevertheless, there exist many incremental learning algorithms inducing different types of knowledge. An example of an incremental classifier inducing decision rules was described in BIB008 . However, most of the existing solutions are not applicable to processing data streams. One of the most popular incremental methods for mining data streams is Very Fast Decision Trees (VFDT) proposed in . It is an anytime system that builds decision trees using constant memory and constant time per example. VFDT uses the Hoeffding bound to guarantee that its output is asymptotically nearly identical to the result obtained by a batch learner. VFDT was improved in to deal with the concept drift problem. CVFDT uses a sliding window on incoming data, and old data that fall outside the window are forgotten. Another knowledge representation that was adjusted to processing data streams is decision rules. Decision rules can provide descriptions that are easily interpretable by a human. They are also very flexible and can be quickly updated or removed when a change occurs. Decision rules cover selected parts of the space, so if they become out-of-date there is no need to learn from scratch; only the rules that cover regions with the change should be revised. However, according to Gama BIB018 , they have not received enough attention in the stream mining community so far. Decision rules can be more effective for mining data streams than other methods. In the case of algorithms based on Hoeffding Trees, the adaptation to change is performed via incremental growth of the tree. However, for sudden change the reaction might be too slow, because it might require rebuilding the whole tree structure.
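The Hoeffding bound that VFDT relies on states that, after n independent observations of a variable with range R, the true mean differs from the observed mean by at most epsilon = sqrt(R^2 * ln(1/delta) / (2n)) with probability 1 - delta; a leaf is split once the observed gain difference between the two best attributes exceeds epsilon. A small helper makes the "more data, tighter decision" behaviour visible (the numbers below are illustrative only):

```python
import math

def hoeffding_bound(r, delta, n):
    """Epsilon such that |true mean - observed mean| <= epsilon
    with probability 1 - delta, for a variable with range r seen n times."""
    return math.sqrt((r * r * math.log(1.0 / delta)) / (2.0 * n))

# As n grows, epsilon shrinks, so a split decision made on a sample
# converges to the decision a batch learner would make on all the data.
eps_few = hoeffding_bound(r=1.0, delta=1e-7, n=100)
eps_many = hoeffding_bound(r=1.0, delta=1e-7, n=100_000)
```

This is why VFDT needs only constant memory per leaf: it stores sufficient statistics, not the examples themselves, and the bound tells it when those statistics are trustworthy enough to commit to a split.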
This might be very inefficient. Decision rules are more flexible than trees. A set of decision rules takes advantage of individual rules that can be managed independently BIB022 . Therefore, they can be altered more easily if a change occurs, or even removed if necessary. For gradual concept drift, the adaptation to change probably has similar complexity for both knowledge representations. Next, decision trees split the data space, whereas decision rules cover parts of the data space. While processing data instance by instance, a tree might need more changes in the global model, while decision rules are updated independently. On the other hand, the process of incremental rule induction is more sophisticated than the induction of a decision tree. This may be the reason why decision rules are not as popular as decision trees for mining data streams. To the author's best knowledge, there does not exist any survey of incremental rule-based classifiers learning from non-stationary environments. The goal of this paper is to present the key online algorithms proposed for mining data streams in the presence of concept drift. It describes four of the proposed algorithms: FLORA, AQ11-PM+WAH, FACIL and VFDR. Those are the only purely incremental rule-based classifiers mining data streams in the presence of concept drift. First, the FLORA framework is described: the first family of algorithms that flexibly react to changes in concepts, can use previous knowledge in situations when contexts reappear, and are robust to noise in the data BIB009 . Then, algorithms from the AQ family are presented with their modifications. AQ-PM is a static learner that selects extreme examples from rules' boundaries and stores them in the partial memory for each incoming batch of data. AQ11-PM BIB001 is a combination of the incremental AQ11 algorithm with a partial memory mechanism. AQ11-PM+WAH BIB003 is extended with a heuristic for a flexible size of the window with stored examples.
The FACIL algorithm behaves similarly to AQ11-PM BIB011 . However, it differs in that the examples stored in the partial memory do not have to be extreme ones. Those three main algorithms were not tested on massive datasets. The newest proposal, called VFDR BIB018 , was tested on huge data streams. It induces ordered or unordered sets of decision rules that are efficient in terms of memory and learning time. This paper is organized as follows. The next section presents the basics of rule induction. Section 3 describes the first incremental rule-based learners for the concept drift problem: the FLORA family. Section 4 is devoted to the AQ family of algorithms, e.g., AQ11-PM+WAH. Section 5 presents the FACIL algorithm. Section 6 describes the newest algorithms, VFDR and AVFDR. Section 7 concludes this paper.
Incremental Rule-Based Learners for Handling Concept Drift: An Overview

2 Basics of the Rule Induction

A classification problem relates to an exploration of hypotheses describing so-called concepts. The term concept denotes a set of objects with some common characteristics that distinguish it from other concepts.
In order to describe similar features, the terms category or class are also used. Hypotheses are results of supervised learning. They are functions which best describe concepts from the supplied learning examples. Generally, hypotheses assign examples to the appropriate category (class). Those functions can be expressed in different forms. One of the most popular methods of knowledge representation is decision rules. There exist many algorithms that induce decision rules. For reviews see BIB005 BIB004 . Most of the existing classifiers extract knowledge from static data. As input they obtain the whole learning set, from which hypotheses are found. The set of learning examples may be represented in several ways; the most common is a decision table. The collection of objects U can be divided with respect to concept C_k into positive examples E_k^+ and negative examples E_k^-. Decision rule r for concept C_k is defined as an expression taking the form: if P then Q. P is the conditional part of the rule (premise; antecedent). For the conditional part, the terms description item or description can also be used. Q is the decision part of the rule (conclusion; label), indicating affiliation to concept C_k. In the literature, a decision rule can also take the form P → Q. Conditional part P of a rule r is a conjunction of elementary conditions and is represented in the form:

P = w_1 ∧ w_2 ∧ ... ∧ w_l,

where l is the number of conditions, known as the length of the rule. A single elementary condition w_i (selector) is represented as:

w_i = (at_i rel v_i),

where at_i is a conditional attribute and v_i is a value from the domain of attribute at_i. rel is a relational operator from the set {=, ≠, <, ≤, >, ≥, ∈}. Rule r covers an example when the attributes of the example match the rule's conditions. Rules can cover both positive and negative examples. Examples from the learning set that fulfill conditional part P of rule r are called the coverage of the rule and are denoted by [P]. Rule r is discriminant or certain when it covers only positive examples (no negative examples covered).
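The rule representation just defined (a conjunction of selectors P implying a class label Q) maps naturally onto a small data structure. The following is a hypothetical minimal sketch for illustration, not taken from any of the surveyed systems; the names `Selector` and `Rule` are assumptions.

```python
import operator
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Selector:
    """One elementary condition w_i = (at_i rel v_i)."""
    attribute: str
    rel: Callable[[Any, Any], bool]   # relational operator, e.g. operator.eq
    value: Any

    def matches(self, example: dict) -> bool:
        return self.rel(example[self.attribute], self.value)

@dataclass
class Rule:
    """A decision rule 'if P then Q'."""
    conditions: List[Selector]   # conjunction P = w_1 AND ... AND w_l
    decision: str                # concept label Q

    def covers(self, example: dict) -> bool:
        # the rule covers an example when all selectors match
        return all(w.matches(example) for w in self.conditions)
```

For instance, `Rule([Selector("temp", operator.le, 20)], "cold")` covers every example whose `temp` attribute is at most 20.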
Thanks to this, the rule distinguishes examples belonging to the class indicated by the rule's decision part. A discriminant rule r is minimal if removing one of its selectors results in negative examples being covered. There also exist other types of decision rules, like probabilistic rules. They do not indicate a single category but return probabilities connected with every decision class label. Probability estimation techniques for rule learners are considered in . The problem of finding a minimal set of rules covering the learning examples is NP-complete. Many heuristic algorithms exist that induce decision rules. One of the most popular techniques is sequential covering. In general, it relies on learning a single rule for a given concept, removing the examples covered by the rule, and repeating this process for the remaining examples from the same concept. Next, rules for other concepts are generated sequentially. The pseudocode of a sequential covering mechanism is presented as Algorithm 1.

Algorithm 1: Sequential Covering algorithm
Input: U - a set of learning examples; A - conditional attributes
Output: RS - a set of induced rules
1: RS := ∅
2: for each concept C_k do
3:     E := U
4:     while E contains positive examples of C_k do
5:         r := LearnSingleRule(C_k, E, A)
6:         RS := RS ∪ {r}
7:         E := E \ {examples covered by r}
8: Return RS

The function LearnSingleRule (line 5) depends on the algorithm used; sample realizations can be found in BIB002 BIB003 BIB006 BIB001 . In most of these algorithms, the initial candidate for the conditional part of the rule covers the set of all learning examples, including the negative ones. Then the rule is specialized by adding elementary conditions until the acceptance threshold is reached. Candidates for the elementary conditions of a rule are evaluated with respect to different measures, depending on the algorithm. The most commonly used criteria are as follows :
• Maximizing the number of positive examples covered by the conjunction of elementary conditions in P.
• Maximizing the ratio of covered positive examples to the total number of examples covered.
• Minimizing the number of elementary conditions in P, i.e., minimizing the length of the rule.
Other algorithms use the entropy of information to evaluate the conditional part of the rule. It was introduced by Shannon in . The entropy of information of a given learning set S is defined as:

Ent(S) = - Σ_{i=1}^{n_c} p_i log_2 p_i,

where p_i is the probability of the i-th class in the set of examples S and n_c is the number of different class labels. The entropy is a cost-type measure: the smaller the value is, the better the conjunction in P is. Another important measure for evaluating the dependence of P and Q is the m-estimate proposed by Cestnik in .
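A minimal runnable sketch of the sequential covering scheme of Algorithm 1 for nominal attributes follows. It is illustrative only: `learn_single_rule` here greedily adds the equality selector that maximizes the ratio of covered positive examples (one of the criteria listed above) and stops when the rule is discriminant or cannot be specialized further; all function names and the dictionary-based example format are assumptions.

```python
def learn_single_rule(target, examples, attributes):
    """Greedy LearnSingleRule: start from the empty premise, which covers
    all examples, and add equality selectors until no negatives remain."""
    conditions = []
    covered = list(examples)
    while any(e["class"] != target for e in covered):
        best = None
        for at in attributes:
            for v in {e[at] for e in covered}:
                subset = [e for e in covered if e[at] == v]
                pos = sum(1 for e in subset if e["class"] == target)
                if pos == 0:
                    continue
                ratio = pos / len(subset)   # covered-positive ratio criterion
                if best is None or ratio > best[0]:
                    best = (ratio, at, v)
        if best is None:                    # cannot specialize further
            break
        _, at, v = best
        conditions.append((at, v))
        covered = [e for e in covered if e[at] == v]
    return conditions

def sequential_covering(examples, attributes):
    """Algorithm 1: learn rules per concept, removing covered examples."""
    rules = []
    for target in {e["class"] for e in examples}:
        remaining = list(examples)
        while any(e["class"] == target for e in remaining):
            conds = learn_single_rule(target, remaining, attributes)
            rules.append((conds, target))
            remaining = [e for e in remaining
                         if not all(e[at] == v for at, v in conds)]
    return rules
```

On a toy weather-style dataset this induces one rule per class, each covering only its own positive examples.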
The definition of the m-estimate is:

m(r) = (n_p + m · p_i) / (n + m),

where n_p is the number of positive examples covered by P, n is the total number of all examples covered by P, p_i is the prior probability of the class C_k, and m is a constant depending on the data. A special case of the m-estimate is the Laplace estimate, defined as:

Laplace(r) = (n_p + 1) / (n + n_c),

where n_c is the number of different class labels. More about these measures can be found in . One of the first algorithms based on the sequential covering idea is AQ, proposed by Michalski BIB001 . It operates as follows. At the beginning of each iteration, the currently processed decision class is chosen. Next, sets of positive and negative examples are created with respect to the given class label. Then, a seed is selected randomly from the positive examples. In the next step, a star is generated. A star is the set of all rules that cover the seed and do not cover any of the negative examples. Extending the seed against all negative examples is a multistep procedure. While the star covers negative examples, one of them is selected. Then, all maximally general rules that cover the seed and exclude the negative example are found. The resulting set is called a partial star of the seed against the negative example. Next, a new partial star is generated by intersecting the initial star with the partial star of the seed against the negative example. In the end, the new partial star is trimmed if the number of rules exceeds a user-defined threshold, and the new partial star becomes the star. This threshold was introduced in order to limit the search space, which would otherwise grow rapidly with the number of negative examples and the number of attributes. A typical criterion for trimming is the number of positive examples covered. In case of a tie, the minimum number of selectors is preferred. The procedure of star extension is repeated until the star no longer covers any negative examples.
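The two estimates above are straightforward to compute; the sketch below (function names assumed) also makes their relationship explicit: the Laplace estimate is the m-estimate with m = n_c and a uniform prior p_i = 1/n_c.

```python
def m_estimate(n_pos: int, n_total: int, prior: float, m: float) -> float:
    """m-estimate of rule quality: (n_p + m * p_i) / (n + m)."""
    return (n_pos + m * prior) / (n_total + m)

def laplace(n_pos: int, n_total: int, n_classes: int) -> float:
    """Laplace estimate: the special case of the m-estimate obtained
    with m = n_c and uniform prior p_i = 1 / n_c."""
    return (n_pos + 1) / (n_total + n_classes)
```

For a rule covering 5 positives out of 10 examples in a two-class problem, both estimates give 6/12 = 0.5.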
After the star is created, the best rule from the star is chosen according to user-defined criteria. The rule is added to the current set of rules. This mechanism iteratively induces decision rules until all positive examples from the given decision class are covered. The whole process is rerun for every label of the decision class. For details see BIB001 . Another algorithm, CN2, proposed in BIB003 , modifies the AQ algorithm in a way that removes the dependence on specific examples and increases the space of searched rules. Unlike the AQ-based system, which induces an unordered set of decision rules, CN2 produces an ordered list of if-then rules. CN2 works in an iterative fashion. In each iteration, it searches for a rule that covers a large number of examples of a single class C_k and few of the other classes. When the best rule according to the entropy measure is found, the algorithm removes the covered examples from the training set and adds the rule to the end of the rule list. This process is repeated until no more satisfactory rules can be found. CN2 searches for new rules by performing a general-to-specific search. At each stage, CN2 retains a size-limited set, or star, S of the best rules found so far. The system examines only specializations of this set, performing a beam search of the space of rules. A rule is specialized either by adding a new elementary condition or by removing disjunctive values from one of its selectors. Each rule can be specialized in several ways; CN2 generates and evaluates all of them. In the end, star S is trimmed by removing the rules with the lowest ranking values as measured by a given evaluation function, the likelihood ratio statistic. For more details see BIB003 .
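For nominal attributes with equality selectors only, the partial-star construction and star intersection used in AQ's seed extension can be sketched as follows. This is a hypothetical simplification: AQ itself also uses more general selector forms (such as at ≠ v and internal disjunctions) and trims the star by user-defined criteria, none of which is shown here.

```python
def partial_star(seed: dict, negative: dict, attributes):
    """Single-selector rules (as {attribute: value} dicts) that cover the
    seed and exclude the negative example: one candidate per attribute
    on which the two examples differ."""
    return [{at: seed[at]} for at in attributes if seed[at] != negative[at]]

def intersect_stars(star, p_star):
    """Intersect a star with a partial star by conjoining conditions,
    dropping contradictory combinations and duplicates."""
    result = []
    for r in star:
        for p in p_star:
            merged = dict(r)
            ok = True
            for at, v in p.items():
                if at in merged and merged[at] != v:
                    ok = False          # contradictory selectors
                    break
                merged[at] = v
            if ok and merged not in result:
                result.append(merged)
    return result
```

Starting from the universal rule `{}` and intersecting with the partial star against each negative example in turn reproduces, in miniature, the star-extension loop described above.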
It is particularly well-suited for analyzing data containing a mixture of numerical and qualitative attributes, inconsistent descriptions of objects, or missing attribute values. Searching for the best single rule and selecting the best condition is controlled by a criterion based on an entropy measure. For more details see . An induced set of decision rules can be used for the classification of new incoming examples. These new examples were not used during the learning phase; their description in terms of conditional attributes is known, and the goal is to determine the correct decision class label. Classification of new examples is based on matching the description of the new object against the conditional parts of the decision rules. Two main matching types can be distinguished: full (strict) matching and partial (flexible) matching. Full matching takes place when all elementary conditions of a rule match the example's attributes. In the case of partial matching, there exists at least one elementary condition of the rule that does not match the new object's description. The classification strategy is performed differently depending on whether the decision rules form an ordered list or an unordered set. In the case of an ordered list of decision rules, only the first rule that matches the example is fired, and the label associated with that rule determines the example's class label. When the first rule covering the example is found, the rest of the rules are not visited. When none of the rules match the example, the default rule is used. Generally, the default rule indicates the majority class, i.e., the largest class in the training set. In the case of an unordered set of decision rules using full or strict matching, three situations are possible: a unique match (to one or more rules from the same class); matching more rules from different classes; or not matching any rules at all.
In both of the latter situations the suggestion is ambiguous; thus, a proper resolution strategy is necessary. One of the solutions is the strategy introduced by Grzymala-Busse . It has been successfully applied in many experiments. Generally, it is based on a voting of matching rules with their supports. The total support for class C_k is defined as:

sup(C_k) = Σ_{i=1}^{n_r} sup(r_i),

where r_i is a matched rule that indicates class C_k, n_r is the number of these rules, and sup(r_i) is the number of learning objects satisfying both the condition and decision parts of rule r_i. A new object is assigned to the class with the highest total support. In the case of not matching any rule, so-called partial or flexible matching is considered, where at least one of the rule's conditions is satisfied by the corresponding attributes in the new object's description x. In this case, a matching factor match(r, x) is introduced as the ratio of the conditions matched by object x to all conditions in rule r. The total support is modified to:

sup(C_k) = Σ_{i=1}^{p} match(r_i, x) · sup(r_i),

where p is the number of partially matched rules, and object x is assigned to the class with the highest value of sup(C_k). Another example of a classification strategy is the proposal of Aijun An in BIB002 . It uses a rule quality measure different from rule support, namely a measure of discrimination Q_MD, defined in terms of the probabilities of the rule covering positive and negative objects of class C_k. For the technical details of estimating these probabilities and adjusting the formula to prevent division by zero, see BIB002 . Its interpretation is that it measures the extent to which rule r discriminates between positive and negative objects of class C_k. The only difference between the two described classification strategies is the choice of rule quality measure: putting Q_MD in place of sup(r). Moreover, classification strategies can be adapted to abstain from a class prediction when the final decision is uncertain. This modification can influence the final classification accuracy of an ensemble consisting of rule-based component classifiers.
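The voting strategy above (full matching on rule supports, with a fall-back to partial matching weighted by the matching factor) can be sketched as follows. The rule representation is a hypothetical simplification: each rule is a (conditions, label, support) triple with equality selectors, and the function names are assumptions.

```python
def matching_factor(conditions, x):
    """match(r, x): ratio of conditions matched by object x to all
    conditions in the rule."""
    matched = sum(1 for at, v in conditions if x.get(at) == v)
    return matched / len(conditions)

def classify(rules, x):
    """rules: list of (conditions, label, support) triples.
    Try full matching first; if no rule fully matches, fall back to
    partial matching weighted by the matching factor."""
    votes = {}
    for conds, label, sup in rules:
        if all(x.get(at) == v for at, v in conds):
            votes[label] = votes.get(label, 0.0) + sup
    if not votes:                      # partial (flexible) matching
        for conds, label, sup in rules:
            mf = matching_factor(conds, x)
            if mf > 0:
                votes[label] = votes.get(label, 0.0) + mf * sup
    return max(votes, key=votes.get) if votes else None
```

For example, an object matching no rule fully but half of a high-support rule's conditions still receives that rule's vote, scaled by the matching factor 0.5.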
This idea was inspected by Błaszczyński et al. in . Because of their natural and easy form of representation, decision rules can be inspected and interpreted by a human. They are also more comprehensible than many other knowledge representations. Generally, they provide good interpretability and flexibility for data mining tasks. They take advantage of not being hierarchically structured, so hypotheses can easily be updated when they become out-of-date, without a significant decrease in performance. However, they have not received enough attention in mining data streams.

3 FLORA
<s> BIB015 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unneces- sary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB016 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up--to--date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB017 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> We address adaptive classification of streaming data in the presence of concept change. An overview of the machine learning approaches reveals a deficit of methods for explicit change detection. Typically, classifier ensembles designed for changing environments do not have a bespoke change detector. Here we take a systematic look at the types of changes in streaming data and at the current approaches and techniques in online classification. 
Classifier ensembles for change detection are discussed. An example is carried through to illustrate individual and ensemble change detectors for both unlabelled and labelled data. While this paper does not offer ready-made solutions, it outlines possibilities for novel approaches to classification of streaming data. <s> BIB018 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Sales prediction is an important problem for different companies involved in manufacturing, logistics, marketing, wholesaling and retailing. Different approaches have been suggested for food sales forecasting. Several researchers, including the authors of this paper, reported on the advantage of one type of technique over the others for a particular set of products. In this paper we demonstrate that besides an already recognized challenge of building accurate predictive models, the evaluation procedures themselves should be considered more carefully. We give illustrative examples to show that e.g. popular MAE and MSE estimates can be intuitive with one type of product and rather misleading with the others. Furthermore, averaging errors across differently behaving products can be also counter intuitive. We introduce new ways to evaluate the performance of wholesales prediction and discuss their biases with respect to different error types. <s> BIB019 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. 
The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB020 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Fuel feeding and inhomogeneity of fuel typically cause process fluctuations in the circulating fluidized bed (CFB) boilers. If control systems fail to compensate the fluctuations, the whole plant will suffer from fluctuations that are reinforced by the closed-loop controls. Accurate estimates of fuel consumption among other factors are needed for control systems operation. In this paper we address a problem of online mass flow prediction. Particularly, we consider the problems of (1) constructing the ground truth, (2) handling noise and abrupt concept drift, and (3) learning an accurate predictor. Last but not least we emphasize the importance of having the domain knowledge concerning the considered case. We demonstrate the performance of OMPF using real data sets collected from the experimental CFB boiler. <s> BIB021 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Sales prediction is a complex task because of a large number of factors affecting the demand. 
We present a context aware sales prediction approach, which selects the base predictor depending on the structural properties of the historical sales. In the experimental part we show that there exist product subsets on which, using this strategy, it is possible to outperform naive methods. We also show the dependencies between product categorization accuracies and sales prediction accuracies. A case study of a food wholesaler indicates that moving average prediction can be outperformed by intelligent methods, if proper categorization is in place, which appears to be a difficult task. <s> BIB022 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Concept drift refers to a non stationary learning problem over time. The training and the application data often mismatch in real life problems. In this report we present a context of concept drift problem 1. We focus on the issues relevant to adaptive training set formation. We present the framework and terminology, and formulate a global picture of concept drift learners design. We start with formalizing the framework for the concept drifting data in Section 1. In Section 2 we discuss the adaptivity mechanisms of the concept drift learners. In Section 3 we overview the principle mechanisms of concept drift learners. In this chapter we give a general picture of the available algorithms and categorize them based on their properties. Section 5 discusses the related research fields and Section 5 groups and presents major concept drift applications. This report is intended to give a bird's view of concept drift research field, provide a context of the research and position it within broad spectrum of research fields and applications. <s> BIB023 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with decision attribute. 
The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating drift detector into the evolving ensemble. Its performance was evaluated experimentaly on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier. The results show that BWE improves evaluation measures like processing time, memory used and obtain competitive total accuracy. <s> BIB024 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB025 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Three block based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB026 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Decision rules are one of the most interpretable and flexible models for data mining prediction tasks. Till now, few works presented online, any-time and one-pass algorithms for learning decision rules in the stream mining scenario. 
A quite recent algorithm, the Very Fast Decision Rules (VFDR), learns set of rules, where each rule discriminates one class from all the other. In this work we extend the VFDR algorithm by decomposing a multi-class problem into a set of two-class problems and inducing a set of discriminative rules for each binary problem. The proposed algorithm maintains all properties required when learning from stationary data streams: online and any-time classifiers, processing each example once. Moreover, it is able to learn ordered and unordered rule sets. The new approach is evaluated on various real and artificial datasets. The new algorithm improves the performance of the previous version and is competitive with the state-of-the-art decision tree learning method for data streams. <s> BIB027 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FLORA <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaption to changes and generates compact rule sets. 
Effective learning in environments with hidden contexts and concept drifts requires a learning algorithm which fulfills certain conditions BIB014:
• it can detect context changes without being explicitly informed;
• it can quickly recover from a concept change and adjust its hypotheses;
• it can make use of previous descriptions when concepts reappear.
One of the possible solutions is to trust only the latest examples; this is known as the windowing mechanism. The window of examples may be of a fixed or a flexible size. New examples are added to the window as they arrive, and old ones are removed when appropriate conditions are fulfilled. These window operations trigger modifications of the current hypotheses so that they remain consistent with the examples held in the window. This idea is widely used and forms the essence of the FLORA framework proposed in BIB014. The FLORA framework is restricted to data containing only nominal attributes and can solve only binary classification problems. In the FLORA framework each concept is represented by three sets of rule antecedents: ADES (Accepted DEScriptors), NDES (Negative DEScriptors) and PDES (Potential DEScriptors). ADES contains descriptions covering only positive examples, and NDES only negative examples. PDES consists of descriptions that match both positive and negative examples. ADES is used to classify new incoming examples, while NDES is used to prevent the over-generalization of ADES. PDES acts as storage for descriptions that might become relevant in the future BIB014. Every description item has corresponding counters, which indicate how many positive or negative examples from the current window are covered by the given description. The counters are updated with every modification of the learning window (addition or deletion of a learning example). A description item is held in memory as long as it covers at least one example from the window. The simple FLORA framework is presented as Algorithm 2. The FLORA framework operates as follows.
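The triple-set representation with per-item counters can be rendered as a small data structure. This is only an illustrative sketch (the class and field names are ours, not from the FLORA papers); descriptions are modeled as attribute-value constraints over nominal attributes:

```python
from dataclasses import dataclass, field

@dataclass
class DescriptionItem:
    # attribute -> set of admissible values, e.g. {"color": {"red", "blue"}}
    conditions: dict
    pos: int = 0   # positive window examples covered
    neg: int = 0   # negative window examples covered

    def covers(self, example: dict) -> bool:
        return all(example.get(a) in vals for a, vals in self.conditions.items())

@dataclass
class FloraConcept:
    ades: list = field(default_factory=list)  # covers positive examples only
    pdes: list = field(default_factory=list)  # covers both kinds of examples
    ndes: list = field(default_factory=list)  # covers negative examples only

    def classify(self, example: dict) -> bool:
        # only ADES is consulted when classifying new examples
        return any(item.covers(example) for item in self.ades)
```

Dropping an item once both of its counters reach zero would implement the rule that a description is held in memory only while it covers at least one window example.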
When a new positive example is added to the learning window, three situations are possible: a new description item is added to ADES, descriptions existing in ADES are generalized to match the new example, and/or existing items are moved from NDES to PDES. First, the ADES set is tested in order to find a description covering the incoming positive example (lines 3-6). If no such item exists, a generalization of descriptions from ADES is attempted (lines 7-8). If there is no covering item in ADES and no generalization matches the example, the example's full description is added to the ADES set (lines 9-10). Then, the PDES set is searched and the positive-example counters are incremented for the description items that cover the example (lines 11-13). Finally, the NDES set is visited: descriptions that match the new positive example are moved to PDES and their counters are updated. When the incoming example is negative, the same situations are possible, but with respect to the NDES set. First, the NDES set is tested in order to find a description covering the incoming negative example. If no such item exists, a generalization of descriptions from NDES is attempted. If there is no covering item in NDES and no generalization matches the example, the example's full description is added to the NDES set. Then, the PDES set is searched and the negative-example counters are incremented for the description items that cover the example. Finally, the ADES set is visited: descriptions that match the new negative example are moved to PDES and their counters are updated.
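The positive-example branch just described can be sketched as follows. This is a deliberate simplification: the counters, the consistency check that keeps a generalization from covering negative examples, and multi-condition generalization are all omitted, and a description is reduced to a plain dict of required attribute values:

```python
# A description is a dict of attribute -> required value; an empty dict covers everything.
def covers(desc, ex):
    return all(ex.get(a) == v for a, v in desc.items())

def add_positive(ex, ades, pdes, ndes):
    """Incorporate one positive example into the three description sets."""
    # 1. Look for a covering description in ADES.
    if not any(covers(d, ex) for d in ades):
        # 2. Try to generalize an existing ADES item by dropping one condition.
        generalized = False
        for d in ades:
            mismatched = [a for a, v in d.items() if ex.get(a) != v]
            if len(mismatched) == 1:          # dropping that condition covers ex
                del d[mismatched[0]]
                generalized = True
                break
        # 3. Otherwise add the example's full description as a new specific seed.
        if not generalized:
            ades.append(dict(ex))
    # 4. NDES items that now cover a positive example migrate to PDES.
    for d in [d for d in ndes if covers(d, ex)]:
        ndes.remove(d)
        pdes.append(d)
```

The negative-example branch is symmetric, with the roles of ADES and NDES exchanged.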
When an example is deleted from the learning window, the appropriate counters are decreased. This may result in the removal of a description or in its migration from PDES to ADES or NDES, depending on whether the example is negative or positive. If the example to be deleted is positive, first the ADES set is visited: the positive-example counters are decremented for the description items that match the example, and if a counter drops to 0, the description is removed from ADES. Then the PDES set is tested: the positive-example counters are decremented for the matching description items, and if a counter drops to 0, the description is moved from PDES to NDES. If the example to be deleted is negative, first the NDES set is visited: the negative-example counters are decremented for the matching description items, and if a counter drops to 0, the description is removed from NDES (lines 49-53). Then the PDES set is tested: the negative-example counters are decremented for the matching description items, and if a counter drops to 0, the description is moved from PDES to ADES (lines 54-58).
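The deletion path can be sketched in the same simplified style, with the counters stored alongside each description (again an illustration of the mechanism, not the original pseudocode):

```python
def forget_positive(ex, ades, pdes, ndes):
    """Remove one positive example from the window.
    Each set entry has the form {"desc": {...}, "pos": int, "neg": int}."""
    def covers(desc, e):
        return all(e.get(a) == v for a, v in desc.items())

    # ADES: decrement positive counters; drop items left with no positive support.
    for item in list(ades):
        if covers(item["desc"], ex):
            item["pos"] -= 1
            if item["pos"] == 0:
                ades.remove(item)

    # PDES: items losing all positive support now cover only negatives -> NDES.
    for item in list(pdes):
        if covers(item["desc"], ex):
            item["pos"] -= 1
            if item["pos"] == 0:
                pdes.remove(item)
                ndes.append(item)
```

Forgetting a negative example mirrors this, acting on NDES and moving fully "positive" PDES items into ADES.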
Algorithm 2: simple FLORA algorithm
Input: E - incoming example; ADES - a set of accepted descriptors; PDES - a set of potential descriptors; NDES - a set of negative descriptors. Output: ADES, PDES, NDES - modified description sets.

The ADES, NDES, and PDES sets are kept non-redundant and consistent with respect to the examples in the window. FLORA does not implement any specialization operator. If a new example cannot be covered by any description or generalization, its full description is added to ADES or NDES, depending on whether the example is positive or negative. The new incoming example acts as a specific seed, which may be generalized in the future. FLORA uses a generalization operator known as the dropping-condition rule, which removes attribute-value pairs from a single description item. The simple FLORA framework assumes that only the latest fixed number of examples are relevant and should be kept in the window. However, the question arises of how many examples are sufficient to describe the current concepts. The authors expanded FLORA with a heuristic for flexible windowing in the FLORA2 algorithm. The motivation for this improvement was the effect of an inappropriate window size: too small a window will not contain a sufficient number of examples to describe a stable concept, while too large a window will slow down the reaction to a concept drift.
A good heuristic for flexible windowing should shrink the window when a concept drift seems to occur, keep the window size fixed when concepts are stable, and let the window grow until the concepts are stabilized. FLORA2's heuristic, called the Window Adjustment Heuristic (WAH), meets the above requirements. The pseudocode of WAH is presented as Algorithm 3. If a concept drift is detected, WAH decreases the window size by 20% (lines 1-2). In case of extremely stable concepts, WAH decreases the window size by 1 unit (lines 3-4). If the current concept seems stable, the window size remains unchanged (lines 5-6). Otherwise, when the algorithm assumes that more examples are necessary, the window size is increased by 1 unit (lines 7-8). FLORA2 was tested on an artificial learning problem used by Schlimmer and Granger in BIB001 - the STAGGER concepts. The example space is defined by three attributes: size ∈ {small, medium, large}, color ∈ {red, green, blue}, and shape ∈ {square, circle, triangle}. There is also a sequence of three target concepts: (1) size = small ∧ color = red, (2) color = green ∧ shape = circle, and (3) size = medium ∨ size = large. FLORA's authors randomly generated 120 training examples and labeled them according to some hidden context. After processing each example, the accuracy of classification was tested on a separate testing set of 100 examples. The concept was changed after every 40 examples. The obtained results showed that after a sudden change the total accuracy drops sharply, but FLORA2 quickly adjusts to the new concepts and approaches 100% accuracy. WAH behaves as expected: a sudden change leads to a short increase in window size, followed by narrowing of the window and forgetting of irrelevant examples. In domains where old contexts may reappear, it would be a waste of time and effort to relearn an old concept from scratch.
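The four WAH cases above translate directly into code. A minimal sketch (the three boolean flags stand in for FLORA2's internal drift and stability monitors, whose exact conditions are not reproduced here):

```python
def adjust_window_size(window_size, drift_suspected, extremely_stable, stable):
    """Window Adjustment Heuristic (WAH), as summarized above."""
    if drift_suspected:
        return max(1, int(window_size * 0.8))   # shrink by 20% on suspected drift
    if extremely_stable:
        return max(1, window_size - 1)          # slowly forget when very stable
    if stable:
        return window_size                      # keep the size while stable
    return window_size + 1                      # otherwise grow to gather evidence
```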
The cost of relearning recurring concepts from scratch was the reason for inventing the FLORA3 algorithm, which introduces a mechanism for storing and recalling previous concepts. The mechanism is tightly associated with the WAH heuristic. FLORA3 differs from FLORA2 in that after every learning step it checks the current state of its hypotheses in order to decide whether some old concept descriptions are useful. The main idea is that when a change occurs, the system should check which descriptions better explain the examples currently in the window: the new concepts or some old ones. On the other hand, during periods of stability it may be worthwhile to store the current descriptions for future reuse. WAH decides when to store or reexamine the old concepts. If WAH signals a drift, the system examines its storage of old descriptions in order to find one that fits the current state of the learning window. If one is found that is more appropriate than the current description, it replaces the current one. The procedure for reevaluating old concepts consists of three steps. First, the best candidate is selected from all stored concepts that are consistent with the current examples in the window: it is the one with the highest ratio of positive to negative examples matched in the learning window. Then, the best candidate's counters are recalculated to reflect the examples in the window. In the last step, the updated best candidate is compared with the current concept description on a measure of fitness. In FLORA3 the measure of fitness is estimated by the relative complexity of the descriptions: the more compact the ADES, the better. To maintain the efficiency of the learning algorithm, the old concepts are not checked after every new training example; they are only retrieved when WAH suspects a concept drift. For more details see BIB002 . FLORA3 was tested on an artificial situation of recurring context.
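The first step of the reevaluation procedure, scoring stored concepts by the ratio of positive to negative window examples they match, might look as follows. This is a sketch: `matches` is a caller-supplied coverage predicate, and the +1 smoothing in the denominator is our choice to avoid division by zero, not necessarily FLORA3's:

```python
def best_stored_candidate(stored_concepts, window, matches):
    """window: list of (example, is_positive) pairs.
    matches(concept, example) -> bool decides coverage by the concept."""
    def score(concept):
        pos = sum(1 for ex, positive in window if positive and matches(concept, ex))
        neg = sum(1 for ex, positive in window if not positive and matches(concept, ex))
        return pos / (neg + 1)
    return max(stored_concepts, key=score, default=None)
```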
The dataset consisted of three STAGGER concepts repeated three times in cyclic order: 1-2-3-1-2-3-1-2-3. Training and testing examples were created using the same procedure as for FLORA2. The results showed that storing and reusing old concepts leads to a noticeable improvement in reaction time to reappearing concepts. In most cases FLORA3 relearns faster and obtains higher accuracy levels than the simpler FLORA2. The previous versions of FLORA deal with the main types of concept drift and recurring concepts; however, they were not robust to noise. This is one of the difficulties in incremental learning: distinguishing between real concept drift and slight irregularities that should be treated as noise in the data. Methods that react quickly to every sign of change may overreact to noise, which may result in instability and low classification accuracy. An ideal learner should combine stability and robustness with flexible and effective tracking of concept change BIB002 . That is why FLORA4 replaces the strict consistency condition, inherited from FLORA2 and FLORA3, with a softer notion of reliability. In FLORA4, statistical confidence intervals around the classification accuracy are calculated for every description item. Decisions on when to move descriptions between the sets are made based on the relation between these confidence intervals and the observed class frequencies. The transitions among the description sets are as follows BIB002 .
• A description item is kept in ADES if the lower endpoint of its accuracy confidence interval is greater than the upper endpoint of the class frequency interval.
• A description item from ADES is moved to PDES when its accuracy interval overlaps with the class frequency interval.
• A description item is dropped from ADES if the upper endpoint of its accuracy interval is lower than the lower endpoint of the class frequency interval.
• Description items in NDES are kept as long as the lower endpoint of the accuracy confidence interval is greater than the upper endpoint of the class frequency interval computed over the negative examples in the window.
• There is no migration between NDES and PDES; unacceptable hypotheses from NDES are deleted.
The main effect of this strategy is that generalizations in ADES and NDES may cover some negative or positive examples, respectively. PDES acts as a buffer for descriptions that cover too many negative examples or whose absolute number of covered examples is too small. The rest of the algorithm's mechanisms remain unchanged. FLORA4 was also tested on the STAGGER concepts and compared with FLORA3 and FLORA2. In a noise-free environment FLORA4 is initially a bit slower in reacting to a change than its predecessors; however, it then reaches high classification accuracy faster than the previous versions. For different amounts of noise FLORA4 is again a bit slower in reacting to a change, but it soon outperforms its predecessors, and the difference in classification accuracy is greater than for the noise-free data. FLORA4 was also compared with the IB3 algorithm. For more details see BIB002 .
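The interval tests above can be illustrated with a normal-approximation confidence interval. The exact interval formula used in FLORA4 may differ; this sketch only shows the keep/demote/drop decision for ADES items:

```python
import math

def confidence_interval(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

def ades_decision(item_hits, item_n, class_hits, class_n):
    """Compare a description's accuracy interval with the class frequency interval."""
    lo_i, hi_i = confidence_interval(item_hits, item_n)
    lo_c, hi_c = confidence_interval(class_hits, class_n)
    if lo_i > hi_c:
        return "keep in ADES"
    if hi_i < lo_c:
        return "drop"
    return "move to PDES"
```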
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 5: Method for finding extreme examples <s> Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. We extend our previous work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and the task of computer intrusion detection, we conducted a lesion study to analyze trade-offs in performance. Results showed that, although our partial-memory model decreased predictive accuracy by 2%, it also decreased memory requirements by 75%, learning time by 75%, and in some cases, concept complexity by 10%, an outcome consistent with earlier results using our partial-memory method and batch learning. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 5: Method for finding extreme examples <s> Learning concepts that change over time is important for a variety of applications in which an intelligent system must acquire and use a behavioral profile. Computer intrusion detection, calendar scheduling, and intelligent user interfaces are three examples. An interesting class of methods for learning such concepts consists of algorithms that maintain a portion of previously encountered examples. Since concepts change over time and these methods store selected examples, mechanisms must exist to identify and remove irrelevant examples of old concepts. In this paper, we describe an incremental rule learner with partial instance memory, called AQ11-PM+WAH, that uses Widmer and Kubat's heuristic to adjust dynamically the window over which it retains and forgets examples. We evaluated this learner using the STAGGER concepts and made direct comparisons to AQ-PM and to AQ11-PM, similar learners with partial instance memory.
Results suggest that the forgetting heuristic is not restricted to FLORA2, the learner for which it was originally designed. Overall, results from this study and others suggest learners with partial instance memory converge more quickly to changing target concepts than algorithms that learn solely from new examples. <s> BIB002 | Input: S_i - current set of learning examples; CR - characteristic rules. Output: S_ee - a set of extreme examples. Next, the new extreme examples are selected using the strict matching strategy (line 6). The transformed rule is applied to the current training set, and the examples that match the edges of the transformed rule under the strict matching strategy are the extreme ones. In the end, the current extreme examples are combined with the previously obtained ones (line 7). AQ-PM is equipped with implicit forgetting: examples are dropped from partial memory when they no longer force a boundary. AQ-PM was tested on three problems: STAGGER concepts, blasting cap detection and computer intrusion detection. The algorithm was compared with a simpler baseline version of AQ-PM, with the partial memory mechanism disabled, and with IB2. The STAGGER concepts dataset consisted of 120 examples with sudden changes after every 40 examples. At each time step, a single training example and 100 testing examples were randomly generated. AQ-PM obtained higher total classification accuracy than its opponents, with accuracy values comparable to those obtained by the FLORA system. The size of the memory held by AQ-PM was compared with FLORA2's requirements: over the entire learning phase, FLORA2 kept 15 examples, while AQ-PM maintained on average 6.6 examples in partial memory. Blasting cap detection and computer intrusion detection were not evaluated by other researchers, so for the results and more details on these problems see . AQ-PM was extended by combining the method for selecting extreme examples with the incremental learning system AQ11.
The resulting AQ11-PM algorithm was described in BIB001 . The AQ11 learning system does not operate in batch mode but incrementally generates new rules from the existing rules and new training examples. The standard AQ11 algorithm has no instance memory. It reads each example only once and drops it after the learning phase. The AQ11 learning process consists of three main steps. In the first phase, the algorithm searches for difficult examples in the new training set: the ones that are misclassified. If a rule covers a new negative example, then in the second step, the rule is specialized to be consistent using the AQ11 covering algorithm. In the end, the specialized positive rule is combined with the new positive training examples, and AQ is used to generalize them as much as possible without intersecting any of the negative rules and without covering any of the new negative examples. AQ11 uses this same procedure to learn rules incrementally for both the positive and negative class. Furthermore, this process can be adapted to problems with multiple classes. In this case, one class is selected and treated as the positive one, while the other labels are treated as negative. The learning process is performed on such partitions, and this division is repeated for each class present in the new training set. Because AQ11 has no instance memory, it relies solely on its current set of rules. Its rules are complete and consistent with respect to the current examples only. Like every incremental learner, it can be susceptible to the ordering effect. This effect can be weakened by using a partial instance memory. However, certain applications may require an additional mechanism to remove examples from the partial memory when they become too old. AQ11-PM was also tested on three problems: STAGGER concepts, blasting cap detection and computer intrusion detection. For the STAGGER concepts, the algorithm was compared with the unmodified version of AQ11 and with AQ-PM.
The STAGGER concepts dataset was the same as the one created for the AQ-PM evaluation. At each time step, the classification accuracy and the number of examples in partial memory were recorded. AQ11-PM stores more examples than AQ-PM; however, it was able to achieve higher predictive accuracy than its predecessor on all the target concepts. AQ11-PM outperformed FLORA2 on classification accuracy on the second and third contexts, but was weaker on the first one. Regarding memory requirements, both of the AQ-family algorithms stored fewer examples during the evaluation than FLORA2. Blasting cap detection and computer intrusion detection were not evaluated by other researchers, so for the results and more details on these problems see BIB001 . The AQ11-PM algorithm was also combined with FLORA's window adjustment heuristic (Algorithm 3) to dynamically adjust the window over which it retains and forgets examples; this mechanism helps to deal with changing concepts. The proposal was described in BIB002 . AQ11-PM+WAH was evaluated using the STAGGER concepts. It was compared with AQ11, AQ11-PM and AQ-PM on total classification accuracy and the number of maintained examples. The results suggest that the partial-memory classifiers learn faster than simple incremental systems do. AQ11-PM and AQ11-PM+WAH outperformed AQ-PM on all three concepts. Moreover, AQ-PM, AQ11-PM and AQ11-PM+WAH are competitive with FLORA2 in terms of predictive accuracy. In addition, the AQ systems store fewer examples in memory. For more details see BIB002 . |
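The boundary-based notion of an extreme example at the heart of AQ-PM (Algorithm 5) can be illustrated with a small sketch. Here rules are modeled as per-attribute intervals and "matching an edge" is taken literally as lying on an interval endpoint; this representation and the function names are assumptions for illustration, not the original AQ-PM code.

```python
def is_extreme(example, rule):
    """An example is 'extreme' for a rule if the rule covers it and
    at least one attribute value lies exactly on an edge (lower or
    upper bound) of the rule's intervals."""
    covered = all(lo <= example[a] <= hi for a, (lo, hi) in rule.items())
    if not covered:
        return False
    return any(example[a] == lo or example[a] == hi
               for a, (lo, hi) in rule.items())

def select_extreme(examples, rule):
    """Keep only the examples that force a boundary of the rule;
    interior examples are candidates for implicit forgetting."""
    return [e for e in examples if is_extreme(e, rule)]
```

Under this reading, an example strictly inside every interval of the rule is forgotten, which mirrors the implicit forgetting described above.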
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Abstract We present ELEM2, a machine learning system that induces classification rules from a set of data based on a heuristic search over a hypothesis space. ELEM2 is distinguished from other rule induction systems in three aspects. First, it uses a new heuristtic function to guide the heuristic search. The function reflects the degree of relevance of an attribute-value pair to a target concept and leads to selection of the most relevant pairs for formulating rules. Second, ELEM2 handles inconsistent training examples by defining an unlearnable region of a concept based on the probability distribution of that concept in the training data. The unlearnable region is used as a stopping criterion for the concept learning process, which resolves conflicts without removing inconsistent examples. Third, ELEM2 employs a new rule quality measure in its post-pruning process to prevent rules from overfitting the data. The rule quality formula measures the extent to which a rule can discriminate between the positive and negative examples of a class. We describe features of ELEM2, its rule induction algorithm and its classification procedure. We report experimental results that compare ELEM2 with C4.5 and CN2 on a number of datasets. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases. 
<s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error will decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example k w , and the drift level at example k d . This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since k w . The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and with learning the new concept. We also observe that the method is independent of the learning algorithm. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Induction of decision rules within the dominance–based rough set approach to the multicriteria and multiattribute classification is considered. Within this framework, we discuss two algorithms: Glance and an extended version of AllRules. 
The important characteristics of Glance is that it induces the set of all dominance–based rules in an incremental way. On the other hand, AllRules induces in a non–incremental way the set of all robust rules, i.e. based on objects from the set of learning examples. The main aim of this study is to compare both these algorithms. We experimentally evaluate them on several data sets. The results show that Glance and AllRules are complementary algorithms. The first one works very efficiently on data sets described by a low number of condition attributes and a high number of objects. The other one, conversely, works well on data sets characterized by a high number of attributes and a low number of objects. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unneces- sary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up--to--date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. 
In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB007 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. 
The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB008 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Three block based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB009 | The previous solutions were not designed to process high-rate data streams. In this environment classifiers have to operate continuously, processing each item in real time only once, which imposes memory and time limitations. Moreover, real data streams are susceptible to changes of context, so the proposed methods should track and adapt to the underlying modifications. A new incremental algorithm, FACIL (Fast and Adaptive Classifier by Incremental Learning), was proposed in BIB005 . It induces a set of decision rules from numerical data streams. This approach allows a rule to be inconsistent by storing the positive and negative examples covered by it. Those examples lie very near one another: they are border examples. A rule is inconsistent when it covers both positive and negative examples. The aim of the system is to remember border examples until a minimum purity of the rule is reached. The purity of a rule is defined as the ratio of the number of positive examples covered by the rule to the total number of covered examples. When the purity falls below the minimum threshold, the examples associated with the rule are used to create new consistent rules.
This approach is similar to the AQ11-PM system; however, it differs in that a rule stores two positive examples for every negative one. This guarantees that an impure rule is always modified from both positive and negative examples. Nevertheless, the examples held in memory are not necessarily extreme. Despite the fact that this proposal suffers from the ordering effect, this does not weaken the learning process. The initial proposal of FACIL operates on m numerical attributes. Every learning example is described by a normalized vector in [0, 1]^m and a discrete class label. A decision rule r is given by a set of m closed intervals [I_jl, I_ju], where l stands for the lower bound and u for the upper bound BIB005 . Rules are separated into different sets according to the appropriate class label. FACIL does not maintain any global window; instead, each rule has a different set of associated examples, i.e., its own window of border examples. Each rule stores the number of positive and negative examples it covers and also an index of the last covered example. The model is updated every time a new example becomes available. The pseudocode of FACIL is presented as Algorithm 6. FACIL operates as follows. When a new example arrives, the rules associated with the example's class label are checked (lines 1-9) and the generalization necessary to describe the new example is calculated (line 2) according to the formula Growth(r, x) = Σ_{j=1..m} (g_j − r_j), where g_j = max(x_ij ; I_ju) − min(x_ij ; I_jl) and r_j = I_ju − I_jl. The measure of growth favors the rule that involves the smallest changes in the minimum number of attributes. A rule with the minimum value of growth becomes a candidate (lines 3-4). However, a rule is taken into account as a possible candidate only if the new example can be seized with a moderate growth (lines 3-4). This occurs when ∀ j ∈ {1..m} : g_j − r_j ≤ κ, where κ ∈ (0; 1] BIB005 .
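The growth computation and candidate selection (lines 1-4 of Algorithm 6) can be sketched as follows, assuming a rule is a list of (lower, upper) intervals over normalized attributes; the tie-breaking and exact aggregation used by FACIL may differ from this sketch.

```python
def growth(rule, x):
    """Per-attribute enlargement needed for `rule` (a list of
    (lo, hi) intervals) to cover example `x`: the increments
    g_j - r_j from the text."""
    incs = []
    for (lo, hi), xj in zip(rule, x):
        g = max(xj, hi) - min(xj, lo)   # g_j
        incs.append(g - (hi - lo))      # g_j - r_j
    return incs

def best_candidate(rules, x, kappa=0.5):
    """Among the rules of the example's class, pick the admissible
    rule with the smallest total growth; a rule is admissible only
    if every per-attribute increment stays below kappa."""
    best, best_total = None, float("inf")
    for r in rules:
        incs = growth(r, x)
        if all(i <= kappa for i in incs):   # moderate growth only
            total = sum(incs)
            if total < best_total:
                best, best_total = r, total
    return best
```

A rule that already covers the example has all increments equal to zero, so it trivially wins the candidate selection.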
If the first rule covering the new example is found, then the number of positive examples covered by the rule is increased and the rule's last-covered-example index is updated (lines 5-7). The example is added to the rule's window if the number of negative examples covered by the rule has increased by one unit (lines 8-9). If none of the rules associated with the example's class label fires for the example (line 10), the rest of the rules, with different class labels, are visited. If a rule with a different label does not cover the example, its intersection with the candidate is checked (line 21); if such an intersection exists, the candidate is rejected (line 24). When a different-labeled rule covers the example (line 12), its negative support is increased (line 13) and the example is added to the rule's window of examples (line 14). If the purity of the rule drops below the minimum value given by the user (line 15), new consistent rules are created from the examples associated with the initial rule and added to the model. The old rule is marked as unreliable (line 18) and cannot be used in the generalization process, even for rules with different labels. A window
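The purity test that triggers rule refinement (around line 15 of Algorithm 6) can be sketched as below. The construction of the new consistent rules is elided, and the per-rule window layout (a list of (example, label) pairs) is an assumption for illustration.

```python
def purity(rule_window):
    """Purity of a FACIL rule: covered positives over all covered
    examples stored in the rule's window."""
    pos = sum(1 for _, label in rule_window if label == "+")
    return pos / len(rule_window) if rule_window else 1.0

def refine(rule_window, min_purity=0.8):
    """If purity falls below the threshold, split the stored border
    examples into per-class groups from which new consistent rules
    would be grown; otherwise leave the rule untouched."""
    if purity(rule_window) >= min_purity:
        return None
    pos = [x for x, lab in rule_window if lab == "+"]
    neg = [x for x, lab in rule_window if lab == "-"]
    return pos, neg
```

In the full algorithm the groups returned here would be generalized into new rules, and the old rule would be marked as unreliable.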
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> This paper presents and compares two algorithms of machine learning from examples, ID3 and AQ, and one recent algorithm from the same class, called LEM2. All three algorithms are illustrated using the same example. Production rules induced by these algorithms from the well-known Small Soybean Database are presented. Finally, some advantages and disadvantages of these algorithms are shown. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> We consider strategies for building classifier ensembles for non-stationary environments where the classification task changes during the operation of the ensemble. Individual classifier models capable of online learning are reviewed. The concept of ”forgetting” is discussed. Online ensembles and strategies suitable for changing environments are summarized. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unneces- sary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. 
This paper describes a classification system based on decision rules that may store up--to--date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> We address adaptive classification of streaming data in the presence of concept change. An overview of the machine learning approaches reveals a deficit of methods for explicit change detection. Typically, classifier ensembles designed for changing environments do not have a bespoke change detector. Here we take a systematic look at the types of changes in streaming data and at the current approaches and techniques in online classification. Classifier ensembles for change detection are discussed. An example is carried through to illustrate individual and ensemble change detectors for both unlabelled and labelled data. While this paper does not offer ready-made solutions, it outlines possibilities for novel approaches to classification of streaming data. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time changing data. 
AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaption to changes and generates compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Decision rules are one of the most interpretable and flexible models for data mining prediction tasks. Till now, few works presented online, any-time and one-pass algorithms for learning decision rules in the stream mining scenario. A quite recent algorithm, the Very Fast Decision Rules (VFDR), learns set of rules, where each rule discriminates one class from all the other. In this work we extend the VFDR algorithm by decomposing a multi-class problem into a set of two-class problems and inducing a set of discriminative rules for each binary problem. The proposed algorithm maintains all properties required when learning from stationary data streams: online and any-time classifiers, processing each example once. Moreover, it is able to learn ordered and unordered rule sets. The new approach is evaluated on various real and artificial datasets. The new algorithm improves the performance of the previous version and is competitive with the state-of-the-art decision tree learning method for data streams. <s> BIB007 | add example e to the negative rule's r n window; BIB006 BIB002 BIB005 . 
If there exists no rule that covers the example and there is no candidate for generalization, then a maximally specific rule describing the new example is added to the appropriate set of rules. Rules can also be deleted from the appropriate sets (line 30). A rule is removed if it is unreliable and its support is smaller than the support of any rule generated from it. The second condition for rule removal is that the number of times the rule prevented generalization of a different-label rule is greater than its support. FACIL is also equipped with a forgetting mechanism for dropping learning examples (line 31). This mechanism can be either explicit or implicit. Examples that are older than a user-defined threshold are deleted: this is explicit forgetting. Implicit forgetting takes place when examples are no longer relevant, i.e., they no longer lie on any rule's boundary. Like every rule-based classifier, FACIL is supplemented with a classification strategy. A new test example is classified by the rules that cover it. Unreliable rules that cover the example are rejected; reliable rules are used to classify the test example. Consistent rules classify new examples by strict matching. Inconsistent rules act like the nearest neighbor algorithm and classify the new example by distance. The authors do not explain exactly how this is performed; probably, they calculate the Euclidean distance between the test example and the rule's boundaries. In the case when no rule covers the example, it is assigned the label of the reliable rule with the minimal value of growth and an empty intersection with any rule of a different label. The initial version of FACIL was evaluated on 12 real datasets from the UCI repository 1 and on a synthetic data stream generated from a moving hyperplane. In the case of the real data, concept drift is not present.
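The classification strategy described above might look like the sketch below. Since the paper does not specify the distance computation for inconsistent rules, this sketch uses the Euclidean distance to the examples stored in each inconsistent rule's window (an assumption); the rule layout with `ivals`, `consistent`, `unreliable`, `window` and `label` fields is likewise illustrative.

```python
def covers(rule, x):
    """Strict matching: every attribute falls inside the rule's interval."""
    return all(lo <= xj <= hi for (lo, hi), xj in zip(rule["ivals"], x))

def classify(rules, x):
    """Reliable consistent rules decide by covering; inconsistent
    rules act as nearest neighbour over their stored examples."""
    firing = [r for r in rules if not r["unreliable"] and covers(r, x)]
    for r in firing:
        if r["consistent"]:
            return r["label"]
    best_label, best_d = None, float("inf")
    for r in firing:                      # only inconsistent rules left
        for ex, lab in r["window"]:
            d = sum((a - b) ** 2 for a, b in zip(ex, x)) ** 0.5
            if d < best_d:
                best_label, best_d = lab, d
    return best_label
```

If no rule fires at all, this sketch returns None; the full strategy would then fall back to the minimal-growth reliable rule mentioned in the text.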
During the experiments, the total classification accuracy, the learning time and the number of induced rules were recorded. FACIL was compared with the C4.5Rules algorithm. In half of the real problems, FACIL obtains better classification accuracy. Because FACIL is a single-pass solution, its processing time is always significantly shorter than that of the multi-pass C4.5Rules. For the hyperplane data stream, the authors evaluated the computational cost as a function of the number of attributes. FACIL was not compared with any other existing stream mining solution. For detailed results see BIB003 . The initial version of FACIL was extended to process symbolic attributes in BIB004 . The formula for calculating the growth of a rule was changed to handle nominal attributes: Growth(r, x) = Σ_{j=1..m} ∆(T_j, x_j), where for numeric attributes ∆(T_j, x_j) = min(|I_jl − x_j|; |x_j − I_ju|), and for nominal attributes ∆(T_j, x_j) = 0 if the example's attribute value x_j is covered by the rule, and in the opposite case- The extension of FACIL was tested on a moving hyperplane problem. Again, the authors focused on evaluating the computational cost as a function of the number of attributes. The total classification accuracy drops with the number of attributes. The processing time increases with the size of the hyperplane problem. For detailed results see BIB004 . |
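The extended per-attribute term ∆ can be sketched as follows. The text is truncated exactly where it defines ∆ for an uncovered nominal value, so the value 1 used below is an assumption; representing a numeric attribute as a (lo, hi) tuple and a nominal one as a set of admissible values is likewise illustrative.

```python
def delta(term, xj):
    """Per-attribute term of the extended Growth formula: distance to
    the nearest interval edge for numeric attributes (as stated in
    the text), 0/1 membership for nominal ones (the uncovered value
    of 1 is an assumption)."""
    if isinstance(term, tuple):                 # numeric: (lo, hi)
        lo, hi = term
        return min(abs(lo - xj), abs(xj - hi))
    return 0 if xj in term else 1               # nominal: value set

def growth_mixed(rule, x):
    """Growth(r, x) = sum of the per-attribute delta terms."""
    return sum(delta(t, xj) for t, xj in zip(rule, x))
```

Note that, taken verbatim from the text, the numeric ∆ is nonzero even for values inside the interval, since it measures the distance to the nearest edge.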
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Abstract The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Abstract We present ELEM2, a machine learning system that induces classification rules from a set of data based on a heuristic search over a hypothesis space. ELEM2 is distinguished from other rule induction systems in three aspects. First, it uses a new heuristtic function to guide the heuristic search. The function reflects the degree of relevance of an attribute-value pair to a target concept and leads to selection of the most relevant pairs for formulating rules. 
Second, ELEM2 handles inconsistent training examples by defining an unlearnable region of a concept based on the probability distribution of that concept in the training data. The unlearnable region is used as a stopping criterion for the concept learning process, which resolves conflicts without removing inconsistent examples. Third, ELEM2 employs a new rule quality measure in its post-pruning process to prevent rules from overfitting the data. The rule quality formula measures the extent to which a rule can discriminate between the positive and negative examples of a class. We describe features of ELEM2, its rule induction algorithm and its classification procedure. We report experimental results that compare ELEM2 with C4.5 and CN2 on a number of datasets. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, CN2, designed for the efficient induction of simple, comprehensible production rules in domains where problems of poor description language and/or noise may be present. Implementations of the CN2, ID3, and AQ algorithms are compared on three medical classification tasks. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases. 
<s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> An emerging problem in Data Streams is the detection of concept drift. This problem is aggravated when the drift is gradual over time. In this work we define a method for detecting concept drift, even in the case of slow gradual change. It is based on the estimated distribution of the distances between classification errors. The proposed method can be used with any learning algorithm in two ways: using it as a wrapper of a batch learning algorithm or implementing it inside an incremental and online algorithm. The experimentation results compare our method (EDDM) with a similar one (DDM). The latter uses the error-rate instead of the distance-error-rate. <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB007 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data.
Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB008 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Sales prediction is an important problem for different companies involved in manufacturing, logistics, marketing, wholesaling and retailing. Different approaches have been suggested for food sales forecasting. Several researchers, including the authors of this paper, reported on the advantage of one type of technique over the others for a particular set of products. In this paper we demonstrate that besides an already recognized challenge of building accurate predictive models, the evaluation procedures themselves should be considered more carefully. We give illustrative examples to show that e.g. popular MAE and MSE estimates can be intuitive with one type of product and rather misleading with the others. Furthermore, averaging errors across differently behaving products can be also counter intuitive. We introduce new ways to evaluate the performance of wholesales prediction and discuss their biases with respect to different error types. <s> BIB009 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. 
The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB010 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB011 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with decision attribute. The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating a drift detector into the evolving ensemble. Its performance was evaluated experimentally on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier.
The results show that BWE improves evaluation measures like processing time and memory used, and obtains competitive total accuracy. <s> BIB012 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Three block-based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB013 | The Very Fast Decision Rules (VFDR) algorithm proposed by Gama and Kosina in BIB011 was also designed for high-speed massive data streams. It reads every learning example only once and induces an ordered or an unordered list of rules. VFDR enables processing both nominal and numeric attributes. The algorithm starts with an empty rule set RS and an empty default rule {} → L. L is a data structure that contains the necessary information for classification of new examples and includes the statistics used for extending the rule. Each rule r is associated with the corresponding data structure L r . Every L r (also L) stores: the number of examples covered by rule r, a vector to calculate the probability of observing examples of class c i , a matrix to calculate the probability of observing value v i of a nominal attribute at i per class, and a b-tree to compute the probability per class of observing values greater than v j for a numerical attribute at i BIB011 . In general, L r accumulates sufficient statistics to compute the entropy for every label of a decision class. L r is updated when its corresponding rule covers a labeled example. The pseudocode of VFDR is presented as Algorithm 7. VFDR operates as follows.
When a new learning example e is available, all decision rules are visited (lines BIB003 BIB006 BIB009 BIB001 BIB002 BIB004 BIB012 BIB013 BIB007 BIB008 BIB005 ). If rule r covers example e (line 2), its corresponding structure L r is updated (line 3). The Hoeffding bound states the number of examples after which a rule set RS should be updated, either by extending some existing rule or by inducing a new rule (line 4). The Hoeffding bound guarantees that with probability 1 − δ the true mean of a random variable x with a range R will not differ from the estimated mean after n independent observations by more than ε = √(R² * ln(1/δ) / (2 * n)) BIB011 . In the next step, the initial value of the entropy is calculated from the statistics gathered in L r (line 5). If the value of the entropy exceeds the Hoeffding bound, then the rule should be enhanced (line 7). The rule is extended as follows. For each attribute and for each of this attribute's values that were observed in more than 10% of examples, the value of the split evaluation function is computed (lines BIB002 BIB004 BIB012 BIB013 BIB007 ). If the value of the evaluation measure for the best split is better than for not splitting, the rule is extended with a new selector obtained from the best split (lines BIB007 ). The selector that minimizes the entropy of the class labels of the examples covered by the rule is added to the previous elementary conditions of the rule. The class label of the rule is then assigned according to the majority class of observations. VFDR can learn an ordered or unordered set of decision rules. In the former case, every labeled example updates the statistics of the first rule that covers it (lines 14-15). For the latter, every rule that covers the example is updated. Those sets of rules are learned in parallel. When none of the rules covers example e (line 16), the default rule is updated (line 17).
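The Hoeffding-bound trigger described above can be sketched as follows. This is a minimal illustration; the function names and the comparison of the best candidate split against the runner-up are my own reading of the scheme, not code from the VFDR paper.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that, with probability 1 - delta, the true mean of a
    random variable with range `value_range` stays within epsilon of the
    mean estimated from n independent observations."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_extend_rule(best_entropy, runner_up_entropy, value_range, delta, n):
    """Extend the rule only when the best candidate split beats the
    runner-up (or the no-split option) by more than the Hoeffding bound,
    so the choice is stable with probability 1 - delta."""
    return (runner_up_entropy - best_entropy) > hoeffding_bound(value_range, delta, n)
```

For example, with R = 1, δ = 0.05 and n = 200 observed examples, ε ≈ 0.087, so a candidate split has to reduce the entropy by more than that margin before the rule is extended.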
Then, if the number of examples in L exceeds the minimum number of examples obtained from the Hoeffding bound, new decision rules are induced from the default rule, using the same rule-growth mechanism as described earlier (lines BIB010 BIB011 ). VFDR, like every rule-based classifier, is equipped with a classification strategy. The simplest strategy uses the stored distribution of classes: an example is classified to the class with the maximum value of probability. A more sophisticated strategy is based
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 7: Very Fast Decision Rules algorithm <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 7: Very Fast Decision Rules algorithm <s> Decision rules are one of the most interpretable and flexible models for data mining prediction tasks. Till now, few works presented online, any-time and one-pass algorithms for learning decision rules in the stream mining scenario. A quite recent algorithm, the Very Fast Decision Rules (VFDR), learns a set of rules, where each rule discriminates one class from all the others. In this work we extend the VFDR algorithm by decomposing a multi-class problem into a set of two-class problems and inducing a set of discriminative rules for each binary problem. The proposed algorithm maintains all properties required when learning from stationary data streams: online and any-time classifiers, processing each example once. Moreover, it is able to learn ordered and unordered rule sets. The new approach is evaluated on various real and artificial datasets. The new algorithm improves the performance of the previous version and is competitive with the state-of-the-art decision tree learning method for data streams. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 7: Very Fast Decision Rules algorithm <s> Data streams are usually characterized by changes in the underlying distribution generating data.
Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time-changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaptation to changes and generates compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB003 | Input : e-a new learning example; RS-current set of rules; ordered-flag indicating induction of ordered set of rules; S min -a minimum number of examples from Hoeffding bound; δ-threshold for probability used in Hoeffding bound; SEF -a split evaluation function; Output: RS -modified set of decision rules 1 foreach rule r ∈ RS do 20 Return RS on the Bayes rule with the assumption of attribute independence for the class. The Naive Bayes strategy uses the prior distribution of the classes and also the conditional probabilities of the attribute-value pairs given the class. As a result, for each testing example e = (v 1 , ..., v j ), the probability that example e belongs to decision class c k is P (c k |e) ∝ P (c k ) ∏ j P (v j |c k ) BIB001 . Thanks to this strategy, more of the information available with each rule is exploited.
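This Naive Bayes scoring from the statistics stored with a rule can be sketched as follows. The dictionary layout and the Laplace smoothing below are my own illustrative simplification; they stand in for the vector, matrix and b-tree of L r rather than reproduce them.

```python
import math

def naive_bayes_classify(example, class_counts, cond_counts, n_values):
    """example: dict attribute -> value; class_counts: class -> #examples;
    cond_counts: class -> {(attribute, value): co-occurrence count};
    n_values: attribute -> number of distinct values (Laplace smoothing).
    Returns the class maximizing P(c) * prod_j P(v_j | c), computed in
    log-space for numerical stability."""
    total = sum(class_counts.values())
    best_class, best_score = None, float("-inf")
    for c, n_c in class_counts.items():
        score = math.log(n_c / total)  # prior P(c)
        for attribute, value in example.items():
            count = cond_counts.get(c, {}).get((attribute, value), 0)
            score += math.log((count + 1) / (n_c + n_values[attribute]))
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

A rule that has seen mostly "pos" examples with a given attribute value will thus classify a matching test example as "pos", even when the raw class counts alone are less decisive.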
An example is classified to the class with the maximum a posteriori probability. In case of the ordered set of rules, only the first rule that covers an example is fired. With the unordered set of rules, results returned by all rules that match the example are combined using weighted voting. This type of voting assumes that not all voters are equal. Instead, they are diversified by being given different weights. The authors did not provide information on how weights are assigned to each of the decision rules. VFDR was tested on six different data streams: disjunctive concepts, hyperplane, LED, SEA, STAGGER, and Waveform. The authors tested two different classification strategies. Use of the Bayes theorem improves the predictive capabilities of the algorithm. The authors also compared an ordered versus an unordered set of rules. The experimental evaluation showed that the unordered rule set is more competitive than the ordered one with respect to the accuracy of classification. In the end, VFDR (with the Bayes classification strategy and an unordered set of rules) was compared with VFDT and C4.5Rules. VFDR is much more efficient than C4.5Rules in terms of memory and processing time. It also obtained competitive results against VFDT. For more details see BIB001 . The initial version of VFDR was extended to deal with multi-class problems in BIB002 . The proposed algorithm VFDR-MC decomposes a multi-class problem into a set of two-class problems and induces a set of discriminative rules for each binary problem. VFDR-MC applies a one-versus-all strategy in which examples of one class are positive and all others are negative. It considers a rule expansion for each of the classes observed with the current rule. The expansion of a rule is different for the default rule and for an already existing rule. It also depends on the type of generated rule set: ordered or unordered.
The default rule is expanded to a new rule with a literal for which a gain function, adopted from the FOIL classifier, obtains the best value. The rule's decision class is indicated by the class with minimum frequency among those that satisfy the Hoeffding bound condition. The ordered strategy stops after finding the first candidate rule that is better than the previous one. The unordered strategy checks all possible expansions for every decision class. In case of extending a rule that already exists, the procedure also depends on whether the ordered or unordered set of decision rules is induced. In case of the ordered set, only literals for the positive class are tested. For the unordered set of decision rules, the class of the expanded rule is kept as positive for the first calculation of the gain measure. Next, computations with other classes set as positive are performed. This makes it possible to produce more than one rule, but not always for all the available decision classes. For more details see BIB002 . VFDR-MC was tested on six different data streams: KDDCup99, covtype, hyperplane, SEA, LED, and Random Tree. The authors observed that the unordered version obtained generally better classification accuracy than the ordered one. Moreover, unordered VFDR-MC mostly outperforms the base version of VFDR on multi-class data sets. Learning time for the ordered rule set is almost the same as for building a Hoeffding Tree. For the unordered set of decision rules, the learning time grows with the number of rules. For more details see BIB002 . VFDR was also improved in order to handle time-changing data. The resulting algorithm, Adaptive Very Fast Decision Rules (AVFDR), was described in BIB003 . AVFDR extends VFDR-MC with explicit drift detection. Each rule in the set of decision rules is equipped with a drift detection method, which tracks the performance of the rule during learning. The applied drift detector is presented as Algorithm 8.
For every learning example covered by the rule, the rule updates its classification error. Moreover, the drift detector manages two additional statistics: error min and stddev min . Those registers are updated if, for a given learning example e, error e + stddev e < error min + stddev min . The flag indicating the type of change for a given rule can take one of three values: None, Warning or Drift. If the rule reaches the Warning level, the rule's learning process is stopped until the flag is set to None again. At the Drift level, the rule is so weak that it is removed from the set of decision rules. This helps to keep the final set of decision rules effective and up-to-date. For more details see BIB003 .
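The gain function adapted from FOIL for choosing the expanding literal is not spelled out in the text. The standard FOIL information gain, which VFDR-MC adapts, can be sketched as follows; the helper names and the common simplification t = p1 are mine, not taken from the paper.

```python
import math

def foil_gain(p0, n0, p1, n1):
    """FOIL information gain for adding a literal to a rule.
    p0, n0: positive/negative examples covered before the refinement;
    p1, n1: the same counts after adding the literal. Uses the common
    simplification t = p1 (positives still covered after refinement)."""
    if p1 == 0:
        return 0.0
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def best_literal(p0, n0, candidates):
    """candidates: literal -> (p1, n1) coverage after adding that literal.
    Returns the candidate literal with the highest FOIL gain."""
    return max(candidates, key=lambda lit: foil_gain(p0, n0, *candidates[lit]))
```

A literal that keeps many positives while excluding all negatives scores highest, which matches the intuition behind picking the most discriminative expansion.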
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 8: AVFDR Drift Detection Method <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with decision attribute. The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating a drift detector into the evolving ensemble. Its performance was evaluated experimentally on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier. The results show that BWE improves evaluation measures like processing time and memory used, and obtains competitive total accuracy. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 8: AVFDR Drift Detection Method <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time-changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaptation to changes and generates compact rule sets.
The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB002 | Input : r-tested decision rule; e-current learning example;
Output: flag ∈ {None, Warning, Drift}-flag indicating type of change
1 flag = None;
2 compute the classification error error e for the given learning example e together with its standard deviation stddev e ;
3 if (error e + stddev e ) < (error min + stddev min ) then
4 error min = error e ;
5 stddev min = stddev e ;
6 if (error e + stddev e ) ≥ (error min + 3 * stddev min ) then
7 flag = Drift;
8 else if (error e + stddev e ) ≥ (error min + 2 * stddev min ) then
9 flag = Warning;
10 Return flag
AVFDR was tested on five artificial data streams: Hyperplane, SEA, LED, RBF, and Waveform, and six real datasets: KDDCup99, Covtype, Elec, Airlines, Connect-4, and Activity. The results obtained on artificial data show that AVFDR works best in changing environments. The accuracy of classification of VFDR's base version decreases with time. In case of the real datasets, AVFDR u obtains competitive classification accuracy with a smaller induced model. For more details see BIB002 .
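The per-rule detector of Algorithm 8 can be sketched in Python as below. The 30-example warm-up before any signal is a common DDM convention I added for stability; it is not part of the pseudocode above.

```python
import math

class RuleDriftDetector:
    """DDM-style detector attached to a single rule, following Algorithm 8:
    it tracks the rule's error rate, keeps the best (error, stddev) pair
    seen so far, and signals Warning / Drift when the current rate degrades
    by two / three minimal standard deviations."""

    WARMUP = 30  # common DDM convention: no signals before 30 examples

    def __init__(self):
        self.n = 0
        self.errors = 0
        self.error_min = float("inf")
        self.stddev_min = float("inf")

    def update(self, misclassified):
        self.n += 1
        self.errors += int(misclassified)
        error = self.errors / self.n
        stddev = math.sqrt(error * (1.0 - error) / self.n)
        if error + stddev < self.error_min + self.stddev_min:
            self.error_min, self.stddev_min = error, stddev
        if self.n < self.WARMUP:
            return "None"
        if error + stddev >= self.error_min + 3 * self.stddev_min:
            return "Drift"    # the rule is pruned from the rule set
        if error + stddev >= self.error_min + 2 * self.stddev_min:
            return "Warning"  # the rule stops learning for now
        return "None"
```

On a rule that suddenly starts misclassifying everything, the flag moves from None through Warning to Drift as the error rate climbs past the stored minima.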
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. We extend our previous work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and the task of computer intrusion detection, we conducted a lesion study to analyze trade-offs in performance. Results showed that, although our partial-memory model decreased predictive accuracy by 2%, it also decreased memory requirements by 75%, learning time by 75%, and in some cases, concept complexity by 10%, an outcome consistent with earlier results using our partial-memory method and batch learning. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Learning concepts that change over time is important for a variety of applications in which an intelligent system must acquire and use a behavioral profile. Computer intrusion detection, calendar scheduling, and intelligent user interfaces are three examples. An interesting class of methods for learning such concepts consists of algorithms that maintain a portion of previously encountered examples. Since concepts change over time and these methods store selected examples, mechanisms must exist to identify and remove irrelevant examples of old concepts. In this paper, we describe an incremental rule learner with partial instance memory, called AQ 11 -PM+WAH, that uses Widmer and Kubat's heuristic to adjust dynamically the window over which it retains and forgets examples. We evaluated this learner using the STAGGER concepts and made direct comparisons to AQ-PM and to AQ 11 -PM, similar learners with partial instance memory. Results suggest that the forgetting heuristic is not restricted to FLORA2, the learner for which it was originally designed.
Overall, results from this study and others suggest learners with partial instance memory converge more quickly to changing target concepts than algorithms that learn solely from new examples. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another.
<s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB005 | Mining data streams recently became a very popular topic of research. Data streams are susceptible to changes in the hidden context, producing what is generally known as concept drift. There exist two main types of concept drift: sudden and gradual. However, there are also other types, like recurring context, as well as two cases to which a good classifier should be resistant: blips and noise. Learning from non-stationary environments is a rather new discipline, but there already exist algorithms that attempt to solve this problem. They can be divided into two main groups: trigger-based and evolving methods. In this paper four key rule-based online algorithms proposed for mining data streams in the presence of concept drift were presented. First, FLORA was described, the first family of algorithms that flexibly react to changes in concepts, can reuse previous hypotheses when a context reappears, and are robust to noise in data BIB003 . Then, algorithms from the AQ family were presented with their modifications. AQ-PM is a static learner that selects extreme examples from rules' boundaries for each incoming batch of data and stores them in the partial memory. AQ11-PM BIB001 is a combination of the incremental AQ11 algorithm with a partial memory mechanism. AQ11-PM+WAH BIB002 is extended with a heuristic for a flexible size of the window with stored examples.
The FACIL algorithm operates similarly to AQ11-PM BIB004 . However, it differs in that the examples stored in the partial memory do not have to be extreme ones. Those three main algorithms were not tested on huge datasets. For massive high-speed data streams a new algorithm called VFDR was proposed in BIB005 . It induces an ordered or an unordered set of decision rules that is efficient in terms of memory and processing time. Those solutions use the same representation of knowledge (decision rules); however, they operate in different ways. These four algorithms can be compared on several criteria, like the type of data. FLORA is restricted only to nominal attributes, whereas AQ11-PM+WAH, FACIL and VFDR process both nominal and numerical attributes. On the other hand, FLORA, AQ11-PM+WAH and FACIL are adjusted to deal with concept drift, whereas VFDR is suitable only for stationary environments. Moreover, FLORA was designed and tested on different types of concept drift: sudden, recurring, and noise. Unfortunately, the first three solutions were not tested on massive data streams with concept drift. Two of them (FLORA and AQ11-PM+WAH) were tested on STAGGER concepts with 120 learning examples, whereas FACIL was evaluated on the moving hyperplane problem. FLORA and AQ11-PM+WAH solve the binary classification problem, but the latter can be extended to the multi-class problem. FACIL and VFDR do not have any restrictions on the number of decision classes. The four proposals also differ in the type of memory that they maintain. FLORA remembers only a window of the most recent examples. AQ11-PM+WAH has a partial memory with extreme examples that lie on the boundaries of induced decision rules. Additionally, application of the WAH heuristic introduced a global learning window, outside which old examples are forgotten. FACIL also maintains a partial memory, but the stored examples do not have to be extreme ones. Every decision rule has its own window of learning examples.
Moreover, it remembers more examples than its predecessor (it stores two positive examples per one negative example). On the other hand, VFDR has no instance memory: it only maintains a set of decision rules with their corresponding data structures L r containing all necessary statistics. Knowledge representation is also maintained in a different way. FLORA stores the conditional part of rules in three description sets: ADES, PDES, and NDES. AQ11-PM+WAH induces a classical unordered set of decision rules. In case of FACIL, rules consist of all conditional attributes, which define an m-dimensional space (intervals). VFDR is the only algorithm that can induce either an unordered or an ordered set of decision rules. Its rules have to be as short as possible. Another criterion that differentiates the four described algorithms is the way induced decision rules are used to classify new examples. Moreover, all algorithms were evaluated in different setups and on different data sets, so the obtained results cannot be compared with each other. It is difficult to state which of the described algorithms is the best. They were introduced at different times and were tested on different data sets. It would be interesting to perform a comparison of those solutions on many data streams containing different types of concept drift with respect to the total accuracy of classification, the memory usage and the processing time. Nowadays the MOA environment, a framework for data stream mining, is very helpful. It contains a collection of machine learning algorithms, data generators and tools for evaluation. More about this project can be found in the literature and on the MOA project website 2 . MOA can be easily extended with new mining algorithms, but also with new stream generators or evaluation measures. Unfortunately the implementations of FLORA, AQ11-PM+WAH, FACIL, and VFDR are not publicly available, hindering such a comparison at present.
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> We predict regulatory targets for 14 Arabidopsis microRNAs (miRNAs) by identifying mRNAs with near complementarity. Complementary sites within predicted targets are conserved in rice. Of the 49 predicted targets, 34 are members of transcription factor gene families involved in developmental patterning or cell differentiation. The near-perfect complementarity between plant miRNAs and their targets suggests that many plant miRNAs act similarly to small interfering RNAs and direct mRNA cleavage. The targeting of developmental transcription factors suggests that many plant miRNAs function during cellular differentiation to clear key regulatory transcripts from daughter cell lineages. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are short RNA molecules that regulate gene expression by binding to target messenger RNAs and by controlling protein production or causing RNA cleavage. To date, functions have been assigned to only a few of the hundreds of identified miRNAs, in part because of the difficulty in identifying their targets. The short length of miRNAs and the fact that their complementarity to target sequences is imperfect mean that target identification in animal genomes is not possible by standard sequence comparison methods. Here we screen conserved 3′ UTR sequences from the Drosophila melanogaster genome for potential miRNA targets. The screening procedure combines a sequence search with an evaluation of the predicted miRNA–target heteroduplex structures and energies. We show that this approach successfully identifies the five previously validated let-7, lin-4, and bantam targets from a large database and predict new targets for Drosophila miRNAs. 
Our target predictions reveal striking clusters of functionally related targets among the top predictions for specific miRNAs. These include Notch target genes for miR-7, proapoptotic genes for the miR-2 family, and enzymes from a metabolic pathway for miR-277. We experimentally verified three predicted targets each for miR-7 and the miR-2 family, doubling the number of validated targets for animal miRNAs. Statistical analysis indicates that the best single predicted target sites are at the border of significance; thus, target predictions should be considered as tentative until experimentally validated. We identify features shared by all validated targets that can be used to evaluate target predictions for animal miRNAs. Our initial evaluation and experimental validation of target predictions suggest functions for two miRNAs. For others, the screen suggests plausible functions, such as a role for miR-277 as a metabolic switch controlling amino acid catabolism. Cross-genome comparison proved essential, as it allows reduction of the sequence search space. Improvements in genome annotation and increased availability of cDNA sequences from other genomes will allow more sensitive screens. An increase in the number of confirmed targets is expected to reveal general structural features that can be used to improve their detection. While the screen is likely to miss some targets, our study shows that valid targets can be identified from sequence alone. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Recent experiments have shown that the genomes of organisms such as worm, fly, human and mouse encode hundreds of microRNA genes. Many of these microRNAs are thought to regulate the translational expression of other genes by binding to partially complementary sites in messenger RNAs. Phenotypic and expression analysis suggest an important role of microRNAs during development. 
Therefore, it is of fundamental importance to identify microRNA targets. However, no experimental or computational high-throughput method for target site identification in animals has been published yet. Our main result is a new computational method which is designed to identify microRNA target sites. This method recovers with high specificity known microRNA target sites which previously have been defined experimentally. Based on these results, we present a simple model for the mechanism of microRNA target site recognition. Our model incorporates both kinetic and thermodynamic components of target recognition. When we applied our method to a set of 74 Drosophila melanogaster microRNAs, searching 3' UTR sequences of a predefined set of fly mRNAs for target sites which were evolutionary conserved between Drosophila melanogaster and Drosophila pseudoobscura, we found that a number of key developmental body patterning genes such as hairy and fushi-tarazu are likely to be translationally regulated by microRNAs. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Summary We present rna22 , a method for identifying microRNA binding sites and their corresponding heteroduplexes. Rna22 does not rely upon cross-species conservation, is resilient to noise, and, unlike previous methods, it first finds putative microRNA binding sites in the sequence of interest, then identifies the targeting microRNA. Computationally, we show that rna22 identifies most of the currently known heteroduplexes. Experimentally, with luciferase assays, we demonstrate average repressions of 30% or more for 168 of 226 tested targets. The analysis suggests that some microRNAs may have as many as a few thousand targets, and that between 74% and 92% of the gene transcripts in four model genomes are likely under microRNA control through their untranslated and amino acid coding regions. 
We also extended the method's key idea to a low-error microRNA-precursor-discovery scheme; our studies suggest that the number of microRNA precursors in mammalian genomes likely ranges in the tens of thousands. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Computational microRNA (miRNA) target prediction is a field in flux. Here we present a guide through five widely used mammalian target prediction programs. We include an analysis of the performance of these individual programs and of various combinations of these programs. For this analysis we compiled several benchmark data sets of experimentally supported miRNA-target gene interactions. Based on the results, we provide a discussion on the status of target prediction and also suggest a stepwise approach toward predicting and selecting miRNA targets for experimental testing. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are small noncoding RNAs that control gene expression by inducing RNA cleavage or translational inhibition. Most human miRNAs are intragenic and are transcribed as part of their hosting transcription units. We hypothesized that the expression profiles of miRNA host genes and of their targets are inversely correlated and devised a novel procedure, HOCTAR (host gene oppositely correlated targets), which ranks predicted miRNA target genes based on their anti-correlated expression behavior relative to their respective miRNA host genes. HOCTAR is the first tool for systematic miRNA target prediction that utilizes the same set of microarray experiments to monitor the expression of both miRNAs (through their host genes) and candidate targets. 
We applied the procedure to 178 human intragenic miRNAs and found that it performs better than currently available prediction softwares in pinpointing previously validated miRNA targets. The high-scoring HOCTAR predicted targets were enriched in Gene Ontology categories, which were consistent with previously published data, as in the case of miR-106b and miR-93. By means of overexpression and loss-of-function assays, we also demonstrated that HOCTAR is efficient in predicting novel miRNA targets and we identified, by microarray and qRT-PCR procedures, 34 and 28 novel targets for miR-26b and miR-98, respectively. Overall, we believe that the use of HOCTAR significantly reduces the number of candidate miRNA targets to be tested compared to the procedures based solely on target sequence recognition. Finally, our data further confirm that miRNAs have a significant impact on the mRNA levels of most of their targets. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are a class of short endogenously expressed RNA molecules that regulate gene expression by binding directly to the messenger RNA of protein coding genes. They have been found to confer a novel layer of genetic regulation in a wide range of biological processes. Computational miRNA target prediction remains one of the key means used to decipher the role of miRNAs in development and disease. Here we introduce the basic idea behind the experimental identification of miRNA targets and present some of the most widely used computational miRNA target identification programs. The review includes an assessment of the prediction quality of these programs and their combinations. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. 
<s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> RNA interference (RNAi) is one of the most significant recent breakthroughs in biomedical sciences. In 2006, Drs. Fire and Mello were awarded the Nobel Price for Physiology or Medicine for their discovery of gene silencing by double-stranded RNA. Basic scientists have used RNAi as a tool to study gene regulation, signal transduction and disease mechanisms, while preclinical drug development has gained from its use in target validation and lead optimization. RNAi has also shown promise in therapeutic applications, and several synthetic RNA molecules have entered clinical trials. The family of short regulatory RNA molecules, including small interfering RNAs (siRNAs) and micro-RNAs (miRNAs), offers many possibilities for the innovative mind. When conventional small molecule inhibitors cannot be used, RNAi technology offers the possibility for sequence-specific targeting and subsequent target gene knockdown. Currently the major challenges related to RNAi -based drug development include delivery, off-target effects, activation of the immune system and RNA degradation. Although many of the expectations related to drug development have not been met thus far, these physiologically important molecules are used in several applications. This review summarizes recent patent applications concerning micro-RNA biology. Despite the somewhat unclear intellectual property right (IPR) status for RNAi, there are many possibilities for new inventions, and much remains to be learned from the physiology behind gene regulation by short RNA molecules. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) suppress gene expression by inhibiting translation, promoting mRNA decay or both. 
Each miRNA may regulate hundreds of genes to control the cell's response to developmental and other environmental cues. The best way to understand the function of a miRNA is to identify the genes that it regulates. Target gene identification is challenging because miRNAs bind to their target mRNAs by partial complementarity over a short sequence, suppression of an individual target gene is often small, and the rules of targeting are not completely understood. Here we review computational and experimental approaches to the identification of miRNA-regulated genes. The examination of changes in gene expression that occur when miRNA expression is altered and biochemical isolation of miRNA-associated transcripts complement target prediction algorithms. Bioinformatic analysis of over-represented pathways and nodes in protein-DNA interactomes formed from experimental candidate miRNA gene target lists can focus attention on biologically significant target genes. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> The liver-expressed microRNA-122 (miR-122) is essential for hepatitis C virus (HCV) RNA accumulation in cultured liver cells, but its potential as a target for antiviral intervention has not been assessed. We found that treatment of chronically infected chimpanzees with a locked nucleic acid (LNA)-modified oligonucleotide (SPC3649) complementary to miR-122 leads to long-lasting suppression of HCV viremia, with no evidence of viral resistance or side effects in the treated animals. Furthermore, transcriptome and histological analyses of liver biopsies demonstrated derepression of target mRNAs with miR-122 seed sites, down-regulation of interferon-regulated genes, and improvement of HCV-induced liver pathology. The prolonged virological response to SPC3649 treatment without HCV rebound holds promise of a new antiviral therapy with a high barrier to resistance. 
<s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Proper coordination of cholesterol biosynthesis and trafficking is essential to human health. The sterol regulatory element-binding proteins (SREBPs) are key transcription regulators of genes involved in cholesterol biosynthesis and uptake. We show here that microRNAs (miR-33a/b) embedded within introns of the SREBP genes target the adenosine triphosphate-binding cassette transporter A1 (ABCA1), an important regulator of high-density lipoprotein (HDL) synthesis and reverse cholesterol transport, for posttranscriptional repression. Antisense inhibition of miR-33 in mouse and human cell lines causes up-regulation of ABCA1 expression and increased cholesterol efflux, and injection of mice on a western-type diet with locked nucleic acid-antisense oligonucleotides results in elevated plasma HDL. Our findings indicate that miR-33 acts in concert with the SREBP host genes to control cholesterol homeostasis and suggest that miR-33 may represent a therapeutic target for ameliorating cardiometabolic diseases. <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Dominant negative genetic disorders, in which a mutant allele of a gene causes disease in the presence of a second, normal copy, have been challenging since there is no cure and treatments are only to alleviate the symptoms. Current therapies involving pharmacological and biological drugs are not suitable to target mutant genes selectively due to structural indifference of the normal variant of their targets from the disease-causing mutant ones. In instances when the target contains single nucleotide polymorphism (SNP), whether it is an enzyme or structural or receptor protein are not ideal for treatment using conventional drugs due to their lack of selectivity. 
Therefore, there is a need to develop new approaches to accelerate targeting these previously inaccessible targets by classical therapeutics. Although there is a cooling trend by the pharmaceutical industry for the potential of RNA interference (RNAi), RNAi and other RNA targeting drugs (antisense, ribozyme, etc.) still hold their promise as the only drugs that provide an opportunity to target genes with SNP mutations found in dominant negative disorders, genes specific to pathogenic tumor cells, and genes that are critical for mediating the pathology of various other diseases. Because of its exquisite specificity and potency, RNAi has attracted a considerable interest as a new class of therapeutic for genetic diseases including amyotrophic lateral sclerosis, Huntington’s disease (HD), Alzheimer’s disease (AD), Parkinson’s disease (PD), spinocerebellar ataxia, dominant muscular dystrophies, and cancer. In this review, progress and challenges in developing RNAi therapeutics for genetic diseases will be discussed. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> RNA interference (RNAi) is a robust gene silencing mechanism that degrades mRNAs complementary to the antisense strands of double-stranded, short interfering RNAs (siRNAs). As a therapeutic strategy, RNAi has an advantage over small-molecule drugs, as virtually all genes are susceptible to targeting by siRNA molecules. This advantage is, however, counterbalanced by the daunting challenge of achieving safe, effective delivery of oligonucleotides to specific tissues in vivo. Lipid-based carriers of siRNA therapeutics can now target the liver in metabolic diseases and are being assessed in clinical trials for the treatment of hypercholesterolemia. For this indication, a chemically modified oligonucleotide that targets endogenous small RNA modulators of gene expression (microRNAs) is also under investigation in clinical trials. 
Emerging 'self-delivery' siRNAs that are covalently linked to lipophilic moieties show promise for the future development of therapies. Besides the liver, inflammation of the adipose tissue in patients with obesity and type 2 diabetes mellitus may be an attractive target for siRNA therapeutics. Administration of siRNAs encapsulated within glucan microspheres can silence genes in inflammatory phagocytic cells, as can certain lipid-based carriers of siRNA. New technologies that combine siRNA molecules with antibodies or other targeting molecules also appear encouraging. Although still at an early stage, the emergence of RNAi-based therapeutics has the potential to markedly influence our clinical future. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Since its discovery in 1998, RNA interference (RNAi) has revolutionized basic and clinical research. Small RNAs, including small interfering RNA (siRNA), short hairpin RNA (shRNA) and microRNA (miRNA), mediate RNAi effects through either cleavage-dependent or cleavage-independent RNA inducible silencing complex (RISC) effector processes. As a result of its efficacy and potential, RNAi has been elevated to the status of "blockbuster therapeutic" alongside recombinant protein and monoclonal antibody. RNAi has already contributed to our understanding of neoplasia and has great promise for anti-cancer therapeutics, particularly so for personalized cancer therapy. Despite this potential, several hurdles have to be overcome for successful development of RNAi-based pharmaceuticals. This review will discuss the potential for, challenges to, and the current status of RNAi-based cancer therapeutics. 
<s> BIB014 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) provide new therapeutic targets for many diseases, while their myriad roles in development and cellular processes make them fascinating to study. We still do not fully understand the molecular mechanisms by which miRNAs regulate gene expression nor do we know the complete repertoire of mRNAs each miRNA regulates. However, recent progress in the development of effective strategies to block miRNAs suggests that anti-miRNA drugs may soon be used in the clinic. <s> BIB015 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> miRNA target genes prediction represents a crucial step in miRNAs functional characterization. In this context, the challenging issue remains predictions accuracy and recognition of false positive results. In this article myMIR, a web based system for increasing reliability of miRNAs predicted targets lists, is presented. myMIR implements an integrated pipeline for computing ranked miRNA::target lists and provides annotations for narrowing them down. The system relies on knowledge base data, suitably integrated in order to extend the functional characterization of targeted genes to miRNAs, by highlighting the search on over-represented annotation terms. Validation results show a dramatic reduction in the quantity of predictions and an increase in the sensitivity, when compared to other methods. This improves the predictions accuracy and allows the formulation of novel hypotheses on miRNAs functional involvement. 
<s> BIB016 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> An emerging new category of therapeutic agents based on ribonucleic acid has emerged and shown very promising in vitro, animal and pre-clinical results, known as small interfering RNAs (siRNAs), microRNAs mimics (miRNA mimics) and their derivates. siRNAs are small RNA molecules that promote potent and specific silencing of mutant, exogenous or aberrant genes through a mechanism known as RNA interference. These agents have called special attention to medicine since they have been used to experimentally treat a series of neurological conditions with distinct etiologies such as prion, viral, bacterial, fungal, genetic disorders and others. siRNAs have also been tested in other scenarios such as: control of anxiety, alcohol consumption, drug-receptor blockage and inhibition of pain signaling. Although in a much earlier stage, miRNAs mimics, anti-miRs and small activating RNAs (saRNAs) also promise novel therapeutic approaches to control gene expression. In this review we intend to introduce clinicians and medical researchers to the most recent advances in the world of siRNA- and miRNA-mediated gene control, its history, applications in cells, animals and humans, delivery methods (an yet unsolved hurdle), current status and possible applications in future clinical practice. <s> BIB017 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Rarely a new research area has gotten such an overwhelming amount of attention as have microRNAs. 
Although several basic questions regarding their biological principles still remain to be answered, many specific characteristics of microRNAs in combination with compelling therapeutic efficacy data and a clear involvement in human disease have triggered the biotechnology community to start exploring the possibilities of viewing microRNAs as therapeutic entities. This review serves to provide some general insight into some of the current microRNAs targets, how one goes from the initial bench discovery to actually developing a therapeutically useful modality, and will briefly summarize the current patent landscape and the companies that have started to explore microRNAs as the next drug target. <s> BIB018 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Neurodegenerative diseases are typically late-onset, progressive disorders that affect neural function and integrity. Although most attention has been focused on the genetic underpinnings of familial disease, mechanisms are likely to be shared with more predominant sporadic forms, which can be influenced by age, environment, and genetic inputs. Previous work has largely addressed the roles of select protein-coding genes; however, disease pathogenesis is complicated and can be modulated through not just protein-coding genes, but also regulatory mechanisms mediated by the exploding world of small non-coding RNAs. Here, we focus on emerging roles of miRNAs in age-associated events impacting long-term brain integrity and neurodegenerative disease. <s> BIB019 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Analysis of data from The Cancer Genome Atlas generates a pan-cancer network of 143 recurrent miRNA-target relationships. The identified miRNAs were frequently regulated by genetic and epigenetic alterations in cancer. 
The work also reveals that some miRNAs might coordinately regulate cancer pathways, such as miR-29 regulation of TET1 and TDG mRNAs, encoding components from the active DNA demethylation pathway. <s> BIB020 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNA (miRNA) are non-coding regulatory RNA usually consisting of 20-24 nucleotides. Over the past decade, increases and decreases in miRNA expression have been shown to associate with various types of disease, including cancer. The first two known miRNA aberrations resulted from altered expression of DLEU2 and C13orf25 in hematological malignancies. DLEU2, which encodes miR-15a and miR-16-1, was discovered from 13q14 deletion in chronic lymphocytic leukemia, while C13orf25, which encodes six mature miRNA (miR-17, miR-18, miR-19a, miR-19b, miR-20a and miR-92a), was identified from 13q31 amplification in aggressive B-cell lymphomas. These miRNA were downregulated or upregulated in accordance with genomic deletion or amplification, which suggests that they contribute to tumorigenesis through altered regulation of target oncogenes or tumor suppressors. Consistent with that idea, miR-15a/16-1 is known to regulate Bcl2 in chronic lymphocytic leukemia, and miR-17-92 regulates the tumor suppressors p21, Pten and Bim in aggressive B-cell lymphomas. Dysregulation of other miRNA, including miR-21, miR-29, miR-150 and miR-155, have also been shown to play crucial roles in the pathogenesis of aggressive transformed, high-grade and refractory lymphomas. Addition of miRNA dysregulation to the original genetic events likely enhances tumorigenicity of malignant lymphoma through activation of one or more signaling pathways. 
<s> BIB021 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNA) are a group of naturally occurring, small, noncoding, and single-strand RNA molecules that regulate gene expression at the posttranscriptional and translational levels. By controlling the expression of oncogenic and tumor suppressor proteins, miRNAs are believed to play an important role in pathologic processes associated with malignant progression including tumor cell proliferation, apoptosis, differentiation, angiogenesis, invasion, and metastasis. However, relatively few studies have investigated the influence of chemopreventive agents on miRNA expression and their regulation of target genes. Given the significance of miRNAs in modulating gene expression, such research can provide insight into the pleiotropic biologic effects that chemopreventive agents often display and a deeper understanding of their mechanism of action to inhibit carcinogenesis. In addition, miRNAs can provide useful biomarkers for assessing antineoplastic activity of these agents in preclinical and clinical observations. In this review, we summarize recent publications that highlight a potentially important role of miRNAs in cancer chemoprevention research. <s> BIB022 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are small non-coding RNAs which play a key role in the post-transcriptional regulation of many genes. Elucidating miRNA-regulated gene networks is crucial for the understanding of mechanisms and functions of miRNAs in many biological processes, such as cell proliferation, development, differentiation and cell homeostasis, as well as in many types of human tumors. To this aim, we have recently presented the biclustering method HOCCLUS2, for the discovery of miRNA regulatory networks. 
Experiments on predicted interactions revealed that the statistical and biological consistency of the obtained networks is negatively affected by the poor reliability of the output of miRNA target prediction algorithms. Recently, some learning approaches have been proposed to learn to combine the outputs of distinct prediction algorithms and improve their accuracy. However, the application of classical supervised learning algorithms presents two challenges: i) the presence of only positive examples in datasets of experimentally verified interactions and ii) unbalanced number of labeled and unlabeled examples. We present a learning algorithm that learns to combine the score returned by several prediction algorithms, by exploiting information conveyed by (only positively labeled/) validated and unlabeled examples of interactions. To face the two related challenges, we resort to a semi-supervised ensemble learning setting. Results obtained using miRTarBase as the set of labeled (positive) interactions and mirDIP as the set of unlabeled interactions show a significant improvement, over competitive approaches, in the quality of the predictions. This solution also improves the effectiveness of HOCCLUS2 in discovering biologically realistic miRNA:mRNA regulatory networks from large-scale prediction data. Using the miR-17-92 gene cluster family as a reference system and comparing results with previous experiments, we find a large increase in the number of significantly enriched biclusters in pathways, consistent with miR-17-92 functions. The proposed approach proves to be fundamental for the computational discovery of miRNA regulatory networks from large-scale predictions. 
This paves the way to the systematic application of HOCCLUS2 for a comprehensive reconstruction of all the possible multiple interactions established by miRNAs in regulating the expression of gene networks, which would be otherwise impossible to reconstruct by considering only experimentally validated interactions. <s> BIB023 | MicroRNAs (miRNAs) are abundant and short endogenous noncoding RNAs made of 19-23 nt that bind to target mRNAs, typically resulting in their degradation and translational repression. The fine-tuning of gene regulation in biological processes and disease pathways by these small RNAs has recently attracted significant attention; the number of related articles has grown exponentially over the past decade (Supplementary Figure S1). MiRNAs are used to study signal transduction and the pathogenesis of genetic BIB012 BIB017 , neurodegenerative BIB019 and metabolic diseases BIB013 and cancer BIB014 BIB020 . They are also used in preclinical drug development for target validation and lead optimization, and a few synthetic miRNAs have entered clinical trials BIB008 . Development of novel miRNA-directed therapeutics is already under way BIB018 BIB015 , and miRNA-based targeting in cancer is not far behind BIB021 BIB022 . MiRNAs account for about 1% of human genes and have been shown to regulate >60% of genes. On average, miRNAs bind to hundreds of target sites BIB009 , with some having a few thousand sites BIB004 . The number of known miRNAs has increased substantially during the past few years and, based on release 21 of the miRBase database, currently stands at >35 000 in >200 species. Unfortunately, the annotation of their targets falls behind, as only about 1000 miRNAs (3% of known miRNAs) have validated targets. (Author biographies: Xiao Fan is a PhD candidate at the University of Alberta; her research interests involve high-throughput characterization of microRNAs, microRNA target prediction and analysis of microRNA regulatory networks. Lukasz Kurgan is a Professor at the University of Alberta; his research group focuses on high-throughput structural and functional characterization of proteins and small RNAs.)
Moreover, the number of curated targets per miRNA (Supplementary Table S1) is far lower than their estimated count. Traditionally, targets are annotated using low-throughput experimental biochemical assays, including quantitative polymerase chain reaction (qPCR), luciferase assays and western blots. In recent years, a few high-throughput experimental methods to annotate miRNA targets were developed. They include microarrays and RNA sequencing, which use gene expression levels, and pulsed SILAC (pSILAC; stable isotope labeling by/with amino acids in cell culture), which focuses on protein expression levels. These annotations are performed by assuming that miRNA targets (genes or proteins) with a large reduction in expression levels in miRNA-overexpressing cells are functional (i.e. they are downregulated) BIB006 . One drawback of such an approach is that it requires a threshold on the expression change, which may vary with the specific miRNA-mRNA pair, cell type, culture conditions, etc. Another drawback is that these experiments are performed for one miRNA at a time and are difficult to scale to cover all known miRNAs. Lastly, these annotations are at the gene level, i.e. they indicate whether a given mRNA interacts with a given miRNA, in contrast to the duplex level, i.e. whether a given fragment of the mRNA (binding site) interacts with a given miRNA. The latter is motivated by the fact that knowledge of the binding sites is important for the development of gene therapeutics BIB010 BIB011 . Cross-linking immunoprecipitation (CLIP)-based techniques have attracted attention in recent years, as they can specify the sites targeted by miRNAs.
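The expression-based annotation described earlier in this paragraph boils down to thresholding expression changes between miRNA-overexpressing and control cells. A minimal sketch, assuming made-up expression values and an arbitrary log2 fold-change cutoff of -1 (the very threshold whose experiment-dependent choice is flagged above as a drawback):

```python
import math

def call_downregulated_targets(control, overexpressed, log2fc_cutoff=-1.0):
    """Label a gene as a putative functional miRNA target when its
    log2 fold change (overexpressed vs. control) falls below the cutoff.
    The cutoff is the arbitrary, experiment-dependent threshold that
    the text warns about."""
    targets = []
    for gene, ctrl in control.items():
        fc = math.log2(overexpressed[gene] / ctrl)
        if fc <= log2fc_cutoff:
            targets.append((gene, fc))
    return targets

# Hypothetical expression values (e.g. normalized microarray intensities):
control = {"GENE_A": 100.0, "GENE_B": 80.0, "GENE_C": 50.0}
overexp = {"GENE_A": 40.0, "GENE_B": 78.0, "GENE_C": 12.0}
print(call_downregulated_targets(control, overexp))
```

Shifting the cutoff to -0.5 or -2 would change the target list, which is precisely why such annotations can disagree across miRNA-mRNA pairs, cell types and culture conditions.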
However, CLIP-based methods are not miRNA specific: they find the binding sites of the Argonaute (Ago) protein, which facilitates miRNA:mRNA binding, without coupling those sites to specific miRNAs. In parallel to the experimental efforts, dozens of computational miRNA target predictors, which find targets from the mRNA and miRNA sequences, have been developed since the first method was released in 2003 BIB002 (Supplementary Figure S2) . The underlying principle is to use data generated by (usually low-throughput) experimental methods to build predictive models, which in turn can be used to perform high-throughput predictions for specific miRNAs of interest that lack experimental data. The results generated by these (base) predictors can be filtered or combined by meta predictors, i.e. methods that refine the predictions of the base methods, such as Pio's approach and myMIR BIB023 BIB016 . However, meta predictors often lack integration with the base predictive models (they were developed separately and require manual collection of the base methods' predictions) and rely on the availability of results from multiple base methods, which makes them more challenging to use. Targets can also be predicted computationally by ranking gene expression or CLIP-based data, but in this case the inputs are experimental data, which limits the applicability of these methods. In this review we focus on the computational miRNA target predictors that require only the knowledge of the miRNA and mRNA sequences (sequence-based miRNA target prediction), excluding the meta methods. The field of sequence-based miRNA target prediction has reached maturity, as evidenced by the declining trend in development efforts (Supplementary Figure S2 ). After the initial spike in 2005, when eight methods were developed, more recent years have seen on average only three new methods per year.
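A meta predictor, as described above, consumes the outputs of several base methods. The sketch below illustrates one simple combination strategy, voting plus score averaging; the method names, scores and the two-vote cutoff are invented for illustration and do not reproduce Pio's approach or myMIR:

```python
def combine_predictions(base_outputs, min_votes=2):
    """Keep a miRNA:gene pair only if at least `min_votes` base
    predictors report it, and rank the survivors by their mean score.
    This mimics the filtering/combining role of meta predictors."""
    from collections import defaultdict
    scores = defaultdict(list)
    for method_scores in base_outputs.values():
        for pair, s in method_scores.items():
            scores[pair].append(s)
    kept = {p: sum(v) / len(v) for p, v in scores.items() if len(v) >= min_votes}
    return sorted(kept.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scores from three base predictors (higher = stronger):
base = {
    "methodA": {("miR-1", "GENE_X"): 0.9, ("miR-1", "GENE_Y"): 0.4},
    "methodB": {("miR-1", "GENE_X"): 0.7},
    "methodC": {("miR-1", "GENE_X"): 0.8, ("miR-1", "GENE_Y"): 0.5},
}
print(combine_predictions(base))
```

Note how the sketch exposes the practical drawback mentioned above: it only works once predictions from every base method have been collected, typically by hand, in a common format.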
These predictors differ in many aspects, including their underlying predictive methodology (the mechanistic details of miRNA-mRNA binding that they consider, such as complementarity of base pairing, site accessibility and evolutionary conservation), empirical evaluation (data sets and evaluation procedures; the type of predictive model they use), usability (availability and ease of use), popularity and impact, and predictive performance. The availability of many difficult-to-compare methods makes it challenging for end users to select a proper tool and prompts the need for contributions that summarize and evaluate these methods, to guide the users and to help the developers revitalize this field. Supplementary Table S2 compares existing reviews of the miRNA target predictors based on the inclusion of discussion and analysis of the abovementioned aspects. We observe that these reviews summarized the latest miRNA target predictors at the time of their publication and compared, or at least described, the methodology used by these predictors. Most of these contributions also discussed the availability of predictors and some aspects of their usability, focusing on the species that they were designed for. However, other important aspects of usability were omitted, such as the number of input parameters (which determines flexibility of use for an expert user), the format of the input miRNAs and genes, the ability to predict for novel miRNA sequences, the format of the outputs and the number of predicted targets (which differs substantially between methods). They also neglected to discuss the popularity and impact of the predictors and the details of their evaluation. Only three relatively older reviews provided comparative evaluation. The first review, by Rajewsky, assessed nine methods on 113 experimentally annotated miRNA-target pairs, but only in Drosophila BIB003 .
A 2006 review by Sethupathy BIB005 used a small set of 84 annotated miRNA-target pairs and lacked assessment on the nonfunctional pairs (i.e. whether these methods can correctly recognize the lack of interaction). The latest comparative review, from 2009 by Alexiou BIB007 , used 150 miRNA-target duplexes but considered only relatively old methods that were published in 2007 or earlier. Moreover, the evaluation criteria included only sensitivity and precision, which do not cover the quality of prediction of the nonfunctional pairs. To summarize, prior reviews of the sequence-based miRNA target prediction methods suffer from a lack of (or limited and outdated) empirical evaluation, inclusion of a relatively small set of predictors, and a lack of (or shallow) treatment of certain aspects, such as usability and impact of the prediction methods, evaluation procedures and practical insights for the end users and developers. To this end, we provide a comprehensive and practical summary of this field. We introduce and discuss 38 base predictors of miRNA targets in animals, including recent methods. The focus on animals is motivated by the observation that prediction of targets in plants is relatively easy and is considered a solved problem BIB001 . We provide analysis from all key perspectives that are relevant to the end users and developers, including an overview of the mechanistic basis of miRNA-mRNA interaction and how this information is incorporated into the underlying predictive methodologies. We also give a detailed summary of the evaluation, usability and popularity/impact of the 38 predictors. As one often omitted dimension, we discuss the scope of the outputs, i.e. whether a given method provides a propensity score (probability of binding) or only a binary outcome (binding versus nonbinding), and whether it predicts the positions of the miRNA binding site on the target gene.
We are the first to conduct an empirical comparative assessment on both low-throughput and high-throughput experimental data for predictions at the miRNA:mRNA duplex and gene levels. We use four benchmark data sets and consider seven representative methods, including recent predictors. We systematically evaluate both binary predictions and (for the first time) real-valued propensity scores to compare multiple methods. Moreover, we use our in-depth analytical and empirical review to provide practical insights for the end users and developers. |
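The two kinds of evaluation mentioned here, binary outcomes (sensitivity and precision) and real-valued propensity scores (ranking quality), can be illustrated with a minimal sketch. This is a hypothetical illustration, not the actual evaluation pipeline of this review; the toy labels and scores are made up:

```python
# Sketch: evaluating a miRNA target predictor against annotated duplexes.
# Labels: 1 = functional (validated) pair, 0 = nonfunctional pair.

def binary_metrics(labels, predictions):
    """Sensitivity and precision from thresholded (binary) predictions."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, precision

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity,
    usable for real-valued propensity scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0]
binary = [1, 1, 0, 0, 1]            # thresholded predictor outputs
scores = [0.9, 0.7, 0.4, 0.3, 0.6]  # real-valued propensities
print(binary_metrics(labels, binary))  # sensitivity and precision, both 2/3 here
print(auc(labels, scores))             # 5/6 for this toy example
```

AUC is one natural choice for comparing propensity scores across methods because it does not depend on any particular binarization threshold; the metrics actually reported by a given study may differ.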
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> microRNAs (miRNAs) regulate mRNA translation and mRNA decay in plants and animals (49). Hundreds of human miRNAs are now known (4-6, 15, 19). In animals, miRNAs regulate thousands of genes with spatial and temporal specificity, helping to ensure the accuracy of gene expression programs (17, 38, 47). Understanding the precise biological functions of animal miRNAs will require the identification of their multiple targets and the pathways that they control. Animal miRNAs generally share limited sequence complementarity with their targets. miRNA target recognition involves complementary base pairing of the target with the 5′ end (positions 1 to 8) of the miRNA guide strand seed region. However, the extent of seed region complementarity is not precisely determined and can be modified by 3′ pairing (8). Computational methods have been used to predict human miRNA targets (31, 34, 37, 45, 52). Most predicted miRNA target recognition sites lie in 3′ untranslated regions (3′UTRs), although coding region sites (CDS) may also be used (8, 36). Current estimates are that 30% or more of human mRNAs are regulated by miRNAs (36). While thousands of miRNA targets have been predicted, relatively few have been experimentally validated. Available methods for validation are laborious and not easily amenable to high-throughput methodologies (4). Since a single miRNA can regulate hundreds of targets, the biological pathways regulated by miRNAs are not always obvious from an examination of their targets. There is a clear need for high-throughput, low-cost methods to experimentally determine miRNA targets, validate computational predictions, and decipher miRNA function. One method to experimentally identify miRNA targets and their functions is microarray analysis (50).
Although miRNAs may silence their targets via translational blocking (16), they also regulate target transcript levels. miRNAs in transfected cells down-regulate hundreds of mRNAs detectable by microarray profiling (38). These down-regulated transcripts have expression patterns that are complementary to that of the introduced miRNA and are also highly enriched within their 3′UTRs with hexamer, heptamer, and octamer motifs complementary to miRNA seed regions. This regulation resembles the “off-target” silencing of imperfectly matched targets by small interfering RNAs (siRNAs) (28, 29). Thus, both miRNAs and siRNAs can target partially complementary transcripts for degradation, resulting in transcript changes that can be monitored using microarrays. In fact, changes in transcript levels due to miRNA activity have been observed directly in vivo. The let-7 and lin-4 miRNAs trigger the degradation of their target mRNAs (2). Also, the depletion of miRNAs in mice and zebrafish led to the up-regulation of target mRNAs that were measured on microarrays (18, 35). A potential advantage of using microarrays to analyze miRNA targets is the utility of expression profiles for predicting gene function (25). In this study, we have explored the use of miRNA expression profiles to analyze miRNA targets and functions. Included in our analysis are several miRNAs reported to have a role in cancer (9-12, 14, 21, 22, 27, 32, 40, 41). We use this approach to show that a family of miRNAs sharing seed region identity with miRNA-16 (miR-16) negatively regulates cell cycle progression from G0/G1 to S. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> A global decrease in microRNA (miRNA) levels is often observed in human cancers, indicating that small RNAs may have an intrinsic function in tumour suppression.
To identify miRNA components of tumour suppressor pathways, we compared miRNA expression profiles of wild-type and p53-deficient cells. Here we describe a family of miRNAs, miR-34a-c, whose expression reflected p53 status. Genes encoding miRNAs in the miR-34 family are direct transcriptional targets of p53, whose induction by DNA damage and oncogenic stress depends on p53 both in vitro and in vivo. Ectopic expression of miR-34 induces cell cycle arrest in both primary and tumour-derived cell lines, which is consistent with the observed ability of miR-34 to downregulate a programme of genes promoting cell cycle progression. The p53 network suppresses tumour formation through the coordinated activation of multiple transcriptional targets, and miR-34 may act in concert with other effectors to inhibit inappropriate cell proliferation. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs (miRNAs) are an important class of small noncoding RNAs capable of regulating other genes’ expression. Much progress has been made in computational target prediction of miRNAs in recent years. More than 10 miRNA target prediction programs have been established, yet, the prediction of animal miRNA targets remains a challenging task. We have developed miRecords, an integrated resource for animal miRNA–target interactions. The Validated Targets component of this resource hosts a large, high-quality manually curated database of experimentally validated miRNA–target interactions with systematic documentation of experimental support for each interaction. The current release of this database includes 1135 records of validated miRNA–target interactions between 301 miRNAs and 902 target genes in seven animal species. The Predicted Targets component of miRecords stores predicted miRNA targets produced by 11 established miRNA target prediction programs. 
miRecords is expected to serve as a useful resource not only for experimental miRNA researchers, but also for informatics scientists developing the next-generation miRNA target prediction programs. miRecords is available at http://miRecords.umn.edu/miRecords. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> ‘miR2Disease’, a manually curated database, aims at providing a comprehensive resource of microRNA deregulation in various human diseases. The current version of miR2Disease documents 1939 curated relationships between 299 human microRNAs and 94 human diseases by reviewing more than 600 published papers. Around one-seventh of the microRNA–disease relationships represent the pathogenic roles of deregulated microRNA in human disease. Each entry in the miR2Disease contains detailed information on a microRNA–disease relationship, including a microRNA ID, the disease name, a brief description of the microRNA–disease relationship, an expression pattern of the microRNA, the detection method for microRNA expression, experimentally verified target gene(s) of the microRNA and a literature reference. miR2Disease provides a user-friendly interface for a convenient retrieval of each entry by microRNA ID, disease name, or target gene. In addition, miR2Disease offers a submission page that allows researchers to submit established microRNA–disease relationships that are not documented. Once approved by the submission review committee, the submitted records will be included in the database. miR2Disease is freely available at http://www.miR2Disease.org.
<s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs (miRNAs) and short interfering RNAs (siRNAs) are classes of regulatory small RNA molecules, ranging from 18 to 24 nucleotides in length, whose roles in development and disease are becoming increasingly recognized. They function by altering the stability or translational efficiency of messenger RNAs (mRNAs) with which they share sequence complementarity, and are predicted to affect up to one-third of all human genes. Computer algorithms and microarray data estimate the presence of nearly 1000 human miRNAs, and direct examination of candidate miRNAs has validated their involvement in various cancers, disorders of neuronal development, cardiac hypertrophy, and skin diseases such as psoriasis. This article reviews the history of miRNA and siRNA discovery, key aspects of their biogenesis and mechanism of action, and known connections to human health, with an emphasis on their roles in skin development and disease. Learning objectives After completing this learning activity, participants should be able to summarize the relevance of microRNAs in development and disease, explain the molecular steps of how small RNAs regulate their targets within the human cell, and discuss the role of small RNAs in the diagnosis and treatment of disease. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> Animal microRNAs (miRNAs) regulate gene expression by inhibiting translation and/or by inducing degradation of target messenger RNAs. It is unknown how much translational control is exerted by miRNAs on a genome-wide scale. We used a new proteomic approach to measure changes in synthesis of several thousand proteins in response to miRNA transfection or endogenous miRNA knockdown. In parallel, we quantified mRNA levels using microarrays. 
Here we show that a single miRNA can repress the production of hundreds of proteins, but that this repression is typically relatively mild. A number of known features of the miRNA-binding site such as the seed sequence also govern repression of human protein synthesis, and we report additional target sequence characteristics. We demonstrate that, in addition to downregulating mRNA levels, miRNAs also directly repress translation of hundreds of genes. Finally, our data suggest that a miRNA can, by direct or indirect effects, tune protein synthesis from thousands of genes. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> Current methods for system-wide gene expression analysis detect changes in mRNA abundance, but neglect regulation at the level of translation. Pulse labeling with stable isotopes has been used to measure protein turnover rates, but this does not directly provide information about translation rates. Here, we developed pulsed stable isotope labeling by amino acids in cell culture (pSILAC) with two heavy isotope labels to directly quantify protein translation on a proteome-wide scale. We applied the method to cellular iron homeostasis as a model system and demonstrate that it can confidently identify proteins that are translationally regulated by iron availability. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> In recent years, the discovery of small ncRNAs (noncoding RNAs) has unveiled a slew of powerful riboregulators of gene expression. So far, many different types of small ncRNAs have been described. Of these, miRNAs (microRNAs), siRNAs (small interfering RNAs), and piRNAs (Piwi-interacting RNAs) have been studied in more detail. A significant fraction of genes in most organisms and tissues is targets of these small ncRNAs. 
Because these tiny RNAs are turning out to be important regulators of gene and genome expression, their aberrant expression profiles are expected to be associated with cellular dysfunction and disease. In fact, an ever-increasing number of studies have implicated miRNAs and siRNAs in human health and disease ranging from metabolic disorders to diseases of various organ systems as well as various forms of cancer. Nevertheless, despite the flurry of research on these small ncRNAs, many aspects of their biology still remain to be understood. The following discussion focuses on some aspects of the biogenesis and function of small ncRNAs with major emphasis on miRNAs since these are the most widespread endogenous small ncRNAs that have been called "micromanagers" of gene expression. Their emerging significance in toxicology is also discussed. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> AbstractmirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> As the relevant literature and the number of experiments increase at a super linear rate, databases that curate and collect experimentally verified microRNA (miRNA) targets have gradually emerged. 
These databases attempt to provide efficient access to this wealth of experimental data, which is scattered in thousands of manuscripts. Aim of TarBase 6.0 (http://www.microrna.gr/tarbase) is to face this challenge by providing a significant increase of available miRNA targets derived from all contemporary experimental techniques (gene specific and high-throughput), while incorporating a powerful set of tools in a user-friendly interface. TarBase 6.0 hosts detailed information for each miRNA–gene interaction, ranging from miRNA- and gene-related facts to information specific to their interaction, the experimental validation methodologies and their outcomes. All database entries are enriched with function-related data, as well as general information derived from external databases such as UniProt, Ensembl and RefSeq. DIANA microT miRNA target prediction scores and the relevant prediction details are available for each interaction. TarBase 6.0 hosts the largest collection of manually curated experimentally validated miRNA–gene interactions (more than 65 000 targets), presenting a 16.5–175-fold increase over other available manually curated databases. <s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. 
This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs, or miRNAs, post-transcriptionally repress the expression of protein-coding genes. The human genome encodes over 1000 miRNA genes that collectively target the majority of messenger RNAs (mRNAs). Base pairing of the so-called miRNA ‘seed’ region with mRNAs identifies many thousands of putative targets. Evaluating the strength of the resulting mRNA repression remains challenging, but is essential for a biologically informative ranking of potential miRNA targets. To address these challenges, predictors may use thermodynamic, evolutionary, probabilistic or sequence-based features. We developed an open-source software library, miRmap, which for the first time comprehensively covers all four approaches using 11 predictor features, 3 of which are novel. This allowed us to examine feature correlations and to compare their predictive power in an unbiased way using high-throughput experimental data from immunopurification, transcriptomics, proteomics and polysome fractionation experiments. Overall, target site accessibility appears to be the most predictive feature. Our novel feature based on PhyloP, which evaluates the significance of negative selection, is the best performing predictor in the evolutionary category. We combined all the features into an integrated model that almost doubles the predictive power of TargetScan. miRmap is freely available from http://cegg.unige.ch/mirmap. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs (miRNAs) are small non-coding RNA molecules capable of negatively regulating gene expression to control many cellular mechanisms. 
The miRTarBase database (http://mirtarbase.mbc.nctu.edu.tw/) provides the most current and comprehensive information of experimentally validated miRNA-target interactions. The database was launched in 2010 with data sources for >100 published studies in the identification of miRNA targets, molecular networks of miRNA targets and systems biology, and the current release (2013, version 4) includes significant expansions and enhancements over the initial release (2010, version 1). This article reports the current status of and recent improvements to the database, including (i) a 14-fold increase to miRNA-target interaction entries, (ii) a miRNA-target network, (iii) expression profile of miRNA and its target gene, (iv) miRNA target-associated diseases and (v) additional utilities including an upgrade reminder and an error reporting/user feedback system. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> Motivation: Research interests in microRNAs have increased rapidly in the past decade. Many studies have showed that microRNAs have close relationships with various human cancers, and they potentially could be used as cancer indicators in diagnosis or as a suppressor for treatment purposes. There are several databases that contain microRNA–cancer associations predicted by computational methods but few from empirical results. Despite the fact that abundant experiments investigating microRNA expressions in cancer cells have been carried out, the results have remain scattered in the literature. We propose to extract microRNA–cancer associations by text mining and store them in a database called miRCancer. Results: The text mining is based on 75 rules we have constructed, which represent the common sentence structures typically used to state microRNA expressions in cancers. 
The microRNA–cancer association database, miRCancer, is updated regularly by running the text mining algorithm against PubMed. All miRNA–cancer associations are confirmed manually after automatic extraction. miRCancer currently documents 878 relationships between 236 microRNAs and 79 human cancers through the processing of 426 000 published articles. Availability: miRCancer is freely available on the web at http://mircan <s> BIB014 | There are five databases of experimentally validated and curated miRNA targets ( Supplementary Table S1 ). Only three of them provide the information necessary to characterize the miRNA:mRNA duplexes: TarBase, miRecords and miRTarBase. miRTarBase 4.5 stores the largest number of >5000 miRNA:target pairs BIB013 , with a large number of new entries from sequencing efforts in TarBase v6.0 BIB010 . miRecords includes 2574 interactions BIB003 . miR2Disease BIB004 and miRCancer BIB014 focus on selected diseases associated with miRNAs and also do not include information about miRNA:mRNA duplexes. We developed four benchmark data sets using the miRTarBase repository, gene expression data from Gene Expression Omnibus (GEO) and pSILAC. miRTarBase provides the largest number of positive (functional) and negative (nonfunctional) miRNA:mRNA complexes; the functional miRNA-mRNA interactions are defined as those where the mRNA is downregulated by the corresponding miRNA. GEO is the largest source of microarray, sequencing and other forms of high-throughput genomics data BIB011 . pSILAC is a technique for quantitative proteomics BIB007 . Our data sets cover human and mouse, which is motivated by research interests in using miRNAs in human health-related applications BIB008 BIB005 and by our objective to include the largest possible number of predictors, i.e. relatively few methods work on other species. The first data set, called TEST_duplex, is used to assess target site prediction at the duplex level.
We selected targets that were validated by at least one of the low-throughput experimental methods that are considered strong evidence: qPCR, luciferase assay or western blot. We focused on targets that were released recently, to limit overlap between our benchmark data and the data used to develop the evaluated predictors. The functional targets deposited to miRTarBase after 2012 (after the newest method included in our evaluation was published) and all nonfunctional duplexes from human and mouse were included; we used all nonfunctional targets because of their small number. The second data set, TEST_gene, focuses on the evaluation at the gene level. We selected miRNAs that have both functional and nonfunctional genes in miRTarBase and for which the functional genes were validated after 2012. Furthermore, we extend our evaluation to analyze whether the current methods are capable of predicting at the cell level, using two additional data sets that rely on annotations from high-throughput methods. The TEST_geo data set is based on the results of three microarray-based experiments: GSE6838, GSE7864 and GSE8501. The interactions for 25 miRNAs were annotated by contrasting the expression arrays before miRNA transfection and at 24 h after miRNA mimics were transfected BIB001 BIB002 . As recommended in BIB009 BIB012 , we remove the genes whose expression magnitudes are below the median in the control transfection experiments. The TEST_psilac data set was originally developed in a proteomic study that used the pSILAC technique BIB007 BIB006 . Previous studies assume that genes that are more repressed (characterized by a higher drop in expression levels) are more likely to be targeted by the transfected miRNA.
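The median-based filtering applied to the microarray data can be sketched as follows. This is a hypothetical illustration; the gene names and expression values are made up and do not come from the GEO series listed above:

```python
import statistics

# Made-up control-transfection expression values per gene.
control_expression = {
    "GENE_A": 120.0, "GENE_B": 8.5, "GENE_C": 64.0,
    "GENE_D": 5.1, "GENE_E": 300.2, "GENE_F": 42.7,
}

# Keep only genes whose expression in the control experiment is at or
# above the median; lowly expressed genes are excluded from evaluation.
median_level = statistics.median(control_expression.values())
kept = {g: v for g, v in control_expression.items() if v >= median_level}

print(sorted(kept))  # ['GENE_A', 'GENE_C', 'GENE_E']
```

The rationale behind such a filter is that expression changes measured for lowly expressed genes are dominated by noise, so their up- or downregulation cannot be reliably attributed to the transfected miRNA.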
These studies use a certain fraction of the genes with the highest magnitude of change in expression levels (repressed genes) as functional and the same fraction of the genes whose expression levels have increased by the largest margin (overexpressed genes) as nonfunctional BIB009 . Instead of using an arbitrary fraction value to define the functional and nonfunctional targets, we vary this value between 1% and 50%. A detailed summary of the four data sets is shown in Supplementary Table S3 . The TEST_duplex and TEST_gene data sets are given in the Supplementary Tables S4 and S5 , respectively. The comprehensiveness of our tests stems from the fact that we consider targets as gene segments (TEST_duplex data set), genes (TEST_gene and TEST_geo data sets) and proteins (TEST_psilac data set). We also use different sources of the information used to perform the annotations, including low-throughput assays (TEST_duplex and TEST_gene data sets), microarrays (TEST_geo data set) and pSILAC (TEST_psilac data set). |
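The fraction-based labeling described for the pSILAC data can be sketched as follows. This is a hypothetical illustration with made-up log fold changes; the helper name is our own, not from the cited studies:

```python
# Sketch: label genes for a given fraction threshold. The most repressed
# fraction is treated as functional targets of the transfected miRNA,
# the most overexpressed fraction as nonfunctional.

def label_by_fraction(log_fold_changes, fraction):
    """Return (functional, nonfunctional) gene sets for a fraction in (0, 0.5]."""
    ranked = sorted(log_fold_changes, key=log_fold_changes.get)
    k = max(1, round(len(ranked) * fraction))
    functional = set(ranked[:k])      # largest repression (most negative)
    nonfunctional = set(ranked[-k:])  # largest increase in expression
    return functional, nonfunctional

lfc = {"G1": -2.1, "G2": -0.3, "G3": 0.8, "G4": 1.5, "G5": -1.2, "G6": 0.1}
# Varying the fraction (here 1/3) instead of fixing one arbitrary cutoff:
print(label_by_fraction(lfc, 1 / 3))
```

Sweeping the fraction from 1% to 50%, as done in this review, shows how sensitive the conclusions are to the choice of cutoff rather than committing to a single arbitrary value.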
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> We predict regulatory targets for 14 Arabidopsis microRNAs (miRNAs) by identifying mRNAs with near complementarity. Complementary sites within predicted targets are conserved in rice. Of the 49 predicted targets, 34 are members of transcription factor gene families involved in developmental patterning or cell differentiation. The near-perfect complementarity between plant miRNAs and their targets suggests that many plant miRNAs act similarly to small interfering RNAs and direct mRNA cleavage. The targeting of developmental transcription factors suggests that many plant miRNAs function during cellular differentiation to clear key regulatory transcripts from daughter cell lineages. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are short RNA molecules that regulate gene expression by binding to target messenger RNAs and by controlling protein production or causing RNA cleavage. To date, functions have been assigned to only a few of the hundreds of identified miRNAs, in part because of the difficulty in identifying their targets. The short length of miRNAs and the fact that their complementarity to target sequences is imperfect mean that target identification in animal genomes is not possible by standard sequence comparison methods. Here we screen conserved 3′ UTR sequences from the Drosophila melanogaster genome for potential miRNA targets. The screening procedure combines a sequence search with an evaluation of the predicted miRNA–target heteroduplex structures and energies. We show that this approach successfully identifies the five previously validated let-7, lin-4, and bantam targets from a large database and predict new targets for Drosophila miRNAs. 
Our target predictions reveal striking clusters of functionally related targets among the top predictions for specific miRNAs. These include Notch target genes for miR-7, proapoptotic genes for the miR-2 family, and enzymes from a metabolic pathway for miR-277. We experimentally verified three predicted targets each for miR-7 and the miR-2 family, doubling the number of validated targets for animal miRNAs. Statistical analysis indicates that the best single predicted target sites are at the border of significance; thus, target predictions should be considered as tentative until experimentally validated. We identify features shared by all validated targets that can be used to evaluate target predictions for animal miRNAs. Our initial evaluation and experimental validation of target predictions suggest functions for two miRNAs. For others, the screen suggests plausible functions, such as a role for miR-277 as a metabolic switch controlling amino acid catabolism. Cross-genome comparison proved essential, as it allows reduction of the sequence search space. Improvements in genome annotation and increased availability of cDNA sequences from other genomes will allow more sensitive screens. An increase in the number of confirmed targets is expected to reveal general structural features that can be used to improve their detection. While the screen is likely to miss some targets, our study shows that valid targets can be identified from sequence alone. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The Vienna RNA secondary structure server provides a web interface to the most frequently used functions of the Vienna RNA software package for the analysis of RNA secondary structures. 
It currently offers prediction of secondary structure from a single sequence, prediction of the consensus secondary structure for a set of aligned sequences and the design of sequences that will fold into a predefined structure. All three services can be accessed via the Vienna RNA web server at http://rna.tbi.univie.ac.at/. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The abbreviated name, ‘mfold web server’, describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and ‘energy dot plots’, are available for the folding of single sequences. A variety of ‘bulk’ servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as ‘MFOLDROOT’. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Given that microRNAs select their targets by nucleotide base-pairing, it follows that it should be possible to find microRNA targets computationally. There has been considerable progress, but assessing success and biological significance requires a move into the 'wet' lab.
<s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The Sfold web server provides user-friendly access to Sfold, a recently developed nucleic acid folding software package, via the World Wide Web (WWW). The software is based on a new statistical sampling paradigm for the prediction of RNA secondary structure. One of the main objectives of this software is to offer computational tools for the rational design of RNA-targeting nucleic acids, which include small interfering RNAs (siRNAs), antisense oligonucleotides and trans-cleaving ribozymes for gene knock-down studies. The methodology for siRNA design is based on a combination of RNA target accessibility prediction, siRNA duplex thermodynamic properties and empirical design rules. Our approach to target accessibility evaluation is an original extension of the underlying RNA folding algorithm to account for the likely existence of a population of structures for the target mRNA. In addition to the application modules Sirna, Soligo and Sribo for siRNAs, antisense oligos and ribozymes, respectively, the module Srna offers comprehensive features for statistical representation of sampled structures. Detailed output in both graphical and text formats is available for all modules. The Sfold server is available at http://sfold.wadsworth.org and http://www.bioinfo.rpi.edu/applications/sfold. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are short RNAs that post-transcriptionally regulate the expression of target genes by binding to the target mRNAs. Although a large number of animal miRNAs have been defined, only a few targets are known.
In contrast to plant miRNAs, which usually bind nearly perfectly to their targets, animal miRNAs bind less tightly, with a few nucleotides being unbound, thus producing more complex secondary structures of miRNA/target duplexes. Here, we present a program, RNA-hybrid, that predicts multiple potential binding sites of miRNAs in large target RNAs. In general, the program finds the energetically most favorable hybridization sites of a small RNA in a large RNA. Intramolecular hybridizations, that is, base pairings between target nucleotides or between miRNA nucleotides are not allowed. For large targets, the time complexity of the algorithm is linear in the target length, allowing many long targets to be searched in a short time. Statistical significance of predicted targets is assessed with an extreme value statistics of length normalized minimum free energies, a Poisson approximation of multiple binding sites, and the calculation of effective numbers of orthologous targets in comparative studies of multiple organisms. We applied our method to the prediction of Drosophila miRNA targets in 3'UTRs and coding sequence. RNAhybrid, with its accompanying programs RNAcalibrate and RNAeffective, is available for download and as a Web tool on the Bielefeld Bioinformatics Server (http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/). <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> BackgroundMicroRNAs (miRNAs) mediate a form of translational regulation in animals. Hundreds of animal miRNAs have been identified, but only a few of their targets are known. Prediction of miRNA targets for translational regulation is challenging, since the interaction with the target mRNA usually occurs via incomplete and interrupted base pairing. 
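Hybridization-energy scanning of the kind RNAhybrid performs — sliding the miRNA along a long target and keeping the energetically most favorable binding site — can be illustrated with a deliberately simplified sketch. The per-pair scores and sequences below are invented for illustration only; real tools use nearest-neighbor thermodynamics and also allow the bulges and internal loops that animal miRNA:target duplexes typically contain, which this ungapped toy scan does not.

```python
# Toy hybridization-site scan: slide the miRNA along the target and keep
# the window with the lowest toy "energy". PAIR_SCORE values are invented
# pseudo-energies, not measured thermodynamic parameters.
PAIR_SCORE = {("A", "U"): -2.0, ("U", "A"): -2.0,   # Watson-Crick pairs
              ("G", "C"): -3.0, ("C", "G"): -3.0,
              ("G", "U"): -1.0, ("U", "G"): -1.0}   # G:U wobble pairs

def site_energy(mirna, window):
    """Score an ungapped duplex: miRNA (5'->3') against a target window (5'->3')."""
    # The miRNA binds the target antiparallel, so pair it with the reversed window.
    return sum(PAIR_SCORE.get((m, t), 1.0)          # +1.0 penalty per mismatch
               for m, t in zip(mirna, reversed(window)))

def best_site(mirna, target):
    """Return (start, energy) of the energetically best window in the target."""
    n = len(mirna)
    sites = [(i, site_energy(mirna, target[i:i + n]))
             for i in range(len(target) - n + 1)]
    return min(sites, key=lambda s: s[1])

mirna = "UGAGGUAG"                     # let-7-like 5' fragment (illustrative)
target = "AACUACCUCAAGGAUACCUCAGG"     # hypothetical 3'UTR fragment
print(best_site(mirna, target))        # the perfectly complementary site wins
```

Note that the scan makes a single pass over the target, so its cost grows linearly with target length, mirroring the linear time complexity claimed for RNAhybrid.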
Moreover, the rules that govern such interactions are incompletely defined. Results: MovingTargets is a software program that allows a researcher to predict a set of miRNA targets that satisfy an adjustable set of biological constraints. We used MovingTargets to identify a high-likelihood set of 83 miRNA targets in Drosophila, all of which adhere to strict biological constraints. We tested and verified 3 of these predictions in cultured cells, including a target for the Drosophila let-7 homolog. In addition, we utilized the flexibility of MovingTargets by relaxing the biological constraints to identify and validate miRNAs targeting tramtrack, a gene also known to be subject to translational control dependent on the RNA binding protein Musashi. Conclusion: MovingTargets is a flexible tool for the accurate prediction of miRNA targets in Drosophila. MovingTargets can be used to conduct a genome-wide search of miRNA targets using all Drosophila miRNAs and potential targets, or it can be used to conduct a focused search for miRNAs targeting a specific gene. In addition, the values for a set of biological constraints used to define a miRNA target are adjustable, allowing the software to incorporate the rules used to characterize a miRNA target as these rules are experimentally determined and interpreted. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs are small noncoding RNAs that serve as posttranscriptional regulators of gene expression in higher eukaryotes. Their widespread and important role in animals is highlighted by recent estimates that 20%-30% of all genes are microRNA targets. Here, we report that a large set of genes involved in basic cellular processes avoid microRNA regulation due to short 3'UTRs that are specifically depleted of microRNA binding sites.
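The adjustable-constraints idea behind MovingTargets can be sketched as a plain filter whose thresholds are data rather than code, so they can be relaxed or tightened as the targeting rules become better defined experimentally. All field names, threshold values and candidate records below are hypothetical.

```python
# MovingTargets-style prediction sketch: candidates pass only if they satisfy
# every biological constraint, and the constraints themselves are adjustable.
DEFAULT_CONSTRAINTS = {
    "min_seed_pairs": 7,         # required consecutive 5' seed base pairs (toy value)
    "max_duplex_energy": -15.0,  # kcal/mol-like threshold (toy value)
    "min_sites_per_utr": 2,      # require multiple sites in the 3'UTR
}

def passes(candidate, constraints=DEFAULT_CONSTRAINTS):
    """True if a candidate miRNA-target record satisfies every constraint."""
    return (candidate["seed_pairs"] >= constraints["min_seed_pairs"]
            and candidate["duplex_energy"] <= constraints["max_duplex_energy"]
            and candidate["n_sites"] >= constraints["min_sites_per_utr"])

def predict(candidates, constraints=DEFAULT_CONSTRAINTS):
    return [c for c in candidates if passes(c, constraints)]

candidates = [
    {"gene": "geneA", "seed_pairs": 8, "duplex_energy": -19.2, "n_sites": 3},
    {"gene": "geneB", "seed_pairs": 6, "duplex_energy": -22.0, "n_sites": 4},
]
print([c["gene"] for c in predict(candidates)])        # strict constraints
relaxed = dict(DEFAULT_CONSTRAINTS, min_seed_pairs=6)  # relax the seed rule
print([c["gene"] for c in predict(candidates, relaxed)])
```

Relaxing a single threshold admits additional candidates, which is how MovingTargets was used to recover the tramtrack-targeting miRNAs described above.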
For individual microRNAs, we find that coexpressed genes avoid microRNA sites, whereas target genes and microRNAs are preferentially expressed in neighboring tissues. This mutually exclusive expression argues that microRNAs confer accuracy to developmental gene-expression programs, thus ensuring tissue identity and supporting cell-lineage decisions. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. 
<s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> We present a new microRNA target prediction algorithm called TargetBoost, and show that the algorithm is stable and identifies more true targets than do existing algorithms. TargetBoost uses machine learning on a set of validated microRNA targets in lower organisms to create weighted sequence motifs that capture the binding characteristics between microRNAs and their targets. Existing algorithms require candidates to have (1) near-perfect complementarity between microRNAs' 5' end and their targets; (2) relatively high thermodynamic duplex stability; (3) multiple target sites in the target's 3' UTR; and (4) evolutionary conservation of the target between species. Most algorithms use one of the two first requirements in a seeding step, and use the three others as filters to improve the method's specificity. The initial seeding step determines an algorithm's sensitivity and also influences its specificity. As all algorithms may add filters to increase the specificity, we propose that methods should be compared before such filtering. We show that TargetBoost's weighted sequence motif approach is favorable to using both the duplex stability and the sequence complementarity steps. (TargetBoost is available as a Web tool from http://www.interagon.com/demo/.). <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The DINAMelt web server simulates the melting of one or two single-stranded nucleic acids in solution. The goal is to predict not just a melting temperature for a hybridized pair of nucleic acids, but entire equilibrium melting profiles as a function of temperature. The two molecules are not required to be complementary, nor must the two strand concentrations be equal. 
Competition among different molecular species is automatically taken into account. Calculations consider not only the heterodimer, but also the two possible homodimers, as well as the folding of each single-stranded molecule. For each of these five molecular species, free energies are computed by summing Boltzmann factors over every possible hybridized or folded state. For temperatures within a user-specified range, calculations predict species mole fractions together with the free energy, enthalpy, entropy and heat capacity of the ensemble. Ultraviolet (UV) absorbance at 260 nm is simulated using published extinction coefficients and computed base pair probabilities. All results are available as text files and plots are provided for species concentrations, heat capacity and UV absorbance versus temperature. This server is connected to an active research program and should evolve as new theory and software are developed. The server URL is http://www.bioinfo.rpi.edu/applications/hybrid/. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary We present rna22, a method for identifying microRNA binding sites and their corresponding heteroduplexes. Rna22 does not rely upon cross-species conservation, is resilient to noise, and, unlike previous methods, it first finds putative microRNA binding sites in the sequence of interest, then identifies the targeting microRNA. Computationally, we show that rna22 identifies most of the currently known heteroduplexes. Experimentally, with luciferase assays, we demonstrate average repressions of 30% or more for 168 of 226 tested targets. The analysis suggests that some microRNAs may have as many as a few thousand targets, and that between 74% and 92% of the gene transcripts in four model genomes are likely under microRNA control through their untranslated and amino acid coding regions.
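The Boltzmann-weighting idea underlying melting-profile servers such as DINAMelt can be reduced to a two-state toy model: weight the folded and unfolded states of a single strand by their Boltzmann factors and read off the folded fraction as a function of temperature. The ΔH/ΔS values below are illustrative, not measured, and a real server sums over every hybridized or folded state of up to five molecular species rather than just two.

```python
import math

# Two-state melting sketch: folded fraction f(T) = K / (1 + K), where
# K = exp(-dG / RT) is the folded/unfolded equilibrium constant and
# dG = dH - T * dS. At the melting temperature dG = 0, so f(Tm) = 0.5.
R = 0.0019872  # gas constant, kcal/(mol*K)

def fraction_folded(dH, dS, temp_c):
    """Folded fraction at temp_c (Celsius); dH in kcal/mol, dS in kcal/(mol*K)."""
    T = temp_c + 273.15
    K = math.exp(-(dH - T * dS) / (R * T))
    return K / (1.0 + K)

def melting_temp_c(dH, dS):
    """Temperature (Celsius) at which dG = dH - T*dS = 0."""
    return dH / dS - 273.15

dH, dS = -45.0, -0.125   # hypothetical hairpin: folding is enthalpy-driven
tm = melting_temp_c(dH, dS)
print(round(tm, 1))                            # melting temperature in Celsius
print(round(fraction_folded(dH, dS, tm), 3))   # about half folded at Tm
```

Sweeping `fraction_folded` over a temperature range yields the kind of melting profile the server plots; heat capacity and UV absorbance curves are derived from the same ensemble quantities.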
We also extended the method's key idea to a low-error microRNA-precursor-discovery scheme; our studies suggest that the number of microRNA precursors in mammalian genomes likely ranges in the tens of thousands. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: The accurate prediction of a comprehensive set of messenger RNAs (targets) regulated by animal microRNAs (miRNAs) remains an open problem. In particular, the prediction of targets that do not possess evolutionarily conserved complementarity to their miRNA regulators is not adequately addressed by current tools. Results: We have developed MicroTar, an animal miRNA target prediction tool based on miRNA-target complementarity and thermodynamic data. The algorithm uses predicted free energies of unbound mRNA and putative mRNA-miRNA heterodimers, implicitly addressing the accessibility of the mRNA 3' untranslated region. MicroTar does not rely on evolutionary conservation to discern functional targets, and is able to predict both conserved and non-conserved targets. MicroTar source code and predictions are accessible at http://tiger.dbs.nus.edu.sg/microtar/, where both serial and parallel versions of the program can be downloaded under an open-source licence. Conclusion: MicroTar achieves better sensitivity than previously reported predictions when tested on three distinct datasets of experimentally-verified miRNA-target interactions in C. elegans, Drosophila, and mouse. <s> BIB014 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs are key regulators of gene expression, but the precise mechanisms underlying their interaction with their mRNA targets are still poorly understood.
Here, we systematically investigate the role of target-site accessibility, as determined by base-pairing interactions within the mRNA, in microRNA target recognition. We experimentally show that mutations diminishing target accessibility substantially reduce microRNA-mediated translational repression, with effects comparable to those of mutations that disrupt sequence complementarity. We devise a parameter-free model for microRNA-target interaction that computes the difference between the free energy gained from the formation of the microRNA-target duplex and the energetic cost of unpairing the target to make it accessible to the microRNA. This model explains the variability in our experiments, predicts validated targets more accurately than existing algorithms, and shows that genomes accommodate site accessibility by preferentially positioning targets in highly accessible regions. Our study thus demonstrates that target accessibility is a critical factor in microRNA function. <s> BIB015 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are small noncoding RNAs that repress protein synthesis by binding to target messenger RNAs. We investigated the effect of target secondary structure on the efficacy of repression by miRNAs. Using structures predicted by the Sfold program, we model the interaction between an miRNA and a target as a two-step hybridization reaction: nucleation at an accessible target site followed by hybrid elongation to disrupt local target secondary structure and form the complete miRNA-target duplex. This model accurately accounts for the sensitivity to repression by let-7 of various mutant forms of the Caenorhabditis elegans lin-41 3′ untranslated region and for other experimentally tested miRNA-target interactions in C. elegans and Drosophila melanogaster.
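The accessibility-based scoring just described combines the energy gained from duplex formation with the cost of unpairing the target site. A minimal sketch, using our own sign conventions (duplex energy negative, opening cost positive, so a lower net score is a better site) and made-up numbers; in practice both terms would come from an RNA folding tool:

```python
# Accessibility-aware site score: net energy = duplex gain + opening cost.
# dG_duplex < 0 is the energy gained by forming the miRNA:target duplex;
# dG_open > 0 is the cost of freeing the site from target secondary structure.
def interaction_score(dG_duplex, dG_open):
    """Net interaction energy; more negative means a stronger predicted site."""
    return dG_duplex + dG_open

# Two hypothetical sites with identical duplex stability but different
# structural accessibility of the target region:
accessible = interaction_score(dG_duplex=-21.0, dG_open=2.5)   # nearly unpaired site
occluded = interaction_score(dG_duplex=-21.0, dG_open=14.0)    # buried in a hairpin
print(accessible, occluded)  # the accessible site scores far better

def rank_sites(sites):
    """Rank (name, dG_duplex, dG_open) tuples, best net score first."""
    return sorted(sites, key=lambda s: interaction_score(s[1], s[2]))

print(rank_sites([("occluded", -21.0, 14.0), ("accessible", -21.0, 2.5)]))
```

The toy numbers reproduce the qualitative claim of the abstract: with sequence complementarity held fixed, occluding the site can erase most of the predicted interaction strength.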
These findings indicate a potent effect of target structure on target recognition by miRNAs and establish a structure-based framework for genome-wide identification of animal miRNA targets. <s> BIB016 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary AU-rich elements (AREs), present in mRNA 3′-UTRs, are potent posttranscriptional regulatory signals that can rapidly effect changes in mRNA stability and translation, thereby dramatically altering gene expression with clinical and developmental consequences. In human cell lines, the TNFα ARE enhances translation relative to mRNA levels upon serum starvation, which induces cell-cycle arrest. An in vivo crosslinking-coupled affinity purification method was developed to isolate ARE-associated complexes from activated versus basal translation conditions. We surprisingly found two microRNP-related proteins, fragile-X-mental-retardation-related protein 1 (FXR1) and Argonaute 2 (AGO2), that associate with the ARE exclusively during translation activation. Through tethering and shRNA-knockdown experiments, we provide direct evidence for the translation activation function of both FXR1 and AGO2 and demonstrate their interdependence for upregulation. This novel cell-growth-dependent translation activation role for FXR1 and AGO2 allows new insights into ARE-mediated signaling and connects two important posttranscriptional regulatory systems in an unexpected way. <s> BIB017 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: MicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage.
The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods, research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved. Results: We present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include the majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by a 1.99 signal-to-noise ratio and by computational complexity that is linear in the length of the mRNA sequence.
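The seeding step that the traditional predictors share — a linear-time scan of the 3'UTR for perfect Watson-Crick matches to miRNA seed positions 2-8 — can be sketched as follows. The miRNA and UTR sequences are made up for illustration.

```python
# Linear-time seed-match scan: derive the target 7-mer complementary to
# miRNA positions 2-8, then find its occurrences in a 3'UTR in one pass.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_site(mirna):
    """The target 7-mer (5'->3') complementary to miRNA positions 2-8."""
    seed = mirna[1:8]  # positions 2-8 in 1-based biological numbering
    # Reverse-complement, because the duplex is antiparallel.
    return "".join(COMPLEMENT[b] for b in reversed(seed))

def scan_utr(mirna, utr):
    """All start positions of the seed-match site within the UTR."""
    site = seed_match_site(mirna)
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"    # let-7a-like sequence (illustrative)
utr = "AAACUACCUCACCGGCUACCUCAGCG"  # hypothetical 3'UTR with two seed sites
print(seed_match_site(mirna), scan_utr(mirna, utr))
```

Methods like HuMiTar then go beyond this baseline by also scoring the non-seed positions of each candidate duplex rather than treating every seed match equally.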
The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates. Conclusion: The proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. <s> BIB018 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means in deciphering the role of microRNAs in development and disease. Results: DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal-to-noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction. Conclusion: Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs, reaching approximately 66%.
The DIANA-microT 3.0 prediction results are available online in a user-friendly web server at http://www.microrna.gr/microT <s> BIB019 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: MicroRNAs (miRNAs) are single-stranded non-coding RNAs known to regulate a wide range of cellular processes by silencing the gene expression at the protein and/or mRNA levels. Computational prediction of miRNA targets is essential for elucidating the detailed functions of miRNA. However, the prediction specificity and sensitivity of the existing algorithms are still too poor to generate meaningful, workable hypotheses for subsequent experimental testing. Constructing a richer and more reliable training data set and developing an algorithm that properly exploits this data set would be the key to improving the performance of current prediction algorithms. Results: A comprehensive training data set is constructed for mammalian miRNAs with its positive targets obtained from the most up-to-date miRNA target depository called miRecords and its negative targets derived from 20 microarray datasets. A new algorithm SVMicrO is developed, which assumes a 2-stage structure including a site support vector machine (SVM) followed by a UTR-SVM. SVMicrO makes predictions based on 21 optimal site features and 18 optimal UTR features, selected by training from a comprehensive collection of 113 site and 30 UTR features. Comprehensive evaluation of SVMicrO performance has been carried out on the training data, proteomics data, and immunoprecipitation (IP) pull-down data. Comparisons with some popular algorithms demonstrate consistent improvements in prediction specificity, sensitivity and precision in all tested cases.
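SVMicrO's two-stage structure — a site-level model that scores each candidate duplex, followed by a UTR-level model that aggregates the site scores with UTR-wide features — can be sketched with tiny fixed linear scorers standing in for the trained SVMs. The feature names and weights below are invented; the point is only how the two stages compose.

```python
# Two-stage scoring sketch in the spirit of SVMicrO: stage 1 scores each
# candidate site, stage 2 combines the site scores with UTR-level features.
# The real stages are SVMs trained over 21 site and 18 UTR features.
SITE_WEIGHTS = {"seed_pairs": 0.6, "duplex_energy": -0.1, "conservation": 1.5}
UTR_WEIGHTS = {"best_site": 1.0, "n_sites": 0.4, "utr_length": -0.001}

def site_score(site):
    """Stage 1: linear score of one candidate site's features."""
    return sum(SITE_WEIGHTS[k] * site[k] for k in SITE_WEIGHTS)

def utr_score(sites, utr_length):
    """Stage 2: aggregate the site scores with UTR-level features."""
    scores = [site_score(s) for s in sites]
    feats = {"best_site": max(scores), "n_sites": len(sites),
             "utr_length": utr_length}
    return sum(UTR_WEIGHTS[k] * feats[k] for k in UTR_WEIGHTS)

sites = [
    {"seed_pairs": 7, "duplex_energy": -18.0, "conservation": 0.9},
    {"seed_pairs": 6, "duplex_energy": -14.0, "conservation": 0.2},
]
print(round(utr_score(sites, utr_length=900), 3))
```

In the real system the final UTR score, not any single site score, decides whether a gene is predicted as a target, which is why the two stages are trained separately.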
All the related materials including source code and genome-wide prediction of human targets are available at http://compgenomics.utsa.edu/svmicro.html. Conclusions: A new 2-stage SVM-based miRNA target prediction algorithm called SVMicrO is developed. SVMicrO is shown to be able to achieve robust performance. It holds the promise of continuing improvement whenever better training data that contain additional verified or high-confidence positive targets and properly selected negative targets are available. <s> BIB020 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Considering accessibility of the 3′UTR is believed to increase the precision of microRNA target predictions. We show that, contrary to common belief, ranking by the hybridization energy or by the sum of the opening and hybridization energies, used in currently available algorithms, is not an efficient way to rank predictions. Instead, we describe an algorithm which also considers only the accessible binding sites but which ranks predictions according to over-representation. When compared with experimentally validated and refuted targets in the fruit fly and human, our algorithm shows a remarkable improvement in precision while significantly reducing the computational cost in comparison with other free-energy-based methods. In the human genome, our algorithm has at least twice the precision of other methods with their default parameters. In the fruit fly, we find five times more validated targets among the top 500 predictions than other methods with their default parameters. Furthermore, using a common statistical framework we demonstrate explicitly the advantages of using the canonical ensemble instead of using the minimum free energy structure alone. We also find that ‘naive’ global folding sometimes outperforms the local folding approach.
<s> BIB021 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary MicroRNAs (miRNAs) are endogenous ∼22-nucleotide RNAs that play important gene-regulatory roles by pairing to the mRNAs of protein-coding genes to direct their repression. Repression of these regulatory targets leads to decreased translational efficiency and/or decreased mRNA levels, but the relative contributions of these two outcomes have been largely unknown, particularly for endogenous targets expressed at low-to-moderate levels. Here, we use ribosome profiling to measure the overall effects on protein production and compare these to simultaneously measured effects on mRNA levels. For both ectopic and endogenous miRNA regulatory interactions, lowered mRNA levels account for most (≥84%) of the decreased protein production. These results show that changes in mRNA levels closely reflect the impact of miRNAs on gene expression and indicate that destabilization of target mRNAs is the predominant reason for reduced protein output. <s> BIB022 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> microRNAs (miRNAs) are small endogenous RNA molecules that are implicated in many biological processes through post-transcriptional regulation of gene expression. The DIANA-microT Web server provides a user-friendly interface for comprehensive computational analysis of miRNA targets in human and mouse. The server has now been extended to support predictions for two widely studied species: Drosophila melanogaster and Caenorhabditis elegans. In the updated version, the Web server enables the association of miRNAs to diseases through bibliographic analysis and provides insights for the potential involvement of miRNAs in biological processes. 
The nomenclature used to describe mature miRNAs across different miRBase versions has been extensively analyzed, and the naming history of each miRNA has been extracted. This enables the identification of miRNA publications regardless of possible nomenclature changes. User interaction has been further refined, allowing users to save results that they wish to analyze further. A connection to the UCSC genome browser is now provided, enabling users to easily preview predicted binding sites in comparison to a wide array of genomic tracks, such as single nucleotide polymorphisms. The Web server is publicly accessible at www.microrna.gr/microT-v4. <s> BIB023 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> In animals, RNA binding proteins (RBPs) and microRNAs (miRNAs) post-transcriptionally regulate the expression of virtually all genes by binding to RNA. Recent advances in experimental and computational methods facilitate transcriptome-wide mapping of these interactions. It is thought that the combinatorial action of RBPs and miRNAs on target mRNAs forms a posttranscriptional regulatory code. We provide a database that supports the quest for deciphering this regulatory code. Within doRiNA, we are systematically curating, storing and integrating binding site data for RBPs and miRNAs. Users are free to take a target (mRNA) or regulator (RBP and/or miRNA) centric view on the data. We have implemented a database framework with short query response times for complex searches (e.g. asking for all targets of a particular combination of regulators). All search results can be browsed, inspected and analyzed in conjunction with a huge selection of other genome-wide data, because our database is directly linked to a local copy of the UCSC genome browser. At the time of writing, doRiNA encompasses RBP data for the human, mouse and worm genomes.
For computational miRNA target site predictions, we provide an update of PicTar predictions. <s> BIB024 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The last few decades have seen an increasing interest in the development and application of 1-dimensional (1D) descriptors of protein structure. These descriptors project 3D structural features onto 1D strings of residue-wise structural assignments. They cover a wide range of structural aspects including conformation of the backbone, burying depth/solvent exposure and flexibility of residues, and inter-chain residue-residue contacts. We perform a first-of-its-kind comprehensive comparative review of the existing 1D structural descriptors. We define, review and categorize ten structural descriptors and we also describe, summarize and contrast over eighty computational models that are used to predict these descriptors from the protein sequences. We show that the majority of the recent sequence-based predictors utilize machine learning models, with the most popular being neural networks, support vector machines, hidden Markov models, and support vector and linear regressions. These methods provide high-throughput predictions and most of them are accessible to a non-expert user via web servers and/or stand-alone software packages. We empirically evaluate several recent sequence-based predictors of secondary structure, disorder, and solvent accessibility descriptors using a benchmark set based on CASP8 targets. Our analysis shows that the secondary structure can be predicted with over 80% accuracy and segment overlap (SOV), disorder with over 0.9 AUC, 0.6 Matthews Correlation Coefficient (MCC), and 75% SOV, and relative solvent accessibility with PCC of 0.7 and MCC of 0.6 (0.86 when homology is used).
We demonstrate that the secondary structure predicted from sequence without the use of homology modeling is as good as the structure extracted from the 3D folds predicted by top-performing template-based methods. <s> BIB025 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to the lack of a gold standard of negative examples, miRNA-targeting site context-specific relevant features and an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for the target transcripts that are translationally repressed at the protein level. Methodology/Principal Findings: In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and the selection of biologically relevant miRNA-targeting site context-specific features. The features are selected by using a novel feature selection technique AMOSA-SVM, which integrates the multiobjective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM.
Conclusions/Significance: MultiMiTar achieves a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from −0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list, which makes MultiMiTar reliable for biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. <s> BIB026 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: Many computational microRNA target prediction tools are focused on several key features, including complementarity to the 5' seed of miRNAs and evolutionary conservation. While these features allow for successful target identification, not all miRNA target sites are conserved or adhere to canonical seed complementarity. Several studies have propagated the use of energy features of mRNA:miRNA duplexes as an alternative feature. However, different independent evaluations reported conflicting results on the reliability of energy-based predictions. Here, we reassess the usefulness of energy features for mammalian target prediction, aiming to relax or eliminate the need for perfect seed matches and the conservation requirement.
Methodology/Principal Findings: We detect significant differences in energy features at experimentally supported human miRNA target sites and at genome-wide sites of AGO protein interaction. This trend is confirmed on datasets that assay the effect of miRNAs on mRNA and protein expression changes, and a simple linear regression model leads to significant correlation of predicted versus observed expression change. Compared to 6-mer seed matches as baseline, application of our energy-based model leads to ∼3-5-fold enrichment on highly down-regulated targets, and allows for prediction of strictly imperfect targets with enrichment above baseline. Conclusions/Significance: In conclusion, our results indicate significant promise for energy-based miRNA target prediction that includes a broader range of targets without having to use conservation or impose stringent seed match rules. <s> BIB027 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> A number of web tools are available for the prediction and identification of target microRNAs (miRNAs). The choice, availability, validity and selection of an optimal yet appropriate tool are a challenge for the design of high throughput assays with promising miRNA targets. The current trends and challenges in the prediction, identification and selection of miRNA targets are described in this review. <s> BIB028 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary: Although small non-coding RNAs, such as microRNAs, have well-established functions in the cell, long non-coding RNAs (lncRNAs) have only recently started to emerge as abundant regulators of cell physiology, and their functions may be diverse.
A small number of studies describe interactions between small RNAs and lncRNAs, with lncRNAs acting either as inhibitory decoys or as regulatory targets of microRNAs, but such interactions are still poorly explored. To facilitate the study of microRNA–lncRNA interactions, we implemented miRcode: a comprehensive searchable map of putative microRNA target sites across the complete GENCODE annotated transcriptome, including 10 419 lncRNA genes in the current version. Availability: http://www.mircode.org. Contact: [email protected]. Supplementary Information: Supplementary data are available at Bioinformatics online. <s> BIB029 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: MicroRNA (miRNA) target genes tend to have relatively long and conserved 3' untranslated regions (UTRs), but to what degree these characteristics contribute to miRNA targeting is poorly understood. Different high-throughput experiments have, for example, shown that miRNAs preferentially regulate genes with both short and long 3' UTRs and that target site conservation is both important and irrelevant for miRNA targeting. Results: We have analyzed several gene context-dependent features, including 3' UTR length, 3' UTR conservation, and messenger RNA (mRNA) expression levels, reported to have conflicting influence on miRNA regulation. By taking into account confounding factors such as technology-dependent experimental bias and competition between transfected and endogenous miRNAs, we show that two factors - target gene expression and competition - could explain most of the previously reported experimental differences.
Moreover, we find that these and other target site-independent features explain about the same amount of variation in target gene expression as the target site-dependent features included in the TargetScan model. Conclusions: Our results show that it is important to consider confounding factors when interpreting miRNA high-throughput experiments and urge special caution when using microarray data to compare average regulatory effects between groups of genes that have different average gene expression levels. <s> BIB030 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. The DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it has been widely used by the scientific community since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis.
The DIANA-microT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. <s> BIB031 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained.
<s> BIB032 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) have emerged as a novel class of endogenous posttranscriptional regulators in a variety of animal and plant species. One challenge facing miRNA research is to accurately identify the target mRNAs, because of the very limited sequence complementarity between miRNAs and their target sites, and the scarcity of experimentally validated targets to guide accurate prediction. In this paper, we propose a new method called SuperMirTar that exploits supervised distance learning to predict miRNA targets. Specifically, we use the experimentally supported miRNA-mRNA pairs as a training set to learn a distance metric function that minimizes the distances between miRNAs and mRNAs with validated interactions, then use the learned function to calculate the distances of test miRNA-mRNA interactions, and those with smaller distances than a predefined threshold are regarded as true interactions. We carry out a performance comparison between the proposed approach and seven existing methods on independent datasets; the results show that our method achieves superior performance and can effectively narrow the gap between the number of predicted miRNA targets and the number of experimentally validated ones. <s> BIB033 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Motivation: The massive spread of repetitive elements in the human genome presents a substantial challenge to the organism, as such elements may accidentally contain seemingly functional motifs. A striking example is offered by the roughly one million copies of Alu repeats in the genome, of which ~0.5% reside within genes’ untranslated regions (UTRs), presenting ~30 000 novel potential targets for highly conserved microRNAs (miRNAs).
Here, we examine the functionality of miRNA targets within Alu elements in 3′UTRs in the human genome. Results: Using a comprehensive dataset of miRNA overexpression assays, we show that mRNAs with miRNA targets within Alus are significantly less responsive to the miRNA effects compared with mRNAs that have the same targets outside Alus. Using Ago2-binding mRNA profiling, we confirm that the miRNA machinery avoids miRNA targets within Alus, as opposed to the highly efficient binding of targets outside Alus. We propose three features that prevent potential miRNA sites within Alus from being recognized by the miRNA machinery: (i) Alu repeats that contain miRNA targets and genuine functional miRNA targets appear to reside in distinct mutually exclusive territories within 3′UTRs; (ii) Alus have tight secondary structure that may limit access to the miRNA machinery; and (iii) A-to-I editing of Alu-derived mRNA sequences may divert miRNA targets. The combination of these features is proposed to allow toleration of Alu insertions into mRNAs. Nonetheless, a subset of miRNA targets within Alus appears not to possess any of the aforementioned features, and thus may represent cases where Alu insertion in the genome has introduced novel functional miRNA targets. Contact: [email protected] or [email protected]. Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB034 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Most of what is presently known about how miRNAs regulate gene expression comes from studies that characterized the regulatory effect of miRNA binding sites located in the 3' untranslated regions (UTR) of mRNAs.
In recent years, there has been increasing evidence that miRNAs also bind in the coding region (CDS), but the implication of these interactions remains obscure because they have a smaller impact on mRNA stability compared with miRNA-target interactions that involve 3' UTRs. Here we show that miRNA-complementary sites that are located in both CDS and 3'-UTRs are under selection pressure and share the same sequence and structure properties. Analyzing recently published data of ribosome-protected fragment profiles upon miRNA transfection from the perspective of the location of miRNA-complementary sites, we find that sites located in the CDS are most potent in inhibiting translation, while sites located in the 3' UTR are more efficient at triggering mRNA degradation. Our study suggests that miRNAs may combine targeting of CDS and 3' UTR to flexibly tune the time scale and magnitude of their post-transcriptional regulatory effects. <s> BIB035 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Finding microRNA targets in the coding region is difficult due to the overwhelming signal encoding the amino acid sequence. Here, we introduce an algorithm (called PACCMIT-CDS) that finds potential microRNA targets within coding sequences by searching for conserved motifs that are complementary to the microRNA seed region and also overrepresented in comparison with a background model preserving both codon usage and amino acid sequence. Precision and sensitivity of PACCMIT-CDS are evaluated using PAR-CLIP and proteomics data sets. Thanks to the properly constructed background, the new algorithm achieves a lower rate of false positives and better ranking of predictions than do currently available algorithms, which were designed to find microRNA targets within 3' UTRs. <s> BIB036 | We selected several representative predictors for the empirical evaluation. 
The selected methods have to be conveniently accessible to the end users via a web server or a precomputed database. They also have to cover human and mouse, predict target sites (to perform evaluation at the duplex level) and provide propensity (probability) of the interaction. Using these filters we selected eight methods (see Supplementary Table S6 ). We use the latest versions of these methods, except for PicTar2, which is substantially different from PicTar and no longer qualifies as a sequence-based predictor. PicTar 2005 was published in 2005; five methods including TargetScan 6.2, miRanda 2010, EIMMo3, miREE and mirTarget2 v4 were proposed or updated between 2010 and 2012; and two in 2013: DIANA-microT-CDS and miRmap v1.1. We excluded miREE from the evaluation because this method did not predict any targets on our TEST_duplex and TEST_gene data sets. The remaining seven methods use a diverse set of predictive models, with four that use heuristic scoring functions and three that use machine learning models, including a Bayesian classifier, a support vector machine (SVM) and regression. miRmap was built using gene expression data, while the other methods were derived from low-throughput experimentally validated data. We collected predictions for these methods using either their online web servers or downloadable precomputed predictions. We recorded their predicted binding targets (sequences or positions) and the corresponding propensities. We consider 38 sequence-based methods, from the earliest predictor that was published in 2003 to the latest method that was released in 2013; a chronological list of methods is shown in Table 1 . We exclude meta methods (because they are inconvenient to use and require the availability of results from base methods) and approaches that rely on experimental data.
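The selection procedure above amounts to a conjunction of simple filters over the candidate methods. The sketch below illustrates this with hypothetical method records; the names and flags are placeholders for illustration, not entries taken from the actual Supplementary Table S6.

```python
# Hypothetical records describing candidate predictors; the flag values
# are illustrative, not the survey's real supplementary data.
methods = [
    {"name": "TargetScan 6.2", "accessible": True, "human_mouse": True,
     "predicts_sites": True, "gives_propensity": True},
    {"name": "PicTar2", "accessible": True, "human_mouse": True,
     "predicts_sites": True, "gives_propensity": True,
     "sequence_based": False},  # excluded: no longer purely sequence-based
]

def qualifies(m):
    """Apply the availability/usability filters plus the sequence-based
    requirement (assumed True when a record does not state it)."""
    return (m["accessible"] and m["human_mouse"] and m["predicts_sites"]
            and m["gives_propensity"] and m.get("sequence_based", True))

selected = [m["name"] for m in methods if qualifies(m)]
```

Applying the filters to the two illustrative records keeps only the first one.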
Most of the miRNA target predictors were developed by different research groups, and several groups continue to maintain and update their algorithms. Cohen's group at EMBL proposed the first miRNA target predictor in 2003 BIB002 and updated it in 2005 BIB009 . TargetScan and TargetScanS were developed by Bartel at MIT and Burge at Cambridge . Another popular tool, DIANA-microT, which was created by the Hatzigeorgiou group, has been recently updated to version 5.0 BIB019 BIB023 BIB031 . Rajewsky's lab published their predictor PicTar in 2005 and updated it in 2011 BIB010 BIB024 .

Predictive methodologies and mechanistic basis of miRNA-mRNA interaction

Table 1 summarizes types of predictive models and the underlying details of the miRNA-mRNA interactions that they use to predict miRNA targets. There are two categories of predictive models: heuristic and empirical. The heuristic models use screening algorithms that search positions along the mRNA sequence and scoring functions that filter targets by combining values of several inputs in an ad hoc manner. Early predictors applied heuristic approaches owing to the lack of a sufficient amount of data to build the empirical knowledge-based models. Even today the scoring function-based designs are dominant (19 of 38 methods) because of their easy setup, flexibility to integrate different types of inputs and computational efficiency. The empirical models are inferred from a training data set. Given the success of machine learning-based models in bioinformatics BIB025 BIB032 and the growing size of the experimental data, since 2006 progressively more predictors use empirical machine learning models including SVMs, decision trees and artificial neural networks (ANNs). The predictive models use inputs that are derived from the knowledge of mechanistic details of the miRNA-mRNA interactions. The most commonly used predictive input is the complementarity of the base pairing between miRNA and mRNA.
In contrast to the near-perfect base pairing in plants BIB001 , animal miRNAs usually bind mRNAs with only some positions that are paired BIB005 . Complementarity of the base pairing in the seed region (the first eight nucleotides at the 5′ end of miRNAs) is particularly important; only six methods did not consider it. By comparison, 15 methods did not consider complementarity in the nonseed region. The major types of complementarity in the seed include 6-mer (six consecutive matches between the second and seventh positions from the 5′ end of the miRNA), 7-mer-A1 (extends the 6-mer with an adenine (A) nucleotide at the first position of the target's 3′ end), 7-mer-m8 (seven consecutive matches from the second to the eighth position of the miRNA) and 8-mer (combines 7-mer-m8 and 7-mer-A1). Some methods consider binding of the first eight nucleotides as important but do not restrict it to particular seed types. Moreover, several predictors (HuMiTar BIB018 , TargetMiner , MultiMiTar BIB026 , miREE and SuperMirTar BIB033 ) also suggest specific positions that are more useful for the prediction. These methods, except for HuMiTar, use machine learning models and empirical feature selection to find these positions. Another exception is that TargetBoost BIB011 , RNA22 BIB013 and SVMicrO BIB020 use patterns of complementarity generated from native miRNA:mRNA complexes, rather than focusing on the seed types. The site accessibility and evolutionary conservation inputs are used to increase specificity. The accessibility is relevant because miRNA:mRNA interaction requires binding of a relatively large RNA-induced silencing complex BIB028 . This input is quantified with the content of adenine and uracil nucleotides (AU content) and with the free energy that estimates the stability of the mRNA sequences. Most target predictors use existing software, like the Vienna RNA package BIB003 , mFold BIB004 , DINAMelt BIB012 and sFold BIB006 , to calculate the free energy.
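The four canonical seed-match classes defined above can be made concrete with a short sketch. The function below is an illustrative reimplementation of those textbook definitions, not code from any of the surveyed predictors; it assumes uppercase RNA sequences written 5′→3′ and scans a 3′UTR for seed matches of a given miRNA (at least eight nucleotides long).

```python
# Watson-Crick complements for uppercase RNA.
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Reverse complement of an uppercase RNA sequence."""
    return "".join(COMP[b] for b in reversed(seq))

def classify_seed_sites(mirna, utr):
    """Yield (position, site_type) for canonical seed matches of `mirna`
    (5'->3') inside the 3'UTR `utr` (5'->3')."""
    seed6 = revcomp(mirna[1:7])          # matches miRNA positions 2-7
    for i in range(len(utr) - 5):
        if utr[i:i + 6] != seed6:
            continue
        # Pairing is antiparallel: the mRNA base opposite miRNA position 8
        # lies 5' of the match, and the base opposite position 1 lies 3' of it.
        m8 = i > 0 and utr[i - 1] == COMP[mirna[7]]
        a1 = i + 6 < len(utr) and utr[i + 6] == "A"
        if m8 and a1:
            yield i, "8mer"
        elif m8:
            yield i, "7mer-m8"
        elif a1:
            yield i, "7mer-A1"
        else:
            yield i, "6mer"
```

For example, for let-7a (UGAGGUAGUAGGUUGUAUAGUU) the seed match UACCUC flanked by a base complementary to miRNA position 8 on its 5′ side and an A on its 3′ side is reported as an 8mer site.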
The authors of RNAhybrid claim that their own approach prevents intramolecular base pairing and bulge loops, which leads to improved estimates of the free energy BIB007 ; this approach was also used in the predictor by Stark et al. BIB009 and in SuperMirTar BIB033 . Most predictors calculate the free energy of the miRNA-target duplexes. However, several methods (MicroTar BIB014 , STarMir BIB015 , PITA BIB016 , TargetMiner , SVMicrO BIB020 , PACMIT BIB021 and miREE ) calculate the arguably more relevant relative energy, which is the hybridization energy gained by miRNA:mRNA binding minus the disruption energy lost by opening up the local mRNA structure of the target. Several studies found that enriched AU content in mRNA 3′ untranslated regions (UTRs) is important for interaction with miRNAs BIB034 BIB017 . This was exploited in 2003 in TargetScan, even before experimental data to that effect was published . Since then several methods have used this information (see 'AU %' column in Table 1 ). Use of the evolutionary conservation of miRNA targets is motivated by the premise that 'similar' species should share common miRNAs and their targets. However, this leads to omission of the nonconserved targets BIB014 BIB027 . [Table 1 caption: We summarize key aspects including model type, region that is searched to predict targets and inclusion of several mechanistic properties that are known to provide useful inputs for prediction, such as complementarity between miRNA and mRNA, site accessibility and conservation across species; … means that a given aspect was …] The value of including target conservation remains an open question; Table 1 reveals that conservation is used less frequently in recent years. Still, methods that search for targets in long coding DNA segments (CDSs) use conservation to improve specificity BIB008 BIB029 BIB035 BIB036 .
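The two accessibility-related inputs discussed above reduce to simple arithmetic once the folding energies are known. The sketch below is an illustration only: in practice the ΔG values come from folding software such as the Vienna RNA package, and the exact definitions vary between predictors.

```python
def au_content(flank):
    """Fraction of A/U nucleotides in the sequence around a candidate site
    (uppercase RNA); returns 0.0 for an empty string."""
    if not flank:
        return 0.0
    return sum(b in "AU" for b in flank) / len(flank)

def relative_energy(dg_duplex, dg_open):
    """Net interaction energy: hybridization energy gained by miRNA:mRNA
    binding (dg_duplex, negative = favorable) minus the disruption energy
    lost by opening the local mRNA structure around the site (dg_open)."""
    return dg_duplex - dg_open
```

A site whose duplex gains -20 kcal/mol but whose local structure costs -5 kcal/mol to open nets -15 kcal/mol; a more negative net value indicates a more accessible, more stable interaction.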
Based on the observation that targeting of multiple sites enhances mRNA regulation BIB022 BIB030 , 17 of the 38 methods increase the propensity of binding to a target gene with multiple predicted sites (see 'Multiple sites' column in Table 1 ). |
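One simple way to let multiple predicted sites raise a gene-level propensity is to combine per-site probabilities under an independence assumption (a generic noisy-OR combination, shown here as an illustration rather than the formula of any particular surveyed method):

```python
def gene_propensity(site_probs):
    """P(gene is targeted) when each predicted site is treated as an
    independent chance of functional binding (noisy-OR combination)."""
    p_miss = 1.0
    for p in site_probs:
        p_miss *= 1.0 - p   # probability that this site is non-functional
    return 1.0 - p_miss
```

Under this scheme, adding a second site with probability 0.5 raises the gene-level propensity from 0.5 to 0.75, matching the intuition that more predicted sites imply stronger regulation.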
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> MicroRNAs (miRNAs) are short RNA molecules that regulate gene expression by binding to target messenger RNAs and by controlling protein production or causing RNA cleavage. To date, functions have been assigned to only a few of the hundreds of identified miRNAs, in part because of the difficulty in identifying their targets. The short length of miRNAs and the fact that their complementarity to target sequences is imperfect mean that target identification in animal genomes is not possible by standard sequence comparison methods. Here we screen conserved 3′ UTR sequences from the Drosophila melanogaster genome for potential miRNA targets. The screening procedure combines a sequence search with an evaluation of the predicted miRNA–target heteroduplex structures and energies. We show that this approach successfully identifies the five previously validated let-7, lin-4, and bantam targets from a large database and predicts new targets for Drosophila miRNAs. Our target predictions reveal striking clusters of functionally related targets among the top predictions for specific miRNAs. These include Notch target genes for miR-7, proapoptotic genes for the miR-2 family, and enzymes from a metabolic pathway for miR-277. We experimentally verified three predicted targets each for miR-7 and the miR-2 family, doubling the number of validated targets for animal miRNAs. Statistical analysis indicates that the best single predicted target sites are at the border of significance; thus, target predictions should be considered tentative until experimentally validated. We identify features shared by all validated targets that can be used to evaluate target predictions for animal miRNAs. Our initial evaluation and experimental validation of target predictions suggest functions for two miRNAs.
For others, the screen suggests plausible functions, such as a role for miR-277 as a metabolic switch controlling amino acid catabolism. Cross-genome comparison proved essential, as it allows reduction of the sequence search space. Improvements in genome annotation and increased availability of cDNA sequences from other genomes will allow more sensitive screens. An increase in the number of confirmed targets is expected to reveal general structural features that can be used to improve their detection. While the screen is likely to miss some targets, our study shows that valid targets can be identified from sequence alone. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> A new paradigm of gene expression regulation has emerged recently with the discovery of microRNAs (miRNAs). Most, if not all, miRNAs are thought to control gene expression, mostly by base pairing with miRNA-recognition elements (MREs) found in their messenger RNA (mRNA) targets. Although a large number of human miRNAs have been reported, many of their mRNA targets remain unknown. Here we used a combined bioinformatics and experimental approach to identify important rules governing miRNA-MRE recognition that allow prediction of human miRNA targets. We describe a computational program, "DIANA-microT", that identifies mRNA targets for animal miRNAs and predicts mRNA targets, bearing single MREs, for human and mouse miRNAs. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> MicroRNAs (miRNAs) are short RNAs that post-transcriptionally regulate the expression of target genes by binding to the target mRNAs. Although a large number of animal miRNAs has been defined, only a few targets are known. 
In contrast to plant miRNAs, which usually bind nearly perfectly to their targets, animal miRNAs bind less tightly, with a few nucleotides being unbound, thus producing more complex secondary structures of miRNA/target duplexes. Here, we present a program, RNA-hybrid, that predicts multiple potential binding sites of miRNAs in large target RNAs. In general, the program finds the energetically most favorable hybridization sites of a small RNA in a large RNA. Intramolecular hybridizations, that is, base pairings between target nucleotides or between miRNA nucleotides, are not allowed. For large targets, the time complexity of the algorithm is linear in the target length, allowing many long targets to be searched in a short time. Statistical significance of predicted targets is assessed with extreme value statistics of length-normalized minimum free energies, a Poisson approximation of multiple binding sites, and the calculation of effective numbers of orthologous targets in comparative studies of multiple organisms. We applied our method to the prediction of Drosophila miRNA targets in 3'UTRs and coding sequence. RNAhybrid, with its accompanying programs RNAcalibrate and RNAeffective, is available for download and as a Web tool on the Bielefeld Bioinformatics Server (http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/). <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs.
We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> Background: MicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes.
However, in the case of the traditional methods, research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved. Results: We present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include the majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by a 1.99 signal-to-noise ratio and by computational complexity that is linear with respect to the length of the mRNA sequence. The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates. Conclusion: The proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> Background: Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA.
Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites.ResultsWe developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences.In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms.ConclusionOnly a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. 
Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> BackgroundMicroRNAs (miRNAs) are single-stranded non-coding RNAs known to regulate a wide range of cellular processes by silencing the gene expression at the protein and/or mRNA levels. Computational prediction of miRNA targets is essential for elucidating the detailed functions of miRNA. However, the prediction specificity and sensitivity of the existing algorithms are still poor to generate meaningful, workable hypotheses for subsequent experimental testing. Constructing a richer and more reliable training data set and developing an algorithm that properly exploits this data set would be the key to improve the performance current prediction algorithms.ResultsA comprehensive training data set is constructed for mammalian miRNAs with its positive targets obtained from the most up-to-date miRNA target depository called miRecords and its negative targets derived from 20 microarray data. A new algorithm SVMicrO is developed, which assumes a 2-stage structure including a site support vector machine (SVM) followed by a UTR-SVM. SVMicrO makes prediction based on 21 optimal site features and 18 optimal UTR features, selected by training from a comprehensive collection of 113 site and 30 UTR features. Comprehensive evaluation of SVMicrO performance has been carried out on the training data, proteomics data, and immunoprecipitation (IP) pull-down data. Comparisons with some popular algorithms demonstrate consistent improvements in prediction specificity, sensitivity and precision in all tested cases. 
All the related materials including source code and genome-wide prediction of human targets are available at http://compgenomics.utsa.edu/svmicro.html.ConclusionsA 2-stage SVM based new miRNA target prediction algorithm called SVMicrO is developed. SVMicrO is shown to be able to achieve robust performance. It holds the promise to achieve continuing improvement whenever better training data that contain additional verified or high confidence positive targets and properly selected negative targets are available. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> Considering accessibility of the 3′UTR is believed to increase the precision of microRNA target predictions. We show that, contrary to common belief, ranking by the hybridization energy or by the sum of the opening and hybridization energies, used in currently available algorithms, is not an efficient way to rank predictions. Instead, we describe an algorithm which also considers only the accessible binding sites but which ranks predictions according to over-representation. When compared with experimentally validated and refuted targets in the fruit fly and human, our algorithm shows a remarkable improvement in precision while significantly reducing the computational cost in comparison with other free energy based methods. In the human genome, our algorithm has at least twice higher precision than other methods with their default parameters. In the fruit fly, we find five times more validated targets among the top 500 predictions than other methods with their default parameters. Furthermore, using a common statistical framework we demonstrate explicitly the advantages of using the canonical ensemble instead of using the minimum free energy structure alone. We also find that ‘naive’ global folding sometimes outperforms the local folding approach. 
<s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> BackgroundMicroRNAs (miRNAs) play an essential task in gene regulatory networks by inhibiting the expression of target mRNAs. As their mRNA targets are genes involved in important cell functions, there is a growing interest in identifying the relationship between miRNAs and their target mRNAs. So, there is now an imperative need to develop a computational method by which we can identify the target mRNAs of existing miRNAs. Here, we proposed an efficient machine learning model to unravel the relationship between miRNAs and their target mRNAs.ResultsWe present a novel computational architecture MTar for miRNA target prediction which reports 94.5% sensitivity and 90.5% specificity. We identified 16 positional, thermodynamic and structural parameters from the wet lab proven miRNA:mRNA pairs and MTar makes use of these parameters for miRNA target identification. It incorporates an Artificial Neural Network (ANN) verifier which is trained by wet lab proven microRNA targets. A number of hitherto unknown targets of many miRNA families were located using MTar. The method identifies all three potential miRNA targets (5' seed-only, 5' dominant, and 3' canonical) whereas the existing solutions focus on 5' complementarities alone.ConclusionMTar, an ANN based architecture for identifying functional regulatory miRNA-mRNA interaction using predicted miRNA targets. The area of target prediction has received a new momentum with the function of a thermodynamic model incorporating target accessibility. This model incorporates sixteen structural, thermodynamic and positional features of residues in miRNA:mRNA pairs that were employed to select target candidates. So our novel machine learning architecture, MTar is found to be more comprehensive than the existing methods in predicting miRNA targets, especially the human transcriptome.
<s> BIB009 | We used a comprehensive set of evaluation measures to assess the predictions of the miRNA:target duplexes and miRNA-gene pairs. Each prediction takes two forms: a binary value that indicates whether a given duplex or miRNA-gene pair is predicted to be functional; and the real-valued probability (propensity) of a given predicted interaction. The binary predictions were assessed using the following seven measures: where true positives (TP) and true negatives (TN) are the counts of correctly predicted functional and nonfunctional miRNA targets, respectively, and false positives (FP) and false negatives (FN) are the counts of incorrectly predicted functional and nonfunctional miRNA targets, respectively. The values of the Matthews Correlation Coefficient (MCC) range between −1 and 1, with 0 for random predictions and higher values denoting more accurate predictions. MCC provides a robust measurement for skewed data sets (when the numbers of positive and negative outcomes are unbalanced), which is the case with our TEST_duplex data set. Signal-to-Noise Ratio (SNR) of correctly over incorrectly predicted functional targets was calculated in several prior works BIB001 BIB002 BIB004 BIB003 . We computed the SNR of predicted functional (SNR+) and also nonfunctional samples (SNR−) to provide a complete set of measures. Given the skewed counts of native (true) functional and nonfunctional samples in our data sets, we normalized the SNR values as follows: where P_duplex (P_gene) and N_duplex (N_gene) are the numbers of native (true) functional and nonfunctional duplexes (genes) in the TEST_duplex (TEST_gene) data set. The overall count of predicted functional targets is assessed using the Predicted-to-Native positive Ratio (PNR) = predicted_functional_count/true_functional_count. PNR indicates whether a given predictor overpredicts (PNR value > 1) or underpredicts (PNR value < 1) the number of functional miRNA targets.
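The binary measures above can be computed directly from the TP/TN/FP/FN counts. The snippet below is an illustrative Python sketch, not the authors' code: MCC and PNR follow the definitions in the text, while snr_plus/snr_minus are the plain (unnormalized) ratios, since the exact normalization by the native class counts (P_duplex, N_duplex, etc.) is not reproduced here.

```python
# Illustrative sketch of the binary evaluation measures described above.
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient: ranges from -1 to 1, 0 for random."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def snr_plus(tp, fp):
    """Plain ratio of correctly over incorrectly predicted functional targets."""
    return tp / fp if fp else float("inf")

def snr_minus(tn, fn):
    """Plain ratio of correctly over incorrectly predicted nonfunctional targets."""
    return tn / fn if fn else float("inf")

def pnr(tp, fp, fn):
    """Predicted-to-Native positive Ratio:
    predicted_functional_count / true_functional_count."""
    return (tp + fp) / (tp + fn)
```

A PNR value above 1 flags overprediction of functional targets; MCC, unlike accuracy, stays near 0 for a random predictor even on the skewed class distributions of the benchmark data sets.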
The real-valued propensities were assessed using the receiver operating characteristic (ROC) curve, which represents the relation between true-positive rates (TPR) = TP/(TP + FN) and false-positive rates (FPR) = FP/(FP + TN). The ROC curves reflect a trade-off between sensitivity and specificity, providing comprehensive information about the predictive performance. We compute the area under the ROC curve (AUC) that ranges between 0 (for a method that does not predict TP) and 1 (for a perfect predictor), with 0.5 denoting a random predictor. Except for the PNR and SNR−, which we introduced, and the normalization of the SNR+ and SNR− values that is motivated by the unbalanced nature of the benchmark data sets, the other criteria were used to evaluate some of the prior predictors BIB005 BIB007 BIB008 BIB006 BIB009 (see column 'Criteria' in Table 2 ). We also evaluate statistical significance of differences in predictive performance between predictors. We randomly choose 50% of a given data set, calculate the predictive performance and repeat this 10 times. The corresponding 10 pairs of results (to compare a given pair of predictors) are evaluated with the Student's t-test if the distributions are normal; otherwise we use the Mann-Whitney test. The distribution type is verified using the Anderson-Darling test at a P-value threshold of 0.05. |
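As a concrete illustration of the AUC assessment (a sketch under simplifying assumptions, not the benchmark's implementation), the area under the ROC curve equals the probability that a randomly chosen functional sample receives a higher propensity than a randomly chosen nonfunctional one, which allows a direct pairwise computation:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the rank-sum identity: the fraction of (functional,
    nonfunctional) pairs ranked correctly by the predicted propensity,
    with ties counted as 1/2. Returns 0.5 for a random predictor."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Example: 3 of the 4 (functional, nonfunctional) pairs are ranked
# correctly, so roc_auc([0.8, 0.4], [0.6, 0.2]) -> 0.75
```

The significance protocol described above would then repeat such a computation on ten random halves of the data set and compare the resulting paired values between two predictors with the Student's t-test or the Mann-Whitney test.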
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundMicroRNAs (miRNAs) are small noncoding RNAs, which play significant roles as posttranscriptional regulators. The functions of animal miRNAs are generally based on complementarity for their 5' components. Although several computational miRNA target-gene prediction methods have been proposed, they still have limitations in revealing actual target genes.ResultsWe implemented miTarget, a support vector machine (SVM) classifier for miRNA target gene prediction. It uses a radial basis function kernel as a similarity measure for SVM features, categorized by structural, thermodynamic, and position-based features. The latter features are introduced in this study for the first time and reflect the mechanism of miRNA binding. The SVM classifier produces high performance with a biologically relevant data set obtained from the literature, compared with previous tools. We predicted significant functions for human miR-1, miR-124a, and miR-373 using Gene Ontology (GO) analysis and revealed the importance of pairing at positions 4, 5, and 6 in the 5' region of a miRNA from a feature selection experiment. We also provide a web interface for the program.ConclusionmiTarget is a reliable miRNA target gene prediction tool and is a successful application of an SVM classifier. Compared with previous tools, its predictions are meaningful by GO analysis and its performance can be improved given more training examples. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> MicroRNAs (miRNAs) are ~22-nt RNA segments that are involved in the regulation of protein expression primarily by binding to one or more target sites on an mRNA transcript and inhibiting translation. 
MicroRNAs are likely to factor into multiple developmental pathways, multiple mechanisms of gene regulation, and underlie an array of inherited disease processes and phenotypic determinants. Several computational programs exist to predict miRNA targets in mammals, fruit flies, worms, and plants. However, to date, there is no systematic collection and description of miRNA targets with experimental support. We describe a database, TarBase, which houses a manually curated collection of experimentally tested miRNA targets, in human/mouse, fruit fly, worm, and zebrafish, distinguishing between those that tested positive and those that tested negative. Each positive target site is described by the miRNA that binds it, the gene in which it occurs, the nature of the experiments that were conducted to test it, the sufficiency of the site to induce translational repression and/or cleavage, and the paper from which all these data were extracted. Additionally, the database is functionally linked to several other useful databases such as Gene Ontology (GO) and UCSC Genome Browser. TarBase reveals significantly more experimentally supported targets than even recent reviews claim, thereby providing a comprehensive data set from which to assess features of miRNA targeting that will be useful for the next generation of target prediction programs. TarBase can be accessed at http://www.diana.pcbi.upenn.edu/tarbase. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundMicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. 
Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved.ResultsWe present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by 1.99 signal-to-noise ratio and linear, with respect to the length of the mRNA sequence, computational complexity. The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates.ConclusionThe proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. 
<s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundVirtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites.ResultsWe developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences.In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms.ConclusionOnly a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. 
Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundMicroRNAs (miRNAs) play an essential task in gene regulatory networks by inhibiting the expression of target mRNAs. As their mRNA targets are genes involved in important cell functions, there is a growing interest in identifying the relationship between miRNAs and their target mRNAs. So, there is now a imperative need to develop a computational method by which we can identify the target mRNAs of existing miRNAs. Here, we proposed an efficient machine learning model to unravel the relationship between miRNAs and their target mRNAs.ResultsWe present a novel computational architecture MTar for miRNA target prediction which reports 94.5% sensitivity and 90.5% specificity. We identified 16 positional, thermodynamic and structural parameters from the wet lab proven miRNA:mRNA pairs and MTar makes use of these parameters for miRNA target identification. It incorporates an Artificial Neural Network (ANN) verifier which is trained by wet lab proven microRNA targets. A number of hitherto unknown targets of many miRNA families were located using MTar. 
The method identifies all three potential miRNA targets (5' seed-only, 5' dominant, and 3' canonical) whereas the existing solutions focus on 5' complementarities alone.ConclusionMTar, an ANN based architecture for identifying functional regulatory miRNA-mRNA interaction using predicted miRNA targets. The area of target prediction has received a new momentum with the function of a thermodynamic model incorporating target accessibility. This model incorporates sixteen structural, thermodynamic and positional features of residues in miRNA:mRNA pairs that were employed to select target candidates. So our novel machine learning architecture, MTar is found to be more comprehensive than the existing methods in predicting miRNA targets, especially the human transcriptome. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> AbstractmirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> Background ::: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to lack of the gold standard of negative examples, miRNA-targeting site context specific relevant features and efficient feature selection process.
Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain considerable combination of precision and recall for the target transcripts that are translationally repressed at protein level. ::: Methodology/Principal Finding ::: In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context specific features. The features are selected by using a novel feature selection technique AMOSA-SVM, that integrates the multi objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. ::: Conclusions/Significance ::: MultiMiTar is found to achieve much higher Matthew’s correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the others target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from −0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list that makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. 
<s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. <s> BIB008 | Benchmark data sets used to develop and test the predictors and the corresponding evaluation procedures are summarized in Table 2 . Many early methods were designed/evaluated using data only from Drosophila owing to the limited availability of validated miRNA targets. However, even some early predictors (TargetScan BIB008 , DIANA-microT , miRanda BIB002 and TargetScanS ) considered higher eukaryotes. More recent methods generally cover more species. Interestingly, in 14 cases predictors were validated on test data sets but there was no mention of the data used to design these predictive models. This may mean that the test data were used in the design, e.g. to set thresholds and parameters. HuMiTar was the first method that was properly tested on an independent (from the training set) data set BIB003 . Even with the currently available relatively large number of validated miRNA targets, only a few recent predictors (TargetMiner , TargetSpy BIB004 , Mtar BIB005 , MultiMiTar BIB007 and miREE ) were trained and tested on different (independent) data sets.
Moreover, the sizes of some training data sets are relatively small (a few dozen samples) and some data sets are unbalanced and have more artificial nonfunctional targets than functional targets; some data sets use only a few validated nonfunctional targets. A particularly challenging aspect is the low number of experimentally validated nonfunctional samples, i.e. mRNAs validated not to interact with a given miRNA. Several early methods used artificial nonfunctional targets created by either shuffling miRNA sequences or by randomization of mRNAs; these approaches were criticized for generating unrealistic samples . More recent attempts scan the mRNA transcripts where validated target sites or Ago-binding sites are masked and use the target segments with at least 4-mer matches in the seed region or one mismatch or G:U wobble in the 6-mer seed as the nonfunctional samples BIB006 BIB005 BIB001 . This approach assumes that the knowledge of functional targets or Ago-binding sites is complete, while in fact these computationally generated nonfunctional miRNA-mRNA pairs could be functional. Some recent methods label as nonfunctional the genes that are overexpressed when particular miRNA mimics are added to cells, but data from this limited number of miRNAs may be biased. These various attempts to generate the benchmark data sets may result in mislabeling, overfitting of the training data sets, and unrealistic (possibly inflated) estimates of predictive performance. We also analyze the evaluation procedures. The early predictors were evaluated primarily based on the SNR between the number of predicted targets in functional genes and in true or artificial nonfunctional genes. PicTar was the first to report sensitivity, based on only 19 native targets. TargetBoost and miTarget were the first to use more informative ROC curves, but with the caveat of using artificial nonfunctional targets. The criteria used to evaluate predictive quality vary widely between methods.
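The seed-scanning heuristic used to generate candidate nonfunctional samples can be illustrated as follows. This is a simplified, hypothetical sketch: the function names, the pairing rules, and the classification thresholds are illustrative assumptions, not the exact procedure of any of the cited tools.

```python
# Simplified sketch of seed-based site scanning (hypothetical helper names).
# Canonical Watson-Crick pairs and G:U wobbles between the miRNA seed
# (positions 2-7, 5'->3') and an antiparallel 3'UTR window.
WATSON_CRICK = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
WOBBLE = {("G", "U"), ("U", "G")}

def seed_pairing(mirna, utr_window):
    """Return (#Watson-Crick pairs, #G:U wobbles) between the 6-mer seed
    and a 6-nt UTR window read 3'->5'."""
    seed = mirna[1:7]              # miRNA positions 2-7
    target = utr_window[::-1]      # antiparallel orientation
    wc = sum((m, t) in WATSON_CRICK for m, t in zip(seed, target))
    wob = sum((m, t) in WOBBLE for m, t in zip(seed, target))
    return wc, wob

def classify_site(mirna, utr_window):
    """Perfect 6-mer seed match -> 'seed_match'; exactly one mismatch or
    one G:U wobble -> 'candidate_negative', mirroring the masking
    heuristics described above; anything weaker is ignored."""
    wc, wob = seed_pairing(mirna, utr_window)
    if wc == 6:
        return "seed_match"
    if wc == 5 and wob <= 1:
        return "candidate_negative"
    return None
```

For example, against let-7a (UGAGGUAGUAGGUUGUAUAGUU) the UTR window UACCUC is a perfect 6-mer seed match, while UACCUU pairs the first seed position through a G:U wobble and would be collected as a candidate nonfunctional site.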
Some measures are biased by the composition of the data set (e.g. accuracy and precision) and provide an incomplete picture (e.g. sensitivity without specificity and vice versa). This makes comparisons across predictors virtually impossible. The standards for comparing methods are also relatively low, as in most cases evaluation did not include statistical tests. On the positive side, the assessment of several methods included experimental validation of targets. The authors of the RNA22 method performed a large-scale validation and claimed that 168 of 226 tested targets were repressed; however, they did not determine whether these targets were bound by the specific miRNAs. Some, primarily older, methods also included functional analysis of the predicted targets. Table 3 shows that miRNA target predictors are available to the end users as web servers, stand-alone packages, precomputed data sets and upon request. The 21 methods that are provided as web servers are convenient for ad hoc (occasional) users. The 13 stand-alone packages are suitable for users who anticipate high-throughput use and/or who would like to include them in their local software platforms; most of these are also available as web servers. Precomputed results, which are convenient to collect, are provided for 10 methods. However, these predictions may not be updated in a timely manner and do not include results for novel miRNAs.
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs (miRNAs) are short RNAs that post-transcriptionally regulate the expression of target genes by binding to the target mRNAs. Although a large number of animal miRNAs has been defined, only a few targets are known. In contrast to plant miRNAs, which usually bind nearly perfectly to their targets, animal miRNAs bind less tightly, with a few nucleotides being unbound, thus producing more complex secondary structures of miRNA/target duplexes. Here, we present a program, RNA-hybrid, that predicts multiple potential binding sites of miRNAs in large target RNAs. In general, the program finds the energetically most favorable hybridization sites of a small RNA in a large RNA. Intramolecular hybridizations, that is, base pairings between target nucleotides or between miRNA nucleotides are not allowed. For large targets, the time complexity of the algorithm is linear in the target length, allowing many long targets to be searched in a short time. Statistical significance of predicted targets is assessed with an extreme value statistics of length normalized minimum free energies, a Poisson approximation of multiple binding sites, and the calculation of effective numbers of orthologous targets in comparative studies of multiple organisms. We applied our method to the prediction of Drosophila miRNA targets in 3'UTRs and coding sequence. RNAhybrid, with its accompanying programs RNAcalibrate and RNAeffective, is available for download and as a Web tool on the Bielefeld Bioinformatics Server (http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/). <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Comprehensive identification of all functional elements encoded in the human genome is a fundamental need in biomedical research. 
Here, we present a comparative analysis of the human, mouse, rat and dog genomes to create a systematic catalogue of common regulatory motifs in promoters and 3' untranslated regions (3' UTRs). The promoter analysis yields 174 candidate motifs, including most previously known transcription-factor binding sites and 105 new motifs. The 3'-UTR analysis yields 106 motifs likely to be involved in post-transcriptional regulation. Nearly one-half are associated with microRNAs (miRNAs), leading to the discovery of many new miRNA genes and their likely target genes. Our results suggest that previous estimates of the number of human miRNA genes were low, and that miRNAs regulate at least 20% of human genes. The overall results provide a systematic view of gene regulation in the human, which will be refined as additional mammalian genomes become available. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Regulationofpost-transcriptionalgeneexpressionby microRNAs(miRNA)hassofarbeenvalidatedforonly a few mRNA targets. Based on the large number of miRNA genes and the possibility that one miRNA might influence gene expression of several targets simultaneously, the quantity of ribo-regulated genes is expected to be much higher. Here, we describe the web tool MicroInspector that will analyse a userdefined RNA sequence, which is typically an mRNA or a part of an mRNA, for the occurrence of binding sites for known and registered miRNAs. The program allows variation of temperature, the setting of energy values as well as the selection of different miRNA databasestoidentifymiRNA-bindingsitesofdifferent strength. MicroInspector could spot the correct sites for miRNA-interaction in known target mRNAs. Using other mRNAs, for which such an interaction has not yet been described, we discovered frequently potential miRNA binding sites of similar quality, which can now be analysed experimentally. 
The MicroInspector program is easy to use and does not require specific computer skills. The service can be accessed via the MicroInspector web server at http://www.imbb. forth.gr/microinspector. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> We present a new microRNA target prediction algorithm called TargetBoost, and show that the algorithm is stable and identifies more true targets than do existing algorithms. TargetBoost uses machine learning on a set of validated microRNA targets in lower organisms to create weighted sequence motifs that capture the binding characteristics between microRNAs and their targets. Existing algorithms require candidates to have (1) near-perfect complementarity between microRNAs' 5' end and their targets; (2) relatively high thermodynamic duplex stability; (3) multiple target sites in the target's 3' UTR; and (4) evolutionary conservation of the target between species. Most algorithms use one of the two first requirements in a seeding step, and use the three others as filters to improve the method's specificity. The initial seeding step determines an algorithm's sensitivity and also influences its specificity. As all algorithms may add filters to increase the specificity, we propose that methods should be compared before such filtering. We show that TargetBoost's weighted sequence motif approach is favorable to using both the duplex stability and the sequence complementarity steps. (TargetBoost is available as a Web tool from http://www.interagon.com/demo/.). <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs are small noncoding RNAs that serve as posttranscriptional regulators of gene expression in higher eukaryotes. 
Their widespread and important role in animals is highlighted by recent estimates that 20%-30% of all genes are microRNA targets. Here, we report that a large set of genes involved in basic cellular processes avoid microRNA regulation due to short 3'UTRs that are specifically depleted of microRNA binding sites. For individual microRNAs, we find that coexpressed genes avoid microRNA sites, whereas target genes and microRNAs are preferentially expressed in neighboring tissues. This mutually exclusive expression argues that microRNAs confer accuracy to developmental gene-expression programs, thus ensuring tissue identity and supporting cell-lineage decisions. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. 
Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Summary: We present rna22, a method for identifying microRNA binding sites and their corresponding heteroduplexes. Rna22 does not rely upon cross-species conservation, is resilient to noise, and, unlike previous methods, it first finds putative microRNA binding sites in the sequence of interest, then identifies the targeting microRNA. Computationally, we show that rna22 identifies most of the currently known heteroduplexes. Experimentally, with luciferase assays, we demonstrate average repressions of 30% or more for 168 of 226 tested targets. The analysis suggests that some microRNAs may have as many as a few thousand targets, and that between 74% and 92% of the gene transcripts in four model genomes are likely under microRNA control through their untranslated and amino acid coding regions. We also extended the method's key idea to a low-error microRNA-precursor-discovery scheme; our studies suggest that the number of microRNA precursors in mammalian genomes likely ranges in the tens of thousands. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Background: The accurate prediction of a comprehensive set of messenger RNAs (targets) regulated by animal microRNAs (miRNAs) remains an open problem.
In particular, the prediction of targets that do not possess evolutionarily conserved complementarity to their miRNA regulators is not adequately addressed by current tools. Results: We have developed MicroTar, an animal miRNA target prediction tool based on miRNA-target complementarity and thermodynamic data. The algorithm uses predicted free energies of unbound mRNA and putative mRNA-miRNA heterodimers, implicitly addressing the accessibility of the mRNA 3' untranslated region. MicroTar does not rely on evolutionary conservation to discern functional targets, and is able to predict both conserved and non-conserved targets. MicroTar source code and predictions are accessible at http://tiger.dbs.nus.edu.sg/microtar/, where both serial and parallel versions of the program can be downloaded under an open-source licence. Conclusion: MicroTar achieves better sensitivity than previously reported predictions when tested on three distinct datasets of experimentally-verified miRNA-target interactions in C. elegans, Drosophila, and mouse. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Background: MicroRNAs have emerged as important regulatory genes in a variety of cellular processes and, in recent years, hundreds of such genes have been discovered in animals. In contrast, functional annotations are available only for a very small fraction of these miRNAs, and even in these cases only partially. Results: We developed a general Bayesian method for the inference of miRNA target sites, in which, for each miRNA, we explicitly model the evolution of orthologous target sites in a set of related species. Using this method we predict target sites for all known miRNAs in flies, worms, fish, and mammals.
By comparing our predictions in fly with a reference set of experimentally tested miRNA-mRNA interactions we show that our general method performs at least as well as the most accurate methods available to date, including ones specifically tailored for target prediction in fly. An important novel feature of our model is that it explicitly infers the phylogenetic distribution of functional target sites, independently for each miRNA. This allows us to infer species-specific and clade-specific miRNA targeting. We also show that, in long human 3' UTRs, miRNA target sites occur preferentially near the start and near the end of the 3' UTR. To characterize miRNA function beyond the predicted lists of targets we further present a method to infer significant associations between the sets of targets predicted for individual miRNAs and specific biochemical pathways, in particular those of the KEGG pathway database. We show that this approach retrieves several known functional miRNA-mRNA associations, and predicts novel functions for known miRNAs in cell growth and in development. Conclusion: We have presented a Bayesian target prediction algorithm without any tunable parameters, that can be applied to sequences from any clade of species. The algorithm automatically infers the phylogenetic distribution of functional sites for each miRNA, and assigns a posterior probability to each putative target site. The results presented here indicate that our general method achieves very good performance in predicting miRNA target sites, providing at the same time insights into the evolution of target sites for individual miRNAs. Moreover, by combining our predictions with pathway analysis, we propose functions of specific miRNAs in nervous system development, inter-cellular communication and cell growth. The complete target site predictions as well as the miRNA/pathway associations are accessible on the ElMMo web server.
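The central output of such a Bayesian predictor is a posterior probability for each putative site. A toy illustration of this kind of Bayesian update follows; the prior and likelihood ratio are hypothetical numbers for exposition, not quantities inferred by ElMMo:

```python
def posterior_functional(prior, likelihood_ratio):
    """Bayes update: posterior probability that a putative site is
    functional, given a prior and a likelihood ratio
    P(evidence | functional) / P(evidence | background)."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

# Hypothetical numbers: a 10% prior combined with 9:1 evidence (e.g.
# conservation of the site across several species) gives a 50% posterior.
print(round(posterior_functional(0.10, 9.0), 6))  # 0.5
```

Evidence such as cross-species conservation enters through the likelihood ratio, which is how phylogenetic information can raise or lower the posterior for an individual site.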
<s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs are key regulators of gene expression, but the precise mechanisms underlying their interaction with their mRNA targets are still poorly understood. Here, we systematically investigate the role of target-site accessibility, as determined by base-pairing interactions within the mRNA, in microRNA target recognition. We experimentally show that mutations diminishing target accessibility substantially reduce microRNA-mediated translational repression, with effects comparable to those of mutations that disrupt sequence complementarity. We devise a parameter-free model for microRNA-target interaction that computes the difference between the free energy gained from the formation of the microRNA-target duplex and the energetic cost of unpairing the target to make it accessible to the microRNA. This model explains the variability in our experiments, predicts validated targets more accurately than existing algorithms, and shows that genomes accommodate site accessibility by preferentially positioning targets in highly accessible regions. Our study thus demonstrates that target accessibility is a critical factor in microRNA function. <s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNA.org (http://www.microrna.org) is a comprehensive resource of microRNA target predictions and expression profiles. Target predictions are based on a development of the miRanda algorithm which incorporates current biological knowledge on target rules and on the use of an up-to-date compendium of mammalian microRNAs. MicroRNA expression profiles are derived from a comprehensive sequencing project of a large set of mammalian tissues and cell lines of normal and disease origin. 
Using an improved graphical interface, a user can explore (i) the set of genes that are potentially regulated by a particular microRNA, (ii) the implied cooperativity of multiple microRNAs on a particular mRNA and (iii) microRNA expression profiles in various tissues. To facilitate future updates and development, the microRNA.org database structure and software architecture is flexibly designed to incorporate new expression and target discoveries. The web resource provides users with functional information about the growing number of microRNAs and their interaction with target genes in many species and facilitates novel discoveries in microRNA gene regulation. <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Summary: AU-rich elements (AREs), present in mRNA 3′-UTRs, are potent posttranscriptional regulatory signals that can rapidly effect changes in mRNA stability and translation, thereby dramatically altering gene expression with clinical and developmental consequences. In human cell lines, the TNFα ARE enhances translation relative to mRNA levels upon serum starvation, which induces cell-cycle arrest. An in vivo crosslinking-coupled affinity purification method was developed to isolate ARE-associated complexes from activated versus basal translation conditions. We surprisingly found two microRNP-related proteins, fragile-X-mental-retardation-related protein 1 (FXR1) and Argonaute 2 (AGO2), that associate with the ARE exclusively during translation activation. Through tethering and shRNA-knockdown experiments, we provide direct evidence for the translation activation function of both FXR1 and AGO2 and demonstrate their interdependence for upregulation.
This novel cell-growth-dependent translation activation role for FXR1 and AGO2 allows new insights into ARE-mediated signaling and connects two important posttranscriptional regulatory systems in an unexpected way. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs (miRNAs) are small noncoding RNAs that repress protein synthesis by binding to target messenger RNAs. We investigated the effect of target secondary structure on the efficacy of repression by miRNAs. Using structures predicted by the Sfold program, we model the interaction between an miRNA and a target as a two-step hybridization reaction: nucleation at an accessible target site followed by hybrid elongation to disrupt local target secondary structure and form the complete miRNA-target duplex. This model accurately accounts for the sensitivity to repression by let-7 of various mutant forms of the Caenorhabditis elegans lin-41 3' untranslated region and for other experimentally tested miRNA-target interactions in C. elegans and Drosophila melanogaster. These findings indicate a potent effect of target structure on target recognition by miRNAs and establish a structure-based framework for genome-wide identification of animal miRNA targets. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Background: MicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs.
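Both PITA's parameter-free model (BIB010) and the two-step hybridization model above reduce to the same bookkeeping: the free energy gained by forming the miRNA-target duplex minus the energy spent opening the structured target site. A sketch with hypothetical energy values (the function name and numbers are illustrative, not taken from either tool):

```python
def accessibility_score(dg_duplex, dg_free, dg_unpaired):
    """ddG = dG_duplex - dG_open (all kcal/mol; more negative = better).

    dg_duplex:   energy gained by miRNA:target duplex formation (< 0).
    dg_free:     ensemble free energy of the unconstrained target mRNA.
    dg_unpaired: ensemble free energy with the site forced open
                 (less negative, since structures are removed).
    dG_open = dg_free - dg_unpaired is <= 0, so subtracting it
    penalizes sites buried in stable secondary structure.
    """
    dg_open = dg_free - dg_unpaired
    return dg_duplex - dg_open

# Hypothetical energies: the same duplex scores better (more negative)
# at an accessible site than at a strongly structured one.
print(accessibility_score(-20.0, dg_free=-30.0, dg_unpaired=-28.5))  # -18.5
print(accessibility_score(-20.0, dg_free=-30.0, dg_unpaired=-18.0))  # -8.0
```

In practice the two ensemble energies would come from an RNA folding package; the point of the sketch is only the sign convention of the accessibility penalty.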
Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved. Results: We present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include the majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by a 1.99 signal-to-noise ratio and linear, with respect to the length of the mRNA sequence, computational complexity. The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates. Conclusion: The proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs.
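The HuMiTar idea of scoring base pairs at both seed and non-seed positions can be sketched as a position-weighted sum; the weights and the Watson-Crick-only pairing rule below are illustrative assumptions, not the published scoring function:

```python
WATSON_CRICK = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def weighted_pair_score(mirna, target, seed_w=1.0, nonseed_w=0.4):
    """Sum position weights over Watson-Crick pairs between a 5'->3'
    miRNA and an aligned target fragment read 3'->5'. Seed positions
    2-8 get a higher weight than non-seed positions; the weights here
    are illustrative, not HuMiTar's fitted values."""
    score = 0.0
    for pos, (m, t) in enumerate(zip(mirna, target), start=1):
        if (m, t) in WATSON_CRICK:
            score += seed_w if 2 <= pos <= 8 else nonseed_w
    return score

# A fully paired 8-mer: position 1 contributes 0.4, positions 2-8
# contribute 1.0 each.
print(round(weighted_pair_score("UGAGGUAG", "ACUCCAUC"), 2))  # 7.4
```

A real predictor would learn one weight per miRNA position (e.g. emphasizing positions 11, 13, 14, 17 and 18 noted above) rather than a single seed/non-seed split.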
<s> BIB014 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users are facilitated by being able to search for targeted genes using different nomenclatures or functional features, such as the gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that helps in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. In that assessment, DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT. <s> BIB015 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Computational identification of putative microRNA (miRNA) targets is an important step towards elucidating miRNA functions.
Several miRNA target-prediction algorithms have been developed followed by publicly available databases of these predictions. Here we present a new database offering miRNA target predictions of several binding types, identified by our recently developed modular algorithm RepTar. RepTar is based on identification of repetitive elements in 3′-UTRs and is independent of both evolutionary conservation and conventional binding patterns (i.e. Watson–Crick pairing of ‘seed’ regions). The modularity of RepTar enables the prediction of targets with conventional seed sites as well as rarer targets with non-conventional sites, such as sites with seed wobbles (G-U pairing in the seed region), 3′-compensatory sites and the newly discovered centered sites. Furthermore, RepTar’s independence of conservation enables the prediction of cellular targets of the less evolutionarily conserved viral miRNAs. Thus, the RepTar database contains genome-wide predictions of human and mouse miRNAs as well as predictions of cellular targets of human and mouse viral miRNAs. These predictions are presented in a user-friendly database, which allows browsing through the putative sites as well as conducting simple and advanced queries including data intersections of various types. The RepTar database is available at http://reptar.ekmd.huji.ac.il. <s> BIB016 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Background: MicroRNAs (miRNAs) play an essential role in gene regulatory networks by inhibiting the expression of target mRNAs. As their mRNA targets are genes involved in important cell functions, there is a growing interest in identifying the relationship between miRNAs and their target mRNAs. So, there is now an imperative need to develop a computational method by which we can identify the target mRNAs of existing miRNAs.
Here, we proposed an efficient machine learning model to unravel the relationship between miRNAs and their target mRNAs. Results: We present a novel computational architecture MTar for miRNA target prediction which reports 94.5% sensitivity and 90.5% specificity. We identified 16 positional, thermodynamic and structural parameters from the wet lab proven miRNA:mRNA pairs and MTar makes use of these parameters for miRNA target identification. It incorporates an Artificial Neural Network (ANN) verifier which is trained by wet lab proven microRNA targets. A number of hitherto unknown targets of many miRNA families were located using MTar. The method identifies all three potential miRNA targets (5' seed-only, 5' dominant, and 3' canonical) whereas the existing solutions focus on 5' complementarities alone. Conclusion: MTar is an ANN-based architecture for identifying functional regulatory miRNA-mRNA interactions using predicted miRNA targets. The area of target prediction has received a new momentum with the introduction of thermodynamic models incorporating target accessibility. Sixteen structural, thermodynamic and positional features of residues in miRNA:mRNA pairs were employed to select target candidates. Thus our novel machine learning architecture, MTar, is found to be more comprehensive than the existing methods in predicting miRNA targets, especially in the human transcriptome. <s> BIB017 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Animal miRNAs are a large class of small regulatory RNAs that are known to directly and negatively regulate the expression of a large fraction of all protein encoding genes. The identification and characterization of miRNA targets is thus a fundamental problem in biology.
miRNAs regulate target genes by binding to 3′ untranslated regions (3′UTRs) of target mRNAs, and multiple binding sites for the same miRNA in 3′UTRs can strongly enhance the degree of regulation. Recent experiments have demonstrated that a large fraction of miRNA binding sites reside in coding sequences. Overall, miRNA binding sites in coding regions were shown to mediate smaller regulation than 3′UTR binding. However, possible interactions between target sites in coding sequences and 3′UTRs have not been studied. Using transcriptomics and proteomics data of ten miRNA mis-expression experiments as well as transcriptome-wide experimentally identified miRNA target sites, we found that mRNA and protein expression of genes containing target sites both in coding regions and 3′UTRs were in general mildly but significantly more regulated than those containing target sites in 3′UTRs only. These effects were stronger for conserved target sites of length 7–8 nt in coding regions compared to non-conserved sites. Combined with our other finding that miRNA target sites in coding regions are under negative selection, our results shed light on the functional importance of miRNA targeting in coding regions. <s> BIB018 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Background ::: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to lack of the gold standard of negative examples, miRNA-targeting site context specific relevant features and efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. 
In addition, these algorithms fail to obtain considerable combination of precision and recall for the target transcripts that are translationally repressed at protein level. ::: Methodology/Principal Finding ::: In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context specific features. The features are selected by using a novel feature selection technique AMOSA-SVM, that integrates the multi objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. ::: Conclusions/Significance ::: MultiMiTar is found to achieve much higher Matthew’s correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the others target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from −0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list that makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. 
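The evaluation measures quoted above (sensitivity, specificity, Matthews correlation coefficient and average class-wise accuracy) all derive from a confusion matrix; a self-contained sketch with made-up counts:

```python
import math

def confusion_metrics(tp, tn, fp, fn):
    """Sensitivity (recall), specificity, Matthews correlation
    coefficient (MCC) and average class-wise accuracy (ACA) from the
    raw counts of a binary confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    aca = (sens + spec) / 2  # average class-wise accuracy
    return sens, spec, mcc, aca

# Made-up counts for illustration only.
sens, spec, mcc, aca = confusion_metrics(tp=90, tn=80, fp=20, fn=10)
print(round(sens, 3), round(spec, 3), round(mcc, 3), round(aca, 3))
```

MCC is the stricter of these summaries because it penalizes imbalance between the error types, which is why it is often preferred when negative examples vastly outnumber positives, as in genome-wide target prediction.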
<s> BIB019 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Summary: Although small non-coding RNAs, such as microRNAs, have well-established functions in the cell, long non-coding RNAs (lncRNAs) have only recently started to emerge as abundant regulators of cell physiology, and their functions may be diverse. A small number of studies describe interactions between small and lncRNAs, with lncRNAs acting either as inhibitory decoys or as regulatory targets of microRNAs, but such interactions are still poorly explored. To facilitate the study of microRNA–lncRNA interactions, we implemented miRcode: a comprehensive searchable map of putative microRNA target sites across the complete GENCODE annotated transcriptome, including 10 419 lncRNA genes in the current version. ::: ::: Availability: http://www.mircode.org ::: ::: Contact: [email protected] ::: ::: Supplementary Information: Supplementary data are available at Bioinformatics online. <s> BIB020 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs, or miRNAs, post-transcriptionally repress the expression of protein-coding genes. The human genome encodes over 1000 miRNA genes that collectively target the majority of messenger RNAs (mRNAs). Base pairing of the so-called miRNA ‘seed’ region with mRNAs identifies many thousands of putative targets. Evaluating the strength of the resulting mRNA repression remains challenging, but is essential for a biologically informative ranking of potential miRNA targets. To address these challenges, predictors may use thermodynamic, evolutionary, probabilistic or sequence-based features. We developed an open-source software library, miRmap, which for the first time comprehensively covers all four approaches using 11 predictor features, 3 of which are novel. 
This allowed us to examine feature correlations and to compare their predictive power in an unbiased way using high-throughput experimental data from immunopurification, transcriptomics, proteomics and polysome fractionation experiments. Overall, target site accessibility appears to be the most predictive feature. Our novel feature based on PhyloP, which evaluates the significance of negative selection, is the best performing predictor in the evolutionary category. We combined all the features into an integrated model that almost doubles the predictive power of TargetScan. miRmap is freely available from http://cegg.unige.ch/mirmap. <s> BIB021 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it has been widely used by the scientific community since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis.
DIANA-microT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. <s> BIB022 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Finding microRNA targets in the coding region is difficult due to the overwhelming signal encoding the amino acid sequence. Here, we introduce an algorithm (called PACCMIT-CDS) that finds potential microRNA targets within coding sequences by searching for conserved motifs that are complementary to the microRNA seed region and also overrepresented in comparison with a background model preserving both codon usage and amino acid sequence. Precision and sensitivity of PACCMIT-CDS are evaluated using PAR-CLIP and proteomics data sets. Thanks to the properly constructed background, the new algorithm achieves a lower rate of false positives and better ranking of predictions than do currently available algorithms, which were designed to find microRNA targets within 3' UTRs. <s> BIB023 | The ease of use is affected by the use and number of parameters, scope of predictions, format of inputs and ability to predict for novel miRNAs. The prediction methods rely on parameters that can be used to control how prediction is performed, e.g. the seed size, the number of allowed guanine-uracil wobbles and mismatches, selection of mRNA regions that are searched and the cutoffs for free energy and predicted propensity score. These parameters are usually set based on experience of the designer or user of a given method, or are optimized empirically using a data set.
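The effect of such parameters can be illustrated with a minimal seed-match scanner in which the seed size and the number of allowed G:U wobbles are the tunable knobs (a didactic sketch, not the implementation of any reviewed tool):

```python
# Illustrative Watson-Crick and G:U wobble pair sets.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
WOBBLE = {("G", "U"), ("U", "G")}

def seed_matches(mirna, utr, seed_len=7, max_wobbles=0):
    """Scan a 5'->3' UTR for sites pairing with the miRNA seed
    (miRNA positions 2..seed_len+1), allowing up to `max_wobbles`
    G:U pairs; returns the 0-based start positions of the sites."""
    seed = mirna[1:1 + seed_len]  # seed starts at miRNA position 2
    hits = []
    for i in range(len(utr) - seed_len + 1):
        window = utr[i:i + seed_len][::-1]  # read the target 3'->5'
        wobbles = 0
        for m, t in zip(seed, window):
            if (m, t) in PAIRS:
                continue
            if (m, t) in WOBBLE:
                wobbles += 1
            else:
                break  # mismatch: reject this window
        else:
            if wobbles <= max_wobbles:
                hits.append(i)
    return hits

let7a = "UGAGGUAGUAGGUUGUAUAGUU"  # 5'->3'
print(seed_matches(let7a, "AAACUACCUCAAA"))                 # [3]
print(seed_matches(let7a, "AAAUUACCUCAAA", max_wobbles=1))  # [3]
```

Raising `max_wobbles` or shortening `seed_len` increases sensitivity at the cost of more spurious hits, which is exactly the trade-off the cutoff parameters of the reviewed tools control.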
Eleven methods hardcode and hide these parameters from the users, which arguably makes them easier to use but also reduces the ability of the end users to tune the models for specific needs or projects. RNAhybrid BIB001 offers eight (the most) parameters for tuning; RepTar and PITA BIB010 BIB016 have seven and five parameters, respectively; and eight predictors allow adjusting between one and four parameters. Importantly, these predictors provide default values for the parameters, so they can be seamlessly used even by layman users. A 'user-friendly' method should also allow predicting for a wide range of species and target types. Most of the early methods only allow predictions in the 3′ UTRs, except for RNAhybrid BIB001 , miRanda BIB011 , DIANA-microT-CDS BIB022 and PACMIT-CDS BIB023 , which also search coding DNA sequences (CDSs), and TargetScanS and Xie's method BIB002 , which consider open reading frames (ORFs) and promoters, respectively. As more miRNA targets were discovered beyond the 3′ UTRs BIB012 BIB018 , several newer programs (RNA22 BIB007 , STarMir BIB013 , Mtar BIB017 and miRcode BIB020 ) predict in the 3′ UTRs, CDSs and 5′ UTRs. A few methods (RNAhybrid BIB001 , MicroInspector BIB003 , MicroTar BIB008 and MIRZA [100]) do not limit the species for which they predict. They accept target genes as RNA sequences or provide stand-alone packages where users can prepare their own mRNA database. Most of the other predictors are constrained to human, mouse, fly and worm. The latter two were the first two species that were used to study miRNA targets. Seven methods consider a more restrictive set of species including human and mouse, and four of them also predict for rat or chicken. Four recent methods (HuMiTar BIB014 , TargetMiner , MultiMiTar BIB019 and miRcode BIB020 ) focus on human mRNAs, and TargetBoost BIB004 works only in worms.
Table 3 legend: We summarize availability, ease of use and impact/popularity. Missing aspects are marked in the table; $ denotes unknown, as the information was not available in the paper or on the web server. 'Availability' focuses on the type of implementation available to the end user: stand-alone (s), web server (ws), precomputed results (p) and upon request (ur), and provides the corresponding URLs. The links shown in bold font did not work. 'Ease of use' covers aspects related to the scope of a given method and the ease of running it, including the number of input parameters of the corresponding web servers, the target regions and species that can be predicted, the approximate number of predicted targets, the format in which the searched genes are provided and the ability to predict for new miRNAs. 'Target region' indicates where a given method searches for targets: untranslated region (UTR), coding DNA segment (CDS) and open reading frame (ORF). The covered species are chicken (c), drosophila (d), chimpanzee (e), dog (g), human (h), mouse (m), nematode (n), opossum (o), rat (r), cow (w), thale cress (t), zebra fish (z) and vertebrate (V). The estimated count of predicted targets per miRNA per gene, or per miRNA only (for predictors that do not allow inputting a target gene), which is denoted by *, is given in the 'Number of targets' column; counts were estimated based on the corresponding papers or by testing the web servers. The possible formats of the input genes are by name, by sequence or by either name or sequence; 'none' denotes that searching particular genes is not allowed. 'New miRNA' shows whether a given method allows predicting new miRNAs; methods that allow inputting miRNA sequences can be used to predict new miRNAs and are marked as such. 'Impact/popularity' is assessed using the number of times a given method was highlighted and considered in the 15 review papers listed in Supplementary Table S2.
Next, we analyze the format of the inputs. The target genes can be specified by the name or identifier, by the mRNA sequence, or are preloaded so that the user is not allowed to enter them. Entering the name (e.g.
GenBank Accession, NCBI gene ID and/or name) is arguably convenient, but it also limits the prediction to the mRNAs that are available in the considered reference database(s). Allowing the user to provide the mRNA sequence alleviates this drawback. Six predictors (MicroInspector BIB003 , STarMir BIB013 , PITA BIB010 , MultiMiTar BIB019 , miREE and miRmap BIB021 ) accept either the name or the sequence, while 3 and 11 programs accept only sequences or names, respectively. The miRNAs can be inputted in two formats: by name and/or by sequence. Again, although it may be convenient to specify miRNAs by their names, this is a rather substantial drawback, as it does not allow predicting for novel miRNAs that are nowadays discovered at a rapid pace. Six methods that offer web servers (TargetScan , DIANA-microT BIB022 , MicroInspector BIB003 , PITA BIB010 , miREE and miRmap BIB021 ) accept either the miRNA name or the sequence, while 3 and 10 only take the sequences or the names, respectively. Table 3 reveals that 12 methods can predict targets of novel miRNAs. When considering the outputs, the number of predicted targets varies widely between methods. Table 3 reports that while most methods predict a few targets per gene per miRNA, some predict hundreds, and miRanda BIB011 generates hundreds of thousands of targets per miRNA. One way to measure the impact/popularity of a given method is to analyze its inclusion in prior reviews. Considering the 16 reviews ( Supplementary Table S2 ), 29 of the 38 methods were included in at least one review and 11 in five or more. Moreover, five reviews highlighted/recommended certain predictors. TargetScan and TargetScanS were recommended in three and four reviews, respectively; DIANA-microT BIB015 and RNAhybrid BIB001 twice, and the EMBL method BIB005 , PicTar BIB006 , EIMMo BIB009 and PITA BIB013 once. We also calculated the average citation counts per year since a given predictor was proposed, using the Web of Knowledge.
Table 3 reveals that 21 of the 38 methods receive on average >10 citations per year and all methods published before 2008 receive at least five citations per year. Three early methods receive >100 citations every year. TargetScan/TargetScanS is on the extreme end (400+ citations per year), and this could be attributed to its popularity and convenient availability, the fact that empirical studies often compare to this predictor, and because it is widely used in practical applications. |
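To make the tunable matching parameters discussed earlier (seed size, allowed guanine-uracil wobbles and mismatches) concrete, the following is a minimal Python sketch. The function name, parameter names and the convention that the site is given 3′→5′ (so that position i of the seed pairs with position i of the site) are illustrative assumptions, not taken from any of the reviewed tools.

```python
# Illustrative sketch of tunable seed-matching parameters; not the
# implementation of any reviewed predictor.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}
WOBBLE = {("G", "U"), ("U", "G")}  # tolerated G:U wobble pairs

def seed_matches(mirna, site, seed_start=2, seed_len=7,
                 max_wobbles=1, max_mismatches=0):
    """Check whether a candidate site pairs with the miRNA seed.

    mirna: 5'->3' miRNA sequence; site: the site read 3'->5' so that
    position i of the seed pairs with position i of the site.
    """
    seed = mirna[seed_start - 1: seed_start - 1 + seed_len]
    if len(site) < len(seed):
        return False
    wobbles = mismatches = 0
    for m, t in zip(seed, site):
        if COMPLEMENT[m] == t:
            continue                  # Watson-Crick pair
        elif (m, t) in WOBBLE:
            wobbles += 1              # G:U wobble
        else:
            mismatches += 1
    return wobbles <= max_wobbles and mismatches <= max_mismatches
```

Tightening `max_wobbles` and `max_mismatches` to 0 corresponds to the strict canonical-seed setting, while raising them mimics the permissive settings exposed by tools such as PITA or RNAhybrid.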
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. 
<s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> Background: MicroRNAs have emerged as important regulatory genes in a variety of cellular processes and, in recent years, hundreds of such genes have been discovered in animals. In contrast, functional annotations are available only for a very small fraction of these miRNAs, and even in these cases only partially. Results: We developed a general Bayesian method for the inference of miRNA target sites, in which, for each miRNA, we explicitly model the evolution of orthologous target sites in a set of related species. Using this method we predict target sites for all known miRNAs in flies, worms, fish, and mammals. By comparing our predictions in fly with a reference set of experimentally tested miRNA-mRNA interactions we show that our general method performs at least as well as the most accurate methods available to date, including ones specifically tailored for target prediction in fly. An important novel feature of our model is that it explicitly infers the phylogenetic distribution of functional target sites, independently for each miRNA. This allows us to infer species-specific and clade-specific miRNA targeting. We also show that, in long human 3' UTRs, miRNA target sites occur preferentially near the start and near the end of the 3' UTR. To characterize miRNA function beyond the predicted lists of targets we further present a method to infer significant associations between the sets of targets predicted for individual miRNAs and specific biochemical pathways, in particular those of the KEGG pathway database.
We show that this approach retrieves several known functional miRNA-mRNA associations, and predicts novel functions for known miRNAs in cell growth and in development. Conclusions: We have presented a Bayesian target prediction algorithm without any tunable parameters, that can be applied to sequences from any clade of species. The algorithm automatically infers the phylogenetic distribution of functional sites for each miRNA, and assigns a posterior probability to each putative target site. The results presented here indicate that our general method achieves very good performance in predicting miRNA target sites, providing at the same time insights into the evolution of target sites for individual miRNAs. Moreover, by combining our predictions with pathway analysis, we propose functions of specific miRNAs in nervous system development, inter-cellular communication and cell growth. The complete target site predictions as well as the miRNA/pathway associations are accessible on the ElMMo web server. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.
<s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> MicroRNAs, or miRNAs, post-transcriptionally repress the expression of protein-coding genes. The human genome encodes over 1000 miRNA genes that collectively target the majority of messenger RNAs (mRNAs). Base pairing of the so-called miRNA ‘seed’ region with mRNAs identifies many thousands of putative targets. Evaluating the strength of the resulting mRNA repression remains challenging, but is essential for a biologically informative ranking of potential miRNA targets. To address these challenges, predictors may use thermodynamic, evolutionary, probabilistic or sequence-based features. We developed an open-source software library, miRmap, which for the first time comprehensively covers all four approaches using 11 predictor features, 3 of which are novel. This allowed us to examine feature correlations and to compare their predictive power in an unbiased way using high-throughput experimental data from immunopurification, transcriptomics, proteomics and polysome fractionation experiments. Overall, target site accessibility appears to be the most predictive feature. Our novel feature based on PhyloP, which evaluates the significance of negative selection, is the best performing predictor in the evolutionary category. We combined all the features into an integrated model that almost doubles the predictive power of TargetScan. miRmap is freely available from http://cegg.unige.ch/mirmap. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. 
DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it is being widely used by the scientific community, since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned, to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANA-microT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. <s> BIB005 | We empirically evaluate seven representative target sequence-based predictors, i.e. methods that predict targets from miRNA and mRNA sequences, which are conveniently available to the end users, that predict for human and mouse, and which provide a sufficiently rich set of outputs. The selection criteria are discussed in the 'Materials and Methods' and Supplementary Table S6 . They include older (PicTar 2005 BIB001 ) and newer (TargetScan 6.2 , DIANA-microT-CDS BIB005 , miRanda 2010 BIB003 , EIMMo3 BIB002 , mirTarget2 v4 and miRmap v1.1 BIB004 ) approaches that use a variety of types of predictive models.
The predictions, which were collected using their web servers or precomputed predictions, consist of binding targets (mRNA sequences and/or positions of the binding site on mRNA) and the corresponding propensities (real-valued scores that quantify the probability of the miRNA:target interaction). Table 4 and Supplementary Table S7 summarize results of the assessment at the gene level (to predict mRNAs that interact with a given miRNA) on the TEST_gene data set and the duplex level (to predict whether a given fragment on mRNA interacts with a given miRNA) on the TEST_duplex data set. A given miRNA:target pair was predicted as functional if the target was predicted using the corresponding miRNA; the remaining targets were assumed to be predicted as nonfunctional and the corresponding propensity was set to 0. When assessing the gene-level predictions, we scored a given gene using the sum of propensities among all its predicted target sites for a given miRNA. Because these seven methods were initially published before 2012, we use experimentally validated miRNA targets that were published after 2012 to perform the empirical assessment. This limits a bias caused by a potential overlap between our benchmark data and data used to develop a given method. Considering the predictions of the miRNA:mRNA duplexes, TargetScan and DIANA-microT secure the highest AUC values of 0.674 and 0.673, respectively. Moreover, DIANA-microT has the highest MCC, which improves over the second best TargetScan by 0.073 [relative improvement of (0.273-0.200)/0.200*100% = 36.5%]. TargetScan offers the highest sensitivity, i.e. it correctly predicts the largest fraction of the functional duplexes. On the other hand, PicTar has the highest specificity, i.e. it correctly predicts the largest number of the nonfunctional duplexes. This means that functional targets predicted by PicTar are likely to be functional. DIANA-microT offers the highest SNR of correct to incorrect functional predictions (SNR+).
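The gene-level scoring rule described above (sum of propensities over all predicted sites of a gene, with unpredicted miRNA:gene pairs assigned a propensity of 0) can be sketched as follows; the data structures and function name are hypothetical, chosen only to illustrate the aggregation step.

```python
from collections import defaultdict

def gene_scores(site_predictions, all_pairs):
    """Aggregate duplex-level propensities into gene-level scores.

    site_predictions: iterable of (mirna, gene, propensity) tuples, one per
    predicted binding site; all_pairs: every (mirna, gene) pair under
    evaluation. Pairs without any predicted site receive a score of 0,
    i.e. they are treated as predicted nonfunctional.
    """
    scores = defaultdict(float)
    for mirna, gene, propensity in site_predictions:
        scores[(mirna, gene)] += propensity  # sum over all sites in the gene
    return {pair: scores.get(pair, 0.0) for pair in all_pairs}
```

A gene is then ranked against other genes by this summed score when computing the gene-level AUC.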
TargetScan has the highest SNR− (SNR for the nonfunctional predictions), relatively good SNR+ and very good PNR (ratio of the number of predicted to native functional duplexes). The PNR value of TargetScan reveals that it only slightly underpredicts, by 3.8%, the number of functional duplexes. The other methods, except for miRmap and EIMMo, underpredict the functional duplexes by a large margin. We illustrate the relation between predictive quality (SNR values) and the outputted propensities binned into 10 intervals in Supplementary Figure S3A . The number of predicted duplexes and their SNR values in each interval are denoted by the size and color of the bubbles (dark blue for accurate predictions), respectively. Alternating red and blue bubbles for a given predictor indicate that values of its propensity do not correlate with the underlying predictive quality. All methods have blue bubbles for propensity of 0, which means that they predict the nonfunctional duplexes well. However, predicted functional targets (propensity > 0) are often inaccurate (red bubbles), particularly for lower values of propensity. DIANA-microT predicts well when its propensity > 0.7, and miRmap and TargetScan when > 0.4 and 0.8, respectively. Analysis of statistical significance reveals that the differences in the AUC values (results above diagonal in Supplementary Table S7 ) are not statistically significant between TargetScan, DIANA-microT and miRmap. However, these three predictors are significantly better than the other four methods (P-value ≤ 0.001). Table 5 analyzes the anticipated predictive performance at the duplex level based on information that is available before the prediction is performed, including the nucleotide composition of the seed region and the overall size of the input miRNA sequences. The hints summarized in this table could guide selection of a predictor based on the miRNA sequence.
Most methods, especially TargetScan, DIANA-microT and miRmap, predict well for medium-sized (22 nucleotides long) miRNAs. The predictions for longer miRNAs are generally less accurate. Considering the nucleotide content in the seed region, the same three methods provide high-quality predictions for miRNAs when the seeds have 2 adenines or 2 guanines, and < 2 cytosines. DIANA-microT also predicts well for < 2 adenines and > 2 uracils, and miRmap for < 2 adenines. Overall, we recommend TargetScan, DIANA-microT and miRmap because their AUCs > 0.7 for specific types of miRNAs. The overall prediction quality is higher and the ranking of the methods is slightly different for the predictions on the TEST_gene data set when compared with the TEST_duplex data set ( Table 4 ). TargetScan secures the highest AUC, while EIMMo moves up to the second place and provides the highest MCC. TargetScan improves in AUC over the second best EIMMo by 0.023 (relative improvement of 3.2%) and over miRmap by 0.043 (relative improvement of 4.8%). miRmap offers the highest sensitivity and TargetScan provides arguably the best balance between sensitivity and specificity (both scores are high and similar). MirTarget2 is the most conservative method given its highest specificity, precision and SNR+, i.e. it predicts only a few functional targets but with a high success rate. The PNR values reveal that TargetScan predicts exactly the right number of functional genes and EIMMo only 5.3% too few. Supplementary Figure S3B shows the relation between predictive quality (SNR values) and the propensities generated by the prediction methods. Interestingly, predictions associated with higher propensities are more likely to be accurate, as evidenced by the presence of (dark) blue bubbles.
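The compositional hints above (and in Table 5) depend only on the miRNA sequence, so they can be computed before choosing a predictor. The sketch below bins a miRNA by length and by per-nucleotide counts in the seed; the bin boundaries (short <22 nt, medium =22 nt, long >22 nt; counts <2, =2, >2) are taken from the Table 5 legend, while the function name and seed coordinates (positions 2-8) are illustrative assumptions.

```python
def mirna_profile(mirna, seed_start=2, seed_len=7):
    """Bin a miRNA by the compositional features used in Table 5:
    overall length and per-nucleotide counts in the seed region."""
    def size_bin(n):
        return "short" if n < 22 else "medium" if n == 22 else "long"

    def count_bin(c):
        return "low" if c < 2 else "medium" if c == 2 else "high"

    seed = mirna[seed_start - 1: seed_start - 1 + seed_len]
    profile = {"size": size_bin(len(mirna))}
    for nt in "ACGU":
        profile[nt] = count_bin(seed.count(nt))  # count in seed only
    return profile
```

For example, the 22 nt miRNA let-7a falls into the "medium" size bin, for which TargetScan, DIANA-microT and miRmap are reported to perform well.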
As a highlight, EIMMo predicts well in every propensity bin, and the targets predicted by TargetScan and miRanda with propensities >0.3 and 0.4, respectively, are characterized by high SNR values. Analysis of the statistical significance of differences in the AUC values (results below diagonal in Supplementary Table S7 ) reveals that TargetScan's results are significantly better (P-value ≤ 0.001) compared with the other predictors. AUCs of EIMMo and miRmap are not significantly different and significantly higher than AUCs of the other four methods (P-value ≤ 0.001). We also analyze the relation between predictive performance at the gene level and the number of target sites predicted in a given gene (Supplementary Figure S3C ). Most methods, except for MirTarget2 and miRanda, can predict three or more target sites per gene for a given miRNA. We observe that predictive quality for genes for which at least two sites are predicted is better (bubbles have a darker blue color), particularly for EIMMo, TargetScan and miRanda. This suggests that for these predictors a higher number of predicted sites could be used as a marker of higher predictive quality.
Table 5 legend: The compositional characteristics include the size of the miRNA and the count of each nucleotide type in the seed region. The sizes are divided into short (<22 nt), medium (=22 nt) and long (>22 nt). The count of nucleotides in the seeds of miRNAs is grouped into low (<2 nt), medium (=2 nt) and high (>2 nt). The AUC values obtained by a given predictor are coded as: '−' for [0, 0.55], '=' for (0.55, 0.6], '+' for (0.6, 0.7] and '++' for (0.7, 1.0].
Table 4 legend: We evaluate seven representative target predictors. We measure area under the ROC curve (AUC), Matthews correlation coefficient (MCC), sensitivity (Sen.), specificity (Spe.), precision (Prec.), signal-to-noise ratio for predicted functional (SNR+) and predicted nonfunctional targets (SNR−) and predicted-to-native functional target ratio (PNR). Methods are sorted in the descending order by their AUC values. The best value of each measurement across all the predictors is given in bold font.
Predictions at the transcriptome/proteome scale on the TEST_geo and TEST_psilac data sets are evaluated at different thresholds that define the fraction of the most repressed and most overexpressed genes that are annotated as functional and nonfunctional, respectively (Figure 1 ). AUCs are generally higher at the gene level (TEST_geo data set) than at the protein level (TEST_psilac data set). Considering the three gene-level data sets, the ranking of the methods on the TEST_psilac data set is the same as on the TEST_gene data set, and slightly different on the TEST_geo data set. Based on the microarray data, miRmap achieves the best AUC, which is comparable with the AUCs of TargetScan and EIMMo. These three predictors have AUCs > 0.7 when evaluated on the top 4% of genes with the largest expression changes; using this threshold, on average each miRNA targets 176 mRNAs. We note miRmap was originally trained and tested on two of the three microarrays from the TEST_geo data set, so its predictive quality on this data set could be overestimated. Considering the pSILAC data, only TargetScan provides AUC > 0.7 when using the top 1% of proteins for which expression levels change most; this threshold results in an annotation where on average each miRNA regulates 39 proteins. Overall, the AUC values decrease when more ambiguous genes (genes for which expression changes are weaker) are included, i.e. when the fraction of the included repressed and overexpressed genes is higher. An analysis based on an alternative quality index (Supplementary Figure S4A and B) leads to similar conclusions. TargetScan, EIMMo and miRmap secure the highest values of this index. |
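The measures used throughout this assessment can be computed directly from confusion-matrix counts. The SNR+ = TP/FP, SNR− = TN/FN and PNR = (TP+FP)/(TP+FN) formulas below are one plausible formalization of the verbal definitions given in the text ("SNR of correct to incorrect functional predictions", "ratio of predicted to native functional duplexes"); MCC, sensitivity, specificity and precision are standard.

```python
import math

def metrics(tp, fp, tn, fn):
    """Confusion-matrix measures used in the assessment (cf. Table 4).
    Assumes fp > 0 and fn > 0 for the SNR ratios."""
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "MCC": (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0,
        "SNR+": tp / fp,              # correct vs incorrect functional predictions
        "SNR-": tn / fn,              # correct vs incorrect nonfunctional predictions
        "PNR": (tp + fp) / (tp + fn), # predicted vs native functional targets
    }
```

Under this reading, a PNR below 1 corresponds to underprediction of functional targets, e.g. TargetScan's reported 3.8% underprediction at the duplex level corresponds to PNR ≈ 0.962.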
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> Computational microRNA (miRNA) target prediction is a field in flux. Here we present a guide through five widely used mammalian target prediction programs. We include an analysis of the performance of these individual programs and of various combinations of these programs. For this analysis we compiled several benchmark data sets of experimentally supported miRNA-target gene interactions. Based on the results, we provide a discussion on the status of target prediction and also suggest a stepwise approach toward predicting and selecting miRNA targets for experimental testing. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> In recent years, microRNAs (miRNAs) have emerged as a major class of regulatory genes, present in most metazoans and important for a diverse range of biological functions. Because experimental identification of miRNA targets is difficult, there has been an explosion of computational target predictions. Although the initial round of predictions resulted in very diverse results, subsequent computational and experimental analyses suggested that at least a certain class of conserved miRNA targets can be confidently predicted and that this class of targets is large, covering, for example, at least 30% of all human genes when considering about 60 conserved vertebrate miRNA gene families. Most recent approaches have also shown that there are correlations between domains of miRNA expression and mRNA levels of their targets. Our understanding of miRNA function is still extremely limited, but it may be that by integrating mRNA and miRNA sequence and expression data with other comparative genomic data, we will be able to gain global and yet specific insights into the function and evolution of a broad layer of post-transcriptional control. 
<s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> MicroRNAs (miRNAs) are a class of short endogenously expressed RNA molecules that regulate gene expression by binding directly to the messenger RNA of protein coding genes. They have been found to confer a novel layer of genetic regulation in a wide range of biological processes. Computational miRNA target prediction remains one of the key means used to decipher the role of miRNAs in development and disease. Here we introduce the basic idea behind the experimental identification of miRNA targets and present some of the most widely used computational miRNA target identification programs. The review includes an assessment of the prediction quality of these programs and their combinations. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> Background: microRNAs (miRNAs) are tiny endogenous RNAs that have been discovered in animals and plants, and direct the post-transcriptional regulation of target mRNAs for degradation or translational repression via binding to the 3'UTRs and the coding exons. To gain insight into the biological role of miRNAs, it is essential to identify the full repertoire of mRNA targets (target genes). A number of computer programs have been developed for miRNA-target prediction. These programs essentially focus on potential binding sites in 3'UTRs, which are recognized by miRNAs according to specific base-pairing rules. Results: Here, we introduce a novel method for miRNA-target prediction that is entirely independent of existing approaches. The method is based on the hypothesis that transcription of a miRNA and its target genes tend to be co-regulated by common transcription factors.
This hypothesis predicts the frequent occurrence of common cis-elements between promoters of a miRNA and its target genes. That is, our proposed method first identifies putative cis-elements in a promoter of a given miRNA, and then identifies genes that contain common putative cis-elements in their promoters. In this paper, we show that a significant number of common cis-elements occur in ~28% of experimentally supported human miRNA-target data. Moreover, we show that the prediction of human miRNA-targets based on our method is statistically significant. Further, we discuss the random incidence of common cis-elements, their consensus sequences, and the advantages and disadvantages of our method. Conclusions: This is the first report indicating prevalence of transcriptional regulation of a miRNA and its target genes by common transcription factors and the predictive ability of miRNA-targets based on this property. <s> BIB004 | We reviewed 38 miRNA target predictors from all significant perspectives including their prediction models, availability, impact, user friendliness and protocols and measures that are used to evaluate their predictive performance. We found that standardized evaluation procedures are urgently needed because currently predictors are evaluated using different measures, different test protocols and using vastly different data sets. This hinders comparison among these methods and appropriate selection by the end users. To this end, we empirically and systematically compared seven representative predictors on four benchmark data sets, considering prediction of miRNA:mRNA duplexes and target genes and proteins. We found that although certain methods, like TargetScan and miRmap, offer high overall predictive quality, there is no universally best predictor. For instance, PicTar and MirTarget2 provide predictions with high specificity and a low number of FPs (incorrectly predicted functional genes/duplexes).
Thus, these two methods are suitable for users that would like to obtain a small subset of accurately predicted functional duplexes or genes. EIMMo predicts well at the gene level. We observe that the count of functional target sites or genes predicted by TargetScan is the closest to the native count (PNR value close to 1), and thus, this method should be used to accurately estimate the number of miRNA targets. We found that genes predicted as functional based on a higher number of sites are more likely to be accurate, particularly for the EIMMo and TargetScan predictors. Finally, the benchmark data sets and empirical results that we provide are useful to develop and comparatively assess future prediction methods. We observe that predictions at the duplex level are characterized by lower predictive quality than the predictions of target genes. This agrees with the intuition that predicting target sites should be more difficult than predicting target genes, which offer more input information (longer mRNA sequence). Moreover, our estimates of the predictive performance are often lower than the estimates from the original publications. Possible reasons are as follows: (i) we use experimentally validated data, which is likely more challenging than the artificial data that were used to assess previous predictors; (ii) the nonfunctional validated duplexes that we use have relatively many Watson-Crick (WC) base pairs in the seed regions (83% have at least six pairs, see Supplementary Table S8 ). These sites were likely hypothesized to be functional, refuted and thus annotated as nonfunctional. This is why they have such seeds, which in turn makes them more challenging to separate from the functional duplexes when compared with a more 'random' site; and (iii) miRanda, PicTar, EIMMo and MirTarget2 provide only precomputed predictions, which may not include the most up-to-date miRNA and transcript databases.
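Point (ii) above can be illustrated by counting Watson-Crick pairs between the miRNA seed and a candidate site: a nonfunctional duplex with six or more WC pairs in the seed is a "hard negative" that looks like a canonical target. This sketch assumes the site is given 3′→5′ and position-aligned with seed positions 2-8; the function names and the 6-pair cutoff (from the 83% statistic quoted above) are illustrative.

```python
# Watson-Crick pairs only; G:U wobbles are deliberately excluded here.
WC = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def wc_pairs_in_seed(mirna, site, seed_start=2, seed_len=7):
    """Count Watson-Crick pairs between the miRNA seed (positions 2-8 by
    default) and a target site given 3'->5', position-aligned with the seed."""
    seed = mirna[seed_start - 1: seed_start - 1 + seed_len]
    return sum((m, t) in WC for m, t in zip(seed, site))

def seed_like_nonfunctional(mirna, site, min_wc=6):
    """Flag nonfunctional duplexes that nevertheless carry a seed-like
    match (>= min_wc WC pairs), i.e. the hard negatives discussed above."""
    return wc_pairs_in_seed(mirna, site) >= min_wc
```

Filtering a negative set with such a flag shows why these validated nonfunctional duplexes are harder to separate from functional ones than randomly generated negatives.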
Unfortunately, we could not compare results with the previous reviews BIB001 BIB003 BIB002 because they did not consider a balanced selection of measurements (e.g. only provided sensitivity and precision, which ignore TN), and such one-sided evaluation would not be meaningful. Our review offers in-depth insights that could be used by the end users to select prediction methods based on their predictive performance (Table 4) and their input miRNAs (Table 5). We also provide several practical observations that consider specifics of applications of interest. Arguably, the commonly considered characteristics of the applications of the miRNA target predictors include the need to consider novel miRNAs and to focus on certain regions in the mRNA, to predict a more complete or smaller and more accurate list of targets, to predict for a large set of miRNAs, to tweak desired parameters of the miRNA-mRNA interaction and to generate propensities for the predicted interactions. We address these characteristics as follows: • Only some methods can predict targets for novel miRNAs (see 'New miRNA' column in Table 3). • Applications that focus on particular regions (e.g. 5′ UTR, CDS, promoters) should use predictors that were designed to consider these regions (see 'target region' column in Table 3). • Some methods generate few and potentially more accurate targets, while some predict a larger and more complete set of targets that may include more FP (see 'Number of targets' column in Table 3). Users should choose an appropriate method depending on whether they look for a more complete or a more accurate set of targets. • When predicting for a large number of miRNAs, the downloadable precomputed results or methods that provide APIs should be used (see 'batch search' in the 'Note' column in Supplementary Table S6). • The end users should apply predictors with a tunable seed type parameter, such as PITA, when searching for targets that use a particular seed type.
Also, when aiming to find targets with a low number of WC pairs in the seed region, only some predictors that consider such targets, like miREE, can be used. • When predicting the target site, the methods that can only predict target genes cannot be used (see 'Target site tracking' column in Supplementary Table S6). • Only some predictors provide predictions with the associated propensities of the interaction; many methods only provide binary (functional versus nonfunctional) predictions (see 'Score' column in Supplementary Table S6). Although computational miRNA target predictors are undoubtedly useful and their predictive performance is relatively good, we suggest several areas where further improvements are possible: • Current methods use many different predictive models. In contrast to other areas of bioinformatics, the empirical (knowledge-based) models do not outperform the heuristic models. This could be due to the low quantity of training data, use of artificial training data (randomly generated nonfunctional targets) and the unbalanced nature of the data (low number of nonfunctional targets). Thus, one of the future aims should be to improve the quality and quantity of the training data. • Further improvements in predictive quality could be attained by finding and using not yet known characteristics of miRNA:target interactions. For instance, recently cis-elements were used to connect primary miRNAs to their potential targets BIB004 , as were Gene Ontology annotations and protein-protein interaction networks. • Computational miRNA target prediction from sequence is essential to characterize miRNA functions and to develop miRNA-based therapeutics. • We comprehensively summarize 38 miRNA target predictors and empirically evaluate seven methods on four benchmark data sets that annotate targets at the binding site, gene and protein levels. • Current miRNA target prediction methods substantially vary in their predictive methodology, usability and predictive performance.
• We offer insights for the end users to select appropriate methods according to their specific application and we discuss advantages and disadvantages of the considered predictors. • New miRNA target predictors are needed, particularly focusing on the high-throughput predictions, improved predictive performance and provision of an expanded range of predicted outputs. Overview and assessment of miRNA target predictions in animals |
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Due to the introduction of newer technologies like Long Term Evolution (LTE) in already deployed cellular access networks, changes in the energy-efficiency of networks consisting predominantly of macro base station sites (BSSs) can be expected. An investigation has been performed for two prominent energy metrics of cellular networks: Power per Unit Area (PUA) and Energy per bit and Unit Area (EbUA). Analytical relations have been developed that express the influence of parameters such as BSs' transmit (Tx) powers, inter-site distances (ISDs), and a number of heterogeneous macro or LTE micro BSSs on the PUA and EbUA. It has been shown that appropriate selection of these parameters can ensure significant energy savings. Besides the possibility of finding an optimal trade-off among ISDs and Tx powers of macro BSSs, which will minimize PUA and maximize EbUA, adding micro LTE BSs to such heterogeneous networks contributes to the improvement of network energy efficiency. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> In the present scenario, energy efficiency has become a matter of prime importance for wireless networks. To meet the demands of increased capacity, improved data rates, and better quality of service in next-generation networks, there is a need to adopt energy-efficient architectures. Along with these requirements, it is also our social responsibility to reduce the carbon footprint by reducing the power consumption in a wireless network. Hence, green communication is an urgent need. In this paper, we have surveyed various techniques for the power optimization of the upcoming 5G networks. The primary focus is on the use of relays and small cells to improve the energy efficiency of the network. We have discussed the various scenarios of relaying for the next-generation networks.
Along with this, the importance of simultaneous wireless power and information transfer, massive multiple input multiple output, and millimeter waves has been analyzed for 5G networks. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> The emerging 5G wireless networks will pose extreme requirements such as high throughput and low latency. Caching as a promising technology can effectively decrease latency and provide customized services based on group user behaviour (GUB). In this paper, we carry out the energy efficiency analysis in the cache-enabled hyper cellular networks (HCNs), where the macro cells and small cells (SCs) are deployed heterogeneously with the control and user plane (C/U) split. Benefiting from the assistance of macro cells, a novel access scheme is proposed according to both user interest and fairness of service, where the SCs can turn into semi-sleep mode. Expressions of coverage probability, throughput and energy efficiency (EE) are derived analytically as the functions of key parameters, including the cache ability, search radius and backhaul limitation. Numerical results show that the proposed scheme in HCNs can increase the network coverage probability by more than 200% compared with the single-tier networks. The network EE can be improved by 54% over the nearest access scheme, with a larger search radius and higher SC cache capacity under lower traffic load. Our performance study provides insights into the efficient use of cache in the 5G software defined networking (SDN). <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in 5G cellular networks.
While massive MIMO will reduce the transmission power at the expense of higher computational cost, the question remains as to which (computation or transmission power) is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this article is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50 percent of the energy is consumed by the computation power at 5G small cell BSs. Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Although the promising 5G cell network technology has increased the transmitting rate greatly, it has also brought some challenges. The energy efficiency has become an important topic in 5G networks. In this paper, the energy efficiency of small cell networks is analyzed, and the existing objective functions are classified in order to minimize the energy consumption, and to maximize the energy efficiency, harvested energy, and energy-aware transmission. Commonly used metrics were analyzed on equipment, base station, and network levels, respectively. Moreover, the methods for energy efficiency improvement were introduced according to above-mentioned metrics. Afterward, the relationships between energy efficiency, spectrum efficiency, and space efficiency were discussed. In order to improve efficiency on equipment, base station, and network levels, the energy and spectrum market is proposed and guidelines for the future research on metrics, methods, and market are presented. 
The proposed market was verified by simulations, and the simulation results have shown that the proposed market improves energy efficiency effectively. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches, namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator-specific SLA requirements on the global SE and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> To avoid the latency of the RRC state transition procedure, legacy networks keep the UE in the RRC connected state for a pre-defined time duration even without any traffic arrival. However, this consumes the UE battery rapidly because the UE must monitor the PDCCH and send CQI feedback periodically. In this paper, we design the RRC connection control to enhance energy efficiency with moderate control signaling overhead.
In the RRC INACTIVE state, newly introduced in NR, both the network and the UE save the UE context, including bearer configuration and security, even after the UE is released from the network. Owing to the saved UE context, the RRC state transition from RRC INACTIVE to RRC CONNECTED requires fewer CN signalling messages. Thus, the network can release the UE to RRC INACTIVE more aggressively with a shorter timer. Furthermore, we propose connectionless data transmission in RRC INACTIVE without an RRC state transition to RRC CONNECTED. In our performance analysis, UE energy consumption is reduced by 50% for the modem alone and by 18% for the total device including the display. Furthermore, when small data or background (keep-alive) traffic is transferred in RRC INACTIVE, the energy efficiency can be as much as doubled. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Using a network of cache-enabled small cells, traffic during peak hours can be reduced by proactively fetching the content that is most likely to be requested. In this paper, we aim to explore the impact of proactive caching on an important metric for future generation networks, namely, energy efficiency (EE). We argue that exploiting the spatial repartitions of users in addition to the correlation in their content popularity profiles can result in considerable improvement of the achievable EE. In this paper, the optimization of EE is decoupled into two related subproblems. The first one addresses the issue of content popularity modeling. While most existing works assume similar popularity profiles for all users, we consider an alternative framework in which users are clustered according to their popularity profiles. In order to showcase the utility of the proposed clustering, we use a statistical model selection criterion, namely, the Akaike information criterion.
Using stochastic geometry, we derive a closed-form expression of the achievable EE and we find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartitions of users. After considering a snapshot of the network, we formulate a combinatorial problem that optimizes content placement in order to minimize the transmission power. Numerical results show that the clustering scheme considerably improves the cache hit probability and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. <s> BIB008 | Advances in telecommunication systems around the world have always been pushing the wireless infrastructure to be more resilient and scalable. Ever-growing data rates and a demand for the highest quality of service have been strong constraints when energy conservation needs to be considered. Data rates as high as 1 Gbps have been foreseen with the advent of 5G. In addition, with an explosive number of heterogeneous devices coming online, including sensors for home security, tablets, and wearable health monitors, the computational power of base stations must increase. An estimated 50% increase in the computing power of baseband units has been predicted to handle this traffic burst BIB004 . Thus, the focus on energy-efficiency needs to include optimization of computational complexity in addition to optimization of transmission power. An estimated 75% of the Information and Communications Technology (ICT) industry is expected to be wireless by 2020, and today 5% of the world's carbon footprint comes from this industry alone. A consensus between academia and industry dictates that the foreseen 1000× capacity gain must be achieved with either the present energy consumption or lower .
Thanks to energy-efficiency efforts worldwide, energy efficiency in the 5G realm, expressed in bits/joule, has been considered an important design parameter. In 4th generation (4G) networks, the concept of small cells was introduced to increase coverage and capacity. Therefore, BIB001 conducted an analysis of energy consumption per unit area for a heterogeneous deployment of cells in fourth generation networks. With 5G, small cells are inevitable in deployments due to their advantage of improved traffic handling within a smaller area as well as the shorter cell ranges that result from the use of higher frequencies. Yet, the increasing number of base stations translates into more energy consumption, although the increase in consumption will not be linear. Small cells, or in other words densification, call for sophisticated management of resources. Most recently, intelligent resource allocation and control techniques utilizing machine learning algorithms have been suggested to help next generation radios in their autonomous reconfiguration for improving the data rates, energy efficiency and interference mitigation. Overall, the emerging sophistication on both the User Equipment (UE) and network sides has increased energy consumption, and thus objective functions have been devised to maximize the energy efficiency, harvested energy and energy-aware transmission BIB005 . Many of the existing energy efficiency improvement techniques include the use of green energy sources for base stations, modifying the coverage area of a base station depending upon the load level, putting lightly loaded base stations to sleep and load balancing by handing over the UEs to the macro base station. A survey on these technologies for the 5G Radio Access Network (RAN) can be found in BIB002 .
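The bits/joule notion above can be made concrete with a toy model: energy efficiency is the achievable rate (Shannon capacity as a function of SINR) divided by the total consumed power. The linear power model and all numeric constants below are illustrative assumptions for the sketch, not values taken from the cited works.

```python
# Toy bits/joule energy-efficiency model:
#   EE = B * log2(1 + SINR) / (P0 + dp * P_tx)
# where P0 is load-independent base-station power and dp is the power
# amplifier slope. All parameter values are illustrative assumptions.
from math import log2

def energy_efficiency(p_tx_w, bandwidth_hz=20e6, p0_w=130.0, dp=4.7,
                      noise_interf_w=1e-10, gain=1e-7):
    sinr = p_tx_w * gain / noise_interf_w
    rate_bps = bandwidth_hz * log2(1 + sinr)   # Shannon capacity
    p_total_w = p0_w + dp * p_tx_w             # static + amplifier power
    return rate_bps / p_total_w                # bits per joule

# EE is not monotone in transmit power: beyond some point the extra power
# buys too little rate, so the bits/joule figure starts to fall.
for p in (1.0, 10.0, 40.0):
    print(f"P_tx={p:5.1f} W -> EE={energy_efficiency(p)/1e6:.2f} Mbit/J")
```

This non-monotone behaviour is what motivates treating transmit power as a tunable design parameter rather than simply maximizing it.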
This survey aims to contribute towards a greener and sustainable telecommunications ecosystem by reviewing and bringing together some of the latest ideas and techniques of energy conservation at the base station and network level. A high-level diagram in Figure 1 shows the areas addressed. A few of the prominent examples include the introduction of a newer Radio Resource Control (RRC) state for context signalling and cutting down on redundant state changes BIB007 . Utilization of advanced clustering and caching techniques on the RAN side has been highly appreciated for the benefits of improving the latency of getting the data requested by a group of users and possibly preventing the network from being clogged by a huge number of requests for the same content BIB008 BIB003 . A case study of commercial resource sharing among different operators bears fruitful results in terms of reduced deployment costs and good data rates with minimum interference among them BIB006 . The upcoming sections introduce the basics of energy efficiency, provide justification for the need of gauging the energy consumption and then present the most recent research works carried out for optimization at different levels of the architecture. This survey is unique in its holistic approach to energy-efficiency, covering the radio, core and computing sides of 5G. This paper also differs from the surveys in the literature BIB004 BIB001 BIB005 , as it focuses on works published in the last few years, where the majority of the studies focus on concepts specific to the new 5G standard.
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> In this paper heterogeneous wireless cellular networks based on a two-tier architecture consisting of macrocells and femtocells are considered. Methods of femtocell deployment and management are explored in order to determine their effects on the performance of wireless cellular networks. Thus, network performance parameters are described and analytically calculated for different two-tier network architectures. A specific approach is presented in the paper, where calculations of the network performance parameters are supported with some of the results obtained using an appropriate simulation tool. In such a manner, the energy efficiency of the considered two-tier network architectures is studied by introducing a number of so-called green metrics. It is clearly shown that significant energy efficiency, as well as throughput, improvements can be achieved by adopting a heterogeneous architecture for wireless cellular networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> This paper studies interoperability concerns arising from coupling energy-aware radio resource and topology management techniques which are developed to minimise the energy consumption of current and future mobile broadband systems. This paper also proposes an Energy-aware Network Management middleware that harmonises the joint operation of energy-aware radio resource and topology management schemes, enhancing the system QoS as well as energy efficiency. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Although the promising 5G cell network technology has greatly increased transmission rates, it has also brought some challenges. The energy efficiency has become an important topic in 5G networks.
In this paper, the energy efficiency of small cell networks is analyzed, and the existing objective functions are classified in order to minimize the energy consumption, and to maximize the energy efficiency, harvested energy, and energy-aware transmission. Commonly used metrics were analyzed on equipment, base station, and network levels, respectively. Moreover, the methods for energy efficiency improvement were introduced according to the above-mentioned metrics. Afterward, the relationships between energy efficiency, spectrum efficiency, and space efficiency were discussed. In order to improve efficiency on the equipment, base station, and network levels, the energy and spectrum market is proposed and guidelines for future research on metrics, methods, and market are presented. The proposed market was verified by simulations, and the simulation results have shown that the proposed market improves energy efficiency effectively. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Today many users with their smart mobile devices enjoy the benefits of broadband Internet services. This is primarily enabled by pushing computing, control, data storage and processing into the cloud. However, the cloud encounters growing limitations in providing reduced latency, high mobility, high scalability and real-time execution, which are needed to meet the computing and intelligent networking demands of the next 5G mobile and wireless network. A new paradigm called Fog Computing and Networking, or briefly Fog, has emerged to resolve these limits. Fog distributes computing, data processing, and networking services closer to the end users. It is an architecture where distributed edge and user devices collaborate with each other and with the clouds to carry out computing, control, networking, and data management tasks.
Fog applied in the 5G network can significantly improve network performance in terms of spectral and energy efficiency, enable direct device-to-device wireless communications, and support the growing trend of network function virtualization and the separation of network control intelligence from radio network hardware. This paper evaluates the quality of cloud and fog computing and networking orchestrated services in the 5G mobile and wireless network in terms of energy efficiency. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> In next-generation wireless networks, along with the overwhelming demand for high data rates and network capacity, users demand ubiquitous connectivity to the network. In order to fulfill the demand for anywhere, any time data services, network operators have to install more and more base stations, which eventually leads to high power consumption. For this, the potential solution is derived from the 5G network, which proposes a heterogeneous environment of wireless access networks. More particularly, the deployment of femto and pico cells under the umbrella of macro cell base stations (BSs). Such a networking strategy will result in high network capacity and energy efficiency, along with better network coverage. In this article, an analysis of energy efficiency has been carried out by using two-tier and three-tier network configurations. The simulation results demonstrate that a rational deployment of small cells improves the energy efficiency of the wireless network. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Energy efficiency is a major requirement for next generation mobile networks, both as an end to reduce operational expenses and to increase the systems' ecological friendliness.
Another integral part of 5G networks is the increased density of the deployment of small radius base stations, such as femtocells. Based on the design principle that demands a system to be active and transmitting only when and where it is needed, we evaluate the energy savings harvested when sleep mode techniques are enforced in dense femtocell deployments. We present our novel variations of sleep mode combined with hybrid access strategies and we estimate capacity and energy benefits. Our simulations show significant advantages in performance and energy efficiency. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> With the promise of higher data rates and the enabling of the Internet of Things (IoT), the thirst for energy efficiency in communication networks has become an important milestone in design and operation. With the emergence of 5G wireless networks and the deployment of billions of base stations and connected devices, the requirements for system design and energy efficiency management will become more pressing. In addition, in the next era of cellular, energy efficiency is the most important requirement, determined by the need to reduce the carbon footprint of communications and to extend the life of the terminal battery. Nevertheless, new challenges have emerged, especially in the backbone of the networks. Therefore, the aim of this paper is to present the potential of the 5G system to meet the increasing needs in devices and explosive capacity without causing any significant energy consumption, based on a functional split architecture, particularly for the 5G backhaul. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Cell switch-off (CSO) is an important approach to reducing energy consumption in cellular networks during off-peak periods.
CSO addresses the research question of which cells to switch off when. Whereas online CSO, based on immediate user demands and channel states, is problematic to implement and difficult to model, off-line CSO is more practical and tractable. Furthermore, it is known that regular cell layouts generally provide the best coverage and spectral efficiency, which leads us to prefer regular static (off-line) CSO. We introduce sector-based regular CSO patterns for the first time. We organize the existing and newly introduced patterns using a systematic nomenclature; studying 26 patterns in total. We compare these patterns in terms of energy efficiency and the average number of users supported, via a combination of analysis and simulation. We also compare the performance of CSO with two benchmark algorithms. We show that the average number of users can be captured by one parameter. Moreover, we find that the distribution of the number of users is close to Gaussian, with a tractable variance. Our results demonstrate that several patterns that activate only one out of three sectors are particularly beneficial; such CSO patterns have not been studied before. <s> BIB008 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Given the rising concerns on carbon emission and increasing operating expense pressure, mobile network operators and device vendors are actively driving the energy-efficient network evolution. Energy efficiency (EE) has been determined as one of the key objectives of the 5G system. To realise sustainable 5G, various new technologies have been proposed to reduce conventional energy consumption. Meanwhile, green energy sources are explored to reduce the dependence on conventional energy. 
This study surveys recent academic and industry research on the EE of the 5G system from the perspectives of radio resource management, architecture and deployment paradigm, green energy harvesting and smart grid integration. Typical green 5G enabling technologies are presented and discussed comprehensively. Moreover, the latest progress on EE in 3GPP is also investigated. Given the broad research areas, only the critical open issues and challenges are presented to inspire further investigations. Finally, the authors identify several research directions as the way forward to realise the green 5G system. <s> BIB009 | A formal relationship between energy efficiency and Signal to Interference Noise Ratio (SINR) has been presented in the literature using the bit/joule notion. Meanwhile, Reference BIB003 lays the foundation for energy efficiency in different parts of the network, including base stations and the core network. In the literature, energy saving and the use of green energy resources have been the two mainstream approaches to offer energy efficiency. Among the energy saving techniques, cell switch-off techniques have been widely exploited. For instance, in the EU FP7 ABSOLUTE project, an energy-aware middleware has been proposed that would use capacity-based thresholds for activation of the base stations BIB002 . In several other studies, data offloading has been considered as an energy-efficient approach. Furthermore, the authors in BIB009 have put together several techniques not only for reducing the energy consumption from traditional energy sources but also for surveying newer Energy Efficiency (EE) schemes in the End-to-End (E2E) system. One of the notable mentions by the authors is the implementation of a 3GPP-compliant EE manager that would be responsible for monitoring energy demands in an E2E session and for implementing the policies needed to cater to the ongoing energy demand.
In addition to energy saving approaches, simultaneous wireless energy transfer has recently been studied. Furthermore, local caching techniques have been proved to be beneficial for relieving the load on the backhaul network by storing the content locally and limiting re-transmissions, hence reducing energy consumption. Similarly, a cloud-based RAN has been envisioned as a possible solution for computational redistribution in BIB003 . Many of the tasks previously performed by a base station (BS) would be moved to a data center, and only decision making for Radio Frequency (RF) chains as well as baseband-to-RF conversion would be left to base stations. Traffic patterns and demands would then be catered for ahead of time, and redundant BSs would be put into sleep mode according to BIB004 . Furthermore, full duplex Device-to-Device (D2D) communication with uplink channel reuse has been considered to improve SINR and transmission power constraints. A gain of 36% in energy efficiency has been demonstrated using the full duplex scheme with an enhanced self-interference mitigation mechanism instead of half duplex [14] . As machine learning penetrates more and more into the operation of wireless networks, Reference suggests that machine learning algorithms would greatly help to predict hot spots so that other resources could be switched off when not needed. The concept of energy efficiency is treated as a key performance indicator in the upcoming 5G standard and is considered a global ambition, but it cannot be declared a specific actionable item on either the operator or vendor side. A divide-and-conquer approach has been applied to the entire network, and improvements have been targeted at the component, equipment or network level, employing newer algorithms on both the BS and UE sides.
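The cell switch-off and sleep-mode ideas discussed above can be illustrated with a back-of-the-envelope model: each active BS consumes a load-independent static power plus an amplifier term proportional to its transmit power, while a sleeping BS draws only a small residual power. The linear model and all numeric values are illustrative assumptions, not parameters from the cited studies.

```python
# Back-of-the-envelope network power model for cell switch-off:
# active BS draws p0 + dp * p_tx, a sleeping BS draws only p_sleep.
# All numbers are illustrative assumptions.

def network_power_w(n_cells, off_fraction, p0_w=100.0, dp=4.0,
                    p_tx_w=20.0, p_sleep_w=10.0):
    n_off = int(n_cells * off_fraction)
    n_on = n_cells - n_off
    return n_on * (p0_w + dp * p_tx_w) + n_off * p_sleep_w

baseline = network_power_w(100, 0.0)
night = network_power_w(100, 0.4)      # 40% of cells switched off off-peak
saving = 1 - night / baseline
print(f"baseline={baseline/1e3:.1f} kW, off-peak={night/1e3:.1f} kW, "
      f"saving={saving:.0%}")
```

Because the static term dominates at low load, switching a cell off saves far more than merely lowering its transmit power, which is why switch-off and sleep-mode schemes are so attractive during off-peak periods.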
This discussion advocates the fact that operators would have the leverage to tune their networks for a balance between quality of service and energy consumption. In the following sections, we introduce recent works on energy efficiency in 5G, as highlighted in Table 1, preceding a discussion on open issues and challenges. Table 1 (excerpt) lists the problems addressed and their citations:
- EE improvement by a centralized BB processing design BIB007
- Analytical modelling of EE for a heterogeneous network BIB005
- Energy efficiency metrics for heterogeneous wireless cellular networks BIB001
- Incentive-based sleeping mechanism for densely deployed femto cells BIB006
- Sector-based switching technique BIB008
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> As a promising downlink multiple access scheme for future radio access (FRA), this paper discusses the concept and practical considerations of non-orthogonal multiple access (NOMA) with a successive interference canceller (SIC) at the receiver side. The goal is to clarify the benefits of NOMA over orthogonal multiple access (OMA) such as OFDMA adopted by Long-Term Evolution (LTE). Practical considerations of NOMA, such as multi-user power allocation, signalling overhead, SIC error propagation, performance in high mobility scenarios, and combination with multiple input multiple output (MIMO) are discussed. Using computer simulations, we provide system-level performance of NOMA taking into account practical aspects of the cellular system and some of the key parameters and functionalities of the LTE radio interface such as adaptive modulation and coding (AMC) and frequency-domain scheduling. We show under multiple configurations that the system-level performance achieved by NOMA is higher by more than 30% compared to OMA. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Dynamic adaptation of the base stations' on/off activity or transmit power, according to space and time traffic variations, comprises measures accepted in most contemporary resource management approaches dedicated to improving the energy efficiency of cellular access networks. Practical implementation of both measures results in changes to instantaneous base station power consumption. In this paper, extensive analyses presenting the influence of transmit power scaling and on/off switching on instantaneous macro base station power consumption are given.
Based on real on-site measurements performed on a set of macro base stations of different access technologies and production years, we developed linear power consumption models. These models are developed by means of linear regression and precisely model the influence of transmit power on instantaneous power consumption for the second, third and fourth generations of macro base stations. In order to estimate the potential energy savings of transmit power scaling and on/off switching for base stations of different generations, statistical analyses of the measured power consumption are performed. Also, transient times and variations of base stations' instantaneous power consumption during transient periods initiated with on/off switching and transmit power scaling are presented. Since the developed power consumption models follow the measured results with high confidence, they can be used as general models for expressing the relationship between transmitted and consumed power for macro base stations of different technologies and generations. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Mobility, resource constraints and unreliable wireless links of mobile P2P networks will cause high data access latency and high communication overhead. Cooperative caching is widely seen as an effective solution to improve the overall system performance in mobile P2P networks. In this paper we present a novel cooperative caching scheme for mobile P2P networks. In our scheme the caching space of each node is divided into three parts: local caching, cooperative caching and path caching, which respectively store the requested data objects of the nodes, the hot data objects in the networks and the paths of data objects. We also put forward the cache replacement strategy according to our scheme.
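A least-squares fit of the kind of linear power consumption model described above (consumed power as a linear function of transmit power) can be sketched as follows; the measurement values are invented for illustration and are not the cited measurement data.

```python
import numpy as np

# Hypothetical measurement pairs for one macro BS:
# transmit power (W) vs. total consumed input power (W)
p_tx = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
p_in = np.array([460.0, 512.0, 558.0, 605.0, 660.0, 701.0])

# Least-squares fit of P_in = slope * P_tx + P_0, where P_0 is the
# load-independent baseline consumption of the site.
slope, p0 = np.polyfit(p_tx, p_in, 1)

def predict_consumption(p):
    """Predict instantaneous input power (W) for a given transmit power (W)."""
    return slope * p + p0
```

The large fitted intercept P_0 relative to the slope term is exactly what motivates on/off switching: most of a macro site's consumption is load-independent.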
The proposed cache replacement strategy not only takes into account the needs of the nodes, but also the collaboration between nodes. We evaluate the performance of our scheme by using NS-2. The experimental results show that the cache hit ratio is effectively increased and the average hop count is reduced. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> This paper focuses on energy efficiency aspects and related benefits of radio-access-network-as-a-service (RANaaS) implementation (using commodity hardware) as an architectural evolution of LTE-advanced networks toward 5G infrastructure. RANaaS is a novel concept introduced recently, which enables the partial centralization of RAN functionalities depending on the actual needs as well as on network characteristics. In view of the future definition of 5G systems, this cloud-based design is an important solution in terms of efficient usage of network resources. The aim of this paper is to give a vision of the advantages of the RANaaS, to present its benefits in terms of energy efficiency and to propose a consistent system-level power model as a reference for assessing innovative functionalities toward 5G systems. The incremental benefits through the years are also discussed in perspective, by considering the technological evolution of IT platforms and the increasing matching between their capabilities and the need for progressive virtualization of RAN functionalities. The description is complemented by an exemplary evaluation in terms of energy efficiency, analyzing the achievable gains associated with the RANaaS paradigm. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory.
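The three-way cache split described above (local / cooperative / path caching) can be sketched with one small LRU store per partition; the class name, partition sizes and the plain-LRU eviction policy are simplifying assumptions rather than the cited scheme's actual replacement strategy.

```python
from collections import OrderedDict

class PartitionedCache:
    """Toy sketch of a node cache split into local / cooperative / path
    partitions, each a small LRU; sizes are illustrative."""

    def __init__(self, local=4, coop=4, path=4):
        self.parts = {"local": (OrderedDict(), local),
                      "coop": (OrderedDict(), coop),
                      "path": (OrderedDict(), path)}
        self.hits = self.lookups = 0

    def get(self, part, key):
        store, _ = self.parts[part]
        self.lookups += 1
        if key in store:
            store.move_to_end(key)   # refresh recency on a hit
            self.hits += 1
            return store[key]
        return None                  # miss

    def put(self, part, key, value):
        store, cap = self.parts[part]
        store[key] = value
        store.move_to_end(key)
        if len(store) > cap:
            store.popitem(last=False)  # evict least recently used

    def hit_ratio(self):
        return self.hits / self.lookups if self.lookups else 0.0
```

A cooperative scheme would additionally consult neighbours' "coop" partitions on a local miss before going to the origin; that lookup path is omitted here.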
The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key notion of generalized concavity is presented and its strong connection with fractional functions is described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes and distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks.
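The fractional programs discussed above are typically solved with Dinkelbach's procedure, which reduces the ratio maximization to a sequence of parametric subproblems max_p rate(p) - lambda * power(p). A minimal sketch for a single-link energy efficiency problem, with the inner step solved by grid search and all constants (channel gain, noise, circuit power) illustrative:

```python
import numpy as np

def dinkelbach(rate, power, p_grid, tol=1e-9, max_iter=50):
    """Dinkelbach's procedure for max_p rate(p)/power(p): solve the parametric
    problem max_p rate(p) - lam*power(p), update lam to the achieved ratio,
    and stop once the parametric optimum reaches zero."""
    lam = 0.0
    p_star = p_grid[0]
    for _ in range(max_iter):
        inner = rate(p_grid) - lam * power(p_grid)
        p_star = p_grid[np.argmax(inner)]
        if rate(p_star) - lam * power(p_star) < tol:
            break  # lam is (up to tol) the maximum achievable ratio
        lam = rate(p_star) / power(p_star)
    return p_star, lam

# Illustrative single-link instance: Shannon rate, 0.5 W circuit power
h, n0, pc = 1.0, 0.1, 0.5
rate = lambda p: np.log2(1.0 + h * p / n0)
power = lambda p: p + pc
grid = np.linspace(1e-6, 10.0, 20001)
p_opt, ee_opt = dinkelbach(rate, power, grid)
```

In the cited framework the inner problem is solved in closed form or by convex optimization rather than grid search; the grid keeps the sketch self-contained.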
<s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> The recent trend in network communication is leading towards the innovation of high speed wireless broadband technology. The scheduling of real-time traffic in a network has a high impact on the system, so the most efficient scheduling is crucial. This paper proposes an energy-efficient resource allocation scheduler with QoS-aware support for LTE networks. The ultimate aim is to promote and achieve a green, environmentally friendly wireless LTE network. Some related works on green LTE networks are also discussed. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Traditional wireless networks mainly rely on macro cell deployments; meanwhile, with the advances in fourth generation networks, the recent architectures of LTE and LTE-A support Heterogeneous Networks (HetNets) that employ a mix of macro and small cells. Small cells aim at increasing coverage and capacity. Coverage both at cell edges and in indoor environments can be significantly improved by relays and small cells. Capacity is inherently limited because of the limited spectrum, and although 4G wireless networks have been able to provide a considerable increase in capacity, it has always been challenging to keep up with the growing user demands. In particular, the high volume of traffic resulting from video uploads or downloads is the major reason for the ever growing user demand. In the Internet, content caching at locations closer to the users has been a successful approach to enhance resource utilization. Very recently, content caching within the wireless network has been considered for 4G networks. In this paper, we propose an Integer Linear Programming (ILP)-based energy-efficient content placement approach for small cells.
The proposed model, namely minimize Uplink Power and Caching Power (minUPCA), jointly minimizes uplink and caching powers. We compare the performance of minUPCA with a scheme that only aims to minimize uplink power. Our results show that minUPCA provides a compromise between the uplink energy budget of the User Equipment (UE) and the caching energy budget of the Small Cell Base Station (SCBS). <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper we evaluate the energy efficiency of a 5G radio access network (RAN) based on LTE technology when comparing two small cell deployment strategies to enhance the RAN capacity. Specifically, we compare densifying a 3-sector macrocell RAN with small cells against first upgrading to a 6-sector macrocell RAN before densifying with small cells. The latter strategy has been used in urban areas by 4G network operators. The energy consumption gain (ECG) is used as a figure of merit in this paper. The radio base station power consumption is estimated by using a realistic power consumption model. Our results show that deploying a small cell overlay in a 3-sector macrocell RAN is more energy efficient than deploying a small cell overlay in a 6-sector macrocell RAN, even though the latter uses fewer small cells. Further energy savings can be achieved by implementing an adaptive sectorisation technique. An energy saving of 25% is achieved for 6 sectors when progressively decreasing the number of active sectors from 6 to 1 in accordance with the temporal average traffic load. Irrespective of this, the 3-sector option, with or without the adaptive sectorisation technique, is always more energy efficient.
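The energy consumption gain (ECG) figure of merit and the adaptive sectorisation idea above can be sketched as follows; the per-sector power draw and the hourly active-sector profile are illustrative assumptions, not the cited study's numbers.

```python
def energy_consumption_gain(e_ref, e_test):
    """ECG: reference deployment energy over evaluated deployment energy;
    ECG > 1 means the evaluated deployment saves energy."""
    return e_ref / e_test

def daily_site_energy_wh(p_sector_w, active_sectors_per_hour):
    """Energy (Wh) of one site over a day when the number of active
    sectors tracks the hourly traffic load (adaptive sectorisation)."""
    return sum(p_sector_w * n for n in active_sectors_per_hour)

# Illustrative comparison: 6 sectors always on vs. a load-adaptive profile
always_on = daily_site_energy_wh(100.0, [6] * 24)
adaptive = daily_site_energy_wh(100.0, [6] * 8 + [3] * 8 + [1] * 8)
```

With these toy numbers the adaptive profile cuts the daily site energy by roughly 44%, i.e. an ECG of 1.8 over the always-on baseline.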
<s> BIB008 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> A number of merits could be brought by network function virtualization (NFV), such as scalability, on demand allocation of resources, and the efficient utilization of network resources. In this paper, we introduce a framework for designing an energy-efficient architecture for 5G mobile network function virtualization. In the proposed architecture, the main functionalities of the mobile core network, which include the packet gateway (P-GW), serving gateway (S-GW), mobility management entity (MME), policy control and charging rules function, and the home subscriber server (HSS) functions, are virtualized and provisioned on demand. We also virtualize the functions of the base band unit (BBU) of the evolved node B (eNB) and offload them from the mobile radio side. We leverage the capabilities of gigabit passive optical networks (GPON) as the radio access technology to connect the remote radio head (RRH) to new virtualized BBUs. We consider the IP/WDM backbone network and the GPON based access network as the hosts of virtual machines (VMs) where network functions will be implemented. Two cases were investigated; in the first case, we considered virtualization in the IP/WDM network only (since the core network is typically the location that supports virtualization) and in the second case we considered virtualization in both the IP/WDM and GPON access network. Our results indicate that we can achieve energy savings of 22% on average with virtualization in both the IP/WDM network and GPON access network compared to the case where virtualization is only done in the IP/WDM network.
<s> BIB009 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, we investigate the interference management problem in a full-duplex cellular network from a spectrum resource allocation perspective. In order to maximize the full-duplex network throughput, we propose an interference area based resource allocation algorithm, which can pair the downlink UE and uplink UE with limited mutual interference. The simulation results verify the efficiency of the proposed interference area based resource allocation algorithm in the investigated full-duplex cellular network. <s> BIB010 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> The emerging 5G wireless networks will pose extreme requirements such as high throughput and low latency. Caching as a promising technology can effectively decrease latency and provide customized services based on group user behaviour (GUB). In this paper, we carry out the energy efficiency analysis in the cache-enabled hyper cellular networks (HCNs), where the macro cells and small cells (SCs) are deployed heterogeneously with the control and user plane (C/U) split. Benefiting from the assistance of macro cells, a novel access scheme is proposed according to both user interest and fairness of service, where the SCs can turn into semi-sleep mode. Expressions of coverage probability, throughput and energy efficiency (EE) are derived analytically as the functions of key parameters, including the cache ability, search radius and backhaul limitation. Numerical results show that the proposed scheme in HCNs can increase the network coverage probability by more than 200% compared with the single-tier networks. The network EE can be improved by 54% compared with the nearest access scheme, with a larger search radius and higher SC cache capacity under lower traffic load.
Our performance study provides insights into the efficient use of caching in 5G software defined networking (SDN). <s> BIB011 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Wireless networks have made huge progress over the past three decades. Nevertheless, emerging fifth-generation (5G) networks are under pressure to continue in this direction at an even more rapid pace, at least for the next ten to 20 years. This pressure is exercised by rigid requirements as well as emerging technology trends that are aimed at introducing improvements to the 5G wireless world. <s> BIB012 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, we study the joint resource allocation algorithm design for downlink and uplink multicarrier transmission assisted by a shared user equipment (UE)-side distributed antenna system (SUDAS). The proposed SUDAS simultaneously utilizes licensed frequency bands and unlicensed frequency bands (e.g. millimeter wave bands), to enable a spatial multiplexing gain for single-antenna UEs to improve energy efficiency and system throughput of 5th generation (5G) outdoor-to-indoor communication. The design of the UE selection, the time allocation to uplink and downlink, and the transceiver processing matrix is formulated as a non-convex optimization problem for the maximization of the end-to-end system energy efficiency (bits/Joule). The proposed problem formulation takes into account minimum data rate requirements for delay sensitive UEs and the circuit power consumption of all transceivers. In order to design a tractable resource allocation algorithm, we first show that the optimal transmitter precoding and receiver post-processing matrices jointly diagonalize the end-to-end communication channel for both downlink and uplink communication via SUDAS.
Subsequently, the matrix optimization problem is converted to an equivalent scalar optimization problem for multiple parallel channels, which is solved by an asymptotically globally optimal iterative algorithm. Besides, we propose a suboptimal algorithm which finds a locally optimal solution of the non-convex optimization problem. Simulation results illustrate that the proposed resource allocation algorithms for SUDAS achieve a significant performance gain in terms of system energy efficiency and spectral efficiency compared to conventional baseline systems by offering multiple parallel data streams for single-antenna UEs. <s> BIB013 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> 5G wireless technology is paving the way to revolutionize future ubiquitous and pervasive networking, wireless applications, and user quality of experience. To realize its potential, 5G must provide considerably higher network capacity, enable massive device connectivity with reduced latency and cost, and achieve considerable energy savings compared to existing wireless technologies. The main objective of this article is to explore the potential of NFV in enhancing 5G radio access networks' functional, architectural, and commercial viability, including increased automation, operational agility, and reduced capital expenditure. The ETSI NFV Industry Specification Group has recently published drafts focused on standardization and implementation of NFV. Harnessing the potential of 5G and network functions virtualization, we discuss how NFV can address critical 5G design challenges through service abstraction and virtualized computing, storage, and network resources. We describe NFV implementation with network overlay and SDN technologies. In our discussion, we cover the first steps in understanding the role of NFV in implementing CoMP, D2D communication, and ultra densified networks. 
<s> BIB014 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Several critical benefits are encompassed by the concept of NFV when this concept is brought under the roof of 5G, such as scalability, a high level of flexibility, efficient utilisation of network resources, cost and power reduction, and on demand allocation of network resources. NFV could reduce the cost of installing and maintaining network equipment through consolidating the hardware resources. By deploying NFV, network resources could be shared between different users and several network functions in a facile and flexible way. Besides this, the network resources could be rescaled and allocated to each function in the network. As a result, the NFV can be customised according to the precise demands, so that all the network components and users could be handled and accommodated efficiently. In this paper we extend the virtualization framework that was introduced in our previous work to include a large range of virtual machine workloads with the presence of mobile core network virtual machine intra-communication. In addition, we investigate a wide range of traffic reduction factors which are caused by base band virtual machines (BBUVM) and their effect on the power consumption. We used two general scenarios to group our findings: the first one is virtualization in both IP over WDM (core network) and GPON (access network), while the second one is only in the IP over WDM network (core network). We illustrate that the virtualization in IP over WDM and GPON can achieve power savings of around 16.5%–19.5% for all cases compared to the case where no NFV is deployed, while the virtualization in IP over WDM records around 13.5%–16.5%.
<s> BIB015 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Network Function Virtualization (NFV) enables mobile operators to virtualize their network entities as Virtualized Network Functions (VNFs), offering fine-grained on-demand network capabilities. VNFs can be dynamically scaled in/out to meet performance requirements and other dynamic behaviors. However, designing the auto-scaling algorithm for desired characteristics with low operation cost and low latency, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a VNF Dynamic Auto Scaling Algorithm (DASA) considering the tradeoff between performance and operation cost. We develop an analytical model to quantify the tradeoff and validate the analysis through extensive simulations. The results show that the DASA can significantly reduce operation cost given the latency upper-bound. Moreover, the models provide a quick way to evaluate the cost-performance tradeoff and system design without wide deployment, which can save cost and time. <s> BIB016 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In the cloud computing paradigm, virtual resource autoscaling approaches have been intensively studied in recent years. Those approaches dynamically scale in/out virtual resources to adjust system performance for saving operation cost. However, designing the autoscaling algorithm for desired performance with limited budget, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a Deadline and Budget Constrained Autoscaling (DBCA) algorithm for addressing the budget-performance tradeoff. We develop an analytical model to quantify the tradeoff and cross-validate the model by extensive simulations.
The results show that the DBCA can significantly improve system performance given the budget upper-bound. In addition, the model provides a quick way to evaluate the budget-performance tradeoff and system design without wide deployment, saving on cost and time. <s> BIB017 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Ultra-dense networks can further improve the spectrum efficiency (SE) and the energy efficiency (EE). However, the interference avoidance and the green design are becoming more complex due to the intrinsic densification and scalability. It is known that the denser small cells are deployed, the more cooperation opportunities exist among them. In this paper, we characterize the cooperative behaviors in the Nash bargaining cooperative game-theoretic framework, where we maximize the EE performance with a certain sacrifice of SE performance. We first analyze the relationship between the EE and the SE, based on which we formulate the Nash-product EE maximization problem. We achieve the closed-form sub-optimal SE equilibria to maximize the EE performance with and without the minimum SE constraints. We finally propose a CE2MG algorithm, and numerical results verify the improved EE and fairness of the presented CE2MG algorithm compared with the non-cooperative scheme. <s> BIB018 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Wireless cellular networks have seen dramatic growth in the number of mobile users. As a result, data requirements, and hence base-station power consumption, have increased significantly. This in turn adds to operational expenditures and also contributes to global warming. The base station power consumption in long-term evolution (LTE) has, therefore, become a major challenge for vendors to stay green and profitable in the competitive cellular industry.
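The auto-scaling algorithms above (DASA, DBCA) trade operation cost against a latency bound. A deliberately simplified, deterministic sketch of such a scaling rule, not the cited algorithms themselves, with all parameter names and values illustrative:

```python
import math

def scale_decision(backlog_jobs, service_rate_per_vnf, latency_bound_s,
                   min_vnfs=1, max_vnfs=20):
    """Pick the number of VNF instances so the current backlog can be
    drained within the latency bound (fluid approximation: each instance
    serves service_rate_per_vnf jobs per second), clamped to the allowed
    instance range."""
    needed = math.ceil(backlog_jobs / (service_rate_per_vnf * latency_bound_s))
    return max(min_vnfs, min(max_vnfs, needed))
```

The cited works instead use queueing-theoretic models (and, for DBCA, a budget constraint) to set the instance count; the clamp to [min_vnfs, max_vnfs] plays the role of the legacy-capacity and budget limits.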
This necessitates novel methods to devise energy efficient communication in LTE. The importance of the topic has attracted huge research interest worldwide. Energy saving (ES) approaches proposed in the literature can be broadly classified into the categories of energy efficient resource allocation, load balancing, carrier aggregation, and bandwidth expansion. Each of these methods has its own pros and cons, leading to a tradeoff between ES and other performance metrics and resulting in open research questions. This paper discusses various ES techniques for LTE systems and critically analyses their usability through a comprehensive comparative study. <s> BIB019 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, device-to-device (D2D) communication and small cell technology are introduced into cellular networks to form a three-layer heterogeneous network (HetNet). The resource allocation problem of D2D users and small cellular users (SCUEs) is studied in this network, and a resource allocation method that satisfies the communication quality of macro cellular users, D2D users and SCUEs is proposed. Firstly, in order to reduce the computational complexity, regional restrictions on the macro base station and users are applied; then, in order to improve the system throughput, a resource allocation method based on interference control is proposed. The simulation results show that the proposed method can effectively reduce the computational complexity and improve the overall system throughput. <s> BIB020 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Orthogonal Frequency Division Multiplexing (OFDM) has been widely used in next generation networks. With the increase in wireless equipment, energy consumption has become a big challenge for wireless networks.
Power control is key to network management; power allocation and channel assignment have been investigated for maximizing the energy efficiency of each cell in an OFDM-based cellular network. The problem of maximizing the energy efficiency of the network has been formulated as a non-linear fractional program. Dual decomposition and sub-gradient iteration have been used to solve it. Furthermore, a numerical simulation has been carried out to verify the proposed algorithm. The simulation results show that the maximum energy efficiency in each cell can be obtained. <s> BIB021 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Content caching is an efficient technique to reduce delivery latency and system congestion during peak-traffic times by bringing data closer to end users. Existing works consider caching only at higher layers separated from the physical layer. In this paper, we study wireless caching networks by taking into account cache capability when designing the signal transmission. In particular, we investigate multi-layer caching and its performance in edge-caching wireless networks where both the base station (BS) and users are capable of storing content data in their local cache. Two notable uncoded and coded caching strategies are studied. Firstly, we propose a coded caching strategy that applies to an arbitrary cache size. The required backhaul and access rates are given as a function of the BS and user cache size. Secondly, closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Thirdly, the system EE is maximized via precoding vector design and optimization while satisfying the user request rate. Finally, numerical results are presented to verify the effectiveness of the two caching methods.
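The per-cell power allocation in the OFDM setting above is commonly built around a water-filling step inside the dual decomposition / sub-gradient loop. A minimal bisection-based water-filling sketch (channel gains and power budget are illustrative, and the EE-specific weighting of the cited work is omitted):

```python
import numpy as np

def water_filling(gains, p_total, iters=200):
    """Classic water-filling across subcarriers: p_k = max(0, mu - 1/g_k),
    with the water level mu found by bisection so that the allocated
    powers meet the total power budget."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / gains.min()   # mu* lies in this bracket
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > p_total:
            hi = mu   # water level too high, spend less power
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)
```

Stronger subcarriers (larger g_k) receive more power; sufficiently weak ones are switched off entirely, which is what drives the per-cell energy savings.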
<s> BIB022 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches, namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator-specific SLA requirements on the global SE and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB023 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> This paper focuses on resource allocation in energy-cooperation enabled two-tier heterogeneous networks (HetNets) with non-orthogonal multiple access (NOMA), where base stations (BSs) are powered by both renewable energy sources and the conventional grid. Each BS can serve multiple users at the same time and frequency band. To deal with the fluctuation of renewable energy harvesting, we consider that renewable energy can be shared between BSs via the smart grid.
In such networks, user association and power control need to be re-designed, since existing approaches are based on OMA. Therefore, we formulate a problem to find the optimum user association and power control schemes for maximizing the energy efficiency of the overall network, under quality-of-service constraints. To deal with this problem, we first propose a distributed algorithm to provide the optimal user association solution for the fixed transmit power. Furthermore, a joint user association and power control optimization algorithm is developed to determine the traffic load in energy-cooperation enabled NOMA HetNets, which achieves much higher energy efficiency performance than existing schemes. Our simulation results demonstrate the effectiveness of the proposed algorithm, and show that NOMA can achieve higher energy efficiency performance than OMA in the considered networks. <s> BIB024 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Non-orthogonal multiple access (NOMA) has been recently considered as a promising multiple access technique for fifth generation (5G) mobile networks as an enabling technology to meet the demands of low latency, high reliability, massive connectivity, and high throughput. The two dominant types of NOMA are power-domain and code-domain. The key feature of power-domain NOMA is to allow different users to share the same time, frequency, and code, but with different power levels. In code-domain NOMA, different spread-spectrum codes are assigned to different users and are then multiplexed over the same time-frequency resources. This paper concentrates on power-domain NOMA. In power-domain NOMA, Successive Interference Cancellation (SIC) is employed at the receiver. In this paper, the optimum received uplink power levels using a SIC detector are determined analytically for any number of transmitters.
The optimum uplink received power levels using the SIC decoder in NOMA strongly resemble the μ-law encoding used in pulse code modulation (PCM) speech companders. <s> BIB025 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Energy efficiency is likely to be the litmus test for the sustainability of upcoming 5G networks. Before the new generation of cellular networks is ready to roll out, their architecture designers are motivated to leverage the SDN technology for the sake of its offered flexibility, scalability, and programmability to achieve the 5G KPI of 10 times lower energy consumption. In this paper, we present Proofs-of-Concept of Energy Management and Monitoring Applications (EMMAs) in the context of three challenging, realistic case studies, along with an SDN/NFV-based MANO architecture to manage converged fronthaul/backhaul 5G transport networks. <s> BIB026 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Next-generation wireless networks are expected to support extremely high data rates and radically new applications, which require a new wireless radio technology paradigm. The challenge is that of assisting the radio in intelligent adaptive learning and decision making, so that the diverse requirements of next-generation wireless networks can be satisfied. Machine learning is one of the most promising artificial intelligence tools, conceived to support smart radio terminals. Future smart 5G mobile terminals are expected to autonomously access the most meritorious spectral bands with the aid of sophisticated spectral efficiency learning and inference, in order to control the transmission power, while relying on energy efficiency learning/inference and simultaneously adjusting the transmission protocols with the aid of quality of service learning/inference.
Hence we briefly review the rudimentary concepts of machine learning and propose their employment in the compelling applications of 5G networks, including cognitive radios, massive MIMOs, femto/small cells, heterogeneous networks, smart grid, energy harvesting, device-to-device communications, and so on. Our goal is to assist the readers in refining the motivation, problem formulation, and methodology of powerful machine learning algorithms in the context of future networks in order to tap into hitherto unexplored applications and services. <s> BIB027 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Spurred by both economic and environmental concerns, energy efficiency (EE) has now become one of the key pillars for the fifth generation (5G) mobile communication networks. To maximize the downlink EE of the 5G ultra dense network (UDN), we formulate a constrained EE maximization problem and translate it into a convex representation based on the fractional programming theory. To solve this problem, we first adopt a centralized algorithm to reach the optimum based on Dinkelbach's procedure. To improve the efficiency and reduce the computational complexity, we further propose a distributed iteration resource allocation algorithm based on alternating direction method of multipliers (ADMM). For the proposed distributed algorithm, the local and dual variables are updated by each base station (BS) in parallel and independently, and the global variables are updated through the coordination and information exchange among BSs. Moreover, as the noise may lead to imperfect information exchange among BSs, the global variables update may be subject to failure. To cope with this problem, we propose a robust distributed algorithm, for which the global variable only updates as the information exchange is successful.
We prove that this modified robust distributed algorithm converges to the optimal solution of the primal problem almost surely. Simulation results validate our proposed centralized and distributed algorithms. In particular, the proposed robust distributed algorithm can effectively eliminate the impact of noise and converge to the optimal value at the cost of a slight increase in computational complexity. <s> BIB028 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Energy and spectral efficiencies are key metrics to assess the performance of networks and compare different configurations or techniques. There are many ways to define those metrics, and the performance indicators used in their calculation can also be measured in different ways. Using an LTE-A network, we measure different performance indicators and the metrics' outputs are compared. Modifying the transmitted output power, the bandwidth, and the number of base stations, different network configurations are also compared. As expected, the measurements show that increasing the bandwidth increases the throughput more than it increases the energy consumption. Results clearly show that using inappropriate indicators can be misleading. The power indicator should include all energy consumed and the throughput should be dependent on the traffic, taking into account the idle time of the network, if any. There is a need to include more performance indicators into the metrics, especially those related to quality of service. <s> BIB029 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> To avoid the latency of the RRC state transition procedure, the legacy network keeps the UE in the RRC CONNECTED state for a pre-defined time duration even without any traffic arrival. However, this drains the UE battery rapidly because the UE must monitor the PDCCH and send CQI feedback periodically.
In this paper, we design the RRC connection control to enhance energy efficiency with moderate control signaling overhead. In the RRC INACTIVE state, newly introduced in NR, both the network and the UE save the UE context, including bearer configuration and security, even after the UE is released from the network. Owing to the saved UE context, the RRC state transition from RRC INACTIVE to RRC CONNECTED requires less CN signalling. Thus the network can release the UE to RRC INACTIVE more aggressively with a shorter timer. Furthermore, we propose connectionless data transmission in RRC INACTIVE without an RRC state transition to RRC CONNECTED. In our performance analysis, UE energy consumption is improved by 50% for the modem alone and by 18% for the total device including the display. Furthermore, when small data or background (keep-alive) traffic is transferred in RRC INACTIVE, the energy efficiency can be increased by up to a factor of two. <s> BIB030 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Using a network of cache enabled small cells, traffic during peak hours can be reduced by proactively fetching the content that is most likely to be requested. In this paper, we aim to explore the impact of proactive caching on an important metric for future generation networks, namely, energy efficiency (EE). We argue that exploiting the spatial repartitions of users, in addition to the correlation in their content popularity profiles, can result in considerable improvement of the achievable EE. In this paper, the optimization of EE is decoupled into two related subproblems. The first one addresses the issue of content popularity modeling. While most existing works assume similar popularity profiles for all users, we consider an alternative framework in which users are clustered according to their popularity profiles.
In order to showcase the utility of the proposed clustering, we use a statistical model selection criterion, namely, Akaike information criterion. Using stochastic geometry, we derive a closed-form expression of the achievable EE and we find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartitions of users. After considering a snapshot of the network, we formulate a combinatorial problem that optimizes content placement in order to minimize the transmission power. Numerical results show that the clustering scheme considerably improves the cache hit probability and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. <s> BIB031 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, we study delay-aware cooperative online content caching with limited caching space and unknown content popularity in dense small cell wireless networks. We propose a Cooperative Online Content cAching algorithm (COCA) that decides in which BS the requested content should be cached with considerations of three important factors: the residual cache space in each small cell basestation (SBS), the number of coordinated connections each SBS establishes with other SBSs, and the number of served users in the coverage area of each SBS. In addition, due to limited storage space in the cache, the proposed COCA algorithm eliminates the least recently used (LRU) contents to free up the space. We compare the delay performance of the proposed COCA algorithm with the existing offline cooperative caching schemes through simulations. Simulation results demonstrate that the proposed COCA algorithm has a better delay performance than the existing offline algorithms. 
<s> BIB032 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> We propose and compare different potential placement schemes for baseband functions and mobile edge computing in terms of their energy efficiency. Simulation results show that NFV-enabled flexible placement reduces power by more than 20% compared with traditional solutions. <s> BIB033 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> The current base station centric cellular network architecture hinders the implementation of effective sleep techniques, often resulting in energy-inefficient mobile networks. The efforts towards 5G and network densification, however, open new possibilities and may, at last, allow the integration of sleep modes without any QoS degradation. In this paper, we consider heterogeneous networks in which data and control planes are split and independent, referred to as SDHN. We present an energy consumption metric that can be used to evaluate the radio access power consumption and the associated energy efficiency of these networks. Compared with other metrics in the literature, the proposal accounts for both the coverage area and the traffic load, and it is relatively simple to use. The proposed metric is applied to evaluate the power consumption performance of an LTE SDHN in an urban indoor scenario. Results confirm that sleep modes in such architectures can effectively cut power consumption and improve energy efficiency while preserving QoS. <s> BIB034 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Recently, Fog-RANs have been introduced as the evolution of Cloud Radio Access Networks (CRAN) for enabling edge computing in 5G systems. By alleviating the fronthaul burden for data transfer, transport delays are expected to be greatly reduced.
However, in order to support envisioned 5G real-time and delay-sensitive applications, tailored radio resource and interference management schemes become necessary. Therefore, this paper investigates the issues of user scheduling and beamforming for energy efficient Fog-RAN. We formulate the energy efficiency maximization problem, taking into account the local user clustering constraint specific to Fog-RANs. Given the difficulty of this non-convex optimization problem, we propose a strategy where the energy efficient user scheduling is split into two parts: first, we solve an equivalent sum-rate maximization problem; then, the most energy-efficient FogAPs are activated in a greedy manner. To meet the requirement of low computational complexity of FogAPs, local beamforming is performed given fixed user scheduling. Simulation results show that the proposed scheme not only provides similar levels of user rates and fairness, but also achieves much higher system energy efficiency than the baseline scheme. <s> BIB035 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> NOMA has been recognized as a highly promising FRA technology to satisfy the requirements of the fifth generation era on high spectral efficiency and massive connectivity. Since the EE has become a growing concern in FRA from both the industrial and societal perspectives, this article discusses the sustainability issues of NOMA. We first thoroughly examine the theoretical power regions of NOMA to show the minimum transmission power with fixed data rate requirement, demonstrating the EE performance advantage of NOMA over orthogonal multiple access. Then we explore the role of energy-aware resource allocation and grant-free transmission in further enhancing the EE performance of NOMA.
Based on this exploration, a hybrid NOMA strategy that reaps the joint benefits of resource allocation and grant-free transmission is investigated to simultaneously accomplish high throughput, large connectivity, and low energy cost. Finally, we identify some important and interesting future directions for NOMA designers to follow in the next decade. <s> BIB036 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> By analytically showing that index coding (IC) is more power efficient than superposition coding (SC) when appropriate caching contents are available for a pair of users, we propose a sub-optimal joint user clustering and power allocation scheme for a single-cell downlink non-orthogonal multiple access network with caching memory at the receivers that alternate between IC and SC. Simulation studies demonstrate that the proposed scheme significantly reduces the transmission power when compared with the benchmark scheme that only allows SC. <s> BIB037 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, network function virtualization (NFV) is identified as a promising key technology that can contribute to energy-efficiency improvement in 5G networks. An optical network supported architecture is proposed and investigated in this work to provide the wired infrastructure needed in 5G networks and to support NFV towards an energy efficient 5G network. In this architecture the mobile core network functions as well as baseband functions are virtualized and provided as VMs. The impact of the total number of active users in the network, backhaul/fronthaul configurations and VM inter-traffic is investigated. A mixed integer linear programming (MILP) optimization model is developed with the objective of minimizing the total power consumption by optimizing the VM locations and VM servers' utilization.
The MILP model results show that virtualization can result in up to 38% (average 34%) energy saving. The results also reveal how the total number of active users affects the baseband virtual machines (BBUVMs) optimal distribution whilst the core network virtual machines (CNVMs) distribution is affected mainly by the inter-traffic between the VMs. For real-time implementation, two heuristics are developed, an Energy Efficient NFV without CNVMs inter-traffic (EENFVnoITr) heuristic and an Energy Efficient NFV with CNVMs inter-traffic (EENFVwithITr) heuristic, both produce comparable results to the optimal MILP results. Finally, a Genetic algorithm is developed for further verification of the results. <s> BIB038 | On interdependence among transmit and consumed power of macro base station technologies BIB002 Utilization of Nash product for maximizing cooperative EE BIB018 Energy Efficiency in Wireless Networks via Fractional Programming Theory BIB005 Energy efficiency maximization oriented resource allocation in 5G ultra-dense network: Centralized and distributed algorithms BIB028 Comparison of Spectral and Energy Efficiency Metrics Using Measurements in a LTE-A Network BIB029 Energy Management in LTE Networks BIB019 Energy-efficient resource allocation scheduler with QoS aware supports for green LTE network BIB006 Interference-area-based resource allocation for full-duplex communications BIB010 A resource allocation method for D2D and small cellular users in HetNet BIB020 Highly Energy-Efficient Resource Allocation in Power Telecommunication Network BIB021 EE enhancement with RRC Connection Control for 5G New Radio (NR) BIB030 Proactive caching based on the content popularity on small cells BIB031 Cooperative Online Caching in Small Cell Networks with Limited Cache Size and Unknown Content Popularity BIB032 Economical Energy Efficiency: An Advanced Performance Metric for 5G Systems Energy-efficient design for edge-caching wireless networks: When is coded-caching 
beneficial? BIB022 Content caching in small cells with optimized UL and caching power BIB007 An effective cooperative caching scheme for mobile P2P networks BIB003 EE analysis of heterogeneous cache enabled 5G hyper cellular networks BIB011 EE at the network level Motivation for infrastructure sharing based on current energy consumption figures BIB012 Energy efficiency in 5G access networks: Small cell densification and high order sectorisation BIB008 EE at the network level Energy-Efficient User Association and Beamforming for 5G Fog Radio Access Networks BIB035 Global energy and spectral efficiency maximization in a shared noise-limited environment BIB023 EE Resource Allocation in NOMA BIB024 Concept and practical considerations of non-orthogonal multiple access (NOMA) for future radio access BIB001 Optimum received power levels of UL NOMA signals for EE improvement BIB025 Spectral efficient nonorthogonal multiple access schemes (NOMA vs RAMA) Non-Orthogonal Multiple Access: Achieving Sustainable Future Radio Access BIB036 Mode Selection Between Index Coding and Superposition Coding in Cache-based NOMA Networks BIB037 Use case of shared UE side distributed antenna System for indoor usage BIB013 Optimized Energy Aware 5G Network Function Virtualization BIB038 Energy Efficient Network Function Virtualization in 5G Networks BIB009 Network Function Virtualization in 5G BIB014 A Framework for Energy Efficient NFV in 5G Networks BIB015 Energy efficient Placement of Baseband Functions and Mobile Edge Computing in 5G Networks BIB033 Energy Efficiency Benefits of RAN-as-a-Service Concept for a Cloud-Based 5G Mobile Network Infrastructure BIB004 Dynamic Auto Scaling Algorithm (DASA) for 5G Mobile Networks BIB016 Design and Analysis of Deadline and Budget Constrained Autoscaling (DBCA) Algorithm for 5G Mobile Networks BIB017 EE using SDN technology Impact of software defined networking (SDN) paradigm on EE BIB026 EE gains from the separated control and data planes in a 
heterogeneous network BIB034 EE using ML techniques Machine Learning Paradigms for Next-Generation Wireless Networks BIB027 Switch-on/off policies for energy harvesting small cells through distributed Q-learning |
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> A hybrid network architecture has been proposed for machine-to-machine M2M communications in the fifth generation wireless systems, where M2M gateways connect the capillary networks and cellular networks. In this paper, we develop novel energy efficient and end-to-end delay duty cycle control scheme for controllers at the gateway and the capillary networks coordinator. We first formulate a duty cycle control problem with joint-optimisation of energy consumption and end-to-end delay. Then, a distributed duty cycle control scheme is proposed. The proposed scheme consists of two parts i a transmission policy, which decides the optimal number of packets to be transmitted between M2M devices, coordinators and gateways; and ii a duty cycle control for IEEE 802.15.4. We analytically derived the optimal duty cycle control and developed algorithms to compute the optimal duty cycle. It is to increase the feasibility of implementing the control on computation-limited devices where a suboptimal low complexity rollout algorithm-based duty cycle control RADutyCon is proposed. The simulation results show that RADutyCon achieves an exponential reduction of the computation complexity as compared with that of the optimal duty cycle control. The simulation results show that RADutyCon performs close to the optimal control, and it performs no worse than the heuristic base control. Copyright © 2014 John Wiley & Sons, Ltd. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> The explosive growth of mobile multimedia services has caused tremendous network traffic in wireless networks and a great part of the multimedia services are delay-sensitive. Therefore, it is important to design efficient radio resource allocation algorithms to increase network capacity and guarantee the delay QoS. 
In this paper, we study the power control problem in the downlink of two-tier femtocell networks with consideration of delay QoS provisioning. Specifically, we introduce the effective capacity (EC) as the network performance measure instead of the Shannon capacity to provide statistical delay QoS provisioning. Then, the optimization problem is modeled as a non-cooperative game and the existence of Nash equilibria (NE) is investigated. However, in order to enhance the self-organization capacity of femtocells, based on the non-cooperative game, we employ a Q-learning framework in which all of the femtocell base stations (FBSs) are considered as agents to achieve power allocation. Then a distributed Q-learning-based power control algorithm is proposed to make femtocell users (FUs) gain maximum EC. Numerical results show that the proposed algorithm not only maintains the delay requirements of the delay-sensitive services, but also has a good convergence performance.
Simulation results show that the proposed schemes significantly enhance the energy efficiency of the cognitive femto users compared with the existing spectral-efficient designs. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements. <s> BIB004 | Duty cycle control with joint optimization of delay and energy efficiency for capillary machine-to-machine networks in 5G communication system BIB001 Distributed power control for two tier femtocell networks with QoS provisioning based on Q-learning BIB002 Spectrum sensing techniques using both hard and soft decisions BIB003 EE resource allocation in 5G heterogeneous cloud radio access network BIB004 |
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> In this paper heterogeneous wireless cellular networks based on two-tier architecture consisting of macrocells and femtocells are considered. Methods of femtocells deployment and management are explored in order to determine their effects on performance of wireless cellular networks. Thus, network performance parameters are described and analytically calculated for different two-tier network architectures. A specific approach is presented in the paper, where calculations of the network performance parameters are supported with some of the results obtained using an appropriate simulation tool. In such a manner, energy efficiency of the considered two-tier network architectures is studied by introducing a number of so called green metrics. It is clearly shown that significant energy efficiency, as well as throughput, improvements can be achieved by adopting heterogeneous architecture for wireless cellular networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> Dynamic adaptation of the base stations on/off activity or transmit power, according to space and time traffic variations, are measures accepted in the most contemporary resource management approaches dedicated to improving energy efficiency of cellular access networks. Practical implementation of both measures results in changes to instantaneous base station power consumption. In this paper, extensive analyses presenting influence of the transmit power scaling and on/off switching on instantaneous macro base stations power consumption are given. Based on real on-site measurements performed on a set of macro base stations of different access technologies and production years, we developed linear power consumption models. 
These models are developed by means of linear regression and precisely model the influence of transmit power on instantaneous power consumption for the second, third and fourth generations of macro base stations. In order to estimate the potential energy savings of transmit power scaling and on/off switching for base stations of different generations, statistical analyses of measured power consumptions are performed. Also, transient times and variations of base stations' instantaneous power consumption during transient periods initiated with on/off switching and transmit power scaling are presented. Since the developed power consumption models follow the measured results with high confidence, they can be used as general models for expressing the relationship between transmitted and consumed power for macro base stations of different technologies and generations. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in 5G cellular networks. While massive MIMO will reduce the transmission power at the expense of higher computational cost, the question remains as to which (computation or transmission power) is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this article is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50 percent of the energy is consumed by the computation power at 5G small cell BSs. Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
<s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> In next generation wireless networks along with the overwhelming demand of high data rate and network capacity, the user demands ubiquitous connectivity with the network. In order to fulfill the demand of anywhere at any time data services, the network operators have to install more and more base stations that eventually leads towards high power consumption. For this, the potential solution is derived from 5G network that proposes a heterogeneous environment of wireless access networks. More particularly, deployment of Femto and Pico cell under the umbrella of Macro cell base stations (BS). Such networking strategy will result high network capacity and energy efficiency along with better network coverage. In this article, an analysis of energy efficiency has been carried out by using two-tier and three tier network configurations. The simulation results demonstrate that rational deployment of small cells improves the energy efficiency of wireless network. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> As we make progress towards the era of fifth generation (5G) communication networks, energy efficiency (EE) becomes an important design criterion because it guarantees sustainable evolution. In this regard, the massive multiple-input multiple-output (MIMO) technology, where the base stations (BSs) are equipped with a large number of antennas so as to achieve multiple orders of spectral and energy efficiency gains, will be a key technology enabler for 5G. In this article, we present a comprehensive discussion on state-of-the-art techniques which further enhance the EE gains offered by massive MIMO (MM). 
We begin with an overview of MM systems and discuss how realistic power consumption models can be developed for these systems. Thereby, we discuss and identify a few shortcomings of some of the most prominent EE-maximization techniques present in the current literature. Then, we discuss "hybrid MM systems" operating in a 5G architecture, where MM operates in conjunction with other potential technology enablers, such as millimetre wave, heterogeneous networks, and energy harvesting networks. Multiple opportunities and challenges arise in such a 5G architecture because these technologies benefit mutually from each other and their coexistence introduces several new constraints on the design of energy-efficient systems. Despite clear evidence that hybrid MM systems can achieve significantly higher EE gains than conventional MM systems, several open research problems continue to roadblock system designers from fully harnessing the EE gains offered by hybrid MM systems. Our discussions lead to the conclusion that hybrid MM systems offer a sustainable evolution towards 5G networks and are therefore an important research topic for future work.
Our simulations show significant advantages in performance and energy efficiency. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> As a new promising of higher data rates and to enable the Internet of Things (IoT), the thirst of energy efficiency in communication networks has become an important milestone in the design and operation. With the emergence of the 5G of wireless networks and the deployment of billions of base stations to the connected devices, the requirement for system design and energy efficiency management will become more attractive. In addition, in the next era of cellular, the energy efficiency is the most important requirement determined by the needs in reducing the carbon footprint of communications, and also in extending the life of the terminal battery. Nevertheless, the new challenge has emerged especially in the backbone of the networks. Therefore, the aim of this paper is to present the potential of 5G system to meet the increasing needs in devices and explosive capacity without causing any significant energy consumption based on functional split architecture particularly for 5G backhaul. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> Cell switch-off (CSO) is an important approach to reducing energy consumption in cellular networks during off-peak periods. CSO addresses the research question of which cells to switch off when. Whereas online CSO, based on immediate user demands and channel states, is problematic to implement and difficult to model, off-line CSO is more practical and tractable. Furthermore, it is known that regular cell layouts generally provide the best coverage and spectral efficiency, which leads us to prefer regular static (off-line) CSO. We introduce sector-based regular CSO patterns for the first time. 
We organize the existing and newly introduced patterns using a systematic nomenclature; studying 26 patterns in total. We compare these patterns in terms of energy efficiency and the average number of users supported, via a combination of analysis and simulation. We also compare the performance of CSO with two benchmark algorithms. We show that the average number of users can be captured by one parameter. Moreover, we find that the distribution of the number of users is close to Gaussian, with a tractable variance. Our results demonstrate that several patterns that activate only one out of three sectors are particularly beneficial; such CSO patterns have not been studied before. <s> BIB008 | Knowing the accurate energy consumption of a base station constitutes an important part of the understanding of the energy budget of a wireless network. For this purpose, authors in BIB003 have specifically discussed energy conservation at equipment level by presenting the breakdown of a base station. A typical BS has been presented by dividing it into five parts, namely antenna interface, power amplifier, RF chains, Baseband unit, mains power supply and the DC-DC supply. These modules have been shown in Figure 2 . An important claim has been made stating that up to 57% of the power consumption at a base station is experienced at the transmission end, i.e., the power amplifier and antenna interface. Yet, with small cells, the power consumption per base station has been reduced due to shorter distances between the base stations and the users BIB003 BIB004 . In BIB004 , analytical modelling of the energy efficiency for a heterogeneous network comprising upon macro, pico and femto base stations has been discussed. To a certain extent emphasis has been put on the baseband unit which is specifically in charge of the computing operations and must be sophisticated enough to handle huge bursts of traffic. 
A baseband unit has been described as composed of four logical systems: a baseband system for evaluating Fast Fourier Transforms (FFT) and wireless channel coding, a control system for resource allocation, a transfer system for management operations among neighbouring base stations, and finally a system for powering up the entire base station site, including cooling and monitoring. Furthermore, the use of mmWave and massive MIMO would demand an even greater push on the computation side of the base station, since more and more users are being accommodated. The study in discusses the achievable sum rates and energy efficiency of downlink single-cell massive MIMO systems under various precoding schemes, whereas several design constraints and future opportunities concerning existing and upcoming MIMO technologies have been discussed in BIB005 . The computation power of a base station increases with the number of antennas and the bandwidth: with 128 antennas, it can reach 3000 W for a macrocell and 800 W for a small cell according to BIB003 . Authors in BIB007 have discussed the utility of moving most of the baseband processing functionality away from the base station towards a central, more powerful and organized unit, in order to support higher data rates and traffic density. Users are envisioned to experience more flexibility with this centralized RAN, since they would be able to receive signaling from one BS and data through the best possible neighbouring BS. Visible gains in latency and fronthaul bandwidth have been observed with stronger backhaul links, but this research avenue still needs to be formally exploited for devising globally energy-efficient mechanisms. The choice of the best-suited BS would allow the network to operate at a lower transmission power, thus increasing the energy efficiency.
An analysis of throughput as a performance metric has been provided for a two-tier heterogeneous network comprising macro and femto cells in BIB001 . The claimed improvement in throughput originates from a distributed mesh of small cells, so that the minimal transmission distance between the end user and the serving base station pays off in terms of reduced antenna transmission power. Considering these findings on BS energy consumption, cell switch-off techniques have been explored in the literature. An incentive-based sleeping mechanism for densely deployed femtocells has been considered in BIB006 , and energy consumption reductions of up to 40% have been observed by turning the RF chains off and keeping only the backhaul links alive. The key enabler here would be prompt toggling between active and sleep modes to maintain the quality of service. According to BIB006 , a "sniffer" component installed at these small cells would be responsible for detecting activity in the network by checking the power in uplink connections; a value surpassing the threshold would indicate a connection with the macrocell. The Mobility Management Entity (MME) has also been suggested to take the lead by sending wake-up signals to the respective femtocells while keeping the others asleep. In contrast to the usual technique of handing users over to neighbouring base stations and turning the cell off, it would be beneficial to give users incentives to connect to a neighbouring cell if they obtain better data rates there. Authors in BIB008 have conducted a thorough study on the classification of switching techniques as well as the calculation of the outage probability of UEs under realistic constraints. They claim that the energy consumption of a base station is not directly proportional to its load, so an improved switching algorithm is needed that allows UEs to maintain their SINR thresholds.
They have thus brought forward a sector-based switching technique for the first time. Furthermore, they favor an offline switching technique over a more dynamic online scheme because of practical constraints such as random UE distribution and realistic interference modelling. Authors in BIB002 discuss the influence of transmit power scaling and on/off switching on instantaneous macro base station power consumption. The proposed power consumption models have been claimed to serve as generic models of the relationship between transmitted and consumed power for macro base stations of different technologies and generations. In addition to these techniques, machine learning techniques have recently been used to implement cell switch-off; these are discussed in Section 6. |
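The load-dependent base station power behaviour discussed above is commonly approximated in the literature by a linear model with a sleep state. The sketch below follows that common form; all coefficient values (static power, slope, maximum transmit power, sleep power) and the load-threshold switch-off policy are illustrative assumptions, not values or algorithms from the cited works.

```python
def bs_power(load, p_max_tx=20.0, p_static=130.0, slope=4.7, p_sleep=75.0, asleep=False):
    """Illustrative linear base-station power model (watts).

    P = p_static + slope * load * p_max_tx  when active,
    P = p_sleep                             when in sleep mode,
    where load is the normalized traffic load in [0, 1].
    All coefficients are placeholders, not values from the cited papers.
    """
    if asleep:
        return p_sleep
    return p_static + slope * load * p_max_tx


def fleet_power(loads, threshold=0.1):
    """Naive switch-off policy: cells below the load threshold sleep, and
    their traffic is assumed to be absorbed uniformly by the remaining
    active cells (hand-over cost is ignored in this sketch)."""
    active = [l for l in loads if l >= threshold]
    absorbed = sum(l for l in loads if l < threshold)
    extra = absorbed / len(active) if active else 0.0
    total = sum(bs_power(min(l + extra, 1.0)) for l in active)
    total += sum(bs_power(0.0, asleep=True) for l in loads if l < threshold)
    return total
```

With these placeholder numbers, a cell idling at 5% load still draws well over half of its full-load power, which is precisely the non-proportionality between load and consumption that motivates the switch-off schemes discussed above.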
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key-notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple-antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. 
It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Recent trend of network communication is leading towards the innovation of high speed wireless broadband technology. The scheduling of real-time traffic in certain network will give high impact on the system, so the most efficient scheduling is crucial. This paper proposes an energy-efficient resource allocation scheduler with QoS aware support for LTE network. The ultimate aim is to promote and achieve the green wireless LTE network and environmental friendly. Some related works on green LTE networks are also being discussed. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> In this paper, we investigate the interference management problem in a full-duplex cellular network from a spectrum resource allocation perspective. In order to maximize the full-duplex network throughput, we propose an interference area based resource allocation algorithm, which can pair the downlink UE and uplink UE with limited mutual interference. The simulation results verify the efficiency of the proposed interference area based resource allocation algorithm in the investigated full-duplex cellular network. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Ultra-dense networks can further improve the spectrum efficiency (SE) and the energy efficiency (EE). 
However, the interference avoidance and the green design are becoming more complex due to the intrinsic densification and scalability. It is known that the much denser small cells are deployed, the more cooperation opportunities exist among them. In this paper, we characterize the cooperative behaviors in the Nash bargaining cooperative game-theoretic framework, where we maximize the EE performance with a certain sacrifice of SE performance. We first analyze the relationship between the EE and the SE, based on which we formulate the Nash-product EE maximization problem. We achieve the closed-form sub-optimal SE equilibria to maximize the EE performance with and without the minimum SE constraints. We finally propose a CE2MG algorithm, and numerical results verify the improved EE and fairness of the presented CE2MG algorithm compared with the non-cooperative scheme. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Wireless cellular networks have seen dramatic growth in number of mobile users. As a result, data requirements, and hence the base-station power consumption has increased significantly. It in turn adds to the operational expenditures and also causes global warming. The base station power consumption in long-term evolution (LTE) has, therefore, become a major challenge for vendors to stay green and profitable in competitive cellular industry. It necessitates novel methods to devise energy efficient communication in LTE. Importance of the topic has attracted huge research interests worldwide. Energy saving (ES) approaches proposed in the literature can be broadly classified in categories of energy efficient resource allocation, load balancing, carrier aggregation, and bandwidth expansion. Each of these methods has its own pros and cons leading to a tradeoff between ES and other performance metrics resulting into open research questions. 
This paper discusses various ES techniques for the LTE systems and critically analyses their usability through a comprehensive comparative study. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> In this paper, device-to-device (D2D) communication and small cell technology are introduced into cellular networks to form three layers of heterogeneous network (HetNet). The resource allocation problem of D2D users and small cellular users (SCUEs) is studied in this network, and a resource allocation method under satisfying the communication quality of macro cellular users, D2D users and SCUEs is proposed. Firstly, in order to reduce the computational complexity, regional restrictions on macro base station and users are conducted; Then, in order to improve the system throughput, a resource allocation method based on interference control is proposed. The simulation results show that the proposed method can effectively reduce the computational complexity and improve the overall system throughput. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> The Orthogonal Frequency Division Multiplexing (OFDM) has been widely used in the next generation networks. With the increasing of the wireless equipment, the problem of energy consumption for the wireless network has become a big challenge. Power control is the key of the network management, while power allocations and channel assignments have been investigated for maximizing energy efficiency in each cell in the OFDM-based cellular network. The optimal problem of maximizing energy efficiency of networks has been formulated as a non-linear fractional program. The dual decomposition and sub-gradient iteration have been used to solve it. 
Furthermore, a numerical simulation has been proposed to verify the algorithm proposed in this paper. The simulation results show that the maximum energy efficiency in each cell can be obtained. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Abstract Spurred by both economic and environmental concerns, energy efficiency (EE) has now become one of the key pillars for the fifth generation (5G) mobile communication networks. To maximize the downlink EE of the 5G ultra dense network (UDN), we formulate a constrained EE maximization problem and translate it into a convex representation based on the fractional programming theory. To solve this problem, we first adopt a centralized algorithm to reach the optimum based on Dinkelbach’s procedure. To improve the efficiency and reduce the computational complexity, we further propose a distributed iteration resource allocation algorithm based on alternating direction method of multipliers (ADMM). For the proposed distributed algorithm, the local and dual variables are updated by each base station (BS) in parallel and independently, and the global variables are updated through the coordination and information exchange among BSs. Moreover, as the noise may lead to imperfect information exchange among BSs, the global variables update may be subject to failure. To cope with this problem, we propose a robust distributed algorithm, for which the global variable only updates as the information exchange is successful. We prove that this modified robust distributed algorithm converges to the optimal solution of the primal problem almost surely. Simulation results validate our proposed centralized and distributed algorithms. Especially, the proposed robust distributed algorithm can effectively eliminate the impact of noise and converge to the optimal value at the cost of a little increase of computational complexity. 
<s> BIB008 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Energy and spectral efficiencies are key metrics to assess the performance of networks and compare different configurations or techniques. There are many ways to define those metrics, and the performance indicators used in their calculation can also be measured in different ways. Using an LTE-A network, we measure different performance indicators and the metrics' outputs are compared. Modifying the transmitted output power, the bandwidth, and the number of base stations, different network configurations are also compared. As expected, the measurements show that increasing the bandwidth increases the throughput more than it increases the energy consumption. Results clearly show that using inappropriate indicators can be misleading. The power indicator should include all energy consumed and the throughput should be dependent on the traffic, taking into account the idle time of the network, if any. There is a need to include more performance indicators into the metrics, especially those related to quality of service. <s> BIB009 | The advantages of small cell deployment, in terms of increased system capacity and better load balancing capability, have been discussed in the previous sections. Yet, it is important to mention that densification suffers from added system complexity. Therefore, energy efficiency as well as spectral efficiency becomes harder to evaluate. Nash energy efficiency maximization theory has been presented for discussing the relationship between energy and spectral efficiency in BIB004 . The two are inversely related: an increase in one demands a natural decrease in the other, which is usually the case at medium to high transmission powers.
Most of the research conducted in ultra-dense small cell networks has focused on techniques that optimize both energy efficiency (EE) and spectral efficiency (SE). Authors in BIB004 also bring forward the idea of gaining energy efficiency at the cost of spectral efficiency, where the small cells are under the coverage of a macro cell and pose interference issues due to the sharing of bandwidth among them. In such a scenario, all the small cells participate in energy efficiency maximization according to a game-theoretic methodology. The suggested game-theoretic model is distributed and utilizes the Nash product for maximizing cooperative energy efficiency. Analysis of the algorithms shows that although energy efficiency increases with the number of small cells, it saturates after about 200 cells and afterwards experiences only a minor increase. Fractional programming has been used extensively in BIB001 for modelling the energy efficiency ratio for a Point-to-Point (P2P) network as well as for a full-scale communication network using MIMO. EE has been considered as a cost-benefit ratio, and minimum rate constraints have been put in place for modelling real-life scenarios. In addition, fairness in resource allocation has been considered a major factor in the overall energy distribution. These two constraints might tend to increase the power consumption in case the minimum thresholds are set too high. Adding to the use cases of fractional programming, BIB008 laid out a robust distributed algorithm for reducing the adverse effects of computational complexity and noise on resource allocation. Authors in BIB009 have presented an experimental setup for defining the right kind of key performance indicators when measuring either EE or SE. The setup includes a set of UEs, three small BSs, and iperf traffic generated over the User Datagram Protocol (UDP) and the File Transfer Protocol (FTP).
Results have indicated that utilization of a higher bandwidth increases the throughput more than it increases the power consumption, that throughput measurements must incorporate the traffic density, and that the idle power of the equipment needs to be considered in energy consumption calculations. In BIB005 , the use of varying transmission power levels by the aid of custom power levels in a two-tier network has been encouraged for optimizing the needed power in Long Term Evolution (LTE). Intelligent switching of control channels in the DL and tuning of power levels according to the UE's feedback have been envisioned to aid in allocating resource blocks with an optimum power. Authors in BIB002 have discussed the opportunities in the less-explored domain of user scheduling in LTE. 3GPP has no fixed requirement on scheduling, and thus researchers have devised their own mechanisms depending upon their pain points. The authors have proposed associating Quality of Service (QoS) with scheduling to accommodate cell-edge users. Authors in BIB003 have proposed a resource allocation technique for minimizing interference at the UE side. Considering a full-duplex communication setup, a circular interference area for a DL UE is demarcated by the BS based upon a predefined threshold. The resource block for this UE is shared with a UL UE from outside the interference region to keep the mutual interference to a minimal level. Simulation results claim to improve the overall network throughput based on the efficient pairing of UEs, but the throughput might degrade with a large increase in the distance between the paired UEs. A heuristic algorithm presented in BIB006 improves the system throughput using resource reuse in the three-tier architecture while regulating the interference regions of UEs served by either a macro BS, a small BS, or in a D2D way.
Visible gains in throughput have been noted with increased user density, provided users are selected efficiently and a minimum distance is maintained between UEs served in a D2D fashion for stronger link retention. Moreover, in BIB007 , the authors have constructed objective functions for EE maximization and compared a max-min power consumption model against their nonlinear fractional optimization model. Results are promising for a reduction in power consumption through the mutual participation of cells as their number increases. |
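Several of the EE-maximization schemes above cast energy efficiency as a ratio of rate to power and solve it with fractional programming, typically via Dinkelbach's procedure. The sketch below shows that procedure for a single link; the Shannon-rate and linear power models, the channel gain, and the one-dimensional ternary-search inner solver are illustrative assumptions, not the multi-cell formulations of the cited papers.

```python
import math


def dinkelbach_ee(rate, power, p_min=0.0, p_max=1.0, tol=1e-9, max_iter=100):
    """Maximize EE(p) = rate(p) / power(p) over p in [p_min, p_max] using
    Dinkelbach's procedure: repeatedly solve max_p rate(p) - lam * power(p)
    and update lam to the achieved EE until the inner optimum reaches zero."""

    def inner_max(lam):
        # Ternary search; valid because rate(p) - lam*power(p) is concave
        # (hence unimodal) for a concave rate and linear power model.
        lo, hi = p_min, p_max
        for _ in range(200):
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if rate(m1) - lam * power(m1) < rate(m2) - lam * power(m2):
                lo = m1
            else:
                hi = m2
        p = (lo + hi) / 2
        return p, rate(p) - lam * power(p)

    lam = 0.0
    for _ in range(max_iter):
        p, f = inner_max(lam)
        if abs(f) < tol:
            break
        lam = rate(p) / power(p)
    return p, lam  # optimal power and achieved EE


# Illustrative models: Shannon rate with channel gain/noise ratio of 10,
# and a fixed 0.5 W static power on top of the transmit power.
rate = lambda p: math.log2(1 + 10 * p)
power = lambda p: 0.5 + p
p_star, ee_star = dinkelbach_ee(rate, power, p_max=4.0)
```

Note the qualitative behaviour the text describes: the EE-optimal transmit power sits well below the maximum, because beyond a point the logarithmic rate gain no longer justifies the linear power cost.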
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Mobility, resource constraints and unreliable wireless links of mobile P2P networks will cause high data access latency and the communication overhead. Cooperative caching is widely seen as an effective solution to improve the overall system performance in mobile P2P networks. In this paper we present a novel cooperative caching scheme for mobile P2P networks. In our scheme the caching space of each node is divided into three parts: locale caching, cooperative caching and path caching, which respectively store the requested data objects of the nodes, the hot data objects in the networks and the data objects path. We also put forward the cache replacement strategy according to our scheme. Proposed cache replacement strategy not only takes into account the need of the nodes, but also pays attention to collaborative work between nodes. We evaluate the performance of our scheme by using NS-2. The experimental results show that the cache hit ratio is effectively increased and the average hops count is reduced. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Traditional wireless networks mainly rely on macro cell deployments, meanwhile with the advances in forth generation networks, the recent architectures of LTE and LTE-A support Heterogeneous Networks (HetNets) that employ a mix of macro and small cells. Small cells aim at increasing coverage and capacity. Coverage both at cell edges and indoor environments can be significantly improved by relays and small cells. Capacity is inherently limited because of the limited spectrum, and although 4G wireless networks have been able to provide a considerable amount of increase in capacity, it has always been challenging to keep up with the growing user demands. 
In particular, the high volume of traffic resulting from video uploads or downloads is the major reason for the ever growing user demand. In the Internet, content caching at locations closer to the users have been a successful approach to enhance resource utilization. Very recently, content caching within the wireless network has been considered for 4G networks. In this paper, we propose an Integer Linear Programming (ILP)-based energy-efficient content placement approach for small cells. The proposed model, namely minimize Uplink Power and Caching Power (minUPCA), jointly minimizes uplink and caching powers. We compare the performance of minUPCA with a scheme that only aims to minimize uplink power. Our results show that minUPCA provides a compromise between the uplink energy budget of the User Equipment (UE) and the caching energy budget of the Small Cell Base Station (SCBS). <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> The emerging 5G wireless networks will pose extreme requirements such as high throughput and low latency. Caching as a promising technology can effectively decrease latency and provide customized services based on group users behaviour (GUB). In this paper, we carry out the energy efficiency analysis in the cache-enabled hyper cellular networks (HCNs), where the macro cells and small cells (SCs) are deployed heterogeneously with the control and user plane (C/U) split. Benefiting from the assistance of macro cells, a novel access scheme is proposed according to both user interest and fairness of service, where the SCs can turn into semi- sleep mode. Expressions of coverage probability, throughput and energy efficiency (EE) are derived analytically as the functions of key parameters, including the cache ability, search radius and backhaul limitation. 
Numerical results show that the proposed scheme in HCNs can increase the network coverage probability by more than 200% compared with the single- tier networks. The network EE can be improved by 54% than the nearest access scheme, with larger research radius and higher SC cache capacity under lower traffic load. Our performance study provides insights into the efficient use of cache in the 5G software defined networking (SDN). <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Content caching is an efficient technique to reduce delivery latency and system congestion during peak-traffic times by bringing data closer to end users. Existing works consider caching only at higher layers separated from physical layer. In this paper, we study wireless caching networks by taking into account cache capability when designing the signal transmission. In particular, we investigate multi-layer caching and their performance in edge-caching wireless networks where both base station (BS) and users are capable of storing content data in their local cache. Two notable uncoded and coded caching strategies are studied. Firstly, we propose a coded caching strategy that is applied to arbitrary value of cache size. The required backhaul and access rates are given as a function of the BS and user cache size. Secondly, closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Thirdly, the system EE is maximized via precoding vectors design and optimization while satisfying the user request rate. Finally, numerical results are presented to verify the effectiveness of the two caching methods. 
<s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Using a network of cache enabled small cells, traffic during peak hours can be reduced by proactively fetching the content that is most likely to be requested. In this paper, we aim to explore the impact of proactive caching on an important metric for future generation networks, namely, energy efficiency (EE). We argue that, exploiting the spatial repartitions of users in addition to the correlation in their content popularity profiles, can result in considerable improvement of the achievable EE. In this paper, the optimization of EE is decoupled into two related subproblems. The first one addresses the issue of content popularity modeling. While most existing works assume similar popularity profiles for all users, we consider an alternative framework in which, users are clustered according to their popularity profiles. In order to showcase the utility of the proposed clustering, we use a statistical model selection criterion, namely, Akaike information criterion. Using stochastic geometry, we derive a closed-form expression of the achievable EE and we find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartitions of users. After considering a snapshot of the network, we formulate a combinatorial problem that optimizes content placement in order to minimize the transmission power. Numerical results show that the clustering scheme considerably improves the cache hit probability and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. 
<s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> In this paper, we study delay-aware cooperative online content caching with limited caching space and unknown content popularity in dense small cell wireless networks. We propose a Cooperative Online Content cAching algorithm (COCA) that decides in which BS the requested content should be cached with considerations of three important factors: the residual cache space in each small cell basestation (SBS), the number of coordinated connections each SBS establishes with other SBSs, and the number of served users in the coverage area of each SBS. In addition, due to limited storage space in the cache, the proposed COCA algorithm eliminates the least recently used (LRU) contents to free up the space. We compare the delay performance of the proposed COCA algorithm with the existing offline cooperative caching schemes through simulations. Simulation results demonstrate that the proposed COCA algorithm has a better delay performance than the existing offline algorithms. <s> BIB006 | In BIB005 , the idea of proactive caching on small cells based on content popularity has been proposed for improving energy efficiency. Owing to the abundance of small cells, networks are becoming constrained by the overall backhaul link capacity, and much of the load corresponds to repeated transactions of the same requests. Energy efficiency has been evaluated with regard to content placement techniques, and more emphasis has been put on organizing the content based on user locations and constantly fine-tuning the clusters based on the content popularity distribution, instead of spanning the same content across the network. Various topologies are shown in Figure 4 . Energy efficiency has been formulated in relation to the small cell density vector.
A heterogeneous file popularity distribution has been considered, with a popularity vector maintained for every user. Users are grouped into clusters according to the similarity of their interests, and the cached files are chosen based on the average of these popularity vectors. Users are typically allowed to communicate with base stations within a specified distance of their cluster; in case of a cache miss, the content is requested from the core via backhaul links. Replicating the same data across the network tends to sacrifice information diversity, hence a content-based clustering approach has been put forward. Simulations demonstrate that with increased base station density, significant energy efficiency gains are obtained, since the allocation problem is simplified and interference and transmission powers are reduced. A unique approach for addressing the energy efficiency challenge has also been presented, in which the proposed E3 ratio incorporates a cost factor when weighing the number of UEs being served against the power spent by the BS on this operation. Although the cost factor might not have a direct impact on spectral efficiency, it is an important factor for regulating the cost of the entire network. Operators have thus been advised to incorporate the features of edge caching and gigabit X-haul links carefully, striking a fair balance between the cost overhead and the actual need for the feature; otherwise the deployment becomes an overkill that should be strictly avoided. Mathematical analysis for EE maximization presented in BIB004 supports the observation that for low user cache sizes, uncoded schemes should be utilized for faster delivery. A highlight of the work conducted in BIB006 is the assumption of a finite cache memory for a more realistic analysis.
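The clustering of users by popularity profile described above can be sketched as follows. This is an illustrative toy model only: the Dirichlet popularity vectors, the plain k-means routine and the cache size are all invented for the sketch, whereas BIB005 additionally selects the number of clusters with the Akaike information criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means on user popularity profiles (rows of X)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each user to the nearest cluster centre
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# synthetic popularity profiles: 3 latent interest groups over 20 files
F, base = 20, rng.dirichlet(np.ones(20), size=3)
X = np.vstack([rng.dirichlet(200 * base[g]) for g in rng.integers(0, 3, 90)])

labels, centers = kmeans(X, k=3)
# each cluster's cells cache the most popular files of the average profile
cache_size = 5
caches = {j: np.argsort(centers[j])[::-1][:cache_size] for j in range(3)}
hit = np.mean([rng.choice(F, p=X[u]) in caches[labels[u]]
               for u in range(90) for _ in range(20)])
print(f"cache hit probability with clustering: {hit:.2f}")
```

Caching the cluster average rather than a network-wide average is what preserves the information diversity mentioned above: each cluster's cache concentrates on its own group's interests.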
Delay bounds of the online cooperative caching scheme have been derived and compared against offline and random caching schemes. The periodically updated cache promises a tighter user association and aims for the minimum possible latency. The algorithm also aims to accurately cache the data in highest demand as user density increases. The application of cooperative caching to P2P networks has been discussed in BIB001 , where the authors demonstrated the effectiveness of the algorithm through segmentation of the cache memory at the base stations. The cache not only keeps track of the most highly demanded information but also records data paths and newly requested data. Simulations illustrated the usefulness of this optimization technique through a reduced number of hops and lower latency. On the other hand, uplink energy conservation has been considered in the context of dense small cells BIB002 . In BIB003 , an energy efficiency analysis of heterogeneous cache-enabled 5G hyper-cellular networks was performed. Control and user plane separation is considered to aid in devising enhanced access schemes and retaining fairness in service. Furthermore, a base station on-off strategy is taken into account to cut down the costs spent on redundant small cells BIB003 . In that scenario, macro cells act as masters handling mobility, home subscriber and user admission functions, whereas small cells form the slave part of the radio resource management scheme. With the increasing growth of the network infrastructure, irregularities in traffic behavior must be taken into account along with the actual user distribution for a realistic scenario. Caching has been sought as a viable solution for reducing end-to-end latency by storing content at the base stations.
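The LRU eviction that COCA applies under limited cache space (BIB006) can be illustrated with a minimal sketch; the capacity and the `fetch` callback standing in for a backhaul retrieval are hypothetical.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used content store for a small cell BS."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # content_id -> payload, oldest first

    def request(self, content_id, fetch):
        """Return content, fetching over backhaul on a miss and evicting LRU."""
        if content_id in self.store:
            self.store.move_to_end(content_id)       # mark as recently used
            return self.store[content_id], True      # cache hit
        payload = fetch(content_id)                  # cache miss: backhaul fetch
        self.store[content_id] = payload
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)           # evict least recently used
        return payload, False

cache = LRUCache(capacity=2)
for cid in [1, 2, 1, 3, 2]:            # content 2 is evicted before its reuse
    _, hit = cache.request(cid, fetch=lambda c: f"file-{c}")
    print(cid, "hit" if hit else "miss")
```

Only the third request (for content 1) hits; content 2 has already been evicted by the time it is requested again, which is exactly the delay cost that cooperative placement across neighbouring SBSs tries to reduce.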
Small cells typically involve the macro base station in their communication with the UE while in a semi-sleep mode, ensuring that the macro cell is always aware of UE positioning in the network as well as cache memory statistics. The macro cell also ensures that the UE is served by the closest and best possible small cell, turning off the remaining ones to concentrate on a specified area and improve throughput. Content is fetched from a neighbouring base station within a predefined search radius; otherwise, the UE associates with the macro base station to access the needed content. Expressions for the coverage probability, i.e., the probability that the UE attains a signal-to-interference ratio (SIR) above the threshold, as well as for throughput, power consumption and efficiency, have been documented in BIB003 . |
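The coverage probability mentioned above can also be estimated numerically. The sketch below is not the closed-form analysis of BIB003; it assumes a Poisson layout of small cells, Rayleigh fading and a path-loss exponent of 4, with all parameter values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(theta_db, density=1e-4, alpha=4.0, area=2000.0, trials=4000):
    """Monte Carlo estimate of P(SIR >= theta) for a UE at the origin,
    served by its nearest BS, Rayleigh fading, path loss r^-alpha."""
    theta = 10 ** (theta_db / 10)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(density * area ** 2)         # Poisson number of BSs
        if n == 0:
            continue                                 # no BS: not covered
        xy = rng.uniform(-area / 2, area / 2, size=(n, 2))
        r = np.hypot(xy[:, 0], xy[:, 1])
        h = rng.exponential(1.0, size=n)             # Rayleigh fading power gains
        p = h * r ** (-alpha)                        # received powers at the UE
        s = np.argmin(r)                             # serve from the nearest BS
        sir = p[s] / max(p.sum() - p[s], 1e-30)
        covered += sir >= theta
    return covered / trials

for th in (-5, 0, 5):
    print(f"theta = {th:+d} dB -> coverage ≈ {coverage_probability(th):.2f}")
```

As expected, the estimated coverage decreases monotonically as the SIR threshold is raised.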
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> In this paper we evaluate the energy efficiency of a 5G radio access network (RAN) based on LTE technology when comparing two small cell deployment strategies to enhance the RAN capacity. Specifically, we compare densifying a 3-sector macrocell RAN with small cells against first upgrading to a 6-sector macrocell RAN before densifying with small cells. The latter strategy has been used in urban areas by 4G network operators. The energy consumption gain (ECG) is used as a figure of merit in this paper. The radio base station power consumption is estimated by using a realistic power consumption model. Our results show that deploying a small cell overlay in a 3-sector macrocell RAN is more energy efficient than deploying a small cell overlay in a 6-sector macrocell RAN even though the latter uses fewer small cells. Further energy savings can be achieved by implementing an adaptive sectorisation technique. An energy saving of 25% is achieved for 6-sectors when progressively decreasing the number of active sectors from 6 to 1 in accordance with the temporal average traffic load. Irrespective, the 3-sector option with or without incorporating the adaptive sectorisation technique is always more energy efficient. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> Wireless networks have made huge progress over the past three decades. Nevertheless, emerging fifth-generation (5G) networks are under pressure to continue in this direction at an even more rapid pace, at least for the next ten to 20 years. This pressure is exercised by rigid requirements as well as emerging technology trends that are aimed at introducing improvements to the 5G wireless world. 
<s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator specific SLA requirements on the global spectral and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> Recently, Fog-RANs have been introduced as the evolution of Cloud Radio Access Networks (CRAN) for enabling edge computing in 5G systems. By alleviating the fronthaul burden for data transfer, transport delays are expected to be greatly reduced. However, in order to support envisioned 5G real-time and delay-sensitive applications, tailored radio resource and interference management schemes become necessary. 
Therefore, this paper investigates the issues of user scheduling and beamforming for energy efficient Fog-RAN. We formulate the energy efficiency maximization problem, taking into account the local user clustering constraint specific to Fog-RANs. Given the difficulty of this non-convex optimization problem, we propose a strategy where the energy efficient user scheduling is split in two parts: first, we solve an equivalent sum-rate maximization problem, then, the most energy-efficient FogAPs are activated in a greedy manner. To meet the requirement of low computational complexity of FogAPs, local beamforming is performed given fixed user scheduling. Simulation results show that the proposed scheme not only provides similar levels of user rates and fairness, but also largely outperforms the system energy efficiency in comparison with the baseline scheme. <s> BIB004 | Spectrum and physical resource sharing need to be considered for accomplishing the energy efficiency goal of 5G. However, the need to retain service quality with respect to throughput and packet drops must also be addressed. Infrastructure sharing has been gaining considerable traction owing to several factors, for example, the difficulty of acquiring space for new site deployments and the desire to utilize available resources at their full potential while refraining from any new deployment. This section puts together the studies that bring improvements in energy efficiency through mutual sharing of infrastructure. Operators have the flexibility of sharing resources at either a full or partial level, naturally with an emphasis on improved security for their equipment. Additionally, commissioning every site separately leads to higher expenditure and reduces the expected revenues. Projects such as EARTH and GREEN TOUCH detail this avenue and bring forth an expectation of a 1000-fold decrease in energy consumption BIB002 .
For this level of sophisticated resource sharing, complete knowledge of the functionality and capacity of the network entities needs to be available, which may not be possible in practice. However, the avenue of spectrum sharing still invites further discussion and remains a potential pathway towards solving the resource scarcity problem. Details of system-level simulations comparing energy consumption with and without shared infrastructure at different load levels have been documented in BIB002 , where an energy efficiency gain of up to 55% in dense areas has been demonstrated. Other significant advantages of resource sharing include reduced interference, achieved by planning cell deployment in accordance with per-area user demands. These efforts aim to eliminate the problems of both over-provisioning and under-utilization of the deployed network entities. Authors in BIB004 have discussed the application of improved resource allocation in a fog RAN. The suggested idea relies on the observation that a centralized baseband processing unit, while increasing the processing power of the system, remains at risk of receiving outdated measurements from the radio heads because of large transport delays. The suggested algorithm starts by switching off redundant access points to conserve energy, and then adjusts the beam weights to provide the end user with an optimal signal-to-leakage-and-noise ratio. User association is decided centrally, and the scheduling information is then passed on to the fog access points. Following this phase, the proposed greedy algorithm tracks both the global and the local energy efficiency readings and switches off access points that are not needed, until the rising trend of global energy efficiency ceases. Simulations have been carried out using a layout of macro and pico cells, showing about a three-fold increase in the reported Channel State Information (CSI).
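The greedy deactivation loop described for the fog RAN can be sketched under a deliberately simplified model: each access point contributes a fixed rate and a fixed power draw, and a switched-off access point's rate is simply lost. All rates and power figures below are hypothetical, and this is not the beamforming-aware algorithm of BIB004.

```python
import numpy as np

rng = np.random.default_rng(2)

def global_ee(active, rate, p_tx=1.0, p_circuit=5.0):
    """Network energy efficiency: sum rate over total power of active APs."""
    if not active:
        return 0.0
    power = len(active) * (p_tx + p_circuit)
    return sum(rate[a] for a in active) / power

# hypothetical per-AP sum rates (bit/s/Hz) for 8 fog access points
rate = rng.uniform(0.5, 8.0, size=8)
active = set(range(8))

# greedily deactivate the AP whose removal helps EE most; stop once
# no single deactivation improves the global EE any further
while True:
    best = max(active, key=lambda a: global_ee(active - {a}, rate))
    if global_ee(active - {best}, rate) <= global_ee(active, rate):
        break
    active.remove(best)

print("APs kept on:", sorted(active))
print("global EE  :", round(global_ee(active, rate), 3))
```

Under this model the loop ends up pruning the access points whose rate contribution is below the network average, since only those removals raise the sum-rate-per-watt ratio.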
Furthermore, authors in BIB001 have demonstrated the EE gains of a dynamic six-sector BS, capable of operating with anywhere from one to all sectors active, reporting an energy saving of 25% compared to an always-on approach. In BIB003 , a case study of infrastructure sharing between different operators has been presented as well. The service level agreement between the participating operators is defined and handled by multi-objective optimization methods. In such a shared environment, QoS should go hand in hand with fair resource utilization. The authors have specifically considered the case of obeying operator-specific energy and spectral efficiency criteria alongside global spectral and energy efficiency maximization. The most prominent outcomes of this research are the maximization of global energy and spectral efficiency in a shared noise-limited environment, the applicability of the framework to a network shared by any number of operators each serving a different number of users, and the optimal fulfillment of utility targets. A detailed mathematical analysis has been presented for system modelling under noise and interference constraints. The SINR equations used as a starting point were gradually modified by incorporating weighting factors to influence the priorities. The resulting model runs with polynomial complexity and maximizes the given objective function; moreover, upper and lower bounds have been derived. The authors illustrate the application of these mathematical tools with the case of a base station installed in a crowded place, such as an airport or shopping mall, where the site owner is a neutral party and the frequency resources are either pooled or partially granted by one operator to the others.
Firstly, the case of two operators without any global constraints is presented, using the multi-objective problem formulation of the noise-limited scenario. Secondly, the site owner restricts the interference level or the global energy efficiency for both operators, while each of them targets a minimum QoS constraint. Thirdly, three operators are considered under the same conditions as the first case. The work has laid the foundation for establishing the criterion for the energy-spectral trade-off in single- and multi-carrier scenarios. |
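The generalized fractional programming used for the EE maximization in BIB003 is typically solved by Dinkelbach-style iterations. The sketch below applies the classic Dinkelbach algorithm to a single-link rate-over-power ratio with hypothetical gain, noise and circuit-power values, which is far simpler than the multi-operator problem of the paper but shows the iteration itself.

```python
import math

def dinkelbach_ee(g=1e-7, noise=1e-9, p_circ=0.5, p_max=2.0, bw=1e6, tol=1e-9):
    """Dinkelbach iteration for max_p  bw*log2(1 + g*p/noise) / (p_circ + p)."""
    rate = lambda p: bw * math.log2(1 + g * p / noise)
    lam = 0.0
    for _ in range(50):
        # inner problem: argmax_p rate(p) - lam*(p_circ + p), closed form here
        p = bw / (lam * math.log(2)) - noise / g if lam > 0 else p_max
        p = min(max(p, 0.0), p_max)
        new_lam = rate(p) / (p_circ + p)       # updated EE estimate (bit/J)
        if abs(new_lam - lam) < tol * max(lam, 1.0):
            return p, new_lam
        lam = new_lam
    return p, lam

p_opt, ee = dinkelbach_ee()
print(f"optimal power {p_opt:.3f} W, energy efficiency {ee:.3e} bit/J")
```

The iteration converges in a handful of steps; the fixed point of the EE estimate `lam` is exactly the maximum of the fractional objective.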
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> As a promising downlink multiple access scheme for future radio access (FRA), this paper discusses the concept and practical considerations of non-orthogonal multiple access (NOMA) with a successive interference canceller (SIC) at the receiver side. The goal is to clarify the benefits of NOMA over orthogonal multiple access (OMA) such as OFDMA adopted by Long-Term Evolution (LTE). Practical considerations of NOMA, such as multi-user power allocation, signalling overhead, SIC error propagation, performance in high mobility scenarios, and combination with multiple input multiple output (MIMO) are discussed. Using computer simulations, we provide system-level performance of NOMA taking into account practical aspects of the cellular system and some of the key parameters and functionalities of the LTE radio interface such as adaptive modulation and coding (AMC) and frequency-domain scheduling. We show under multiple configurations that the system-level performance achieved by NOMA is higher by more than 30% compared to OMA. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> This paper focuses on resource allocation in energy-cooperation enabled two-tier heterogeneous networks (HetNets) with non-orthogonal multiple access (NOMA), where base stations (BSs) are powered by both renewable energy sources and the conventional grid. Each BS can serve multiple users at the same time and frequency band. To deal with the fluctuation of renewable energy harvesting, we consider that renewable energy can be shared between BSs via the smart grid. In such networks, user association and power control need to be re-designed, since existing approaches are based on OMA. 
Therefore, we formulate a problem to find the optimum user association and power control schemes for maximizing the energy efficiency of the overall network, under quality-of-service constraints. To deal with this problem, we first propose a distributed algorithm to provide the optimal user association solution for the fixed transmit power. Furthermore, a joint user association and power control optimization algorithm is developed to determine the traffic load in energy-cooperation enabled NOMA HetNets, which achieves much higher energy efficiency performance than existing schemes. Our simulation results demonstrate the effectiveness of the proposed algorithm, and show that NOMA can achieve higher energy efficiency performance than OMA in the considered networks. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> Non-orthogonal multiple access (NOMA) has been recently considered as a promising multiple access technique for fifth generation (5G) mobile networks as an enabling technology to meet the demands of low latency, high reliability, massive connectivity, and high throughput. The two dominants types of NOMA are: power-domain and code-domain. The key feature of power-domain NOMA is to allow different users to share the same time, frequency, and code, but with different power levels. In code-domain NOMA, different spread-spectrum codes are assigned to different users and are then multiplexed over the same time-frequency resources. This paper concentrates on power-domain NOMA. In power-domain NOMA, Successive Interference Cancellation (SIC) is employed at the receiver. In this paper, the optimum received uplink power levels using a SIC detector is determined analytically for any number of transmitters. The optimum uplink received power levels using the SIC decoder in NOMA strongly resembles the μ-law encoding used in pulse code modulation (PCM) speech companders. 
<s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> NOMA has been recognized as a highly promising FRA technology to satisfy the requirements of the fifth generation era on high spectral efficiency and massive connectivity. Since the EE has become a growing concern in FRA from both the industrial and societal perspectives, this article discusses the sustainability issues of NOMA. We first thoroughly examine the theoretical power regions of NOMA to show the minimum transmission power with fixed data rate requirement, demonstrating the EE performance advantage of NOMA over orthogonal multiple access. Then we explore the role of energy-aware resource allocation and grant-free transmission in further enhancing the EE performance of NOMA. Based on this exploration, a hybrid NOMA strategy that reaps the joint benefits of resource allocation and grantfree transmission is investigated to simultaneously accomplish high throughput, large connectivity, and low energy cost. Finally, we identify some important and interesting future directions for NOMA designers to follow in the next decade. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> By analytically showing that index coding (IC) is more power efficient than superposition coding (SC) when appropriate caching contents are available for a pair of users, we propose a sub-optimal joint user clustering and power allocation scheme for a single-cell downlink non-orthogonal multiple access network with caching memory at the receivers that alternate between IC and SC. Simulation studies demonstrate that the proposed scheme significantly reduces the transmission power when compared with the benchmark scheme that only allows SC. 
<s> BIB005 | In 5G, attempts have been made to explore the area of non-orthogonal multiple access (NOMA), employing power control to save resources in both the time and frequency domains. This concept is highlighted in Figure 5 . Operators benefit from this technique by serving the maximum number of users within the same frequency band, thus improving spectral efficiency BIB002 . This research area has been active for a while now with the aims of increasing network capacity and improving data rates. Intelligent coordination among the base stations must be in place for maximum utilization of the overall available network energy. This stems from the fact that harvested green energy is mostly volatile, so a constant input source cannot be guaranteed. For this reason, a detailed mathematical model has been presented for the power control of the serviced UEs to minimize interference as much as possible. A comparison of user-association-based genetic algorithms against a fixed transmit power was drawn, and NOMA-based techniques were demonstrated to outperform conventional techniques in EE improvement for larger numbers of nodes. The application was extended to a two-tier RAN with a macro base station covering a region of several pico base stations, powered by both green and conventional energy sources. The proposed mathematical model uses the ratio of the network's data rate to its entire energy consumption as the network utility. Incorporation of improved user association techniques was suggested in BIB001 for improving user throughput and containing errors in NOMA. In BIB003 , the authors presented the mathematical feasibility of successive interference cancellation (SIC) at the receiver side: the strongest signal is decoded first while treating the others as noise, its contribution is cancelled out, and the procedure iterates until all signals are decoded.
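The SIC procedure can be illustrated for a two-user downlink with BPSK signalling; the power split, noise level and modulation below are illustrative choices, not taken from BIB003.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-user downlink power-domain NOMA with BPSK, illustrative powers only:
# the far (weak-channel) user gets most of the power; the near user first
# decodes the far user's signal, subtracts it (SIC), then decodes its own.
p_far, p_near = 0.8, 0.2                      # power split (sums to 1)
bits_far = rng.integers(0, 2, 10_000)
bits_near = rng.integers(0, 2, 10_000)
x = np.sqrt(p_far) * (2 * bits_far - 1) + np.sqrt(p_near) * (2 * bits_near - 1)

noise_std = 0.05
y = x + noise_std * rng.standard_normal(x.size)   # near user's received signal

# SIC at the near user
far_hat = (y > 0).astype(int)                     # step 1: decode far user's bits
residual = y - np.sqrt(p_far) * (2 * far_hat - 1) # step 2: cancel far user's signal
near_hat = (residual > 0).astype(int)             # step 3: decode own bits

print("far-user BER at near user :", np.mean(far_hat != bits_far))
print("near-user BER after SIC   :", np.mean(near_hat != bits_near))
```

With this power split the superposed constellation is well separated, so both decoding stages succeed; shrinking the gap between `p_far` and `p_near` is a quick way to observe the SIC error propagation discussed for practical NOMA receivers.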
With an increase in the number of transmitters at a fixed SINR, a linear relationship has been observed. On the other hand, this formulation might reach a saturation point given the explosive number of IoT devices. In another work, an interesting approach has been taken for a fair comparison of NOMA and a relay-aided multiple access (RAMA) technique, and a simulation was carried out for maximization of the sum rate. It was established via mathematical formulation that the sum rate is an increasing function of the users' transmission power, and for the case of a high data rate demand from the farthest user, NOMA proved to maximize the sum rate. The distance between the users is a key figure: with an increased separation between them, NOMA provides the maximum rates, whereas for smaller separations the relay-based setup provides a good enough sum rate. Authors in BIB004 have endorsed the advantages of non-orthogonal multiple access (NOMA) for future radio access networks. Apart from the fact that the technique aids in achieving better spectral efficiency, the authors have also analyzed the feasibility of acquiring better energy efficiency from it. Considering the example of one base station serving two users, relationships between SE and EE have been observed, reflecting that NOMA can regulate the energy within the network by allocating more bandwidth to a cell-center user in the uplink and more power to the cell-edge user in the downlink. Considering the potential of NOMA, the problem was tackled with respect to its deployment scenario for maximum exploitation. For a single-cell deployment, mapping EE against resource allocation was considered an NP-hard problem because each user competes for the same radio resource; however, user scheduling and multiple access methods would help improve this situation.
For network-level NOMA, a joint transmission technique could be beneficial for organizing the traffic load on the radio links, and when it comes to energy harvesting, users must be scheduled so that those with critical needs remain prioritized. Lastly, grant-free transmission has been studied for saving signaling overhead: as soon as a user has data in its buffer, it starts the uplink transmission, and detection of the received data is based upon its unique multiple access signature. The multiple access signature is deemed the basis of this proposal, but the signature pool must be carefully devised with an optimal tradeoff between pool size and mutual correlation, which greatly helps with collision avoidance and detection. The users otherwise remain inactive, cutting down on grant signaling, and hence more energy is typically conserved. The proposed hybrid technique transitions between grant-free and scheduled NOMA based on the current traffic load, which eventually lowers the collision probability and improves latency. In contrast with the above works that have discussed the use cases of caching in orthogonal multiple access (OMA), authors in BIB005 explored index coding instead of superposition coding, adopting a sub-optimal joint user clustering and power allocation technique for significant reductions in the transmitted power while using NOMA. Owing to the enormous number of users, optimal user clustering was discouraged, and user association based upon differences in link gain and cached data was suggested instead. The iterative power allocation algorithm was demonstrated to converge after several iterations. |
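The two-user downlink rate relationships discussed for BIB004 can be reproduced with a small numerical sketch. The channel-gain-to-noise ratios and the power split below are hypothetical, and perfect SIC at the near user is assumed.

```python
import numpy as np

# Two-user downlink sketch: the near user has a strong channel, the far user
# a weak one (hypothetical gain-to-noise ratios). Under power-domain NOMA
# both share the whole band: the far user gets most of the power and decodes
# treating the near user's signal as noise; the near user applies SIC first.
g_near, g_far = 100.0, 4.0
P, a = 1.0, 0.8                 # power budget and fraction given to far user

r_far = np.log2(1 + a * P * g_far / ((1 - a) * P * g_far + 1))
r_near = np.log2(1 + (1 - a) * P * g_near)       # interference-free after SIC

# orthogonal baseline: equal bandwidth and power split between the two users
o_near = 0.5 * np.log2(1 + P * g_near)
o_far = 0.5 * np.log2(1 + P * g_far)

print(f"NOMA: near {r_near:.2f}, far {r_far:.2f}, sum {r_near + r_far:.2f} bit/s/Hz")
print(f"OMA : near {o_near:.2f}, far {o_far:.2f}, sum {o_near + o_far:.2f} bit/s/Hz")
```

With these numbers NOMA gives the cell-edge user a higher rate than the orthogonal split while also achieving a higher sum rate, which is the gain that the disparity in channel conditions makes possible.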
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient 5G Outdoor-Indoor Communication <s> In this paper, we study the joint resource allocation algorithm design for downlink and uplink multicarrier transmission assisted by a shared user equipment (UE)-side distributed antenna system (SUDAS). The proposed SUDAS simultaneously utilizes licensed frequency bands and unlicensed frequency bands, (e.g. millimeter wave bands), to enable a spatial multiplexing gain for single-antenna UEs to improve energy efficiency and system throughput of $5$-th generation (5G) outdoor-to-indoor communication. The design of the UE selection, the time allocation to uplink and downlink, and the transceiver processing matrix is formulated as a non-convex optimization problem for the maximization of the end-to-end system energy efficiency (bits/Joule). The proposed problem formulation takes into account minimum data rate requirements for delay sensitive UEs and the circuit power consumption of all transceivers. In order to design a tractable resource allocation algorithm, we first show that the optimal transmitter precoding and receiver post-processing matrices jointly diagonalize the end-to-end communication channel for both downlink and uplink communication via SUDAS. Subsequently, the matrix optimization problem is converted to an equivalent scalar optimization problem for multiple parallel channels, which is solved by an asymptotically globally optimal iterative algorithm. Besides, we propose a suboptimal algorithm which finds a locally optimal solution of the non-convex optimization problem. Simulation results illustrate that the proposed resource allocation algorithms for SUDAS achieve a significant performance gain in terms of system energy efficiency and spectral efficiency compared to conventional baseline systems by offering multiple parallel data streams for single-antenna UEs. 
<s> BIB001 | The research in BIB001 discusses a use case of a shared UE-side distributed antenna system for indoor usage, where a combination of distributed antenna and MIMO technology is used to enhance the coverage area, and unlicensed frequencies are utilized to accommodate more users. The simultaneous use of both licensed and unlicensed bands requires a redesign of current resource allocation algorithms BIB001 . In this work, resource allocation has been treated as a non-convex optimization problem for increasing the end-to-end energy efficiency. The suggested topology demands the installation of shared UE-side multiple antenna hardware, called shared UE-side distributed antenna components (SUDACs), between a single-antenna outdoor base station and an arbitrary number of single-antenna indoor UEs. These SUDACs are able to exchange channel information with neighbouring SUDAC units. In contrast to relaying in the LTE-A system, SUDACs can be installed at different locations by the users and still operate in both licensed and unlicensed bands simultaneously. The problem statement boils down to defining the energy efficiency in terms of the bits exchanged between the base station and the UEs via SUDACs per joule of energy. It has been shown in BIB001 that this model exploits the frequency and spatial multiplexing of UEs and increases system efficiency compared to the case when SUDACs are not involved. |
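The bits-per-joule objective used in the SUDAS work can be conveyed with a toy model in which S parallel spatial streams share a fixed power budget while each active unit adds circuit power. All gains and power figures below are invented for illustration and do not reflect the actual optimization of BIB001.

```python
import numpy as np

# Illustrative end-to-end EE (bit/Joule) when `streams` parallel spatial
# streams are created for an indoor UE; every value here is hypothetical.
def system_ee(streams, p_tx=1.0, p_circ_bs=10.0, p_circ_sudac=1.5,
              bw=20e6, snr_per_stream=50.0):
    # the power budget is split evenly across the parallel streams
    rate = streams * bw * np.log2(1 + (p_tx / streams) * snr_per_stream)
    # circuit power grows with the number of active antenna units
    power = p_tx + p_circ_bs + streams * p_circ_sudac
    return rate / power

for s in range(1, 7):
    print(f"{s} streams -> EE = {system_ee(s) / 1e6:.2f} Mbit/J")
```

In this toy model the multiplexing gain outweighs the extra circuit power, so EE grows with the number of streams; with a larger per-unit circuit power the trend would flatten or reverse, which is the tradeoff the paper's resource allocation balances.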
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> This paper focuses on energy efficiency aspects and related benefits of radio-access-network-as-a-service (RANaaS) implementation (using commodity hardware) as architectural evolution of LTE-advanced networks toward 5G infrastructure. RANaaS is a novel concept introduced recently, which enables the partial centralization of RAN functionalities depending on the actual needs as well as on network characteristics. In the view of future definition of 5G systems, this cloud-based design is an important solution in terms of efficient usage of network resources. The aim of this paper is to give a vision of the advantages of the RANaaS, to present its benefits in terms of energy efficiency and to propose a consistent system-level power model as a reference for assessing innovative functionalities toward 5G systems. The incremental benefits through the years are also discussed in perspective, by considering technological evolution of IT platforms and the increasing matching between their capabilities and the need for progressive virtualization of RAN functionalities. The description is complemented by an exemplary evaluation in terms of energy efficiency, analyzing the achievable gains associated with the RANaaS paradigm. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> A number of merits could be brought by network function virtualization (NFV) such as scalability, on demand allocation of resources, and the efficient utilization of network resources. In this paper, we introduce a framework for designing an energy efficient architecture for 5G mobile network function virtualization. 
In the proposed architecture, the main functionalities of the mobile core network which include the packet gateway (P-GW), serving gateway (S-GW), mobility management entity (MME), policy control and charging role function, and the home subscriber server (HSS) functions are virtualized and provisioned on demand. We also virtualize the functions of the base band unit (BBU) of the evolved node B (eNB) and offload them from the mobile radio side. We leverage the capabilities of gigabit passive optical networks (GPON) as the radio access technology to connect the remote radio head (RRH) to new virtualized BBUs. We consider the IP/WDM backbone network and the GPON based access network as the hosts of virtual machines (VMs) where network functions will be implemented. Two cases were investigated; in the first case, we considered virtualization in the IP/WDM network only (since the core network is typically the location that supports virtualization) and in the second case we considered virtualization in both the IP/WDM and GPON access network. Our results indicate that we can achieve energy savings of 22% on average with virtualization in both the IP/WDM network and GPON access network compared to the case where virtualization is only done in the IP/WDM network. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> 5G wireless technology is paving the way to revolutionize future ubiquitous and pervasive networking, wireless applications, and user quality of experience. To realize its potential, 5G must provide considerably higher network capacity, enable massive device connectivity with reduced latency and cost, and achieve considerable energy savings compared to existing wireless technologies. 
The main objective of this article is to explore the potential of NFV in enhancing 5G radio access networks' functional, architectural, and commercial viability, including increased automation, operational agility, and reduced capital expenditure. The ETSI NFV Industry Specification Group has recently published drafts focused on standardization and implementation of NFV. Harnessing the potential of 5G and network functions virtualization, we discuss how NFV can address critical 5G design challenges through service abstraction and virtualized computing, storage, and network resources. We describe NFV implementation with network overlay and SDN technologies. In our discussion, we cover the first steps in understanding the role of NFV in implementing CoMP, D2D communication, and ultra densified networks. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> Several critical benefits are encompassed by the concept of NFV when this concept is brought under the roof of 5G such as scalability, high level of flexibility, efficient utilisation of network resources, cost and power reduction, and on demand allocation of network resources. NFV could reduce the cost for installing and maintaining network equipment through consolidating the hardware resources. By deploying NFV, network resources could be shared between different users and several network functions in a facile and flexible way. Beside this the network resources could be rescaled and allocated to each function in the network. As a result, the NFV can be customised according the precise demands, so that all the network components and users could be handled and accommodated efficiently. In this paper we extend the virtualization framework that was introduced in our previous work to include a large range of virtual machine workloads with the presence of mobile core network virtual machine intra communication. 
In addition, we investigate a wide range of traffic reduction factors which are caused by base band virtual machines (BBUVM) and their effect on the power consumption. We used two general scenarios to group our findings: the first one is virtualization in both IP over WDM (core network) and GPON (access network), while the second one is virtualization only in the IP over WDM network (core network). We illustrate that virtualization in IP over WDM and GPON can achieve power savings of around 16.5%–19.5% for all cases compared to the case where no NFV is deployed, while virtualization in IP over WDM alone records around 13.5%–16.5%. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> Network Function Virtualization (NFV) enables mobile operators to virtualize their network entities as Virtualized Network Functions (VNFs), offering fine-grained on-demand network capabilities. VNFs can be dynamically scaled in/out to meet performance targets and other dynamic behaviors. However, designing the auto-scaling algorithm for desired characteristics with low operation cost and low latency, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a VNF Dynamic Auto Scaling Algorithm (DASA) considering the tradeoff between performance and operation cost. We develop an analytical model to quantify the tradeoff and validate the analysis through extensive simulations. The results show that the DASA can significantly reduce operation cost given the latency upper-bound. Moreover, the models provide a quick way to evaluate the cost-performance tradeoff and system design without wide deployment, which can save cost and time.
<s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> In the cloud computing paradigm, virtual resource autoscaling approaches have been intensively studied in recent years. Those approaches dynamically scale virtual resources in/out to adjust system performance for saving operation cost. However, designing the autoscaling algorithm for desired performance with limited budget, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a Deadline and Budget Constrained Autoscaling (DBCA) algorithm for addressing the budget-performance tradeoff. We develop an analytical model to quantify the tradeoff and cross-validate the model by extensive simulations. The results show that the DBCA can significantly improve system performance given the budget upper-bound. In addition, the model provides a quick way to evaluate the budget-performance tradeoff and system design without wide deployment, saving on cost and time. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> We propose and compare different potential placement schemes for baseband functions and mobile edge computing on their energy efficiency. Simulation results show that NFV-enabled flexible placement reduces power by more than 20% compared to traditional solutions. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> In this paper, network function virtualization (NFV) is identified as a promising key technology that can contribute to energy-efficiency improvement in 5G networks. An optical network supported architecture is proposed and investigated in this work to provide the wired infrastructure needed in 5G networks and to support NFV towards an energy efficient 5G network.
In this architecture the mobile core network functions as well as baseband functions are virtualized and provided as VMs. The impact of the total number of active users in the network, backhaul/fronthaul configurations and VM inter-traffic are investigated. A mixed integer linear programming (MILP) optimization model is developed with the objective of minimizing the total power consumption by optimizing the VMs location and VMs servers' utilization. The MILP model results show that virtualization can result in up to 38% (average 34%) energy saving. The results also reveal how the total number of active users affects the baseband virtual machines (BBUVMs) optimal distribution whilst the core network virtual machines (CNVMs) distribution is affected mainly by the inter-traffic between the VMs. For real-time implementation, two heuristics are developed, an Energy Efficient NFV without CNVMs inter-traffic (EENFVnoITr) heuristic and an Energy Efficient NFV with CNVMs inter-traffic (EENFVwithITr) heuristic, both produce comparable results to the optimal MILP results. Finally, a Genetic algorithm is developed for further verification of the results. <s> BIB008 | Virtualization has been a much sought-after way of reducing the time to market for newer mobile technologies, but with the emerging technological trends it is also a promising way forward for reducing energy consumption. In this case, hardware serves as bare metal for running multiple applications simultaneously, saving the cost of additional deployments of dedicated hardware and software components BIB008 . Most of the functions previously deployed on dedicated hardware would now roll out as software-defined network functions, promising scalability, performance maximization and mobility within the cellular network.
The virtual network architecture described in BIB003 lays out the interconnection between several virtual and physical units that together form a larger system. A generalized 5G architecture incorporating virtualization is illustrated in Figure 6 . The smooth integration of different technologies with the virtualized environment thus becomes the key to reaping the expected efficiency outcomes. Resource and operations management plays a vital role in actively regulating the system towards a fine-tuned state of execution, helping mitigate issues such as redundancy and keeping operating expenses under control. Furthermore, use of an OpenFlow switch would come in handy for efficient packet traversal within the network. A significant advantage in terms of reduced energy consumption, of about 30%, has been reported by incorporating the current architecture with Network Function Virtualization (NFV). The authors assume an ideal-case scenario in which the virtual BBU consumes no energy while idle, and also exploit the enormous computational pool available in the cloud. Authors in BIB002 presented the significant energy conservation advantages of having virtual nodes in both the access and the core network instead of physical nodes each executing only a single function. The proposed topology suggests baseband pooling for higher performance in the cloud, a direct gigabit optical connection from the remote radio heads to the core network and an even distribution of the core network nodes. The nearest available core network node is then responsible for serving the incoming requests from the respective radio heads. The proposed architecture boasts the flexibility of resource distribution by having a single node running multiple virtualized access/core network functions e.g., serving gateway, packet gateway, etc.
and the readiness of activating these functions wherever needed based on the workload. A visible gain of about 22% was recorded using mixed integer linear programming to model the workload across the nodes, with both the core and access networks virtualized. Apart from the EE gains, higher performance is also achieved because of the reduced distance between the node requesting and the node serving the request. Research in BIB004 extends the same idea, where the EE gains are deemed to be higher with an increased number of virtual function deployments in the access network, which typically consumes more energy, about 70% of the entire demand of the end-to-end network. The suggested topology entails gigabit optical connectivity as the fronthaul technology instead of the Common Public Radio Interface (CPRI) connection between radio and baseband units. This opens up more deployment opportunities for the virtual machines by having more active nodes closer to the user. The authors documented a gain of about 19% with the proposed architecture. According to the authors in BIB007 , the existing RAN architecture needs modification to meet the upcoming traffic demands. The baseband unit is decomposed into two main parts, namely a distributed unit and a central unit. Both units find their optimal placements either close to the users for serving low-latency demands or in remote areas for providing a pool of computational power. Mobile edge computing uses the same concept, and NFV proves to be an enabling technology to use it to its full potential. The network layout comprises active antenna units and the central office for edge and access computation. Mobile edge computing units were housed along with the distributed and central units and served as the aggregator for the traffic. The two latter functions were virtualized on general purpose processors, and finally the electronic switch was responsible for the traffic routing.
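The energy-aware function placement that BIB002 and BIB008 formulate as a MILP can be illustrated, in much simplified form, by a greedy heuristic that packs VNF demands onto the cheapest feasible nodes and pays a node's idle power only once it hosts something. Every name, capacity, and wattage below is an illustrative assumption, not a value from the cited papers:

```python
# Hedged, simplified stand-in for the MILP-based VNF placement of
# BIB002/BIB008: greedily place each virtualized function on the feasible
# node with the lowest marginal power. Idle power counts only for nodes
# that host at least one function, so consolidation saves energy.

def place_vnfs(vnf_demands, nodes):
    """Assign each VNF's CPU demand to a node; return (placement, watts).

    vnf_demands: list of (vnf_name, cpu_units)
    nodes: list of dicts with keys 'name', 'capacity', 'idle_w', 'w_per_unit'
    """
    ranked = sorted(nodes, key=lambda n: n["w_per_unit"])  # cheapest first
    used = {n["name"]: 0 for n in nodes}
    placement = {}
    for vnf, cpu in vnf_demands:
        for n in ranked:
            if used[n["name"]] + cpu <= n["capacity"]:
                used[n["name"]] += cpu
                placement[vnf] = n["name"]
                break
        else:
            raise ValueError(f"no capacity left for {vnf}")
    total_w = sum(
        n["idle_w"] + n["w_per_unit"] * used[n["name"]]
        for n in nodes
        if used[n["name"]] > 0
    )
    return placement, total_w
```

A real study would solve the MILP (or run the EENFVnoITr/EENFVwithITr heuristics of BIB008) over the joint IP/WDM and GPON topology; the point here is only the shape of the objective: activate as few, and as cheap, nodes as possible.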
Simulations conducted on this topology revealed about 20% power saving as compared to the case of fixed deployment of hardware units. Moreover, Reference BIB001 also supports the idea of flexible centralization of RAN functions of small cells. Prominent outcomes include interference mitigation in a dense deployment and reduced radio access processing. Authors in BIB005 devised an analytical model for calculating the optimal number of an operator's active resources. The Dynamic Auto Scaling Algorithm, or DASA, was envisioned to provide a way for operators to better understand their cost-performance trade-off, and the authors thus used real-life data from Facebook's data center for a realistic estimation. On top of the already established legacy infrastructure, comprising mainly the mobility management entity, serving gateway, packet gateway and the policy & charging function, 3GPP has now proposed specifications for a virtualized packet core providing on-demand computational resources for catering to the massive number of incoming user requests. A comparison was drawn between the consumed power and the response time of the servers for the jobs in a queue by varying different factors, including the total number of virtual network function (VNF) instances, the total number of servers available, the rate of the incoming jobs, the total system capacity and the virtual machine (VM) setup times. Trends recorded from the plots signify the saturation point of the system and pave a way for operators to optimize their infrastructure to be robust without taking in more power than needed. Similarly, BIB006 extends the above-mentioned approach by taking into account the rejection of incoming requests in case the saturation point has been reached. A more realistic framework was presented that incorporates either dropping jobs from the queue or even blocking them from being registered until some resources can be freed up. |
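The cost-performance trade-off that DASA-style models quantify can be sketched with a textbook M/M/c queue: pick the smallest number of VNF instances whose mean response time meets a latency bound. This is a simplified stand-in for the analytical model in BIB005, whose real version is considerably richer (VM setup times, legacy capacity, job blocking); the arrival and service rates are assumed numbers:

```python
# Erlang-C sketch of the VNF auto-scaling trade-off: more instances lower
# the response time but raise the power bill, which scales with the count.
from math import factorial

def mm_c_response_time(lam, mu, c):
    """Mean response time (s) of an M/M/c queue via the Erlang-C formula."""
    a = lam / mu                      # offered load in Erlangs
    if a >= c:
        return float("inf")           # unstable: queue grows without bound
    erlang_b = (a ** c / factorial(c)) / sum(
        a ** k / factorial(k) for k in range(c + 1)
    )
    p_wait = erlang_b / (1 - (a / c) * (1 - erlang_b))  # Erlang C from Erlang B
    return 1 / mu + p_wait / (c * mu - lam)

def min_instances(lam, mu, latency_bound, c_max=64):
    """Smallest number of VNF instances meeting the latency bound."""
    for c in range(1, c_max + 1):
        if mm_c_response_time(lam, mu, c) <= latency_bound:
            return c
    raise ValueError("latency bound unreachable with c_max instances")
```

Plotting `min_instances` (and hence power) against the arrival rate reproduces the saturation-point trend the survey describes: beyond a certain load, response time explodes unless instances are added.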
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Monitoring and Management in 5G with Integrated Fronthaul and Backhaul <s> Energy efficiency is likely to be the litmus test for the sustainability of upcoming 5G networks. Before the new generation of cellular networks are ready to roll out, their architecture designers are motivated to leverage the SDN technology for the sake of its offered flexibility, scalability, and programmability to achieve the 5G KPI of 10 times lower energy consumption. In this paper, we present Proofs-of-Concept of Energy Management and Monitoring Applications (EMMAs) in the context of three challenging, realistic case studies, along with a SDN/NFV-based MANO architecture to manage converged fronthaul/backhaul 5G transport networks. <s> BIB001 | The impact of software defined networking (SDN) on energy-efficiency was explored in BIB001 . The tremendous increase in user density in a given area not only demands energy-efficient hardware but also certain modifications in the control plane. Energy Management and Monitoring Applications (EMMA) were designed for observing the energy consumption of both the fronthaul and the backhaul network constituents. A monitoring layer was implemented over an SDN controller, which observes the underlying operational domains including mmWave links and analogue Radio over Fiber (RoF) technology. This topology is shown in Figure 7 . The energy management framework was extended to provide analysis of virtual network slices as well, by gathering the real-time power consumption data of a server from a power meter installed with it and then associating it with the respective flows. EMMA is based upon an SDN/NFV integrated transport network using a Beryllium framework and supports features including energy monitoring of the access network and the optimization of power states for the nodes.
Furthermore, an analytics module provides statistics on the traffic consumed by currently ongoing services, a provisioning manager helps set up new network connections, and connections for ongoing sessions are dynamically routed based upon energy-aware routing algorithms. The authors envision EMMA as a fronthaul technology for providing coverage for high-speed trains. It comprises a context information module for collecting mobility data, a statistics module for storing the contextual data and updating it regularly, and lastly a management module for consuming this data and making real-time moves in the network by switching on the nodes as the train approaches and switching them off when it leaves. Significant energy savings ranging between 10 and 60% were demonstrated using real-life data by switching on the nodes exactly when needed and keeping them asleep otherwise BIB001 . |
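The train-aware switching idea above can be reduced to a one-dimensional toy model: a node is powered only while the train is inside its coverage, plus a wake-up lead time. The node positions, coverage radius, speed, and wattage below are hypothetical, not EMMA's measured values:

```python
# Toy version of EMMA-style train-aware switching along a track.

def node_schedule(node_pos, radius, train_speed, lead_time, horizon):
    """(t_on, t_off) for one node; the train starts at x=0 at t=0 (m, m/s, s)."""
    t_on = max(0.0, (node_pos - radius) / train_speed - lead_time)
    t_off = min(horizon, (node_pos + radius) / train_speed)
    return t_on, t_off

def energy_saved(node_positions, radius, speed, lead_time, horizon, node_power_w):
    """Joules saved versus keeping every node on for the whole horizon."""
    always_on_j = len(node_positions) * node_power_w * horizon
    scheduled_j = sum(
        (t_off - t_on) * node_power_w
        for t_on, t_off in (
            node_schedule(p, radius, speed, lead_time, horizon)
            for p in node_positions
        )
    )
    return always_on_j - scheduled_j
```

Even this caricature shows why the reported savings vary so widely (10 to 60%): the saving depends directly on how small each node's on-window is relative to the observation horizon.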
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> A hybrid network architecture has been proposed for machine-to-machine (M2M) communications in fifth generation wireless systems, where M2M gateways connect the capillary networks and cellular networks. In this paper, we develop a novel energy-efficient and end-to-end delay duty cycle control scheme for controllers at the gateway and the capillary networks coordinator. We first formulate a duty cycle control problem with joint optimisation of energy consumption and end-to-end delay. Then, a distributed duty cycle control scheme is proposed. The proposed scheme consists of two parts: (i) a transmission policy, which decides the optimal number of packets to be transmitted between M2M devices, coordinators and gateways; and (ii) a duty cycle control for IEEE 802.15.4. We analytically derive the optimal duty cycle control and develop algorithms to compute the optimal duty cycle. To increase the feasibility of implementing the control on computation-limited devices, a suboptimal low-complexity rollout-algorithm-based duty cycle control (RADutyCon) is proposed. The simulation results show that RADutyCon achieves an exponential reduction of computation complexity as compared with that of the optimal duty cycle control. The simulation results show that RADutyCon performs close to the optimal control, and it performs no worse than the heuristic-based control. Copyright © 2014 John Wiley & Sons, Ltd. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> The explosive growth of mobile multimedia services has caused tremendous network traffic in wireless networks and a great part of the multimedia services are delay-sensitive.
Therefore, it is important to design efficient radio resource allocation algorithms to increase network capacity and guarantee the delay QoS. In this paper, we study the power control problem in the downlink of two-tier femtocell networks with the consideration of delay QoS provisioning. Specifically, we introduce the effective capacity (EC) as the network performance measure instead of the Shannon capacity to provide statistical delay QoS provisioning. Then, the optimization problem is modeled as a non-cooperative game and the existence of Nash Equilibria (NE) is investigated. However, in order to enhance the self-organization capacity of femtocells, based on the non-cooperative game, we employ a Q-learning framework in which all of the femtocell base stations (FBSs) are considered as agents to achieve power allocation. Then a distributed Q-learning-based power control algorithm is proposed to make femtocell users (FUs) gain maximum EC. Numerical results show that the proposed algorithm can not only maintain the delay requirements of the delay-sensitive services, but also has a good convergence performance. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> We study the energy efficiency issue in 5G communications scenarios, where cognitive femtocells coexist with picocells operating at the same frequency bands. Optimal energy-efficient power allocation based on sensing-based spectrum sharing (SBSS) is proposed for the uplink cognitive femto users operating in a multiuser MIMO mode. Both hard-decision and soft-decision schemes are considered for the SBSS. Different from the existing energy-efficient designs in multiuser scenarios, which consider system-wise energy efficiency, we consider user-wise energy efficiency and optimize them in a Pareto sense.
To resolve the nonconvexity of the formulated optimization problem, we include an additional power constraint to convexify the problem without losing global optimality. Simulation results show that the proposed schemes significantly enhance the energy efficiency of the cognitive femto users compared with the existing spectral-efficient designs. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> Next-generation wireless networks are expected to support extremely high data rates and radically new applications, which require a new wireless radio technology paradigm. The challenge is that of assisting the radio in intelligent adaptive learning and decision making, so that the diverse requirements of next-generation wireless networks can be satisfied. Machine learning is one of the most promising artificial intelligence tools, conceived to support smart radio terminals. Future smart 5G mobile terminals are expected to autonomously access the most meritorious spectral bands with the aid of sophisticated spectral efficiency learning and inference, in order to control the transmission power, while relying on energy efficiency learning/inference and simultaneously adjusting the transmission protocols with the aid of quality of service learning/inference. Hence we briefly review the rudimentary concepts of machine learning and propose their employment in the compelling applications of 5G networks, including cognitive radios, massive MIMOs, femto/small cells, heterogeneous networks, smart grid, energy harvesting, device-to-device communications, and so on. Our goal is to assist the readers in refining the motivation, problem formulation, and methodology of powerful machine learning algorithms in the context of future networks in order to tap into hitherto unexplored applications and services.
<s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> The massive deployment of small cells (SCs) represents one of the most promising solutions adopted by 5G cellular networks to meet the foreseen huge traffic demand. The high number of network elements entails a significant increase in the energy consumption. The usage of renewable energies for powering the small cells can help reduce the environmental impact of mobile networks in terms of energy consumption and also save on electric bills. In this paper, we consider a two-tier cellular network architecture where SCs can offload macro base stations and solely rely on energy harvesting and storage. In order to deal with the erratic nature of the energy arrival process, we exploit an ON/OFF switching algorithm, based on reinforcement learning, that autonomously learns energy income and traffic demand patterns. The algorithm is based on distributed multi-agent Q-learning for jointly optimizing the system performance and the self-sustainability of the SCs. We analyze the algorithm by assessing its convergence time, characterizing the obtained ON/OFF policies, and evaluating an offline trained variant. Simulation results demonstrate that our solution is able to increase the energy efficiency of the system with respect to simpler approaches. Moreover, the proposed method provides a harvested energy surplus, which can be used by mobile operators to offer ancillary services to the smart electricity grid. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> Heterogeneous cloud radio access networks (H-CRAN) are a new trend in SC deployment that aims to leverage the advantages of both heterogeneous and cloud radio access networks.
Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service (QoS) requirements, while high power macro base stations (BSs) are deployed for coverage maintenance and support of low QoS users. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such a scheme with model-free learning, we consider users' priority in resource block (RB) allocation and a compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements. <s> BIB006 | Recently, machine learning techniques have been employed in various areas of wireless networks, including approaches to enhance the energy efficiency of the wireless network BIB004 . A typical example would be a smart transmission point, such as the one shown in Figure 8 , which evolves over time through its observations. In BIB005 , the authors proposed switch-on/off policies for energy harvesting small cells through distributed Q-learning. A two-tier network architecture was presented for discussion of on-off switching schemes based upon reinforcement learning. It is assumed that small cells can take over load from their associated macrocell while themselves relying upon harvested energy, for example, solar energy. Application of Q-learning enables them to learn about the incoming traffic requests over time so they can tweak their operation to an optimal level.
The proposed scenario includes a macro cell running on electricity and small cells running on solar energy, with a distributed Q-learning technique being used to gain knowledge about the current radio resource policies. The reward function for the online Q-learning turns off the small cells if users experience higher drop rates, or uses the ones that are already on to take the burden from the macro cell. On the other hand, authors in BIB001 devised a novel EE and E2E delay duty cycle control scheme for controllers at the gateway of cellular and capillary networks. Formulation of a duty cycle control problem with joint optimization of energy consumption and E2E delay was addressed, followed by the distributed duty cycle control scheme. In BIB002 , the authors highlighted a distributed power control for two-tier femtocell networks with QoS provisioning based on Q-learning. Power control in the downlink of the two-tier femtocell network was discussed, and an effective network capacity measure was introduced for incorporating the statistical delay. Self-organization of small cells was also discussed from the perspective of Q-learning and the utilization of non-cooperative game theory BIB002 . The proposed system model involves a macro base station covering several femtocells in its vicinity, each of them serving their own set of users. Expressions for SINR for both macro and femto cell users were also documented BIB002 . For the consumers' energy efficiency, Pareto optimization was opted for, in contrast to the traditional multi-user scenarios that focus on system-level energy efficiency instead. Meanwhile in BIB003 , the deployment of macro and pico base stations was similar to the above scenario. However, the random deployment of femto BSs by consumers causes interference problems, and cognitive radio technology was put together with these femto BSs for improved spectrum access.
Spectrum sensing techniques provide benefits for UL transmission since the femto cells are power limited as compared to the macro cells. Detailed mathematical analysis of spectrum sensing techniques using both hard and soft decisions was demonstrated in BIB003 . The authors formulated objective functions in such a way that, although they compute optimal power allocation for the users, the whole scheme incorporates constraints for energy efficiency maximization. In BIB006 , the authors also use machine learning techniques for energy-efficient resource allocation in 5G heterogeneous cloud radio access networks. Cloud radio access networks are considered a key enabler in the upcoming 5G era by providing higher data rates and lower inter-cell interference. Such a network consists of both small cells and macro base stations, for accommodating more users with superior quality of service and for enhancing the coverage area respectively, where resources are scheduled through a cloud RAN. A resource allocation scheme was put together with the aim of maximizing the energy efficiency of UEs served by the radio heads while minimizing inter-tier interference BIB006 . The available spectrum was divided into two resource blocks and assigned to different UE groups depending upon their location and QoS demands. A central controller interfaced with the baseband unit pool learns about the network state through the interfaced macro base station and then takes the actions needed for energy efficiency optimization. Furthermore, a compact state representation was utilized to enhance the learning process and speed up convergence. The resource block as well as the power allocation with respect to energy saving in the downlink channel of remote radio heads, in accordance with the QoS constraints, has also been documented. Since the given model depends upon prior UE knowledge for it to make transitions for optimization, Q-learning was proposed to practically model the objectives and system specifications.
The resource allocation is mainly carried out at the controller in the BBU pool, with the control signalling conveyed via the X1 and S1 links. The hierarchy of UEs and RRHs operates under the macro base station and conveys their states to the controller. |
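The tabular Q-learning loop that several of these works build on, for example the ON/OFF switching policy of BIB005, can be sketched in a few lines. The states, rewards, and the single-state toy setting below are illustrative assumptions; the cited schemes use richer multi-agent state spaces (battery level, traffic demand) and engineered rewards:

```python
# Minimal tabular Q-learning for a small-cell ON/OFF agent.
import random
from collections import defaultdict

ACTIONS = ("on", "off")

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose_action(Q, state, epsilon, rng=random):
    """Epsilon-greedy policy over the two switching actions."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])
```

In the cited scheme each small cell runs its own agent and the reward trades served traffic against drop rate and harvested-energy level; the update rule itself is the standard one shown here.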
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Challenges and Open Issues <s> The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in 5G cellular networks. While massive MIMO will reduce the transmission power at the expense of higher computational cost, the question remains as to which (computation or transmission power) is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this article is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50 percent of the energy is consumed by the computation power at 5G small cell BSs. Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Challenges and Open Issues <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. 
The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator-specific SLA requirements on the global spectral efficiency and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB002 | In accordance with the increase in the computational demand at the base stations, energy efficiency in the upcoming 5G networks needs to be scaled up by 100-1000 times in contrast with the traditional 4G network BIB001 . Since transmission ranges will be scaled down due to the dense small cell deployment, the energy efficiency evaluation will potentially revolve around the computational side rather than the transmission side as previously. Storage functions for local data caching should also be considered in this evaluation, since they will potentially be common in the forthcoming networks. Scheduling schemes should be enhanced to involve an optimal number of antennas and bandwidth for resource allocation. The trade-off between transmission and computational power should be optimized considering the effects of the kind of transmission technology involved. Software Defined Networking might be a potential fix for this issue, yet it needs further exploration. Moreover, authors in proposed the intermediate delays from source to destination to be incorporated in the energy efficiency formulation for an even more realistic estimation. Most of the ongoing research has been discussing energy efficiency from many different perspectives, but so far a unifying approach has not been reached. The Green Touch project has taken such an initiative, but more exploration is needed for a stronger understanding . With the explosive small cell deployment, the 5G network would be interference limited, so orthogonal transmission techniques might not be practical.
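The computation-versus-transmission argument can be made concrete with a bits-per-joule metric whose denominator includes computation (and caching/overhead) power rather than transmit power alone. The wattages used with such a metric are illustrative placeholders, not measurements from BIB001:

```python
# Back-of-the-envelope energy efficiency for a small-cell BS, splitting
# total power into transmission and computation as BIB001 argues.

def energy_efficiency(throughput_bps, p_transmit_w, p_compute_w, p_overhead_w=0.0):
    """EE = delivered bits / consumed energy = bps / W (bits per joule)."""
    return throughput_bps / (p_transmit_w + p_compute_w + p_overhead_w)

def computation_share(p_transmit_w, p_compute_w, p_overhead_w=0.0):
    """Fraction of total power spent on computation (BIB001 reports >50%)."""
    return p_compute_w / (p_transmit_w + p_compute_w + p_overhead_w)
```

With a hypothetical 100 W of transmit power against 300 W of massive-MIMO baseband processing, computation takes 75% of the budget, which is why optimizing the computational side dominates the EE problem in dense small-cell deployments.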
The framework of sequential fractional programming might be extended to energy-efficiency optimization with affordable complexity, as suggested in BIB002 . Random matrix theory and stochastic geometry appear to be suitable statistical models for evaluating the randomness within wireless networks, but thorough research on energy efficiency employing these tools is still needed. Finally, the avenue of self-learning mechanisms remains relatively unexplored. Since local caching has been considered a potential answer for reducing the load on backhaul networks, novel approaches incorporating this consideration need to be developed. |
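The trade-offs discussed above can be made concrete with a toy energy-efficiency metric that charges computation and caching power to the energy bill alongside transmission, plus a delay-aware variant in the spirit of the delay-incorporating formulation mentioned earlier. This is a minimal sketch only: the function names, power figures, and the linear delay penalty are illustrative assumptions, not a formulation taken from the surveyed works.

```python
# Toy energy-efficiency (EE) metric for a base station where computation
# and caching power matter as much as transmission power.  All names and
# numbers here are hypothetical, for illustration only.

def energy_efficiency(bits, p_tx_w, p_comp_w, p_cache_w, window_s):
    """EE in bits per joule over an observation window of window_s seconds."""
    energy_j = (p_tx_w + p_comp_w + p_cache_w) * window_s
    return bits / energy_j

def delay_aware_ee(bits, p_tx_w, p_comp_w, p_cache_w, window_s,
                   delay_s, delay_budget_s):
    """Discount EE linearly once the source-to-destination delay exceeds
    the budget (an assumed penalty form, chosen only for illustration)."""
    ee = energy_efficiency(bits, p_tx_w, p_comp_w, p_cache_w, window_s)
    penalty = min(1.0, delay_budget_s / delay_s)
    return ee * penalty

# A small cell delivering 100 Mbit in one second, dominated by compute power:
print(energy_efficiency(100e6, p_tx_w=1.0, p_comp_w=5.0, p_cache_w=0.5,
                        window_s=1.0))  # bits per joule
```

Under such a metric, densification shifts the optimization target from `p_tx_w` toward `p_comp_w`, which is exactly the shift the text anticipates for 5G.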
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> Abstract In the discussion on the practical applicability of the collective theory of risk to the insurance field some points have been raised, where it is argued that the conceptions of the theory do not correspond to the conditions prevailing in practice, thus entailing a serious reduction of its working value. Three such points will be considered in this paper. They are usually put forward as follows: 1. The theory assumes constancy in time to hold for the distribution of the amounts at risk falling due, the risk sums. 2. The theory does not take into account that interest is earned on the safeguarding capital of the insurer, the risk reserve. 3. The theory considers the probability that ruin will ever occur to the insurer by exhaustion of the risk reserve. A fairly large part of this probability might be ascribable to the possibility of ruin in a very remote future, whilst the practical insurer is only interested in the probability within a reasonable period of time. <s> BIB001 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider a process with reflection at the origin and paths which are piecewise linear or Brownian, with the drift and variance constants being determined by the state of an underlying finite Markov process; the purely linear case corresponds to fluid flow models of current interest in telecommunications engineering. It is shown that the stationary distribution is phase-type, and various algorithms for computing the phase representation are given, some iterative with each step involving a matrix inversion and some based upon spectral expansion of the phase generator. 
Mathematically, the point of view is Markov additive processes, and some key tools are time-reversal and auxiliary Markov processes obtained by observing the underlying Markov process when the additive component is at a maximum <s> BIB002 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider a risk process with stochastic interest rate, and show that the probability of eventual ruin and the Laplace transform of the time of ruin can be found by solving certain boundary value problems involving integro-differential equations. These equations are then solved for a number of special cases. We also show that a sequence of such processes converges weakly towards a diffusion process, and analyze the above-mentioned ruin quantities for the limit process in some detail. <s> BIB003 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider spectrally negative Levy process and determine the joint Laplace transform of the exit time and exit position from an interval containing the origin of the process reflected in its supremum. In the literature of fluid models, this stopping time can be identified as the time to buffer-overflow. The Laplace transform is determined in terms of the scale functions that appear in the two-sided exit problem of the given Levy process. The obtained results together with existing results on two sided exit problems are applied to solving optimal stopping problems associated with the pricing of Russian options and their Canadized versions. 
<s> BIB004 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider first passage times for piecewise exponential Markov processes that may be viewed as Ornstein-Uhlenbeck processes driven by compound Poisson processes. We allow for two-sided jumps and as a main result we derive the joint Laplace transform of the first passage time of a lower level and the resulting undershoot when passage happens as a consequence of a downward (negative) jump. The Laplace transform is determined using complex contour integrals and we illustrate how the choice of contours depends in a crucial manner on the particular form of the negative jump part, which is allowed to belong to a dense class of probabilities. We give extensions of the main result to two-sided exit problems where the negative jumps are as before but now it is also required that the positive jumps have a distribution of the same type. Further, extensions are given for the case where the driving Levy process is the sum of a compound Poisson process and an independent Brownian motion. Examples are used to illustrate the theoretical results and include the numerical evaluation of some concrete exit probabilities. Also, some of the examples show that for specific values of the model parameters it is possible to obtain closed form expressions for the Laplace transform, as is the case when residue calculus may be used for evaluating the relevant contour integrals. <s> BIB005 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We provide a unified analytical treatment of first passage problems under an affine state-dependent jump-diffusion model (with drift and volatility depending linearly on the state). 
Our proposed model, which generalizes several previously studied cases, may be used for example for obtaining probabilities of ruin in the presence of interest rates under the rational investment strategies proposed by Berk & Green (2004). <s> BIB006 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> This survey treats the problem of ruin in a risk model when assets earn investment income. In addition to a general presentation of the problem, topics covered are a presentation of the relevant integro-differential equations, exact and numerical solutions, asymptotic results, bounds on the ruin probability and also the possibility of minimizing the ruin probability by investment and possibly reinsurance control. The main emphasis is on continuous time models, but discrete time models are also covered. A fairly extensive list of references is provided, particularly of papers published after 1998. For more references to papers published before that, the reader can consult [47]. <s> BIB007 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> In this paper we develop a symbolic technique to obtain asymptotic expressions for ruin probabilities and discounted penalty functions in renewal insurance risk models when the premium income depends on the present surplus of the insurance portfolio. The analysis is based on boundary problems for linear ordinary differential equations with variable coefficients. The algebraic structure of the Green's operators allows us to develop an intuitive way of tackling the asymptotic behavior of the solutions, leading to exponential-type expansions and Cram\'er-type asymptotics. Furthermore, we obtain closed-form solutions for more specific cases of premium functions in the compound Poisson risk model.
<s> BIB008 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> This paper concerns an optimal dividend distribution problem for an insurance company whose risk process evolves as a spectrally negative L\'{e}vy process (in the absence of dividend payments). The management of the company is assumed to control timing and size of dividend payments. The objective is to maximize the sum of the expected cumulative discounted dividend payments received until the moment of ruin and a penalty payment at the moment of ruin, which is an increasing function of the size of the shortfall at ruin; in addition, there may be a fixed cost for taking out dividends. A complete solution is presented to the corresponding stochastic control problem. It is established that the value-function is the unique stochastic solution and the pointwise smallest stochastic supersolution of the associated HJB equation. Furthermore, a necessary and sufficient condition is identified for optimality of a single dividend-band strategy, in terms of a particular Gerber-Shiu function. A number of concrete examples are analyzed. <s> BIB009 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> Abstract This paper solves exit problems for spectrally negative Markov additive processes and their reflections. So-called scale matrix, which is a generalization of the scale function of a spectrally negative Levy process, plays the central role in the study of the exit problems. Existence of the scale matrix was shown by Kyprianou and Palmowski (2008) [32, Thm. 3] . We provide the probabilistic construction of the scale matrix, and identify its transform. In addition, we generalize to the MAP setting the relation between the scale function and the excursion (height) measure. 
The main technique is based on the occupation density formula and even in the context of fluctuations of spectrally negative Levy processes this idea seems to be new. Our representation of the scale matrix W(x) = e^{−Λx} L(x) in terms of nice probabilistic objects opens up possibilities for further investigation of its properties. <s> BIB010 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> It is often natural to consider defective or killed stochastic processes. Various observations continue to hold true for this wider class of processes yielding more general results in a transparent way without additional effort. We illustrate this point with an example from risk theory by showing that the ruin probability for a defective risk process can be seen as a triple transform of various quantities of interest on the event of ruin. In particular, this observation is used to identify the triple transform in a simple way when either claims or interarrivals are exponential. We also show how to extend these results to modulated risk processes, where exponential distributions are replaced by phase-type distributions. In addition, we review and streamline some basic exit identities for defective Levy and Markov additive processes. <s> BIB011 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> This paper concerns an optimal dividend distribution problem for an insurance company with surplus-dependent premium. In the absence of dividend payments, such a risk process is a particular case of so-called piecewise deterministic Markov processes. The control mechanism chooses the size of dividend payments.
The objective consists in maximizing the sum of the expected cumulative discounted dividend payments received until the time of ruin and a penalty payment at the time of ruin, which is an increasing function of the size of the shortfall at ruin. A complete solution is presented to the corresponding stochastic control problem. We identify the associated Hamilton-Jacobi-Bellman equation and find necessary and sufficient conditions for optimality of a single dividend-band strategy, in terms of particular Gerber-Shiu functions. A number of concrete examples are analyzed. <s> BIB012 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> As well known, all functionals of a Markov process may be expressed in terms of the generator operator, modulo some analytic work. In the case of spectrally negative Markov processes however, it is conjectured that everything can be expressed in a more direct way using the $W$ scale function which intervenes in the two-sided first passage problem, modulo performing various integrals. This conjecture arises from work on Levy processes \cite{AKP,Pispot,APP,Iva,IP, ivanovs2013potential,AIZ,APY}, where the $W$ scale function has explicit Laplace transform, and is therefore easily computable; furthermore it was found in the papers above that a second scale function $Z$ introduced in \cite{AKP} greatly simplifies first passage laws, especially for reflected processes. This paper gathers a collection of first passage formulas for spectrally negative Parisian L\'evy processes, expressed in terms of $W,Z$ which may serve as an "instruction kit" for computing quantities of interest in applications, for example in risk theory and mathematical finance.
To illustrate the usefulness of our list, we construct a new index for the valuation of financial companies modeled by spectrally negative L\'evy processes, based on a Dickson-Waters modification of the de Finetti optimal expected discounted dividends objective. We offer as well an index for the valuation of conglomerates of financial companies. An implicit question arising is to investigate analog results for other classes of spectrally negative Markovian processes. <s> BIB013 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> Drawdown (resp. drawup) of a stochastic process, also referred to as the reflected process at its supremum (resp. infimum), has wide applications in many areas including financial risk management, actuarial mathematics and statistics. In this paper, for general time-homogeneous Markov processes, we study the joint law of the first passage time of the drawdown (resp. drawup) process, its overshoot, and the maximum of the underlying process at this first passage time. By using short-time pathwise analysis, under some mild regularity conditions, the joint law of the three drawdown quantities is shown to be the unique solution to an integral equation which is expressed in terms of fundamental two-sided exit quantities of the underlying process.
<s> BIB014 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> First passage problems for spectrally negative L\'evy processes with possible absorbtion or/and reflection at boundaries have been widely applied in mathematical finance, risk, queueing, and inventory/storage theory. Historically, such problems were tackled by taking Laplace transform of the associated Kolmogorov integro-differential equations involving the generator operator. In the last years there appeared an alternative approach based on the solution of two fundamental"two-sided exit"problems from an interval (TSE). A spectrally one-sided process will exit smoothly on one side on an interval, and the solution is simply expressed in terms of a"scale function"$W$ (Bertoin 1997). The non-smooth two-sided exit (or ruin) problem suggests introducing a second scale function $Z$ (Avram, Kyprianou and Pistorius 2004). Since many other problems can be reduced to TSE, researchers produced in the last years a kit of formulas expressed in terms of the"$W,Z$ alphabet"for a great variety of first passage problems. We collect here our favorite recipes from this kit, including a recent one (94) which generalizes the classic De Finetti dividend problem. One interesting use of the kit is for recognizing relationships between apparently unrelated problems -- see Lemma 3. Last but not least, it turned out recently that once the classic $W,Z$ are replaced with appropriate generalizations, the classic formulas for (absorbed/ reflected) L\'evy processes continue to hold for: a) spectrally negative Markov additive processes (Ivanovs and Palmowski 2012), b) spectrally negative L\'evy processes with Poissonian Parisian absorbtion or/and reflection (Avram, Perez and Yamazaki 2017, Avram Zhou 2017), or with Omega killing (Li and Palmowski 2017). 
<s> BIB015 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> The first motivation of our paper is to explore further the idea that, in risk control problems, it may be profitable to base decisions both on the position of the underlying process Xt and on its ... <s> BIB016 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> As is well-known, the benefit of restricting Levy processes without positive jumps is the “ W , Z scale functions paradigm”, by which the knowledge of the scale functions W , Z extends immediately to other risk control problems. The same is true largely for strong Markov processes X t , with the notable distinctions that (a) it is more convenient to use as “basis” differential exit functions ν , δ , and that (b) it is not yet known how to compute ν , δ or W , Z beyond the Levy, diffusion, and a few other cases. The unifying framework outlined in this paper suggests, however, via an example that the spectrally negative Markov and Levy cases are very similar (except for the level of work involved in computing the basic functions ν , δ ). We illustrate the potential of the unified framework by introducing a new objective (33) for the optimization of dividends, inspired by the de Finetti problem of maximizing expected discounted cumulative dividends until ruin, where we replace ruin with an optimally chosen Azema-Yor/generalized draw-down/regret/trailing stopping time. This is defined as a hitting time of the “draw-down” process Y t = sup 0 ≤ s ≤ t X s − X t obtained by reflecting X t at its maximum. This new variational problem has been solved in a parallel paper. 
<s> BIB017 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> In this paper, we investigate the reflected CIR process with two-sided jumps to capture the jump behavior and its non-negativeness. Applying the method of (complex) contour integrals, the closed-form solution to the joint Laplace transform of the first passage time crossing a lower level and the corresponding undershoot is derived. We further extend our arguments to the exit problem from a finite interval and obtain joint Laplace transforms. Our results are expressed in terms of the real and imaginary parts of complex functions by complex matrix. Numerical results are included. <s> BIB018 | Introduction. The Segerdahl-Tichy Process Segerdahl (1955), characterized by exponential claims and state dependent drift, has drawn a considerable amount of interest-see, for example, BIB006 ; BIB008 ; BIB012 , due to its economic interest (it is the simplest risk process which takes into account the effect of interest rates-see the excellent overview (Albrecher and Asmussen 2010, Chapter 8)). It is also the simplest non-Lévy, non-diffusion example of a spectrally negative Markov risk model. Note that for both spectrally negative Lévy and diffusion processes, first passage theories which are based on identifying two "basic" monotone harmonic functions/martingales have been developed. This means that for these processes many control problems involving dividends, capital injections, etc., may be solved explicitly once the two basic functions have been obtained. Furthermore, extensions to general spectrally negative Markov processes are possible BIB014 ; BIB016 ; BIB017 . Unfortunately, methods for computing the basic functions are still lacking outside the Lévy and diffusion classes.
This divergence between theory and computation is strikingly illustrated by the Segerdahl process, for which there exist today six theoretical approaches, but for which almost nothing has been computed, with the exception of the ruin probability BIB003 . Below, we review four of these methods (which apply also to certain generalizations provided in BIB006 ), with the purpose of drawing attention to connections between them, to underline open problems, and to stimulate further work. Spectrally negative Markov processes with constant jump intensity. To set the stage for our topic and future research, consider a spectrally negative jump diffusion on a filtered probability space (Ω, {F_t}_{t≥0}, P), which satisfies the SDE

dX_t = c(X_t) dt + σ(X_t) dB_t − d(∑_{i=1}^{N_λ(t)} C_i), X_0 = x, (1)

and is absorbed or reflected when leaving the half line (0, ∞). Here, B_t is standard Brownian motion, σ(x) > 0, c(x) > 0, ∀x > 0, N_λ(t) is a Poisson process of intensity λ, and C_i are nonnegative random variables with distribution measure F_C(dz) and finite mean. The functions c(x), a(x) := σ²(x)/2 and Π(dz) = λF_C(dz) are referred to as the Lévy-Khinchine characteristics of X_t. Note that we assume that all jumps go in the same direction and have constant intensity so that we can take advantage of potential simplifications of the first passage theory in this case. The Segerdahl-Tichy process is the simplest example outside the spectrally negative Lévy and diffusion classes. It is obtained by assuming a(x) = 0 in (1), and C_k to be exponential i.i.d. random variables with density f(x) = µe^{−µx} (see BIB001 for the case c(x) = c + rx, r > 0, c ≥ 0, and for nonlinear c(x)). Note that, for the case c(x) = c + rx, an explicit computation of the ruin probability has been provided (with some typos) in BIB003 . See also BIB007 and see (Albrecher and Asmussen 2010, Chapter 8) for further information on risk processes with state dependent drift, and in particular the two pages of historical notes and references.
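The first passage behaviour of the Segerdahl-Tichy process is easy to explore by simulation, since between claims the reserve follows the deterministic flow of c(x) = c + rx. The sketch below is a naive Monte Carlo estimate of the eventual ruin probability; the truncation level, the stopping heuristics, and all parameter values are illustrative assumptions, not taken from the works reviewed here.

```python
import numpy as np

# Naive Monte Carlo for the Segerdahl-Tichy process: premium rate
# c(x) = c + r*x, claims ~ Exp(mu) arriving at Poisson rate lam, no
# Brownian component.  Between claims the reserve follows the flow
#   x(t) = (x0 + c/r) * exp(r*t) - c/r.
# 'survival_level' and 'max_claims' are truncation heuristics of this
# sketch (ruin from a high reserve is negligible), not model features.

def ruin_probability_mc(x0, c, r, lam, mu, n_paths=10000,
                        survival_level=50.0, max_claims=10000, seed=1):
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        x = x0
        for _ in range(max_claims):
            t = rng.exponential(1.0 / lam)           # inter-claim time
            x = (x + c / r) * np.exp(r * t) - c / r  # deterministic growth
            x -= rng.exponential(1.0 / mu)           # claim size
            if x < 0:
                ruined += 1
                break
            if x > survival_level:
                break                                # count as survival
    return ruined / n_paths

print(ruin_probability_mc(x0=1.0, c=1.5, r=0.1, lam=1.0, mu=1.0))
```

Such simulations are useful as sanity checks against the (quasi-)explicit formulas discussed in the following sections.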
First passage theory concerns the first passage times above and below fixed levels. For any process (X_t)_{t≥0}, these are defined by

T_{b,+} = T^X_{b,+} = inf{t ≥ 0 : X_t > b}, T_{a,−} = T^X_{a,−} = inf{t ≥ 0 : X_t < a}, (2)

with inf ∅ = +∞, and the upper script X typically omitted. Since a is typically fixed below, we will write for simplicity T instead of T_{a,−}. First passage times are important in the control of reserves/risk processes. The rough idea is that when below low levels a, reserves processes should be replenished at some cost, and when above high levels b, they should be partly invested to yield income-see, for example, the comprehensive textbook Albrecher and Asmussen (2010). The most important first passage functions are the solutions of the two-sided upward and downward exit problems from a bounded interval [a, b]:

Ψ̄_q(x, a, b) := P_x[T_{b,+} < min(T_{a,−}, e_q)], Ψ_q(x, a, b) := P_x[T_{a,−} < min(T_{b,+}, e_q)], (3)

where e_q is an independent exponential random variable of rate q. We will call them (killed) survival and ruin probabilities, respectively (see Ivanovs (2013) for a nice exposition of killing), but the qualifier killed will usually be dropped below. The absence of killing will be indicated by omitting the subindex q. Note that in the context of potential theory, (3) are called equilibrium potentials (of the capacitors {b, a} and {a, b}). Beyond ruin probabilities: scale functions, dividends, capital gains, etc. Recall that for "completely asymmetric Lévy" processes, with jumps going all in the same direction, a large variety of first passage problems may be reduced to the computation of the two monotone "scale functions" W_q, Z_q -see, for example, BIB004 ; BIB009 ; BIB011 ; Palmowski (2012); Albrecher et al. (2016); BIB013 , and see BIB015 for a recent compilation of more than 20 laws expressed in terms of W_q, Z_q.
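In the Cramér-Lundberg special case (constant premium c, exponential claims), the scale function W_q is explicitly computable: the Laplace exponent ψ(s) = cs − λs/(s + µ) is rational, so partial fractions of 1/(ψ(s) − q) give W_q(x) = Σ_j e^{s_j x}/ψ′(s_j) over the simple roots s_j of ψ(s) = q. A numerical sketch (parameter values are illustrative):

```python
import numpy as np

# Scale function W_q for the Cramer-Lundberg process X_t = c*t - S_t,
# S_t compound Poisson with Exp(mu) claims at rate lam.  The Laplace
# exponent is psi(s) = c*s - lam*s/(s + mu); since it is rational,
# partial fractions of 1/(psi(s) - q) yield
#   W_q(x) = sum_j exp(s_j * x) / psi'(s_j)
# over the simple roots s_j of psi(s) = q.

c, lam, mu, q = 1.5, 1.0, 1.0, 0.0   # illustrative parameters, c > lam/mu

# psi(s) = q  <=>  c*s*(s + mu) - lam*s - q*(s + mu) = 0 (quadratic in s)
roots = np.roots([c, c * mu - lam - q, -q * mu])

def psi_prime(s):
    return c - lam * mu / (s + mu) ** 2

def W_q(x):
    return float(np.real(sum(np.exp(s * x) / psi_prime(s) for s in roots)))

# Sanity check for q = 0: the survival probability on the half line,
# phi(x) = psi'(0+) * W_0(x), must match the classical closed form.
x = 2.0
phi_scale = (c - lam / mu) * W_q(x)
phi_exact = 1.0 - (lam / (c * mu)) * np.exp(-(mu - lam / c) * x)
```

The same partial-fraction recipe works for any phase-type claim distribution, with the quadratic replaced by a higher-order polynomial.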
For example, for spectrally negative Lévy processes, the Laplace transform/killed survival probability has a well known simple factorization 2 :

Ψ̄_q(x, a, b) = W_q(x − a) / W_q(b − a). (4)

For a second example, the De-Finetti de Finetti (1957) discounted dividends fixed barrier objective for spectrally negative Lévy processes has a simple expression in terms of either the W_q scale function or of its logarithmic derivative ν_q = W_q′/W_q 3 :

V(x, b) = W_q(x) / W_q′(b), 0 ≤ x ≤ b. (5)

Maximizing over the reflecting barrier b is simply achieved by finding the roots of

W_q′′(b) = 0. (6)

W, Z formulas for first passage problems for spectrally negative Markov processes. Since results for spectrally negative Lévy processes often require not much more than the strong Markov property, it is natural to attempt to extend them to the spectrally negative strong Markov case. As expected, everything worked out almost smoothly for "Lévy-type cases" like random walks , Markov additive processes BIB010 , etc. Recently, it was discovered that W, Z formulas continue to hold a priori for spectrally negative Markov processes BIB014 . The main difference is that in equations like Equation (4), W_q(x − a) and the second scale function Z_q,θ(x − a) BIB009 ; BIB010 must be replaced by two-variable functions W_q(x, a), Z_q,θ(x, a) (which reduce in the Lévy case to W_q(x, y) = W_q(x − y), with W_q being the scale function of the Lévy process). This unifying structure has led to recent progress for the optimal dividends problem for spectrally negative Markov processes (see BIB016 ). However, since the computation of the two-variable scale functions is currently well understood only for spectrally negative Lévy processes and diffusions, AG could provide no example outside these classes. In fact, as of today, we are not aware of any explicit or numeric results on the control of the process (1) which have succeeded in exploiting the W, Z formalism. Literature review. Several approaches may allow handling particular cases of spectrally negative Markov processes: 1.
with phase-type jumps, there is Asmussen's embedding into a regime-switching diffusion BIB002 -see Section 5, and the complex integral representations of BIB005 , BIB018 ; 2. for Lévy-driven Langevin-type processes, renewal equations have been provided in Czarna et al. (2017) -see Section 2; 3. for processes with affine operator, an explicit integrating factor for the Laplace transform may be found in BIB006 -see Section 3. |
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> We consider a risk process with stochastic interest rate, and show that the probability of eventual ruin and the Laplace transform of the time of ruin can be found by solving certain boundary value problems involving integro-differential equations. These equations are then solved for a number of special cases. We also show that a sequence of such processes converges weakly towards a diffusion process, and analyze the above-mentioned ruin quantities for the limit process in some detail. <s> BIB001 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> We provide a unified analytical treatment of first passage problems under an affine state-dependent jump-diffusion model (with drift and volatility depending linearly on the state). Our proposed model, which generalizes several previously studied cases, may be used for example for obtaining probabilities of ruin in the presence of interest rates under the rational investment strategies proposed by Berk & Green (2004). <s> BIB002 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> Levy Processes and Applications.- The Levy-Ito Decomposition and Path Structure.- More Distributional and Path-Related Properties.- General Storage Models and Paths of Bounded Variation.- Subordinators at First Passage and Renewal Measures.- The Wiener-Hopf Factorisation.- Levy Processes at First Passage.- Exit Problems for Spectrally Negative Processes.- More on Scale Functions.- Ruin Problems and Gerber-Shiu Theory.- Applications to Optimal Stopping Problems.- Continuous-State Branching Processes.- Positive Self-similar Markov Processes.- Epilogue.- Hints for Exercises.- References.- Index.
<s> BIB003 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> The first motivation of our paper is to explore further the idea that, in risk control problems, it may be profitable to base decisions both on the position of the underlying process Xt and on its ... <s> BIB004 | The fact that the survival probability has the multiplicative structure (4) is equivalent to the absence of positive jumps, by the strong Markov property; this is the famous "gambler's winning" formula BIB003 . Note that ν_q may be more useful than W_q in the spectrally negative Markov framework BIB004 . 4. for the Segerdahl process, the direct IDE solving approach is successful BIB001 -see Section 4. We will emphasize here the third approach, but use also the second to show how the third approach fits within it. The direct IDE solving approach is recalled for comparison, and Asmussen's approach is also recalled, for its generality. Here is an example of an important problem we would like to solve: Problem 1. Find the de Finetti optimal barrier for the Segerdahl-Tichy process, extending Equations (5) and (6). Contents. Section 2 reviews the recent approach based on renewal equations due to Czarna et al. (2017) (which still needs to be justified for increasing premiums satisfying (8)). An important renewal (Equation (11)) for the "scale derivative" w is recalled here, and a new result relating the scale derivative to its integrating factor (16) is offered-see Theorem 1. Section 3 reviews older computations of BIB002 for more general processes with affine operator, and provides explicit formulas for the Laplace transforms of the survival and ruin probability (24), in terms of the same integrating factor (16) and its antiderivative. Section 4 reviews the direct classic Kolmogorov approach for solving first passage problems with phase-type jumps.
The discounted ruin probability (q > 0) may be found explicitly (33) for the Segerdahl process by transforming the renewal equation (29) into the ODE (30), which is hypergeometric of order 2. This result, due to Paulsen, has discouraged further research for more general mixed exponential jumps, since it seems to require a separate "look-up" of hypergeometric solutions for each particular problem. Section 5 reviews Asmussen's approach for solving first passage problems with phase-type jumps, and illustrates the simple structure of the survival and ruin probability of the Segerdahl-Tichy process, in terms of the scale derivative w. This approach yields quasi-explicit results when q = 0. Section 6 checks that our integrating factor approach recovers various results for Segerdahl's process when q = 0 or x = 0. Section 7 reviews necessary hypergeometric identities. Finally, Section 8 outlines further promising directions of research. |
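For q = 0 and exponential claims, the Segerdahl survival probability is quasi-explicit: differentiating Kolmogorov's integro-differential equation once shows that the derivative of the survival probability is proportional to e^{−µx}(c + rx)^{λ/r−1}, and the two boundary conditions cφ′(0) = λφ(0) and φ(∞) = 1 fix the constants. The sketch below evaluates this by quadrature; parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Survival probability phi(x) for the Segerdahl process (q = 0):
# premium c(x) = c + r*x, Exp(mu) claims at Poisson rate lam.
# Differentiating Kolmogorov's equation once gives
#   phi'(x) = K * exp(-mu*x) * (c + r*x)**(lam/r - 1),
# with K fixed by c*phi'(0) = lam*phi(0) and phi(inf) = 1.
# The integrand is evaluated in log form for numerical stability.

def survival_probability(x, c, r, lam, mu):
    def ratio(t):  # phi'(t) divided by lam*phi(0)/c
        return np.exp((lam / r - 1.0) * np.log1p(r * t / c) - mu * t)
    total, _ = quad(ratio, 0, np.inf)
    phi0 = 1.0 / (1.0 + (lam / c) * total)       # from phi(inf) = 1
    upto, _ = quad(ratio, 0, x)
    return phi0 * (1.0 + (lam / c) * upto)

print(survival_probability(1.0, c=1.5, r=0.1, lam=1.0, mu=1.0))
```

As r → 0 the formula collapses to the classical Cramér-Lundberg survival probability 1 − (λ/(cµ))e^{−(µ−λ/c)x}, which provides a convenient numerical check.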
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> The Laplace transform-Integrating Factor Approach for Jump-Diffusions with Affine Operator Avram and Usabel (2008) <s> We provide a unified analytical treatment of first passage problems under an affine state-dependent jump-diffusion model (with drift and volatility depending linearly on the state). Our proposed model, which generalizes several previously studied cases, may be used for example for obtaining probabilities of ruin in the presence of interest rates under the rational investment strategies proposed by Berk & Green (2004). <s> BIB001 | We summarize now for comparison the results of BIB001 for the still tractable, more general extension of the Segerdahl-Tichy process provided by jump-diffusions with affine premium and volatility (19). Besides Ornstein-Uhlenbeck type processes, (19) includes another famous particular case, Cox-Ingersoll-Ross (CIR) type processes, obtained when α_1 > 0. Introduce now a combined ruin-survival expected payoff at time t, where w, p represent, respectively: • A penalty w(X_T) at a stopping time T, w : R → R; • A reward for survival after t years: p(X_t), p : R → R_+. Some particular cases of interest are the survival probability for t years, obtained with w(X_T) = 0, p(X_t) = 1_{X_t ≥ 0}, and the ruin probability with deficit larger in absolute value than y, obtained with w(x) = 1_{x < −y}, p(x) = 0. Let V_q(x) (21) denote a "Laplace-Carson"/"Gerber-Shiu" discounted penalty/pay-off. Proposition 1. (Avram and Usabel 2008, Lem. 1, Thm. 2) (a) Consider the process (19). Let V_q(x) denote the corresponding Gerber-Shiu function (21), let w_Π(x) = ∫_x^∞ w(x − u)Π(du) denote the expected payoff at ruin, and let g(x) := w_Π(x) + qp(x), g(s) denote the combination of the two payoffs and its Laplace transform; note that the particular cases correspond to the survival and ruin probability, respectively BIB001 .
Then, the Laplace transform of the derivative where h(s) = Π(s) + q/s (this corrects a typo in (Avram and Usabel 2008, eqn. (9))), and where the integrating factor is obtained from (16) by replacing c with c − α_1 (Avram and Usabel 2008, eqn. (11)). Equivalently, (b) If α_0 = 0 = α_1 and q > 0, the survival probability satisfies Integrating by parts, J(y) = −I_q(s) + cI_q(s) − qI_{q−1}(s). Finally,
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Asmussen's Embedding Approach for Solving Kolmogorov's Integro-Differential Equation with Phase-Type Jumps <s> We consider a process with reflection at the origin and paths which are piecewise linear or Brownian, with the drift and variance constants being determined by the state of an underlying finite Markov process; the purely linear case corresponds to fluid flow models of current interest in telecommunications engineering. It is shown that the stationary distribution is phase-type, and various algorithms for computing the phase representation are given, some iterative with each step involving a matrix inversion and some based upon spectral expansion of the phase generator. Mathematically, the point of view is Markov additive processes, and some key tools are time-reversal and auxiliary Markov processes obtained by observing the underlying Markov process when the additive component is at a maximum. <s> BIB001 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Asmussen's Embedding Approach for Solving Kolmogorov's Integro-Differential Equation with Phase-Type Jumps <s> For the Cramér-Lundberg risk model with phase-type claims, it is shown that the probability of ruin before an independent phase-type time H coincides with the ruin probability in a certain Markovian fluid model and therefore has a matrix-exponential form. When H is exponential, this yields in particular a probabilistic interpretation of a recent result of Avram & Usabel. When H is Erlang, the matrix algebra takes a simple recursive form, and fixing the mean of H at T and letting the number of stages go to infinity yields a quick approximation procedure for the probability of ruin before time T. Numerical examples are given, including a combination with extrapolation.
<s> BIB002 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Asmussen's Embedding Approach for Solving Kolmogorov's Integro-Differential Equation with Phase-Type Jumps <s> Our paper illustrates how the theory of Lie systems allows recovering known results and provides new examples of piecewise deterministic processes with phase-type jumps for which the corresponding first-time passage problems may be solved explicitly. <s> BIB003 | One of the most convenient approaches to get rid of the integral term in (29) is a probabilistic transformation which gets rid of the jumps as in BIB001 , when the downward phase-type jumps have a survival function F̄_C(x) = βe^{−Bx}1, where B is an n × n stochastic generating matrix (nonnegative off-diagonal elements and nonpositive row sums), β = (β_1, . . . , β_n) is a row probability vector (with nonnegative elements and ∑_{j=1}^n β_j = 1), and 1 = (1, 1, ..., 1) is a column probability vector. The density is f_C(x) = βe^{−Bx}b, where b = (−B)1 is a column vector, and the Laplace transform is f̂_C(s) = β(sI + B)^{−1}b. Asmussen's approach BIB001 ; BIB002 replaces the negative jumps by segments of slope −1, embedding the original spectrally negative Lévy process into a continuous Markov modulated Lévy process. For the new process we have auxiliary unknowns A_i(x) representing ruin or survival probabilities (or, more generally, Gerber-Shiu functions) when starting at x conditioned on a phase i with drift downwards (i.e., in one of the "auxiliary stages of artificial time" introduced by changing the jumps to segments of slope −1). Let A denote the column vector with components A_1, . . . , A_n. The Kolmogorov integro-differential equation turns then into a system of ODEs, due to the continuity of the embedding process.
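The phase-type ingredients above are easy to check numerically. The sketch below (not from the survey; it uses the standard convention with a subgenerator T, so the survey's B corresponds to −T) verifies the density formula f(x) = β·expm(Tx)·t, with exit vector t = −T·1, against the Erlang(2, λ) special case, whose density is λ²x·e^{−λx}:

```python
import math

# Hedged illustration with standard phase-type conventions: density
# f(x) = beta * expm(T x) * t, where T is the subgenerator and t = -T*1.
# We check the Erlang(2, lam) case, whose density is lam^2 * x * exp(-lam*x).

def expm(M, terms=60):
    """Matrix exponential of a small matrix via a plain Taylor series."""
    n = len(M)
    A = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in A]                               # holds M^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][l] * M[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        A = [[A[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return A

lam = 1.5
T = [[-lam, lam], [0.0, -lam]]   # Erlang(2) subgenerator: two Exp(lam) stages
beta = [1.0, 0.0]                # start in the first phase
t = [0.0, lam]                   # exit rates: -T * (1, 1)^T

def phase_density(x):
    M = expm([[T[i][j] * x for j in range(2)] for i in range(2)])
    return sum(beta[i] * M[i][j] * t[j] for i in range(2) for j in range(2))

print(phase_density(2.0), lam ** 2 * 2.0 * math.exp(-lam * 2.0))
```

The same β, T, t data is exactly what Asmussen's embedding feeds into the fluid model: each claim is replaced by a phase-type excursion of slope −1 through the phases of T.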
For the ruin probability with exponential jumps of rate µ for example, there is only one downward phase, and the system is: For survival probabilities, one only needs to modify the boundary conditions; see the following section. 5.1. Exit Problems for the Segerdahl-Tichy process, with q = 0 Asmussen's approach is particularly convenient for solving exit problems for the Segerdahl-Tichy process. Example 1. The eventual ruin probability. When q = 0, the system for the ruin probabilities with x ≥ 0 is: This may be solved by subtracting the equations. Putting we find: Finally, for the survival probability Ψ, where Ψ(0) = 1/W(∞) by plugging W(0) = 1 in the first and last terms in (45). We may also rewrite (45) as: Note that w(x) > 0 implies that the scale function W(x) is nondecreasing. (46) does not depend on a. Indeed, the analog of (44) is: ∫_a^x w(u)du Remark 7. The definition adopted in this section for the scale function W(x, a) uses the normalization W(a, a) = 1, which is only appropriate in the absence of Brownian motion. Despite the new scale derivative/integrating factor approach, we were not able to produce further explicit results beyond (33), due to the fact that neither the scale derivative nor the integral of the integrating factor is explicit when q > 0 (this is in line with BIB003 ). Thus (33) remains for now an outstanding, not well-understood exception. Problem 5. Are there other explicit first passage results for Segerdahl's process when q > 0? In the next subsections, we show that via the scale derivative/integrating factor approach, we may rederive well-known results for q = 0.
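The one-downward-phase system can be checked numerically. The sketch below is not from the survey: it specializes to the classical constant-premium Cramér-Lundberg model (no interest term), where both the embedded two-equation fluid system and the closed-form ruin probability ψ(x) = (λ/(cμ))e^{−(μ−λ/c)x} are standard, and verifies that integrating the system reproduces the closed form:

```python
import math

# Asmussen's fluid embedding for the classical Cramer-Lundberg model
# (constant premium c, Poisson claim rate lam, Exp(mu) claim sizes):
# one "up" phase and one "down" phase.
#   A0(x): ruin probability starting in the up phase at level x
#   A1(x): ruin probability starting in the down phase at level x
# Backward equations of the embedded fluid model:
#   c * A0'(x) = lam * (A0(x) - A1(x))
#       A1'(x) = mu  * (A0(x) - A1(x))
# Boundary conditions: A1(0) = 1 (down phase at 0 means immediate ruin),
# A0(inf) = 0; under the net profit condition lam < c*mu,
#   A0(x) = (lam/(c*mu)) * exp(-(mu - lam/c) * x).

c, lam, mu = 2.0, 1.0, 1.0

def rhs(a0, a1):
    d = a0 - a1
    return (lam / c) * d, mu * d

def integrate(a0, a1, X=30.0, n=30000):
    """Plain RK4 on [0, X]; returns the grid values of A0."""
    h, out = X / n, [a0]
    for _ in range(n):
        k1 = rhs(a0, a1)
        k2 = rhs(a0 + h / 2 * k1[0], a1 + h / 2 * k1[1])
        k3 = rhs(a0 + h / 2 * k2[0], a1 + h / 2 * k2[1])
        k4 = rhs(a0 + h * k3[0], a1 + h * k3[1])
        a0 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        a1 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        out.append(a0)
    return out

psi0 = lam / (c * mu)          # known boundary value A0(0)
A0 = integrate(psi0, 1.0)

def psi(x):
    return psi0 * math.exp(-(mu - lam / c) * x)

print(A0[3000], psi(3.0))      # numerical vs closed form at x = 3
```

Replacing the constant premium c by a state-dependent c(x) in `rhs` gives the Segerdahl variant of the system; only the right-hand side changes, not the integration scheme.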
A unifying survey on weighted logics and weighted automata <s> Introduction <s> The formalism of regular expressions was introduced by S. C. Kleene [6] to obtain the following basic theorems. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> In this note we discuss the definition of a family C of automata derived from the family C0 of the finite one-way one-tape automata (Rabin and Scott, 1959). In loose terms, the automata from C are among the machines characterized by the following restrictions: (a) Their output consists in the acceptance (or rejection) of input words belonging to the set F of all words in the letters of a finite alphabet X. (b) The automaton operates sequentially on the successive letters of the input word without the possibility of coming back on the previously read letters and, thus, all the information to be used in the further computations has to be stored in the internal memory. (c) The unbounded part of the memory, V_N, is the finite dimensional vector space of the vectors with N integral coordinates; this part of the memory plays only a passive role and all the control of the automaton is performed by the finite part. (d) Only elementary arithmetic operations are used and the amount of computation allowed for each input letter is bounded in terms of the total number of additions and subtractions. (e) The rule by which it is decided to accept or reject a given input word is submitted to the same type of requirements and it involves only the storage of a finite amount of information. Thus the family C is a very elementary modification of C0 and it is not <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> 1. Motivation. Many variants of the notion of automaton have appeared in the literature. We find it convenient here to adopt the notion of E. F. Moore [7].
Inasmuch as Rabin-Scott [9] adopt this notion, too, it is convenient to refer to [9] for various results presumed here. In particular, Kleene's theorem [5, Theorems 3, 5] is used in the form in which it appears in [9]. It is often perspicacious to view regular expressions, and this notion is used in the sense of [3]. In general, we are concerned with the problems of automatically designing an automaton from a specification of a relation which is to hold between the automaton's input sequences and determined output sequences. These "design requirements" are given via a formula of some kind. The problems with which we are concerned have been described in [1]. With respect to particular formalisms for expressing "design requirements" as well as the notion of automaton itself, the problems are briefly and informally these: (1) to produce an algorithm which when it operates on an automaton and a design requirement produces the correct answer to the question "Does this automaton satisfy this design requirement?", or else show no such algorithm exists; (2) to produce an algorithm which operates on a design requirement and produces the correct answer to the question "Does there exist an automaton which satisfies this design requirement?", or else show no such algorithm exists; (3) to produce an algorithm which operates on a design requirement and terminates with an automaton which satisfies the requirement when one exists and otherwise fails to terminate, or else show no such algorithm exists. Interrelationships among problems (1), (2), (3) will appear in the paper [1]. This paper will also indicate the close connection between problem (1) and decision problems for truth of sentences of certain arithmetics. The paper [1] will also make use of certain results concerning weak arithmetics already obtained in the literature to obtain answers to problems (1) and (3).
Thus <s> BIB003 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> We define a weighted monadic second order logic for trees where the weights are taken from a commutative semiring. We prove that a restricted version of this logic characterizes the class of formal tree series which are accepted by weighted bottom-up finite state tree automata. The restriction on the logic can be dropped if additionally the semiring is locally finite. This generalizes corresponding classical results of Thatcher, Wright, and Doner for tree languages and it extends recent results of Droste and Gastin [Weighted automata and weighted logics, in: Automata, Languages and Programming--32nd International Colloquium, ICALP 2005, Lisbon, Portugal, 2005, Proceedings, Lecture Notes in Computer Science, Vol. 3580, Springer, Berlin, 2005, pp. 513-525, full version in Theoretical Computer Science, to appear.] from formal power series on words to formal tree series. <s> BIB004 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Büchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property.
<s> BIB005 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Nondeterministic finite automata with states and transitions labeled by real-valued weights have turned out to be powerful tools for the representation and compression of digital grayscale and color images. The addressing of pixels by input-sequences is extended to cover multi-resolution images. Encoding algorithms for such weighted finite automata (WFA) exploit self-similarities for efficient image compression, outperforming the well-known JPEG baseline standard most of the time. WFA-concepts are embedded easily into weighted finite transducers (WFT) which can execute several natural operations on images in their compressed form and also into so-called parametric WFA, which are closely related to generalized Iterated Function Systems. <s> BIB006 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> We explain why weighted automata are an attractive knowledge representation for natural language problems. We first trace the close historical ties between the two fields, then present two complex real-world problems, transliteration and translation. These problems are usefully decomposed into a pipeline of weighted transducers, and weights can be set to maximize the likelihood of a training corpus using standard algorithms. We additionally describe the representation of language models, critical data sources in natural language processing, as weighted automata. We outline the wide range of work in natural language processing that makes use of weighted string and tree automata and describe current work and challenges. <s> BIB007 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. 
A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB008 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> We present an algorithmic method for the quantitative, performance-aware synthesis of concurrent programs. The input consists of a nondeterministic partial program and of a parametric performance model. The nondeterminism allows the programmer to omit which (if any) synchronization construct is used at a particular program location. The performance model, specified as a weighted automaton, can capture system architectures by assigning different costs to actions such as locking, context switching, and memory and cache accesses. The quantitative synthesis problem is to automatically resolve the nondeterminism of the partial program so that both correctness is guaranteed and performance is optimal. As is standard for shared memory concurrency, correctness is formalized "specification free", in particular as race freedom or deadlock freedom. 
For worst-case (average-case) performance, we show that the problem can be reduced to 2-player graph games (with probabilistic transitions) with quantitative objectives. While we show, using game-theoretic methods, that the synthesis problem is Nexp-complete, we present an algorithmic method and an implementation that works efficiently for concurrent programs and performance models of practical interest. We have implemented a prototype tool and used it to synthesize finite-state concurrent programs that exhibit different programming patterns, for several performance models representing different architectures. <s> BIB009 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Quantitative aspects of systems can be modeled by weighted automata. Here, we deal with such automata running on finite trees. Usually, transitions are weighted with elements of a semiring and the behavior of the automaton is obtained by multiplying the weights along a run. We turn to a more general cost model: the weight of a run is now determined by a global valuation function. An example of such a valuation function is the average of the weights. We establish a characterization of the behaviors of these weighted finite tree automata by fragments of weighted monadic second-order logic. For bi-locally finite bimonoids, we show that weighted tree automata capture the expressive power of several semantics of full weighted MSO logic. Decision procedures follow as consequences. <s> BIB010 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages where every timed word is mapped to a value, e.g., a real number. 
In this paper, we prove a Nivat theorem for WTA which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. The proof of this result can be derived from our Nivat theorem and Wilke’s theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation process from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic. <s> BIB011 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. <s> BIB012 | Weighted automata are a well-studied formalism modelling quantitative behaviours. 
Introduced by Schützenberger in BIB002 , they have been applied in many areas such as image compression BIB006 , natural language processing BIB007 , verification and synthesis of programs BIB009 , etc. In recent years, high-level specification formalisms of quantitative properties have received increasing interest. Among other successes, the connection between monadic second-order logic (MSO) and finite automata established by Büchi, Elgot and Trakhtenbrot BIB001 BIB003 has been extended to the weighted setting. There have been many attempts to find a suitable extension of MSO to describe quantitative properties which captures the expressive power of weighted automata. The considered variants differ with respect to the structures (words, ranked or unranked trees, nested words, etc.) and the weight domains (semirings, valuation monoids, valuation structures, multi-operator monoids, etc.). This article aims at revisiting the link between weighted logics and weighted automata in a uniform manner with regard to these two dimensions. Our main contribution is to consider a new fragment of weighted logics containing a minimal set of features. In order to simplify the uniformity with respect to the structures, we syntactically separate a Boolean fragment from the weighted part: only the syntax of Boolean formulae depends on the structures considered. Then, we clearly separate a small fragment able to define step functions (that we call step formulae) from the more general weighted logic. Because of the minimal set of features that it displays, we call our logic core weighted monadic second-order logic. This separation into three distinct layers, more or less clear in previous works, is designed both to clarify the subsequent study of the expressive power, and to simplify the use of the weighted logic. Towards defining the semantics of this new logic, we first revisit weighted automata by defining an alternative semantics, then lifting it to formulae.
This is done in two phases. First, an abstract semantics associates with a structure a multiset of weight-labelled structures. E.g., in the case of words, a weighted automaton/formula will map every word to a multiset of weight words. In the setting of trees, every tree is associated with a multiset of weight trees (of the same shape as the original tree). This abstract semantics is fully uninterpreted and, hence, does not depend on any algebraic structure over the set of weights considered. This semantics is in the spirit of a transducer. It has already been used in similar contexts: in BIB008 with an operator H(ω) which relabels trees with operations taken from a multi-operator monoid, in with a weight assignment logic over infinite words, in BIB011 with Nivat theorems for weighted automata over various structures. In a second phase, a concrete semantics is given, by means of an aggregation operator taking the abstract semantics and aggregating every multiset of weight structures to a single value (in a possibly different weight domain). For instance, the usual semantics of weighted automata over semirings can be recovered by mapping every weight word to the product of its weights, and merging the multiset with the addition of the semiring. Separating the semantics in two successive phases, both for weighted automata and logics, allows us to revisit the original proof of expressive equivalence of BIB005 in the abstract semantics. This result has been extended to various weight domains and/or structures (see below). The proofs of equivalence in all these works are based on the same core argument which relates runs of automata with the evaluation of formulae. Inspired by the above similarities, our choice of the abstract multiset semantics manifests this core argument. Because the abstract semantics is fully uninterpreted, no additional hypothesis on the weight domain is required to prove the equivalence.
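The two-phase semantics on words can be made concrete with a small sketch (the automaton below is ours, purely illustrative). Phase 1, the abstract semantics, maps a word to the multiset of weight words, one per accepting run; phase 2 applies the semiring aggregation described above (here over (ℕ, +, ×)):

```python
from collections import Counter
from math import prod

# transitions: state -> letter -> list of (next_state, weight)
# The automaton guesses one position carrying an 'a' (weight 2),
# so its usual semiring value on a word is 2 * (number of a's).
delta = {
    'p': {'a': [('p', 1), ('q', 2)], 'b': [('p', 1)]},
    'q': {'a': [('q', 1)], 'b': [('q', 1)]},
}
init, final = ['p'], {'q'}

def abstract(word):
    """Phase 1: multiset of weight words, one per accepting run."""
    out = Counter()
    def go(state, i, weights):
        if i == len(word):
            if state in final:
                out[tuple(weights)] += 1
            return
        for nxt, w in delta[state].get(word[i], []):
            go(nxt, i + 1, weights + [w])
    for s in init:
        go(s, 0, [])
    return out

def aggr_sr(mset):
    """Phase 2 (semiring aggregation): product inside each weight word,
    sum over the multiset -- recovers the usual automaton semantics."""
    return sum(mult * prod(ws) for ws, mult in mset.items())

print(abstract('aba'))        # two accepting runs, hence two weight words
print(aggr_sr(abstract('aba')))
```

Swapping `aggr_sr` for another aggregation (max-plus, discounted sum, ...) changes the concrete semantics without touching `abstract`, which is exactly the point of the uninterpreted phase.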
We then apply the aggregation operator to obtain a concrete equivalence between weighted automata and our core weighted logic. Our last contribution is to show, by means of purely logical reasoning, that our new fragment of core weighted logic is expressively equivalent to the logics proposed in the previous works. Over finite words, this allows us to recover the results over semirings BIB005 , (product) valuation monoids and (product) valuation structures BIB012 . Valuation monoids replace the product operation of the semiring by a lenient valuation operation, making it possible to consider discounted sums, averages, or more evolved combinations of sequences of weights. Valuation structures finally also replace the sum by a more general evaluation operator, for instance ratios of several weights computed simultaneously. As an example, it is then possible to compute the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Our unifying proof gives new insight into the additional hypotheses (commutativity, distributivity, etc.) over the weight domains used in these works. After studying in full detail the case of finite words, we illustrate the uniformity of the method with respect to structures, by considering ranked and unranked trees. Once again, our study revisits existing works over semirings BIB004 , (product) valuation monoids BIB010 , and also multi-operator monoids BIB008 . The syntax of the logic in the case of multi-operator monoids is different from the other logics. The proof techniques used to show equivalence of the two formalisms are nevertheless very close to the original ones for semirings.
A unifying survey on weighted logics and weighted automata <s> Semantics over valuation structures <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. <s> BIB001 | In order to get even more flexibility in the computation of the semantics, BIB001 proposed to replace the sum of the run values which is used in (1) by a more general evaluator function F. In addition, BIB001 allows the set of weights R used in the automaton and for the value of runs to be different from the final set of weights S computed by the evaluator function. Formally, a valuation structure is a tuple (U, Val, S, F) where U and S are two sets, Val : U^+ → U is a valuation operator, and F : N^U → S is an evaluator function mapping a finite multiset of weights in U to a single weight in S. Given an R-weighted automaton A with R ⊆ U (that we call weighted automata over a valuation structure), we compute the value of a run as in the case of valuation monoids: Val(ρ) = Val(wgt(δ_1) · · · wgt(δ_n)) when ρ = δ_1 · · · δ_n.
Then, the semantics of A over a word w ∈ Σ^+ is defined in two steps: First, Val transforms the set of accepting runs over w into a multiset of weights, which is then transformed into the final semantics with the evaluator function. For instance, we may choose U = Z × N with the valuation Then, choosing S = Q ∪ {+∞} in the valuation structure, we may compute the average of the ratios between rewards and non-negative costs with the evaluator function defined by F (∅) = 0 and |
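A small executable sketch of this reward/cost instance (the automaton and its weights are ours, purely illustrative; Val is taken to be componentwise summation of the (reward, cost) pairs along a run, and F averages the ratios with F(∅) = 0, as in the text):

```python
from fractions import Fraction

# Valuation structure with U = Z x N: each transition carries a pair
# (reward, cost); Val sums pairs along a run; the evaluator F maps the
# multiset of run values to the average of the ratios reward/cost.
init, final = {0}, {1}
# transitions: (state, letter) -> list of (next_state, (reward, cost))
delta = {
    (0, 'a'): [(0, (1, 1)), (1, (3, 2))],
    (1, 'a'): [(1, (2, 1))],
}

def run_values(word):
    """Multiset (as a list) of Val(run) over all accepting runs."""
    vals = []
    def go(q, i, reward, cost):
        if i == len(word):
            if q in final:
                vals.append((reward, cost))
            return
        for q2, (r, c) in delta.get((q, word[i]), []):
            go(q2, i + 1, reward + r, cost + c)
    for q0 in init:
        go(q0, 0, 0, 0)
    return vals

def F(vals):
    """Evaluator: average of reward/cost ratios; F(empty) = 0."""
    if not vals:
        return Fraction(0)
    return sum(Fraction(r, c) for r, c in vals) / len(vals)

print(run_values('aa'))   # the two accepting runs and their pair sums
print(F(run_values('aa')))
```

Note that the run values live in U = Z × N while the final semantics lands in S = Q ∪ {+∞}, illustrating why the two weight domains of a valuation structure may differ.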
A unifying survey on weighted logics and weighted automata <s> Core weighted monadic second-order logic <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Core weighted monadic second-order logic <s> While a mature theory around logics such as MSO, LTL, and CTL has been developed in the pure boolean setting of finite automata, weighted automata lack such a natural connection with (temporal) logic and related verification algorithms. In this paper, we will identify weighted versions of MSO and CTL that generalize the classical logics and even other quantitative extensions such as probabilistic CTL. We establish expressiveness results on our logics giving translations from weighted and probabilistic CTL into weighted MSO. <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Core weighted monadic second-order logic <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. 
We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB003 | We now turn to the description of a new weighted logic, which will be equivalent to weighted automata. Most existing works start with the definition of a very general logic, and then introduce restrictions to match the expressive power of weighted automata. We take the opposite approach by defining a very basic weighted logic, yet powerful enough to be expressively equivalent to weighted automata. Our logic has three layers: the Boolean fragment which is the classical MSO logic over words, a step weighted fragment (step-wMSO) defining step functions (i.e., piecewise constant functions with a finite number of pieces), and the core weighted logic (core-wMSO) which has the full expressive power of weighted automata. We will show in Section 5 that core-wMSO is a fragment of the (full) weighted MSO logic (wMSO) defined in BIB001 . Considering a Boolean fragment inside a weighted logic was originally done in BIB002 and followed in many articles, see, e.g., BIB003 . with a ∈ Σ, r ∈ R, x, y first-order variables and X a second-order variable. Table 1 Syntax of the core weighted logic core-wMSO(Σ, R).
A unifying survey on weighted logics and weighted automata <s> Equivalence with weighted automata <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 | The seminal result of BIB001 , linking weighted automata and wMSO formulae, may then be rephrased as in the following theorem. Since N R is a semiring and the semantics {| · |} of weighted automata and core-wMSO is the natural semantics in this semiring, one may think that Theorem 9 is a formal corollary of BIB001 . This is almost true but not entirely. Here we insist that the set of weights used in the formula Φ A is the same as the set of weights used in the weighted automaton A (and vice versa for A Φ and Φ). This is not guaranteed by BIB001 though the proof can be adapted to obtain Theorem 9. We give in Section 4.5 a rather short proof of Theorem 9, which is based on some new ideas and which ensures that the set of weights is preserved. The equivalence of Theorem 9 transfers to the concrete semantics without any conditions on the aggregation operator. 
Corollary 10 For each R-weighted automaton A over alphabet Σ, we can effectively construct a sentence Φ A in core-wMSO(Σ, R), such that for all w ∈ Σ + , For each sentence Φ in core-wMSO(Σ, R), we can effectively construct an R-weighted automaton A Φ over Σ such that for all w ∈ Σ + , Instantiating the aggregation operator with aggr sr for semirings, aggr vm for valuation monoids, or aggr vs for valuation structures, we obtain Corollary 11 The expressive power of R-weighted automata and core-wMSO(Σ, R) is the same in each of the following cases Before presenting the proof of Theorem 9, we show, in the next two subsections, the robustness of core-wMSO by adding some useful features to it, without changing its expressive power. Notice that the proofs of equivalence are fully logical and do not make use of translations into weighted automata: in particular, they do not require the use of Theorem 9.
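Both directions of Corollary 10 can be exercised on a toy instance. The sketch below (the encoding of the automaton as a dict of transitions is our own illustrative choice, not notation from the survey) enumerates the accepting runs of a small N-weighted word automaton, collects the multiset of weight sequences, i.e., the abstract semantics, and then applies the aggregation operator aggr sr of the semiring (N, +, ×), summing over runs the product of weights:

```python
from collections import Counter

# A toy N-weighted word automaton: transitions (p, a, q) -> weight.
# States 0, 1; initial I = {0}; final F = {1}. Our own illustrative encoding.
DELTA = {(0, 'a', 0): 2, (0, 'a', 1): 3, (1, 'a', 1): 5}
I, F = {0}, {1}

def runs(word):
    """Yield the weight sequence of every accepting run on `word`."""
    def go(state, i, acc):
        if i == len(word):
            if state in F:
                yield tuple(acc)
            return
        for (p, a, q), w in DELTA.items():
            if p == state and a == word[i]:
                yield from go(q, i + 1, acc + [w])
    for q0 in I:
        yield from go(q0, 0, [])

def abstract_semantics(word):
    """Multiset of weight sequences of accepting runs (the abstract semantics)."""
    return Counter(runs(word))

def aggr_sr(multiset):
    """Aggregation for a semiring like (N, +, x): sum over runs of the product."""
    total = 0
    for seq, mult in multiset.items():
        prod = 1
        for w in seq:
            prod *= w
        total += mult * prod
    return total
```

On the word aa there are two accepting runs, with weight sequences (2, 3) and (3, 5), so the concrete semiring semantics is 2·3 + 3·5 = 21.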
A unifying survey on weighted logics and weighted automata <s> Actually, (3) holds for arbitrary multisets <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Actually, (3) holds for arbitrary multisets <s> Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages where every timed word is mapped to a value, e.g., a real number. In this paper, we prove a Nivat theorem for WTA which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. 
The proof of this result can be derived from our Nivat theorem and Wilke’s theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation process from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic. <s> BIB002 | For instance, in the weighted timed setting BIB002 , Droste and Perevoshchikov translate weighted (timed) automata into weighted sentences where Boolean formulae inside the universal quantification are of the form x ∈ X only. In our context, it means only set-step-wMSO inside a product ∏ x . Similarly, in the context of trees BIB001 , Fülöp, Stüber, and Vogler use an operation H(ω) which renames every node of the input tree with an operator from some family ω (coming from a multioperator monoid). Again, the renaming is described by means of formulae of the form x ∈ X only, and not by more general MSO formulae.
A unifying survey on weighted logics and weighted automata <s> Lemma 18 <s> The formalism of regular expressions was introduced by S. C. Kleene [6] to obtain the following basic theorems. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Lemma 18 <s> 1. Motivation. Many variants of the notion of automaton have appeared in the literature. We find it convenient here to adopt the notion of E. F. Moore [7]. Inasmuch as Rabin-Scott [9] adopt this notion, too, it is convenient to refer to [9] for various results presumed here. In particular, Kleene's theorem [5, Theorems 3, 5] is used in the form in which it appears in [9]. It is often perspicacious to view regular expressions, and this notion is used in the sense of [3]. In general, we are concerned with the problems of automatically designing an automaton from a specification of a relation which is to hold between the automaton's input sequences and determined output sequences. These "design requirements" are given via a formula of some kind. The problems with which we are concerned have been described in [1]. With respect to particular formalisms for expressing "design requirements" as well as the notion of automaton itself, the problems are briefly and informally these: (1) to produce an algorithm which when it operates on an automaton and a design requirement produces the correct answer to the question "Does this automaton satisfy this design requirement?", or else show no such algorithm exists; (2) to produce an algorithm which operates on a design requirement and produces the correct answer to the question "Does there exist an automaton which satisfies this design requirement?", or else show no such algorithm exists; (3) to produce an algorithm which operates on a design requirement and terminates with an automaton which satisfies the requirement when one exists and otherwise fails to terminate, or else show no such algorithm exists. 
Interrelationships among problems (1), (2), (3) will appear in the paper [1]. This paper will also indicate the close connection between problem (1) and decision problems for truth of sentences of certain arithmetics. The paper [1] will also make use of certain results concerning weak arithmetics already obtained in the literature to obtain answers to problems (1) and (3). Thus <s> BIB002 | The expressive power of core-wMSO(Σ, R) does not change if we replace step-wMSO(Σ, R) formulae by set-step-wMSO(R) formulae. Proof We start with a core-wMSO formula Φ = ∏x Ψ where Ψ is a step-wMSO(Σ, R) formula. Let ϕ 1 , . . . , ϕ n be the MSO formulae occurring in Ψ as conditions of the if-then-else operator. We let X = (X 1 , . . . , X n ) be a tuple of fresh second-order variables. Let also Ψ′ be the formula obtained from Ψ by replacing every occurrence of ϕ i by x ∈ X i , for all 1 ≤ i ≤ n. Notice that Ψ′ is a set-step-wMSO(R) formula. We claim that Φ = ∏x Ψ is equivalent to the formula Indeed, let V = free(Φ) = free(∏x Ψ) and V′ = V ∪ {X 1 , . . . , X n }. For every valid (w, σ) ∈ Σ + V there is a unique (w, σ′) ∈ Σ + V′ such that σ′| V = σ and w, σ′ |= ⋀ i ∀x (x ∈ X i ↔ ϕ i ). For all 1 ≤ i ≤ n, we have σ′(X i ) = {j ∈ pos(w) | w, σ[x → j] |= ϕ i }. We obtain {|Φ′|}(w, σ) = {|∏x Ψ′|}(w, σ′). Then, it is easy to check by induction on Ψ that for all j ∈ pos(w) we have Proof (of Theorem 9) Let A = (Q, ∆, wgt, I, F ) be a weighted automaton. We use a set variable X δ for each transition δ ∈ ∆ and we let X = (X δ ) δ∈∆ . Intuitively, the tuple X encodes a run of A over a word w when each set variable X δ is interpreted as the set of positions at which transition δ is used in that run. We can easily write an MSO formula run(X) which evaluates to true on some word w if and only if X encodes a run of A on w starting from I and ending in F . First, we state that X is a partition on the positions of w. Then we request that if the first position of w is in X δ then δ ∈ I × Σ × Q is initial.
Similarly, the transition of the last position should be final. Finally, if δ = (p, a, q) and δ′ = (p′, a′, q′) are the transitions of two consecutive positions of w then q = p′. It is routine to write all these requirements in MSO (even in FO 3 ). Assuming that run(X) holds, we let weight(x, X) be the set-step-wMSO formula which evaluates to wgt(δ) where δ ∈ ∆ is the unique transition such that x ∈ X δ . Formally, if ∆ = {δ 1 , δ 2 , . . . , δ n } then we define weight(x, X) as x ∈ X δ1 ? wgt(δ 1 ) : · · · x ∈ X δn−1 ? wgt(δ n−1 ) : wgt(δ n ) and Φ A = ∑ X run(X) ? ∏ x weight(x, X) : 0 . We can easily check that for all words w ∈ Σ + we have Conversely, we proceed by induction on Φ, hence we have to deal with free variables. So we construct for each formula Φ a weighted automaton A Φ over the alphabet It is folklore that we may increase the set of variables encoded in the alphabet whenever needed, e.g., to deal with sum or if-then-else. Formally, if V ⊆ V′ then we can lift an automaton A V defined on the al- The automaton A 0 has a single state which is initial but not final and has no transitions. We recall the classical constructions for the additive operators of core-wMSO: +, ∑ x and ∑ X . If Φ = Φ 1 + Φ 2 then A Φ is obtained as the disjoint union of A Φ1 and A Φ2 , both lifted to Σ Φ . If Φ = ∑ X Φ 1 then A Φ is obtained via a variant of the projection construction starting from A Φ1 . Assume that A Φ1 = (Q, ∆, wgt, I, F ). We define A Φ = (Q × {0, 1}, ∆′, wgt′, I ×{0}, F ×{0, 1}) over alphabet Σ free(Φ) by letting ((p, i), a, (q, j)) ∈ ∆′ iff (p, (a, j), q) ∈ ∆ where (a, j) denotes the letter in Σ free(Φ)∪{X} where the value of the X-component is given by j and the remaining Σ free(Φ) -components (different from X) are given by a. We also let wgt′((p, i), a, (q, j)) = wgt(p, (a, j), q) .
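The conditions defining run(X) above (X is a partition of the positions, letters match, the first transition is initial, the last one is final, and consecutive transitions chain) can be checked executably. A minimal sketch under our own encoding, where X maps each transition δ = (p, a, q) to the set of positions at which δ is used:

```python
def encodes_run(word, X, delta, I, F):
    """Decide whether X encodes an accepting run, mirroring the MSO
    requirements on run(X): partition, letters, initial, final, chaining."""
    n = len(word)
    # Partition: every position is used by exactly one transition.
    if any(sum(j in X[d] for d in delta) != 1 for j in range(n)):
        return False
    used = {j: next(d for d in delta if j in X[d]) for j in range(n)}
    # The letter of the used transition must match the word.
    if any(used[j][1] != word[j] for j in range(n)):
        return False
    # First transition starts in I, last transition ends in F.
    if used[0][0] not in I or used[n - 1][2] not in F:
        return False
    # Consecutive transitions chain: target state = next source state.
    return all(used[j][2] == used[j + 1][0] for j in range(n - 1))
```

For instance, on the word aa with transitions (0, a, 1) and (1, a, 1), interpreting X as using the first transition at position 0 and the second at position 1 encodes a valid run, while assigning both positions to the first transition fails the chaining condition.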
This transfer of the alphabet component for X to the state of A Φ allows us to define a bijection between the accepting runs of A Φ1 and the accepting runs of A Φ , preserving sequences of weights. Then, we deduce easily that {|A Φ |} = {|Φ|} over alphabet Σ free(Φ) . If Φ = ∑ x Φ 1 , the construction is almost the same. In the definition of A Φ , the set of accepting states is F × {1} and the transitions are given by ((p, 0), a, (q, j)) ∈ ∆′ iff (p, (a, j), q) ∈ ∆ ((p, 1), a, (q, 1)) ∈ ∆′ iff (p, (a, 0), q) ∈ ∆ with weights inherited as before wgt′((p, 0), a, (q, j)) = wgt(p, (a, j), q) wgt′((p, 1), a, (q, 1)) = wgt(p, (a, 0), q) . We turn now to the more interesting cases: if-then-else and ∏ x . Notice that ϕ ? Φ 1 : Φ 2 is equivalent to (ϕ ? Φ 1 : 0) + (¬ϕ ? Φ 2 : 0), hence we only need to construct an automaton for Φ = ϕ ? Φ 1 : 0. Let V = free(Φ) = free(ϕ) ∪ free(Φ 1 ). Since ϕ is a (Boolean) MSO formula, by BIB001 BIB002 , we can construct a deterministic automaton A ϕ over the alphabet Σ V which accepts a word w ∈ Σ + V if and only if it is a valid encoding w = (w, σ) satisfying ϕ. Now, by induction, we have an automaton The automaton A Φ is obtained as the "intersection" of A ϕ and A Φ1 (see the formal construction below). Now, let w ∈ Σ + V . If w is not valid or w = (w, σ) is valid and does not satisfy ϕ then A ϕ (hence also A Φ ) has no accepting run on w and we obtain {|A Φ |}(w) = ∅ = {|Φ|}(w). On the other hand, assume that w = (w, σ) is valid and satisfies ϕ. Since A ϕ is deterministic, there is a bijection between the accepting runs of A Φ and the accepting runs of A Φ1 . By construction of A Φ , this bijection preserves the sequence of weights associated with a run. We deduce that {|A Φ |}(w, σ) = {|A Φ1 |}(w, σ) = {|Φ|}(w, σ). We give now the formal definition of A Φ .
Let A Φ1 = (Q 1 , ∆ 1 , wgt 1 , I 1 , F 1 ) be the weighted automaton over ∆ is the set of triples δ = ((p 1 , p 2 ), a, (q 1 , q 2 )) such that δ 1 = (p 1 , a, q 1 ) ∈ ∆ 1 and (p 2 , a, q 2 ) ∈ ∆ 2 , and wgt(δ) = wgt(δ 1 ). Finally, it remains to deal with the case Φ = ∏ x Ψ . By Lemma 18, we may assume that Ψ is a formula in set-step-wMSO(R). So free(Ψ ) = {x, X 1 , . . . , X n } and the tests in Ψ are of the form x ∈ X i for some i ∈ {1, . . . , n}. Also, free(Φ) = {X 1 , . . . , X n } consists of second-order variables only, so every word w ∈ Σ + free(Φ) is valid. (We could also use an unambiguous automaton for A ϕ .) For every τ ∈ {0, 1} n , we define the evaluation Ψ (τ ) inductively as follows: r(τ ) = r and Let w = (a 1 , τ 1 ) · · · (a k , τ k ) ∈ Σ + free(Φ) with a j ∈ Σ and τ j ∈ {0, 1} free(Φ) for all 1 ≤ j ≤ k. We can easily check that {|Φ|}(w) = {{Ψ (τ 1 ) · · · Ψ (τ k )}}. Define A Φ = (Q, ∆, wgt, I, F ) with a single state which is both initial and final (Q = I = F = {q}) and for every a ∈ Σ and τ ∈ {0, 1} free(Φ) , there is a transition δ = (q, (a, τ ), q) ∈ ∆ with wgt(δ) = Ψ (τ ). It is clear that for every word w = (a 1 , τ 1 ) · · · (a k , τ k ) ∈ Σ + free(Φ) , the automaton A Φ has a single run on w whose sequence of weights is Ψ (τ 1 ) · · · Ψ (τ k ). Therefore, {|A Φ |}(w) = {|Φ|}(w), which concludes the proof.
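The evaluation Ψ(τ) and the resulting single-state automaton can be sketched directly. We represent a set-step-wMSO formula as a nested tuple (a hypothetical encoding of our own): either a bare weight r, with r(τ) = r, or ('ite', i, Ψ1, Ψ2) standing for x ∈ X i ? Ψ1 : Ψ2, which evaluates to Ψ1(τ) when τ i = 1 and to Ψ2(τ) otherwise:

```python
def evaluate(psi, tau):
    """Compute Psi(tau): constants evaluate to themselves; a test on X_i
    selects the branch according to the membership bit tau[i]."""
    if not isinstance(psi, tuple):
        return psi  # r(tau) = r
    _, i, psi1, psi2 = psi
    return evaluate(psi1, tau) if tau[i] else evaluate(psi2, tau)

def product_semantics(taus, psi):
    """Weight sequence Psi(tau_1) ... Psi(tau_k) of the unique run of the
    one-state automaton on a word whose positions carry bit-vectors taus."""
    return tuple(evaluate(psi, tau) for tau in taus)
```

For Ψ = x ∈ X 1 ? 2 : (x ∈ X 2 ? 3 : 7) and the bit-vectors (1,0)(0,1)(0,0), the unique run has weight sequence (2, 3, 7).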
A unifying survey on weighted logics and weighted automata <s> Restricted weighted MSO logic <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Restricted weighted MSO logic <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. 
<s> BIB002 | We now present the syntax and semantics of the full wMSO logic that has been studied over semirings BIB001 , valuation monoids and valuation structures BIB002 . The syntax used in these previous works is different. Also, there is no separate semantics for the Boolean fragment, instead, it is obtained as a special case of the quantitative semantics. As we will see, this choice requires some additional conditions on the weight domain, called hypothesis (01) below. In order to obtain the same expressive power as weighted automata, we also have to restrict the usage of conjunction and universal quantifications in wMSO. We present effective translations in both directions relating restricted wMSO with core-wMSO, and the conditions that the weight domain has to fulfil in different settings. Using Corollary 11, we obtain a purely logical proof of the equivalence between restricted wMSO and weighted automata, using core-wMSO as an intermediary, simple and elegant, logical formalism. |
A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> The formalism of regular expressions was introduced by S. C. Kleene [6] to obtain the following basic theorems. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> 1. Motivation. Many variants of the notion of automaton have appeared in the literature. We find it convenient here to adopt the notion of E. F. Moore [7]. Inasmuch as Rabin-Scott [9] adopt this notion, too, it is convenient to refer to [9] for various results presumed here. In particular, Kleene's theorem [5, Theorems 3, 5] is used in the form in which it appears in [9]. It is often perspicacious to view regular expressions, and this notion is used in the sense of [3]. In general, we are concerned with the problems of automatically designing an automaton from a specification of a relation which is to hold between the automaton's input sequences and determined output sequences. These "design requirements" are given via a formula of some kind. The problems with which we are concerned have been described in [1]. 
With respect to particular formalisms for expressing "design requirements" as well as the notion of automaton itself, the problems are briefly and informally these: (1) to produce an algorithm which when it operates on an automaton and a design requirement produces the correct answer to the question "Does this automaton satisfy this design requirement?", or else show no such algorithm exists; (2) to produce an algorithm which operates on a design requirement and produces the correct answer to the question "Does there exist an automaton which satisfies this design requirement?", or else show no such algorithm exists; (3) to produce an algorithm which operates on a design requirement and terminates with an automaton which satisfies the requirement when one exists and otherwise fails to terminate, or else show no such algorithm exists. Interrelationships among problems (1), (2), (3) will appear in the paper [1]. This paper will also indicate the close connection between problem (1) and decision problems for truth of sentences of certain arithmetics. The paper [1 ] will also make use of certain results concerning weak arithmetics already obtained in the literature to obtain answers to problems (1) and (3). Thus <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> Many of the important concepts and results of conventional finite automata theory are developed for a generalization in which finite algebras take the place of finite automata. The standard closure theorems are proved for the class of sets “recognizable” by finite algebras, and a generalization of Kleene's regularity theory is presented. The theorems of the generalized theory are then applied to obtain a positive solution to a decision problem of second-order logic. 
<s> BIB003 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> We define a weighted monadic second order logic for trees where the weights are taken from a commutative semiring. We prove that a restricted version of this logic characterizes the class of formal tree series which are accepted by weighted bottom-up finite state tree automata. The restriction on the logic can be dropped if additionally the semiring is locally finite. This generalizes corresponding classical results of Thatcher, Wright, and Doner for tree languages and it extends recent results of Droste and Gastin [Weighted automata and weighted logics, in: Automata, Languages and Programrning--32nd International Colloquium, ICALP 2005, Lisbon, Portugal, 2005, Proceedings, Lecture Notes in Computer Science, Vol. 3580, Springer, Berlin, 2005, pp. 513-525, full version in Theoretical Computer Science, to appear.] from formal power series on words to formal tree series. <s> BIB004 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> Quantitative aspects of systems can be modeled by weighted automata. Here, we deal with such automata running on finite trees. Usually, transitions are weighted with elements of a semiring and the behavior of the automaton is obtained by multiplying the weights along a run. We turn to a more general cost model: the weight of a run is now determined by a global valuation function. An example of such a valuation function is the average of the weights. We establish a characterization of the behaviors of these weighted finite tree automata by fragments of weighted monadic second-order logic. For bi-locally finite bimonoids, we show that weighted tree automata capture the expressive power of several semantics of full weighted MSO logic. Decision procedures follow as consequences. 
<s> BIB005 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> We introduce a new behavior of weighted unranked tree automata. We prove a characterization of this behavior by two fragments of weighted MSO logic and thereby provide a solution of an open equivalence problem of Droste and Vogler. The characterization works for valuation monoids as weight structures; they include all semirings and, in addition, enable us to cope with average. <s> BIB006 | In this section, we show how to extend the equivalence between weighted automata and core-wMSO to other structures, namely ranked and unranked trees. We will primarily use a semantics in multisets of weight trees (instead of weight sequences). Then, we may apply an aggregation operator to recover a more concrete semantics. This approach allows us to infer results for semirings BIB004 and also for tree valuation monoids BIB005 . There are two main ingredients allowing us to prove the equivalence between core-wMSO and weighted automata. First, in the Boolean case, we should have an equivalence between unambiguous (or deterministic) automata and MSO logic. This equivalence is known for many structures such as words BIB001 BIB002 , ranked trees BIB003 , unranked trees , etc. Second, the computation of the weight of a run ρ of an automaton, and the evaluation of a product formula x Ψ should be based on the same mechanism. For words and valuation monoids (or valuation structures), it is the valuation of a sequence of weights. This is why we used an abstract semantics in the semiring of multisets of weight sequences. For trees and tree valuation monoids, the valuation takes a tree of weights as input and returns a value in the monoid. Hence, we use multisets of weight trees as abstract semantics. Note that, multisets of weight trees form a monoid but not a semiring. BIB006 |
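The remark that multisets of weight trees form a (commutative) monoid but not a semiring can be made tangible: multiset union is the monoid operation and the empty multiset the unit. A minimal sketch using Python's collections.Counter, with weight trees encoded as hashable nested pairs (weight, (subtrees, ...)) (the encoding is our own):

```python
from collections import Counter

# Two weight trees: a leaf with weight 2, and a weight-3 root
# with two leaf children of weights 1 and 2.
t1 = (2, ())
t2 = (3, ((1, ()), (2, ())))

# Multiset union via Counter addition; the empty Counter is the unit.
empty = Counter()
m = Counter([t1, t2]) + Counter([t1])  # t1 now has multiplicity 2
```

Counter addition is associative and commutative with the empty Counter as neutral element, which is exactly the monoid structure used for the abstract semantics.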
A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> We define a weighted monadic second order logic for trees where the weights are taken from a commutative semiring. We prove that a restricted version of this logic characterizes the class of formal tree series which are accepted by weighted bottom-up finite state tree automata. The restriction on the logic can be dropped if additionally the semiring is locally finite. This generalizes corresponding classical results of Thatcher, Wright, and Doner for tree languages and it extends recent results of Droste and Gastin [Weighted automata and weighted logics, in: Automata, Languages and Programrning--32nd International Colloquium, ICALP 2005, Lisbon, Portugal, 2005, Proceedings, Lecture Notes in Computer Science, Vol. 3580, Springer, Berlin, 2005, pp. 513-525, full version in Theoretical Computer Science, to appear.] from formal power series on words to formal tree series. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. 
We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> Quantitative aspects of systems can be modeled by weighted automata. Here, we deal with such automata running on finite trees. Usually, transitions are weighted with elements of a semiring and the behavior of the automaton is obtained by multiplying the weights along a run. We turn to a more general cost model: the weight of a run is now determined by a global valuation function. An example of such a valuation function is the average of the weights. We establish a characterization of the behaviors of these weighted finite tree automata by fragments of weighted monadic second-order logic. For bi-locally finite bimonoids, we show that weighted tree automata capture the expressive power of several semantics of full weighted MSO logic. Decision procedures follow as consequences. <s> BIB003 </s> A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> We introduce a new behavior of weighted unranked tree automata. We prove a characterization of this behavior by two fragments of weighted MSO logic and thereby provide a solution of an open equivalence problem of Droste and Vogler. The characterization works for valuation monoids as weight structures; they include all semirings and, in addition, enable us to cope with average. <s> BIB004 | An R-weighted (unranked) tree automaton over Σ is a tuple A = (Q, ∆, wgt, F ) with (Q, ∆, F ) a tree automaton and wgt : ∆ → R associating a weight to every transition. The weight tree arising from a run ρ of A over a Σ-tree t is the R-tree wgt • ρ mapping each u ∈ dom(t) to wgt(ρ(u)) ∈ R. The abstract semantics of an R-weighted tree automaton A is a multiset of weight trees.
For all trees t ∈ UT Σ , we define Hence, our abstract semantics lives in the commutative monoid N UT R of multisets of R-trees. Then, we may use an aggregation operator aggr : N UT R → S to obtain a concrete semantics in a possibly different weight structure S: Example 28 (Weighted automata over semirings) In the classical setting, the set R of weights is a subset of a semiring (S, +, ×, 0, 1). The value of a run ρ of A over a Σ-tree t is the product of the weights in the R-tree wgt • ρ. Since the semiring is not necessarily commutative, we have to specify the order in which this product is computed. Classically, we choose the postfix order. Formally, given an R-tree ν, the product ∏(ν) = Prod(ν, ε) is computed bottom-up: for all u ∈ dom(ν) we set Prod(ν, u) = Prod(ν, u·1)×· · ·×Prod(ν, u·ar(u))×ν(u) . Note that, if u is a leaf then Prod(ν, u) = ν(u). As for words, the mapping ∏ : UT R → S can be lifted to a mapping ∏ : N UT R → N S . Then, the semantics is defined as always by summing the values of the accepting runs: [[A]](t) = ∑ ρ ∏(wgt(ρ)) where the sum ranges over accepting runs ρ of A over the Σ-tree t. Therefore, the classical case of semirings is obtained from the abstract semantics with the aggregation operator aggr sr (A) = ∑ ∏(A) = ∑ ν∈A ∏(ν) . In the case of a ranked alphabet, we recover the definition of BIB001 of weighted tree automata. The comparison with the weighted unranked tree automata of is not as easy, at least over non-commutative semirings. We believe that over commutative semirings, our model is equivalent to the weighted unranked tree automata of . The situation is different over non-commutative semirings. Our definition is best motivated by considering words as special cases of trees. There are two ways to inject words in unranked trees, as shown in Fig. 2 : either in a horizontal way (a root with children representing the word from left to right), or a vertical way (unary nodes followed by a leaf, the word being read from bottom to top).
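The postfix product of Example 28 is a short bottom-up recursion: combine the products of the subtrees from left to right, then multiply by the node's own weight. A sketch with trees encoded as (weight, [subtrees]) pairs (our own encoding); instantiating the semiring product with string concatenation, which is non-commutative, makes the evaluation order visible:

```python
def postfix_product(tree, times, one):
    """Prod(nu, u): product of the children's values left to right,
    followed by the weight of the node itself."""
    w, children = tree
    acc = one
    for child in children:
        acc = times(acc, postfix_product(child, times, one))
    return times(acc, w)
```

On the tree with root c whose subtrees are the leaf a and the node b with leaf child d, the postfix order produces a, then d·b, then c, i.e., the string adbc; with the ordinary integer product the same recursion computes the usual product of all weights.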
With some easy encodings, we may see that our model of weighted unranked tree automata is a conservative extension of weighted word automata, both for the horizontal and the vertical injections of words. Moreover, our approach allows us to obtain the equivalence between automata and logic for arbitrary semirings (even non-commutative ones), as stated in Theorem 30. In contrast, the model of is not a conservative extension of weighted word automata for the horizontal injection. This is witnessed by an example given in Theorem 6.10] , that we now recall. In the (noncommutative) semiring (P({p, q} ), ∪, ·, ∅, {ε}), with two distinct letters p and q, we consider f : UT Σ → P({p, q} ) the tree series mapping every tree t composed of a root directly followed by n children (n ∈ N) to the language {p n q n }, and every other tree to ∅. The model of weighted unranked tree automata we have chosen can not recognise this tree series. However, the model of automata described in is able to recognise this tree series. The main difference between the two models, that explains this discrepancy, is the way weights are assigned during the computation of the automaton. Whereas we have decided to assign weights to transitions of the unranked tree automaton, keeping a Boolean regular (hedge) language to determine whether a transition is enabled, Droste and Vogler decided instead to use a weighted (hedge) automaton when reading the sequence of states of the children. Then, to each position in the tree domain is associated the weight of the (hedge) automaton reading the sequence of states of the children. The semantics over a tree is given by a depth-first left-to-right product of those weights (first the weight of the children from left to right, and then the weight of the parent). Example 29 (Tree valuation monoids) As for words, extensions of weighted automata to more general weight domains have been considered. 
Following BIB003 , a tree valuation monoid is a tuple (S, +, 0, Val) where (S, +, 0) is a commutative monoid and Val : UT S → S is a valuation function from S-trees to S. The value of a run ρ is now computed by applying this valuation function to the R-tree wgt • ρ. The final semantics is obtained as above by summing the values of accepting runs. Therefore, the semantics in tree valuation monoids is obtained from the abstract semantics with the aggregation operator For instance, when (S, +, ×, 0, 1) is a semiring, we obtain a tree valuation monoid with the postfix product defined in Example 28. We refer to BIB003 for other examples of weighted ranked tree automata, including the interesting case of multi-operator monoids which is also studied in BIB002 . A further extension for unranked trees has recently been considered in BIB004 . Further extensions like tree valuation structures, where the sum in tree valuation monoids is replaced by a more general operator F as for words, are also possible, though not considered in the literature so far. Our results will apply in this context as well.
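The average of the weights, quoted above as a typical valuation function, gives a concrete Val : UT S → S for a tree valuation monoid over the reals. A minimal sketch (the tree encoding (weight, [subtrees]) is our own; pairing this Val with, e.g., max as the monoid sum is our illustrative assumption, not a definition taken from the survey):

```python
def tree_average(tree):
    """Val(nu) = average of all node weights of the weight tree nu."""
    def collect(t):
        w, children = t
        ws = [w]
        for child in children:
            ws.extend(collect(child))
        return ws
    ws = collect(tree)
    return sum(ws) / len(ws)
```

For a root of weight 3 with two leaf children of weights 1 and 2, Val returns (3 + 1 + 2)/3 = 2.0; the aggregation then sums (or maximises) these values over accepting runs.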
A unifying survey on weighted logics and weighted automata <s> Conclusion <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Conclusion <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. 
<s> BIB002 </s> We proved the meta-theorem relating weighted automata and core-wMSO at the level of multisets of weight structures for words and trees. However, the definitions and techniques developed in this article can easily be adapted for other structures like nested words, Mazurkiewicz traces, etc. The logical equivalence between restricted wMSO and core-wMSO at the concrete level is established for words in Section 5. An analogous result can be obtained for trees with a similar logical reasoning. In particular, this allows for an extension to trees of the valuation structures of BIB002 . In this article, our meta-theorem is only stated and proved for finite structures. At the level of the concrete semantics, equivalences between weighted automata and weighted logics have been extended to infinite structures, such as words or trees over semirings BIB001 , valuation monoids or valuation structures BIB002 . An extension of our meta-theorem to infinite structures capturing these results is a natural open problem. Finite multisets of weight structures are not adequate anymore since automata may exhibit infinitely many runs on a given input structure. The abstract semantics should ideally distinguish between countably many and uncountably many runs.
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> A definition of the concept 'intuitionistic fuzzy set' (IFS) is given, the latter being a generalization of the concept 'fuzzy set' and an example is described. Various properties are proved, which are connected to the operations and relations over sets, and with modal and topological operators, defined over the set of IFS's. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract New results on intuitionistic fuzzy sets are introduced. Two news operators on intuitionistic fuzzy sets are defined and their basic properties are studied. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We briefly describe the Ordered Weighted Averaging (OWA) operator and discuss a methodology for learning the associated weighting vector from observational data. We then introduce a more general type of OWA operator called the Induced Ordered Weighted Averaging (IOWA) Operator. These operators take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are then aggregated. A number of different aggregation situations have been shown to be representable in this framework. We then show how this tool can be used to represent different types of aggregation models. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In this paper, two uncertain linguistic aggregation operators called uncertain linguistic ordered weighted averaging (ULOWA) operator and uncertain linguistic hybrid aggregation (ULHA) operator are proposed. An approach to multiple attribute group decision making with uncertain linguistic information is developed based on the ULOWA and the ULHA operators. 
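The OWA and induced OWA (IOWA) operators recalled above admit a compact sketch in their crisp form. The helper names below are ours; the definitions follow the standard ones: OWA sorts the arguments before weighting, while IOWA orders argument pairs by an order-inducing first component and aggregates the second components.

```python
def owa(weights, args):
    """Ordered Weighted Averaging: sort the arguments in descending
    order, then take the weighted sum with the position-based weights."""
    assert len(weights) == len(args) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * a for w, a in zip(weights, sorted(args, reverse=True)))

def iowa(weights, pairs):
    """Induced OWA: each argument is a pair (order_inducing_value, value);
    the first component only induces the ordering, the second component
    is what gets aggregated."""
    ranked = sorted(pairs, key=lambda p: p[0], reverse=True)
    return sum(w * v for w, (_, v) in zip(weights, ranked))

print(owa([0.4, 0.3, 0.2, 0.1], [0.6, 0.9, 0.2, 0.7]))  # 0.71 (up to rounding)
print(iowa([0.5, 0.5], [(3, 10), (1, 2)]))              # 6.0
```

Note that with weights (1, 0, …, 0) OWA reduces to the maximum, with (0, …, 0, 1) to the minimum, and with uniform weights to the arithmetic mean; this reweighting of ranked positions is what the many linguistic and intuitionistic variants surveyed here generalize.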
Finally, a practical application of the developed approach to the problem of evaluating university faculty for tenure and promotion is given. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In this paper, we define various generalized induced linguistic aggregation operators, including generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are the special cases of the GILOWA operator, induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are the special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are the special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are the special cases of the GILOWG operator. 
<s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> The weighted geometric (WG) operator and the ordered weighted geometric (OWG) operator are two common aggregation operators in the field of information fusion. But these two aggregation operators are usually used in situations where the given arguments are expressed as crisp numbers or linguistic values. In this paper, we develop some new geometric aggregation operators, such as the intuitionistic fuzzy weighted geometric (IFWG) operator, the intuitionistic fuzzy ordered weighted geometric (IFOWG) operator, and the intuitionistic fuzzy hybrid geometric (IFHG) operator, which extend the WG and OWG operators to accommodate the environment in which the given arguments are intuitionistic fuzzy sets which are characterized by a membership function and a non-membership function. Some numerical examples are given to illustrate the developed operators. Finally, we give an application of the IFHG operator to multiple attribute decision making based on intuitionistic fuzzy sets. <s> BIB006 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators. 
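The intuitionistic fuzzy weighted averaging and the score/accuracy comparison just described can be sketched as follows. This is an illustrative reading using the commonly cited algebraic operational laws on intuitionistic fuzzy values (μ, ν) pairs with μ + ν ≤ 1; the function names are ours, not code from the cited works.

```python
from math import prod

def ifwa(weights, ifvs):
    """Intuitionistic fuzzy weighted averaging of (mu, nu) pairs, using
    the commonly cited algebraic operations:
        mu = 1 - prod((1 - mu_i) ** w_i),  nu = prod(nu_i ** w_i)."""
    mu = 1.0 - prod((1.0 - m) ** w for w, (m, _) in zip(weights, ifvs))
    nu = prod(n ** w for w, (_, n) in zip(weights, ifvs))
    return mu, nu

def score(ifv):
    """Score function s = mu - nu; a larger score means a larger value."""
    m, n = ifv
    return m - n

def accuracy(ifv):
    """Accuracy function h = mu + nu, used to break score ties."""
    m, n = ifv
    return m + n
```

A quick sanity check is idempotency: averaging identical values returns that value, e.g. `ifwa([0.5, 0.5], [(0.6, 0.3), (0.6, 0.3)])` gives back (0.6, 0.3) up to floating point, and `score((0.6, 0.3))` is 0.3.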
<s> BIB007 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> This paper presents a new interpretation of intuitionistic fuzzy sets in the framework of the Dempster-Shafer theory of evidence (DST). This interpretation makes it possible to represent all mathematical operations on intuitionistic fuzzy values as the operations on belief intervals. Such approach allows us to use directly the Dempster's rule of combination to aggregate local criteria presented by intuitionistic fuzzy values in the decision making problem. The usefulness of the developed method is illustrated with the known example of multiple criteria decision making problem. The proposed approach and a new method for interval comparison based on DST, allow us to solve multiple criteria decision making problem without intermediate defuzzification when not only criteria, but their weights are intuitionistic fuzzy values. <s> BIB008 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to the multiple attribute group decision making problems in which the attribute weights are unknown and the attribute values take the form of the intuitionistic linguistic numbers, an expanded technique for order preference by similarity to ideal solution (TOPSIS) method is proposed. Firstly, the definition of intuitionistic linguistic number and the operational laws are given and distance between intuitionistic linguistic numbers is defined. Then, the attribute weights are determined based on the ‘maximizing deviation method’ and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness. 
<s> BIB009 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making problems with linguistic information, some new decision analysis methods are proposed. Firstly, we develop three new aggregation operators: generalized 2-tuple weighted average (G-2TWA) operator, generalized 2-tuple ordered weighted average (G-2TOWA) operator and induced generalized 2-tuple ordered weighted average (IG-2TOWA) operator. Then, a method based on the IG-2TOWA and G-2TWA operators for multiple attribute group decision making is presented. In this approach, alternative appraisal values are calculated by the aggregation of 2-tuple linguistic information. Thus, the ranking of alternatives or selection of the most desirable alternative(s) is obtained by the comparison of 2-tuple linguistic information. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. <s> BIB010 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We study the induced generalized aggregation operators under intuitionistic fuzzy environments. Choquet integral and Dempster-Shafer theory of evidence are applied to aggregate intuitionistic fuzzy information and some new types of aggregation operators are developed, including the induced generalized intuitionistic fuzzy Choquet integral operators and induced generalized intuitionistic fuzzy Dempster-Shafer operators. Then we investigate their various properties and some of their special cases. Additionally, we apply the developed operators to financial decision making under intuitionistic fuzzy environments. Some extensions in interval-valued intuitionistic fuzzy situations are also pointed out.
<s> BIB011 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of intuitionistic uncertain linguistic variables, new group decision making methods have been developed. First, operational laws, expected value definitions, score functions and accuracy functions of intuitionistic uncertain linguistic variables are introduced. Then, an intuitionistic uncertain linguistic weighted geometric average (IULWGA) operator and an intuitionistic uncertain linguistic ordered weighted geometric (IULOWG) operator are developed. Furthermore, some desirable properties of these operators, such as commutativity, idempotency, monotonicity and boundedness, have been studied, and an intuitionistic uncertain linguistic hybrid geometric (IULHG) operator, which generalizes both the IULWGA operator and the IULOWG operator, was developed. Based on these operators, two methods for multiple attribute group decision making problems with intuitionistic uncertain linguistic information have been proposed. Finally, an illustrative example is given to verify the developed approaches and demonstrate their practicality and effectiveness. <s> BIB012 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We introduce a wide range of induced and linguistic generalized aggregation operators. First, we present the induced linguistic generalized ordered weighted averaging (ILGOWA) operator. It is a generalization of the OWA operator that uses linguistic variables, order inducing variables and generalized means in order to provide a more general formulation. 
One of its main results is that it includes a wide range of linguistic aggregation operators such as the induced linguistic OWA (ILOWA), the induced linguistic OWG (ILOWG) and the linguistic generalized OWA (LGOWA) operator. We further generalize the ILGOWA operator by using quasi-arithmetic means obtaining the induced linguistic quasi-arithmetic OWA (Quasi-ILOWA) operator and by using hybrid averages forming the induced linguistic generalized hybrid average (ILGHA) operator. We also present a further extension with Choquet integrals. We call it the induced linguistic generalized Choquet integral aggregation (ILGCIA). We end the paper with an application of the new approach in a linguistic group decision making problem. <s> BIB013 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic linguistic numbers, the group decision making methods based on some generalized dependent aggregation operators are developed. Firstly, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic generalized dependent ordered weighted average (ILGDOWA) operator and an intuitionistic linguistic generalized dependent hybrid weighted aggregation (ILGDHWA) operator are developed. Furthermore, some desirable properties of the ILGDOWA operator, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILGDOWA and ILGDHWA operators, the approach to multiple attribute group decision making with intuitionistic linguistic information is proposed. 
Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB014 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In this paper, a new concept of interval-valued intuitionistic linguistic number IVILN, which is characterised by a linguistic term, an interval-valued membership degree and an interval-valued non-membership degree, is first introduced. Then, score function, accuracy function and some multiplicative operational laws of IVILNs are defined. Based on these two functions, a simple approach for the comparison between two IVILNs is presented. Based on these operational laws, some new geometric aggregation operators, such as the interval-valued intuitionistic linguistic weighted geometric IVILWG operator, interval-valued intuitionistic linguistic ordered weighted geometric IVILOWG operator and interval-valued intuitionistic linguistic hybrid geometric IVILHG operator, are proposed, and some desirable properties of these operators are established. Furthermore, by using the IVILWG operator and the IVILHG operator, a group decision making approach, in which the criterion values are IVILNs and the criterion weight information is known completely, is developed. Finally, an illustrative example is given to demonstrate the feasibility and effectiveness of the developed method. <s> BIB015 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of crisp numbers, and attribute values take the form of interval-valued intuitionistic uncertain linguistic variables, some new group decision making analysis methods are developed. 
Firstly, some operational laws, expected value and accuracy function of interval-valued intuitionistic uncertain linguistic variables are introduced. Then, an interval-valued intuitionistic uncertain linguistic weighted geometric average (IVIULWGA) operator and an interval-valued intuitionistic uncertain linguistic ordered weighted geometric (IVIULOWG) operator have been developed. Furthermore, some desirable properties of the IVIULWGA operator and the IVIULOWG operator, such as commutativity, idempotency and monotonicity, have been studied, and an interval-valued intuitionistic uncertain linguistic hybrid geometric (IVIULHG) operator which generalizes both the IVIULWGA operator and the IVIULOWG operator, was developed. Based on these operators, an approach to multiple attribute group decision making with interval-valued intuitionistic uncertain linguistic information has been proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB016 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute decision making (MADM) problems, in which attribute values take the form of intuitionistic uncertain linguistic information, a new decision-making method based on the intuitionistic uncertain linguistic weighted Bonferroni OWA operator is developed. First, the score function, accuracy function, and comparative method of the intuitionistic uncertain linguistic numbers are introduced. Then, an intuitionistic uncertain linguistic Bonferroni OWA (IULBOWA) operator and an intuitionistic uncertain linguistic weighted Bonferroni OWA (IULWBOWA) operator are developed. Furthermore, some properties of the IULBOWA and IULWBOWA operators, such as commutativity, idempotency, monotonicity, and boundedness, are discussed. At the same time, some special cases of these operators are analyzed. 
Based on the IULWBOWA operator, the multiple attribute decision-making method with intuitionistic uncertain linguistic information is proposed. Finally, an illustrative example is given to illustrate... <s> BIB017 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of crisp numbers, and attribute values take the form of intuitionistic uncertain linguistic variables, some new intuitionistic uncertain linguistic Heronian mean operators, such as intuitionistic uncertain linguistic arithmetic Heronian mean (IULAHM) operator, intuitionistic uncertain linguistic weighted arithmetic Heronian mean (IULWAHM) operator, intuitionistic uncertain linguistic geometric Heronian mean (IULGHM) operator, and intuitionistic uncertain linguistic weighted geometric Heronian mean (IULWGHM) operator, are proposed. Furthermore, we have studied some desirable properties of these operators and discussed some special cases with respect to the different parameter values in these operators. Moreover, with respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic uncertain linguistic variables, some approaches based on the developed operators are proposed. Finally, an illustrative example has been given to show the steps of the developed methods and to discuss the influences of different parameters on the decision-making results.
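Underlying the Bonferroni OWA and Heronian mean operators described above are the classical crisp Bonferroni and Heronian means, which capture interrelationships between pairs of arguments; the intuitionistic uncertain linguistic versions lift these base aggregations to linguistic-term/membership triples. A sketch of the crisp forms (names are ours):

```python
from itertools import permutations, combinations_with_replacement

def bonferroni_mean(args, p=1.0, q=1.0):
    """Classical Bonferroni mean BM^{p,q}: average a_i^p * a_j^q over all
    ordered pairs with i != j, then take the (p+q)-th root."""
    n = len(args)
    total = sum(args[i] ** p * args[j] ** q
                for i, j in permutations(range(n), 2))
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

def heronian_mean(args, p=1.0, q=1.0):
    """Generalized Heronian mean HM^{p,q}: average a_i^p * a_j^q over all
    pairs with i <= j (diagonal included), then take the (p+q)-th root."""
    n = len(args)
    total = sum(args[i] ** p * args[j] ** q
                for i, j in combinations_with_replacement(range(n), 2))
    return (total * 2.0 / (n * (n + 1))) ** (1.0 / (p + q))
```

Both means are idempotent (all arguments equal to a gives a) and, unlike a plain weighted average, the cross terms a_i^p a_j^q let each argument's contribution depend on the others, which is why these operators are favoured when attributes are interdependent.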
<s> BIB018 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MADM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, expected value, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB019 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this study a generated admissible order between interval-valued intuitionistic uncertain linguistic numbers using two continuous functions is introduced. Then, two interval-valued intuitionistic uncertain linguistic operators called the interval-valued intuitionistic uncertain linguistic Choquet averaging (IVIULCA) operator and the interval-valued intuitionistic uncertain linguistic Choquet geometric mean (IVIULCGM) operator are defined, which consider the interactive characteristics among elements in a set. 
In order to overall reflect the correlations between them, we further define the generalized Shapley interval-valued intuitionistic uncertain linguistic Choquet averaging (GS-IVIULCA) operator and the generalized Shapley interval-valued intuitionistic uncertain linguistic Choquet geometric mean (GS-IVIULCGM) operator. Moreover, if the information about the weights of experts and attributes is incompletely known, the models for the optimal fuzzy measures on expert set and attribute set are established, respectively. Finally, a method to multi-attribute group decision making under interval-valued intuitionistic uncertain linguistic environment is developed, and an example is provided to show the specific application of the developed procedure. <s> BIB020 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> The intuitionistic uncertain linguistic variables are the good tools to express the fuzzy information, and the TODIM (an acronym in Portuguese of Interactive and Multicriteria Decision Making) method can consider the bounded rationality of decision makers based on the prospect theory. However, the classical TODIM method can only process the multiple attribute decision making (MADM) problems where the attribute values take the form of crisp numbers. In this paper, we will extend the TODIM method to the multiple attribute group decision making (MAGDM) with intuitionistic uncertain linguistic information. Firstly, the definition, characteristics, expectation, comparison method and distance of intuitionistic uncertain linguistic variables are briefly introduced, and the steps of the classical TODIM method for MADM problems are presented. 
Then, on the basis of the classical TODIM method, the extended TODIM method is proposed to deal with MAGDM problems with intuitionistic uncertain linguistic variables, and its significant characteristic is that it can fully consider the decision makers' bounded rationality, which is a real aspect of decision making. Finally, an illustrative example is proposed to verify the developed approach. <s> BIB021 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems, in which the attribute weights take the form of real numbers, and the attribute values take the form of intuitionistic fuzzy linguistic variables, a decision analysis approach is proposed. In this paper, we develop an intuitionistic fuzzy linguistic induced OWA (IFLIOWA) operator and analyze the properties of it by utilizing some operational laws of intuitionistic fuzzy linguistic variables. A new method based on the IFLIOWA operator for multiple attribute group decision making (MAGDM) is presented. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. <s> BIB022 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> The intuitionistic uncertain fuzzy linguistic variable can easily express the fuzzy information, and the power average (PA) operator is a useful tool which provides more versatility in the information aggregation procedure. At the same time, Einstein operations are a family of t-norms and t-conorms which can be used to perform the corresponding intersections and unions of intuitionistic fuzzy sets (IFSs). In this paper, we will combine the PA operator and Einstein operations in the intuitionistic uncertain linguistic environment, and propose some new PA operators.
Firstly, the definition and some basic operations of the intuitionistic uncertain linguistic number (IULN), the power aggregation (PA) operator and Einstein operations are introduced. Then, we propose the intuitionistic uncertain linguistic fuzzy powered Einstein averaging (IULFPEA) operator, intuitionistic uncertain linguistic fuzzy powered Einstein weighted (IULFPEWA) operator, intuitionistic uncertain linguistic fuzzy Einstein geometric (IULFPEG) operator and intuitionistic uncertain linguistic fuzzy Einstein weighted geometric (IULFPEWG) operator, and discuss some properties of them in detail. Furthermore, we develop the decision making methods for multi-attribute group decision making (MAGDM) problems with intuitionistic uncertain linguistic information and give the detailed decision steps. At last, an illustrative example is given to show the process of decision making and the effectiveness of the proposed method. <s> BIB023 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Evaluating the design patterns of the Micro-Air vehicle is a multiple attribute decision making problem. In this paper, we introduce the concept of interval-valued intuitionistic uncertain linguistic sets and propose the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (I-IVIULOWA) operator on the basis of the interval-valued intuitionistic uncertain linguistic ordered weighted average (IVIULOWA) operator and IOWA operator. We also study some desirable properties of the proposed operator, such as commutativity, idempotency and monotonicity. Then, we utilize the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (I-IVIULOWA) operator to solve the multiple attribute decision making problems with interval-valued intuitionistic uncertain linguistic information. Finally, an illustrative example for evaluating the design patterns of the Micro-Air vehicle is given.
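The power average (PA) operator combined with Einstein operations above builds on Yager's crisp power average, in which each argument is weighted by how much the other arguments "support" it. A sketch of that crisp base form (function names are ours; the intuitionistic uncertain linguistic variants replace the arithmetic by Einstein operations on linguistic triples):

```python
def support(a, b):
    """Support function Supp(a, b) = 1 - |a - b|: closer arguments
    support each other more (assumes arguments scaled to [0, 1])."""
    return 1.0 - abs(a - b)

def power_average(args):
    """Yager's power average: argument a_i gets weight 1 + T(a_i),
    where T(a_i) sums the support a_i receives from all other arguments,
    and the weights are normalized to sum to 1."""
    t = [sum(support(a, b) for j, b in enumerate(args) if j != i)
         for i, a in enumerate(args)]
    total_weight = sum(1.0 + ti for ti in t)
    return sum((1.0 + ti) * a for ti, a in zip(t, args)) / total_weight
```

The effect is that arguments close to the bulk of the data reinforce each other, while an outlying argument receives less support and hence less weight, which makes PA-style operators comparatively robust to extreme assessments.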
<s> BIB024 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We point out the issues of the operational laws on IIULSs in the reference. We define some new operational laws that eliminate the existing issues. The expected and accuracy functions are defined to rank IIULSs. Two operators on IIULSs are defined, and optimal models are established. An approach is developed, and the associated example is offered. Interval intuitionistic uncertain linguistic sets are an important generalization of fuzzy sets, which cope well with the experts' qualitative preferences as well as reflect the interval membership and non-membership degrees of the uncertain linguistic term. This paper first points out the issues of the operational laws on interval intuitionistic uncertain linguistic numbers in the literature, and then defines some alternative ones. To consider the relationship between interval intuitionistic uncertain linguistic sets, the expectation and accuracy functions are defined. To study the application of interval intuitionistic uncertain linguistic sets, two symmetrical interval intuitionistic uncertain linguistic hybrid aggregation operators are defined. Meanwhile, models for the optimal weight vectors are established, by which the optimal weighting vector can be obtained. As a further development, an approach to multi-attribute decision making under the interval intuitionistic uncertain linguistic environment is developed, and the associated example is provided to demonstrate the effectiveness and practicality of the procedure.
<s> BIB025 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attributes are dependent and the attribute values take the forms of intuitionistic linguistic numbers and intuitionistic uncertain linguistic numbers, this paper investigates two novel MAGDM methods based on Maclaurin symmetric mean (MSM) aggregation operators. First, the Maclaurin symmetric mean is extended to intuitionistic linguistic environment and two new aggregation operators are developed for aggregating the intuitionistic linguistic information, such as the intuitionistic linguistic Maclaurin symmetric mean (ILMSM) operator and the weighted intuitionistic linguistic Maclaurin symmetric mean (WILMSM) operator. Then, some desirable properties and special cases of these operators are discussed in detail. Furthermore, this paper also develops two new Maclaurin symmetric mean operators for aggregating the intuitionistic uncertain linguistic information, including the intuitionistic uncertain linguistic Maclaurin symmetric mean (IULMSM) operator and the weighted intuitionistic uncertain linguistic Maclaurin symmetric mean (WIULMSM) operator. Based on the WILMSM and WIULMSM operators, two approaches to MAGDM are proposed under intuitionistic linguistic environment and intuitionistic uncertain linguistic environment, respectively. Finally, two practical examples of investment alternative evaluation are given to illustrate the applications of the proposed methods. <s> BIB026 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Intuitionistic fuzzy set is capable of handling uncertainty with counterpart falsities which exist in nature. Proximity measure is a convenient way to demonstrate impractical significance of values of memberships in the intuitionistic fuzzy set. 
However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set but in the Zadeh’s fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for the intuitionistic fuzzy set and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. The applications of δ-equalities are important to fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in the previous works are also investigated. We apply the δ-equalities to the application of medical diagnosis to investigate a patient’s diseases from symptoms. The idea is using δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzified set with certain equality or similar degrees then combining them. Numerical examples are given to illustrate validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check the efficiency and applicability on real-world problems. The results obtained are also better in comparison with 10 existing diagnosis methods namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016) and Support Vector Machine (SVM). 
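The δ-equality idea above says that two sets are δ-equal when they agree to an extent of at least δ. The following sketch formalises this for finite universes, following Cai's fuzzy-set definition; bounding the membership and non-membership differences simultaneously for the intuitionistic case is our own illustrative assumption, and sets are represented simply as lists of (μ, ν) pairs over a common universe.

```python
def equality_degree(A, B):
    """Degree delta to which two intuitionistic fuzzy sets are equal.
    Assumption for illustration: delta = 1 - max over the universe of
    max(|mu_A - mu_B|, |nu_A - nu_B|), extending Cai's fuzzy-set
    delta-equality by also bounding the non-membership difference."""
    diffs = [max(abs(ma - mb), abs(na - nb))
             for (ma, na), (mb, nb) in zip(A, B)]
    return 1.0 - max(diffs)

def delta_equal(A, B, delta):
    """A and B are delta-equal if they are equal to an extent >= delta."""
    return equality_degree(A, B) >= delta
```

In the diagnosis application sketched in the abstract, such degrees can be used to group intuitionistic fuzzified symptom sets whose pairwise equality degree exceeds a chosen threshold before combining them.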
<s> BIB027 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this paper, we propose a new method for multiattribute decision making (MADM) using multiplication operations of interval-valued intuitionistic fuzzy values (IVIFVs) and the linear programming (LP) methodology. It can overcome the shortcomings of Chen and Huang's MADM method (2017), where Chen and Huang's MADM method has two shortcomings, i.e., (1) it gets an infinite number of solutions of the optimal weights of attributes when the summation values of some columns in the transformed decision matrix (TDM) are the same, resulting in the case that it obtains different preference orders (POs) of the alternatives, and (2) the PO of alternatives cannot be distinguished in some situations. <s> BIB028 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> ABSTRACTAccuracy functions proposed by various researchers fail to compare some interval-valued intuitionistic fuzzy sets (IVIFSs) correctly. In the present research paper, we propose an improved accuracy function to compare all comparable IVIFSs correctly. The use of proposed accuracy function is also proposed in a method for multi attribute group decision making (MAGDM) method with partially known attributes’ weight. Finally, the proposed MAGDM method is implemented on a real case study of evaluation teachers’ performance. Sensitivity analysis of this method is also done to show the effectiveness of the proposed accuracy function in MAGDM. 
<s> BIB029 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this paper, we propose a new autocratic multiattribute group decision making (AMAGDM) method for hotel location selection based on interval-valued intuitionistic fuzzy sets (IVIFSs), where the evaluating values of the attributes for alternatives and the weights of the attributes given by decision makers are represented by interval-valued intuitionistic fuzzy values (IVIFVs). The proposed method calculates the changing of the weights of the decision makers until the group consensus degree (GCD) of the decision makers is larger than or equal to a predefined threshold value. We also apply the proposed AMAGDM method to deal with the hotel location selection problem. The main contribution of this paper is that we propose a new AMAGDM method which is simpler than Wibowo's method (2013), where the drawback of Wibowo's method is that it is too complicated due to the fact that it adopts the concept of ideal solutions for determining the overall performance of each hotel location alternative with respect to all the selection criteria. The proposed AMAGDM method provides us with a very useful way for AMAGDM in interval-valued intuitionistic fuzzy environments. <s> BIB030 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In the process of multi-criteria decision making (MCDM), decision makers or experts usually exploit quantitative or qualitative methods to evaluate the comprehensive performance of all alternatives on each criterion. How the decision-makers or the experts make the evaluations relies on their professional knowledge and the actual performances on the criteria characters of the alternatives. 
However, because of both the objective complexity of decision making problem and the uncertainty of human subjective judgments, it is sometimes too hard to get the accurate evaluation information. Intuitionistic fuzzy set (IFS) is a useful tool to deal with uncertainty and fuzziness of complex problems. In this paper, we propose a new distance measure between IFSs and prove some of its useful properties. The experimental results show that the proposed distance measure between IFSs can overcome the drawbacks of some existing distance and similarity measures. Then based on the proposed distance measure, an extended intuitionistic fuzzy TOPSIS approach is developed to handle the MCDM problems. Finally, a practical application which is about credit risk evaluation of potential strategic partners is provided to demonstrate the extended intuitionistic fuzzy TOPSIS approach, and then it is compared with other current methods to further explain its effectiveness. <s> BIB031 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> A great number of real-world problems can be associated with multi-criteria decision-making. These problems are often characterized by a high degree of uncertainty. Intuitionistic fuzzy sets (IFSs) are a generalized form of an ordinal fuzzy set to deal with this natural uncertainty. In this paper, we propose a hybrid version of the intuitionistic fuzzy ELECTRE based on VIKOR method, which was never considered before. The advantage and strengths of the intuitionistic fuzzy ELECTRE based on VIKOR method as decision aid technique and IFS as an uncertain framework make the proposed method a suitable choice in solving practical problems. Finally, a numerical example for engineering manager choice is given to illustrate the application of proposed method. 
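As a concrete point of reference for such distance-based comparisons between IFSs, the classical normalized Hamming distance (in the Szmidt–Kacprzyk style, using membership, non-membership and hesitancy) can be sketched as follows. This is an illustrative standard measure, not the new distance proposed in the cited paper.

```python
def ifs_hamming_distance(a, b):
    """Normalized Hamming distance between two IFSs.

    `a` and `b` are lists of (membership, non-membership) pairs over the
    same finite universe; hesitancy is pi = 1 - u - v. Classical
    Szmidt-Kacprzyk form, shown for illustration only.
    """
    n = len(a)
    total = 0.0
    for (ua, va), (ub, vb) in zip(a, b):
        pa, pb = 1 - ua - va, 1 - ub - vb
        total += abs(ua - ub) + abs(va - vb) + abs(pa - pb)
    return total / (2 * n)
```

The distance is 0 for identical IFSs and reaches 1 between a fully-member and a fully-non-member element.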
The paper also gives a special point of view on the research along IFSs: It can be viewed as a kind of factorial scalar theory in factor space, which helps the authors to complete the paper with clear ideas. <s> BIB032 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract This paper presents a location selection problem for a military airport using multiple criteria decision making methods. A real-world decision problem is presented and the decision criteria to evaluate alternative locations are specified. The objective is to identify the best location among candidate locations. Nine main criteria and thirty-three sub-criteria are identified by taking into account not only requirements for a military airport such as climate, geography, infrastructure, security, and transportation but also its environmental and social effects. The criteria weights are determined using AHP. Ranking and selection processes of four alternatives are carried out using PROMETHEE and VIKOR methods. Furthermore, the results of PROMETHEE and VIKOR methods are compared with the results of COPRAS, MAIRCA and MABAC methods. All methods suggest the same alternative as the best and produce the same results on the rankings of the location alternatives. One-way sensitivity analysis is carried out on the main criteria weights for all methods. Statistically significant correlations are observed between the rankings of the methods. Therefore, it is concluded that PROMETHEE, VIKOR, COPRAS, MAIRCA and MABAC methods can be successfully used for location selection problems and in general, for other types of multi-criteria decision problems with finite number of alternatives. <s> BIB033 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Edges of the image play an important role in the field of digital image processing and computer vision.
The edges reduce the amount of data, extract useful information from the image and preserve significant structural properties of an input image. Further, these edges can be used for object and facial expression detection. In this paper, we will propose new intuitionistic fuzzy divergence and entropy measures with its proof of validity for intuitionistic fuzzy sets. A new and significant technique has been developed for edge detection. To check the robustness of the proposed method, obtained results are compared with Canny, Sobel and Chaira methods. Finally, mean square error (MSE) and peak signal-to-noise ratio (PSNR) have been calculated and PSNR values of proposed method are always equal or greater than the PSNR values of existing methods. The detected edges of the various sample images are found to be true, smooth and sharpen. <s> BIB034 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this paper, a novel method is proposed to support the process of solving multi-objective nonlinear programming problems subject to strict or flexible constraints. This method assumes that the practical problems are expressed in the form of geometric programming problems. Integrating the concept of intuitionistic fuzzy sets into the solving procedure, a rich structure is provided which can include the inevitable uncertainties into the model regarding different objectives and constraints. Another important feature of the proposed method is that it continuously interacts with the decision maker. Thus, the decision maker could learn about the problem, thereby a compromise solution satisfying his/hers preferences could be obtained. Further, a new two-step geometric programming approach is introduced to determine Pareto-optimal compromise solutions for the problems defined during different iterative steps. 
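The MSE and PSNR figures used above to benchmark edge detectors follow the standard definitions; a minimal pure-Python sketch (grayscale images as lists of rows) looks like this.

```python
import math

def mse(img1, img2):
    """Mean squared error between two equally sized grayscale images."""
    h, w = len(img1), len(img1[0])
    return sum((img1[i][j] - img2[i][j]) ** 2
               for i in range(h) for j in range(w)) / (h * w)

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to reference."""
    e = mse(img1, img2)
    return float("inf") if e == 0 else 10 * math.log10(max_val ** 2 / e)
```

Identical images give an MSE of 0 and hence an infinite PSNR, which is why the cited comparison reports "equal or greater" PSNR values as better.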
Employing the compensatory operator of “weighted geometric mean”, the first step concentrates on finding an intuitionistic fuzzy efficient compromise solution. In the cases where one or more intuitionistic fuzzy objectives are fully achieved, a second geometric programming model is developed to improve the resulting compromise solution. Otherwise, it is concluded that the resulting solution vectors simultaneously satisfy both of the conditions of intuitionistic fuzzy efficiency and Pareto-optimality. The models forming the proposed solving method are developed in a way such that, the posynomiality of the defined problem is not affected. This property is of great importance when solving nonlinear programming problems. A numerical example of multi-objective nonlinear programming problem is also used to provide a better understanding of the proposed solving method. <s> BIB035 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In the present paper we introduce the classes of sequence stcIFN, stc0IFN and st∞IFN of statistically convergent, statistically null and statistically bounded sequences of intuitionistic fuzzy number based on the newly defined metric on the space of all intuitionistic fuzzy numbers (IFNs). We study some algebraic and topological properties of these spaces and prove some inclusion relations too. <s> BIB036 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> An interval-valued intuitionistic uncertain linguistic set (IVIULS) combines the ambiguity, fuzziness as well as indeterminacy in real-life predicaments due to intricacy of subjective nature of human thoughts and easily express fuzzy information. 
Technique for order preference by similarity to an ideal solution (TOPSIS) is one of the eminent traditional distance measure-based approaches for multi-criteria group decision-making (MCGDM) problems and has widespread applications. This study aims to develop the TOPSIS method for MCGDM problems under the IVIUL environment. Firstly, some basic operational laws and aggregation operators of IVIULS are discussed. A novel distance measure of IVIULEs is also investigated. An illustrative example of an evaluation problem is also taken to clarify the developed methodology and to reveal its efficiency with a comparative analysis of the proposed method. <s> BIB037 | Due to the increasing complexity of decision-making problems, it is generally difficult to express criteria values of alternatives by exact numbers. Zadeh originally proposed the fuzzy set (FS) theory, which is an effective tool in dealing with fuzzy information. However, it is not suitable to handle information with non-membership. As the generalization of FS, the intuitionistic fuzzy set (IFS) introduced by BIB001 BIB002 has a membership degree (MD), a non-membership degree (NMD) and a hesitancy degree (HD), which can further overcome the drawbacks of FS. Now, a large number of methods based on IFS have been applied in a number of areas.
Up to date, many contributions have concentrated on decision-making techniques based on IFSs, which come from three domains: the theory of foundations, for instance, operational rules BIB028 BIB008 BIB024 , comparative approaches BIB029 , distance and similarity measures BIB002 , likelihood , ranking functions , consensus degree BIB030 , proximity measures BIB027 and so on; the extended multicriteria decision-making (MCDM) approaches for IFS, such as TOPSIS BIB031 , ELECTRE BIB032 , VIKOR BIB033 , TODIM , entropy BIB034 and other methods, such as the Choquet integral (CI) , multi-objective linear programming or multi-objective nonlinear programming (NLP) BIB035 , the Decision-Making Trial and Evaluation Laboratory , statistically convergent sequence spaces BIB036 and so on; and the MCDM techniques based on aggregation operators (AOs) of IFS, which are superior to the traditional MCDM techniques because they can acquire the comprehensive values of alternatives by aggregating all attribute values and then rank the alternatives. However, with increasing uncertainty and complexity, the IFS cannot depict uncertain information comprehensively and accurately in circumstances in which the MD and NMD cannot be expressed as real values. For the sake of adequately expressing fuzzy and uncertain information in the real process of decision making, Zadeh first proposed the concept of the linguistic variable (LV ) and defined a discrete linguistic term set (LTS) , that is, variables whose evaluation values are not real and exact numbers but linguistic terms, such as "very low," "low," "fair," "high," "very high," etc. Obviously, the decision maker can more easily express his/her opinions and preferences by selecting the matching linguistic terms from the LTS. So, based on the IFS and the LTS, a novel solution is that the MD and NMD are denoted by linguistic terms, which is called the intuitionistic linguistic fuzzy set (ILFS).
As a generalization of IFS, LV and LTS, the ILFS can handle fuzzy and uncertain information more adequately than IFS, LV and LTS. Since its appearance, the ILFS has attracted more and more attention. Based on the ILFS, different forms of ILFS have been developed and some basic operational rules of ILFS have been defined, such as the intuitionistic uncertain linguistic set (IULS) BIB012 BIB014 , the interval-valued intuitionistic uncertain linguistic set (IVIULS) BIB015 BIB025 and the intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) (Herrera and Martínez, 2000a, b, 2012) . AOs of ILFS are a new branch of ILFS research, which is a meaningful and significant research issue and has attracted more and more attention. For example, there are some basic intuitionistic linguistic (IL) fuzzy AOs, such as the intuitionistic uncertain linguistic weighted geometric mean (IULWGM) operator BIB012 , the ordered intuitionistic uncertain linguistic weighted geometric mean (OIULWGM) operator BIB012 , the interval-valued IULWGM (GIULWGM) operator BIB016 and the interval-valued OIULWGM (GOIULWGM) operator BIB016 ; the extended MCDM approaches for IUFS, such as the extended TOPSIS (ETOPSIS) approaches BIB009 BIB037 BIB010 , the extended TODIM (ETODIM) approaches BIB021 and the extended VIKOR (EVIKOR) approach ; some IL fuzzy AOs considering the interrelationships between criteria, such as the IUL Bonferroni OWM (IULBOWM) operator BIB017 , the weighted IUL Bonferroni OWM (WIULBOWM) operator BIB017 , the IUL arithmetic Heronian mean (IULAHM) operator BIB018 , the IUL geometric Heronian mean (IULGHM) operator BIB018 , the weighted IUL arithmetic Heronian mean (WIULAHM) operator BIB018 , the weighted IUL geometric Heronian mean (WIULGHM) operator BIB018 , the IUL Maclaurin symmetric mean (IULMSM) operator BIB026 and the weighted IULMSM (WIULMSM) operator BIB026 ; generalized intuitionistic linguistic fuzzy aggregation operators, such as the generalized IL dependent ordered weighted mean (DOWM) (GILDOWM) operator BIB014 BIB019 and the generalized IL dependent hybrid weighted mean (DHWM) (GILDHWM) operator
BIB014 BIB019 ; IL fuzzy AOs based on CI BIB020 ; and induced IL fuzzy AOs BIB019 BIB020 BIB022 BIB003 BIB005 BIB011 BIB013 , such as the IFL induced ordered weighted mean (IFLIOWM) operator BIB019 BIB020 and the IFL induced ordered weighted geometric mean (IFLIOWGM) operator BIB019 BIB020 . To understand and learn these AOs and decision-making methods better and more conveniently, it is necessary to make an overview of intuitionistic linguistic fuzzy information aggregation techniques and their applications. The rest of this paper is organized as follows: in Section 2, we review the basic concepts and operational rules of IFS, LTS, intuitionistic linguistic set (ILS), IULS and IVIULS. In Section 3, we review, summarize, analyze and discuss some kinds of AOs for ILS, IULS and IVIULS. At the same time, we divide the AOs into categories. In Section 4, we mainly review the applications in dealing with a variety of real and practical MCDM or multicriteria group decision-making (MCGDM) problems. In Section 5, we point out some possible development directions for future research. In Section 6, we discuss the conclusions. 2. Basic concepts and operations 2.1 The intuitionistic fuzzy set Definition 1. BIB007 Let E = {ε 1 , ε 2 , …, ε n } be a nonempty set; an IFS R in E is given by R = {〈ε, u R (ε), v R (ε)〉 | ε∈E}, where u R : E→[0, 1] and v R : E→[0, 1], with the condition 0 ⩽ u R (ε)+v R (ε) ⩽ 1, ∀ε∈E. The numbers u R (ε) and v R (ε) denote, respectively, the MD and NMD of the element ε to E.
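Definition 1 translates directly into code: an intuitionistic fuzzy value is a pair (u, v) with u, v ∈ [0, 1] and u + v ⩽ 1, and the hesitancy (intuitionistic index) is π = 1 − u − v. A minimal sketch with illustrative function names:

```python
def is_valid_ifn(u, v):
    """Check the IFS constraint of Definition 1: u, v in [0, 1] and u + v <= 1."""
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 and u + v <= 1.0

def hesitancy(u, v):
    """Hesitancy (intuitionistic index) pi = 1 - u - v."""
    return 1.0 - u - v
```

For an ordinary fuzzy value v = 1 − u, so the hesitancy vanishes; the IFS keeps it as an explicit degree of indeterminacy.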
For the given element ε, 〈u R (ε), v R (ε)〉 is called an intuitionistic fuzzy number (IFN), and for convenience we can use r̃ = (u r , v r ) to denote an IFN, which meets the conditions u r , v r ∈ [0, 1] and u r + v r ⩽ 1. Let r̃ = (u r , v r ) and t̃ = (u t , v t ) be two IFNs and δ ⩾ 0; then the operations of IFNs are defined as follows BIB007 . For relieving information loss in decision making, Xu BIB005 BIB011 extended the discrete linguistic set S = {s 0 , s 1 , …, s m } to the continuous linguistic set S̄ = {s l | l ∈ [0, t]}. For any LVs s x , s y ∈ S̄, the operations of LVs can be defined as follows: Definition 2. BIB006 The numbers u R (ε) and v R (ε) denote, respectively, the MD and NMD of the element ε to the linguistic index s φ(ε) . In addition, π(ε) = 1−u R (ε)−v R (ε), ∀ε∈E, denotes the ID of the element ε to E. It is evident that 0 ⩽ π(ε) ⩽ 1, ∀ε∈E. For the given element ε, 〈s φ(ε) , (u R (ε), v R (ε))〉 is called an intuitionistic linguistic fuzzy number (ILFN), and for convenience we can use ẽ = 〈s φ(e) , (u(e), v(e))〉 to denote an ILFN. It is easy to see that the operation rules (15)-(18) have a limitation: it is not assured that the ULVs obtained by calculation are lower than the maximum term s t . It is obvious that the upper and lower limits can all be greater than s 6 , which is the largest term of S. For the sake of overcoming the above limitation, some literatures give new modified operational laws for ULVs. Let ṡ 1 = [s j 1 , s k 1 ] and ṡ 2 = [s j 2 , s k 2 ] be ULVs; then the operations of ULVs are defined as follows: Definition 4. BIB012 The numbers u R (ε) and v R (ε) denote, respectively, the MD and NMD of the element ε to the linguistic index [s φ(ε) , s ϑ(ε) ]. In addition, π(ε) = 1−u R (ε)−v R (ε), ∀ε ∈ E, denotes the ID of the element ε to E. It is evident that 0 ⩽ π(ε) ⩽ 1, ∀ε∈E.
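The operations of IFNs referenced above can be restated in their standard algebraic form due to Atanassov and Xu; this is a sketch of those widely used rules rather than a reproduction of the survey's own equation list.

```python
def ifn_add(r, t):
    """IFN 'sum': (u_r + u_t - u_r*u_t, v_r*v_t) -- algebraic t-conorm/t-norm."""
    (ur, vr), (ut, vt) = r, t
    return (ur + ut - ur * ut, vr * vt)

def ifn_mul(r, t):
    """IFN 'product': (u_r*u_t, v_r + v_t - v_r*v_t)."""
    (ur, vr), (ut, vt) = r, t
    return (ur * ut, vr + vt - vr * vt)

def ifn_scale(delta, r):
    """delta * r = (1 - (1-u)^delta, v^delta), for delta >= 0."""
    u, v = r
    return (1 - (1 - u) ** delta, v ** delta)

def ifn_power(r, delta):
    """r^delta = (u^delta, 1 - (1-v)^delta), for delta >= 0."""
    u, v = r
    return (u ** delta, 1 - (1 - v) ** delta)
```

Note that all four rules keep the result inside the valid IFN region (u + v ⩽ 1), which is exactly the closure property that the ULV rules (15)-(18) criticized above fail to guarantee for linguistic indices.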
From BIB012 , BIB014 , BIB004 and , by examining some examples we can find shortcomings in the calculation process: it is not assured that the IULVs obtained by calculation are lower than the maximum term s t . To fill this gap, some modified operational laws for IULVs are presented in the literature; the modified operations of IULVs can be defined as follows: Besides, BIB023 defined the operations of IULVs based on the Einstein t-norm (TN) and t-conorm (TC), which can be used to demonstrate the corresponding intersections and unions of IULVs. |
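The Einstein t-norm and t-conorm underlying the BIB023 operations combine degrees in [0, 1] as follows. This sketch shows only the scalar Einstein pair itself, not the full IULV operational laws built on top of it.

```python
def einstein_tnorm(a, b):
    """Einstein product: T(a, b) = ab / (1 + (1-a)(1-b)), for a, b in [0, 1]."""
    return (a * b) / (1 + (1 - a) * (1 - b))

def einstein_tconorm(a, b):
    """Einstein sum: S(a, b) = (a + b) / (1 + ab), for a, b in [0, 1]."""
    return (a + b) / (1 + a * b)
```

Like the algebraic pair, the Einstein pair is closed on [0, 1] (e.g. S(1, 1) = 1), but it aggregates less aggressively: T(0.5, 0.5) = 0.2 versus 0.25 for the plain product.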
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Interval-value intuitionistic uncertain linguistic set (IVIULS) <s> In this paper, a new concept of interval-valued intuitionistic linguistic number IVILN, which is characterised by a linguistic term, an interval-valued membership degree and an interval-valued non-membership degree, is first introduced. Then, score function, accuracy function and some multiplicative operational laws of IVILNs are defined. Based on these two functions, a simple approach for the comparison between two IVILNs is presented. Based on these operational laws, some new geometric aggregation operators, such as the interval-valued intuitionistic linguistic weighted geometric IVILWG operator, interval-valued intuitionistic linguistic ordered weighted geometric IVILOWG operator and interval-valued intuitionistic linguistic hybrid geometric IVILHG operator, are proposed, and some desirable properties of these operators are established. Furthermore, by using the IVILWG operator and the IVILHG operator, a group decision making approach, in which the criterion values are IVILNs and the criterion weight information is known completely, is developed. Finally, an illustrative example is given to demonstrate the feasibility and effectiveness of the developed method. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Interval-value intuitionistic uncertain linguistic set (IVIULS) <s> We point out the issues of the operational laws on IIULSs in the reference. We define some new operational laws that eliminate the existing issues. The expected and accuracy functions are defined to rank IIULSs. Two operators on IIULSs are defined, and optimal models are established. An approach is developed, and the associated example is offered.
Interval intuitionistic uncertain linguistic sets are an important generalization of fuzzy sets, which well cope with the experts' qualitative preferences as well as reflect the interval membership and non-membership degrees of the uncertain linguistic term. This paper first points out the issues of the operational laws on interval intuitionistic uncertain linguistic numbers in the literature, and then defines some alternative ones. To consider the relationship between interval intuitionistic uncertain linguistic sets, the expectation and accuracy functions are defined. To study the application of interval intuitionistic uncertain linguistic sets, two symmetrical interval intuitionistic uncertain linguistic hybrid aggregation operators are defined. Meanwhile, models for the optimal weight vectors are established, by which the optimal weighting vector can be obtained. As a series of development, an approach to multi-attribute decision making under interval intuitionistic uncertain linguistic environment is developed, and the associated example is provided to demonstrate the effectiveness and practicality of the procedure. <s> BIB002 | Definition 5. BIB001 It is obvious that if u lR (ε) = u uR (ε) and v lR (ε) = v uR (ε) for each ε∈E, then the IVIULS reduces to the IULS. Furthermore, if s φ(ε) = s ϑ(ε) , then it reduces to the ILS. We know that if ẽ 1 and ẽ 2 are two IVIULVs, then they have the same properties as the IULVs above. Furthermore, two symmetrical IVL hybrid aggregation operators are introduced by BIB002 .
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of intuitionistic uncertain linguistic variables, new group decision making methods have been developed. First, operational laws, expected value definitions, score functions and accuracy functions of intuitionistic uncertain linguistic variables are introduced. Then, an intuitionistic uncertain linguistic weighted geometric average (IULWGA) operator and an intuitionistic uncertain linguistic ordered weighted geometric (IULOWG) operator are developed. Furthermore, some desirable properties of these operators, such as commutativity, idempotency, monotonicity and boundedness, have been studied, and an intuitionistic uncertain linguistic hybrid geometric (IULHG) operator, which generalizes both the IULWGA operator and the IULOWG operator, was developed. Based on these operators, two methods for multiple attribute group decision making problems with intuitionistic uncertain linguistic information have been proposed. Finally, an illustrative example is given to verify the developed approaches and demonstrate their practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) <s> In this paper, we introduce the Atanassov's intuitionistic linguistic ordered weighted averaging distance AILOWAD operator. It is a new aggregation operator that unifies distance measures and Atanassov's intuitionistic linguistic variables in the ordered weighted averaging OWA operator. 
The main advantage of this aggregation operator is that it is able to use the attitudinal character of the decision maker in the aggregation of the distance measures. Moreover, it is able to deal with uncertain situations where the information can be assessed with Atanassov's intuitionistic linguistic numbers. We study some of the main properties and different particular cases of the AILOWAD operator. We further generalize this approach by using quasi-arithmetic means, obtaining the quasi-arithmetic AILOWAD (Quasi-AILOWAD) operator. We also develop an application of the new approach to a multi-person decision making problem regarding the selection of strategies. Thus, we obtain the multi-person AILOWAD (MP-AILOWAD) operator. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) <s> Dealing with uncertainty is always a challenging problem, and different tools have been proposed to deal with it. Fuzzy sets were presented to manage situations in which experts have some membership value to assess an alternative. The fuzzy linguistic approach has been applied successfully to many problems. Linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in [−0.5, 0.5). Intuitionistic fuzzy sets were presented to manage situations in which experts have some membership and nonmembership value to assess an alternative. In this paper, the concept of an I2LI model is developed to provide a linguistic and computational basis to manage the situations in which experts assess an alternative in possible and impossible linguistic variables and their translation parameters.
A method to solve the group decision making problem based on intuitionistic 2-tuple linguistic information I2LI by the group of experts is formulated. Some operational laws on I2LI are introduced. Based on these laws, new aggregation operators are introduced to aggregate the collective opinion of decision makers. An illustrative example is given to show the practicality and feasibility of our proposed aggregation operators and group decision making method. <s> BIB003 | Definition 6. (Herrera and Martínez, 2000a, b, 2012) Let S = {s 0 , s 1 , …, s m } be an ordered linguistic label set. The symbolic translation between the 2-tuple linguistic representation and numerical values can be defined as follows: where Definition 7. BIB003 The numbers u R (ε) and v R (ε) denote, respectively, the MD and NMD of the element, where the weighted vector of ẽ 1 , ẽ 2 , …, ẽ n is w = (w 1 , w 2 , …, w n ) T , with w i ∈ [0, 1] and ∑ i w i = 1. Definition 9. BIB001 Let ẽ i (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the ordered weighted geometric mean (OWGM) operator is an IULV, where the weighted vector of ẽ 1 , ẽ 2 , …, ẽ n is w = (w 1 , w 2 , …, w n ) T with w i ∈ [0, 1] and ∑ i w i = 1, and (y 1 , y 2 , …, y n ) is any permutation of (1, 2, …, n) such that ẽ y i−1 ⩾ ẽ y i for all i = 2, …, n. It is easy to prove that the above operators have the properties of commutativity, idempotency, boundedness and monotonicity. In addition, based on the IL weighted arithmetic mean operator, Wang et al. (2014) developed the intuitionistic linguistic ordered weighted mean (ILOWM) operator and the intuitionistic linguistic hybrid operator.
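The symbolic translation of Definition 6 (the Herrera–Martínez 2-tuple model) maps a value β ∈ [0, m] to the pair (s_i, α), where i is the closest label index and α = β − i ∈ [−0.5, 0.5), and back again. A minimal sketch:

```python
import math

def to_2tuple(beta):
    """Delta: beta in [0, m] -> (label index i, symbolic translation alpha),
    with i the closest label and alpha = beta - i in [-0.5, 0.5)."""
    i = math.floor(beta + 0.5)
    return i, beta - i

def from_2tuple(i, alpha):
    """Inverse Delta: recover the numerical value beta = i + alpha."""
    return i + alpha
```

Because the translation is lossless (Δ and Δ⁻¹ are mutual inverses), 2-tuple aggregation avoids the rounding error of working with label indices alone, which is why the IU2TLV extensions build on it.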
BIB002 presented the intuitionistic linguistic ordered weighted mean distance operator, quasi-arithmetic intuitionistic linguistic ordered weighted mean distance operator and multi-person intuitionistic linguistic ordered weighted mean distance operator. |
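The ordered-weighted-averaging distance idea behind these operators reorders the individual distances decreasingly before weighting them, so the weight vector expresses the decision maker's attitude toward large versus small deviations. A crisp-valued sketch is below; the intuitionistic linguistic versions replace |x − y| with a distance between intuitionistic linguistic numbers.

```python
def owad(x, y, weights):
    """Ordered weighted averaging distance (OWAD) between two crisp vectors.

    Individual distances |x_i - y_i| are reordered decreasingly and then
    combined with the OWA weights: w_1 acts on the largest deviation.
    """
    d = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)
    return sum(w * di for w, di in zip(weights, d))
```

With weights (1, 0, …, 0) the operator degenerates to the maximum (Chebyshev-like) distance, and with equal weights to the ordinary average distance.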
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The extended MCDM approaches for IUFS <s> With respect to the multiple attribute group decision making problems in which the attribute weights are unknown and the attribute values take the form of the intuitionistic linguistic numbers, an expanded technique for order preference by similarity to ideal solution (TOPSIS) method is proposed. Firstly, the definition of intuitionistic linguistic number and the operational laws are given and distance between intuitionistic linguistic numbers is defined. Then, the attribute weights are determined based on the ‘maximizing deviation method’ and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The extended MCDM approaches for IUFS <s> An interval-valued intuitionistic uncertain linguistic set (IVIULS) combines the ambiguity, fuzziness as well as indeterminacy in real-life predicaments due to intricacy of subjective nature of human thoughts and easily express fuzzy information. Technique for order preference by similarity to an ideal solution (TOPSIS) is one of the eminent traditional distance measure-based approach for multi-criteria group decision-making (MCGDM) problems and having widespread applications. This study aims to develop TOPSIS method for MCGDM problems under IVIUL environment. Firstly, some basic operational laws and aggregation operators of IVIULS are discussed. A novel distance measure of IVIULEs is also investigated. An illustrative example of evaluation problem is also taken to clarify developed methodology and to reveal its efficiency with comparative analysis of proposed method. <s> BIB002 | (1) The ETOPSIS approaches for IUFS. 
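As background for these extensions, the crisp TOPSIS baseline that they generalize ranks alternatives by relative closeness to positive and negative ideal solutions. The compact sketch below is illustrative only; the extended (ETOPSIS) methods replace the crisp ratings and Euclidean distances with IUL values and their distance measures.

```python
import math

def topsis(matrix, weights, benefit):
    """Classical crisp TOPSIS: returns closeness coefficients (higher = better).

    matrix[i][j] is the rating of alternative i on criterion j; benefit[j]
    marks criterion j as benefit-type (True) or cost-type (False).
    """
    m, n = len(matrix), len(matrix[0])
    # 1) vector-normalize each column, 2) apply criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) or 1.0
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # 3) positive and negative ideal solutions
    pis = [(max if benefit[j] else min)(v[i][j] for i in range(m)) for j in range(n)]
    nis = [(min if benefit[j] else max)(v[i][j] for i in range(m)) for j in range(n)]
    # 4) Euclidean separations and relative closeness
    cc = []
    for row in v:
        dp = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        dn = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, nis)))
        cc.append(dn / (dp + dn) if dp + dn else 0.5)
    return cc
```

An alternative coinciding with the positive ideal gets closeness 1, one coinciding with the negative ideal gets 0.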
In general, the standard TOPSIS approach can only process real values and cannot deal with fuzzy information such as IUFS. introduced an ETOPSIS approach to process the IUFS in real decision-making circumstances. BIB001 developed an extended TOPSIS technique in which the criteria values are in the form of IULVs and the criteria weights are unknown. BIB002 combined TOPSIS and IVIULVs by redefining the basic operational rules and the distance measure to solve MCGDM problems. Wei (2011) used the ETOPSIS approach to solve MAGDM problems with 2TIULVs. (2) The ETODIM approaches for IUFS. The TODIM approach can take into account the bounded rationality of experts based on prospect theory in MCDM. The classical TODIM can only
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic linguistic fuzzy information <s> With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic linguistic numbers, the group decision making methods based on some generalized dependent aggregation operators are developed. Firstly, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic generalized dependent ordered weighted average (ILGDOWA) operator and an intuitionistic linguistic generalized dependent hybrid weighted aggregation (ILGDHWA) operator are developed. Furthermore, some desirable properties of the ILGDOWA operator, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILGDOWA and ILGDHWA operators, the approach to multiple attribute group decision making with intuitionistic linguistic information is proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic linguistic fuzzy information <s> With respect to multiple attribute group decision making (MADM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, expected value, score function and accuracy function of intuitionistic linguistic numbers are introduced. 
Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic linguistic fuzzy information <s> The intuitionistic uncertain linguistic variables are the good tools to express the fuzzy information, and the TODIM (an acronym in Portuguese of Interactive and Multicriteria Decision Making) method can consider the bounded rationality of decision makers based on the prospect theory. However, the classical TODIM method can only process the multiple attribute decision making (MADM) problems where the attribute values take the form of crisp numbers. In this paper, we will extend the TODIM method to the multiple attribute group decision making (MAGDM) with intuitionistic uncertain linguistic information. Firstly, the definition, characteristics, expectation, comparison method and distance of intuitionistic uncertain linguistic variables are briefly introduced, and the steps of the classical TODIM method for MADM problems are presented. 
Then, on the basis of the classical TODIM method, the extended TODIM method is proposed to deal with MAGDM problems with intuitionistic uncertain linguistic variables, and its significant characteristic is that it can fully consider the decision makers' bounded rationality which is a real action in decision making. Finally, an illustrative example is proposed to verify the developed approach. <s> BIB003 | process MCDM problems where the criteria values are exact numbers. Liu BIB003 developed an ETODIM to deal with MCDM problems with IULVs; an interactive MCDM approach based on TODIM and NLP with IULVs was also presented, and the TODIM for IL (ILTODIM) and TODIM for IUL (IULTODIM) approaches were proposed, improving the distance measure to deal with MADM problems whose values take the forms of ILVs and IULVs. (3) The EVIKOR approach for IULVs. The VIKOR approach is a very useful tool for decision-making problems: it selects the best alternative by maximizing "group utility" and minimizing "individual regret," and a growing number of researchers are paying attention to it. The VIKOR approach was extended to deal with IULVs, yielding the EVIKOR for MADM problems with IULVs; furthermore, the EVIKOR was developed with the Hamming distance to handle IVIULVs, giving an EVIKOR approach for MADM problems with IVIULVs. Definition 20. BIB001 BIB002 s(ẽ_i, e) is the similarity degree between ẽ_i and e. Definition 21. BIB001 BIB002 Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs. The value aggregated by the GILDHWM operator is an IULV, where the weighting vector of ẽ_1, ẽ_2, …, ẽ_n is w = (w_1, w_2, …, w_n)^T. The IVIULCA, IVIULCGA, GSIVIULCA and GSIVIULCGA operators satisfy commutativity, idempotency and boundedness.
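The crisp VIKOR procedure that these extensions build on can be sketched in a few lines. The Python sketch below is illustrative only (all data and weights are invented, criteria are assumed benefit-type, and no two alternatives are assumed fully tied); it computes the group utility S, the individual regret R, and the compromise index Q, where a lower Q is better.

```python
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Crisp VIKOR: rank alternatives by compromise index Q (lower is better).

    matrix: alternatives x criteria, all criteria assumed benefit-type.
    weights: criteria weights summing to 1.
    v: balance between group utility (S) and individual regret (R).
    Assumes the alternatives are not all tied on S or on R.
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    best, worst = m.max(axis=0), m.min(axis=0)
    # weighted, normalized distance of each value from the per-criterion best
    d = w * (best - m) / (best - worst)
    S = d.sum(axis=1)   # group utility: small S = good for the majority
    R = d.max(axis=1)   # individual regret: small R = no badly hurt criterion
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

# three alternatives rated on three benefit criteria (illustrative data)
Q = vikor([[7, 9, 9], [8, 7, 8], [9, 6, 8]], [0.4, 0.3, 0.3])
```

The extensions surveyed here replace the crisp ratings and distances with IULV ratings and IULV distance measures (e.g. the Hamming distance), but the S/R/Q compromise structure stays the same.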
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> We introduce the power average to provide an aggregation operator which allows argument values to support each other in the aggregation process. The properties of this operator are described. We discuss the idea of a power median. We introduce some possible formulations for the support function used in the power average. We extend the supported aggregation facility of empowerment to a wider class of mean operators, such as the OWA (ordered weighted averaging) operator and the generalized mean operator. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> The power-average (PA) operator and the power-ordered-weighted-average (POWA) operator are the two nonlinear weighted-average aggregation tools whose weighting vectors depend on the input arguments. In this paper, we develop a power-geometric (PG) operator and its weighted form, which are on the basis of the PA operator and the geometric mean, and develop a power-ordered-geometric (POG) operator and a power-ordered-weighted-geometric (POWG) operator, which are on the basis of the POWA operator and the geometric mean, and study some of their properties. We also discuss the relationship between the PA and PG operators and the relationship between the POWA and POWG operators. Then, we extend the PG and POWG operators to uncertain environments, i.e., develop an uncertain PG (UPG) operator and its weighted form, and an uncertain power-ordered-weighted-geometric (UPOWG) operator to aggregate the input arguments taking the form of interval of numerical values. 
Furthermore, we utilize the weighted PG and POWG operators, respectively, to develop an approach to group decision making based on multiplicative preference relations and utilize the weighted UPG and UPOWG operators, respectively, to develop an approach to group decision making based on uncertain multiplicative preference relations. Finally, we apply both the developed approaches to broadband Internet-service selection. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> With respect to multiple attribute decision making (MADM) problems, in which attribute values take the form of intuitionistic uncertain linguistic information, a new decision-making method based on the intuitionistic uncertain linguistic weighted Bonferroni OWA operator is developed. First, the score function, accuracy function, and comparative method of the intuitionistic uncertain linguistic numbers are introduced. Then, an intuitionistic uncertain linguistic Bonferroni OWA (IULBOWA) operator and an intuitionistic uncertain linguistic weighted Bonferroni OWA (IULWBOWA) operator are developed. Furthermore, some properties of the IULBOWA and IULWBOWA operators, such as commutativity, idempotency, monotonicity, and boundedness, are discussed. At the same time, some special cases of these operators are analyzed. Based on the IULWBOWA operator, the multiple attribute decision-making method with intuitionistic uncertain linguistic information is proposed. Finally, an illustrative example is given to illustrat... 
<s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> Abstract With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of crisp numbers, and attribute values take the form of intuitionistic uncertain linguistic variables, some new intuitionistic uncertain linguistic Heronian mean operators, such as intuitionistic uncertain linguistic arithmetic Heronian mean (IULAHM) operator, intuitionistic uncertain linguistic weighted arithmetic Heronian mean (IULWAHM) operator, intuitionistic uncertain linguistic geometric Heronian mean (IULGHM) operator, and intuitionistic uncertain linguistic weighted geometric Heronian mean (IULWGHM) operator, are proposed. Furthermore, we have studied some desired properties of these operators and discussed some special cases with respect to the different parameter values in these operators. Moreover, with respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic uncertain linguistic variables, some approaches based on the developed operators are proposed. Finally, an illustrative example has been given to show the steps of the developed methods and to discuss the influences of different parameters on the decision-making results. 
<s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attributes are dependent and the attribute values take the forms of intuitionistic linguistic numbers and intuitionistic uncertain linguistic numbers, this paper investigates two novel MAGDM methods based on Maclaurin symmetric mean (MSM) aggregation operators. First, the Maclaurin symmetric mean is extended to intuitionistic linguistic environment and two new aggregation operators are developed for aggregating the intuitionistic linguistic information, such as the intuitionistic linguistic Maclaurin symmetric mean (ILMSM) operator and the weighted intuitionistic linguistic Maclaurin symmetric mean (WILMSM) operator. Then, some desirable properties and special cases of these operators are discussed in detail. Furthermore, this paper also develops two new Maclaurin symmetric mean operators for aggregating the intuitionistic uncertain linguistic information, including the intuitionistic uncertain linguistic Maclaurin symmetric mean (IULMSM) operator and the weighted intuitionistic uncertain linguistic Maclaurin symmetric mean (WIULMSM) operator. Based on the WILMSM and WIULMSM operators, two approaches to MAGDM are proposed under intuitionistic linguistic environment and intuitionistic uncertain linguistic environment, respectively. Finally, two practical examples of investment alternative evaluation are given to illustrate the applications of the proposed methods. 
<s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> Coal mine safety has been a pressing issue for many years, and it is a constant and non-negligible problem that must be addressed during any coal mining process. This paper focuses on developing an innovative multi-criteria decision-making (MCDM) method to address coal mine safety evaluation problems. Because lots of uncertain and fuzzy information exists in the process of evaluating coal mine safety, linguistic intuitionistic fuzzy numbers (LIFNs) are introduced to depict the evaluation information necessary to the process. Furthermore, the handling of qualitative information requires the effective support of quantitative tools, and the linguistic scale function (LSF) is therefore employed to deal with linguistic intuitionistic information. First, the distance, a valid ranking method, and Frank operations are proposed for LIFNs. Subsequently, the linguistic intuitionistic fuzzy Frank improved weighted Heronian mean (LIFFIWHM) operator is developed. Then, a linguistic intuitionistic MCDM method for coal mine safety evaluation is constructed based on the developed operator. Finally, an illustrative example is provided to demonstrate the proposed method, and its feasibility and validity are further verified by a sensitivity analysis and comparison with other existing methods. <s> BIB006 | In some real decision-making problems, we should take into account the interrelationships between criteria, because some criteria mutually support one another. BIB003 presented the IULBOWM and WIULBOWM operators. BIB004 proposed the IULAHM, IULGHM, WIULAHM and WIULGHM operators. BIB005 developed the ILMSM, WILMSM, IULMSM and WIULMSM operators. In the definitions below, ξ(i) is the ith largest element in the tuple ẽ_1, ẽ_2, …, ẽ_n, and w = (w_1, w_2, …, w_n)^T is the OWA weighting vector of dimension n associated with ẽ_1, ẽ_2, …, ẽ_n, with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. Definition 13. BIB003 Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs. The value aggregated by the WIULBOWM operator is an IULV, where ξ(i) is the ith largest element in the tuple ẽ_1, ẽ_2, …, ẽ_n and w = (w_1, w_2, …, w_n)^T is the OWA weighting vector of dimension n, with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. Obviously, the IULBOWM and WIULBOWM operators have the desirable properties of commutativity, idempotency, monotonicity and boundedness. Furthermore, the IUL partitioned BM (IULPBM) operator, the weighted IUL partitioned BM operator, the geometric IUL partitioned BM operator and the weighted geometric IUL partitioned BM operator were introduced, on the grounds that interrelationships between criteria do not always exist: the criteria can be divided into several parts according to their categories, with interrelationships holding only among criteria within the same part. At the same time, the DOWM operator has the advantage of relieving the impact of biased criteria values. It is easy to know that the IULGHM operator has the properties of monotonicity, idempotency and boundedness. Definition 15. BIB004 It is easy to prove that the WIULAHM operator does not satisfy idempotency, but it does satisfy monotonicity. Definition 16. BIB004 Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs. The value aggregated by the IULGHM operator is an IULV. It is easy to know that the IULGHM operator has the properties of monotonicity, idempotency and boundedness. Definition 17. BIB004 Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs. The value aggregated by the WIULGHM operator is an IULV, where the weighting vector of ẽ_1, ẽ_2, …, ẽ_n is w = (w_1, w_2, …, w_n)^T, with w_i ∈ [0, 1], ∑_{i=1}^{n} w_i = 1, and n a balance parameter. Obviously, the WIULGHM operator does not satisfy idempotency, but it does satisfy monotonicity. In addition, BIB006 proposed the weighted intuitionistic linguistic fuzzy Frank improved Heronian mean operator for coal mine safety evaluation, and the generalized ILHM (GILHM) operator and the weighted GILHM operator were also investigated. Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs and r = 1, 2, …, n. The value aggregated by the IULMSM operator is an IULV; it is easy to demonstrate that the IULMSM operator has the properties of idempotency, monotonicity, boundedness and commutativity. Likewise, letting ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs and r = 1, 2, …, n, the value aggregated by the WIULMSM operator is an IULV, and the WIULMSM operator is monotonic with respect to the parameter r. Sometimes, for the sake of selecting the best alternative, we should not only take into account the criteria values but also consider the interrelationships between the criteria. The power average (PA) operator, first introduced by Yager BIB001 BIB002 , can overcome the above weakness by setting different criteria weights. Recently, based on the PA and BM operators, the ILF power BM and weighted ILF power BM operators were presented.
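In crisp form, the two building blocks behind these operators are easy to state. The sketch below is a minimal illustration on plain real numbers in [0, 1] (not the full IULV versions from the cited papers): the Bonferroni mean aggregates pairwise products, so mutually supporting criteria reinforce each other, while Yager's power average weights each argument by how much support it receives from the others.

```python
import itertools

def bonferroni_mean(values, p=1.0, q=1.0):
    """Crisp Bonferroni mean BM^{p,q}:
    BM = ( sum_{i != j} a_i^p * a_j^q / (n(n-1)) )^(1/(p+q))."""
    n = len(values)
    s = sum(a ** p * b ** q for a, b in itertools.permutations(values, 2))
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

def power_average(values):
    """Crisp power average (Yager): argument a_i gets weight 1 + T_i,
    where T_i sums the support from the other arguments; here support
    is taken as 1 - |a_i - a_j|, assuming values lie in [0, 1]."""
    n = len(values)
    T = [sum(1 - abs(values[i] - values[j]) for j in range(n) if j != i)
         for i in range(n)]
    weights = [1 + t for t in T]
    return sum(w * a for w, a in zip(weights, values)) / sum(weights)

bm = bonferroni_mean([0.4, 0.4, 0.4])   # idempotency: equal inputs return 0.4
pa = power_average([0.2, 0.8])          # symmetric supports: plain mean, 0.5
```

The IULV operators surveyed above follow the same algebraic templates, with the products, sums and distances replaced by the corresponding IULV operational laws.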
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> In this paper, we define various generalized induced linguistic aggregation operators, including generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are the special cases of the GILOWA operator, induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are the special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are the special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are the special cases of the GILOWG operator. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. 
In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> We study the induced generalized aggregation operators under intuitionistic fuzzy environments. Choquet integral and Dempster-Shafer theory of evidence are applied to aggregate inuitionistic fuzzy information and some new types of aggregation operators are developed, including the induced generalized intuitionistic fuzzy Choquet integral operators and induced generalized intuitionistic fuzzy Dempster-Shafer operators. Then we investigate their various properties and some of their special cases. Additionally, we apply the developed operators to financial decision making under intuitionistic fuzzy environments. Some extensions in interval-valued intuitionistic fuzzy situations are also pointed out. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> We introduce a wide range of induced and linguistic generalized aggregation operators. First, we present the induced linguistic generalized ordered weighted averaging (ILGOWA) operator. It is a generalization of the OWA operator that uses linguistic variables, order inducing variables and generalized means in order to provide a more general formulation. 
One of its main results is that it includes a wide range of linguistic aggregation operators such as the induced linguistic OWA (ILOWA), the induced linguistic OWG (ILOWG) and the linguistic generalized OWA (LGOWA) operator. We further generalize the ILGOWA operator by using quasi-arithmetic means obtaining the induced linguistic quasi-arithmetic OWA (Quasi-ILOWA) operator and by using hybrid averages forming the induced linguistic generalized hybrid average (ILGHA) operator. We also present a further extension with Choquet integrals. We call it the induced linguistic generalized Choquet integral aggregation (ILGCIA). We end the paper with an application of the new approach in a linguistic group decision making problem. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> With respect to multiple attribute group decision making (MAGDM) problems, in which the attribute weights take the form of real numbers, and the attribute values take the form of intuitionistic fuzzy linguistic variables, a decision analysis approach is proposed. In this paper, we develop an intuitionistic fuzzy linguistic induce OWA (IFLIOWA) operator and analyze the properties of it by utilizing some operational laws of intuitionistic fuzzy linguistic variables. A new method based on the IFLIOWA operator for multiple attribute group decision making (MAGDM) is presented. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. <s> BIB005 | Induced AOs have become a hot topic in the research literature. They take the criteria as pairs in which the first element, the order-inducing variable, induces an ordering over the second element, which is the variable actually aggregated. Illuminated by Xu's work BIB001 BIB003 BIB002 , BIB005 introduced the IFLIOWA and IFLIOWGA operators. Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs and r = 1, 2, …, n. The value aggregated by the IFLIOWA operator is an IULV, where the weighting vector of ẽ_1, ẽ_2, …, ẽ_n is w = (w_1, w_2, …, w_n)^T and satisfies w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. Definition 27. BIB005 BIB004 Let ẽ_1, ẽ_2, …, ẽ_n be a collection of IULVs and r = 1, 2, …, n. The value aggregated by the IFLIOWGA operator is an IULV, where the weighting vector of ẽ_1, ẽ_2, …, ẽ_n is w = (w_1, w_2, …, w_n)^T and satisfies w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. The IFLIOWA and IFLIOWGA operators satisfy commutativity, idempotency, monotonicity and boundedness.
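The induced-ordering mechanism common to these operators can be illustrated with a crisp induced OWA. In the sketch below (plain numbers stand in for IULVs, and all data are invented for illustration), the order-inducing values, not the arguments themselves, decide the ordering before the position weights are applied.

```python
def induced_owa(pairs, weights):
    """Crisp induced OWA: `pairs` holds (inducing_value, argument)
    tuples. Arguments are reordered by descending inducing value,
    then combined with the position weights, which sum to 1."""
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)
    return sum(w * arg for w, (_, arg) in zip(weights, ordered))

# inducing values 0.9 > 0.5 > 0.2 order the arguments as 6, 7, 9,
# so the result is 0.5*6 + 0.3*7 + 0.2*9
result = induced_owa([(0.9, 6), (0.2, 9), (0.5, 7)], [0.5, 0.3, 0.2])
```

Because the ordering depends only on the inducing values, permuting the input pairs leaves the result unchanged, which is the commutativity property noted above.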
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of intuitionistic uncertain linguistic variables, new group decision making methods have been developed. First, operational laws, expected value definitions, score functions and accuracy functions of intuitionistic uncertain linguistic variables are introduced. Then, an intuitionistic uncertain linguistic weighted geometric average (IULWGA) operator and an intuitionistic uncertain linguistic ordered weighted geometric (IULOWG) operator are developed. Furthermore, some desirable properties of these operators, such as commutativity, idempotency, monotonicity and boundedness, have been studied, and an intuitionistic uncertain linguistic hybrid geometric (IULHG) operator, which generalizes both the IULWGA operator and the IULOWG operator, was developed. Based on these operators, two methods for multiple attribute group decision making problems with intuitionistic uncertain linguistic information have been proposed. Finally, an illustrative example is given to verify the developed approaches and demonstrate their practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> With respect to multiple attribute group decision making (MADM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, expected value, score function and accuracy function of intuitionistic linguistic numbers are introduced. 
Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> For multi-criteria group decision making problems with intuitionistic linguistic information, we define a new score function and a new accuracy function of intuitionistic linguistic numbers, and propose a simple approach for the comparison between two intuitionistic linguistic numbers. Based on the intuitionistic linguistic weighted arithmetic averaging ILWAA operator, we define two new intuitionistic linguistic aggregation operators, such as the intuitionistic linguistic ordered weighted averaging ILOWA operator and the intuitionistic linguistic hybrid aggregation ILHA operator, and establish various properties of these operators. The ILOWA operator weights the ordered positions of the intuitionistic linguistic numbers instead of weighting the arguments themselves. The ILHA operator generalizes both the ILWAA operator and the ILOWA operator at the same time, and reflects the importance degrees of both the given intuitionistic linguistic numbers and the ordered positions of these arguments. 
Furthermore, based on the ILHA operator and the ILWAA operator, we develop a multi-criteria group decision making approach, in which the criteria values are intuitionistic linguistic numbers and the criteria weight information is known completely. Finally, an example is given to illustrate the feasibility and effectiveness of the developed method. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> Dealing with uncertainty is always a challenging problem, and different tools have been proposed to deal with it. Fuzzy sets was presented to manage situations in which experts have some membership value to assess an alternative. The fuzzy linguistic approach has been applied successfully to many problems. The linguistic information expressed by means of 2-tuples, which were composed by a linguistic term and a numeric value assessed in [ - 0.5, 0.5. Linguistic values was used to assess an alternative and variable in qualitative settings. Intuitionistic fuzzy sets were presented to manage situations in which experts have some membership and nonmembership value to assess an alternative. In this paper, the concept of an I2LI model is developed to provide a linguistic and computational basis to manage the situations in which experts assess an alternative in possible and impossible linguistic variable and their translation parameter. A method to solve the group decision making problem based on intuitionistic 2-tuple linguistic information I2LI by the group of experts is formulated. Some operational laws on I2LI are introduced. Based on these laws, new aggregation operators are introduced to aggregate the collective opinion of decision makers. An illustrative example is given to show the practicality and feasibility of our proposed aggregation operators and group decision making method. 
<s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> In this paper, we first introduce some operations on interval-valued intuitionistic uncertain linguistic sets, and further develop the induced interval-valued intuitionistic uncertain linguistic ordered weighted geometric (I-IVIULOWG) operator. We also establish some desirable properties of this operator, such as commutativity, idempotency and monotonicity. Then, we apply the induced interval-valued intuitionistic uncertain linguistic ordered weighted geometric (I-IVIULOWG) operator to deal with the interval-valued intuitionistic uncertain linguistic multiple attribute decision making problems. Finally, an illustrative example for evaluating the knowledge management performance is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> The problem for evaluating the design patterns of the Micro-Air vehicle is the multiple attribute decision making problems. In this paper, we introduce the concept of interval-valued intuitionistic uncertain linguistic sets and propose the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (I-IVIULOWA) operator on the basis of the interval-valued intuitionistic uncertain linguistic ordered weighted average (IVIULOWA) operator and IOWA operator. We also study some desirable properties of the proposed operator, such as commutativity, idempotency and monotonicity. Then, we utilize the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (IIVIULOWA) operator to solve the multiple attribute decision making problems with interval-valued intuitionistic uncertain linguistic information. 
Finally, an illustrative example for evaluating the design patterns of the Micro-Air vehicle is given. <s> BIB006 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> Abstract This paper presents a new two-tier decision making framework with linguistic preferences for scientific decision making. The major reason for adopting linguistic preference is to ease the process of rating of alternatives by allowing decision makers (DMs) to strongly emphasize their opinion on each alternative. In the first tier, aggregation is done using a newly proposed operator called linguistic based aggregation (LBA), which aggregates linguistic terms directly without making any conversion. The main motivation for this proposal is driven by the previous studies on aggregation theory which reveals that conversion leads to loss of information and formation of virtual sets which are no longer sensible and rational for decision making process. Secondly, in the next tier, a new ranking method called IFSP (intuitionistic fuzzy set based PROMETHEE) is proposed which is an extension to PROMETHEE (preference ranking organization method for enrichment evaluation) under intuitionistic fuzzy set (IFS) context. Unlike previous ranking methods, this ranking method follows a new formulation by considering personal choice of the DMs over each alternative. The main motivation for such formulation is derived from the notion of not just obtaining a suitable alternative but also coherently satisfying the DMs’ viewpoint during decision process. Finally, the practicality of the framework is tested by using supplier selection (SS) problem for an automobile factory. The strength and weakness of the proposed LBA-IFSP framework are verified by comparing with other methods under the realm of theoretical and numerical analysis. 
The results from the analysis indicate that the proposed LBA-IFSP framework is rationally coherent with the DMs' viewpoint, moderately consistent with other methods, and highly stable and robust against the rank reversal issue. <s> BIB007 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> Abstract In this paper, intuitionistic linguistic fuzzy operators are used to analyze the influence of external factors on the economic state, social consequences and government responses. The analysis is carried out on the basis of Azerbaijani and international data for 2010-2015. <s> BIB008 | In this section, we give an overview of some practical applications of the IULV AOs and approaches in the domain of different types of MCDM and MCGDM. Based on the IULWGM, OIULWGM, GIULWGM, GOIULWGM, IULBOWM, WIULBOWM, IULAHM, IULGHM, WIULAHM, WIULGHM, IULMSM, WIULMSM, GILDOWM and GILDHWM operators, among others, the corresponding MCDM or MCGDM methods were developed to solve real MCDM or MCGDM problems, such as human resource management, supply-chain management, project investment (PI) and benefit evaluation: (1) PI. BIB001 applied the MCDM methods based on the IULHG, WIULGA and WIULOG operators to solve investment problems in which an investment company wants to invest a sum of money in the best selection. BIB002 developed MCGDM methods based on the GWILPA and GWILPOA operators to deal with investment evaluation problems. BIB003 proposed a MCGDM approach based on the ILHA and WILAA operators to deal with a MCGDM problem involving a PI. proposed the weighted trapezium cloud arithmetic mean operator, the ordered weighted trapezium cloud arithmetic mean operator and the trapezium cloud hybrid arithmetic operator, and then used them to solve PI problems. gave a real example about selecting the best investment strategy for an investment company by applying the GIVIFLIHA operator to aggregate IVIFLVs.
gave an illustrative example about investment selection by developing the IU2TL continuous extended BM (IU2TLCEBM) operator. presented a novel IFL hybrid aggregation operator to deal with an investment risk evaluation problem in the circumstance of IFLI. (2) Supplier selection. In much of the literature, researchers have attempted to address supplier selection problems by using AOs to aggregate intuitionistic linguistic fuzzy information (ILFI). For example, presented a MAGDM method based on I2LGA by extending the Archimedean TN and TC to select the best supplier for a manufacturing company's core competition. BIB007 applied a novel approach based on IL AOs to select the best supplier from the four potential suppliers. developed an IVIFLI-MCGDM approach based on the IV2TLI and applied it to the practical problem of a purchasing department wanting to select the best supplier. presented an IL multiple attribute decision making method with the ILWIOWA and ILGWIOWA operators and its application to low-carbon supplier selection. (3) Some other applications. gave two HM-based IL MCDM approaches and their application to the evaluation of scientific research capacity. BIB008 thoroughly analyzed the impact of external factors on the economic state, social consequences and government responses by applying IFLI. BIB004 built an I2TLI model to solve the problem of a family purchasing a house in the best locality. BIB005 presented an approach based on the induced IVIULOWG operator for evaluating knowledge management performance with IVIULFI. BIB006 built a model for evaluating the design patterns of the Micro-Air vehicle under an interval-valued intuitionistic uncertain linguistic environment. |
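Most of the operators applied in these studies are, at their core, weighted aggregations of membership/non-membership pairs followed by a score-based ranking. As a minimal generic sketch (the classical IFWA/IFWG operators over plain intuitionistic fuzzy numbers, not any specific IULV operator from the cited papers), the aggregation and ranking steps look like this:

```python
# Illustrative sketch: classical intuitionistic fuzzy weighted averaging (IFWA)
# and weighted geometric (IFWG) operators over IFNs a_i = (mu_i, nu_i),
# with weights w_i summing to 1.
#   IFWA: (1 - prod (1 - mu_i)^w_i,  prod nu_i^w_i)
#   IFWG: (prod mu_i^w_i,  1 - prod (1 - nu_i)^w_i)
from math import prod

def ifwa(ifns, weights):
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(ifns, weights))
    nu = prod(n ** w for (_, n), w in zip(ifns, weights))
    return mu, nu

def ifwg(ifns, weights):
    mu = prod(m ** w for (m, _), w in zip(ifns, weights))
    nu = 1 - prod((1 - n) ** w for (_, n), w in zip(ifns, weights))
    return mu, nu

def score(ifn):
    # Common score function s(a) = mu - nu, used to rank alternatives.
    mu, nu = ifn
    return mu - nu
```

In an MCDM setting, each alternative's attribute values are aggregated into a single IFN and the alternatives are then ranked by the score s(a) = mu - nu (with the accuracy h(a) = mu + nu commonly used to break ties).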
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Intuitionistic fuzzy information aggregation plays an important part in intuitionistic fuzzy set theory, which has emerged to be a new research direction receiving more and more attention in recent years. In this paper, we investigate the multiple attribute decision making (MADM) problems with intuitionistic fuzzy numbers. Then, we first introduce some operations on intuitionistic fuzzy sets, such as Einstein sum, Einstein product, and Einstein exponentiation, and further develop some new Einstein hybrid aggregation operators, such as the intuitionistic fuzzy Einstein hybrid averaging (IFEHA) operator and intuitionistic fuzzy Einstein hybrid geometric (IFEHG) operator, which extend the hybrid averaging (HA) operator and the hybrid geometric (HG) operator to accommodate the environment in which the given arguments are intuitionistic fuzzy values. Then, we apply the intuitionistic fuzzy Einstein hybrid averaging (IFEHA) operator and intuitionistic fuzzy Einstein hybrid geometric (IFEHG) operator to deal with multiple attribute decision making under intuitionistic fuzzy environments. Finally, some illustrative examples are given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Since proposed in 1983, the intuitionistic fuzzy set (IFS) theory has grown immensely during the past decades and has wide application in machine learning, pattern recognition, management engineering and decision making. With the rapid development and widespread adoption of IFS, thousands of research results have been appeared, focusing on both theory development and practical applications. 
Given the large number of research materials that exist, this paper intends to make a scientometric review on IFS studies to reveal the most cited papers, influential authors and influential journals in this domain based on the 1318 references retrieved from SCIE and SSCI databases via Web of Science. The research results of this paper are based on objective data analysis and they are less affected by subjective biases, which makes them more reliable. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Abstract Real-world decision-making problems are often complex and indeterminate. Thus, uncertainty and hesitancy are usually unavoidable issues being experienced by decision makers. Dual hesitant fuzzy sets (DHFSs) which are described in terms of the two functions, namely the membership hesitancy function and the non-membership hesitancy function, have been developed. In light of their properties, they are considered as a powerful vehicle to express uncertain information in the process of multi-attribute decision-making (MADM). In accordance with the practical demand, this study proposes a new MADM approach with dual hesitant fuzzy (DHF) assessments based on Frank aggregation operators. First, original score and accuracy functions of DHFS are developed to construct a new comparison method of DHFSs. The properties of the developed score and accuracy functions are analyzed. Second, we investigate the generalized operations of DHFS based on Frank t-norm and t-conorm. The generalized operations are then used to build the generalized arithmetic and geometric aggregation operators of DHF assessments in the context of fuzzy MADM. The monotonicity of arithmetic and geometric aggregated assessments with respect to a parameter in Frank t-norm and t-conorm and their relationship are also demonstrated.
In particular, the monotonicity is employed to associate the parameter with the risk attitude of a decision maker, by which a method is designed to determine the parameter. A procedure of the proposed MADM method is presented. Finally, an investment evaluation problem is discussed by the proposed approach to demonstrate its applicability and validity. A detailed sensitivity analysis and a comparative study are also conducted to highlight the validity and advantages of the approach proposed in this paper. More importantly, we discuss the situations where Frank aggregation operators are replaced by Hamacher aggregation operators at the second step of the proposed approach, through re-considering the investment evaluation problem. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Abstract Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorm and t-norm. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal–dual linear programming models, based on which, the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems valid in the existing matrix game with IVIFNs are still true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players.
An example is given to illustrate the validity and the applicability of the proposed method. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Abstract Intuitionistic fuzzy soft set (IFSS) theory is one of the successful extensions of the soft set theory to deal with uncertainty by introducing the parametrization factor during the analysis. Under this environment, the present paper develops two new scaled prioritized averaging aggregation operators by considering the interaction between the membership degrees. Further, some shortcomings of the existing operators have been highlighted and overcome by the proposed operators. The principal advantage of the operators is that they consider the priority relationships between the parameters as well as experts. Furthermore, some properties based on these operators are discussed in detail. Then, we utilized these operators to solve a decision-making problem and validate it with a numerical example. <s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> The theory of intuitionistic fuzzy sets (IFS) is widely used for dealing with vagueness and the Dempster--Shafer (D-S) evidence theory has a widespread use in multiple criteria decision-making problems under uncertain situation. However, there are many methods to aggregate intuitionistic fuzzy numbers (IFNs), but the aggregation operator to fuse basic probability assignment (BPA) is rare. Power average (P-A) operator, as a powerful operator, is useful and important in information fusion. Motivated by the idea of P-A power, in this paper, a new operator based on the IFS and D-S evidence theory is proposed, which is named as intuitionistic fuzzy evidential power average (IFEPA) aggregation operator. First, an IFN is converted into a BPA, and the uncertainty is measured in D-S evidence theory.
Second, the difference between BPAs is measured by Jousselme distance and a satisfying support function is proposed to get the support degree between each other effectively. Then the IFEPA operator is used for aggregating the original IFN and make a more reasonable decision. The proposed method is objective and reasonable because it is completely driven by data once some parameters are required. At the same time, it is novel and interesting. Finally, an application of developed models to the ‘One Belt, One road’ investment decision-making problems is presented to illustrate the effectiveness and feasibility of the proposed operator. <s> BIB006 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Normal neutrosophic numbers (NNNs) are an important tool to describe the decision making problems, and they are more appropriate to express the incompleteness, indeterminacy and inconsistency of the evaluation information. In this paper, we firstly introduce the definition, the properties, the score function, the accuracy function, and the operational laws of the NNNs. Then, some operators are proposed, such as the normal neutrosophic power averaging operator, the normal neutrosophic weighted power averaging operator, the normal neutrosophic power geometric operator, the normal neutrosophic weighted power geometric operator, the normal neutrosophic generalized power averaging operator, the normal neutrosophic generalized weighted power averaging (NNGWPA) operator. Furthermore, some properties of them are discussed. Thirdly, we propose a multiple attribute decision making method based on the NNGWPA operator. Finally, we use an illustrative example to demonstrate the practicality and effectiveness of the proposed method. 
<s> BIB007 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> An outranking method is developed within the environment of hesitant intuitionistic fuzzy linguistic term sets (HIFLTSs), where the membership degree and the non-membership degree of the element are subsets of linguistic term set. The directional Hausdorff distance, which uses HIFLTSs, is proposed, and the dominance relations are subsequently defined using this distance. Moreover, some interesting characteristics of the proposed directional Hausdorff distance are further discussed in detail. In this context, a collective decision matrix is obtained in the form of hesitant intuitionistic fuzzy linguistic elements and analyzes the collective data by using proposed ELECTRE-based outranking method. The linguistic scale functions are employed in this paper to conduct the transformation between qualitative information and quantitative data. Furthermore, based on the proposed method, we also investigate the ranking of the alternatives based on a new proposed definition of HIFLTS. The feasibility and applicability of the proposed method are illustrated with an example, and a comparative analysis is performed with other approaches to validate the effectiveness of the proposed methodology. <s> BIB008 | Although the approach and theory of IUL have gained abundant research achievements, a number of works on IUL fuzzy information should be further done in the future. First, some new operational rules, such as Einstein and interactive operational rule BIB001 , Schweizer -Sklar TC and TN , Dombi operations , Frank TC and TN BIB003 , Archimedean TC and TN BIB004 ) and so on, should be extended and applied in the process of aggregation of ILFI. 
Moreover, some other AOs, such as the cloud distance operators BIB002 , the prioritized weighted mean operator BIB005 , the geometric prioritized weighted mean operator , the power generalized AO, the evidential power AO BIB006 , the induced OWA Minkowski distance operator BIB007 , the continuous OWGA operator BIB008 , the Muirhead mean operator, and so on, should be developed to aggregate ILFI. Finally, applications in some real and practical fields, such as online comment analysis, smart home, Internet of Things, precision medicine and Big Data, internet bots, unmanned aircraft, software robots, virtual reality and so on, are also very interesting, meaningful and significant directions for future work. After doing so, we will be able to propose a much more complete and comprehensive theoretical knowledge system of ILFI.
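As one concrete illustration of the alternative operational rules mentioned above, the Einstein t-conorm/t-norm pair (a standard Archimedean instance) induces an Einstein sum of two intuitionistic fuzzy numbers. The following is a minimal sketch of these standard operations, not tied to any particular operator from the cited papers:

```python
# Sketch of the Einstein t-conorm S(x, y) = (x + y) / (1 + x*y) and
# t-norm T(x, y) = x*y / (1 + (1 - x)*(1 - y)) on [0, 1], and the
# Einstein sum of two IFNs a = (mu_a, nu_a), b = (mu_b, nu_b):
# memberships combine via S, non-memberships via T.
def einstein_tconorm(x, y):
    return (x + y) / (1 + x * y)

def einstein_tnorm(x, y):
    return (x * y) / (1 + (1 - x) * (1 - y))

def if_einstein_sum(a, b):
    (ma, na), (mb, nb) = a, b
    return einstein_tconorm(ma, mb), einstein_tnorm(na, nb)
```

Replacing the algebraic sum/product with such a pair changes the behaviour of the induced weighted aggregation operators while preserving boundary conditions (S(x, 0) = x, T(x, 1) = x), which is why these rules are drop-in alternatives in the aggregation step.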
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Epidemiologic and interventional studies have led to lower treatment targets for type 2 diabetes (formerly known as non-insulin-dependent diabetes), including a glycosylated hemoglobin level of 7 percent or less and a before-meal blood glucose level of 80 to 120 mg per dL (4.4 to 6.7 mmol per L). New oral medications make these targets easier to achieve, especially in patients with recently diagnosed diabetes. Acarbose, metformin, miglitol, pioglitazone, rosiglitazone and troglitazone help the patient's own insulin control glucose levels and allow early treatment with little risk of hypoglycemia. Two new long-acting sulfonylureas (glimepiride and extended-release glipizide) and a short-acting sulfonylurea-like agent (repaglinide) simply and reliably augment the patient's insulin supply. Combinations of agents have additive therapeutic effects and can restore glucose control when a single agent is no longer successful. Oral therapy for early type 2 diabetes can be relatively inexpensive, and evidence of its cost-effectiveness is accumulating. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The developing world does not have access to many of the best medical diagnostic technologies; they were designed for air-conditioned laboratories, refrigerated storage of chemicals, a constant supply of calibrators and reagents, stable electrical power, highly trained personnel and rapid transportation of samples. Microfluidic systems allow miniaturization and integration of complex functions, which could move sophisticated diagnostic tools out of the developed-world laboratory. These systems must be inexpensive, but also accurate, reliable, rugged and well suited to the medical and social contexts of the developing world. 
<s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A high-performance monitoring system for human blood glucose levels was developed using microchip electrophoresis with a plastic chip. The combination of reductive amination as glucose labeling with fluorescent 2-aminoacridone (AMAC) and glucose-borate complex formation realized the highly selective detection of glucose even in a complex matrix such as a blood sample. The migration time of a single peak, observed on an electropherogram of AMAC-labeled plasma, closely resembled that of glucose standard solution. The treatment of plasma with hexokinase or glucokinase for glucose phosphorylation resulted in a peak shift from approximately 145 to 70 s, corresponding to glucose and glucose-6-phosphate, respectively. A double-logarithm plot revealed a linear relationship between glucose concentration and fluorescence intensity in the range of 1-300 microM of glucose (r(2) = 0.9963; p <0.01), and the detection limit was 0.92 microM. Furthermore, blood glucose concentrations estimated from the standard curves of three subjects were compared with results obtained by conventional colorimetric analysis using glucose dehydrogenase. Good correlation was observed between methods according to simple linear regression analysis (p <0.05). The reproducibility of the assay was about 6.3-9.1% (RSD) and the within-days and between-days reproducibility were 1.6-8.4 and 5.2-7.2%, respectively. This system enables us to determine blood glucose with high sensitivity and accuracy, and will be applicable to clinical diagnosis. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This communication describes a simple method for patterning paper to create well-defined, millimeter-sized channels, comprising hydrophilic paper bounded by hydrophobic polymer. 
We believe that this type of patterned paper will become the basis for low-cost, portable, and technically simple multiplexed bioassays. We demonstrate this capability by the simultaneous detection of glucose and protein in 5 μL of urine. The assay system is small, disposable, easy to use (and carry), and requires no external equipment, reagents, or power sources. We believe this kind of system is attractive for uses in less-industrialized countries, in the field, or as an inexpensive alternative to more advanced technologies already used in clinical settings.[1-4] The analysis of biological fluids is necessary for monitoring the health of populations,[2] but these measurements are difficult to implement in remote regions such as those found in less-industrialized countries, in emergency situations, or in home health-care settings.[3] Conventional laboratory instruments provide quantitative measurements of biological samples, but they are unsuitable for these situations since they are large, expensive, and require trained personnel and considerable volumes of biological samples.[2] Other bioassay platforms provide alternatives to more expensive instruments,[5-7] but the need remains for a platform that uses small volumes of sample and that is sufficiently inexpensive to be used widely for measuring samples from large populations. We believe that paper may serve as a particularly convenient platform for running bioassays in remote locations. As a prototype for a method we believe to be particularly promising, we patterned photoresist onto chromatography paper to form defined areas of hydrophilic paper separated by hydrophobic lines or "walls"; these patterns provide spatial control of biological fluids and enable fluid transport, without pumping, due to capillary action in the millimeter-sized channels produced.
This method for patterning paper makes it possible to run multiple diagnostic assays on one strip of paper, while still using only small volumes of a single sample. In a fully developed technology, patterned photoresist would be replaced by an appropriate printing technology, but patterning paper with photoresist is: i) convenient for prototyping these devices, and ii) a useful new micropatterning technology in its own right. We patterned chromatography paper with SU-8 2010 photoresist as shown in Figure 1a and as described below: we soaked a 7.5-cm diameter piece of chromatography paper in 2 mL of SU-8 2010 for 30 s, spun it at 2000 rpm for 30 s, and then baked it at 95 °C for 5 min to remove the cyclopentanone in the SU-8 formula. We then exposed the photoresist and paper to 405 nm UV light (50 mW/cm2) for 10 s through a photo-mask (CAD/Art Services, Inc.) that was aligned using a mask aligner (OL-2 Mask Aligner, AB-M, Inc). After exposure, we baked the paper a second time at 95 °C for 5 min to cross-link the exposed portions of the resist. The unpolymerized photoresist was removed by soaking the paper in propylene glycol monomethyl ether acetate (PGMEA) (5 min), and by washing the pattern with propan-2-ol (3 × 10 mL). The paper was more hydrophobic after it was patterned, presumably due to residual resist bound to the paper, so we exposed the entire surface to an oxygen plasma for 10 s at 600 millitorr (SPI Plasma-Prep II, Structure Probe, Inc) to increase the hydrophilicity of the paper (Figures 2a and 2b). Figure 1 caption: Chromatography paper patterned with photoresist. The darker lines are cured photoresist; the lighter areas are unexposed paper. (a) Patterned paper after absorbing 5 μL of Waterman red ink by capillary action. The central channel absorbs the sample ... Figure 2 caption: Assays contaminated with (a) dirt, (b) plant pollen, and (c) graphite powder.
The pictures were taken before and after running an artificial urine solution that contained 550 mM glucose and 75 μM BSA. The particulates do not move up the channels ... The patterned paper can be derivatized for biological assays by adding appropriate reagents to the test areas (Figures 1b and 2b). In this communication, we demonstrate the method by detecting glucose and protein,[8] but the surface should be suitable for measuring many other analytes as well.[7] The glucose assay is based on the enzymatic oxidation of iodide to iodine,[9] where a color change from clear to brown is associated with the presence of glucose.[10] The protein assay is based on the color change of tetrabromophenol blue (TBPB) when it ionizes and binds to proteins;[11] a positive result in this case is indicated by a color change from yellow to blue. For the glucose assay, we spotted 0.3 μL of a 0.6 M solution of potassium iodide, followed by 0.3 μL of a 1:5 horseradish peroxidase/glucose oxidase solution (15 units of protein per mL of solution). For the protein assay, we spotted 0.3 μL of a 250-mM citrate buffer (pH 1.8) in a well separate from the glucose assay, and then layered 0.3 μL of a 3.3 mM solution of tetrabromophenol blue (TBPB) in 95% ethanol over the citrate buffer. The spotted reagents were allowed to air dry at room temperature. This pre-loaded paper gave consistent results for the protein assay regardless of storage temperature and time (when stored for 15 d both at 0 °C and at 23 °C, wrapped in aluminum foil). The glucose assay was sensitive to storage conditions, and showed decreased signal for assays run 24 h after spotting the reagents (when stored at 23 °C); when stored at 0 °C, however, the glucose assay was as sensitive after day 15 as it was on day 1.
We measured artificial samples of glucose and protein in clinically relevant ranges (2.5-50 mM for glucose and 0.38-7.5 μM for bovine serum albumin (BSA))[12, 13] by dipping the bottom of each test strip in 5 μL of a pre-made test solution (Figure 2d). The fluid filled the entire pattern within ca. one minute, but the assays required 10-11 min for the paper to dry and for the color to fully develop.[14] In all cases, we observed color changes corresponding roughly in intensity to the amount of glucose and protein in the test samples, where the lowest concentrations define the lower limits to which these assays can be used (Figure 2e). For comparison, commercially-available dipsticks detect glucose at concentrations as low as 5 mM[7, 9] and protein as low as 0.75 μM;[6, 15] these limits indicate that these paper-based assays are comparable in sensitivity to commercial dipstick assays. Our assay format also allows for the measurement of multiple analytes. This paper-based assay is suitable for measuring multiple samples in parallel and in a relatively short period of time. For example, in one trial, one researcher was able to run 20 different samples (all with 550 mM glucose and 75 μM BSA) within 7.5 min (followed by another 10.5 min for the color to fully develop). An 18-min assay of this type, one capable of measuring two analytes in 20 different samples, may be efficient enough to use in high-throughput screens of larger sample pools. In the field, samples will not be measured under sterile conditions, and dust and dirt may contaminate the assays. The combination of paper and capillary action provides a mechanism for separating particulates from a biological fluid. As a demonstration, we purposely contaminated the artificial urine samples with quantities of dirt, plant pollen, and graphite powder at levels higher than we might expect to see in the samples in the field.
These particulates do not move up the channels under the action of capillary wicking, and do not interfere with the assay (Figure 3). Paper strips have been used in biomedical assays for decades because they offer an inexpensive platform for colorimetric chemical testing.[1] Patterned paper has characteristics that lead to miniaturized assays that run by capillary action (e.g., without external pumping), with small volumes of fluids. These methods suggest a path for the development of simple, inexpensive, and portable diagnostic assays that may be useful in remote settings, and in particular, in less-industrialized countries where simple assays are becoming increasingly important for detecting disease and monitoring health,[16, 17] for environmental monitoring, in veterinary and agricultural practice and for other applications. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This article describes FLASH (Fast Lithographic Activation of Sheets), a rapid method for laboratory prototyping of microfluidic devices in paper. Paper-based microfluidic devices are emerging as a new technology for applications in diagnostics for the developing world, where low cost and simplicity are essential. FLASH is based on photolithography, but requires only a UV lamp and a hotplate; no clean-room or special facilities are required (FLASH patterning can even be performed in sunlight if a UV lamp and hotplate are unavailable). The method provides channels in paper with dimensions as small as 200 µm in width and 70 µm in height; the height is defined by the thickness of the paper. Photomasks for patterning paper-based microfluidic devices can be printed using an ink-jet printer or photocopier, or drawn by hand using a waterproof black pen.
FLASH provides a straightforward method for prototyping paper-based microfluidic devices in regions where the technological support for conventional photolithography is not available. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This article describes a method for fabricating 3D microfluidic devices by stacking layers of patterned paper and double-sided adhesive tape. Paper-based 3D microfluidic devices have capabilities in microfluidics that are difficult to achieve using conventional open-channel microsystems made from glass or polymers. In particular, 3D paper-based devices wick fluids and distribute microliter volumes of samples from single inlet points into arrays of detection zones (with numbers up to thousands). This capability makes it possible to carry out a range of new analytical protocols simply and inexpensively (all on a piece of paper) without external pumps. We demonstrate a prototype 3D device that tests 4 different samples for up to 4 different analytes and displays the results of the assays in a side-by-side configuration for easy comparison. Three-dimensional paper-based microfluidic devices are especially appropriate for use in distributed healthcare in the developing world and in environmental monitoring and water analysis. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper-based microfluidic patterns have been demonstrated in recent literature to have a significant potential in developing low-cost analytical devices for telemedicine and general health monitoring. This study reports a new method for making microfluidic patterns on a paper surface using plasma treatment. Paper was first hydrophobized and then treated using plasma in conjunction with a mask. This formed well defined hydrophilic channels on the paper. 
Paper-based microfluidic systems produced in this way retained the flexibility of paper and a variety of patterns could be formed. A major advantage of this system is that simple functional elements such as switches and filters can be built into the patterns. Examples of these elements are given in this study. <s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> OBJECTIVE: To assess the effect of self-monitoring of blood glucose (SMBG) on glycaemic control in non-insulin treated patients with type 2 diabetes by means of a systematic review and meta-analysis. RESEARCH DESIGN AND METHODS: MEDLINE and the Cochrane Controlled Trials Register were searched from inception to January 2009 for randomised controlled trials comparing SMBG with non-SMBG or more frequent SMBG with less intensive SMBG. Electronic searches were supplemented by manual searching of reference lists and reviews. The comparison of SMBG with non-SMBG was the primary, the comparison of more frequent SMBG with less intensive SMBG the secondary analysis. Stratified analyses were performed to evaluate modifying factors. MAIN OUTCOME MEASURES: The primary endpoint was glycated haemoglobin A(1c) (HbA(1c)), secondary outcomes included fasting glucose and the occurrence of hypoglycaemia. Using random effects models a weighted mean difference (WMD) was calculated for HbA(1c) and a risk ratio (RR) was calculated for hypoglycaemia. Due to considerable heterogeneity, no combined estimate was computed for fasting glucose. RESULTS: Fifteen trials (3270 patients) were included in the analyses. SMBG was associated with a larger reduction in HbA(1c) compared with non-SMBG (WMD -0.31%, 95% confidence interval -0.44 to -0.17). The beneficial effect associated with SMBG was not attenuated over longer follow-up. SMBG significantly increased the probability of detecting a hypoglycaemia (RR 2.10, 1.37 to 3.22).
More frequent SMBG did not result in significant changes of HbA(1c) compared with less intensive SMBG (WMD -0.21%, 95% CI -0.57 to 0.15). CONCLUSIONS: SMBG compared with non-SMBG is associated with significantly improved glycaemic control in non-insulin treated patients with type 2 diabetes. The added value of more frequent SMBG compared with less intensive SMBG remains uncertain. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Here we present a simple and low-cost production method to generate paper-based microfluidic devices with wax for portable bioassay. The wax patterning method we introduced here included three different ways: (i) painting with a wax pen, (ii) printing with an inkjet printer followed by painting with a wax pen, (iii) printing by a wax printer directly. The whole process was easy to operate and could be finished within 5-10 min without the use of a clean room, UV lamp, organic solvent, etc. Horseradish peroxidase, BSA and glucose assays were conducted to verify the performance of wax-patterned paper. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This technical note describes a detailed study on wax printing, a simple and inexpensive method for fabricating microfluidic devices in paper using a commercially available printer and hot plate. The printer prints patterns of solid wax on the surface of the paper, and the hot plate melts the wax so that it penetrates the full thickness of the paper. This process creates complete hydrophobic barriers in paper that define hydrophilic channels, fluid reservoirs, and reaction zones. The design of each device was based on a simple equation that accounts for the spreading of molten wax in paper.
<s> BIB010 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Objectives: The aim of this study was to assess serum ischemia modified albumin (IMA) in type 2 diabetes patients and determine its correlation with other risk factors for chronic complications such as inflammation and hyperglycemia. Design and methods: Fasting glucose, glycated albumin, total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides, creatinine, uric acid, albumin, lactic acid, high-sensitivity C-reactive protein (hs-CRP) and IMA were measured in 80 patients with type 2 diabetes and 26 controls. Results: Fasting glucose, glycated albumin, triglycerides, creatinine, IMA and hs-CRP were significantly higher in patients with type 2 diabetes. Weak but significant correlations were observed between IMA and fasting glucose, IMA and hs-CRP, hs-CRP and HDL cholesterol, and hs-CRP and fasting glucose. Conclusions: We have shown higher levels of IMA and hs-CRP in type 2 diabetes. Hyperglycemia and inflammation reduce the capacity of albumin to bind cobalt, resulting in higher IMA levels. <s> BIB011 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> BACKGROUND: Self-monitoring of Blood Glucose (SMBG) is purported to improve glycaemic control, measured by glycosylated haemoglobin (HbA1c). The effectiveness of SMBG in type 2 diabetes mellitus (T2DM) is well-documented, though no systematic review of the economic evidence surrounding the use of SMBG in T2DM has been performed. OBJECTIVES: To perform a systematic review of economic evaluations of SMBG in T2DM patients. INCLUSION CRITERIA: All adult patients suffering from T2DM were included. Outcomes of differing treatment groups, where specified, were also recorded. Studies which examined SMBG as an intervention to control blood glucose were considered.
To be included, studies must have made a formal attempt to relate cost to outcome data in a cost-effectiveness or cost-utility analysis. The main outcomes were in terms of cost-effectiveness and cost-utility. SEARCH STRATEGY: Extensive electronic searches were conducted. Searching was carried out, for the time period 1990 to January 2009, for full-text papers and conference abstracts. METHODOLOGICAL QUALITY: Methodological quality of included studies was assessed by two reviewers using the standard critical appraisal tools from the JBI-Actuari (Joanna Briggs Institute-Analysis of Cost, Technology and Utilisation Assessment and Review Instrument). Included modelling studies were also assessed using the review criteria of economic models set out by Phillips and colleagues. DATA COLLECTION: Data from included studies were extracted using the JBI-Actuari extraction tool. DATA SYNTHESIS: Studies were grouped by outcome measure and summarised using tabular and narrative formats. RESULTS: Five studies met the review criteria. Three were model-based analyses assessing long-term cost-effectiveness of SMBG, all of which concluded that SMBG was cost-effective. Two further primary economic evaluations assessed short-term cost-effectiveness. Their results found SMBG to be associated with increased cost and no significant reduction in HbA1c. The studies examined subgroups in terms of their treatment protocols and SMBG was considered more likely to be cost-effective in drug and insulin treated groups compared to diet and exercise groups. CONCLUSIONS: Economic evidence surrounding SMBG in T2DM remains unclear. For the most part, included studies found SMBG to be cost-effective, though analyses are extremely sensitive to relative effects, time-frame of analyses and model assumptions. Whilst large uncertainty exists, SMBG may be cost-effective in certain subgroups, e.g.
drug and insulin-treated patients. IMPLICATIONS FOR PRACTICE: There is no strong evidence to recommend the regular use of SMBG in well-controlled diabetes patients treated only with diet and exercise programmes. The evidence does offer support for SMBG in drug and insulin treated T2DM. It is recommended that clinicians select appropriate patients for SMBG, from these groups, based on their domain expertise. IMPLICATIONS FOR RESEARCH: Large-scale prospective RCTs of SMBG, particularly in drug and insulin treated patients, with well-conducted economic evaluations performed alongside them, will enable a more accurate estimation of the cost-effectiveness of SMBG. The optimal frequency and administration of SMBG is still unknown and is another area that warrants further research. <s> BIB012 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The interest in low-cost microfluidic platforms as well as emerging microfabrication techniques has increased considerably over the last few years. Toner- and paper-based techniques have appeared as two of the most promising platforms for the production of disposable devices for on-chip applications. This review focuses on recent advances in the fabrication techniques and in the analytical/bioanalytical applications of toner and paper-based devices. The discussion is divided into two parts dealing with (i) toner and (ii) paper devices. Examples of miniaturized devices fabricated by using direct-printing or toner transfer masking in polyester-toner, glass, PDMS as well as conductive platforms such as recordable compact disks and printed circuit boards are presented. The construction and the use of paper-based devices for off-site diagnosis and bioassays are also described to cover this emerging platform for low-cost diagnostics.
<s> BIB013 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report the use of paper-based microfluidic devices fabricated from a novel polymer blend for the monitoring of urinary ketones, glucose, and salivary nitrite. Paper-based devices were fabricated via photolithography in less than 3 min and were immediately ready for use for these diagnostically relevant assays. Patterned channels on filter paper as small as 90 μm wide with barriers as narrow as 250 μm could be reliably patterned to permit and block fluid wicking, respectively. Colorimetric assays for ketones and nitrite were adapted from the dipstick format to this paper microfluidic chip for the quantification of acetoacetate in artificial urine, as well as nitrite in artificial saliva. Glucose assays were based on those previously demonstrated (Martinez et al., Angew Chem Int Ed 8:1318-1320, 1; Martinez et al., Anal Chem 10:3699-3707, 2; Martinez et al., Proc Nat Acad Sci USA 50:19606-19611, 3; Lu et al., Electrophoresis 9:1497-1500, 4; Abe et al., Anal Chem 18:6928-6934, 5). Reagents were spotted on the detection pad of the paper device and allowed to dry prior to spotting of samples. The ketone test was a two-step reaction requiring a derivatization step between the sample spotting pad and the detection pad, thus for the first time, confirming the ability of these paper devices to perform online multi-step chemical reactions. Following the spotting of the reagents and sample solution onto the paper device and subsequent drying, color images of the paper chips were recorded using a flatbed scanner, and images were converted to CMYK format in Adobe Photoshop CS4 where the intensity of the color change was quantified using the same software.
The limit of detection (LOD) for acetoacetate in artificial urine was 0.5 mM, while the LOD for salivary nitrite was 5 μM, placing both of these analytes within the clinically relevant range for these assays. Calibration curves for urinary ketone (5 to 16 mM) and salivary nitrite (5 to 2,000 μM) were generated. The time of device fabrication to the time of test results was about 25 min. <s> BIB014 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This paper describes the fabrication and the performance of microfluidic paper-based electrochemical sensing devices (we call the microfluidic paper-based electrochemical devices, µPEDs). The µPEDs comprise paper-based microfluidic channels patterned by photolithography or wax printing, and electrodes screen-printed from conducting inks (e.g., carbon or Ag/AgCl). We demonstrated that the µPEDs are capable of quantifying the concentrations of various analytes (e.g., heavy-metal ions and glucose) in aqueous solutions. This low-cost analytical device should be useful for applications in public health, environmental monitoring, and the developing world. <s> BIB015 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This article describes the use of microfluidic paper-based analytical devices (μPADs) to perform quantitative chemical assays with internal standards. μPADs are well-suited for colorimetric biochemical assays; however, errors can be introduced from the background color of the paper due to batch differences and age, and from color measurement devices. To reduce errors from these sources, a series of standard analyte solutions and the sample solution are assayed on a single device with multiple detection zones simultaneously; an analyte concentration calibration curve can thus be established from the standards.
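The calibration workflow running through these colorimetric assays (quantify the color intensity of each zone, fit intensity against the standard concentrations, then read unknowns off the curve) can be sketched in a few lines. The concentrations and intensities below are invented for illustration; they are not data from the cited studies.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical nitrite standards (uM) and their measured color intensities
conc = [5.0, 50.0, 500.0, 1000.0, 2000.0]
intensity = [2.1, 20.5, 201.0, 399.8, 801.2]

slope, intercept = linear_fit(conc, intensity)

# Invert the calibration curve to estimate an unknown sample's concentration
sample_intensity = 120.0
estimated_conc = (sample_intensity - intercept) / slope  # ~299 uM here
```

The same inversion step is what the on-device internal standards enable: because standards and sample are measured under identical conditions, the fitted curve already absorbs paper- and scanner-dependent offsets.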
Since the μPAD design allows the colorimetric measurements of the standards and the sample to be conducted simultaneously and under the same conditions, errors from the above sources can be minimized. The analytical approach reported in this work shows that μPADs can perform quantitative chemical analysis at very low cost. <s> BIB016 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This Technical Note demonstrates a simple method based on flexographic printing of polystyrene to form liquid guiding boundaries and layers on paper substrates. The method allows formation of hydrophobic barrier structures that partially or completely penetrate through the substrate. This unique property enables one to form very thin fluidic channels on paper, leading to reduced sample volumes required in point-of-care diagnostic devices. The described method is compatible with roll-to-roll flexography units found in many printing houses, making it an ideal method for large-scale production of paper-based fluidic structures.
The combination of sample collection from surfaces and paper spray ionization also enables fast chemical screening at high sensitivity, for example 100 pg of heroin distributed on a surface and agrochemicals on fruit peels are detectable. Online derivatization with a preloaded reagent is demonstrated for analysis of cholesterol in human serum. The combination of paper spray with miniature mass spectrometers offers a powerful impetus to wide application of mass spectrometry in nonlaboratory environments. <s> BIB018 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A novel, ultra low-cost surface enhanced Raman spectroscopy (SERS) substrate has been developed by modifying the surface chemistry of cellulose paper and patterning nanoparticle arrays, all with a consumer inkjet printer. Micro/nanofabrication of SERS substrates for on-chip chemical and biomolecular analysis has been under intense investigation. However, the high cost of producing these substrates and the limited shelf life severely limit their use, especially for routine laboratory analysis and for point-of-sample analysis in the field. Paper-based microfluidic biosensing systems have shown great potential as low-cost disposable analysis tools. In this work, this concept is extended to SERS-based detection. Using an inexpensive consumer inkjet printer, cellulose paper substrates are modified to be hydrophobic in the sensing regions. Synthesized silver nanoparticles are printed onto this hydrophobic paper substrate with microscale precision to form sensing arrays. The hydrophobic surface prevents the aque... <s> BIB019 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract We describe the development of a highly stable and sensitive glucose biosensor based on the nanohybrid materials derived from gold nanoparticles (AuNPs) and multi-walled carbon nanotubes (MWCNT). 
The biosensing platform was developed by using layer-by-layer (LBL) self-assembly of the nanohybrid materials and the enzyme glucose oxidase (GOx). A high density of AuNPs and MWCNT nanocomposite materials was constructed by alternate self-assembly of thiol-functionalized MWCNTs and AuNPs, followed by chemisorption of GOx. The surface morphology of the multilayered AuNPs/MWCNT structure was characterized by field-emission scanning electron microscopy (FE-SEM), and the surface coverage of AuNPs was investigated by cyclic voltammetry (CV), showing that 5 layers of assembly achieve the maximum particle density on the electrode. The immobilization of GOx was monitored by electrochemical impedance spectroscopy (EIS). CV and amperometry methods were used to study the electrochemical oxidation of glucose at physiological pH 7.4. The Au electrode modified with five layers of AuNPs/MWCNT composites and GOx exhibited an excellent electrocatalytic activity towards oxidation of glucose, which presents a wide linear range from 20 μM to 10 mM, with a sensitivity of 19.27 μA mM−1 cm−2. The detection limit of the present modified electrode was found to be 2.3 μM (S/N = 3). In addition, the resulting biosensor showed a faster amperometric current response (within 3 s) and a low apparent Michaelis–Menten constant (K_m^app). Our present study shows that the high density of AuNPs-decorated MWCNT is a promising nanohybrid material for the construction of enzyme-based electrochemical biosensors. <s> BIB020 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Aim: To assess whether self-monitoring of quantitative urine glucose or blood glucose is effective, convenient and safe for glycaemic control in non-insulin treated type 2 diabetes.
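Figures of merit like the 19.27 μA mM−1 cm−2 sensitivity and the 2.3 μM detection limit (S/N = 3) quoted above derive from the slope of an amperometric calibration curve. A minimal sketch of that arithmetic, using an assumed slope, electrode area, and baseline noise rather than the authors' measured values:

```python
def amperometric_figures_of_merit(slope_uA_per_mM, electrode_area_cm2, noise_uA):
    """Area-normalized sensitivity and S/N = 3 detection limit."""
    sensitivity = slope_uA_per_mM / electrode_area_cm2  # uA mM^-1 cm^-2
    lod_mM = 3 * noise_uA / slope_uA_per_mM             # 3-sigma criterion
    return sensitivity, lod_mM

# Hypothetical values: raw calibration slope, electrode area, baseline noise
sens, lod = amperometric_figures_of_merit(
    slope_uA_per_mM=1.36, electrode_area_cm2=0.0707, noise_uA=0.001
)
# sens is ~19.2 uA mM^-1 cm^-2; lod is ~0.0022 mM (~2.2 uM)
```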
Methods: Adults with non-insulin treated type 2 diabetes were recruited and randomized into three groups: Group A, self-monitoring with a quantitative urine glucose meter (n = 38); Group B, self-monitoring with a blood glucose meter (n = 35); Group C, the control group without self-monitoring (n = 35). All patients were followed up for six months, during which identical diabetes care was provided. Results: There was a significant decrease in HbA1c within each group. Conclusions: This study suggests that self-monitoring of urine glucose has comparable efficacy on glycaemic control, and facilitates better compliance than blood self-monitoring, without influencing the quality of life or risk of hypoglycaemia. <s> BIB021 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Wax screen-printing as a low-cost, simple, and rapid method for fabricating paper-based microfluidic devices (µPADs) is reported here. Solid wax was rubbed through a screen onto paper filters. The printed wax was then melted into the paper to form hydrophobic barriers using only a hot plate. We first studied the relationship between the width of a hydrophobic barrier and the width of the original design line. We also optimized the heating temperature and time and determined the resolution of structures fabricated using this technique. The minimum width of hydrophilic channel and hydrophobic barrier is 650 and 1300 µm, respectively. Next, our fabrication method was compared to a photolithographic method using the reaction between bicinchoninic acid (BCA) and Cu1+ to demonstrate differences in background reactivity. Photolithographically defined channels exhibited a high background while wax-printed channels showed a very low background.
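The barrier-width relationship studied above is commonly summarized in the wax-patterning literature as simple geometric bookkeeping: molten wax spreads laterally a distance L on each side of a printed line, widening barriers and narrowing the channels left between them. The spreading distance below is an assumed, illustrative value, not a parameter from the cited study.

```python
def barrier_width(printed_width_um, spread_um):
    """Final hydrophobic barrier: printed line plus wax spreading on both sides."""
    return printed_width_um + 2 * spread_um

def channel_width(gap_width_um, spread_um):
    """Hydrophilic channel left between two printed lines after spreading."""
    return gap_width_um - 2 * spread_um

L = 325.0  # assumed lateral spreading of molten wax, in um

# With this L, a 650 um printed line yields a 1300 um barrier,
# and a 1300 um gap between lines leaves a 650 um open channel.
b = barrier_width(650.0, L)
c = channel_width(1300.0, L)
```

Designs are then laid out in the printed artwork with lines narrowed and gaps widened by 2L so the post-baking dimensions come out as intended.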
Finally, the utility of wax screen-printing was demonstrated for the simultaneous determination of glucose and total iron in control human serum samples using an electrochemical method with glucose oxidase and a colorimetric method with 1,10-phenanthroline. This study demonstrates that wax screen-printing is an easy-to-use and inexpensive alternative fabrication method for µPADs, which will be especially useful in developing countries. <s> BIB022 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A reaction plate mimicking the working principle of a conventional three-dimensional microplate was printed on various hydrophilic paper substrates. Planar well patterns with high wetting/non-wetting contrast were formed using hydrophobic polydimethylsiloxane (PDMS)-based ink with fast curing time, which enables truly low-cost roll-to-roll fabrication. The formation and functionality of the printed reaction arrays were verified by two proof-of-concept demonstrations. Firstly, a colorimetric glucose sensor, based on an enzymatic reaction sequence involving glucose oxidase, was screen-printed on the reaction plate. A detection limit of 0.1 mg/mL and a fairly linear sensor response was obtained on a logarithmic scale. Secondly, the employment of the reaction plate for electrical applications was demonstrated by modulating the resistance of a drop-cast polyaniline film as a function of pH.
We created patterns down to a minimum feature size of 62±1 µm. The modified surface exhibited a highly porous structure which helped to trap/localize chemical and biological aqueous reagents for analysis. The treated surfaces were stable over time and were used to self-assemble arrays of aqueous droplets. Furthermore, we selectively deposited silica microparticles on patterned areas to allow lateral diffusion from one end of a channel to the other. Finally, we demonstrated the applicability of this platform to perform chemical reactions using luminol-based hemoglobin detection. <s> BIB024 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, the chemiluminescence (CL) method was combined with a microfluidic paper-based analytical device (μPAD) to establish a novel CL μPAD biosensor for the first time. This novel CL μPAD biosensor was based on an enzyme reaction, which produced H2O2 while decomposing the substrate, and the CL reaction between a rhodanine derivative and the generated H2O2 in an acid medium. Microchannels in the μPAD were fabricated by a cutting method, and the possible CL assay principle of this CL μPAD biosensor is explained. A rhodanine derivative system was used to achieve high sensitivity and a well-defined signal for this CL μPAD biosensor, and the optimum reaction conditions were investigated. The quantitative determination of uric acid could be achieved by this CL μPAD biosensor with accurate and satisfactory results, and the biosensor could provide reproducible results upon storage at 4°C for at least 10 weeks. The successful integration of the μPAD and the CL reaction made the final biosensor inexpensive, easy-to-use, low-volume, and portable for uric acid determination, which also greatly reduces the cost and increases the efficiency required for an analysis. We believe this simple, practical CL μPAD biosensor will be of interest for use in areas such as disease diagnosis.
<s> BIB025 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this study, a novel microfluidic paper-based chemiluminescence analytical device (μPCAD) with a simultaneous, rapid, sensitive and quantitative response for glucose and uric acid was designed. This novel lab-on-paper biosensor is based on oxidase enzyme reactions (glucose oxidase and urate oxidase, respectively) and the chemiluminescence reaction between a rhodanine derivative and generated hydrogen peroxide in an acid medium. The possible chemiluminescence assay principle of this μPCAD is explained. We found that the simultaneous determination of glucose and uric acid could be achieved by differing the distances that the glucose and uric acid samples traveled. This lab-on-paper biosensor could provide reproducible results upon storage at 4 °C for at least 10 weeks. The application test of our μPCAD was then successfully performed with the simultaneous determination of glucose and uric acid in artificial urine. This study shows the successful integration of the μPCAD and the chemiluminescence method will be an easy-to-use, inexpensive, and portable alternative for point-of-care monitoring. <s> BIB026 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This paper describes the first approach at combining paper microfluidics with electrochemiluminescent (ECL) detection. Inkjet printing is used to produce paper microfluidic substrates which are combined with screen-printed electrodes (SPEs) to create simple, cheap, disposable sensors which can be read without a traditional photodetector. The sensing mechanism is based on the orange luminescence due to the ECL reaction of tris(2,2′-bipyridyl)ruthenium(II) (Ru(bpy)32+) with certain analytes. 
Using a conventional photodetector, 2-(dibutylamino)ethanol (DBAE) and nicotinamide adenine dinucleotide (NADH) could be detected to levels of 0.9 μM and 72 μM, respectively. Significantly, a mobile camera phone can also be used to detect the luminescence from the sensors. By analyzing the red pixel intensity in digital images of the ECL emission, a calibration curve was constructed demonstrating that DBAE could be detected to levels of 250 μM using the phone. <s> BIB027 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A surface acoustic wave-based sample delivery and ionization method that requires minimal to no sample pretreatment and that can operate under ambient conditions is described. This miniaturized technology enables real-time, rapid, and high-throughput analysis of trace compounds in complex mixtures, especially high ionic strength and viscous samples that can be challenging for conventional ionization techniques such as electrospray ionization. This technique takes advantage of high order surface acoustic wave (SAW) vibrations that both manipulate small volumes of liquid mixtures containing trace analyte compounds and seamlessly transfers analytes from the liquid sample into gas phase ions for mass spectrometry (MS) analysis. Drugs in human whole blood and plasma and heavy metals in tap water have been successfully detected at nanomolar concentrations by coupling a SAW atomization and ionization device with an inexpensive, paper-based sample delivery system and mass spectrometer. The miniaturized SAW ioniza... <s> BIB028 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, we first employ a drying method combining with the bienzyme colorimetric detection of glucose and uric acid on microfluidic paper-based analysis devices (μPADs). The channels of 3D μPADs are also designed by us to get better results. 
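Camera-based readouts like the red-pixel ECL analysis above reduce to averaging one color channel over the detection zone. A library-free sketch on a toy pixel array (a real implementation would read the pixels from a photograph of the device):

```python
def mean_channel(pixels, channel=0):
    """Average one color channel (0 = R, 1 = G, 2 = B) over a list of RGB pixels."""
    return sum(p[channel] for p in pixels) / len(pixels)

# Toy 2x2 detection zone: (R, G, B) tuples as a phone camera might capture
zone = [(200, 40, 10), (190, 38, 12), (210, 45, 9), (200, 41, 11)]

# Higher mean red intensity corresponds to stronger orange ECL emission
red_signal = mean_channel(zone, channel=0)
```

The resulting per-zone intensity is then fed into a calibration curve against known standards, exactly as with scanner-based colorimetry.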
The color results are recorded by both Gel Documentation systems and a common camera. By using Gel Documentation systems, the limits of detection (LOD) of glucose and uric acid are 3.81 × 10^-5 M and 4.31 × 10^-5 M, respectively, one order of magnitude lower than those of the reported methods on μPADs. By using a common camera, the limits of detection (LOD) of glucose and uric acid are 2.13 × 10^-4 M and 2.87 × 10^-4 M, respectively. Furthermore, the effects of detection conditions have been investigated and discussed comprehensively. Human serum samples are detected with satisfactory results, which are comparable with the clinical testing results. A low-cost, simple and rapid colorimetric method for the simultaneous detection of glucose and uric acid on the μPADs has been developed with enhanced sensitivity. <s> BIB029 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, a robust approach for highly sensitive point-of-care virus detection was established based on immunomagnetic nanobeads and fluorescent quantum dots (QDs). Taking advantage of immunomagnetic nanobeads functionalized with the monoclonal antibody (mAb) to the surface protein hemagglutinin (HA) of avian influenza virus (AIV) H9N2 subtype, H9N2 viruses were efficiently captured through antibody affinity binding, without pretreatment of samples. The capture kinetics could be fitted well with a first-order bimolecular reaction with a high capturing rate constant k_f of 4.25 × 10^9 (mol/L)^-1 s^-1, which suggested that the viruses could be quickly captured by the well-dispersed and comparable-size immunomagnetic nanobeads. In order to improve the sensitivity, high-luminance QDs conjugated with streptavidin (QDs-SA) were introduced to this assay through the high affinity biotin-streptavidin system by using the biotinylated mAb in an immuno sandwich mode. We ensured the selective binding of QDs-SA to the ...
<s> BIB030 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A novel 3D microfluidic paper-based immunodevice, integrated with blood plasma separation from whole blood samples, automation of rinse steps, and multiplexed CL detection, was developed for the first time based on the principle of origami (denoted as origami-based device). This 3D origami-based device, comprised of one test pad surrounded by four folding tabs, could be patterned and fabricated by wax-printing on paper in bulk. In this work, a sandwich-type chemiluminescence (CL) immunoassay was introduced into this 3D origami-based immunodevice, which could separate the operational procedures into several steps including (i) folding pads above/below and (ii) addition of reagent/buffer under a specific sequence. The CL behavior, blood plasma separation, washing protocol, and incubation time were investigated in this work. The developed 3D origami-based CL immunodevice, combined with a typical luminol-H2O2 CL system and catalyzed by Ag nanoparticles, showed excellent analytical performance for the simultaneous detection of four tumor markers. Whole blood samples were assayed and the results obtained were in agreement with the reference values from the parallel single-analyte test. This paper-based microfluidic origami CL detection system provides a new strategy for a low-cost, sensitive, simultaneous multiplex immunoassay and point-of-care diagnostics.
Finally, our opinions regarding future trends in this field are discussed. <s> BIB032 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Dipstick and lateral-flow formats have dominated rapid diagnostics over the last three decades. These formats gained popularity in the consumer markets due to their compactness, portability and facile interpretation without external instrumentation. However, the lack of quantitation in measurements has challenged the demand of existing assay formats in consumer markets. Recently, paper-based microfluidics has emerged as a multiplexable point-of-care platform which might transcend the capabilities of existing assays in resource-limited settings. However, paper-based microfluidics can enable fluid handling and quantitative analysis for potential applications in healthcare, veterinary medicine, environmental monitoring and food safety. Currently, in its early development stages, paper-based microfluidics is considered a low-cost, lightweight, and disposable technology. The aim of this review is to discuss: (1) fabrication of paper-based microfluidic devices, (2) functionalisation of microfluidic components to increase the capabilities and the performance, (3) introduction of existing detection techniques to the paper platform and (4) exploration of extracting quantitative readouts via handheld devices and camera phones. Additionally, this review includes challenges to scaling up, commercialisation and regulatory issues. The factors which prevent paper-based microfluidic devices from becoming real-world products, and future directions, are also identified.
Composite Fe3O4/Au nanoparticles have attracted considerable interest in diagnostic applications due to their unique physical and chemical properties. Here, we developed a simple coating procedure for gold magnetic nanoparticles (GMNs) with poly(acrylic acid) (PAA). PAA-coated GMNs (PGMNs) were stable and monodisperse, and were characterized by Fourier transform-infrared spectroscopy (FT-IR), transmission electron microscopy, UV-visible scanning spectrophotometry, thermogravimetric analysis, and Zetasizer methodologies. For diagnostic application, we established a novel lateral flow immunoassay (LFIA) strip test system where recombinant Treponema pallidum antigens (r-Tp) were conjugated with PGMNs to construct a particle probe for detection of anti-Tp antibodies. Intriguingly, the particle probes specifically identified Tp antibodies with a detection limit as low as 1 national clinical unit/mL (NCU/mL). An ample pool of 1020 serum samples from three independent hospitals was obtained to assess our PGMNs-based LFIA strips, which exhibited substantially high values of sensitivity and specificity for all clinical tests (higher than 97%) and, therefore, proved to be a suitable approach for syphilis screening in a point-of-care manner. <s> BIB034 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The impact of detecting multiple infectious diseases simultaneously at point-of-care with good sensitivity, specificity, and reproducibility would be enormous for containing the spread of diseases in both resource-limited and rich countries. Many barcoding technologies have been introduced for addressing this need as barcodes can be applied to detecting thousands of genetic and protein biomarkers simultaneously. However, the assay process is not automated and is tedious and requires skilled technicians. Barcoding technology is currently limited to use in resource-rich settings.
Here we used magnetism and microfluidics technology to automate the multiple steps in a quantum dot barcode assay. The quantum dot-barcoded microbeads are sequentially (a) introduced into the chip, (b) magnetically moved to a stream containing target molecules, (c) moved back to the original stream containing secondary probes, (d) washed, and (e) finally aligned for detection. The assay requires 20 min, has a limit of detection of ... <s> BIB035 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Human exposure to particulate matter (PM) air pollution has been linked with respiratory, cardiovascular, and neurodegenerative diseases, in addition to various cancers. Consistent among all of these associations is the hypothesis that PM induces inflammation and oxidative stress in the affected tissue. Consequently, a variety of assays have been developed to quantify the oxidative activity of PM as a means to characterize its ability to induced oxidative stress. The vast majority of these assays rely on high-volume, fixed-location sampling methods due to limitations in assay sensitivity and detection limit. As a result, our understanding of how personal exposure contributes to the intake of oxidative air pollution is limited. To further this understanding, we present a microfluidic paper-based analytical device (μPAD) for measuring PM oxidative activity on filters collected by personal sampling. The μPAD is inexpensive to fabricate and provides fast and sensitive analysis of aerosol oxidative activity. T... <s> BIB036 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper-based analytical devices (PADs) represent a growing class of elegant, yet inexpensive chemical sensor technologies designed for point-of-use applications. Most PADs, however, still utilize some form of instrumentation such as a camera for quantitative detection. 
We describe here a simple technique to render PAD measurements more quantitative and straightforward using the distance of colour development as a detection motif. The so-called distance-based detection enables PAD chemistries that are more portable and less resource intensive compared to classical approaches that rely on the use of peripheral equipment for quantitative measurement. We demonstrate the utility and broad applicability of this technique with measurements of glucose, nickel, and glutathione using three different detection chemistries: enzymatic reactions, metal complexation, and nanoparticle aggregation, respectively. The results show excellent quantitative agreement with certified standards in complex sample matrices. This work provides the first demonstration of distance-based PAD detection with broad application as a class of new, inexpensive sensor technologies designed for point-of-use applications. <s> BIB037 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, we reported a simple, rapid, and point-of-care magnetic immunofluorescence assay for avian influenza virus (AIV) and developed a portable experimental setup equipped with an optical fiber spectrometer and a microfluidic device. We achieved the integration of immunomagnetic target capture, concentration, and fluorescence detection in the microfluidic chip. By optimizing flow rate and incubation time, we achieved a limit of detection as low as 3.7 × 10⁴ copies/μL with a sample consumption of 2 μL and a total assay time of less than 55 min. This approach proved to possess high portability, fast analysis, high specificity, high precision, and reproducibility with an intra-assay variability of 2.87% and an interassay variability of 4.36%.
As a whole, this microfluidic system may provide a powerful platform for the rapid detection of AIV and may be extended for detection of other viral pathogens; in addition, this portable experimental setup enables the development of point-of-care diagnostic sys... <s> BIB038 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A new technique for the detection of explosives has been developed based on fluorescence quenching of pyrene on paper-based analytical devices (μPADs). Wax barriers were generated (150 °C, 5 min) using ten different colours. Magenta was found as the most suitable wax colour for the generation of the hydrophobic barriers with a nominal width of 120 μm resulting in fully functioning hydrophobic barriers. One microliter of 0.5 mg mL(-1) pyrene dissolved in an 80:20 methanol-water solution was deposited on the hydrophobic circle (5 mm diameter) to produce the active microchip device. Under ultra-violet (UV) illumination, ten different organic explosives were detected using the μPAD, with limits of detection ranging from 100-600 ppm. A prototype of a portable battery operated instrument using a 3 W power UV light-emitting-diode (LED) (365 nm) and a photodiode sensor was also built and evaluated for the successful automatic detection of explosives and potential application for field-based screening. <s> BIB039 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This work presents a novel and facile method for fabricating paper-based microfluidic devices by means of coupling of hydrophobic silane to paper fibers followed by deep UV-lithography. After filter paper being simply immersed in an octadecyltrichlorosilane (OTS) solution in n-hexane for 5 min, the hydrophilic paper became highly hydrophobic (water contact angle of about 125°) due to the hydrophobic OTS molecules were coupled to paper's cellulose fibers. 
The hydrophobized paper was then exposed to deep UV-lights through a quartz mask that had the pattern of the to-be-prepared channel network. Thus, the UV-exposed regions turned highly hydrophilic whereas the masked regions remained highly hydrophobic, generating hydrophilic channels, reservoirs and reaction zones that were well-defined by the hydrophobic regions. The resolution for hydrophilic channels was 233 ± 30 μm and that for between-channel hydrophobic barrier was 137 ± 21 μm. Contact angle measurement, X-ray photoelectron spectroscopy (XPS) and attenuated total reflectance Fourier transform-infrared (ATR-FT-IR) spectroscopy were employed to characterize the surface chemistry of the OTS-coated and UV/O(3)-treated paper, and the related mechanism was discussed. Colorimetric assays of nitrite are demonstrated with the developed paper-based microfluidic devices. <s> BIB040 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> An electrode platform printed on a recyclable low-cost paper substrate was characterized using cyclic voltammetry. The working and counter electrodes were directly printed gold-stripes, while the reference electrode was a printed silver stripe onto which an AgCl layer was deposited electrochemically. The novel paper-based chips showed comparable performance to conventional electrochemical cells. Different types of electrode modifications were carried out to demonstrate that the printed electrodes behave similarly with conventional electrodes. Firstly, a self-assembled monolayer (SAM) of alkanethiols was successfully formed on the Au electrode surface. As a consequence, the peak currents were suppressed and no longer showed clear increase as a function of the scan rate. Such modified electrodes have potential in various sensor applications when terminally substituted thiols are used. 
Secondly, a polyaniline film was electropolymerized on the working electrode by cyclic voltammetry and used for potentiometric pH sensing. The calibration curve showed a close-to-Nernstian response. Thirdly, a poly(3,4-ethylenedioxythiophene) (PEDOT) layer was electropolymerized by both galvanostatic and cyclic potential sweep methods on the working electrode using two different dopants: Cl− to study ion-to-electron transduction on the paper-Au/PEDOT system, and glucose oxidase in order to fabricate a glucose biosensor. The planar paper-based electrochemical cell is a user-friendly platform that functions with low sample volume and allows the sample to be applied and changed by e.g. pipetting. Low unit cost is achieved with mask- and mesh-free inkjet-printing technology. <s> BIB041 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Microfluidic devices fabricated out of paper (and paper and tape) have emerged as promising platforms for conducting multiple diagnostic assays simultaneously in resource-limited settings. Certain types of assays in these devices, however, require a source of power to function. Lithium ion, nickel-cadmium, and other types of batteries have been used to power these devices, but these traditional batteries are too expensive and pose too much of a disposal hazard for diagnostic applications in resource-limited settings. To circumvent this problem, we previously designed a “fluidic battery” that is composed of multiple galvanic cells, incorporated directly into a multilayer paper-based microfluidic device. We now show that multiple cells of these fluidic batteries can be connected in series and/or in parallel in a predictable way to obtain desired values of current and potential, and that the batteries can be optimized to last for a short period of time (<1 min) or for up to 10–15 min.
This paper also (i) outlines and quantifies the parameters that can be adjusted to maximize the current and potential of fluidic batteries, (ii) describes two general configurations for fluidic batteries, and (iii) provides equations that enable prediction of the current and potential that can be obtained when these two general designs are varied. This work provides the foundation upon which future applications of fluidic batteries will be based. <s> BIB042 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this study, a fast, low-cost, and facile spray method was proposed. This method deposits highly sensitive surface-enhanced Raman scattering (SERS) silver nanoparticles (AgNPs) on the paper-microfluidic scheme. The procedures for substrate preparation were studied including different strategies to synthesize AgNPs and the optimization of spray cycles. In addition, the morphologies of the different kinds of paper substrates were characterized by SEM and investigated by their SERS signals. The established method was found to be favorable for obtaining good sensitivity and reproducible results. The RSDs of Raman intensity of randomly analyzing 20 spots on the same paper or different filter papers depositing AgNPs are both below 15%. The SERS enhancement factor is approximately 2 × 10(7) . The whole fabrication is very rapid, robust, and does not require specific instruments. Furthermore, the total cost for 1000 pieces of chip is less than $20. These advantages demonstrated the potential for growing SERS applications in the area of environmental monitoring, food safety, and bioanalysis in the future. <s> BIB043 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> One of the goals of point-of-care (POC) is a chip-based, miniaturized, portable, self-containing system that allows the assay of proteins, nucleic acids, and cells in complex samples. 
The integration of nanomaterials and microfluidics can help achieve this goal. This tutorial review outlines the mechanism of assaying biomarkers by gold nanoparticles (AuNPs), and the implementation of AuNPs for microfluidic POC devices. In line with this, we discuss some recent advances in AuNP-coupled microfluidic sensors with enhanced performance. Portable and automated instruments for device operation and signal readout are also included for practical applications of these AuNP-combined microfluidic chips. <s> BIB044 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This report demonstrates a straightforward, robust, multiplexed and point-of-care microcapillary-based loop-mediated isothermal amplification (cLAMP) for assaying nucleic acids. This assay integrates capillaries (glass or plastic) to introduce and house sample/reagents, segments of water droplets to prevent contamination, pocket warmers to provide heat, and a hand-held flashlight for a visual readout of the fluorescent signal. The cLAMP system allows the simultaneous detection of two RNA targets of human immunodeficiency virus (HIV) from multiple plasma samples, and achieves a high sensitivity of two copies of standard plasmid. As few nucleic acid detection methods can be wholly independent of external power supply and equipment, our cLAMP holds great promise for point-of-care applications in resource-poor settings. <s> BIB045 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Microbial pathogens pose serious threats to public health and safety, and result in millions of illnesses and deaths as well as huge economic losses annually. Laborious and expensive pathogen tests often represent a significant hindrance to implementing effective front-line preventative care, particularly in resource-limited regions.
Thus, there is a significant need to develop low-cost and easy-to-use methods for pathogen detection. Herein, we present a simple and inexpensive litmus test for bacterial detection. The method takes advantage of a bacteria-specific RNA-cleaving DNAzyme probe as the molecular recognition element and the ability of urease to hydrolyze urea and elevate the pH value of the test solution. By coupling urease to the DNAzyme on magnetic beads, the detection of bacteria is translated into a pH increase, which can be readily detected using a litmus dye or pH paper. The simplicity, low cost, and broad adaptability make this litmus test attractive for field applications, particularly in the developing world. <s> BIB046 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Preconcentration of pathogens from patient samples represents a great challenge in point-of-care (POC) diagnostics. Here, a low-cost, rapid, and portable agarose-based microfluidic device was developed to concentrate biological fluid from micro- to picoliter volume. The microfluidic concentrator consisted of a glass slide simply covered by an agarose layer with a binary tree-shaped microchannel, in which pathogens could be concentrated at the end of the microchannel due to the capillary effect and the strong water permeability of the agarose gel. The fluorescent Escherichia coli strain OP50 was used to demonstrate the capacity of the agarose-based device. Results showed that 90% recovery efficiency could be achieved with a million-fold volume reduction from 400 μL to 400 pL. For concentration of 1 × 10(3) cells mL(-1) bacteria, approximately ten million-fold enrichment in cell density was realized with volume reduction from 100 μL to 1.6 pL. Urine and blood plasma samples were further tested to validate the developed method. 
In conjunction with fluorescence immunoassay, we successfully applied the method to the concentration and detection of infectious Staphylococcus aureus in clinics. The agarose-based microfluidic concentrator provided an efficient approach for POC detection of pathogens. <s> BIB047 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A paper microfluidic chip was designed and fabricated to evaluate the taste of 10 different red wines using a set of chemical dyes. The digital camera of a smartphone captured the images, and its red-green-blue (RGB) pixel intensities were analyzed by principal component analysis (PCA). Using 8 dyes and 2 principal components (PCs), we were able to distinguish each wine by the grape variety and the oxidation status. Compared with the flavor map from human evaluation, PC1 seemed to represent the sweetness and PC2 the bodyness of red wine. This superior performance is attributed to: (1) careful selection of commercially available dyes through a series of linear correlation studies with the taste chemicals in red wines, (2) minimization of sample-to-sample variation by splitting a single sample into multiple wells on the paper microfluidics, and (3) filtration of particulate matter through paper fibers. The image processing and PCA procedure can eventually be implemented as a stand-alone smartphone application and can be adopted as an extremely low-cost, disposable, fully handheld, easy-to-use, yet sensitive and specific quality control method for appraising red wine or similar beverage products in resource-limited environments. <s> BIB048 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this paper we describe a method for three-dimensional wax patterning of microfluidic paper-based analytical devices (μPADs).
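The readout chain of the red-wine chip described above — per-well RGB intensities reduced to two principal components — can be sketched in a few lines. This is an illustrative, numpy-only reconstruction with synthetic data; the array shapes, the 10-wine/8-dye geometry, and all values are assumptions, not the authors' code:

```python
import numpy as np

def rgb_features(wells):
    # wells: (n_samples, n_dyes, 3) array of mean RGB intensities per dye well
    return wells.reshape(wells.shape[0], -1).astype(float)

def pca_scores(X, n_components=2):
    # classic PCA via SVD: center the feature matrix, then project onto the
    # top right-singular vectors (the principal components)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
wells = rng.uniform(0, 255, size=(10, 8, 3))  # 10 wines x 8 dyes x RGB
scores = pca_scores(rgb_features(wells))
print(scores.shape)  # -> (10, 2): one (PC1, PC2) point per wine
```

Plotting the two score columns gives the kind of 2-D map on which PC1 and PC2 were interpreted as sweetness and bodyness.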
The method is rooted in the fundamental details of wax transport in paper and provides a simple way to fabricate complex channel architectures such as hemichannels and fully enclosed channels. We show that three-dimensional μPADs can be fabricated with half as much paper by using hemichannels rather than ordinary open channels. We also provide evidence that fully enclosed channels are efficiently isolated from the exterior environment, decreasing contamination risks, simplifying the handling of the device, and slowing evaporation of solvents. <s> BIB049 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report a simple, low-cost, one-step fabrication method for microfluidic paper-based analytical devices (μPAD) using only polystyrene and a patterned screen. The polystyrene solution applied through the screen penetrates through the paper, forming a three-dimensional hydrophobic barrier, defining a hydrophilic analysis zone. The optimal polystyrene concentration and paper types were first investigated. Adjusting polystyrene concentration allows for various types of paper to be used for successful device fabrication. Using an optimized polystyrene concentration with Whatman#4 filter paper, a linear relationship was found to exist between the design width and the printed width. The smallest hydrophilic channel and hydrophobic barrier that can be obtained are 670 ± 50 μm and 380 ± 40 μm, respectively. High device-to-device fabrication reproducibility was achieved yielding a relative standard deviation (%RSD) in the range of 1.12–2.54% (n = 64) of the measured diameter of the well-shaped fabricated test zones with a designed diameter of 5 and 7 mm. To demonstrate the significance of the fabricated μPAD, distance-based and well-based paper devices were constructed for the analysis of H2O2 and antioxidant activity, respectively. 
The analysis of H2O2 in real samples using distance-based measurement with CeO2 nanoparticles as the colorimetric agent produced the same results at 95% confidence level, as those obtained using KMnO4 titration. A proof-of-concept antioxidant activity determination based on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay was also demonstrated. The results verify that the polymer screen-printing method can be used as an alternative method for μPAD fabrication. <s> BIB050 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We developed a novel, low-cost and simple method for the fabrication of microfluidic paper-based analytical devices (μPADs) by silanization of filter cellulose using a paper mask having a specific pattern. The paper mask was penetrated with trimethoxyoctadecylsilane (TMOS) by immersing into TMOS-heptane solution. By heating the filter paper sandwiched between the paper mask and glass slides, TMOS was immobilized onto the filter cellulose via the reaction between cellulose OH and TMOS, while the hydrophilic area was not silanized because it was not in contact with the paper mask penetrated with TMOS. The effects of some factors including TMOS concentration, heating temperature and time on the fabrication of μPADs were studied. This method is free of any expensive equipment and metal masks, and could be performed by untrained personnel. These features are very attractive for the fabrication and applications of μPADs in developing countries or resource-limited settings. A flower-shaped μPAD was fabricated and used to determine glucose in human serum samples. The contents determined by this method agreed well with those determined by a standard method. <s> BIB051 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper microfluidic devices are a promising technology in developing analytical devices for point-of-care diagnosis in the developing world. 
This article describes a simple fabrication method for paper microfluidic devices based on a PZT drop-on-demand droplet generator. Wax was jetted in the form of droplets that linked with each other and formed a wax pattern on filter paper with a PZT actuator and a glass nozzle. The heated wax pattern became a hydrophobic barrier for reagents used in bio-assays. The glass nozzle, fabricated by a home-made micronozzle puller without complicated fabrication technology, was low-cost, simple, and easily made. The coefficient of variation of the jetted wax droplet diameter was 4.0%, which showed good reproducibility. The width of the wax line was experimentally studied by changing the driving voltage, nozzle diameter, and degree of overlapping. Wax lines with widths of 700–1700 μm were prepared for paper-based microfluidic devices. Multi-assays of glucose, protein and pH and 3 × 3 arrays of glucose, protein and pH assays were realized with the prepared paper microfluidic devices. The wax droplet generating system supplied a low-cost, simple, easy-to-use and fast fabrication method for paper microfluidic devices. <s> BIB052 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A paper-based colorimetric biosensing platform utilizing cross-linked siloxane 3-aminopropyltriethoxysilane (APTMS) as probe was developed for the detection of a broad range of targets including H2O2, glucose and protein biomarkers. APTMS was extensively used for the modification of filter papers to develop paper-based analytical devices. We discovered that when APTMS was cross-linked with glutaraldehyde (GA), the resulting complex (APTMS–GA) displays a brick-red color, and a visual color change was observed when the complex reacted with H2O2. By integrating the APTMS–GA complex with filter paper, the modified paper enables quantitative detection of H2O2 through the monitoring of the color intensity change of the paper via software Image J.
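Monitoring the color intensity change via ImageJ, as in the APTMS–GA assay above, boils down to averaging gray values inside a circular region of interest and comparing against blank paper. A hedged sketch on a synthetic image — the geometry and gray levels are made up purely for illustration:

```python
import numpy as np

def mean_intensity(img, cx, cy, r):
    # mean gray value inside a circular ROI, as ImageJ's Measure would report
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return float(img[mask].mean())

# synthetic 8-bit-style image: light paper background, darker reaction zone
img = np.full((64, 64), 220.0)
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2] = 120.0

zone = mean_intensity(img, 32, 32, 10)      # reaction zone
background = mean_intensity(img, 8, 8, 5)   # blank paper
print(background - zone)  # -> 100.0 (deeper color => larger signal)
```

Calibrating this background-minus-zone signal against known standards is what turns the visual color change into a quantitative readout.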
Then, with the immobilization of glucose oxidase (GOx) onto the modified paper, glucose can be detected through the detection of enzymatically generated H2O2. For the protein biomarker prostate-specific antigen (PSA) assay, we immobilized the capture anti-PSA antibody (Ab1) onto the paper surface and used a GOx-modified gold nanorod (GNR) as the detection anti-PSA antibody (Ab2) label. The detection of PSA was also achieved via the liberated H2O2 when the GOx label reacted with glucose. The results demonstrated the potential of this paper-based sensor for the detection of different analytes over a wide linear range. The low cost and simplicity of this paper-based sensor could be developed for “point-of-care” analysis and find wide application in different areas. <s> BIB053 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This paper describes the development and use of a handheld and lightweight stamp for the production of microfluidic paper-based analytical devices (μPADs). We also chemically modified the paper surface for improved colorimetric measurements. The design of the microfluidic structure has been patterned in a stamp, machined in stainless steel. Prior to stamping, the paper surface was oxidized to promote the conversion of hydroxyl into aldehyde groups, which were then chemically activated for covalent coupling of enzymes. Then, a filter paper sheet was impregnated with paraffin and sandwiched with a native paper (n-paper) sheet, previously oxidized. The metal stamp was preheated at 150 °C and then brought in contact with the paraffined paper (p-paper) to enable the thermal transfer of the paraffin to the n-paper, thus forming the hydrophobic barriers under the application of a pressure of ca. 0.1 MPa for 2 s. The channel and barrier widths measured in 50 independent μPADs exhibited values of 2.6 ± 0.1 and 1.4 ± 0.1 mm, respectively.
The chemical modification for covalent coupling of enzymes on the paper surface also led to improvements in the colour uniformity generated inside the sensing area, a known bottleneck in this technology. The relative standard deviation (RSD) values for glucose and uric acid (UA) assays decreased from 40 to 10% and from 20 to 8%, respectively. Bioassays related to the detection of glucose, UA, bovine serum albumin (BSA), and nitrite were successfully performed in concentration ranges useful for clinical assays. The semi-quantitative analysis of all four analytes in artificial urine samples revealed an error smaller than 4%. The disposability of μPADs, the low instrumental requirements of the stamp-based fabrication, and the improved colour uniformity enable the use of the proposed devices for the point-of-care diagnostics or in limited resources settlements. <s> BIB054 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report a method for the bottom-up fabrication of paper-based capillary microchips by the blade coating of cellulose microfibers on a patterned surface. The fabrication process is similar to the paper-making process in which an aqueous suspension of cellulose microfibers is used as the starting material and is blade-coated onto a polypropylene substrate patterned using an inkjet printer. After water evaporation, the cellulose microfibers form a porous, hydrophilic, paperlike pattern that wicks aqueous solution by capillary action. This method enables simple, fast, inexpensive fabrication of paper-based capillary channels with both width and height down to about 10 μm. When this method is used, the capillary microfluidic chip for the colorimetric detection of glucose and total protein is fabricated, and the assay requires only 0.30 μL of sample, which is 240 times smaller than for paper devices fabricated using photolithography. 
<s> BIB055 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A miniaturized paper-based microfluidic electrochemical enzymatic biosensing platform was developed and the effects of fluidic behaviors in the paper substrate on electrochemical sensing were systematically investigated. The biosensor is composed of an enzyme-immobilized pure cellulose paper pad, an enzymeless screen-printed electrode (SPE) modified with platinum nanoparticles (PtNPs), and a pair of clamped acrylonitrile butadiene styrene (ABS) plastic holders to provide good alignment for stable signal sensing. The wicking rate of liquid sample in paper was predicted, using a two-dimensional Fickian-diffusion model, to be 1.0 × 10⁻² cm²/s, and was verified experimentally. Dip-coating was used to prepare the enzyme-modified paper pad (EPP), which is amenable for mass manufacturing. The EPP retained excellent hydrophilicity and mechanical properties, with even slightly improved tensile strength and break strain. No significant difference in voltammetric behaviors was observed between measurements made in bulk buffer solution and with different sample volumes applied to the EPP beyond its saturation wicking volume. Glucose oxidase (GOx), an enzyme specific for the glucose (Glc) substrate, was used as a model enzyme and its enzymatic reaction product H2O2 was detected by the enzymeless PtNPs-SPE in the presence of the ambient electron mediator O2. Consequently, Glc was detected with its concentration linearly depending on the H2O2 oxidation current, with a sensitivity of 10.5 μA mM⁻¹ cm⁻² and a detection limit of 9.3 μM (at S/N = 3). The biosensor can be quickly regenerated with memory effects removed by buffer additions for continuous real-time detection of multiple samples in one run for point-of-care purposes.
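The glucose biosensor's detection limit of 9.3 μM at S/N = 3, quoted above, follows the standard convention LOD = 3σ/slope, where σ is the blank-signal noise and the slope is the calibration sensitivity. A small arithmetic sketch; the blank-noise value here is hypothetical, chosen only to illustrate the formula:

```python
# S/N = 3 convention: LOD = 3 * sigma_blank / sensitivity
sensitivity = 10.5       # μA mM⁻¹ cm⁻², calibration slope from the abstract
sigma_blank = 0.03255    # μA cm⁻², hypothetical blank-current noise
lod_mM = 3 * sigma_blank / sensitivity
print(round(lod_mM * 1000, 1))  # -> 9.3 (μM)
```

In practice σ is estimated as the standard deviation of repeated blank measurements, so the LOD tightens as electrode noise is reduced even when the sensitivity is unchanged.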
This integrated platform is also inexpensive since the EPP is easily stored, and enzymeless PtNPs-SPEs can be used multiple times with different EPPs. The green and facile preparation in bulk, excellent mechanical strength, well-maintained enzyme activity, disposability, and good reproducibility and stability make our paper-fluidic biosensor platform suitable for various real-time electrochemical bioassays without any external power for mixing, especially in resource-limited conditions. <s> BIB056 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> beta-Agonists are a group of illegal but widely used feed additives in the stockbreeding industry. In order to achieve simple-to-use, fast and high-throughput testing of this banned chemical, herein we suggest a paper-based analytical device on which a chemiluminescence diminishment method was performed. In this approach, extracts from swine hair samples as well as luminescent reagents, such as luminol and potassium periodate solution, in a low volume were applied to our device. It was found that the light emission was diminished by the beta-agonists extracted from the swine hair samples. The degree of diminishment is proportional to the concentration of the beta-agonists from 1.0 x 10(-5) to 1.0 x 10(-8) mol L-1. Also, the concentrations of solutions for chemiluminescence were optimized. The mechanism and reaction kinetics of chemiluminescence were discussed as well. The detection limit was obtained as 1.0 x 10(-9) mol L-1, and recoveries from 96% to 110% were achieved, both of which suggested that our method will be favourable in field applications for swine hair samples. <s> BIB057 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Fluorescence assays often require specialized equipment and, therefore, are not easily implemented in resource-limited environments. 
Herein we describe a point-of-care assay strategy in which fluorescence in the visible region is used as a readout, while a camera-equipped cellular phone is used to capture the fluorescent response and quantify the assay. The fluorescence assay is made possible using a paper-based microfluidic device that contains an internal fluidic battery, a surface-mount LED, a 2 mm section of a clear straw as a cuvette, and an appropriately designed small molecule reagent that transforms from weakly fluorescent to highly fluorescent when exposed to a specific enzyme biomarker. The resulting visible fluorescence is digitized by photographing the assay region using a camera-equipped cellular phone. The digital images are then quantified using image processing software to provide sensitive as well as quantitative results. In a model 30 min assay, the enzyme β-D-galactosidase was measured quantitatively down to 700 pM levels. This communication describes the design of these types of assays in paper-based microfluidic devices and characterizes the key parameters that affect the sensitivity and reproducibility of the technique. <s> BIB058 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The capacity to achieve rapid, sensitive, specific, quantitative, and multiplexed genetic detection of pathogens via a robust, portable, point-of-care platform could transform many diagnostic applications. And while contemporary technologies have yet to effectively achieve this goal, the advent of microfluidics provides a potentially viable approach to this end by enabling the integration of sophisticated multistep biochemical assays (e.g., sample preparation, genetic amplification, and quantitative detection) in a monolithic, portable device from relatively small biological samples. 
Integrated electrochemical sensors offer a particularly promising solution to genetic detection because they do not require optical instrumentation and are readily compatible with both integrated circuit and microfluidic technologies. Nevertheless, the development of generalizable microfluidic electrochemical platforms that integrate sample preparation and amplification as well as quantitative and multiplexed detection remains a challenging and unsolved technical problem. Recognizing this unmet need, we have developed a series of microfluidic electrochemical DNA sensors that have progressively evolved to encompass each of these critical functionalities. For DNA detection, our platforms employ label-free, single-step, and sequence-specific electrochemical DNA (E-DNA) sensors, in which an electrode-bound, redox-reporter-modified DNA "probe" generates a current change after undergoing a hybridization-induced conformational change. After successfully integrating E-DNA sensors into a microfluidic chip format, we subsequently incorporated on-chip genetic amplification techniques including polymerase chain reaction (PCR) and loop-mediated isothermal amplification (LAMP) to enable genetic detection at clinically relevant target concentrations. To maximize the potential point-of-care utility of our platforms, we have further integrated sample preparation via immunomagnetic separation, which allowed the detection of influenza virus directly from throat swabs and developed strategies for the multiplexed detection of related bacterial strains from the blood of septic mice. Finally, we developed an alternative electrochemical detection platform based on real-time LAMP, which not is only capable of detecting across a broad dynamic range of target concentrations, but also greatly simplifies quantitative measurement of nucleic acids. 
These efforts represent considerable progress toward the development of a true sample-in-answer-out platform for genetic detection of pathogens at the point of care. Given the many advantages of these systems, and the growing interest and innovative contributions from researchers in this field, we are optimistic that iterations of these systems will arrive in clinical settings in the foreseeable future. <s> BIB059 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A thin and flexible paper-based skin patch was developed for the diagnostic screening of cystic fibrosis. It utilized a unique combination of both anion exchange and pH test papers to enable the quantitative, colorimetric and on-skin detection of sweat anions. <s> BIB060 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A rapid and highly sensitive point-of-care (PoC) lateral flow assay for phospholipase A2 (PLA2) is demonstrated in serum through the enzyme-triggered release of a new class of biotinylated multiarmed polymers from a liposome substrate. Signal from the enzyme activity is generated by the adhesion of polystreptavidin-coated gold nanoparticle networks to the lateral flow device, which leads to the appearance of a red test line due to the localized surface plasmon resonance effect of the gold. The use of a liposome as the enzyme substrate and multivalent linkers to link the nanoparticles leads to amplification of the signal, as the cleavage of a small amount of lipids is able to release a large amount of polymer linker and adhesion of an even larger amount of gold nanoparticles. By optimizing the molecular weight and multivalency of these biotinylated polymer linkers, the sensitivity of the device can be tuned to enable naked-eye detection of 1 nM human PLA2 in serum within 10 min. 
This high sensitivity enabled the correct diagnosis of pancreatitis in diseased clinical samples against a set of healthy controls using PLA2 activity in a point-of-care device for the first time. <s> BIB061 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Early and timely detection of disease biomarkers can prevent the spread of infectious diseases, and drastically decrease the death rate of people suffering from different diseases such as cancer and infectious diseases. Because conventional diagnostic methods have limited application in low-resource settings due to the use of bulky and expensive instrumentation, simple and low-cost point-of-care diagnostic devices for timely and early biomarker diagnosis is the need of the hour, especially in rural areas and developing nations. The microfluidics technology possesses remarkable features for simple, low-cost, and rapid disease diagnosis. There have been significant advances in the development of microfluidic platforms for biomarker detection of diseases. This article reviews recent advances in biomarker detection using cost-effective microfluidic devices for disease diagnosis, with the emphasis on infectious disease and cancer diagnosis in low-resource settings. This review first introduces different microfluidic platforms (e.g. polymer and paper-based microfluidics) used for disease diagnosis, with a brief description of their common fabrication techniques. Then, it highlights various detection strategies for disease biomarker detection using microfluidic platforms, including colorimetric, fluorescence, chemiluminescence, electrochemiluminescence (ECL), and electrochemical detection. Finally, it discusses the current limitations of microfluidic devices for disease biomarker detection and future prospects. 
<s> BIB062 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Low-cost assays have broad applications ranging from human health diagnostics and food safety inspection to environmental analysis. Hence, low-cost assays are especially attractive for rural areas and developing countries, where financial resources are limited. Recently, paper-based microfluidic devices have emerged as a low-cost platform which greatly accelerates the point of care (POC) analysis in low-resource settings. This paper reviews recent advances of low-cost bioanalysis on paper-based microfluidic platforms, including fully paper-based and paper hybrid microfluidic platforms. In this review paper, we first summarized the fabrication techniques of fully paper-based microfluidic platforms, followed with their applications in human health diagnostics and food safety analysis. Then we highlighted paper hybrid microfluidic platforms and their applications, because hybrid platforms could draw benefits from multiple device substrates. Finally, we discussed the current limitations and perspective trends of paper-based microfluidic platforms for low-cost assays. <s> BIB063 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Microfluidic paper-based analytical devices (μPADs) attract tremendous attention as an economical tool for in-field diagnosis, food safety and environmental monitoring. We innovatively fabricated 2D and 3D μPADs by photolithography-patterning microchannels on a Parafilm® and subsequently embossing them to paper. This truly low-cost, wax printer and cutter plotter independent approach offers the opportunity for researchers from resource-limited laboratories to work on paper-based analytical devices. 
<s> BIB064 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A novel, highly selective and sensitive paper-based colorimetric sensor for trace determination of copper (Cu(2+)) ions was developed. The measurement is based on the catalytic etching of silver nanoplates (AgNPls) by thiosulfate (S2O3(2-)). Upon the addition of Cu(2+) to the ammonium buffer at pH 11, the absorption peak intensity of AuNPls/S2O3(2-) at 522 nm decreased and the pinkish violet AuNPls became clear in color as visible to the naked eye. This assay provides highly sensitive and selective detection of Cu(2+) over other metal ions (K(+), Cr(3+), Cd(2+), Zn(2+), As(3+), Mn(2+), Co(2+), Pb(2+), Al(3+), Ni(2+), Fe(3+), Mg(2+), Hg(2+) and Bi(3+)). A paper-based colorimetric sensor was then developed for the simple and rapid determination of Cu(2+) using the catalytic etching of AgNPls. Under optimized conditions, the modified AgNPls coated at the test zone of the devices immediately changes in color in the presence of Cu(2+). The limit of detection (LOD) was found to be 1.0 ng mL(-1) by visual detection. For semi-quantitative measurement with image processing, the method detected Cu(2+) in the range of 0.5-200 ng mL(-1)(R(2)=0.9974) with an LOD of 0.3 ng mL(-1). The proposed method was successfully applied to detect Cu(2+) in the wide range of real samples including water, food, and blood. The results were in good agreement according to a paired t-test with results from inductively coupled plasma-optical emission spectrometry (ICP-OES). <s> BIB065 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A new water-soluble conjugated polyelectrolyte containing triphenylamine groups with aldehyde pendants was synthesized, which featured distinctly different emission colors according to its states, in aqueous solution and in the solid. 
Paper-based strips containing the polymer were prepared by simple immersion of filter paper in the polyelectrolyte solution for practical and efficient detection of biothiols including cysteine and homocysteine. The presence of aldehyde groups enables us to demonstrate noticeable fluorescence emission color changes (green-to-blue) because of the alterations in electron push–pull structure in the polymer via a reaction between the aldehyde group of the polymer and the aminothiol moiety in biothiol compounds. The presence of an aldehyde group and a sulfonate side chain was found to be indispensable for the cysteine reaction site and for a hydrophilic environment allowing the easy approach of cysteine, respectively, resulting in a simple and easy detection protocol for biothiol compounds. <s> BIB066 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract Background After Roux-en-Y gastric bypass (RYGB), hypoglycemia can occur and be associated with adverse events such as intense malaise and impaired quality of life. Objective To compare insulin secretion, sensitivity, and clearance between two groups of patients, with or without hypoglycemia, after an oral glucose tolerance test (OGTT 75-g), and also to compare real-life glucose profiles within these two groups. Setting Bariatric surgery referral center. Methods This study involves a prospective cohort of 46 consecutive patients who complained of malaise compatible with hypoglycemia after RYGB, in whom an OGTT 75-g was performed. A plasma glucose value of lower than 2.8 mmol/L (50 mg/dl) between 90 and 120 min after the load was considered to be a significant hypoglycemia. The main outcome measures were insulin sensitivity, beta-cell function, and glycemic profiles during the test. Glucose parameters were also evaluated by continuous glucose monitoring (CGM) in a real-life setting in 43 patients. 
Results Twenty-five patients had plasma glucose that was lower than 2.8 mmol/L between 90 and 120 min from the load (HYPO group). Twenty-one had plasma glucose that was higher than 2.8 mmol/L (NONHYPO group). The HYPO patients were younger, had lost more weight after RYGB, were less frequently diabetic before surgery, and displayed higher early insulin secretion rates compared with the NONHYPO patients after the 75-g OGTT, and they had lower late insulin secretion rates. The HYPO patients had lower interstitial glucose values in real life, which suggests that a continuum exists between observations with an oral glucose load and real-life interstitial glucose concentrations. Conclusions This study suggests that HYPO patients after RYGB display an early increased insulin secretion rate when tested with an OGTT. CGM shows that HYPO patients spend more time below 3.3 mmol/L when compared with NONHYPO patients. This phenotype of patients should be monitored carefully after RYGB. <s> BIB067 </s> A disposable, equipment-free, versatile point-of-care testing platform, microfluidic distance readout sweet hydrogel integrated paper-based analytical device (μDiSH-PAD), was developed for portable quantitative detection of different types of targets. The platform relies on a target-responsive aptamer cross-linked hydrogel for target recognition, cascade enzymatic reactions for signal amplification, and microfluidic paper-based analytic devices (μPADs) for visual distance-based quantitative readout. A “sweet” hydrogel with trapped glucoamylase (GA) was synthesized using an aptamer as a cross-linker. When target is present in the sample, the “sweet” hydrogel collapses and releases enzyme GA into the sample, generating glucose by amylolysis. A hydrophilic channel on the μPADs is modified with glucose oxidase (GOx) and colorless 3,3′-diaminobenzidine (DAB) as the substrate.
When glucose travels along the channel by capillary action, it is converted to H2O2 by GOx. In addition, DAB is converted into brown ins... <s> BIB068 </s> Abstract In this work, an origami paper-based analytical device for glucose biosensor by employing fully-drawn pencil electrodes has been reported. The three-electrode system was prepared on paper directly by drawing with nothing more than pencils. By simple printing, two separated zones on paper were designed for the immobilization of the mediator and glucose oxidase (GOx), respectively. The used paper provides a favorable and biocompatible support for maintaining the bioactivities of GOx. With a sandwich-type scheme, the origami biosensor exhibited great analytical performance for glucose sensing including acceptable reproducibility and favorable selectivity against common interferents in physiological fluids. The limit of detection and linear range achieved with the approach was 0.05 mM and 1–12 mM, respectively. Its analytical performance was also demonstrated in the analysis of human blood samples. Such fully-drawn paper-based device is cheap, flexible, portable, disposable, and environmentally friendly, affording great convenience for practical use under resource-limited conditions. We therefore envision that this approach can be extended to generate other functional paper-based devices. <s> BIB069 | Glucose, one of the essential metabolic intermediates, is an important medical analyte which is an indicator of various diseases, such as glucose metabolism disorders and islet cell carcinoma BIB011 BIB020 BIB021 BIB029 . Normally, the concentration of glucose in the human blood stream is in the range of 3.8-6.9 mM. A level below 2.8 mM after fasting or following exercise is considered to be hypoglycemia BIB067 .
For diabetics, the blood glucose concentration should be strictly controlled below 10 mM according to the American Diabetes Association . Frequent and convenient monitoring of the blood glucose concentration is a key endeavor in medical diagnosis BIB003 BIB001 and is of critical importance for diabetics to prevent hyperglycemia complications BIB008 BIB012 . The acronym "ASSURED", representing the words "affordable, sensitive, specific, user-friendly, rapid and robust, equipment-free and delivered to those in need", was put forward by the World Health Organization (WHO) as a guideline for diagnostic point-of-care tests (POCTs) . These diagnostic tests are emerging for applications in the underdeveloped and developing world, where cost-effectiveness and simplicity are major concerns BIB004 BIB013 . As the most abundant biopolymer on Earth, cellulose is mostly used to produce paper for industrial use. Being composed of a network of hydrophilic cellulose fibers , paper has a naturally porous microstructure, which supports lateral flow via capillary action, enabling on-site analysis without the need for external forces such as pumps BIB004 BIB002 . Microfluidic paper-based analytical devices (µPADs), as a promising and powerful platform, have shown great potential in the development of POCTs BIB032 BIB044 BIB059 BIB033 . This concept was first proposed by the Whitesides group in 2007 BIB004 , where photoresist-patterned paper was used to fabricate microfluidic devices in which liquid could be transported by capillary force without external equipment. Since then, µPADs have become popular in a variety of applications, such as clinical diagnostics BIB004 BIB034 BIB035 BIB045 BIB060 BIB061 , food safety BIB046 , environmental monitoring BIB062 BIB036 BIB037 and bioterrorism BIB038 BIB030 BIB063 BIB047 BIB039 , owing to the advantages of portability, simplicity, economic affordability and minimal sample consumption.
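The clinical thresholds quoted above (hypoglycemia below 2.8 mM, a normal range of roughly 3.8-6.9 mM, and a target below 10 mM for diabetics) can be collected into a small helper; this is an illustrative sketch only — the function and label names are not from the reviewed papers, and the mg/dL conversion uses the standard factor of 18.016 mg/dL per mM (glucose molar mass of about 180.16 g/mol):

```python
# Classify a blood glucose reading against the thresholds cited in the text:
# < 2.8 mM hypoglycemia; 3.8-6.9 mM normal; diabetics should stay below 10 mM
# (American Diabetes Association target, as quoted above).
# Function and label names are illustrative, not from the reviewed papers.

MGDL_PER_MM = 18.016  # 1 mM glucose = 18.016 mg/dL (molar mass ~180.16 g/mol)

def mm_to_mgdl(conc_mM: float) -> float:
    """Convert a glucose concentration from mM to mg/dL."""
    return conc_mM * MGDL_PER_MM

def classify_glucose(conc_mM: float) -> str:
    """Map a fasting glucose reading (mM) to a coarse clinical label."""
    if conc_mM < 2.8:
        return "hypoglycemia"
    if 3.8 <= conc_mM <= 6.9:
        return "normal"
    if conc_mM > 10.0:
        return "above ADA target"
    return "borderline"  # falls in the gaps between the quoted ranges

print(classify_glucose(5.2), round(mm_to_mgdl(5.2), 1))
```

Note that readings between the quoted ranges (e.g., 3.0 mM or 8.0 mM) are deliberately left as "borderline", since the text only specifies the three thresholds above.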
The paper substrate is hydrophilic by nature. Therefore, to fabricate µPADs, hydrophobic barriers are usually created to confine the fluid flow within a desired location or to direct the fluid along desired paths. A number of techniques, including photolithography BIB004 BIB040 BIB014 BIB064 BIB048 BIB005 BIB006 , wax printing BIB009 BIB010 BIB049 BIB065 , screen-printing BIB022 BIB015 , plasma treating BIB007 BIB016 , flexography BIB023 BIB050 BIB017 and laser treating BIB024 , have been developed for the manufacture of hydrophobic barriers. In the photolithography process, the photoresists used to fabricate µPADs, e.g., octadecyltrichlorosilane (OTS), poly(o-nitrobenzylmethacrylate) (PoNBMA) and SU-8, are costly, and expensive photolithography equipment is also required. Patterning paper by wax printing offers relatively high speed, a facile process and high resolution for fabricating µPADs, but the high running costs of commercial wax printers and the low melting point of wax restrict its use in batch production. Screen-printing exhibits slightly higher resolution than wax printing, but it is limited by the need for a different printing screen whenever the pattern is changed. Although plasma treating produces patterns without affecting the flexibility or surface topography of the paper, this method is difficult to scale up to mass production. Flexographic printing is considered a proper technique for mass production; however, it requires two prints of polystyrene and a different printing plate for each pattern. High resolution can be achieved when fabricating µPADs by laser treating, but laser-treated devices are difficult to fold or store BIB051 BIB052 . Though each fabrication method has its own advantages and limitations, the economic benefit of µPAD mass production is the principal issue of concern, especially for widespread utilization in glucose detection.
Balancing cost against performance may rely on the development of unique process technologies and new materials. With the development of µPADs, multiple conventional detection techniques, such as colorimetric detection BIB051 BIB053 BIB054 BIB068 BIB055 , electrochemical detection BIB056 BIB069 BIB041 , chemiluminescence (CL) BIB025 BIB026 BIB027 BIB057 BIB031 , fluorescence BIB058 BIB066 BIB042 , mass spectrometry (MS) BIB028 BIB018 and surface-enhanced Raman spectroscopy (SERS) BIB043 BIB019 , have been applied to paper-based devices for rapid diagnostics. In this article, colorimetric and electrochemical µPADs for glucose detection from the past five years are summarized and reviewed. With the development of microfabrication and nanomaterials, glucose detection µPADs with high sensitivity and stability will become commercially accessible in the near future. |
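Colorimetric µPAD readouts of the kind reviewed here are typically digitized with a scanner or camera phone and converted to concentrations through a calibration curve. A minimal sketch of that last step, assuming a linear intensity-concentration response over the working range; the calibration points below are synthetic and purely illustrative:

```python
# Fit a linear calibration curve (mean color intensity vs. glucose concentration)
# by ordinary least squares, then invert it for an unknown sample.
# The calibration data below are synthetic, for illustration only.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def concentration_from_intensity(intensity, slope, intercept):
    """Invert the calibration curve for an unknown readout zone."""
    return (intensity - intercept) / slope

# Illustrative calibration: glucose standards (mM) vs. measured intensity (a.u.)
standards_mM = [0.0, 2.0, 4.0, 6.0, 8.0]
intensities = [5.0, 25.0, 45.0, 65.0, 85.0]  # perfectly linear for the demo

slope, intercept = fit_line(standards_mM, intensities)
print(round(concentration_from_intensity(55.0, slope, intercept), 2))
```

In practice the response is only linear over part of the working range, so real devices restrict the calibration to that range or use a nonlinear fit; the least-squares inversion itself is unchanged.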
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Metabolic engineering for the overproduction of high-value small molecules is dependent upon techniques in directed evolution to improve production titers. The majority of small molecules targeted for overproduction are inconspicuous and cannot be readily obtained by screening. We provide a review on the development of high-throughput colorimetric, fluorescent, and growth-coupled screening techniques, enabling inconspicuous small-molecule detection. We first outline constraints on throughput imposed during the standard directed evolution workflow (library construction, transformation, and screening) and establish a screening and selection ladder on the basis of small-molecule assay throughput and sensitivity. An in-depth analysis of demonstrated screening and selection approaches for small-molecule detection is provided. Particular focus is placed on in vivo biosensor-based detection methods that reduce or eliminate in vitro assay manipulations and increase throughput. We conclude by providing our prospec... <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Light scattering phenomena in periodic systems have been investigated for decades in optics and photonics. Their classical description relies on Bragg scattering, which gives rise to constructive interference at specific wavelengths along well defined propagation directions, depending on illumination conditions, structural periodicity, and the refractive index of the surrounding medium. In this paper, by engineering multifrequency colorimetric responses in deterministic aperiodic arrays of nanoparticles, we demonstrate significantly enhanced sensitivity to the presence of a single protein monolayer. 
These structures, which can be readily fabricated by conventional Electron Beam Lithography, sustain highly complex structural resonances that enable a unique optical sensing approach beyond the traditional Bragg scattering with periodic structures. By combining conventional dark-field scattering micro-spectroscopy and simple image correlation analysis, we experimentally demonstrate that deterministic aperiodic surfaces with engineered structural color are capable of detecting, in the visible spectral range, protein layers with thickness of a few tens of Angstroms. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> A number of analogues of phenethylamine and tryptamine, which are prepared by modification of the chemical structures, are being developed for circulation on the black market. Often called “designer drugs,” they are abused in many countries, and cause serious social problems in many parts of the world. Acute deaths have been reported after overdoses of designer drugs. Various methods are required for screening and routine analysis of designer drugs in biological materials for forensic and clinical purposes. Many sample preparation and chromatographic methods for analysis of these drugs in biological materials and seized items have been published. This review presents various colorimetric detections, gas chromatographic (GC)–mass spectrometric, and liquid chromatographic (LC)–mass spectrometric methods proposed for designer drug analyses. Basic information on extractions, derivatizations, GC columns, LC columns, detection limits, and linear ranges is also summarized. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper describes the use of a printed circuit technology to generate hydrophilic channels in a filter paper. 
Patterns of channels were designed using Protel software, and printed on a blank paper. Then, the patterns were transferred to a sheet copper using a thermal transfer printer. The sheet copper with patterns was dipped into ferric chloride solution to etch the whole patterns of the sheet copper. At last, the etched sheet copper was coated with a film of paraffin and then a filter paper. An electric iron was used to heat the other side of the sheet copper. The melting paraffin penetrated full thickness of the filter paper and formed a hydrophobic “wall”. Colorimetric assays for the presence of protein and glucose were demonstrated using the paper-based device. The work is helpful to researchers to fabricate paper-based microfluidic devices for monitoring health and detecting disease. <s> BIB004 </s> Many diagnostic tests in a conventional clinical laboratory are performed on blood plasma because changes in its composition often reflect the current status of pathological processes throughout the body. Recently, a significant research effort has been invested into the development of microfluidic paper-based analytical devices (μPADs) implementing these conventional laboratory tests for point-of-care diagnostics in resource-limited settings. This paper describes the use of red blood cell (RBC) agglutination for separating plasma from finger-prick volumes of whole blood directly in paper, and demonstrates the utility of this approach by integrating plasma separation and a colorimetric assay in a single μPAD. The μPAD was fabricated by printing its pattern onto chromatography paper with a solid ink (wax) printer and melting the ink to create hydrophobic barriers spanning through the entire thickness of the paper substrate.
The μPAD was functionalized by spotting agglutinating antibodies onto the plasma separation zone in the center and the reagents of the colorimetric assay onto the test readout zones on the periphery of the device. To operate the μPAD, a drop of whole blood was placed directly onto the plasma separation zone of the device. RBCs in the whole blood sample agglutinated and remained in the central zone, while separated plasma wicked through the paper substrate into the test readout zones where analyte in plasma reacted with the reagents of the colorimetric assay to produce a visible color change. The color change was digitized with a portable scanner and converted to concentration values using a calibration curve. The purity and yield of separated plasma was sufficient for successful operation of the μPAD. This approach to plasma separation based on RBC agglutination will be particularly useful for designing fully integrated μPADs operating directly on small samples of whole blood. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> We developed a novel, low-cost and simple method for the fabrication of microfluidic paper-based analytical devices (μPADs) by silanization of filter cellulose using a paper mask having a specific pattern. The paper mask was penetrated with trimethoxyoctadecylsilane (TMOS) by immersing into TMOS-heptane solution. By heating the filter paper sandwiched between the paper mask and glass slides, TMOS was immobilized onto the filter cellulose via the reaction between cellulose OH and TMOS, while the hydrophilic area was not silanized because it was not in contact with the paper mask penetrated with TMOS. The effects of some factors including TMOS concentration, heating temperature and time on the fabrication of μPADs were studied. This method is free of any expensive equipment and metal masks, and could be performed by untrained personnel. 
These features are very attractive for the fabrication and applications of μPADs in developing countries or resource-limited settings. A flower-shaped μPAD was fabricated and used to determine glucose in human serum samples. The contents determined by this method agreed well with those determined by a standard method. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Paper microfluidic devices are a promising technology in developing analytical devices for point-of-care diagnosis in the developing world. This article describes a simple method for paper microfluidic devices based on a PZT drop-on-demand droplet generator. Wax was jetted in the form of droplet, linked with each other and formed into wax pattern on filter paper with a PZT actuator and a glass nozzle. The heated wax pattern became a hydrophobic barrier for reagent used in bio-assay. The glass nozzle fabricated by a home-made micronozzle puller without complicated fabrication technology was low cost, simple and easily made. Coefficient of variation of the jetted wax droplet diameter was 4.0% which showed good reproducibility. The width of wax line was experimentally studied by changing the driving voltage, nozzle diameters and degree of overlapping. The wax line with width of 700–1700 μm was prepared for paper based microfluidic devices. Multi-assay of glucose, protein and pH and 3 × 3 arrays of glucose, protein and pH assay were realized with the prepared paper microfluidic devices. The wax droplet generating system supplied a low-cost, simple, easy-to-use and fast fabrication method for paper microfluidic devices. 
<s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper describes the development and use of a handheld and lightweight stamp for the production of microfluidic paper-based analytical devices (μPADs). We also chemically modified the paper surface for improved colorimetric measurements. The design of the microfluidic structure has been patterned in a stamp, machined in stainless steel. Prior to stamping, the paper surface was oxidized to promote the conversion of hydroxyl into aldehyde groups, which were then chemically activated for covalent coupling of enzymes. Then, a filter paper sheet was impregnated with paraffin and sandwiched with a native paper (n-paper) sheet, previously oxidized. The metal stamp was preheated at 150 °C and then brought in contact with the paraffined paper (p-paper) to enable the thermal transfer of the paraffin to the n-paper, thus forming the hydrophobic barriers under the application of a pressure of ca. 0.1 MPa for 2 s. The channel and barrier widths measured in 50 independent μPADs exhibited values of 2.6 ± 0.1 and 1.4 ± 0.1 mm, respectively. The chemical modification for covalent coupling of enzymes on the paper surface also led to improvements in the colour uniformity generated inside the sensing area, a known bottleneck in this technology. The relative standard deviation (RSD) values for glucose and uric acid (UA) assays decreased from 40 to 10% and from 20 to 8%, respectively. Bioassays related to the detection of glucose, UA, bovine serum albumin (BSA), and nitrite were successfully performed in concentration ranges useful for clinical assays. The semi-quantitative analysis of all four analytes in artificial urine samples revealed an error smaller than 4%. 
The disposability of μPADs, the low instrumental requirements of the stamp-based fabrication, and the improved colour uniformity enable the use of the proposed devices for the point-of-care diagnostics or in limited resources settlements. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper presents a novel paper-based analytical device based on the colorimetric paper assays through its light reflectance. The device is portable, low cost (<20 dollars), and lightweight (only 176 g) that is available to assess the cost-effectiveness and appropriateness of the original health care or on-site detection information. Based on the light reflectance principle, the signal can be obtained directly, stably and user-friendly in our device. We demonstrated the utility and broad applicability of this technique with measurements of different biological and pollution target samples (BSA, glucose, Fe, and nitrite). Moreover, the real samples of Fe (II) and nitrite in the local tap water were successfully analyzed, and compared with the standard UV absorption method, the quantitative results showed good performance, reproducibility, and reliability. This device could provide quantitative information very conveniently and show great potential to broad fields of resource-limited analysis, medical diagnostics, and on-site environmental detection. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Paper-based microfluidics is a rapidly progressing inter-disciplinary technology driven by the need for low-cost alternatives to conventional point-of-care diagnostic tools. For transport of reagents/analytes, such devices often consist of interconnected hydrophilic fluid-flow channels that are demarcated by hydrophobic barrier walls that extend through the thickness of the paper. 
Here, we present a laser-based fabrication procedure that uses polymerisation of a photopolymer to produce the required fluidic channels in paper. Experimental results showed that the structures successfully guide the flow of fluids and allow containment of fluids in wells, and hence the technique is suitable for fabrication of paper-based microfluidic devices. The minimum width for the hydrophobic barriers that successfully prevented fluid leakage was ~120 μm and the minimum width for the fluidic channels that can be formed was ~80 μm, the smallest reported so far for paper-based fluidic patterns. <s> BIB010 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> We report a method for the bottom-up fabrication of paper-based capillary microchips by the blade coating of cellulose microfibers on a patterned surface. The fabrication process is similar to the paper-making process in which an aqueous suspension of cellulose microfibers is used as the starting material and is blade-coated onto a polypropylene substrate patterned using an inkjet printer. After water evaporation, the cellulose microfibers form a porous, hydrophilic, paperlike pattern that wicks aqueous solution by capillary action. This method enables simple, fast, inexpensive fabrication of paper-based capillary channels with both width and height down to about 10 μm. When this method is used, the capillary microfluidic chip for the colorimetric detection of glucose and total protein is fabricated, and the assay requires only 0.30 μL of sample, which is 240 times smaller than for paper devices fabricated using photolithography. 
<s> BIB011 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper describes a simple and instrument-free screen-printing method to fabricate hydrophilic channels by patterning polydimethylsiloxane (PDMS) onto chromatography paper. Clearly recognizable border lines were formed between hydrophilic and hydrophobic areas. The minimum width of the printed channel to deliver an aqueous sample was 600 μm, as obtained by this method. Fabricated microfluidic paper-based analytical devices (μPADs) were tested for several colorimetric assays of pH, glucose, and protein in both buffer and artificial urine samples and results were obtained in less than 30 min. The limits of detection (LODs) for glucose and bovine serum albumin (BSA) were 5 mM and 8 μM, respectively. Furthermore, the pH values of different solutions were visually recognised with the naked eye by using a sensitive ink. Ultimately, it is expected that this PDMS-screen-printing (PSP) methodology for μPADs can be readily translated to other colorimetric detection and hydrophilic channels surrounded by a hydrophobic polymer can be formed to transport fluids toward target zones. <s> BIB012 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> A simple and low-cost fabrication method for paper-based diagnostic devices (PBDDs) is described in this study. Street-available polymer solutions were screen printed onto filter papers to create hydrophobic patterns for fluidic channels. In order to obtain fully functional hydrophobic patterns for fluids, the original polymer solutions were diluted with butyl acetate to yield a suitable viscosity range between 30-200 cP for complete patterning on paper. Typical pH and glucose tests with color indicators were performed on the screen printed PBDDs. 
Images of the PBDDs were analyzed by computers to obtain calibration curves for pH between 2 and 12 and glucose concentrations ranging from 10–1000 mmol dm−3. Detection of formaldehyde in acetone was also carried out to show the possibility of using this PBDD for analytical detection with organic solvents. An exemplar PBDD with simultaneous pH and glucose detection was also used to demonstrate the feasibility of applying this technique for realistic diagnostic applications. <s> BIB013 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Interest in low-cost diagnostic devices has recently gained attention, in part due to the rising cost of healthcare and the need to serve populations in resource-limited settings. A major challenge in the development of such devices is the need for hydrophobic barriers to contain polar bio-fluid analytes. Key approaches in lowering the cost in diagnostics have centered on (i) development of low-cost fabrication techniques/processes, (ii) use of affordable materials, or, (iii) minimizing the need for high-tech tools. This communication describes a simple, low-cost, adaptable, and portable method for patterning paper and subsequent use of the patterned paper in diagnostic tests. Our approach generates hydrophobic regions using a ball-point pen filled with a hydrophobizing molecule suspended in a solvent carrier. An empty ball-point pen was filled with a solution of trichloro perfluoroalkyl silane in hexanes (or hexadecane), and the pen used to draw lines on Whatman® chromatography 1 paper. The drawn regions defined the test zones since the trichloro silane reacts with the paper to give a hydrophobic barrier. The formation of the hydrophobic barriers is reaction kinetic and diffusion-limited, ensuring well defined narrow barriers. We performed colorimetric glucose assays and enzyme-linked immuno-sorbent assay (ELISA) using the created test zones.
To demonstrate the versatility of this approach, we fabricated multiple devices on a single piece of paper and demonstrated the reproducibility of assays on these devices. The overall cost of devices fabricated by drawing are relatively lower ( <s> BIB014 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Paper microfluidic devices are a promising technology in developing analytical devices for point-of-care diagnosis in the developing world. This article describes a novel method of wax jetting with a PZT (piezoelectric ceramic transducer) actuator and glass nozzle for the fabrication of paper microfluidic devices. The hydrophobic fluid pattern was formed by the permeation of filter paper with wax droplets. Results showed that the size of the wax droplet, which was determined by the voltage of the driving signal and nozzle diameter, ranged from 150 μm to 380 μm, and the coefficient of variation of the droplet diameter was under 4.0%. The smallest width of the fluid channel was 600 μm frontside and 750 μm backside. The patterned filter paper was without any leakage, and multi-assay of glucose, protein, and pH on the paper microfluidic device, and laminar diffusion flow with blue and yellow dye were realized. The wax jetting system supplied a low-cost, simple, easy-to-use and fast fabrication method for paper microfluidic devices. <s> BIB015 | Colorimetric detection has been the most widely employed technique for paper-based analytical devices due to the advantages of visual readout, straightforward operation and superior stability BIB001 BIB002 BIB003 . Glucose oxidase (GOx) and horseradish peroxidase (HRP) are the commonly used bienzyme system to catalyze the reaction between glucose and the color indicator in µPADs. The catalytic reaction of glucose by glucose oxidase results in hydrogen peroxide (H 2 O 2 ) and gluconic acid. 
Peroxidase then catalyzes the reaction of H2O2 with the color indicator and generates a visual color change. Identifying an appropriate color indicator is one of the crucial steps in the development of µPADs for determining glucose concentrations. Potassium iodide (KI) was one of the most commonly used color indicators: HRP catalyzes the oxidation of iodide to iodine by hydrogen peroxide, leading to a change from colorless to a visible brown color BIB006 BIB007 BIB008 BIB015 BIB009 BIB012 BIB004 BIB013 BIB005 . Garcia et al. BIB008 proposed a fabrication method for µPADs using a handheld metal stamp (Figure 1). The channel and barrier widths of the fabricated µPAD were 2.6 ± 0.1 and 1.4 ± 0.1 mm, respectively; the improvement in color uniformity was achieved by covalent coupling of enzymes on the paper surface, and the linear response ranged from 0 to 12 mM. Cai et al. BIB006 developed a µPAD fabricated without metal masks or expensive equipment: a mask impregnated with trimethoxyoctadecylsilane (TMOS) was used to silanize the cellulose paper substrate by heating the paper sandwiched between the mask and glass slides. TMOS adsorbed on the mask evaporated and penetrated into the cellulose paper aligned with the mask, while the other parts remained hydrophilic owing to the lack of reaction between cellulose OH groups and TMOS (Figure 2). Li et al. BIB007 BIB015 developed a piezoelectric ceramic transducer (PZT) drop-on-demand wax droplet generating system for µPADs, in which wax was jetted as droplets and shaped to form the hydrophobic fluid pattern on a piece of filter paper with a PZT actuator. Mohammadi et al. BIB012 proposed a screen-printing method to fabricate µPADs by patterning polydimethylsiloxane (PDMS), instead of wax, onto paper to construct hydrophilic channels. A glucose diagnostic device could also be produced simply by drawing with a silane/hexane ink, without any requirement for complex equipment: Oyola-Reynoso et al. BIB014 used a ball-point pen filled with a solution of trichloro perfluoroalkyl silane in hexanes to draw the hydrophobic regions of the paper. To investigate the glucose concentration in blood plasma, Yang et al. BIB005 developed a µPAD with immobilized agglutinating antibodies for separating blood plasma from red blood cells in whole blood (Figure 3). Furthermore, laser-induced photo-polymerisation BIB010 and blade coating BIB011 were also used to create µPADs relying on the GOx/HRP bienzyme reaction. |
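The bienzyme cascade that underlies these colorimetric µPADs can be written out explicitly. As a summary sketch of the standard GOx/HRP chemistry, shown here with iodide as the chromogen (not taken verbatim from any single cited paper):

```latex
\begin{align}
\text{glucose} + \mathrm{O_2} + \mathrm{H_2O}
  &\xrightarrow{\;\text{GOx}\;} \text{gluconic acid} + \mathrm{H_2O_2}\\
\mathrm{H_2O_2} + 2\,\mathrm{I^-} + 2\,\mathrm{H^+}
  &\xrightarrow{\;\text{HRP}\;} \mathrm{I_2}\,(\text{brown}) + 2\,\mathrm{H_2O}
\end{align}
```

With organic chromogens such as TBHBA/4-APP or TMB, the second step instead oxidizes the chromogen to its colored form, but the stoichiometric role of the enzymatically generated H2O2 is the same.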
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> Abstract Cellulose paper based glucose test strips were successfully prepared using 2,4,6-tribromo-3-hydroxy benzoic acid (TBHBA) as the chromogen agent. Cellulose paper is a good substrate for carrying chromogen agents and other chemicals so that the quantitative analysis can be done based on the colorimetric chemistry. The color intensity of the developed compounds, which was measured as the differential diffusive reflectance of the test strip at 510 nm, was correlated to the glucose concentration of the sample solutions in the range of 0.18–9.91 mg/ml. These colorimetric test strips could be conveniently used, do not have to use an electronic device, and would have potential applications in the home monitoring of blood glucose for people with diabetes. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> In this work, we first employ a drying method combining with the bienzyme colorimetric detection of glucose and uric acid on microfluidic paper-based analysis devices (μPADs). The channels of 3D μPADs are also designed by us to get better results. The color results are recorded by both Gel Documentation systems and a common camera. By using Gel Documentation systems, the limits of detection (LOD) of glucose and uric acid are 3.81 × 10(-5)M and 4.31 × 10(-5)M, respectively one order of magnitude lower than that of the reported methods on μPADs. By using a common camera, the limits of detection (LOD) of glucose and uric acid are 2.13 × 10(-4)M and 2.87 × 10(-4)M, respectively. Furthermore, the effects of detection conditions have been investigated and discussed comprehensively. Human serum samples are detected with satisfactory results, which are comparable with the clinical testing results. 
A low-cost, simple and rapid colorimetric method for the simultaneous detection of glucose and uric acid on the μPADs has been developed with enhanced sensitivity. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> Many diagnostic tests in a conventional clinical laboratory are performed on blood plasma because changes in its composition often reflect the current status of pathological processes throughout the body. Recently, a significant research effort has been invested into the development of microfluidic paper-based analytical devices (μPADs) implementing these conventional laboratory tests for point-of-care diagnostics in resource-limited settings. This paper describes the use of red blood cell (RBC) agglutination for separating plasma from finger-prick volumes of whole blood directly in paper, and demonstrates the utility of this approach by integrating plasma separation and a colorimetric assay in a single μPAD. The μPAD was fabricated by printing its pattern onto chromatography paper with a solid ink (wax) printer and melting the ink to create hydrophobic barriers spanning through the entire thickness of the paper substrate. The μPAD was functionalized by spotting agglutinating antibodies onto the plasma separation zone in the center and the reagents of the colorimetric assay onto the test readout zones on the periphery of the device. To operate the μPAD, a drop of whole blood was placed directly onto the plasma separation zone of the device. RBCs in the whole blood sample agglutinated and remained in the central zone, while separated plasma wicked through the paper substrate into the test readout zones where analyte in plasma reacted with the reagents of the colorimetric assay to produce a visible color change. The color change was digitized with a portable scanner and converted to concentration values using a calibration curve. 
The purity and yield of separated plasma was sufficient for successful operation of the μPAD. This approach to plasma separation based on RBC agglutination will be particularly useful for designing fully integrated μPADs operating directly on small samples of whole blood. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> Abstract In this contribution, we first developed a semiquantitative method for the detection of glucose with self-calibration based on bienzyme colorimetry by using tree-shaped paper strip. The GOD/HRP bienzyme system was utilized to amplify the color signal in the aqueous phase. Moreover, we employed a paper as microfluidic media for running colorimetric assay, while tree-shaped paper strip was designed to ensure uniform microfluidic flow for multiple branches. Our proposed method gives direct outcomes which can be observed by the naked eye or recorded by a simple camera. The linear range is from 1.0 × 10 −3 to 11.0 × 10 −3 M, with a detection limit of 3 × 10 −4 M. Furthermore, the effect of detection condition has been investigated and discussed comprehensively. The result of determining glucose in human serum is consistent with that of detecting standard glucose solution by using our developed approach. A low-cost, simple, and rapid colorimetric method for the simultaneous detection of glucose with self-calibration on the tree-shaped paper has been proposed. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> We developed a novel, low-cost and simple method for the fabrication of microfluidic paper-based analytical devices (μPADs) by silanization of filter cellulose using a paper mask having a specific pattern. The paper mask was penetrated with trimethoxyoctadecylsilane (TMOS) by immersing into TMOS-heptane solution. 
By heating the filter paper sandwiched between the paper mask and glass slides, TMOS was immobilized onto the filter cellulose via the reaction between cellulose OH and TMOS, while the hydrophilic area was not silanized because it was not in contact with the paper mask penetrated with TMOS. The effects of some factors including TMOS concentration, heating temperature and time on the fabrication of μPADs were studied. This method is free of any expensive equipment and metal masks, and could be performed by untrained personnel. These features are very attractive for the fabrication and applications of μPADs in developing countries or resource-limited settings. A flower-shaped μPAD was fabricated and used to determine glucose in human serum samples. The contents determined by this method agreed well with those determined by a standard method. <s> BIB005 | Due to the weaker color signal produced by potassium iodide, some organics and nanoparticles have been used as color indicators in glucose µPADs. 2,4,6-Tribromo-3-hydroxybenzoic acid (TBHBA) and 4-aminoantipyrine (4-APP) were used as substrates catalyzed by HRP to generate the color signal for glucose detection, owing to the superior water solubility of TBHBA and the positive charges of TBHBA/4-APP, which allow them to attach firmly onto the negatively charged paper substrate BIB004 BIB001 . Chen et al. BIB002 employed a drying method combined with bienzyme colorimetric detection for the simultaneous determination of glucose and uric acid on 3D µPADs, lowering the limits of detection by about an order of magnitude compared with previously reported µPAD methods. Figure 2. Scheme of the µPAD fabrication in BIB005 : A filter paper mask (b) was obtained by cutting a native filter paper (a), and was immersed in TMOS solution (c); the TMOS-adsorbed mask and a native filter paper were packed between two glass slides (d); TMOS molecules were assembled on the native filter paper by heating (e); and the fabricated µPAD with hydrophilic-hydrophobic contrast (f) and its photograph (g) obtained by spraying water on it. With permission from BIB005 ; Copyright 2014, The Royal Society of Chemistry. Figure 3. Fabrication scheme of the µPAD designed in BIB003 : The central plasma separation zone (a) and the four test readout zones (b) were patterned on chromatography paper by a wax printer (c); (d) agglutinating antibodies were immobilized at the central part, while the reagents for the colorimetric assay were spotted at the periphery zones; (e) to perform a diagnostic test with the developed µPAD, the whole blood sample was dropped onto the plasma separation zone; (f) the red blood cells were agglutinated in the central zone, while the separated plasma wicked into the test readout zones and reacted with the reagents of the colorimetric assay. With permission from BIB003 ; Copyright 2012, The Royal Society of Chemistry. |
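Whether the readout is a flatbed scanner, a camera, or Gel Documentation software, the quantification step in these colorimetric assays follows the same pattern: digitize the detection zone, reduce it to a color-intensity value (e.g., with ImageJ), and map intensity to concentration through a calibration curve built from standards. A minimal sketch of that workflow, with hypothetical numbers for illustration only (not values from the cited papers):

```python
import numpy as np

def zone_intensity(rgb_patch: np.ndarray) -> float:
    """Mean gray intensity (0-255 scale) of a cropped detection-zone patch.
    Stronger color development darkens the zone, so intensity drops
    as the analyte concentration rises."""
    gray = rgb_patch.mean(axis=-1)  # naive channel average; ImageJ reports a similar mean gray value
    return float(gray.mean())

def fit_calibration(concs, intensities):
    """Least-squares line I = a*C + b fitted to glucose standards."""
    a, b = np.polyfit(concs, intensities, 1)
    return a, b

def concentration(intensity, a, b):
    """Invert the calibration line to estimate an unknown sample."""
    return (intensity - b) / a

# Hypothetical calibration data: glucose standards (mM) vs. mean zone intensity.
concs = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
intensities = np.array([220.0, 195.0, 171.0, 144.0, 120.0])
a, b = fit_calibration(concs, intensities)
unknown = concentration(158.0, a, b)  # sample zone intensity -> about 6.2 mM
```

In practice a single color channel (often green for brown or red products) can give better sensitivity than a plain gray average, and the linear fit is only valid inside the calibrated concentration range.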
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> An enzymatic synthesis route to protein-wrapped gold nanoparticles is developed. Glucose oxidase (GOD) reduces Au(III) ion in the presence of β-D-glucose, and stable gold nanoparticles with average diameter of 14.5 nm are formed. FT-IR spectra, zeta potential and CD spectra of purified nanoparticles indicate that they are stabilized by the adsorbed protein layer. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract The control of size and shape of metallic nanoparticles is a fundamental goal in nanochemistry, and crucial for applications exploiting nanoscale properties of materials. We present here an approach to the synthesis of gold nanoparticles mediated by glucose oxidase (GOD) immobilized on solid substrates using the Layer-by-Layer (LbL) technique. The LbL films contained four alternated layers of chitosan and poly(styrene sulfonate) (PSS), with GOD in the uppermost bilayer adsorbed on a fifth chitosan layer: (chitosan/PSS)4/(chitosan/GOD). The films were inserted into a solution containing gold salt and glucose, at various pHs. Optimum conditions were achieved at pH 9, producing gold nanoparticles of ca. 30 nm according to transmission electron microscopy. A comparative study with the enzyme in solution demonstrated that the synthesis of gold nanoparticles is more efficient using immobilized GOD. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Carboxyl-modified graphene oxide (GO-COOH) possesses intrinsic peroxidase-like activity that can catalyze the reaction of the peroxidase substrate 3,3′,5,5′-tetramethylbenzidine (TMB) in the presence of H2O2 to produce a blue color reaction.
A simple, cheap, and highly sensitive and selective colorimetric method for glucose detection has been developed and will facilitate the utilization of GO-COOH intrinsic peroxidase activity in medical diagnostics and biotechnology. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> We report the first use of redox nanoparticles of cerium oxide as colorimetric probes in bioanalysis. The method is based on changes in the physicochemical properties of ceria nanoparticles, used here as chromogenic indicators, in response to the analyte. We show that these particles can be fully integrated in a paper-based bioassay. To construct the sensor, ceria nanoparticles and glucose oxidase were coimmobilized onto filter paper using a silanization procedure. In the presence of glucose, the enzymatically generated hydrogen peroxide induces a visual color change of the ceria nanoparticles immobilized onto the bioactive sensing paper, from white-yellowish to dark orange, in a concentration-dependent manner. A detection limit of 0.5 mM glucose with a linear range up to 100 mM and a reproducibility of 4.3% for n = 11 ceria paper strips were obtained. The assay is fully reversible and can be reused for at least 10 consecutive measurement cycles, without significant loss of activity. Another unique featur... <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract Cellulose paper based glucose test strips were successfully prepared using 2,4,6-tribromo-3-hydroxy benzoic acid (TBHBA) as the chromogen agent. Cellulose paper is a good substrate for carrying chromogen agents and other chemicals so that the quantitative analysis can be done based on the colorimetric chemistry. 
The color intensity of the developed compounds, which was measured as the differential diffusive reflectance of the test strip at 510 nm, was correlated to the glucose concentration of the sample solutions in the range of 0.18–9.91 mg/ml. These colorimetric test strips could be conveniently used, do not have to use an electronic device, and would have potential applications in the home monitoring of blood glucose for people with diabetes. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> In this work, we first employ a drying method combining with the bienzyme colorimetric detection of glucose and uric acid on microfluidic paper-based analysis devices (μPADs). The channels of 3D μPADs are also designed by us to get better results. The color results are recorded by both Gel Documentation systems and a common camera. By using Gel Documentation systems, the limits of detection (LOD) of glucose and uric acid are 3.81 × 10(-5)M and 4.31 × 10(-5)M, respectively one order of magnitude lower than that of the reported methods on μPADs. By using a common camera, the limits of detection (LOD) of glucose and uric acid are 2.13 × 10(-4)M and 2.87 × 10(-4)M, respectively. Furthermore, the effects of detection conditions have been investigated and discussed comprehensively. Human serum samples are detected with satisfactory results, which are comparable with the clinical testing results. A low-cost, simple and rapid colorimetric method for the simultaneous detection of glucose and uric acid on the μPADs has been developed with enhanced sensitivity. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract In this paper gold nanoparticles (Au-NPs) have been used as colorimetric reporters for the detection of sugars. 
The synthesis of Au-NPs has been obtained in presence of glucose as reducing agent in different conditions, allowing the formation of pink or blue coloured NPs, and has been employed in the design of two colorimetric assays. Both assays rely on the analyte induced intensity increase (without any shift) of the NPs plasmon band absorption. The “pink assay” is based on the sugar assisted chemical synthesis of NPs and it represents a simple one-step colorimetric approach to the quantification of all potentially reducing sugars (sucrose included) with a LOD of 10 μM. The “blue assay” is based on the Au-NP synthesis catalysed by the enzyme glucose oxidase and it is specific for glucose, with a LOD of 5 μM. Compared to the classical bi-enzymatic (glucose oxidase/peroxidase) optical assay, it uses only one enzyme and does not suffer of the bleaching of the final colour because the reporter Au-NPs are very stable. <s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Paper-based analytical devices (PADs) represent a growing class of elegant, yet inexpensive chemical sensor technologies designed for point-of-use applications. Most PADs, however, still utilize some form of instrumentation such as a camera for quantitative detection. We describe here a simple technique to render PAD measurements more quantitative and straightforward using the distance of colour development as a detection motif. The so-called distance-based detection enables PAD chemistries that are more portable and less resource intensive compared to classical approaches that rely on the use of peripheral equipment for quantitative measurement. We demonstrate the utility and broad applicability of this technique with measurements of glucose, nickel, and glutathione using three different detection chemistries: enzymatic reactions, metal complexation, and nanoparticle aggregation, respectively. 
The results show excellent quantitative agreement with certified standards in complex sample matrices. This work provides the first demonstration of distance-based PAD detection with broad application as a class of new, inexpensive sensor technologies designed for point-of-use applications. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract In this contribution, we first developed a semiquantitative method for the detection of glucose with self-calibration based on bienzyme colorimetry by using tree-shaped paper strip. The GOD/HRP bienzyme system was utilized to amplify the color signal in the aqueous phase. Moreover, we employed a paper as microfluidic media for running colorimetric assay, while tree-shaped paper strip was designed to ensure uniform microfluidic flow for multiple branches. Our proposed method gives direct outcomes which can be observed by the naked eye or recorded by a simple camera. The linear range is from 1.0 × 10 −3 to 11.0 × 10 −3 M, with a detection limit of 3 × 10 −4 M. Furthermore, the effect of detection condition has been investigated and discussed comprehensively. The result of determining glucose in human serum is consistent with that of detecting standard glucose solution by using our developed approach. A low-cost, simple, and rapid colorimetric method for the simultaneous detection of glucose with self-calibration on the tree-shaped paper has been proposed. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract Paper based colorimetric biosensing platform utilizing cross-linked siloxane 3-aminopropyltriethoxysilane (APTMS) as probe was developed for the detection of a broad range of targets including H 2 O 2 , glucose and protein biomarker. 
APTMS was extensively used for the modification of filter papers to develop paper based analytical devices. We discovered when APTMS was cross-linked with glutaraldehyde (GA), the resulting complex (APTMS–GA) displays brick-red color, and a visual color change was observed when the complex reacted with H 2 O 2 . By integrating the APTMS–GA complex with filter paper, the modified paper enables quantitative detection of H 2 O 2 through the monitoring of the color intensity change of the paper via software Image J. Then, with the immobilization of glucose oxidase (GOx) onto the modified paper, glucose can be detected through the detection of enzymatically generated H 2 O 2 . For protein biomarker prostate specific antigen (PSA) assay, we immobilized the capture anti-PSA antibody (Ab 1 ) onto the paper surface and used a GOx-modified gold nanorod (GNR) as the detection anti-PSA antibody (Ab 2 ) label. The detection of PSA was also achieved via the liberated H 2 O 2 when the GOx label reacted with glucose. The results demonstrated the possibility of this paper based sensor for the detection of different analytes with wide linear range. The low cost and simplicity of this paper based sensor could be developed for “point-of-care” analysis and find wide application in different areas. <s> BIB010 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> This paper describes a silica nanoparticle-modified microfluidic paper-based analytical device (μPAD) with improved color intensity and uniformity for three different enzymatic reactions with clinical relevance (lactate, glucose, and glutamate). The μPADs were produced on a Whatman grade 1 filter paper and using a CO2 laser engraver.
Silica nanoparticles modified with 3-aminopropyltriethoxysilane were then added to the paper devices to facilitate the adsorption of selected enzymes and prevent the washing away effect that creates color gradients in the colorimetric measurements. According to the results herein described, the addition of silica nanoparticles yielded significant improvements in color intensity and uniformity. The resulting μPADs allowed for the detection of the three analytes in clinically relevant concentration ranges with limits of detection (LODs) of 0.63 mM, 0.50 mM, and 0.25 mM for lactate, glucose, and glutamate, respectively. An example of an analytical application has been demonstrated for the semi-quantitative detection of all three analytes in artificial urine. The results demonstrate the potential of silica nanoparticles to avoid the washing away effect and improve the color uniformity and intensity in colorimetric bioassays performed on μPADs. <s> BIB011 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract In this paper, Graphene oxide@SiO 2 @CeO 2 hybrid nanosheets (GSCs) have been successfully synthesized by the wet-chemical strategy. TEM, FITR and XPS were applied to characterize the morphology and composition of the nanosheets. The colorimetric assay of these nanosheets indicated that they possessed high intrinsic peroxidase activity, which should be ascribed to the combination of graphene oxide and CeO 2 . A fully integrated reagentless bioactive paper based on GSCs was fabricated, which were able to simultaneously detect glucose, lactate, uric acid and cholesterol. The results demonstrated that GSCs have great potential as an alternative to the commonly employed peroxidase in daily nursing and general physical examination. 
<s> BIB012 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> In our present study, we developed an optical biosensor for direct determination of salivary glucose by using immobilized glucose oxidase enzyme on filter paper strip (specific activity 1.4 U/strip) and then reacting it with synthetic glucose samples in presence of co-immobilized color pH indicator. The filter paper changed color based on concentration of glucose in reaction media and hence, by scanning this color change (using RGB profiling) through an office scanner and open source image processing software (GIMP) the concentration of glucose in the reaction medium could be deduced. Once the biosensor was standardized, the synthetic glucose sample was replaced with human saliva from donors. The individual's blood glucose level at the time of obtaining saliva was also measured using an Accuchek(™) active glucometer (Roche Inc.). In this preliminary study, a correlation of nearly 0.64 was found between glucose levels in saliva and blood of healthy individuals and in diabetic patients it was nearly in the order of 0.95, thereby validating the importance of salivary analysis. The RGB profiling method obtained a detection range of 9-1350 mg/dL glucose at a response time of 45 s and LOD of 22.2 mg/dL. <s> BIB013 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Custom-made pencils containing reagents dispersed in a solid matrix were developed to enable rapid and solvent-free deposition of reagents onto membrane-based fluidic devices. The technique is as simple as drawing with the reagent pencils on a device. When aqueous samples are added to the device, the reagents dissolve from the pencil matrix and become available to react with analytes in the sample. 
Colorimetric glucose assays conducted on devices prepared using reagent pencils had comparable accuracy and precision to assays conducted on conventional devices prepared with reagents deposited from solution. Most importantly, sensitive reagents, such as enzymes, are stable in the pencils under ambient conditions, and no significant decrease in the activity of the enzyme horseradish peroxidase stored in a pencil was observed after 63 days. Reagent pencils offer a new option for preparing and customizing diagnostic tests at the point of care without the need for specialized equipment. <s> BIB014 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> This paper describes the modification of microfluidic paper-based analytical devices (μPADs) with chitosan to improve the analytical performance of colorimetric measurements associated with enzymatic bioassays. Chitosan is a natural biopolymer extensively used to modify biosensing surfaces due to its capability of providing a suitable microenvironment for the direct electron transfer between an enzyme and a reactive surface. This hypothesis was investigated using glucose and uric acid (UA) colorimetric assays as model systems. The best colorimetric sensitivity for glucose and UA was achieved using a chromogenic solution composed of 4-aminoantipyrine and sodium 3,5-dichloro-2-hydroxy-benzenesulfonate (4-AAP/DHBS), which provided a linear response for a concentration range between 0.1 and 1.0 mM. Glucose and UA were successfully determined in artificial serum samples with accuracies between 87 and 114%. The limits of detection (LODs) found for glucose and UA assays were 23 and 37 μM, respectively. The enhanced analytical performance of chitosan-modified μPADs allowed the colorimetric detection of glucose in tear samples from four nondiabetic patients. The achieved concentration levels ranged from 130 to 380 μM. 
The modified μPADs offered analytical reliability and accuracy as well as no statistical difference from the values achieved through a reference method. Based on the presented results, the proposed μPAD can be a powerful alternative tool for non-invasive glucose analysis. <s> BIB015 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> A disposable, equipment-free, versatile point-of-care testing platform, microfluidic distance readout sweet hydrogel integrated paper-based analytical device (μDiSH-PAD), was developed for portable quantitative detection of different types of targets. The platform relies on a target-responsive aptamer cross-linked hydrogel for target recognition, cascade enzymatic reactions for signal amplification, and microfluidic paper-based analytic devices (μPADs) for visual distance-based quantitative readout. A “sweet” hydrogel with trapped glucoamylase (GA) was synthesized using an aptamer as a cross-linker. When target is present in the sample, the “sweet” hydrogel collapses and releases enzyme GA into the sample, generating glucose by amylolysis. A hydrophilic channel on the μPADs is modified with glucose oxidase (GOx) and colorless 3,3′-diaminobenzidine (DAB) as the substrate. When glucose travels along the channel by capillary action, it is converted to H2O2 by GOx. In addition, DAB is converted into brown ins... <s> BIB016 | Due to the weaker color signal produced by potassium iodide, some organic compounds and nanoparticles have been used as alternative color indicators in glucose µPADs. 2,4,6-tribromo-3-hydroxybenzoic acid (TBHBA) and 4-aminoantipyrine (4-APP) were used as substrates catalyzed by HRP to generate a color signal for glucose detection, owing to the superior water solubility of TBHBA and the positive charges of TBHBA/4-APP, which allow firm attachment onto the negatively charged paper substrate BIB009 BIB005 . Chen et al.
BIB006 replaced TBHBA with N-ethyl-N-(3-sulfopropyl)-3-methyl-aniline sodium salt (TOPS) and used TOPS/4-APP in a µPAD for glucose detection, which showed a limit of detection (LOD) of 38.1 µM. Gabriel et al. BIB015 used 4-AAP and sodium 3,5-dichloro-2-hydroxy-benzenesulfonate (DHBS) as the chromogenic solution; chitosan was added to improve the sensing performance for glucose in tear samples, and the detection limit was 0.023 mM. Zhou et al. BIB010 used cross-linked siloxane 3-aminopropyltriethoxysilane (APTMS) as the probe for a colorimetric µPAD. Only glucose oxidase needed to be immobilized on the µPAD, because the APTMS/glutaraldehyde (GA) complex undergoes a visual color change when it reacts with H2O2. The µPAD exhibited good linearity over the concentration range from 0.5 to 30 mM, covering the clinical range for normal blood glucose levels. Similarly, Soni et al. BIB013 used a co-immobilized color pH indicator for direct determination of salivary glucose with no need for peroxidase. While most conventional intensity-based colorimetric µPADs were still constrained by the requirement of a camera for quantitative detection, Cate et al. BIB008 and Wei et al. BIB016 utilized visual distance-based methods for µPADs, using the distance of color development as the detection value. GOx and colorless 3,3′-diaminobenzidine (DAB) were immobilized in a hydrophilic channel as the substrate on the µPADs. H2O2 was generated by GOx as the sample solution travelled along the channel by capillary action, and then reacted further with DAB to form a visible brown, insoluble product (poly(DAB)) in the presence of peroxidase (Figure 4). The length of the brown precipitate was positively correlated with the glucose concentration.
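The distance-based readout described above reduces to a simple linear calibration: the colored-bar length grows roughly linearly with analyte concentration, so an unknown sample is read back through the inverted fit. A minimal sketch of that workflow, with illustrative numbers that are not data from the cited papers:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a calibration series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def concentration_from_length(length_mm, slope, intercept):
    """Invert the calibration: map a measured bar length back to concentration."""
    return (length_mm - intercept) / slope

# Hypothetical calibration pairs: glucose (mM) vs. length of brown precipitate (mm)
conc = [1.0, 2.0, 4.0, 8.0]
length = [5.0, 9.0, 17.0, 33.0]

slope, intercept = fit_line(conc, length)
print(concentration_from_length(21.0, slope, intercept))  # read out an unknown sample
```

In practice the upper end of such a calibration flattens as reagent in the channel is consumed, so the linear fit is only applied within the stated dynamic range.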
Nanoparticles have been used in lateral flow assays associated with colorimetric detection to improve the analytical performance and minimize washing effects BIB007 BIB011 . Figueredo et al. applied three different types of nanomaterials, namely Fe3O4 nanoparticles (MNPs), multiwalled carbon nanotubes (MWCNT), and graphene oxide (GO), in paper-based analytical devices to improve the homogeneity of color measurements.
Instead of constructing hydrophobic barriers on the paper surface as described above, a layer of hydrophilic paper channels was built directly on the surface of a hydrophobic substrate. With the assistance of glucose oxidase and HRP, the LODs of the µPADs treated with MNPs, MWCNT, and GO were 43, 62, and 18 µM, respectively. Evans et al. BIB011 also aimed at improving color intensity and uniformity by using silica nanoparticles (Figure 5). The PAD with added silica nanoparticles prevents the color gradients in colorimetric detection caused by the washing-away effect, and its LOD was 0.5 mM. Exploiting the ability of glucose oxidase to reduce Au3+ ions to Au0 in the presence of glucose BIB001 BIB002 , Palazzo et al. BIB007 used gold nanoparticles (AuNPs) as colorimetric reporters to detect glucose. This µPAD used only glucose oxidase instead of the conventional bienzymatic (GOx/peroxidase) scheme and avoided bleaching of the final color, with a LOD of 5 µM. Some nanoparticles, such as graphene oxide (GO) and cerium oxide (CeO2), possess high intrinsic peroxidase-like catalytic activity BIB003 BIB004 . Deng et al. BIB012 synthesized GO@SiO2@CeO2 hybrid nanosheets (GSCs) as an alternative to the commonly employed peroxidase; 2,2′-azinobis(3-ethylbenzothiazoline)-6-sulfonic acid (ABTS), used as the electron-donor dye substrate, was converted from its colorless reduced form to a blue-green oxidized form by GSCs instead of HRP BIB014 , with a LOD of 9 nM.
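Several limits of detection are quoted in this section. As a point of reference, such values are commonly estimated with the 3σ/slope convention: three times the standard deviation of blank replicates divided by the calibration slope. A hedged sketch with invented numbers, not taken from any of the cited works:

```python
import statistics

def lod_3sigma(blank_signals, calibration_slope):
    """LOD = 3 * (standard deviation of blank signals) / calibration slope."""
    return 3.0 * statistics.stdev(blank_signals) / calibration_slope

# Illustrative blank color-intensity readings (arbitrary units)
blanks = [0.010, 0.012, 0.011, 0.009, 0.013]
slope = 0.15  # assumed calibration slope, intensity units per mM glucose

print(f"estimated LOD = {lod_3sigma(blanks, slope):.4f} mM")
```

Differences in this blank noise and in calibration sensitivity are one reason the reported LODs above span from micromolar to nanomolar levels.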
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-μPADs <s> This paper describes an efficient and high throughput method for fabricating three-dimensional (3D) paper-based microfluidic devices. The method avoids tedious alignment and assembly steps and eliminates a major bottleneck that has hindered the development of these types of devices. A single researcher now can prepare hundreds of devices within 1 h. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-μPADs <s> A simple paper-based optical biosensor for glucose monitoring was developed. As a glucose biosensing principle, a colorimetric glucose assay, using glucose oxidase (GOx) and horseradish peroxidase (HRP), was chosen. The enzymatic glucose assay was implanted on the analytical paper-based device, which is fabricated by the wax printing method. The fabricated device consists of two paper layers. The top layer has a sample loading zone and a detection zone, which are modified with enzymes and chromogens. The bottom layer contains a fluidic channel to convey the solution from the loading zone to the detection zone. Double-sided adhesive tape is used to attach these two layers. In this system, when a glucose solution is dropped onto the loading zone, the solution is transferred to the detection zone, which is modified with GOx, HRP, and chromogenic compounds through the connected fluidic channel. In the presence of GOx-generated H2O2, HRP converts chromogenic compounds into the final product exhibiting a blue color, inducing color change in the detection zone. To confirm the changes in signal intensity in the detection zone, the resulting image was registered by a digital camera from a smartphone. To minimize signal interference from external light, the experiment was performed in a specifically designed light-tight box, which was suited to the smartphone. 
By using the developed biosensing system, various concentrations of glucose samples (0–20 mM) and human serum (5–17 mM) were precisely analyzed within a few minutes. With the developed system, we could expand the applicability of a smartphone to bioanalytical health care. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-μPADs <s> There is a strong interest in the use of biopolymers in the electronic and biomedical industries, mainly towards low-cost applications. The possibility of developing entirely new kinds of products based on cellulose is of current interest, in order to enhance and to add new functionalities to conventional paper-based products. We present our results towards the development of paper-based microfluidics for molecular diagnostic testing. Paper properties were evaluated and compared to nitrocellulose, the most commonly used material in lateral flow and other rapid tests. Focusing on the use of paper as a substrate for microfluidic applications, through an eco-friendly wax-printing technology, we present three main and distinct colorimetric approaches: (i) enzymatic reactions (glucose detection); (ii) immunoassays (antibodies anti-Leishmania detection); (iii) nucleic acid sequence identification (Mycobacterium tuberculosis complex detection). Colorimetric glucose quantification was achieved through enzymatic reactions performed within specific zones of the paper-based device. The colouration achieved increased with growing glucose concentration and was highly homogeneous, covering all the surface of the paper reaction zones in a 3D sensor format. These devices showed a major advantage when compared to the 2D lateral flow glucose sensors, where some carryover of the coloured products usually occurs. The detection of anti-Leishmania antibodies in canine sera was conceptually achieved using a paper-based 96-well enzyme-linked immunosorbent assay format. 
However, optimization is still needed for this test, regarding the efficiency of the immobilization of antigens on the cellulose fibres. The detection of Mycobacterium tuberculosis nucleic acids integrated with a non-cross-linking gold nanoprobe detection scheme was also achieved in a wax-printed 384-well paper-based microplate, by the hybridization with a species-specific probe. The obtained results with the above-mentioned proof-of-concept sensors are thus promising towards the future development of simple and cost-effective paper-based diagnostic devices. (Some figures may appear in colour only in the online journal) <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-μPADs <s> The development of real-time innocuous blood diagnosis has been a long-standing goal in healthcare; an improved, miniature, all-in-one point-of-care testing (POCT) system with low cost and simplified operation is highly desired. Here, we present a one-touch-activated blood multidiagnostic system (OBMS) involving the synergistic integration of a hollow microneedle and paper-based sensor, providing a number of unique characteristics for simplifying the design of microsystems and enhancing user performance. In this OBMS, all functions of blood collection, serum separation, and detection were sequentially automated in one single device that only required one-touch activation by finger-power without additional operations. For the first time, we successfully demonstrated the operation of this system in vivo in glucose and cholesterol diagnosis, showing a great possibility for human clinical application and commercialization. Additionally, this novel system offers a new approach for the use of microneedles and paper sensors as promising intelligent elements in future real-time healthcare monitoring devices. 
<s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-μPADs <s> Abstract We developed a simple and low-cost cell culture monitoring system utilizing a paper-based analytical device (PAD) and a smartphone. The PAD simultaneously analyses glucose and lactate concentrations in the cell culture medium. Focusing on the fact that animal cells consume glucose and produce lactate under anaerobic conditions, oxidase- and horseradish peroxidase (HRP) enzyme-mediated colorimetric assays were integrated into the PAD. The PAD was designed to have three laminated layers. By using a double-sided adhesive tape as the middle layer and wax coating, a bifurcated fluidic channel was prepared to manipulate sample flow. At the inlet and the outlets of the channel, a sample drop zone and two detection zones for glucose and lactate, respectively, were positioned. When sample solution is loaded onto the drop zone, it flows to the detection zone through the hydrophilic fluidic channel via capillary force. Upon reaching the detection zone, the sample reacts with glucose and lactate oxidases (GOx and LOx) and HRP, immobilized on the detection zone along with colorless chromophores. By the Trinder’s reaction, the colorless chromophore is converted to a blue-colored product, generating concentration-dependent signal. With a gadget designed to aid the image acquisition, the PAD was positioned to the smartphone-embedded camera. Images of the detection zones were acquired using a mobile application and the color intensities were quantified as sensor signals. For the glucose assay using GOx/HRP format, we obtained the limit of detection (LOD ∼0.3 mM) and the limit of quantification (LOQ ∼0.9 mM) values in the dynamic detection range from 0.3 to 8.0 mM of glucose. For lactate assay using LOx/HRP, the LOD (0.02 mM) and the LOQ (0.06 mM) values were registered in the dynamic detection range from 0.02 to 0.50 mM of lactate. 
With the device, simultaneous analyses of glucose and lactate in cell culture media were conducted, exhibiting highly accurate and reproducible results. Based on the results, we propose that the optical sensing system developed is feasible for practical monitoring of animal cell culture. <s> BIB005 | Three-dimensional microfluidic paper-based analytical devices (3D-μPADs) represent an emerging platform owing to their advantages of high throughput, complex fluid manipulation, multiplexed analytical tests, and parallel sample distribution. Compared with 2D μPADs, 3D-μPADs offer highly homogeneous coloration covering the entire surface of the paper reaction zones, and fluid can move freely in both the horizontal and vertical directions. Yoon's group BIB002 BIB005 , Costa et al. BIB003 , and Lewis et al. BIB001 fabricated 3D-μPADs by stacking alternating layers of patterned paper and double-sided adhesive tape with holes. In the presence of H2O2 generated by GOx, HRP converts 4-AAP and N-ethyl-N-(2-hydroxy-3-sulfopropyl)-3,5-dimethylaniline sodium salt monohydrate (MAOS) from colorless compounds to a blue form, which can be visualized in the detection zone. A digital camera from a smartphone was utilized to read the signal, and the dynamic detection range was from 0.3 to 8.0 mM BIB005 . Li et al. BIB004 integrated a minimally invasive microneedle with a 3D-μPAD to create a one-touch-activated blood diagnostic system, which shows great potential for clinical application. |
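Smartphone-based quantification of the kind described above typically crops the detection zone from the image and collapses it to a single color-intensity value before applying a calibration curve. The following is an illustrative sketch, not the cited authors' code; using 255 minus the mean green-channel value is an assumption chosen so that a deeper blue color yields a larger signal:

```python
def zone_signal(pixels):
    """pixels: iterable of (R, G, B) tuples cropped from the detection zone.
    Returns 255 minus the mean green intensity, so the signal grows as the
    blue enzymatic product gets darker (green channel drops)."""
    greens = [g for _, g, _ in pixels]
    return 255.0 - sum(greens) / len(greens)

# Toy 2x2 "detection zone": a well-developed blue color has low green values.
zone = [(40, 60, 150), (42, 58, 148), (39, 62, 151), (41, 60, 149)]
print(zone_signal(zone))  # larger value -> more product -> more glucose
```

A real pipeline would also correct for ambient lighting (the cited work uses a light-tight box for exactly this reason) before mapping the signal to concentration.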
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> This paper describes an efficient and high throughput method for fabricating three-dimensional (3D) paper-based microfluidic devices. The method avoids tedious alignment and assembly steps and eliminates a major bottleneck that has hindered the development of these types of devices. A single researcher now can prepare hundreds of devices within 1 h. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> We present a new method for fabricating three-dimensional paper-based fluidic devices that uses toner as a thermal adhesive to bond multiple layers of patterned paper together. The fabrication process is rapid, involves minimal equipment (a laser printer and a laminator) and produces complex channel networks with dimensions down to 1 mm. The devices can run multiple diagnostic assays on one or more samples simultaneously, can incorporate positive and negative controls and can be programmed to display the results of the assays in a variety of patterns. The patterns of the results can encode information, which could be used to identify counterfeit devices, identify samples, encrypt the results for patient privacy or monitor patient compliance. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> The first step in curing a disease is being able to detect the disease effectively. Paper-based microfluidic devices are biodegradable and can make diagnosing diseases cost-effective and easy in almost all environments. We created a three-dimesnional (3D) paper device using wax printing fabrication technique and basic principles of origami. This design allows for a versatile fabrication technique over previously reported patterning of SU-8 photoresist on chromatography paper by employing a readily available wax printer. 
The design also utilizes multiple colorimetric assays that can accommodate one or more analytes including urine, blood, and saliva. In this case to demonstrate the functionality of the 3D paper-based microfluidic system, a urinalysis of protein and glucose assays is conducted. The amounts of glucose and protein introduced to the device are found to be proportional to the color change of each assay. This color change was quantified by use of Adobe Photoshop. Urine samples from participants with no pre-existing health conditions and one person with diabetes were collected and compared against synthetic urine samples with predetermined glucose and protein levels. Utilizing this method, we were able to confirm that both protein and glucose levels were in fact within healthy ranges for healthy participants. For the participant with diabetes, glucose was found to be above the healthy range while the protein level was in the healthy range. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> A simple paper-based optical biosensor for glucose monitoring was developed. As a glucose biosensing principle, a colorimetric glucose assay, using glucose oxidase (GOx) and horseradish peroxidase (HRP), was chosen. The enzymatic glucose assay was implanted on the analytical paper-based device, which is fabricated by the wax printing method. The fabricated device consists of two paper layers. The top layer has a sample loading zone and a detection zone, which are modified with enzymes and chromogens. The bottom layer contains a fluidic channel to convey the solution from the loading zone to the detection zone. Double-sided adhesive tape is used to attach these two layers. In this system, when a glucose solution is dropped onto the loading zone, the solution is transferred to the detection zone, which is modified with GOx, HRP, and chromogenic compounds through the connected fluidic channel. 
In the presence of GOx-generated H2O2, HRP converts chromogenic compounds into the final product exhibiting a blue color, inducing color change in the detection zone. To confirm the changes in signal intensity in the detection zone, the resulting image was registered by a digital camera from a smartphone. To minimize signal interference from external light, the experiment was performed in a specifically designed light-tight box, which was suited to the smartphone. By using the developed biosensing system, various concentrations of glucose samples (0–20 mM) and human serum (5–17 mM) were precisely analyzed within a few minutes. With the developed system, we could expand the applicability of a smartphone to bioanalytical health care. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> There is a strong interest in the use of biopolymers in the electronic and biomedical industries, mainly towards low-cost applications. The possibility of developing entirely new kinds of products based on cellulose is of current interest, in order to enhance and to add new functionalities to conventional paper-based products. We present our results towards the development of paper-based microfluidics for molecular diagnostic testing. Paper properties were evaluated and compared to nitrocellulose, the most commonly used material in lateral flow and other rapid tests. Focusing on the use of paper as a substrate for microfluidic applications, through an eco-friendly wax-printing technology, we present three main and distinct colorimetric approaches: (i) enzymatic reactions (glucose detection); (ii) immunoassays (antibodies anti-Leishmania detection); (iii) nucleic acid sequence identification (Mycobacterium tuberculosis complex detection). Colorimetric glucose quantification was achieved through enzymatic reactions performed within specific zones of the paper-based device. 
The colouration achieved increased with growing glucose concentration and was highly homogeneous, covering all the surface of the paper reaction zones in a 3D sensor format. These devices showed a major advantage when compared to the 2D lateral flow glucose sensors, where some carryover of the coloured products usually occurs. The detection of anti-Leishmania antibodies in canine sera was conceptually achieved using a paper-based 96-well enzyme-linked immunosorbent assay format. However, optimization is still needed for this test, regarding the efficiency of the immobilization of antigens on the cellulose fibres. The detection of Mycobacterium tuberculosis nucleic acids integrated with a non-cross-linking gold nanoprobe detection scheme was also achieved in a wax-printed 384-well paper-based microplate, by the hybridization with a species-specific probe. The obtained results with the above-mentioned proof-of-concept sensors are thus promising towards the future development of simple and cost-effective paper-based diagnostic devices. (Some figures may appear in colour only in the online journal) <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> The development of real-time innocuous blood diagnosis has been a long-standing goal in healthcare; an improved, miniature, all-in-one point-of-care testing (POCT) system with low cost and simplified operation is highly desired. Here, we present a one-touch-activated blood multidiagnostic system (OBMS) involving the synergistic integration of a hollow microneedle and paper-based sensor, providing a number of unique characteristics for simplifying the design of microsystems and enhancing user performance. In this OBMS, all functions of blood collection, serum separation, and detection were sequentially automated in one single device that only required one-touch activation by finger-power without additional operations. 
For the first time, we successfully demonstrated the operation of this system in vivo in glucose and cholesterol diagnosis, showing a great possibility for human clinical application and commercialization. Additionally, this novel system offers a new approach for the use of microneedles and paper sensors as promising intelligent elements in future real-time healthcare monitoring devices. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> Abstract This study investigates a new paper-based 3D microfluidic analytical device for analyzing multiple biological fluids. A wax-printed and -impregnated device was operated using tip-pinch manipulation of the thumb and index fingers and applied the chemical reaction of a preloaded colorimetric indicator and biological solutions. Chemical sensing of protein and glucose concentrations was quantitatively analyzed by changes in the color intensity of the image taken from three image readout devices including scanner (Epson Perfection V700), microscope (USB-embedded handheld digital microscope), and smartphone (LG Optimus Vu). Paper-based 3D microfluidic analytic device with three image analyzers successfully quantified 1.5–75 μM protein concentrations and 0–900 mg/dL glucose concentrations. Paper-based 3D microfluidic device combined with the smartphone showed the performance in protein bioassay (1.5–75 μM) and glucose bioassay (0–50 mM) including clinically relevant ranges comparable to other devices. An origami-driven paper-based 3D microfluidic analytic is a useful platform with great potential for application in point-of-care diagnostics. <s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> This study demonstrates a simple approach for fabricating a 3D-μPAD from a single sheet of paper by double-sided printing and lamination. 
First, a wax printer prints vertically symmetrical and asymmetrical wax patterns onto a double-sided paper surface. Then, a laminator melts the printed wax patterns to form microfluidic channels in the paper sheet. The vertically symmetrical wax patterns form vertical channels when the melted wax patterns make contact with each other. The asymmetrical wax patterns form lateral and vertical channels at the cross section of the paper when the printed wax patterns are melted to a lower height than the thickness of the single sheet of paper. Finally, the two types of wax patterns form a 3D microfluidic network to move fluid laterally and vertically in the single sheet of paper. This method eliminates major technical hurdles related to the complicated and tedious alignment, assembly, bonding, and punching process. This 3D-μPAD can be used in a multiplex digital assay to measure the concentration of a target analyte in a sample solution simply by counting the number of colored bars at a fixed time. It does not require any external instruments to perform digital measurements. Therefore, we expect that this approach could be an instrument-free assay format for use in developing countries. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> Abstract We developed a simple and low-cost cell culture monitoring system utilizing a paper-based analytical device (PAD) and a smartphone. The PAD simultaneously analyses glucose and lactate concentrations in the cell culture medium. Focusing on the fact that animal cells consume glucose and produce lactate under anaerobic conditions, oxidase- and horseradish peroxidase (HRP) enzyme-mediated colorimetric assays were integrated into the PAD. The PAD was designed to have three laminated layers. By using a double-sided adhesive tape as the middle layer and wax coating, a bifurcated fluidic channel was prepared to manipulate sample flow. 
At the inlet and the outlets of the channel, a sample drop zone and two detection zones for glucose and lactate, respectively, were positioned. When sample solution is loaded onto the drop zone, it flows to the detection zone through the hydrophilic fluidic channel via capillary force. Upon reaching the detection zone, the sample reacts with glucose and lactate oxidases (GOx and LOx) and HRP, immobilized on the detection zone along with colorless chromophores. By the Trinder’s reaction, the colorless chromophore is converted to a blue-colored product, generating concentration-dependent signal. With a gadget designed to aid the image acquisition, the PAD was positioned to the smartphone-embedded camera. Images of the detection zones were acquired using a mobile application and the color intensities were quantified as sensor signals. For the glucose assay using GOx/HRP format, we obtained the limit of detection (LOD ∼0.3 mM) and the limit of quantification (LOQ ∼0.9 mM) values in the dynamic detection range from 0.3 to 8.0 mM of glucose. For lactate assay using LOx/HRP, the LOD (0.02 mM) and the LOQ (0.06 mM) values were registered in the dynamic detection range from 0.02 to 0.50 mM of lactate. With the device, simultaneous analyses of glucose and lactate in cell culture media were conducted, exhibiting highly accurate and reproducible results. Based on the results, we propose that the optical sensing system developed is feasible for practical monitoring of animal cell culture. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> This paper describes the modification of microfluidic paper-based analytical devices (μPADs) with chitosan to improve the analytical performance of colorimetric measurements associated with enzymatic bioassays. 
Chitosan is a natural biopolymer extensively used to modify biosensing surfaces due to its capability of providing a suitable microenvironment for the direct electron transfer between an enzyme and a reactive surface. This hypothesis was investigated using glucose and uric acid (UA) colorimetric assays as model systems. The best colorimetric sensitivity for glucose and UA was achieved using a chromogenic solution composed of 4-aminoantipyrine and sodium 3,5-dichloro-2-hydroxy-benzenesulfonate (4-AAP/DHBS), which provided a linear response for a concentration range between 0.1 and 1.0 mM. Glucose and UA were successfully determined in artificial serum samples with accuracies between 87 and 114%. The limits of detection (LODs) found for glucose and UA assays were 23 and 37 μM, respectively. The enhanced analytical performance of chitosan-modified μPADs allowed the colorimetric detection of glucose in tear samples from four nondiabetic patients. The achieved concentration levels ranged from 130 to 380 μM. The modified μPADs offered analytical reliability and accuracy as well as no statistical difference from the values achieved through a reference method. Based on the presented results, the proposed μPAD can be a powerful alternative tool for non-invasive glucose analysis. <s> BIB010 | Three-dimensional microfluidic paper-based analytical devices (3D-µPADs) represent an emerging trend in platform development owing to their high throughput, complex fluid manipulation, multiplexed analytical tests, and parallel sample distribution. Compared with 2D µPADs, 3D-µPADs produce highly homogeneous coloration that covers the entire surface of the paper reaction zones. Fluid can move freely in both the horizontal and vertical directions in a 3D-µPAD. Yoon's group BIB004 BIB009 , Costa et al. BIB005 and Lewis et al. BIB001 fabricated 3D-µPADs by stacking alternating layers of patterned paper and double-sided adhesive tape with holes.
In the presence of H 2 O 2 generated by GOx, HRP converts 4-AAP and N-ethyl-N-(2-hydroxy-3-sulfopropyl)-3,5-dimethylaniline sodium salt monohydrate (MAOS) from colorless compounds to a blue product that can be visualized in the detection zone. A smartphone digital camera was used to read the signal, with a dynamic detection range from 0.3 to 8.0 mM BIB009 . Li et al. BIB006 integrated a minimally invasive microneedle with a 3D-µPAD to create a one-touch-activated blood diagnostic system, which shows great potential for clinical application. 3D-µPADs can also be converted from 2D structures by origami BIB002 BIB007 BIB003 . Choi et al. BIB007 separated the 3D-µPAD into two layers: reservoirs on the top layer were preloaded with reagent for glucose detection, and the test solutions were loaded into each injection zone in the bottom layer. The device was operated by tip-pinch manipulation with the thumb and index fingers to bring the preloaded reagent and the test solutions into reaction. Sechi et al. BIB003 used a 3D origami technique to fold the 3D-µPAD so that the sample flows from the x, y, and z directions toward the detection points along the hydrophobic channels created by wax printing (Figure 6 ).
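The colorimetric assays above map detection-zone color intensity to glucose concentration through a calibration curve read from smartphone images. As a rough illustration only (not the cited authors' code; the calibration points, the 3×3 pixel patch, and the choice of the red channel as the signal for a blue product are all hypothetical assumptions), a linear calibration and its inversion can be sketched as:

```python
# Hypothetical sketch of a smartphone colorimetric readout: the blue
# Trinder product's intensity is assumed to vary linearly with glucose
# concentration over the reported dynamic range (0.3-8.0 mM).

def mean_intensity(pixels):
    """Mean 'blueness' signal of a detection zone: 255 minus the mean
    red channel, assuming the blue product mainly absorbs red light."""
    return 255 - sum(r for r, g, b in pixels) / len(pixels)

def fit_linear(concs, signals):
    """Ordinary least-squares fit of signal = slope * conc + intercept."""
    n = len(concs)
    mx, my = sum(concs) / n, sum(signals) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, signals))
    slope = sxy / sxx
    return slope, my - slope * mx

def glucose_from_signal(signal, slope, intercept):
    """Invert the calibration line to estimate concentration."""
    return (signal - intercept) / slope

# Illustrative calibration data (concentration in mM, arbitrary
# intensity units); signal grows with concentration.
concs = [0.3, 1.0, 2.0, 4.0, 8.0]
signals = [12.0, 25.0, 44.0, 82.0, 158.0]
slope, intercept = fit_linear(concs, signals)

# Read an unknown sample from a uniform illustrative 3x3 pixel patch.
zone_pixels = [(150, 180, 220)] * 9
signal = mean_intensity(zone_pixels)  # 255 - 150 = 105
conc = glucose_from_signal(signal, slope, intercept)
```

In practice the patch would be cropped from the camera image and the calibration rebuilt for each batch of devices, since paper lot and lighting shift the intensity scale.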
Traditional fabrication of 3D-µPADs involves stacking layers of patterned paper or origami clamping, which is complicated and inefficient. Li et al. and Jeong et al. BIB008 proposed a method to fabricate a 3D-µPAD in a single layer of paper by double-sided printing and lamination ( Figure 7 ). By adjusting the density of the printed wax and the heating time, the penetration depth of the melted wax can be controlled. This method eliminates the major technical hurdles of the complicated and tedious stacking, alignment, bonding, and punching processes. The LODs achieved with the specific colorimetric indicators used in the enzymatic reactions, together with the types of hydrophobic barriers explored, are summarized in Table 1 . Figure 7 . Scheme of the 3D-µPAD formation on a single sheet of paper in BIB008 . Before (a) and after (b) loading the red dye solution, the front, backside and cross-section images of each part indicated that the red dye solution had flowed smoothly from the inlet to the outlet via the alternating lower and upper channels. With permission from BIB008 ; Copyright 2015, The Royal Society of Chemistry. |
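Table 1 compares limits of detection across devices. A common way such figures of merit are estimated (a generic sketch, not taken from the cited works; the blank readings and slope below are invented) is from the standard deviation of repeated blank measurements divided by the calibration slope:

```python
import statistics

def detection_limit(blank_signals, slope, k=3.0):
    """LOD = k * SD(blank) / slope; k = 3 gives the LOD and
    k = 10 the limit of quantification (LOQ)."""
    return k * statistics.stdev(blank_signals) / slope

# Illustrative numbers only: three repeated blank readings and a
# calibration slope in signal units per mM.
blanks = [1.0, 1.2, 0.8]
lod = detection_limit(blanks, slope=10.0)         # LOD, in mM
loq = detection_limit(blanks, slope=10.0, k=10)   # LOQ, in mM
```

A steeper calibration slope or a quieter blank directly lowers the LOD, which is why indicator chemistry and surface modifiers (such as the chitosan treatment above) shift the values reported in Table 1.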
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> Abstract Electrochemical paper-based analytical devices (ePADs) with integrated plasma isolation for determination of glucose from whole blood samples have been developed. A dumbbell shaped ePAD containing two blood separation zones (VF2 membranes) with a middle detection zone was fabricated using the wax dipping method. The dumbbell shaped device was designed to separate plasma while generating homogeneous flow to the middle detection zone of the ePAD. The proposed ePADs work with whole blood samples with 24–60% hematocrit without dilution, and the plasma was completely separated within 4 min. Glucose in isolated plasma separated was detected using glucose oxidase immobilized on the middle of the paper device. The hydrogen peroxide generated from the reaction between glucose and the enzyme pass through to a Prussian blue modified screen printed electrode (PB-SPEs). The currents measured using chronoamperometry at the optimal detection potential for H 2 O 2 (−0.1 V versus Ag/AgCl reference electrode) were proportional to glucose concentrations in the whole blood. The linear range for glucose assay was in the range 0–33.1 mM ( r 2 = 0.987). The coefficients of variation (CVs) of currents were 6.5%, 9.0% and 8.0% when assay whole blood sample containing glucose concentration at 3.4, 6.3, and 15.6 mM, respectively. Because each sample displayed intra-individual variation of electrochemical signal, glucose assay in whole blood samples were measured using the standard addition method. Results demonstrate that the ePAD glucose assay was not significantly different from the spectrophotometric method ( p = 0.376, paired sample t -test, n = 10). 
<s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> a b s t r a c t This paper describes a simple inexpensive paper-based amperometric glucose biosensor developed based on Prussian Blue (PB)-modified screen-printed carbon electrodes (SPCEs). The use of cellulose paper proved to be a simple, "ideal" and green biocompatible immobilization matrix for glucose oxidase (GOx) as it was successfully embedded within the fibre matrix of paper via physical adsorption. The glucose biosensor allowed a small amount (0.5 L) of sample solution for glucose analysis. The biosensor had a linear calibration range between 0.25 mM and 2.00 (R2 = 0.987) and a detection limit of 0.01 mM glucose (S/N = 3). Interference study of selected potential interfering compounds on the biosensor response was investigated. Its analytical performance was demonstrated in the analysis of selected commercial glucose beverages. Despite the simplicity of the immobilization method, the biosensor retained ca. 72% of its activity after a storage period of 45 days. © 2014 Elsevier B.V. All rights reserved. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> Abstract A simple low cost “green” biosensor configuration comprising of a hydrophilic cellulose paper disk with immobilised glucose oxidase (GOx) via adsorption step, placed on top of a screen printed carbon electrode (SPCE) was developed. This biosensor configuration allowed for low volume of glucose sample (5 μL) to be analysed. Cellulose paper was also used as the pre-storage reagent matrix for 0.1 M phosphate buffer solution (PBS, pH 7.0) and 10 mM soluble ferrocene monocarboxylic acid mediator. 
This biosensor exhibited a linear dynamic calibration range of 1 to 5 mM glucose ( r 2 = 0.971), with a limit of detection of 0.18 mM and retained 98% of its signal after a period of four months. In addition, its performance was demonstrated in the analysis of selected commercial soda beverages. The glucose concentrations obtained by the biosensor corroborated well with an independent high performance liquid chromatographic (HPLC) method. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> Abstract A miniaturized paper-based microfluidic electrochemical enzymatic biosensing platform was developed and the effects of fluidic behaviors in paper substrate on electrochemical sensing were systemically investigated. The biosensor is composed of an enzyme-immobilized pure cellulose paper pad, an enzymeless screen-printed electrode (SPE) modified with platinum nanoparticles (PtNPs), and a pair of clamped acrylonitrile butadiene styrene (ABS) plastic holders to provide good alignment for stable signal sensing. The wicking rate of liquid sample in paper was predicted, using a two-dimensional Fickian-diffusion model, to be 1.0 × 10 −2 cm 2 /s, and was verified experimentally. Dip-coating was used to prepare the enzyme-modified paper pad (EPP), which is amenable for mass manufacturing. The EPP retained excellent hydrophilicity and mechanical properties, with even slightly improved tensile strength and break strain. No significant difference in voltammetric behaviors was observed between measurements made in bulk buffer solution and with different sample volumes applied to EPP beyond its saturation wicking volume. Glucose oxidase (GO x ), an enzyme specific for glucose (Glc) substrate, was used as a model enzyme and its enzymatic reaction product H 2 O 2 was detected by the enzymeless PtNPs-SPE in the presence of ambient electron mediator O 2 . 
Consequently, Glc was detected with its concentration linearly depending on H 2 O 2 oxidation current with sensitivity of 10.5 μA mM -1 cm -2 and detection limit of 9.3 μM (at S / N = 3). The biosensor can be quickly regenerated with memory effects removed by buffer additions for continuous real-time detection of multiple samples in one run for point-of-care purposes. This integrated platform is also inexpensive since the EPP is easily stored, and enzymeless PtNPs-SPEs can be used multiple times with different EPPs. The green and facile preparation in bulk, excellent mechanical strength, well-maintained enzyme activity, disposability, and good reproducibility and stability make our paper-fluidic biosensor platform suitable for various real-time electrochemical bioassays without any external power for mixing, especially in resource-limited conditions. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> Enzymatic sensors on complementary metal–oxide–semiconductor (CMOS) chips are realized using carbon ink and chromatography paper (ChrPr). Electrodes are fabricated from carbon ink on CMOS chips. The carbon ink electrodes work as well-behaving electrochemical electrodes. Enzyme electrodes are realized by covering the carbon ink electrodes on the CMOS chip with ChrPr supporting enzymes and electron mediators. Such enzyme electrodes successfully give anodic current proportional to the glucose concentration. Good linearity is observed up to 10 mM glucose concentration, which is sufficient for blood glucose testing applications. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> This study demonstrates a simple approach for fabricating a 3D-μPAD from a single sheet of paper by double-sided printing and lamination. 
First, a wax printer prints vertically symmetrical and asymmetrical wax patterns onto a double-sided paper surface. Then, a laminator melts the printed wax patterns to form microfluidic channels in the paper sheet. The vertically symmetrical wax patterns form vertical channels when the melted wax patterns make contact with each other. The asymmetrical wax patterns form lateral and vertical channels at the cross section of the paper when the printed wax patterns are melted to a lower height than the thickness of the single sheet of paper. Finally, the two types of wax patterns form a 3D microfluidic network to move fluid laterally and vertically in the single sheet of paper. This method eliminates major technical hurdles related to the complicated and tedious alignment, assembly, bonding, and punching process. This 3D-μPAD can be used in a multiplex digital assay to measure the concentration of a target analyte in a sample solution simply by counting the number of colored bars at a fixed time. It does not require any external instruments to perform digital measurements. Therefore, we expect that this approach could be an instrument-free assay format for use in developing countries. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose μPADs <s> Abstract This report describes for the first time the development of paper-based enzymatic reactors (PERs) for the detection of glucose (Glu) in artificial serum sample using a 3D printed batch injection analysis (BIA) cell coupled with electrochemical detection. The fabrication of the PERs involved firstly the oxidation of the paper surface with a sodium periodate solution. The oxidized paper was then perforated with a paper punch to create microdisks and activated with a solution containing N -hydroxysuccinimide (NHS) and N -(3-dimethylaminopropyl)- N ′-ethylcarbodiimide hydrochloride (EDC). 
Glucose oxidase (GOx) enzyme was then covalently immobilized on paper surface to promote the enzymatic assay for the detection of Glu in serum sample. After the addition of Glu on the PER surface placed inside a plastic syringe, the analyte penetrated through the paper surface under vertical flow promoting the enzymatic assay. The reaction product (H 2 O 2 ) was collected with an electronic micropipette in a microtube and analyzed in the 3D BIA cell coupled with screen-printed electrodes (SPEs). The overall preparation time and the cost estimated per PER were 2.5 h and $0.02, respectively. Likewise the PERs, the use of a 3D printer allowed the fabrication of a BIA cell within 4 h at cost of $5. The coupling of SPE with the 3D printed cell exhibited great analytical performance including repeatability and reproducibility lower than 2% as well as high sampling rate (30 injections h −1 ) under low injection volume (10 μL). The limit of detection (LD) and linear range achieved with the proposed approach was 0.11 mmol L −1 and 1–10 mmol L −1 , respectively. Lastly, the glucose concentration level was successfully determined using the proposed method and the values found were not statistically different from the data achieved by a reference method at confidence level of 95%. <s> BIB007 | Electrochemical detection integrated with paper-based analytical devices plays an important role in glucose detection owing to its low cost, high sensitivity and selectivity, minimal sample preparation, and fast response. Screen-printed electrodes (SPEs) have been used for glucose detection in many paper-based analytical devices because of their flexible design and easy chemical modification. Swee Ngin Tan's research group developed a paper-based amperometric glucose biosensor by placing a paper disk immobilized with glucose oxidase (GOx) on top of an SPE, using Fc-COOH or Prussian Blue (PB) as the mediator BIB002 BIB003 .
The linear response range was 1-5 mM with a correlation coefficient of 0.971, and the PAD showed a LOD of 0.18 mM. Yang et al. BIB004 modified the SPE with platinum nanoparticles (PtNPs) and used the enzymeless PtNPs-SPE to detect the glucose oxidase reaction product H2O2, lowering the detection limit to 9.3 μM. Noiphung et al. BIB001 added a plasma isolation zone and used the PAD to detect glucose in whole blood: a polyvinyl alcohol-bound glass fiber membrane separated the plasma, and the linear calibration range extended from 0 up to 33.1 mM with a correlation coefficient of 0.987. Dias et al. BIB007 developed a paper-based enzymatic device to detect glucose in a 3D-printed batch injection analysis (BIA) cell coupled with SPEs; the LOD was 0.11 mM and the linear range was 1-10 mM. Miki et al. BIB005 replaced screen-printed electrodes with complementary metal-oxide-semiconductor (CMOS) chips for electrochemical paper-based glucose detection. Electrodes were fabricated on the CMOS chips: the working electrode (WE) and counter electrode (CE) were coated with carbon ink, and the reference electrode (RE) was formed using Ag/AgCl ink. Glucose oxidase and the electron mediator K3[Fe(CN)6] were immobilized on chromatography paper. The anodic currents were proportional to the glucose concentration, with linearity up to 10 mM, which is sufficient for clinical applications.
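Noiphung et al. quantified whole-blood glucose by the standard addition method to compensate for the intra-individual variation of the electrochemical signal. A minimal sketch of that calculation (synthetic numbers, assuming a linear amperometric response; not the authors' implementation) is:

```python
def standard_addition(added, currents):
    """Fit current = slope * (C_unknown + added_conc) by least squares
    and recover C_unknown = intercept / slope, i.e. extrapolate the
    line back to zero current."""
    n = len(added)
    mx, my = sum(added) / n, sum(currents) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, currents))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Synthetic example: a sample spiked with 0, 2 and 4 mM glucose
# standards; the measured currents follow a linear response.
added = [0.0, 2.0, 4.0]          # mM added
currents = [10.0, 14.0, 18.0]    # arbitrary current units
c_unknown = standard_addition(added, currents)
```

Because each spike is measured in the same blood matrix, the matrix effect cancels out of the extrapolation, which is the reason standard addition is preferred over an external calibration curve for undiluted whole-blood samples.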
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract Electrochemical paper-based analytical devices (ePADs) with integrated plasma isolation for determination of glucose from whole blood samples have been developed. A dumbbell shaped ePAD containing two blood separation zones (VF2 membranes) with a middle detection zone was fabricated using the wax dipping method. The dumbbell shaped device was designed to separate plasma while generating homogeneous flow to the middle detection zone of the ePAD. The proposed ePADs work with whole blood samples with 24–60% hematocrit without dilution, and the plasma was completely separated within 4 min. Glucose in isolated plasma separated was detected using glucose oxidase immobilized on the middle of the paper device. The hydrogen peroxide generated from the reaction between glucose and the enzyme pass through to a Prussian blue modified screen printed electrode (PB-SPEs). The currents measured using chronoamperometry at the optimal detection potential for H 2 O 2 (−0.1 V versus Ag/AgCl reference electrode) were proportional to glucose concentrations in the whole blood. The linear range for glucose assay was in the range 0–33.1 mM ( r 2 = 0.987). The coefficients of variation (CVs) of currents were 6.5%, 9.0% and 8.0% when assay whole blood sample containing glucose concentration at 3.4, 6.3, and 15.6 mM, respectively. Because each sample displayed intra-individual variation of electrochemical signal, glucose assay in whole blood samples were measured using the standard addition method. Results demonstrate that the ePAD glucose assay was not significantly different from the spectrophotometric method ( p = 0.376, paired sample t -test, n = 10). 
<s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> a b s t r a c t This paper describes a simple inexpensive paper-based amperometric glucose biosensor developed based on Prussian Blue (PB)-modified screen-printed carbon electrodes (SPCEs). The use of cellulose paper proved to be a simple, "ideal" and green biocompatible immobilization matrix for glucose oxidase (GOx) as it was successfully embedded within the fibre matrix of paper via physical adsorption. The glucose biosensor allowed a small amount (0.5 L) of sample solution for glucose analysis. The biosensor had a linear calibration range between 0.25 mM and 2.00 (R2 = 0.987) and a detection limit of 0.01 mM glucose (S/N = 3). Interference study of selected potential interfering compounds on the biosensor response was investigated. Its analytical performance was demonstrated in the analysis of selected commercial glucose beverages. Despite the simplicity of the immobilization method, the biosensor retained ca. 72% of its activity after a storage period of 45 days. © 2014 Elsevier B.V. All rights reserved. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract A simple low cost “green” biosensor configuration comprising of a hydrophilic cellulose paper disk with immobilised glucose oxidase (GOx) via adsorption step, placed on top of a screen printed carbon electrode (SPCE) was developed. This biosensor configuration allowed for low volume of glucose sample (5 μL) to be analysed. Cellulose paper was also used as the pre-storage reagent matrix for 0.1 M phosphate buffer solution (PBS, pH 7.0) and 10 mM soluble ferrocene monocarboxylic acid mediator. 
This biosensor exhibited a linear dynamic calibration range of 1 to 5 mM glucose ( r 2 = 0.971), with a limit of detection of 0.18 mM and retained 98% of its signal after a period of four months. In addition, its performance was demonstrated in the analysis of selected commercial soda beverages. The glucose concentrations obtained by the biosensor corroborated well with an independent high performance liquid chromatographic (HPLC) method. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract A miniaturized paper-based microfluidic electrochemical enzymatic biosensing platform was developed and the effects of fluidic behaviors in paper substrate on electrochemical sensing were systemically investigated. The biosensor is composed of an enzyme-immobilized pure cellulose paper pad, an enzymeless screen-printed electrode (SPE) modified with platinum nanoparticles (PtNPs), and a pair of clamped acrylonitrile butadiene styrene (ABS) plastic holders to provide good alignment for stable signal sensing. The wicking rate of liquid sample in paper was predicted, using a two-dimensional Fickian-diffusion model, to be 1.0 × 10 −2 cm 2 /s, and was verified experimentally. Dip-coating was used to prepare the enzyme-modified paper pad (EPP), which is amenable for mass manufacturing. The EPP retained excellent hydrophilicity and mechanical properties, with even slightly improved tensile strength and break strain. No significant difference in voltammetric behaviors was observed between measurements made in bulk buffer solution and with different sample volumes applied to EPP beyond its saturation wicking volume. Glucose oxidase (GO x ), an enzyme specific for glucose (Glc) substrate, was used as a model enzyme and its enzymatic reaction product H 2 O 2 was detected by the enzymeless PtNPs-SPE in the presence of ambient electron mediator O 2 . 
Consequently, Glc was detected with its concentration linearly depending on H 2 O 2 oxidation current with sensitivity of 10.5 μA mM -1 cm -2 and detection limit of 9.3 μM (at S / N = 3). The biosensor can be quickly regenerated with memory effects removed by buffer additions for continuous real-time detection of multiple samples in one run for point-of-care purposes. This integrated platform is also inexpensive since the EPP is easily stored, and enzymeless PtNPs-SPEs can be used multiple times with different EPPs. The green and facile preparation in bulk, excellent mechanical strength, well-maintained enzyme activity, disposability, and good reproducibility and stability make our paper-fluidic biosensor platform suitable for various real-time electrochemical bioassays without any external power for mixing, especially in resource-limited conditions. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Enzymatic sensors on complementary metal–oxide–semiconductor (CMOS) chips are realized using carbon ink and chromatography paper (ChrPr). Electrodes are fabricated from carbon ink on CMOS chips. The carbon ink electrodes work as well-behaving electrochemical electrodes. Enzyme electrodes are realized by covering the carbon ink electrodes on the CMOS chip with ChrPr supporting enzymes and electron mediators. Such enzyme electrodes successfully give anodic current proportional to the glucose concentration. Good linearity is observed up to 10 mM glucose concentration, which is sufficient for blood glucose testing applications. 
<s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract This report describes for the first time the development of paper-based enzymatic reactors (PERs) for the detection of glucose (Glu) in artificial serum sample using a 3D printed batch injection analysis (BIA) cell coupled with electrochemical detection. The fabrication of the PERs involved firstly the oxidation of the paper surface with a sodium periodate solution. The oxidized paper was then perforated with a paper punch to create microdisks and activated with a solution containing N -hydroxysuccinimide (NHS) and N -(3-dimethylaminopropyl)- N ′-ethylcarbodiimide hydrochloride (EDC). Glucose oxidase (GOx) enzyme was then covalently immobilized on paper surface to promote the enzymatic assay for the detection of Glu in serum sample. After the addition of Glu on the PER surface placed inside a plastic syringe, the analyte penetrated through the paper surface under vertical flow promoting the enzymatic assay. The reaction product (H 2 O 2 ) was collected with an electronic micropipette in a microtube and analyzed in the 3D BIA cell coupled with screen-printed electrodes (SPEs). The overall preparation time and the cost estimated per PER were 2.5 h and $0.02, respectively. Likewise the PERs, the use of a 3D printer allowed the fabrication of a BIA cell within 4 h at cost of $5. The coupling of SPE with the 3D printed cell exhibited great analytical performance including repeatability and reproducibility lower than 2% as well as high sampling rate (30 injections h −1 ) under low injection volume (10 μL). The limit of detection (LD) and linear range achieved with the proposed approach was 0.11 mmol L −1 and 1–10 mmol L −1 , respectively. 
Lastly, the glucose concentration level was successfully determined using the proposed method and the values found were not statistically different from the data achieved by a reference method at confidence level of 95%. <s> BIB006 | Electrochemical detection integrated with a paper-based analytical device plays an important role in glucose detection owing to its low cost, high sensitivity and selectivity, minimal sample preparation and short response time. Screen-printed electrodes (SPEs) have been used for glucose detection in many paper-based analytical devices because of their flexible design and easy modification with chemicals. The research group of Swee Ngin Tan developed a paper-based amperometric glucose biosensor by placing a paper disk immobilized with glucose oxidase (GOx) on top of the SPE, with Fc-COOH or Prussian Blue (PB) as the mediator BIB002 BIB003 . The linear response range was 1-5 mM with a correlation coefficient of 0.971, and the µPAD showed a LOD of 0.18 mM. Yang et al. BIB004 modified the SPE with platinum nanoparticles (PtNPs) and used the enzymeless PtNPs-SPE to detect H 2 O 2 , the product of the glucose oxidase reaction; the detection limit was thereby lowered to 9.3 µM. Noiphung et al. BIB001 added a plasma-isolation zone and used the µPAD to detect glucose in whole blood. A polyvinyl alcohol-bound glass fiber was used to separate plasma from whole blood, and the linear calibration range was from 0 up to 33.1 mM with a correlation coefficient of 0.987. Dias et al. BIB006 developed a paper-based enzymatic device to detect glucose in a 3D-printed batch injection analysis (BIA) cell coupled with SPEs. The LOD was 0.11 mM and the linear range was 1-10 mM. Miki et al. BIB005 replaced the screen-printed electrode with complementary metal-oxide-semiconductor (CMOS) chips for electrochemical paper-based glucose detection.
Electrodes were fabricated directly on the CMOS chips: the working electrode (WE) and counter electrode (CE) were formed with carbon ink, while the reference electrode (RE) was formed with Ag/AgCl ink. Glucose oxidase and the electron mediator K3[Fe(CN)6] were immobilized on chromatography paper. The anodic currents given by the electrodes were proportional to the glucose concentration, with good linearity up to 10 mM, which is sufficient for clinical applications. |
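The amperometric readout in all of these devices reduces to inverting a linear calibration: BIB004, for example, reports a sensitivity of 10.5 µA mM⁻¹ cm⁻² for the PtNPs-SPE, so a measured current maps back to a glucose concentration once the electrode area is known. The sketch below illustrates only this arithmetic; the electrode area, blank current, and example current are illustrative assumptions, not values from the cited work.

```python
# Sketch: invert a linear amperometric calibration, i = S * A * C + i_blank.
# The sensitivity is the value reported in BIB004; the electrode area and
# currents below are hypothetical, chosen only to illustrate the conversion.

SENSITIVITY_UA_PER_MM_CM2 = 10.5   # uA mM^-1 cm^-2 (reported for PtNPs-SPE)
ELECTRODE_AREA_CM2 = 0.071         # hypothetical ~3 mm-diameter disk electrode

def glucose_mM(current_uA: float, blank_uA: float = 0.0) -> float:
    """Convert a measured anodic current into a glucose concentration (mM)."""
    return (current_uA - blank_uA) / (SENSITIVITY_UA_PER_MM_CM2 * ELECTRODE_AREA_CM2)

# A net signal of 3.73 uA on this geometry corresponds to roughly 5 mM glucose.
print(f"{glucose_mM(3.73):.2f} mM")
```

The same inversion applies to any of the sensors above once their own slope and geometry are substituted; concentrations are only trustworthy inside the reported linear range.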
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> An electrode platform printed on a recyclable low-cost paper substrate was characterized using cyclic voltammetry. The working and counter electrodes were directly printed gold-stripes, while the reference electrode was a printed silver stripe onto which an AgCl layer was deposited electrochemically. The novel paper-based chips showed comparable performance to conventional electrochemical cells. Different types of electrode modifications were carried out to demonstrate that the printed electrodes behave similarly with conventional electrodes. Firstly, a self-assembled monolayer (SAM) of alkanethiols was successfully formed on the Au electrode surface. As a consequence, the peak currents were suppressed and no longer showed clear increase as a function of the scan rate. Such modified electrodes have potential in various sensor applications when terminally substituted thiols are used. Secondly, a polyaniline film was electropolymerized on the working electrode by cyclic voltammetry and used for potentiometric pH sensing. The calibration curve showed close to Nerstian response. Thirdly, a poly(3,4-ethylenedioxythiophene) (PEDOT) layer was electropolymerized both by galvanostatic and cyclic potential sweep method on the working electrode using two different dopants; Cl− to study ion-to-electron transduction on paper-Au/PEDOT system and glucose oxidase in order to fabricate a glucose biosensor. The planar paper-based electrochemical cell is a user-friendly platform that functions with low sample volume and allows the sample to be applied and changed by e.g. pipetting. Low unit cost is achieved with mask- and mesh-free inkjet-printing technology. 
<s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> Abstract The present work describes for the first time the coupling of graphite pencil electrodes with paper-based analytical devices (μPADs) for glucose biosensing. Electrochemical measurement for μPADs using a two-electrode system was also developed. This dual-electrode configuration on paper provides electrochemical responses similar to those recorded by conventional electrochemical systems (three electrode systems). A wax printing process was used to define hydrophilic circular microzones by inserting hydrophobic patterns on paper. The microzones were employed one for filtration, one for an enzymatic reaction and one for electrochemical detection. By adding 4-aminophenylboronic acid as redox mediator and glucose oxidase to the reaction microzone, it was possible to reach low limits of detection for glucose with graphite pencil electrodes without modifying the electrode. The limit of detection of the proposed μPAD was found to be 0.38 μmol L −1 for glucose. Low sample consumption (40 μL) and fast analysis time (less than 5 min) combined with low cost electrodes and paper-based analytical platforms are attractive properties of the proposed μPAD with electrochemical detection. Artificial blood serum samples containing glucose were analyzed with the proposed device as proof of concept. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> Abstract The development of a miniaturized and low-cost platform for the highly sensitive, selective and rapid detection of multiplexed metabolites is of great interest for healthcare, pharmaceuticals, food science, and environmental monitoring. Graphene is a delicate single-layer, two-dimensional network of carbon atoms with extraordinary electrical sensing capability. 
Microfluidic paper with printing technique is a low cost matrix. Here, we demonstrated the development of graphene-ink based biosensor arrays on a microfluidic paper for the multiplexed detection of different metabolites, such as glucose, lactate, xanthine and cholesterol. Our results show that the graphene biosensor arrays can detect multiple metabolites on a microfluidic paper sensitively, rapidly and simultaneously. The device exhibits a fast measuring time of less than 2 min, a low detection limit of 0.3 μM, and a dynamic detection range of 0.3–15 μM. The process is simple and inexpensive to operate and requires a low consumption of sample volume. We anticipate that these results could open exciting opportunities for a variety of applications. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> The integration of paper with an electrochemical device has attracted growing attention for point-of-care testing, where it is of great importance to fabricate electrodes on paper in a low-cost, easy and versatile way. In this work, we report a simple strategy for directly writing electrodes on paper using a pressure-assisted ball pen to form a paper-based electrochemical device (PED). This method is demonstrated to be capable of fabricating electrodes on paper with good electrical conductivity and electrochemical performance, holding great potential to be employed in point-of-care applications, such as in human health diagnostics and food safety detection. As examples, the PEDs fabricated using the developed method are applied for detection of glucose in artificial urine and melamine in sample solutions. 
Furthermore, our developed strategy is also extended to fabricate PEDs with multi-electrode arrays and write electrodes on non-planar surfaces (e.g., paper cup, human skin), indicating the potential application of our method in other fields, such as fabricating biosensors, paper electronics etc. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> Abstract In this work, an origami paper-based analytical device for glucose biosensor by employing fully-drawn pencil electrodes has been reported. The three-electrode system was prepared on paper directly by drawing with nothing more than pencils. By simple printing, two separated zones on paper were designed for the immobilization of the mediator and glucose oxidase (GOx), respectively. The used paper provides a favorable and biocompatible support for maintaining the bioactivities of GOx. With a sandwich-type scheme, the origami biosensor exhibited great analytical performance for glucose sensing including acceptable reproducibility and favorable selectivity against common interferents in physiological fluids. The limit of detection and linear range achieved with the approach was 0.05 mM and 1–12 mM, respectively. Its analytical performance was also demonstrated in the analysis of human blood samples. Such fully-drawn paper-based device is cheap, flexible, portable, disposable, and environmentally friendly, affording great convenience for practical use under resource-limited conditions. We therefore envision that this approach can be extended to generate other functional paper-based devices. 
<s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> The present work describes the fabrication of paper-based analytical devices (μPADs) by immobilization of glucose oxidase onto the screen printed carbon electrodes (SPCEs) for the electrochemical glucose detection. The sensitivity towards glucose was improved by using a SPCE prepared from homemade carbon ink mixed with cellulose acetate. In addition, 4-aminophenylboronic acid (4-APBA) was used as a redox mediator giving a lower detection potential for improvement selectivity. Under optimized condition, the detection limit was 0.86 mM. The proposed device was applied in real samples. This μPAD has many advantages including low sample consumption, rapid analysis method, and low device cost. <s> BIB006 | An electrochemical sensor is composed of a substrate and electrodes, so it is important to fabricate electrodes on paper in an easy and versatile way. Some researchers have directly printed electrodes on the paper substrate instead of using commercial screen-printed electrodes BIB005 BIB001 BIB006 BIB004 BIB003 . Rungsawang et al. BIB006 used 4-aminophenylboronic acid (4-APBA) as a redox mediator; its lower detection potential improved the selectivity of their homemade screen-printed carbon electrode, and the detection limit was 0.86 mM. Määttänen et al. BIB001 used an inkjet-printed paper-based device whose working and counter electrodes were printed gold stripes, while the reference electrode was a printed silver stripe onto which an AgCl layer was deposited electrochemically. Several electrode modifications were carried out to demonstrate that the inkjet-printed electrodes behaved no differently from conventional electrodes. Li et al. BIB004 proposed a direct-writing method using a pressure-assisted ball pen to fabricate electrodes on paper (Figure 8 ). The electrodes fabricated on paper showed good electrical conductivity and electrochemical performance, and the device could be applied to artificial urine samples, demonstrating its potential for practical use. Li et al. BIB005 developed a three-electrode system prepared directly on paper by drawing with graphite pencils. The µPAD was designed with a sandwich-type structure in which the mediator and glucose oxidase were immobilized in separate zones. This origami µPAD showed acceptable reproducibility and high selectivity against interferents in physiological fluids. The linear calibration range was 1-12 mM and the LOD was 0.05 mM. Santhiago et al. BIB002 developed a dual-electrode system to replace the conventional three-electrode system; a graphite pencil was used directly as the working electrode rather than being drawn on the paper. 4-Aminophenylboronic acid was added as a redox mediator to reach a low glucose detection limit, with a LOD of 0.38 µM.
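The figures of merit quoted throughout this section (slope/sensitivity, linear range, LOD) come from the same linear-calibration arithmetic, with the LOD conventionally taken at S/N = 3, i.e. LOD = 3·σ_blank/slope, as in BIB004. A minimal sketch of that calculation, using synthetic calibration data chosen only to illustrate the math:

```python
# Sketch: least-squares slope of a calibration line and the 3-sigma LOD.
# All data points below are synthetic placeholders, not from the cited papers.
import statistics

conc_mM = [1, 2, 3, 4, 5]                  # standard concentrations
signal_uA = [0.75, 1.50, 2.25, 3.00, 3.75]  # hypothetical responses (uA)
blank_uA = [0.010, 0.012, 0.008, 0.011, 0.009]  # replicate blank readings

n = len(conc_mM)
mean_x = sum(conc_mM) / n
mean_y = sum(signal_uA) / n
# Ordinary least-squares slope: cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc_mM, signal_uA)) \
        / sum((x - mean_x) ** 2 for x in conc_mM)
sigma_blank = statistics.stdev(blank_uA)    # sample std. dev. of the blank
lod_mM = 3 * sigma_blank / slope            # LOD at S/N = 3

print(f"slope = {slope:.2f} uA/mM, LOD = {lod_mM * 1000:.1f} uM")
```

This makes explicit why the reported LODs depend as much on blank noise as on sensitivity: halving σ_blank or doubling the slope improves the LOD by the same factor.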
The electrodes fabricated on paper were demonstrated with great electrical conductivity and electrochemical performance, and the electrode could be used in the artificial urine samples, which exhibited the potential in practical application. Li et al. BIB005 developed a threeelectrode system prepared on paper directly by drawing with graphite pencils. The μPAD was designed with a sandwich-type structure that mediator and glucose oxidase were immobilized on separated zones. This origami μPAD showed acceptable reproducibility and high selectivity against interferents in physiological fluids. The linear calibration range was from 1 up to 12 mM and the LOD was 0.05 mM. Santhiago et al. BIB002 developed a dual-electrode system to replace the conventional three electrode systems. Graphite pencil was directly used as the working electrode instead of drawing on the paper. 4-aminophenylboronic acid was added as redox mediator to reach low limits glucose detection with a LOD of 0.38 μM. The LODs achieved versus the electrochemical specific mediators through enzymatic reactions and the kinds of electrodes explored were summarized in Table 2 . The LODs achieved versus the electrochemical specific mediators through enzymatic reactions and the kinds of electrodes explored were summarized in Table 2 . BIB002 4-APBA Graphite dual-electrode 0.38 µM |